Dataset fields: id (int64, 39 to 79M), url (string, 31–227 chars), text (string, 6–334k chars), source (string, 1–150 chars), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items)
168,680
https://en.wikipedia.org/wiki/Teradata
Teradata Corporation is an American software company that provides cloud database and analytics-related software, products, and services. The company was formed in 1979 in Brentwood, California, as a collaboration between researchers at Caltech and Citibank's advanced technology group. Overview Teradata is an enterprise software company that develops and sells database analytics software. The company provides three main services: business analytics, cloud products, and consulting. It operates in North and Latin America, Europe, the Middle East, Africa, and Asia. Teradata is headquartered in San Diego, California and has additional major U.S. locations in Atlanta and San Francisco, where its data center research and development is housed. It is publicly traded on the New York Stock Exchange (NYSE) under the stock symbol TDC. Steve McMillan has served as the company's president and chief executive officer since 2020. The company reported $1.836 billion in revenue, a net income of $129 million, and 8,535 employees globally, as of February 9, 2020. History The concept of Teradata grew from research at the California Institute of Technology and from the discussions of Citibank's advanced technology group in the 1970s. In 1979, the company was incorporated in Brentwood, California by Jack E. Shemer, Philip M. Neches, Walter E. Muir, Jerold R. Modes, William P. Worth, Carroll Reed and David Hartke. Teradata released its DBC/1012 database machine in 1984. In 1990, the company acquired Sharebase, originally named Britton Lee. In September 1991, AT&T Corporation acquired NCR Corporation, which announced the acquisition of Teradata for about $250 million in December. Teradata built the first system with more than 1 terabyte of data for Wal-Mart in 1992. NCR acquired Strategic Technologies & Systems in 1999 and appointed Stephen Brobst as chief technology officer of Teradata Solutions Group. In 2000, NCR acquired Ceres Integrated Solutions and its customer relationship management software for $90 million, as well as Stirling Douglas Group and its demand chain management software. Teradata acquired financial management software from DecisionPoint in 2005. In January 2007, NCR announced Teradata would become an independent public company, led by Michael F. Koehler. The new company's shares started trading in October. In April 2016, a hardware product line called IntelliFlex was announced. Victor L. Lund became the chief executive on May 5, 2016. In October 2018, Teradata started promoting its cloud analytics software called Vantage (which evolved from the Teradata Database). On May 7, 2020, Teradata announced the appointment of Steve McMillan as president and chief executive officer, effective June 8, 2020. In December 2024, Teradata successfully appealed a California federal judge's decision in favor of SAP SE in a case involving allegations of trade secret misappropriation and antitrust violations. The lawsuit accused SAP of using Teradata's trade secrets to address technical issues in its software and of violating antitrust laws by bundling products and requiring customers to purchase them together. A federal appeals court reversed the earlier ruling, reviving Teradata's claims. Acquisitions and divestitures Teradata has acquired several companies since becoming an independent public company in 2008. In March 2008, Teradata acquired professional services company Claraview, which had previously spun out software provider Clarabridge. 
Teradata acquired column-oriented DBMS vendor Kickfire in August 2010, followed by the marketing software company Aprimo for about $550 million in December. In March 2011, the company acquired Aster Data Systems for about $263 million. Teradata acquired software-as-a-service digital marketing company eCircle in May 2012, which was merged into the Aprimo business. In 2014, Teradata acquired the assets of Revelytix, a provider of information management products, for a reported $50 million. In September, Teradata acquired Hadoop service firm Think Big Analytics. In December, Teradata acquired RainStor, a company specializing in online data archiving on Hadoop. Teradata acquired Appoxxee, a mobile marketing software-as-a-service provider, for about $20 million in January 2015, followed by the Netherlands-based digital marketing company FLXone in September. That same year Teradata acquired a small business intelligence firm, MiaPearl. In July 2016, the marketing applications division, using the Aprimo brand, was sold to private equity firm Marlin Equity Partners for about $90 million, with Aprimo, under CEO John Stammen, moving its headquarters to Chicago while absorbing Revenew Inc., which Marlin had also bought. Teradata acquired Big Data Partnership, a service company based in the UK, on July 21, 2016. In July 2017, Teradata acquired StackIQ, maker of the Stacki cluster manager software. Technology and products Teradata offers three primary services to its customers: cloud and hardware-based data warehousing, business analytics, and consulting services. In September 2016, the company launched Teradata Everywhere, which allows users to submit queries against public and private databases. The service uses massively parallel processing across both its physical data warehouse and cloud storage, including managed environments such as Amazon Web Services, Microsoft Azure, VMware, and Teradata's Managed Cloud and IntelliFlex. Teradata offers customers both hybrid cloud and multi-cloud storage. In March 2017, Teradata introduced Teradata IntelliCloud, a secure managed cloud for data and analytic software as a service. IntelliCloud is compatible with Teradata's data warehouse platform, IntelliFlex. The Teradata Analytics Platform was unveiled in 2017. Big data Teradata began to use the term "big data" in 2010. CTO Stephen Brobst attributed the rise of big data to "new media sources, such as social media." The increase in semi-structured and unstructured data gathered from online interactions prompted Teradata to form the "Petabyte club" in 2011 for its heaviest big data users. The rise of big data resulted in many traditional data warehousing companies updating their products and technology. For Teradata, big data prompted the acquisition of Aster Data Systems in 2011 for the company's MapReduce capabilities and ability to store and analyze semi-structured data. Old hardware line end of support To focus on its cloud database and analytics software business, Teradata plans to end support for its old hardware platforms; Teradata will provide remedial maintenance services for six years from a platform's Platform Sales Discontinuation Date. Platform Support Discontinuation is the end-of-support date for a particular Teradata hardware platform. Support discontinuation began in 2017 with the 2580 generation of the company's data warehouse appliance; only the IntelliFlex 2.5 and R740 hardware platforms are listed as "current product". 
All other platform models and classes have a published support discontinuation date. Teradata customers may request extended support for a Teradata platform that has reached its end of support life. Extended support for a platform requires approval from the local Area Director and is delivered on a "best effort" basis, as spare parts for older platforms may no longer be available. Teradata Vantage analytics In October 2018, Teradata started calling the cloud analytics software product line Vantage. Vantage is composed of various analytics engines on a core relational database, including the Aster graph database and a machine learning engine. Vantage also gained the capability to leverage open-source analytics engines such as Spark and TensorFlow. Vantage can be deployed across public clouds, on-premises, and commodity infrastructure. Vantage provides storage and analysis for multi-structured data formats.
Teradata
[ "Technology" ]
1,708
[ "Computer hardware companies", "Computers" ]
168,701
https://en.wikipedia.org/wiki/Open%20Database%20Connectivity
In computing, Open Database Connectivity (ODBC) is a standard application programming interface (API) for accessing database management systems (DBMS). The designers of ODBC aimed to make it independent of database systems and operating systems. An application written using ODBC can be ported to other platforms, both on the client and server side, with few changes to the data access code. ODBC accomplishes DBMS independence by using an ODBC driver as a translation layer between the application and the DBMS. The application uses ODBC functions through an ODBC driver manager with which it is linked, and the driver passes the query to the DBMS. An ODBC driver can be thought of as analogous to a printer driver or other driver, providing a standard set of functions for the application to use, and implementing DBMS-specific functionality. An application that can use ODBC is referred to as "ODBC-compliant". Any ODBC-compliant application can access any DBMS for which a driver is installed. Drivers exist for all major DBMSs, many other data sources like address book systems and Microsoft Excel, and even for text or comma-separated values (CSV) files. ODBC was originally developed by Microsoft and Simba Technologies during the early 1990s, and became the basis for the Call Level Interface (CLI) standardized by the SQL Access Group in the Unix and mainframe field. ODBC retained several features that were removed as part of the CLI effort. Full ODBC was later ported back to those platforms, and became a de facto standard considerably better known than CLI. The CLI remains similar to ODBC, and applications can be ported from one platform to the other with few changes. History Before ODBC The introduction of the mainframe-based relational database during the 1970s led to a proliferation of data access methods. Generally these systems operated together with a simple command processor that allowed users to type in English-like commands, and receive output. The best-known examples are SQL from IBM and QUEL from the Ingres project. These systems may or may not have allowed other applications to access the data directly, and those that did used a wide variety of methodologies. The introduction of SQL aimed to solve the problem of language standardization, although substantial differences in implementation remained. Since the SQL language had only rudimentary programming features, users often wanted to use SQL within a program written in another language, say Fortran or C. This led to the concept of Embedded SQL, which allowed SQL code to be embedded within another language. For instance, a SQL statement like SELECT * FROM city could be inserted as text within C source code, and during compiling it would be converted into a custom format that directly called a function within a library that would pass the statement into the SQL system. Results returned from the statements would be interpreted back into C data formats like char * using similar library code. There were several problems with the Embedded SQL approach. Like the different varieties of SQL, the Embedded SQLs that used them varied widely, not only from platform to platform, but even across languages on one platform – a system that allowed calls into IBM Db2 would look very different from one that called into IBM's own SQL/DS. Another key problem with the Embedded SQL concept was that the SQL code could only be changed in the program's source code, so that even small changes to the query required considerable programmer effort to modify. 
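To make the embedding mechanism concrete, a C fragment using Embedded SQL might look roughly like the sketch below. The dialect shown is generic, and the connection, table, and host-variable names are invented for illustration; as noted above, the exact syntax varied from vendor to vendor, which was itself part of the problem. The file would be run through the vendor's SQL precompiler before the C compiler ever saw it:

```c
#include <stdio.h>

/* Host variables: C variables the precompiler lets the SQL layer
   read from and write into. */
EXEC SQL BEGIN DECLARE SECTION;
    char city_name[51];   /* receives the queried column value */
    int  city_id;         /* parameter passed into the query   */
EXEC SQL END DECLARE SECTION;

int main(void)
{
    city_id = 1;

    /* The precompiler rewrites each EXEC SQL statement into calls
       into the vendor's runtime library. The SQL text is fixed at
       compile time - the "static SQL" style discussed next. */
    EXEC SQL CONNECT TO citydb;

    EXEC SQL SELECT name INTO :city_name
             FROM city
             WHERE id = :city_id;

    printf("city %d is named %s\n", city_id, city_name);

    EXEC SQL DISCONNECT CURRENT;
    return 0;
}
```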
The SQL market referred to this as static SQL, versus dynamic SQL which could be changed at any time, like the command-line interfaces that shipped with almost all SQL systems, or a programming interface that left the SQL as plain text until it was called. Dynamic SQL systems became a major focus for SQL vendors during the 1980s. Older mainframe databases, and the newer microcomputer-based systems that were based on them, generally did not have a SQL-like command processor between the user and the database engine. Instead, the data was accessed directly by the program – a programming library in the case of large mainframe systems, or a command line interface or interactive forms system in the case of dBASE and similar applications. Data from dBASE could not generally be accessed directly by other programs running on the machine. Those programs might be given a way to access this data, often through libraries, but it would not work with any other database engine, or even different databases in the same engine. In effect, all such systems were static, which presented considerable problems. Early efforts By the mid-1980s the rapid improvement in microcomputers, and especially the introduction of the graphical user interface and data-rich application programs like Lotus 1-2-3, led to an increasing interest in using personal computers as the client-side platform of choice in client–server computing. Under this model, large mainframes and minicomputers would be used primarily to serve up data over local area networks to microcomputers that would interpret, display and manipulate that data. For this model to work, a data access standard was a requirement – in the mainframe field it was highly likely that all of the computers in a shop were from one vendor and clients were computer terminals talking directly to them, but in the micro field there was no such standardization and any client might access any server using any networking system. By the late 1980s there were several efforts underway to provide an abstraction layer for this purpose. Some of these were mainframe related, designed to allow programs running on those machines to translate between the varieties of SQL and provide a single common interface which could then be called by other mainframe or microcomputer programs. These solutions included IBM's Distributed Relational Database Architecture (DRDA) and Apple Computer's Data Access Language. Much more common, however, were systems that ran entirely on microcomputers, including a complete protocol stack that included any required networking or file translation support. One of the early examples of such a system was Lotus Development's DataLens, initially known as Blueprint. Blueprint, developed for 1-2-3, supported a variety of data sources, including SQL/DS, DB2, FOCUS and a variety of similar mainframe systems, as well as microcomputer systems like dBase and the early Microsoft/Ashton-Tate efforts that would eventually develop into Microsoft SQL Server. Unlike the later ODBC, Blueprint was a purely code-based system, lacking anything approximating a command language like SQL. Instead, programmers used data structures to store the query information, constructing a query by linking many of these structures together. Lotus referred to these compound structures as query trees. Around the same time, an industry team including members from Sybase (Tom Haggin), Tandem Computers (Jim Gray & Rao Yendluri) and Microsoft (Kyle Geiger) was working on a standardized dynamic SQL concept. 
Much of the system was based on Sybase's DB-Library system, with the Sybase-specific sections removed and several additions to support other platforms. DB-Library was aided by an industry-wide move from library systems that were tightly linked to a specific language, to library systems that were provided by the operating system and required the languages on that platform to conform to its standards. This meant that a single library could be used with (potentially) any programming language on a given platform. The first draft of the Microsoft Data Access API was published in April 1989, about the same time as Lotus' announcement of Blueprint. In spite of Blueprint's great lead – it was running when MSDA was still a paper project – Lotus eventually joined the MSDA efforts as it became clear that SQL would become the de facto database standard. After considerable industry input, in the summer of 1989 the standard became SQL Connectivity (SQLC). SAG and CLI In 1988 several vendors, mostly from the Unix and database communities, formed the SQL Access Group (SAG) in an effort to produce a single basic standard for the SQL language. At the first meeting there was considerable debate over whether or not the effort should work solely on the SQL language itself, or attempt a wider standardization which included a dynamic SQL language-embedding system as well, what they called a Call Level Interface (CLI). While attending the meeting with an early draft of what was then still known as MS Data Access, Kyle Geiger of Microsoft invited Jeff Balboni and Larry Barnes of Digital Equipment Corporation (DEC) to join the SQLC meetings as well. SQLC was a potential solution to the call for the CLI, which was being led by DEC. The new SQLC "gang of four", MS, Tandem, DEC and Sybase, brought an updated version of SQLC to the next SAG meeting in June 1990. The SAG responded by opening the standard effort to any competing design, but of the many proposals, only Oracle Corp had a system that presented serious competition. In the end, SQLC won the votes and became the draft standard, but only after large portions of the API were removed – the standards document was trimmed from 120 pages to 50 during this time. It was also during this period that the name Call Level Interface was formally adopted. In 1995 SQL/CLI became part of the international SQL standard, ISO/IEC 9075-3. The SAG itself was taken over by the X/Open group in 1996, and, over time, became part of The Open Group's Common Application Environment. MS continued working with the original SQLC standard, retaining many of the advanced features that were removed from the CLI version. These included features like scrollable cursors, and metadata information queries. The commands in the API were split into groups; the Core group was identical to the CLI, the Level 1 extensions were commands that would be easy to implement in drivers, while Level 2 commands contained the more advanced features like cursors. A proposed standard was released in December 1991, and industry input was gathered and worked into the system through 1992, resulting in yet another name change to ODBC. JET and ODBC During this time, Microsoft was in the midst of developing their Jet database system. 
Jet combined three primary subsystems: an ISAM-based database engine (also named Jet, confusingly), a C-based interface allowing applications to access that data, and a selection of driver dynamic-link libraries (DLL) that allowed the same C interface to redirect input and output to other ISAM-based databases, like Paradox and xBase. Jet allowed using one set of calls to access common microcomputer databases in a fashion similar to Blueprint, by then renamed DataLens. However, Jet did not use SQL; like DataLens, the interface was in C and consisted of data structures and function calls. The SAG standardization efforts presented an opportunity for Microsoft to adapt their Jet system to the new CLI standard. This would not only make Windows a premier platform for CLI development, but also allow users to use SQL to access both Jet and other databases as well. What was missing was the SQL parser that could convert those calls from their text form into the C-interface used in Jet. To solve this, MS partnered with PageAhead Software to use their existing query processor, SIMBA. SIMBA was used as a parser above Jet's C library, turning Jet into an SQL database. And because Jet could forward those C-based calls to other databases, this also allowed SIMBA to query other systems. Microsoft included drivers for Excel to turn its spreadsheet documents into SQL-accessible database tables. Release and continued development ODBC 1.0 was released in September 1992. At the time, there was little direct support for SQL databases (versus ISAM), and early drivers were noted for poor performance. Some of this was unavoidable due to the path that the calls took through the Jet-based stack; ODBC calls to SQL databases were first converted from Simba Technologies' SQL dialect to Jet's internal C-based format, then passed to a driver for conversion back into SQL calls for the database. Digital Equipment and Oracle both contracted Simba Technologies to develop drivers for their databases as well. Circa 1993, OpenLink Software shipped one of the first independently developed third-party ODBC drivers, for the PROGRESS DBMS, and soon followed with their UDBC (a cross-platform API equivalent of ODBC and the SAG/CLI) SDK and associated drivers for PROGRESS, Sybase, Oracle, and other DBMSs, for use on Unix-like OSes (AIX, HP-UX, Solaris, Linux, etc.), VMS, Windows NT, OS/2, and other OSes. Meanwhile, the CLI standard effort dragged on, and it was not until March 1995 that the definitive version was finalized. By then, Microsoft had already granted Visigenic Software a source code license to develop ODBC on non-Windows platforms. Visigenic ported ODBC to the classic Mac OS, and a wide variety of Unix platforms, where ODBC quickly became the de facto standard. "Real" CLI is rare today. The two systems remain similar, and many applications can be ported from ODBC to CLI with few or no changes. Over time, database vendors took over the driver interfaces and provided direct links to their products. Skipping the intermediate conversions to and from Jet or similar wrappers often resulted in higher performance. However, by then Microsoft had changed focus to their OLE DB concept (recently reinstated), which provided direct access to a wider variety of data sources from address books to text files. Several new systems followed which further turned their attention from ODBC, including ActiveX Data Objects (ADO) and ADO.NET, which interacted more or less with ODBC over their lifetimes. 
As Microsoft turned its attention away from working directly on ODBC, the Unix field was increasingly embracing it. This was propelled by two changes within the market: the introduction of graphical user interfaces (GUIs) like GNOME that created a need to access these sources in non-text form, and the emergence of open-source database systems like PostgreSQL and MySQL, initially under Unix. The later adoption of ODBC by Apple, using the standard Unix-side iODBC package in Mac OS X 10.2 (Jaguar) (which OpenLink Software had been independently providing for Mac OS X 10.0 and even Mac OS 9 since 2001), further cemented ODBC as the standard for cross-platform data access. Sun Microsystems used the ODBC system as the basis for their own open standard, Java Database Connectivity (JDBC). In most ways, JDBC can be considered a version of ODBC for the programming language Java instead of C. JDBC-to-ODBC bridges allow Java-based programs to access data sources through ODBC drivers on platforms lacking a native JDBC driver, although these are now relatively rare. Inversely, ODBC-to-JDBC bridges allow C-based programs to access data sources through JDBC drivers on platforms or from databases lacking suitable ODBC drivers. ODBC today ODBC remains in wide use today, with drivers available for most platforms and most databases. It is not uncommon to find ODBC drivers for database engines that are meant to be embedded, like SQLite, as a way to allow existing tools to act as front-ends to these engines for testing and debugging. Version history ODBC specifications:
1.0: released in September 1992
2.0: 1994
2.5
3.0: 1995; John Goodson of Intersolv and Frank Pellow and Paul Cotton of IBM provided significant input to ODBC 3.0
3.5: 1997
3.8: 2009, with Windows 7
4.0: development announced in June 2016, with a first implementation released with SQL Server 2017 in September 2017 and additional desktop drivers in late 2018; the final specification is on GitHub
Desktop Database Drivers:
1.0 (1993-08): used the SIMBA query processor produced by PageAhead Software.
2.0 (1994-12): used with ODBC 2.0.
3.0 (1995-10): supports Windows 95 and Windows NT Workstation or NT Server 3.51. Only 32-bit drivers were included in this release.
3.5 (1996-10): supports double-byte character sets (DBCS), and accommodated the use of File data source names (DSNs). The Microsoft Access driver was released in a RISC version for use on Alpha platforms for Windows 95/98 and Windows NT 3.51 and later operating systems.
4.0 (late 1998): supports the Microsoft Jet engine's Unicode format, along with compatibility with the ANSI format of earlier versions.
Drivers and Managers Drivers ODBC is based on the device driver model, where the driver encapsulates the logic needed to convert a standard set of commands and functions into the specific calls required by the underlying system. For instance, a printer driver presents a standard set of printing commands, the API, to applications using the printing system. Calls made to those APIs are converted by the driver into the format used by the actual hardware, say PostScript or PCL. In the case of ODBC, the drivers encapsulate many functions that can be broken down into several broad categories. One set of functions is primarily concerned with finding, connecting to and disconnecting from the DBMS that the driver talks to. A second set is used to send SQL commands from the ODBC system to the DBMS, converting or interpreting any commands that are not supported internally. 
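The call pattern implied by these function groups is easiest to see in code. The following is a minimal sketch, not taken from any particular vendor's documentation: it uses only core ODBC 3.x calls, omits all error handling, and the connection string's driver name, server, database, and table are invented examples that would have to match whatever driver is actually installed:

```c
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;
    SQLCHAR  name[64];
    SQLLEN   len;

    /* Allocate an environment, declare ODBC 3.x behavior, then
       allocate a connection handle from that environment. */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* The Driver Manager loads the named driver and hands the rest
       of the string to it. With a configured Data Source Name this
       could instead be SQLConnect(dbc, (SQLCHAR *)"Sales", SQL_NTS,
       user, SQL_NTS, password, SQL_NTS). */
    SQLDriverConnect(dbc, NULL,
        (SQLCHAR *)"Driver={Example ODBC Driver};Server=localhost;Database=demo;",
        SQL_NTS, NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    /* Send one SQL statement and fetch the result rows. */
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM city", SQL_NTS);
    SQLBindCol(stmt, 1, SQL_C_CHAR, name, sizeof(name), &len);
    while (SQLFetch(stmt) == SQL_SUCCESS)   /* one row per iteration */
        printf("%s\n", name);

    /* Tear down in the reverse order of allocation. */
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}
```

Everything up to the SQLDriverConnect call goes through the Driver Manager; everything after it is serviced by the driver itself, which is where the conversion and emulation work described next takes place.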
A DBMS that does not support cursors, for instance, can emulate that functionality in the driver. Finally, another set of commands, mostly used internally, is used to convert data from the DBMS's internal formats to a set of standardized ODBC formats, which are based on the C language formats. An ODBC driver enables an ODBC-compliant application to use a data source, normally a DBMS. Some non-DBMS drivers exist, for such data sources as CSV files, by implementing a small DBMS inside the driver itself. ODBC drivers exist for most DBMSs, including Oracle, PostgreSQL, MySQL, Microsoft SQL Server (but not for the Compact aka CE edition), Mimer SQL, Sybase ASE, SAP HANA and IBM Db2. Because different technologies have different capabilities, most ODBC drivers do not implement all functionality defined in the ODBC standard. Some drivers offer extra functionality not defined by the standard. Driver Manager Device drivers are normally enumerated, set up and managed by a separate Manager layer, which may provide additional functionality. For instance, printing systems often include a spooler layered on top of the drivers, providing print spooling for any supported printer. In ODBC the Driver Manager (DM) provides these features. The DM can enumerate the installed drivers and present this as a list, often in a GUI-based form. But more important to the operation of the ODBC system is the DM's concept of a Data Source Name (DSN). DSNs collect additional information needed to connect to a specific data source, versus the DBMS itself. For instance, the same MySQL driver can be used to connect to any MySQL server, but the connection information to connect to a local private server is different from the information needed to connect to an internet-hosted public server. The DSN stores this information in a standardized format, and the DM provides this to the driver during connection requests. The DM also includes functionality to present a list of DSNs using human-readable names, and to select them at run-time to connect to different resources. The DM also includes the ability to save partially complete DSNs, with code and logic to ask the user for any missing information at runtime. For instance, a DSN can be created without a required password. When an ODBC application attempts to connect to the DBMS using this DSN, the system will pause and ask the user to provide the password before continuing. This frees the application developer from having to create this sort of code, as well as having to know which questions to ask. All of this is included in the driver and the DSNs. Bridging configurations A bridge is a special kind of driver: a driver that uses another driver-based technology. ODBC-to-JDBC (ODBC-JDBC) bridges An ODBC-JDBC bridge consists of an ODBC driver which uses the services of a JDBC driver to connect to a database. This driver translates ODBC function calls into JDBC method calls. Programmers usually use such a bridge when they lack an ODBC driver for some database but have access to a JDBC driver. Examples: OpenLink ODBC-JDBC Bridge, SequeLink ODBC-JDBC Bridge. JDBC-to-ODBC (JDBC-ODBC) bridges A JDBC-ODBC bridge consists of a JDBC driver which employs an ODBC driver to connect to a target database. This driver translates JDBC method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks a JDBC driver, but is accessible through an ODBC driver. 
Sun Microsystems included one such bridge in the JVM, but viewed it as a stop-gap measure while few JDBC drivers existed (the built-in JDBC-ODBC bridge was dropped from the JVM in Java 8). Sun never intended its bridge for production environments, and generally recommended against its use. Independent data-access vendors deliver JDBC-ODBC bridges which support current standards for both mechanisms, and which far outperform the JVM's built-in bridge. Examples: OpenLink JDBC-ODBC Bridge, SequeLink JDBC-ODBC Bridge, ZappySys JDBC-ODBC Bridge. OLE DB-to-ODBC bridges An OLE DB-ODBC bridge consists of an OLE DB provider which uses the services of an ODBC driver to connect to a target database. This provider translates OLE DB method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks an OLE DB provider, but is accessible through an ODBC driver. Microsoft ships one, MSDASQL.DLL, as part of the MDAC system component bundle, together with other database drivers, to simplify development in COM-aware languages (e.g. Visual Basic). Third parties have also developed such bridges, notably OpenLink Software, whose 64-bit OLE DB Provider for ODBC Data Sources filled the gap when Microsoft initially deprecated this bridge for their 64-bit OS. (Microsoft later relented, and 64-bit Windows starting with Windows Server 2008 and Windows Vista SP1 has shipped with a 64-bit version of MSDASQL.) Examples: OpenLink OLEDB-ODBC Bridge, SequeLink OLEDB-ODBC Bridge. ADO.NET-to-ODBC bridges An ADO.NET-ODBC bridge consists of an ADO.NET provider which uses the services of an ODBC driver to connect to a target database. This provider translates ADO.NET method calls into ODBC function calls. Programmers usually use such a bridge when a given database lacks an ADO.NET provider, but is accessible through an ODBC driver. Microsoft ships one as part of the MDAC system component bundle, together with other database drivers, to simplify development in C#. Third parties have also developed such bridges. Examples: OpenLink ADO.NET-ODBC Bridge, SequeLink ADO.NET-ODBC Bridge. See also GNU Data Access Java Database Connectivity (JDBC) Windows Open Services Architecture ODBC Administrator
Open Database Connectivity
[ "Technology", "Engineering" ]
5,032
[ "Software engineering", "Computer programming", "Computers" ]
168,703
https://en.wikipedia.org/wiki/Thujone
Thujone is a ketone and a monoterpene that occurs predominantly in two diastereomeric (epimeric) forms: (−)-α-thujone and (+)-β-thujone. Though it is best known as a chemical compound in the spirit absinthe, it is only present in trace amounts and is unlikely to be responsible for the spirit's purported stimulant and psychoactive effects. Thujone acts on the GABAA receptor as an antagonist. As a competitive antagonist of the GABAA receptor, thujone alone is considered to be a convulsant, though by interfering with the inhibitory transmitter GABA, it may convey stimulating, mood-elevating effects at low doses. It is also used in perfumery as a component of several essential oils. In addition to the naturally occurring (−)-α-thujone and (+)-β-thujone, two other forms are possible: (+)-α-thujone and (−)-β-thujone. In 2016, they were found in nature as well, in Salvia officinalis. Sources Thujone is found in a number of plants, such as arborvitae (genus Thuja, hence the derivation of the name), Nootka cypress, some junipers, mugwort, oregano, common sage, tansy, and wormwood, most notably grand wormwood (Artemisia absinthium), usually as a mix of isomers in a 1:2 ratio. It is also found in various species of Mentha (mint). Biosynthesis The biosynthesis of thujone is similar to the synthesis of other monoterpenes and begins with the formation of geranyl diphosphate (GPP) from dimethylallyl pyrophosphate (DMAPP) and isopentenyl diphosphate (IPP), catalyzed by the enzyme geranyl diphosphate synthase. Quantitative 13C NMR spectroscopic analysis has demonstrated that the isoprene units used to form thujone in plants are derived from the methylerythritol phosphate (MEP) pathway. The reactions that generate the thujone skeleton in sabinene from GPP are mediated by the enzyme sabinene synthase, which has GPP as its substrate. GPP (1) first isomerizes to linalyl diphosphate (LPP) (2) and neryl diphosphate (NPP) (3). LPP preferentially forms a delocalized allylic cation-diphosphate ion pair (4). The ion-pair intermediate then cyclizes in an electrophilic addition to yield the α-terpinyl tertiary cation (5). The α-terpinyl cation (5) then undergoes a 1,2-hydride shift via a Wagner–Meerwein rearrangement, leading to the formation of the terpinen-4-yl cation (6). This cation undergoes a second cyclization to form the thujyl cation intermediate (7) before loss of a proton to form the thujone precursor, (+)-sabinene (8). From (+)-sabinene (8), the proposed biosynthetic route to generate thujone follows a three-step pathway: (+)-sabinene is first oxidized to an isomer of (+)-sabinol (9-1, 9-2) by a cytochrome P450 enzyme, followed by conversion to (+)-sabinone (10) via a dehydrogenase. Finally, a reductase mediates the conversion to α-thujone (11-1) and β-thujone (11-2). The isomerism of the (+)-sabinol intermediate varies among thujone-producing plants; for instance, in the western redcedar (Thuja plicata), thujone is derived exclusively from the (+)-trans-sabinol intermediate (9-1), whereas in the common garden sage (Salvia officinalis), thujone is formed from the (+)-cis-sabinol intermediate (9-2). Pharmacology Based on a hypothesis that considered only molecular shape, it was speculated that thujone may act similarly to THC on the cannabinoid receptors; however, thujone failed to evoke a cannabimimetic response in a 1999 investigative study. Thujone is a GABAA receptor antagonist and, more specifically, a GABAA receptor competitive antagonist. 
By inhibiting GABA receptor activation, neurons may fire more easily, which can cause muscle spasms and convulsions. This interaction with the GABAA receptor is specific to alpha-thujone. Thujone is also a 5-HT3 antagonist. The median lethal dose, or LD50, of α-thujone, the more active of the two isomers, is around 45 mg/kg in mice, with a 0% mortality rate at 30 mg/kg and 100% at 60 mg/kg. Mice exposed to the higher dose have convulsions that lead to death within one minute. From 30 to 45 mg/kg, the mice experience muscle spasms in the legs, which progress to general convulsions until death or recovery. These effects are in line with other GABA antagonists. Also, α-thujone is metabolized quickly in the liver in mice. Pretreatment with GABA positive allosteric modulators like diazepam, phenobarbital, or 1 g/kg of ethanol protects against a lethal dose of 100 mg/kg. Attention performance has been tested with low and high doses of thujone in alcohol. The high dose had a short-term negative effect on attention performance; the lower dose showed no noticeable effect. Thujone is reported to be toxic to brain, kidney, and liver cells and could cause convulsions if used in too high a dose. Other thujone-containing plants such as the tree arborvitae (Thuja occidentalis) are used in herbal medicine, mainly for their alleged immune-system-stimulating effects. Side effects from the essential oil of this plant include anxiety, sleeplessness, and convulsions, which confirms the central nervous system effects of thujone. In absinthe Thujone is most commonly known for being a compound in the spirit absinthe. In the past, absinthe was thought to contain up to 260–350 mg/L thujone, but modern tests have shown this estimate to be far too high. A 2008 study of 13 pre-ban (1895–1910) bottles using gas chromatography–mass spectrometry (GC–MS) found that the bottles had between 0.5 and 48.3 mg/L and averaged 25.4 mg/L. A 2005 study recreated three 1899 high-wormwood recipes and tested them with GC–MS, and found that the highest contained 4.3 mg/L thujone. GC–MS testing is important in this capacity, because gas chromatography alone may record an inaccurately high reading of thujone, as other compounds may interfere with and add to the apparent measured amount. History The compound was discovered after absinthe became popular in the mid-19th century. Valentin Magnan, who studied alcoholism, tested pure wormwood oil on animals and discovered it caused seizures independently of the effects of alcohol. Based on this, absinthe, which contains a small amount of wormwood oil, was assumed to be more dangerous than ordinary alcohol. Eventually, thujone was isolated as the cause of these reactions. Magnan went on to study 250 abusers of alcohol and noted that those who drank absinthe had seizures and hallucinations. The seizures are caused by the (+)-α-thujone interacting with the GABA receptors, causing epileptic activity. In light of modern evidence, these conclusions are questionable, as they are based on a poor understanding of other compounds and diseases, and clouded by Magnan's belief that alcohol and absinthe were degenerating the French race. After absinthe was banned, research dropped off until the 1970s, when the British scientific journal Nature published an article comparing the molecular shape of thujone to tetrahydrocannabinol (THC), the primary psychoactive substance found in cannabis, and hypothesized it would act the same way on the brain, sparking the myth that thujone was a cannabinoid. 
More recently, following European Council Directive No. 88/388/EEC (1988), which allowed certain levels of thujone in foodstuffs in the EU, the studies described above were conducted and found only minute levels of thujone in absinthe. Regulations European Union Maximum thujone levels in the EU are:
0.5 mg/kg in food prepared with Artemisia species, excluding those prepared with sage and non-alcoholic beverages
10 mg/kg in alcoholic beverages not prepared with Artemisia species
25 mg/kg in food prepared with sage
35 mg/kg in alcoholic beverages prepared with Artemisia species
United States In the United States, the addition of pure thujone to foods is not permitted. Foods or beverages that contain Artemisia species, white cedar, oakmoss, tansy, or yarrow must be thujone-free, which in practice means that they contain less than 10 parts per million thujone. Other herbs that contain thujone have no restrictions. For example, sage and sage oil (which can be up to 50% thujone) are on the Food and Drug Administration's list of generally recognized as safe (GRAS) substances. Absinthe offered for sale in the United States must be thujone-free by the same standard that applies to other beverages containing Artemisia, so absinthe with small amounts of thujone may be legally imported. Canada In Canada, liquor laws are the domain of the provincial governments. Alberta, Ontario, and Nova Scotia allow 10 mg/kg thujone; Quebec allows 15 mg/kg; Manitoba allows 6–8 mg thujone per litre; British Columbia adheres to the same levels as Ontario. However, in Saskatchewan and Quebec, one can purchase any liquor available in the world upon the purchase of a maximum of one case, usually twelve 750 ml bottles or 9 L. The individual liquor boards must approve each product before it may be sold on shelves. See also Piołunówka – Polish alcoholic preparation with thujone content higher than in absinthe
Thujone
[ "Chemistry" ]
2,363
[ "Chemical ecology", "Ketones", "Functional groups", "Plant toxins", "Neurochemistry", "Neurotoxins" ]
168,753
https://en.wikipedia.org/wiki/Data%20haven
A data haven, like a corporate haven or tax haven, is a refuge for uninterrupted or unregulated data. Data havens are locations with legal environments that are friendly to the concept of a computer network freely holding data and even protecting its content and associated information. They tend to fit into three categories: a physical locality with weak information-system enforcement and extradition laws, a physical locality with intentionally strong protections of data, and virtual domains designed to secure data via technical means (such as encryption) regardless of any legal environment. Tor's onion space, I2P (both hidden services), HavenCo (centralized), and Freenet (decentralized) are four models of modern-day virtual data havens. Purposes of data havens Reasons for establishing data havens include access to free political speech for users in countries where censorship of the Internet is practiced. Other reasons can include:
Whistleblowing
Distributing software, data or speech that violates laws such as the DMCA
Copyright infringement
Circumventing data protection laws
Online gambling
Pornography
Cybercrime
History of the term The 1978 report of the British government's Data Protection Committee expressed concern that different privacy standards in different countries would lead to the transfer of personal data to countries with weaker protections; it feared that Britain might become a "data haven". Also in 1978, Adrian Norman published a mock consulting study on the feasibility of setting up a company providing a wide range of data haven services, called "Project Goldfish". Science fiction novelist William Gibson used the term in his novels Count Zero and Mona Lisa Overdrive, as did Bruce Sterling in Islands in the Net. The 1990s segments of Neal Stephenson's 1999 novel Cryptonomicon concern a small group of entrepreneurs attempting to create a data haven. See also Corporate haven Crypto-anarchism International Modern Media Institute
Data haven
[ "Technology" ]
397
[ "Computer law", "Computing and society" ]
168,848
https://en.wikipedia.org/wiki/Human%20skeleton
The human skeleton is the internal framework of the human body. It is composed of around 270 bones at birth – this total decreases to around 206 bones by adulthood after some bones fuse together. The bone mass in the skeleton makes up about 14% of the total body weight (ca. 10–11 kg for an average person) and reaches maximum mass between the ages of 25 and 30. The human skeleton can be divided into the axial skeleton and the appendicular skeleton. The axial skeleton is formed by the vertebral column, the rib cage, the skull and other associated bones. The appendicular skeleton, which is attached to the axial skeleton, is formed by the shoulder girdle, the pelvic girdle and the bones of the upper and lower limbs. The human skeleton performs six major functions: support, movement, protection, production of blood cells, storage of minerals, and endocrine regulation. The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis exist. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. The human female pelvis is also different from that of males in order to facilitate childbirth. Unlike most primates, human males do not have penile bones. Divisions Axial The axial skeleton (80 bones) is formed by the vertebral column (32–34 bones; the number of vertebrae differs from person to person, as the lower two parts, the sacral and coccygeal bones, may vary in length), a part of the rib cage (12 pairs of ribs and the sternum), and the skull (22 bones and 7 associated bones). The upright posture of humans is maintained by the axial skeleton, which transmits the weight from the head, the trunk, and the upper extremities down to the lower extremities at the hip joints. The bones of the spine are supported by many ligaments. The erector spinae muscles also provide support and are useful for balance. Appendicular The appendicular skeleton (126 bones) is formed by the pectoral girdles, the upper limbs, the pelvic girdle or pelvis, and the lower limbs. Their functions are to make locomotion possible and to protect the major organs of digestion, excretion and reproduction. Functions The skeleton serves six major functions: support, movement, protection, production of blood cells, storage of minerals and endocrine regulation. Support The skeleton provides the framework which supports the body and maintains its shape. The pelvis, associated ligaments and muscles provide a floor for the pelvic structures. Without the rib cage, costal cartilages, and intercostal muscles, the lungs would collapse. Movement The joints between bones allow movement, some allowing a wider range of movement than others, e.g. the ball and socket joint allows a greater range of movement than the pivot joint at the neck. Movement is powered by skeletal muscles, which are attached to the skeleton at various sites on bones. Muscles, bones, and joints provide the principal mechanics for movement, all coordinated by the nervous system. It is believed that the reduction of human bone density in prehistoric times reduced the agility and dexterity of human movement. Shifting from hunting to agriculture caused human bone density to decrease significantly. Protection The skeleton helps to protect many vital internal organs from being damaged. The skull protects the brain. The vertebrae protect the spinal cord. 
The rib cage, spine, and sternum protect the lungs, heart and major blood vessels. Blood cell production The skeleton is the site of haematopoiesis, the development of blood cells that takes place in the bone marrow. In children, haematopoiesis occurs primarily in the marrow of the long bones such as the femur and tibia. In adults, it occurs mainly in the pelvis, cranium, vertebrae, and sternum. Storage The bone matrix can store calcium and is involved in calcium metabolism, and bone marrow can store iron in ferritin and is involved in iron metabolism. However, bones are not entirely made of calcium, but a mixture of chondroitin sulfate and hydroxyapatite, the latter making up 70% of a bone. Hydroxyapatite is in turn composed of 39.8% calcium, 41.4% oxygen, 18.5% phosphorus, and 0.2% hydrogen by mass. Chondroitin sulfate is a sugar made up primarily of oxygen and carbon. Endocrine regulation Bone cells release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both insulin secretion and sensitivity, in addition to boosting the number of insulin-producing cells and reducing stores of fat. Sex differences Anatomical differences between human males and females are highly pronounced in some soft tissue areas, but tend to be limited in the skeleton. The human skeleton is not as sexually dimorphic as that of many other primate species, but subtle differences between sexes in the morphology of the skull, dentition, long bones, and pelvis are exhibited across human populations. In general, female skeletal elements tend to be smaller and less robust than corresponding male elements within a given population. It is not known whether or to what extent those differences are genetic or environmental. Skull A variety of gross morphological traits of the human skull demonstrate sexual dimorphism, such as the median nuchal line, mastoid processes, supraorbital margin, supraorbital ridge, and the chin. Dentition Human inter-sex dental dimorphism centers on the canine teeth, but it is not nearly as pronounced as in the other great apes. Long bones Long bones are generally larger in males than in females within a given population. Muscle attachment sites on long bones are often more robust in males than in females, reflecting a difference in overall muscle mass and development between sexes. Sexual dimorphism in the long bones is commonly characterized by morphometric or gross morphological analyses. Pelvis The human pelvis exhibits greater sexual dimorphism than other bones, specifically in the size and shape of the pelvic cavity, ilia, greater sciatic notches, and the sub-pubic angle. The Phenice method is commonly used by anthropologists to determine the sex of an unidentified human skeleton, with 96% to 100% accuracy in some populations. Women's pelvises are wider in the pelvic inlet and are wider throughout to allow for childbirth. The sacrum in the female pelvis is curved inwards to give the child a "funnel" to assist in its pathway from the uterus to the birth canal. Clinical significance There are many classified skeletal disorders. One of the most common is osteoporosis. Also common is scoliosis, a side-to-side curve in the back or spine, often creating a pronounced "C" or "S" shape when viewed on an x-ray of the spine. This condition is most apparent during adolescence, and is most common among females. Arthritis Arthritis is a disorder of the joints. 
It involves inflammation of one or more joints. When affected by arthritis, the joint or joints affected may be painful to move, may move in unusual directions or may be completely immobile. The symptoms of arthritis vary between types of arthritis. The most common form of arthritis, osteoarthritis, can affect both the larger and smaller joints of the human skeleton. The cartilage in the affected joints will degrade, soften and wear away. This decreases the mobility of the joints and decreases the space between bones where cartilage should be. Osteoporosis Osteoporosis is a disease of bone where there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined by the World Health Organization in women as a bone mineral density 2.5 standard deviations below peak bone mass, relative to the age- and sex-matched average, as measured by dual-energy X-ray absorptiometry, with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases, or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who may have developed osteoporosis and be at risk of fracture. Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates or strontium ranelate, and osteoporosis may be one factor considered when commencing hormone replacement therapy. History India The Sushruta Samhita, composed between the 6th century BCE and the 5th century CE, speaks of 360 bones. Books on Salya-Shastra (surgical science) know of only 300. The text then lists the total of 300 as follows: 120 in the extremities (e.g. hands, legs), 117 in the pelvic area, sides, back, abdomen and breast, and 63 in the neck and upwards. The text then explains how these subtotals were empirically verified. The discussion shows that the Indian tradition nurtured diversity of thought, with the Sushruta school reaching its own conclusions and differing from the Atreya-Caraka tradition. The differences in the count of bones in the two schools are partly because the Charaka Samhita includes 32 tooth sockets in its count, and partly because of their difference of opinion on how and when to count a cartilage as bone (which both sometimes do, unlike modern anatomy). Hellenistic world The study of bones in ancient Greece started under the Ptolemaic kings due to their link to Egypt. Herophilos, through his work studying dissected human corpses in Alexandria, is credited as the pioneer of the field. His works are lost but are often cited by notable persons in the field such as Galen and Rufus of Ephesus. Galen himself, though, did little dissection and relied on the work of others like Marinus of Alexandria, as well as his own observations of gladiator cadavers and animals. According to Katherine Park, in medieval Europe dissection continued to be practiced, contrary to the popular understanding that such practices were taboo and thus completely banned. The practice of holy autopsy, such as in the case of Clare of Montefalco, further supports the claim. 
Alexandria continued as a center of anatomy under Islamic rule, with Ibn Zuhr a notable figure. Chinese understandings are divergent, as the closest corresponding concept in the medicinal system seems to be the meridians, although, given that Hua Tuo regularly performed surgery, there may be some distance between medical theory and actual understanding. Renaissance Leonardo da Vinci made studies of the skeleton, albeit unpublished in his time. Many artists, Antonio del Pollaiuolo being the first, performed dissections for better understanding of the body, although they concentrated mostly on the muscles. Vesalius, regarded as the founder of modern anatomy, authored the book De humani corporis fabrica, which contained many illustrations of the skeleton and other body parts, correcting some theories dating from Galen, such as the lower jaw being a single bone instead of two. Various other figures like Alessandro Achillini also contributed to the further understanding of the skeleton. 18th century Since as early as 1797, the death goddess or folk saint known as Santa Muerte has been represented as a skeleton. See also List of bones of the human skeleton Distraction osteogenesis
Human skeleton
[ "Biology" ]
2,518
[ "Organ systems", "Endocrine system" ]
168,864
https://en.wikipedia.org/wiki/Well-ordering%20principle
In mathematics, the well-ordering principle states that every non-empty subset of nonnegative integers contains a least element. In other words, the set of nonnegative integers is well-ordered by its "natural" or "magnitude" order, in which x precedes y if and only if y is either x or the sum of x and some nonnegative integer (other orderings include the ordering 2, 4, 6, ...; 1, 3, 5, ...). The phrase "well-ordering principle" is sometimes taken to be synonymous with the "well-ordering theorem". On other occasions it is understood to be the proposition that the set of integers contains a well-ordered subset, called the natural numbers, in which every nonempty subset contains a least element. Properties Depending on the framework in which the natural numbers are introduced, this (second-order) property of the set of natural numbers is either an axiom or a provable theorem. For example: In Peano arithmetic, second-order arithmetic and related systems, and indeed in most (not necessarily formal) mathematical treatments of the well-ordering principle, the principle is derived from the principle of mathematical induction, which is itself taken as basic. Considering the natural numbers as a subset of the real numbers, and assuming that we know already that the real numbers are complete (again, either as an axiom or a theorem about the real number system), i.e., every bounded (from below) set has an infimum, then also every non-empty set S of natural numbers has an infimum, say a. We can now find an integer n such that a lies in the half-open interval (n − 1, n], and can then show that we must have a = n, and n in S. In axiomatic set theory, the natural numbers are defined as the smallest inductive set (i.e., a set containing 0 and closed under the successor operation). One can (even without invoking the regularity axiom) show that the set of all natural numbers n such that "{0, 1, ..., n} is well-ordered" is inductive, and must therefore contain all natural numbers; from this property one can conclude that the set of all natural numbers is also well-ordered. In the second sense, this phrase is used when that proposition is relied on for the purpose of justifying proofs that take the following form: to prove that every natural number belongs to a specified set S, assume the contrary, which implies that the set of counterexamples is non-empty and thus contains a smallest counterexample. Then show that for any counterexample there is a still smaller counterexample, producing a contradiction. This mode of argument is the contrapositive of proof by complete induction. It is known light-heartedly as the "minimal criminal" method and is similar in its nature to Fermat's method of "infinite descent". Garrett Birkhoff and Saunders Mac Lane wrote in A Survey of Modern Algebra that this property, like the least upper bound axiom for real numbers, is non-algebraic; i.e., it cannot be deduced from the algebraic properties of the integers (which form an ordered integral domain). Example applications The well-ordering principle can be used in the following proofs. Prime factorization Theorem: Every integer greater than one can be factored as a product of primes. This theorem constitutes part of the Prime Factorization Theorem. Proof (by well-ordering principle). Let S be the set of all integers greater than one that cannot be factored as a product of primes. We show that S is empty. Assume for the sake of contradiction that S is not empty. Then, by the well-ordering principle, there is a least element n in S; n cannot be prime, since a prime number itself is considered a length-one product of primes. 
By the definition of non-prime numbers, n has factors a and b, where a and b are integers greater than one and less than n. Since a < n and b < n, they are not in C, as n is the smallest element of C. So, a and b can be factored as products of primes, where a = p1p2...pj and b = q1q2...qk, meaning that n = p1p2...pj · q1q2...qk, a product of primes. This contradicts the assumption that n is in C, so the assumption that C is nonempty must be false. Integer summation Theorem: 1 + 2 + 3 + ... + n = n(n + 1)/2 for all positive integers n. Proof. Suppose for the sake of contradiction that the above theorem is false. Then, there exists a non-empty set of positive integers C = {n : 1 + 2 + ... + n ≠ n(n + 1)/2}. By the well-ordering principle, C has a minimum element c such that when n = c, the equation is false, but true for all positive integers less than c. The equation is true for n = 1, so c > 1; c − 1 is a positive integer less than c, so the equation holds for c − 1 as it is not in C. Therefore, 1 + 2 + ... + (c − 1) = (c − 1)c/2, and adding c to both sides shows that the equation holds for c, a contradiction. So, the equation must hold for all positive integers. References Wellfoundedness Mathematical principles
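The final step of the summation proof compresses a short computation; the following is a worked version of that arithmetic (a sketch only, with c denoting the minimal counterexample exactly as above):

```latex
% The equation holds for c - 1, since c - 1 < c and c is the least counterexample:
%   1 + 2 + \cdots + (c-1) = \frac{(c-1)c}{2}.
% Adding c to both sides:
\begin{align*}
1 + 2 + \cdots + (c-1) + c
  &= \frac{(c-1)c}{2} + c
   = \frac{c^2 - c + 2c}{2}
   = \frac{c(c+1)}{2},
\end{align*}
% so the formula also holds for n = c, contradicting the choice of c in C.
```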
Well-ordering principle
[ "Mathematics" ]
981
[ "Mathematical principles", "Order theory", "Wellfoundedness", "Mathematical induction" ]
168,865
https://en.wikipedia.org/wiki/Corollary
In mathematics and logic, a corollary is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could, for instance, be a proposition which is incidentally proved while proving another proposition; it might also be used more casually to refer to something which naturally or incidentally accompanies something else. Overview In mathematics, a corollary is a theorem connected by a short proof to an existing theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. More formally, proposition B is a corollary of proposition A if B can be readily deduced from A or is self-evident from its proof. In many cases, a corollary corresponds to a special case of a larger theorem, which makes the theorem easier to use and apply, even though its importance is generally considered to be secondary to that of the theorem. In particular, B is unlikely to be termed a corollary if its mathematical consequences are as significant as those of A. A corollary might have a proof that explains its derivation, even though such a derivation might be considered rather self-evident on some occasions (e.g., the Pythagorean theorem as a corollary of the law of cosines). Peirce's theory of deductive reasoning Charles Sanders Peirce held that the most important division of kinds of deductive reasoning is that between corollarial and theorematic. He argued that while all deduction ultimately depends in one way or another on mental experimentation on schemata or diagrams, in corollarial deduction: "It is only necessary to imagine any case in which the premises are true in order to perceive immediately that the conclusion holds in that case", while in theorematic deduction: "It is necessary to experiment in the imagination upon the image of the premise in order from the result of such experiment to make corollarial deductions to the truth of the conclusion." Peirce also held that corollarial deduction matches Aristotle's conception of direct demonstration, which Aristotle regarded as the only thoroughly satisfactory demonstration, while theorematic deduction is: the kind more prized by mathematicians; peculiar to mathematics; and involves in its course the introduction of a lemma or at least a definition uncontemplated in the thesis (the proposition that is to be proved); in remarkable cases that definition is of an abstraction that "ought to be supported by a proper postulate." See also Lemma (mathematics) Porism Proposition Lodge Corollary to the Monroe Doctrine Roosevelt Corollary to the Monroe Doctrine References Further reading Cut the knot: Sample corollaries of the Pythagorean theorem Geeks for geeks: Corollaries of binomial theorem Mathematical terminology Theorems Statements
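The parenthetical example above (the Pythagorean theorem as a corollary of the law of cosines) can be spelled out in one line; a minimal sketch, quoting the standard form of the law of cosines, which the article itself does not state:

```latex
% Law of cosines, for a triangle with sides a, b, c and angle \gamma opposite c:
\[ c^2 = a^2 + b^2 - 2ab\cos\gamma . \]
% Corollary (Pythagorean theorem): if \gamma = 90^\circ then \cos\gamma = 0, hence
\[ c^2 = a^2 + b^2 . \]
```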
Corollary
[ "Mathematics" ]
582
[ "nan" ]
168,905
https://en.wikipedia.org/wiki/Mathematical%20folklore
In common mathematical parlance, a mathematical result is called folklore if it is an unpublished result with no clear originator, but which is well-circulated and believed to be true among the specialists. More specifically, folk mathematics, or mathematical folklore, is the body of theorems, definitions, proofs, facts or techniques that circulate among mathematicians by word of mouth, but have not yet appeared in print, either in books or in scholarly journals. Quite important at times for researchers are folk theorems, which are results known, at least to experts in a field, and are considered to have established status, though not published in complete form. Sometimes, these are only alluded to in the public literature. An example is a book of exercises, described on the back cover: Another distinct category is well-knowable mathematics, a term introduced by John Conway. These mathematical matters are known and factual, but not in active circulation in relation to current research (i.e., untrendy). Both of these concepts are attempts to describe the actual context in which research work is done. Some people, in particular non-mathematicians, use the term folk mathematics to refer to the informal mathematics studied in many ethno-cultural studies of mathematics. The term "mathematical folklore" can also be used within mathematical circles to describe the various aspects of their esoteric culture and practices (e.g., slang, proverbs, limericks, jokes). Stories, sayings and jokes Mathematical folklore can also refer to the unusual (and possibly apocryphal) stories or jokes involving mathematicians or mathematics that are told verbally in mathematics departments. Compilations include tales collected in G. H. Hardy's A Mathematician's Apology; examples include: Srinivasa Ramanujan's taxicab numbers. Galileo dropping weights from the Leaning Tower of Pisa. An apple falling on Isaac Newton's head to inspire his theory of gravitation. John von Neumann's encounter with the famous fly puzzle. The drinking, duel, and early death of Galois. Richard Feynman cracking safes in the Manhattan Project. Alfréd Rényi's definition of a mathematician: "a device for turning coffee into theorems". Pál Turán's suggestion that weak coffee was only suitable for lemmata. The "turtles all the way down" story told by Stephen Hawking. Fermat's lost simple proof. The unwieldy proof and associated controversies of the Four Color Theorem. The murder of Hippasus by the Pythagoreans for his discovery of irrational numbers, specifically, √2. Sir William Rowan Hamilton, in a sudden moment of inspiration, discovering quaternions while crossing Brougham Bridge. See also List of mathematical jargon References Bibliography David Harel, "On Folk Theorems", Communications of the ACM 23:7:379–389 (July 1980) External links Mathematical humor: Collection of mathematical folklore Philosophy of mathematics Mathematics and culture Scientific folklore Sociology of scientific knowledge
Mathematical folklore
[ "Mathematics" ]
624
[ "nan" ]
168,907
https://en.wikipedia.org/wiki/Na%C3%AFve%20physics
Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence, the study of naïve physics is a part of the effort to formalize the common knowledge of human beings. Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism. Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate. Examples Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature: What goes up must come down A dropped object falls straight down A solid object cannot pass through another solid object A vacuum sucks things towards it An object is either at rest or moving, in an absolute sense Two events are either simultaneous or they are not Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it. Psychological research The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, because infants cannot use words to explain things (such as their reactions) the way most adults or older children can. Research in naïve physics relies on technology to measure eye gaze and reaction time in particular. Through observation, researchers know that infants get bored looking at the same stimulus after a certain amount of time. That boredom is called habituation. When an infant is sufficiently habituated to a stimulus, he or she will typically look away, alerting the experimenter to his or her boredom. At this point, the experimenter will introduce another stimulus. The infant will then dishabituate by attending to the new stimulus. In each case, the experimenter measures the time it takes for the infant to habituate to each stimulus. As an example of the use of this method, research by Susan Hespos and colleagues studied five-month-old infants' responses to the physics of liquids and solids. Infants in this research were shown liquid being poured from one glass to another until they were habituated to the event. That is, they spent less time looking at this event. Then, the infants were shown an event in which the liquid turned to a solid, which tumbled from the glass rather than flowed. The infants looked longer at the new event. That is, they dishabituated. Researchers infer that the longer the infant takes to habituate to a new stimulus, the more it violates his or her expectations of physical phenomena. When an adult observes an optical illusion that seems physically impossible, they will attend to it until it makes sense.
It is commonly believed that our understanding of physical laws emerges strictly from experience. But research shows that infants, who do not yet have such expansive knowledge of the world, have the same extended reaction to events that appear physically impossible. Such studies hypothesize that all people are born with an innate ability to understand the physical world. Smith and Casati (1994) have reviewed the early history of naïve physics, and especially the role of the Italian psychologist Paolo Bozzi. Types of experiments The basic experimental procedure of a study on naïve physics involves three steps: prediction of the infant's expectation, violation of that expectation, and measurement of the results. As mentioned above, the physically impossible event holds the infant's attention longer, indicating surprise when expectations are violated. Solidity An experiment that tests an infant's knowledge of solidity involves the impossible event of one solid object passing through another. First, the infant is shown a flat, solid screen moving from 0° to 180° in an arch formation. Next, a solid block is placed in the path of the screen, preventing it from completing its full range of motion. The infant habituates to this event, as it is what anyone would expect. Then, the experimenter creates the impossible event, and the solid screen passes through the solid block. The infant is confused by the event and attends longer than in the probable-event trial. Occlusion An occlusion event tests the knowledge that an object exists even if it is not immediately visible. Jean Piaget originally called this concept object permanence. When Piaget formed his developmental theory in the 1950s, he claimed that object permanence is learned, not innate. The children's game peek-a-boo is a classic example of this phenomenon, and one which obscures the true grasp infants have on permanence. To disprove this notion, an experimenter designs an impossible occlusion event. The infant is shown a block and a transparent screen. The infant habituates, then a solid panel is placed in front of the objects to block them from view. When the panel is removed, the block is gone, but the screen remains. The infant is confused because the block has disappeared, indicating that they understand that objects maintain location in space and do not simply disappear. Containment A containment event tests the infant's recognition that an object that is bigger than a container cannot fit completely into that container. Elizabeth Spelke, one of the psychologists who founded the naïve physics movement, identified the continuity principle, which conveys an understanding that objects exist continuously in time and space. Both occlusion and containment experiments hinge on the continuity principle. In the experiment, the infant is shown a tall cylinder and a tall cylindrical container. The experimenter demonstrates that the tall cylinder fits into the tall container, and the infant is bored by the expected physical outcome. The experimenter then places the tall cylinder completely into a much shorter cylindrical container, and the impossible event confuses the infant. Extended attention demonstrates the infant's understanding that containers cannot hold objects that exceed them in height. Baillargeon's research The published findings of Renée Baillargeon brought innate knowledge to the forefront in psychological research. Her research method centered on the visual preference technique.
Baillargeon and her followers studied how infants show preference for one stimulus over another. Experimenters judge preference by the length of time an infant will stare at a stimulus before habituating. Researchers believe that preference indicates the infant's ability to discriminate between the two events. See also Cartoon physics Common sense Elizabeth Spelke Folk psychology Renée Baillargeon Weak ontology References Scientific folklore Philosophy of physics Perception Consensus reality Physics
Naïve physics
[ "Physics" ]
1,422
[ "Philosophy of physics", "Applied and interdisciplinary physics" ]
168,917
https://en.wikipedia.org/wiki/Legality%20of%20cannabis
The legality of cannabis for medical and recreational use varies by country, in terms of its possession, distribution, and cultivation, and (with regard to medical use) how it can be consumed and what medical conditions it can be used for. These policies in most countries are regulated by three United Nations treaties: the 1961 Single Convention on Narcotic Drugs, the 1971 Convention on Psychotropic Substances, and the 1988 Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Cannabis was reclassified in 2020 as a Schedule I-only drug under the Single Convention treaty (having previously been both a Schedule I and a Schedule IV drug), with the schedules ranked from strictest to least strict as IV, I, II, and III. As a Schedule I drug under the treaty, countries can allow the medical use of cannabis but it is considered to be an addictive drug with a serious risk of abuse. The use of cannabis for recreational purposes is prohibited in most countries; however, many have adopted a policy of decriminalization to make simple possession a non-criminal offense (often similar to a minor traffic violation). Others have much more severe penalties, such as some Middle Eastern and Far Eastern countries where possession of even small amounts is punished by imprisonment for several years. Countries that have legalized recreational use of cannabis are Canada, Georgia, Germany, Luxembourg, Malta, Mexico, South Africa, Thailand, and Uruguay, plus 24 states, 3 territories, and the District of Columbia in the United States and the Australian Capital Territory in Australia. Commercial sale of recreational cannabis is legalized nationwide in three countries (Canada, Thailand, and Uruguay) and in all subnational U.S. jurisdictions that have legalized possession except Virginia and Washington, D.C. A policy of limited enforcement has also been adopted in many countries, in particular the Netherlands, where the sale of cannabis is tolerated at licensed coffeeshops. Countries that have legalized medical use of cannabis include Albania, Argentina, Australia, Barbados, Brazil, Canada, Chile, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Denmark, Ecuador, Finland, Georgia, Germany, Greece, Ireland, Israel, Italy, Jamaica, Lebanon, Luxembourg, Malawi, Malta, Mexico, the Netherlands, New Zealand, North Macedonia, Norway, Panama, Peru, Poland, Portugal, Rwanda, Saint Vincent and the Grenadines, San Marino, South Africa, Spain, Sri Lanka, Switzerland, Thailand, Ukraine, the United Kingdom, Uruguay, Vanuatu, Zambia, and Zimbabwe. Others have more restrictive laws that allow only the use of certain cannabis-derived pharmaceuticals, such as Sativex, Marinol, or Epidiolex. In the United States, 39 states, 4 territories, and the District of Columbia have legalized the medical use of cannabis, but at the federal level its use remains prohibited. Legalization timeline By country See also References Cannabis law Drug control law Drug policy by country
Legality of cannabis
[ "Chemistry" ]
592
[ "Drug control law", "Regulation of chemicals" ]
168,927
https://en.wikipedia.org/wiki/Somatic%20cell%20nuclear%20transfer
In genetics and developmental biology, somatic cell nuclear transfer (SCNT) is a laboratory strategy for creating a viable embryo from a body cell and an egg cell. The technique consists of taking an enucleated oocyte (egg cell) and implanting a donor nucleus from a somatic (body) cell. It is used in both therapeutic and reproductive cloning. In 1996, Dolly the sheep became famous for being the first successful case of the reproductive cloning of a mammal. In January 2018, a team of scientists in Shanghai announced the successful cloning of two female crab-eating macaques (named Zhong Zhong and Hua Hua) from foetal nuclei. "Therapeutic cloning" refers to the potential use of SCNT in regenerative medicine; this approach has been championed as an answer to the many issues concerning embryonic stem cells (ESCs) and the destruction of viable embryos for medical use, though questions remain on how homologous the two cell types truly are. Introduction Somatic cell nuclear transfer is a technique for cloning in which the nucleus of a somatic cell is transferred to the cytoplasm of an enucleated egg. After the transfer, cytoplasmic factors act on the somatic nucleus, and the reconstructed cell develops as a zygote. The egg develops to the blastocyst stage, and embryonic stem cells can then be created from the inner cell mass of the blastocyst. The first mammal to be developed by this technique was Dolly the sheep, in 1996. Early history Although Dolly is generally recognized as the first animal to be cloned using this technique, earlier instances of SCNT exist as early as the 1950s. In particular, the research of Sir John Gurdon in 1958 entailed the cloning of Xenopus laevis utilizing the principles of SCNT. In short, the experiment consisted of inducing a female specimen to ovulate, at which point her eggs were harvested. From here, the egg was enucleated using ultra-violet irradiation to disable the egg's pronucleus. At this point, the prepared egg cell and the nucleus from the donor cell were combined, and then incubation and eventual development into a tadpole proceeded. Gurdon's application of SCNT differs from more modern applications and even applications used on other model systems of the time (i.e., Rana pipiens) due to his usage of UV irradiation to enucleate the egg instead of using a pipette to remove the nucleus from the egg. Process The process of somatic cell nuclear transfer involves two different cells. The first is a female gamete, known as the ovum (egg/oocyte). In human SCNT experiments, these eggs are obtained from consenting donors, utilizing ovarian stimulation. The second is a somatic cell, referring to the cells of the human body. Skin cells, fat cells, and liver cells are only a few examples. The genetic material of the donor egg cell is removed and discarded, leaving it 'deprogrammed.' What is left is a somatic cell and an enucleated egg cell. These are then fused by inserting the somatic cell into the 'empty' ovum. After being inserted into the egg, the somatic cell nucleus is reprogrammed by its host egg cell. The ovum, now containing the somatic cell's nucleus, is stimulated with a shock and will begin to divide. The egg is now viable and capable of producing an adult organism containing all necessary genetic information from just one parent. Development will ensue normally, and after many mitotic divisions, the single cell forms a blastocyst (an early-stage embryo with about 100 cells) with an identical genome to the original organism (i.e. a clone).
Stem cells can then be obtained by the destruction of this clone embryo for use in therapeutic cloning, or, in the case of reproductive cloning, the clone embryo is implanted into a host mother for further development and brought to term. Conventional SCNT requires the use of micromanipulators, which are expensive machines used to accurately manipulate cells. Using the micromanipulator, a scientist makes an opening in the zona pellucida and sucks out the egg cell's original nucleus using a pipette. They then make another opening and inject the donor nucleus with a different pipette. Alternatively, electric energy can be applied to fuse the empty egg cell with a donor cell containing a nucleus. An alternative technique called "handmade cloning" was described by Indian scientists in 2001. This technique requires no micromanipulator and has been used for the cloning of several livestock species. Removal of the nucleus can be done chemically, by centrifuge, or with the use of a blade. The empty egg is glued to the donor cell with phytohaemagglutinin, then fused using electricity. (If a blade is used, two fusion steps are required: the first fusion is between the donor and an empty half-egg, the second between the half-size "demi-embryo" and another empty half-egg.) Applications Stem cell research Somatic cell nuclear transplantation has become a focus of study in stem cell research. The aim of carrying out this procedure is to obtain pluripotent cells from a cloned embryo. These cells are genetically matched to the donor organism from which they came. This gives them the ability to create patient-specific pluripotent cells, which could then be used in therapies or disease research. Embryonic stem cells are undifferentiated cells of an embryo. These cells are deemed to have pluripotent potential because they have the ability to give rise to all of the tissues found in an adult organism. This ability allows stem cells to create any cell type, which could then be transplanted to replace damaged or destroyed cells. Controversy surrounds human ESC work due to the destruction of viable human embryos, leading scientists to seek alternative methods of obtaining pluripotent stem cells; SCNT is one such method. A potential use of stem cells genetically matched to a patient would be to create cell lines that have genes linked to a patient's particular disease. By doing so, an in vitro model could be created that would be useful for studying that particular disease, potentially discovering its pathophysiology, and discovering therapies. For example, if a person with Parkinson's disease donated their somatic cells, the stem cells resulting from SCNT would have genes that contribute to Parkinson's disease. The disease-specific stem cell lines could then be studied in order to better understand the condition. Another application of SCNT stem cell research is using the patient-specific stem cell lines to generate tissues or even organs for transplant into the specific patient. The resulting cells would be genetically identical to the somatic cell donor, thus avoiding any complications from immune system rejection. Only a handful of labs in the world are currently using SCNT techniques in human stem cell research.
In the United States, scientists at the Harvard Stem Cell Institute, the University of California San Francisco, the Oregon Health & Science University, Stemagen (La Jolla, CA) and possibly Advanced Cell Technology are currently researching a technique to use somatic cell nuclear transfer to produce embryonic stem cells. In the United Kingdom, the Human Fertilisation and Embryology Authority has granted permission to research groups at the Roslin Institute and the Newcastle Centre for Life. SCNT may also be occurring in China. Though there have been numerous successes with cloning animals, questions remain concerning the mechanisms of reprogramming in the ovum. Despite many attempts, success in creating human nuclear transfer embryonic stem cells has been limited. One problem lies in the human cell's ability to form a blastocyst; the cells fail to progress past the eight-cell stage of development. This is thought to result from the somatic cell nucleus being unable to turn on embryonic genes crucial for proper development. These earlier experiments used procedures developed in non-primate animals with little success. A research group from the Oregon Health & Science University demonstrated SCNT procedures developed for primates successfully using skin cells. The key to their success was utilizing oocytes in metaphase II (MII) of the cell cycle. Egg cells in MII contain special factors in the cytoplasm that have a special ability in reprogramming implanted somatic cell nuclei into cells with pluripotent states. When the ovum's nucleus is removed, the cell loses its genetic information. This has been blamed for why enucleated eggs are hampered in their reprogramming ability. It is theorized that the critical embryonic genes are physically linked to oocyte chromosomes, so enucleation negatively affects these factors. Another possibility is that removing the egg nucleus or inserting the somatic nucleus causes damage to the cytoplast, affecting reprogramming ability. Taking this into account, the research group applied their new technique in an attempt to produce human SCNT stem cells. In May 2013, the Oregon group reported the successful derivation of human embryonic stem cell lines derived through SCNT, using fetal and infant donor cells. Using MII oocytes from volunteers and their improved SCNT procedure, human clone embryos were successfully produced. These embryos were of poor quality, lacking a substantial inner cell mass and poorly constructed trophectoderm. The imperfect embryos prevented the acquisition of human ESCs. The addition of caffeine during the removal of the ovum's nucleus and fusion of the somatic cell and the egg improved blastocyst formation and ESC isolation. The ESCs obtained were found to be capable of producing teratomas, expressed pluripotent transcription factors, and displayed a normal 46,XX karyotype, indicating that these SCNT-derived cells were in fact ESC-like. This was the first instance of successfully using SCNT to reprogram human somatic cells. This study used fetal and infantile somatic cells to produce their ESCs. In April 2014, an international research team expanded on this breakthrough. There remained the question of whether the same success could be accomplished using adult somatic cells. Epigenetic and age-related changes were thought to possibly hinder an adult somatic cell's ability to be reprogrammed.
Implementing the procedure pioneered by the Oregon research group, they were indeed able to grow stem cells generated by SCNT using adult cells from two donors aged 35 and 75, indicating that age does not impede a cell's ability to be reprogrammed. In late April 2014, the New York Stem Cell Foundation was successful in creating SCNT stem cells derived from adult somatic cells. One of these lines of stem cells was derived from the donor cells of a type 1 diabetic. The group was then able to successfully culture these stem cells and induce differentiation. When injected into mice, cells of all three of the germ layers successfully formed. The most significant of these were the cells that expressed insulin and were capable of secreting the hormone. These insulin-producing cells could be used for replacement therapy in diabetics, demonstrating real SCNT stem cell therapeutic potential. The impetus for SCNT-based stem cell research has been decreased by the development and improvement of alternative methods of generating stem cells. Methods to reprogram normal body cells into pluripotent stem cells were developed in humans in 2007. The following year, this method achieved a key goal of SCNT-based stem cell research: the derivation of pluripotent stem cell lines that have all genes linked to various diseases. Some scientists working on SCNT-based stem cell research have recently moved to the new methods of induced pluripotent stem cells. However, recent studies have called into question how similar iPS cells are to embryonic stem cells. Epigenetic memory in iPS cells affects the cell lineages into which they can differentiate. For instance, an iPS cell derived from a blood cell using only the Yamanaka factors will be more efficient at differentiating into blood cells, while it will be less efficient at creating a neuron. Recent studies indicate, however, that changes to the epigenetic memory of iPSCs using small molecules can reset them to an almost naive state of pluripotency. Studies have even shown that via tetraploid complementation, an entire viable organism can be created solely from iPSCs. SCNT stem cells have been found to face similar challenges. The cause of low yields in bovine SCNT cloning has, in recent years, been attributed to the previously hidden epigenetic memory of the somatic cells that were being introduced into the oocyte. Reproductive cloning This technique is currently the basis for cloning animals (such as the famous Dolly the sheep), and has been proposed as a possible way to clone humans. Using SCNT in reproductive cloning has proven difficult, with limited success. High fetal and neonatal death rates make the process very inefficient. Resulting cloned offspring are also plagued with developmental and imprinting disorders in non-human species. For these reasons, along with moral and ethical objections, reproductive cloning in humans is proscribed in more than 30 countries. Most researchers believe that in the foreseeable future it will not be possible to use the current cloning technique to produce a human clone that will develop to term. It remains a possibility, though critical adjustments will be required to overcome current limitations during early embryonic development in human SCNT. There is also the potential for treating diseases associated with mutations in mitochondrial DNA. Recent studies show that SCNT of the nucleus of a body cell afflicted with one of these diseases into a healthy oocyte prevents the inheritance of the mitochondrial disease.
This treatment does not involve cloning but would produce a child with three genetic parents: a father providing a sperm cell, one mother providing the egg nucleus, and another mother providing the enucleated egg cell. In 2018, the first successful cloning of primates using somatic cell nuclear transfer, the same method as for Dolly the sheep, was reported, with the birth of two live female clones (crab-eating macaques named Zhong Zhong and Hua Hua). Interspecies nuclear transfer Interspecies nuclear transfer (iSCNT) is a means of somatic cell nuclear transfer used to facilitate the rescue of endangered species, or even to restore species after their extinction. The technique is similar to SCNT cloning, which typically is performed between domestic animals and rodents, or where there is a ready supply of oocytes and surrogate animals. However, the cloning of highly endangered or extinct species requires the use of an alternative method of cloning. Interspecies nuclear transfer utilizes a host and a donor from two different but closely related species within the same genus. In 2000, Robert Lanza was able to produce a cloned fetus of a gaur, Bos gaurus, combining it successfully with a domestic cow, Bos taurus. In 2017, the first cloned Bactrian camel was born through iSCNT, using oocytes of a dromedary camel and skin fibroblast cells of an adult Bactrian camel as donor nuclei. Limitations Somatic cell nuclear transfer (SCNT) can be inefficient due to stresses placed on both the egg cell and the introduced nucleus, which can result in a low percentage of successfully reprogrammed cells. For example, in 1996 Dolly the sheep was born after 277 eggs were used for SCNT, which created 29 viable embryos, an efficiency of roughly 0.3%. Only three of these embryos survived until birth, and only one survived to adulthood. Millie, the offspring that survived, took 95 attempts to produce. Because the procedure was not automated and had to be performed manually under a microscope, SCNT was very resource intensive. Another reason for the high mortality rate among cloned offspring is that the cloned fetus tends to be larger than even other large offspring, resulting in death soon after birth. The biochemistry involved in reprogramming the differentiated somatic cell nucleus and activating the recipient egg was also far from understood. Another limitation is trying to use one-cell embryos during SCNT: when using just one-cell cloned embryos, the experiment has a 65% chance of failing to form a morula or blastocyst. The biochemistry also has to be extremely precise, as most late-term cloned fetus deaths are the result of inadequate placentation. However, by 2014, researchers were reporting success rates of 70–80% with cloning pigs, and in 2016 a Korean company, Sooam Biotech, was reported to be producing 500 cloned embryos a day. In SCNT, not all of the donor cell's genetic information is transferred, as the donor cell's mitochondria, which contain their own mitochondrial DNA, are left behind. The resulting hybrid cells retain the mitochondrial structures that originally belonged to the egg. As a consequence, clones such as Dolly that are born from SCNT are not perfect copies of the donor of the nucleus. This fact may also hamper the potential benefits of SCNT-derived tissues and organs for therapy, as there may be an immune response to the non-self mtDNA after transplant.
Additionally, the genes in the mitochondrial genome need to communicate with the nuclear genome, and a failure of somatic cell nuclear reprogramming can disrupt this communication, causing SCNT to fail. Epigenetic factors play an important role in the success or failure of SCNT attempts. The varying gene expression of a previously activated cell and its mRNAs may lead to overexpression, underexpression, or in some cases non-functional genes, which will affect the developing fetus. One such example of epigenetic limitations to SCNT is the regulation of histone methylation. Differing regulation of these histone methylation genes can directly affect the transcription of the developing genome, causing failure of the SCNT. Another contributing factor to failure of SCNT is X chromosome inactivation in early development of the embryo. A non-coding gene called XIST is responsible for inactivating one X chromosome during development; however, in SCNT this gene can be abnormally regulated, causing mortality in the developing fetus. Controversy Nuclear transfer techniques present a different set of ethical considerations than those associated with the use of other stem cells, like embryonic stem cells, which are controversial for their requirement to destroy an embryo. These different considerations have led some individuals and organizations who are not opposed to human embryonic stem cell research to be concerned about, or opposed to, SCNT research. One concern is that blastula creation in SCNT-based human stem cell research will lead to the reproductive cloning of humans. Both processes use the same first step: the creation of a nuclear transferred embryo, most likely via SCNT. Those who hold this concern often advocate for strong regulation of SCNT to preclude implantation of any derived products for the intention of human reproduction, or its prohibition. A second important concern is the appropriate source of the eggs that are needed. SCNT requires human egg cells, which can only be obtained from women. The most common source of these eggs today is eggs that are produced in excess of the clinical need during IVF treatment. This is a minimally invasive procedure, but it does carry some health risks, such as ovarian hyperstimulation syndrome. One vision for successful stem cell therapies is to create custom stem cell lines for patients. Each custom stem cell line would consist of a collection of identical stem cells each carrying the patient's own DNA, thus reducing or eliminating any problems with rejection when the stem cells were transplanted for treatment. For example, to treat a man with Parkinson's disease, a cell nucleus from one of his cells would be transplanted by SCNT into an egg cell from an egg donor, creating a unique lineage of stem cells almost identical to the patient's own cells. (There would be differences. For example, the mitochondrial DNA would be the same as that of the egg donor. In comparison, his own cells would carry the mitochondrial DNA of his mother.) Potentially millions of patients could benefit from stem cell therapy, and each patient would require a large number of donated eggs in order to successfully create a single custom therapeutic stem cell line. Such large numbers of donated eggs would exceed the number of eggs currently left over and available from couples trying to have children through assisted reproductive technology.
Therefore, healthy young women would need to be induced to sell eggs to be used in the creation of custom stem cell lines that could then be purchased by the medical industry and sold to patients. It is so far unclear where all these eggs would come from. Stem cell experts consider it unlikely that such large numbers of human egg donations would occur in a developed country because of the unknown long-term public health effects of treating large numbers of healthy young women with heavy doses of hormones in order to induce hyper-ovulation (ovulating several eggs at once). Although such treatments have been performed for several decades now, the long-term effects have not been studied or declared safe to use on a large scale on otherwise healthy women. Longer-term treatments with much lower doses of hormones are known to increase the rate of cancer decades later. Whether hormone treatments to induce hyper-ovulation could have similar effects is unknown. There are also ethical questions surrounding paying for eggs. In general, marketing body parts is considered unethical and is banned in most countries. Human eggs have been a notable exception to this rule for some time. To address the problem of creating a human egg market, some stem cell researchers are investigating the possibility of creating artificial eggs. If successful, human egg donations would not be needed to create custom stem cell lines. However, this technology may be a long way off. Policies regarding human SCNT SCNT involving human cells is currently legal for research purposes in the United Kingdom, having been incorporated into the Human Fertilisation and Embryology Act 1990. Permission must be obtained from the Human Fertilisation and Embryology Authority in order to perform or attempt SCNT. In the United States, the practice remains legal, as it has not been addressed by federal law. However, since 2002, a moratorium on United States federal funding for SCNT has prohibited funding the practice for the purposes of research. Thus, though legal, SCNT cannot be federally funded. American scholars have recently argued that because the product of SCNT is a clone embryo, rather than a human embryo, these policies are morally wrong and should be revised. In 2003, the United Nations adopted a proposal submitted by Costa Rica, calling on member states to "prohibit all forms of human cloning inasmuch as they are incompatible with human dignity and the protection of human life." This phrase may include SCNT, depending on interpretation. The Council of Europe's Convention on Human Rights and Biomedicine and its Additional Protocol to the Convention for the Protection of Human Rights and Dignity of the Human Being with regard to the Application of Biology and Medicine, on the Prohibition of Cloning Human Beings appear to ban SCNT of human beings. Of the Council's 45 member states, the Convention has been signed by 31 and ratified by 18. The Additional Protocol has been signed by 29 member nations and ratified by 14.
See also Cloning Embryogenesis Handmade cloning In vitro fertilisation Induced stem cells New Jersey legislation S1909/A2840 Rejuvenation Stem cell controversy Stem cell research References Further reading External links Research Cloning: Medical and scientific, legal and ethical aspects The Basics: Stem Cells and Public Policy The Century Foundation, June 2005 Research Cloning Basic Science, Center for Genetics and Society, (last modified October 4, 2004, retrieved October 6, 2006) Cloning: present uses and promises National Institutes of Health, Paper giving background information on cloning in general and SCNT from The Office of Science Policy Analysis. Nuclear Transfer – Stem Cells or Somatic Cell Nuclear Transfer (SCNT) The International Society for Stem Cell Research The Hinxton Group: An International Consortium on Stem Cells, Ethics & Law Cell culture techniques Cloning Induced stem cells Life extension Stem cell research 1996 in biotechnology Bioethics
Somatic cell nuclear transfer
[ "Chemistry", "Technology", "Engineering", "Biology" ]
5,010
[ "Biochemistry methods", "Bioethics", "Stem cell research", "Cloning", "Cell culture techniques", "Genetic engineering", "Translational medicine", "Tissue engineering", "Ethics of science and technology", "Induced stem cells" ]
168,944
https://en.wikipedia.org/wiki/Immunosuppression
Immunosuppression is a reduction of the activation or efficacy of the immune system. Some portions of the immune system itself have immunosuppressive effects on other parts of the immune system, and immunosuppression may occur as an adverse reaction to treatment of other conditions. In general, deliberately induced immunosuppression is performed to prevent the body from rejecting an organ transplant. Additionally, it is used for treating graft-versus-host disease after a bone marrow transplant, or for the treatment of auto-immune diseases such as systemic lupus erythematosus, rheumatoid arthritis, Sjögren's syndrome, or Crohn's disease. This is typically done using medications, but may involve surgery (splenectomy), plasmapheresis, or radiation. A person who is undergoing immunosuppression, or whose immune system is weak for some other reason (such as chemotherapy or HIV), is said to be immunocompromised. Deliberately induced Administration of immunosuppressive medications or immunosuppressants is the main method for deliberately inducing immunosuppression; in optimal circumstances, immunosuppressive drugs primarily target hyperactive components of the immune system. People in remission from cancer who require immunosuppression are not more likely to experience a recurrence. Throughout its history, radiation therapy has been used to decrease the strength of the immune system. Dr. Joseph Murray of Brigham and Women's Hospital was given the Nobel Prize in Physiology or Medicine in 1990 for work on immunosuppression. Immunosuppressive drugs have the potential to cause immunodeficiency, which can increase susceptibility to opportunistic infection and decrease cancer immunosurveillance. Immunosuppressants may be prescribed when a normal immune response is undesirable, such as in autoimmune diseases. Steroids were the first class of immunosuppressant drugs identified, though side-effects of early compounds limited their use. The more specific azathioprine was identified in 1960, but it was the discovery of ciclosporin in 1980 (together with azathioprine) that allowed significant expansion of transplantation to less well-matched donor-recipient pairs as well as broad application to lung transplantation, pancreas transplantation, and heart transplantation. After an organ transplantation, the body will nearly always reject the new organ(s) due to differences in human leukocyte antigen between the donor and recipient. As a result, the immune system detects the new tissue as "foreign", and attempts to remove it by attacking it with white blood cells, resulting in the death of the donated tissue. Immunosuppressants are administered in order to help prevent rejection; however, the body becomes more vulnerable to infections and malignancy during the course of such treatment. Non-deliberate immunosuppression Non-deliberate immunosuppression can occur in, for example, ataxia–telangiectasia, complement deficiencies, many types of cancer, and certain chronic infections such as human immunodeficiency virus (HIV). The unwanted effect in non-deliberate immunosuppression is immunodeficiency that results in increased susceptibility to pathogens, such as bacteria and viruses. Immunodeficiency is also a potential adverse effect of many immunosuppressant drugs. In this sense, the scope of the term immunosuppression in general includes both beneficial and potentially adverse effects of decreasing the function of the immune system.
B cell deficiency and T cell deficiency are immune impairments that individuals are either born with or acquire, and which in turn can lead to immunodeficiency problems. Nezelof syndrome is an example of an immunodeficiency of T-cells. See also References Further reading External links PubMed Immune system Immunology Medical treatments
Immunosuppression
[ "Biology" ]
845
[ "Organ systems", "Immunology", "Immune system" ]
168,977
https://en.wikipedia.org/wiki/Trillian%20%28character%29
Tricia Marie McMillan, also known as Trillian Astra, is a fictional character from Douglas Adams' series The Hitchhiker's Guide to the Galaxy. She is most commonly referred to simply as "Trillian", a modification of her birth name, which she adopted because it sounded more "space-like". According to the movie version, her middle name is Marie. Physically, she is described as "a slim, darkish humanoid, with long waves of black hair, a full mouth, an odd little knob of a nose and ridiculously brown eyes," looking "vaguely Arabic." Biography Tricia McMillan is a mathematician and astrophysicist whom Arthur Dent attempted to talk to at a party in Islington. She and Arthur next meet six months later on the spaceship Heart of Gold, shortly after the Earth has been destroyed to make way for a hyperspace bypass. The trilogy later reveals that Trillian eventually left the party with Zaphod Beeblebrox, who, according to the Quintessential Phase, is directly responsible for her nickname. In the radio series, she is carried off and forcibly married to the President of the Algolian Chapter of the Galactic Rotary Club and consequently does not appear in the second radio series at all. The later radio series (the Tertiary Phase and beyond) reveal that this (probably) occurred only in the artificial universe within the Guide offices. In the books, which the third, fourth and fifth series follow, she saves the universe from the Krikketeers and later becomes a Sub-Etha Radio reporter under the name Trillian Astra. Some drafts of the movie's screenplay, and Robbie Stamp's "making of" book covering the movie, state that Trillian was to be revealed as half-human, an acknowledged divergence from Douglas Adams' original storyline. This would have been done in order to underline the loneliness of Arthur Dent, the only 100% Homo sapiens remaining in the universe, after Earth's demolition. This idea was scrapped after the "making of" book was written, and the scene revealing Trillian's heritage (by the mice, to Arthur, on the Earth Mark II) was re-written. An interview with actress Zooey Deschanel, included on the DVD version, has her mention that Trillian is half-human, suggesting the interview was recorded prior to the change of plan. Relationships In the novels and radio series, Trillian does not have a romantic relationship with Arthur (although when Arthur starts seeing Fenchurch, Ford Prefect asks him what happened to Trillian). In the fifth book, Trillian is revealed as the mother of Random Dent. It is unclear for how long (if ever) Trillian had a relationship with Zaphod. They seem to travel away from each other after the third book, although in the fourth one Arthur states Trillian is with Zaphod, and in the fifth Trillian implies that she hadn't had a child with Zaphod simply because they're different species. In the sixth novel, And Another Thing..., she pursues a relationship with Wowbagger, the Infinitely Prolonged; she accuses Arthur of carrying a torch for her as well. The 2005 film puts a different spin on the character. The main emotional arc of the movie is a love triangle between Trillian, Zaphod, and Arthur. Trillian is initially attracted to Arthur when she meets him on Earth, but she is disappointed by his apparent lack of spontaneity.
During their travels, Trillian discovers that Zaphod may be the more superficially exciting choice, but Arthur is the man who truly cares about her; when Arthur is about to have his head cut open by the mice, he comments that his feelings for Trillian are the only thing he ever had questions about where the answer made him happy. The film concludes with Arthur and Trillian sharing a brief kiss as they prepare to return to the Heart of Gold with Ford and Zaphod while Slartibartfast plans to restart Earth, with Arthur concluding that he is ready to move on from the planet now that there is a world to come back to. Portrayals Trillian was played on radio by Susan Sheridan, on television by Sandra Dickinson (who also reprised an alternate-universe version of the role in the fifth and sixth radio series, playing both original and alternate-universe versions in the latter), on the Original Records LP version by Cindy Oswin, and in the 2005 film by Zooey Deschanel. In The Illustrated Hitchhiker's Guide to the Galaxy, she is portrayed by Tali, a model. In the original radio series, she is portrayed with an English accent; in both the TV series and the movie she is played as an American. The "Quintessential Phase" of the radio series features Sandra Dickinson in the role of the alternate version of Tricia McMillan as a "blonder and more American" Trillian; the radio series indicates that the character is otherwise identical to the first Trillian and was born in the United Kingdom. In the book Mostly Harmless, it is said that both the alternate Tricia McMillan and Trillian have an English accent. In The Hitchhiker's Guide to the Galaxy novel, she is described as follows: "She was slim, darkish, humanoid, with long waves of black hair, a full mouth, an odd little knob of a nose and ridiculously brown eyes. With her red head scarf knotted in that particular way and her long flowing silky brown dress, she looked vaguely Arabic." She has consistently not been portrayed as such in the television and film adaptations, although the film's Trillian is closer to her appearance in the books than the television version was. Appearances Trillian comes closest of all female characters to appearing in the entire "Hitchhiker's" saga. Novels The Hitchhiker's Guide to the Galaxy The Restaurant at the End of the Universe Life, the Universe and Everything So Long, and Thanks for All the Fish (mentioned only) Mostly Harmless (also alternate Tricia McMillan) And Another Thing...
(also alternate Tricia McMillan) Radio The Hitchhiker's Guide to the Galaxy radio series Featuring Susan Sheridan as Trillian: Primary Phase: "Fit the Second", "Fit the Third", "Fit the Fourth", "Fit the Fifth", "Fit the Sixth" Tertiary Phase: "Fit the Thirteenth", "Fit the Sixteenth", "Fit the Seventeenth", "Fit the Eighteenth" Quintessential Phase: "Fit the Twenty-Third", "Fit the Twenty-Fifth", "Fit the Twenty-Sixth" Featuring Sandra Dickinson as the alternate character Tricia McMillan: Quintessential Phase: "Fit the Twenty-Fourth", "Fit the Twenty-Fifth", "Fit the Twenty-Sixth" Featuring Sandra Dickinson as Trillian and Tricia McMillan: "The Hexagonal Phase" LP Featuring Cindy Oswin as Trillian The Hitchhiker's Guide to the Galaxy The Restaurant at the End of the Universe Television Featuring Sandra Dickinson as Trillian Episode 2 Episode 3 Episode 4 Episode 5 Episode 6 Computer game The Hitchhiker's Guide to the Galaxy Film Featuring Zooey Deschanel as Trillian The Hitchhiker's Guide to the Galaxy References The Hitchhiker's Guide to the Galaxy characters Female characters in film Female characters in literature Female characters in television Fictional mathematicians Fictional astronomers Fictional reporters and correspondents Fictional refugees Literary characters introduced in 1978 Fictional female scientists
Trillian (character)
[ "Astronomy" ]
1,573
[ "Astronomers", "Fictional astronomers" ]
168,986
https://en.wikipedia.org/wiki/Glycogen
Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals, fungi, and bacteria. It is the main storage form of glucose in the human body. Glycogen functions as one of three regularly used forms of energy reserves: creatine phosphate for very short-term use, glycogen for short-term use, and the triglyceride stores in adipose tissue (i.e., body fat) for long-term storage. Protein, broken down into amino acids, is seldom used as a main energy source except during starvation and glycolytic crisis (see bioenergetic systems). In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle. In the liver, glycogen can make up 5–6% of the organ's fresh weight: the liver of an adult, an organ weighing about 1.5 kg, can store roughly 100–120 grams of glycogen. In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass): the skeletal muscle of an adult weighing 70 kg stores roughly 400 grams of glycogen. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells, white blood cells, and glial cells in the brain. The uterus also stores glycogen during pregnancy to nourish the embryo. The amount of glycogen stored in the body mostly depends on oxidative type 1 fibres, physical training, basal metabolic rate, and eating habits. Different levels of resting muscle glycogen are reached by changing the number of glycogen particles, rather than increasing the size of existing particles, though most glycogen particles at rest are smaller than their theoretical maximum. Approximately 4 grams of glucose are present in the blood of humans at all times; in fasting individuals, blood glucose is maintained constant at this level at the expense of glycogen stores, primarily from the liver (glycogen in skeletal muscle is mainly used as an immediate source of energy for that muscle rather than being used to maintain physiological blood glucose levels). Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself; however, the breakdown of muscle glycogen impedes muscle glucose uptake from the blood, thereby increasing the amount of blood glucose available for use in other tissues. Liver glycogen stores serve as a store of glucose for use throughout the body, particularly the central nervous system. The human brain consumes approximately 60% of blood glucose in fasted, sedentary individuals. Glycogen is an analogue of starch, a glucose polymer that functions as energy storage in plants. It has a structure similar to amylopectin (a component of starch), but is more extensively branched and compact than starch. Both are white powders in their dry state. Glycogen is found in the form of granules in the cytosol/cytoplasm in many cell types, and plays an important role in the glucose cycle. Glycogen forms an energy reserve that can be quickly mobilized to meet a sudden need for glucose, but one that is less compact than the energy reserves of triglycerides (lipids). As such it is also found as a storage reserve in many parasitic protozoa. Structure Glycogen is a branched biopolymer consisting of linear chains of glucose residues with an average chain length of approximately 8–12 glucose units and 2,000–60,000 residues per molecule of glycogen. Like amylopectin, glucose units are linked together linearly by α(1→4) glycosidic bonds from one glucose to the next.
Branches are linked to the chains from which they branch off by α(1→6) glycosidic bonds between the first glucose of the new branch and a glucose on the stem chain. Each glycogen molecule is essentially a ball of glucose trees, with around 12 layers, centered on a glycogenin protein, with three kinds of glucose chains: A, B, and C. There is only one C-chain, attached to the glycogenin. This C-chain is formed by the self-glucosylation of the glycogenin, forming a short primer chain. From the C-chain grow out B-chains, and from the B-chains branch out further B- and A-chains. The B-chains have on average 2 branch points, while the A-chains are terminal, and thus unbranched. On average, each chain has length 12, tightly constrained to be between 11 and 15. All A-chains reach the spherical surface of the glycogen. Glycogen in muscle, liver, and fat cells is stored in a hydrated form, composed of three or four parts of water per part of glycogen, associated with 0.45 millimoles (18 mg) of potassium per gram of glycogen. Glucose is an osmotic molecule, and at high concentrations it can have profound effects on osmotic pressure, possibly leading to cell damage or death if it were stored in the cell unmodified. Glycogen is a non-osmotic molecule, so the cell can use it to store glucose without disrupting osmotic pressure. Functions Liver As a meal containing carbohydrates or protein is eaten and digested, blood glucose levels rise, and the pancreas secretes insulin. Blood glucose from the portal vein enters liver cells (hepatocytes). Insulin acts on the hepatocytes to stimulate the action of several enzymes, including glycogen synthase. Glucose molecules are added to the chains of glycogen as long as both insulin and glucose remain plentiful. In this postprandial or "fed" state, the liver takes in more glucose from the blood than it releases. After a meal has been digested and glucose levels begin to fall, insulin secretion is reduced, and glycogen synthesis stops. When it is needed for energy, glycogen is broken down and converted again to glucose. Glycogen phosphorylase is the primary enzyme of glycogen breakdown. For the next 8–12 hours, glucose derived from liver glycogen is the primary source of blood glucose used by the rest of the body for fuel. Glucagon, another hormone produced by the pancreas, in many respects serves as a countersignal to insulin. In response to insulin levels being below normal (when blood levels of glucose begin to fall below the normal range), glucagon is secreted in increasing amounts and stimulates both glycogenolysis (the breakdown of glycogen) and gluconeogenesis (the production of glucose from other sources). Muscle Muscle glycogen appears to function as a reserve of quickly available phosphorylated glucose, in the form of glucose-1-phosphate, for muscle cells. Glycogen contained within skeletal muscle cells is primarily in the form of β particles. Other cells that contain small amounts use it locally as well. As muscle cells lack glucose-6-phosphatase, which is required to pass glucose into the blood, the glycogen they store is available solely for internal use and is not shared with other cells. This is in contrast to liver cells, which, on demand, readily break down their stored glycogen into glucose and send it through the bloodstream as fuel for other organs. Skeletal muscle needs ATP (which provides energy) for muscle contraction and relaxation, as described by the sliding filament theory.
Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. During anaerobic activity, such as weightlifting and isometric exercise, the phosphagen system (ATP-PCr) and muscle glycogen are the only substrates used, as they require neither oxygen nor blood flow. Different bioenergetic systems produce ATP at different speeds, with ATP produced from muscle glycogen much faster than from fatty acid oxidation. The level of exercise intensity also determines which substrate (fuel), and how much of it, is used for ATP synthesis. Muscle glycogen can supply a much higher rate of substrate for ATP synthesis than blood glucose: during maximum intensity exercise, muscle glycogen can supply 40 mmol glucose/kg wet weight/minute, whereas blood glucose can supply 4–5 mmol. Because of this high supply rate and quick ATP synthesis, during high-intensity aerobic activity (such as brisk walking, jogging, or running), the higher the exercise intensity, the more the muscle cell produces ATP from muscle glycogen. This reliance on muscle glycogen serves not only to provide the muscle with enough ATP during high-intensity exercise, but also to maintain blood glucose homeostasis (that is, to avoid the hypoglycaemia that would result from the muscles extracting far more glucose from the blood than the liver can provide). A deficit of muscle glycogen leads to the muscle fatigue known as "hitting the wall" or "the bonk" (see below under glycogen depletion). Structure Type In 1999, Meléndez et al. claimed that the structure of glycogen is optimal under a particular metabolic constraint model, where the structure was suggested to be "fractal" in nature. However, research by Besford et al. used small-angle X-ray scattering experiments accompanied by branching theory models to show that glycogen is a randomly hyperbranched polymer nanoparticle. Glycogen is not fractal in nature. This has been subsequently verified by others who have performed Monte Carlo simulations of glycogen particle growth, and shown that the molecular density reaches a maximum near the centre of the nanoparticle structure, not at the periphery (contradicting a fractal structure, which would have greater density at the periphery). History Glycogen was discovered by Claude Bernard. His experiments showed that the liver contained a substance that could give rise to reducing sugar by the action of a "ferment" in the liver. By 1857, he described the isolation of a substance he called "la matière glycogène", or "sugar-forming substance". Soon after the discovery of glycogen in the liver, M.A. Sanson found that muscular tissue also contains glycogen. The empirical formula for glycogen, (C6H10O5)n, was established by August Kekulé in 1858. Sanson, M. A. "Note sur la formation physiologique du sucre dans l'economie animale." Comptes rendus des seances de l'Academie des Sciences 44 (1857): 1323–5. Metabolism Synthesis Glycogen synthesis is, unlike its breakdown, endergonic—it requires the input of energy. Energy for glycogen synthesis comes from uridine triphosphate (UTP), which reacts with glucose-1-phosphate, forming UDP-glucose, in a reaction catalysed by UTP—glucose-1-phosphate uridylyltransferase. Glycogen is synthesized from monomers of UDP-glucose initially by the protein glycogenin, which has two tyrosine anchors for the reducing end of glycogen, since glycogenin is a homodimer.
After about eight glucose molecules have been added to a tyrosine residue, the enzyme glycogen synthase progressively lengthens the glycogen chain using UDP-glucose, adding α(1→4)-bonded glucose to the nonreducing end of the glycogen chain. The glycogen branching enzyme catalyzes the transfer of a terminal fragment of six or seven glucose residues from a nonreducing end to the C-6 hydroxyl group of a glucose residue deeper into the interior of the glycogen molecule. The branching enzyme can act upon only a branch having at least 11 residues, and the enzyme may transfer to the same glucose chain or to adjacent glucose chains. Breakdown Glycogen is cleaved from the nonreducing ends of the chain by the enzyme glycogen phosphorylase to produce monomers of glucose-1-phosphate: glycogen (n residues) + Pi ⇌ glycogen (n − 1 residues) + glucose-1-phosphate. In vivo, phosphorolysis proceeds in the direction of glycogen breakdown because the ratio of phosphate to glucose-1-phosphate is usually greater than 100. Glucose-1-phosphate is then converted to glucose-6-phosphate (G6P) by phosphoglucomutase. A special debranching enzyme is needed to remove the α(1→6) branches in branched glycogen and reshape the chain into a linear polymer. The G6P monomers produced have three possible fates: G6P can continue on the glycolysis pathway and be used as fuel. G6P can enter the pentose phosphate pathway via the enzyme glucose-6-phosphate dehydrogenase to produce NADPH and five-carbon sugars. In the liver and kidney, G6P can be dephosphorylated back to glucose by the enzyme glucose-6-phosphatase. This is the final step in the gluconeogenesis pathway. Clinical relevance Disorders of glycogen metabolism The most common disease in which glycogen metabolism becomes abnormal is diabetes, in which, because of abnormal amounts of insulin, liver glycogen can be abnormally accumulated or depleted. Restoration of normal glucose metabolism usually normalizes glycogen metabolism as well. In hypoglycemia caused by excessive insulin, liver glycogen levels are high, but the high insulin levels prevent the glycogenolysis necessary to maintain normal blood sugar levels. Glucagon is a common treatment for this type of hypoglycemia. Various inborn errors of carbohydrate metabolism are caused by deficiencies of enzymes or transport proteins necessary for glycogen synthesis or breakdown. These are collectively referred to as glycogen storage diseases. Glycogen depletion and endurance exercise Long-distance athletes, such as marathon runners, cross-country skiers, and cyclists, often experience glycogen depletion, where almost all of the athlete's glycogen stores are depleted after long periods of exertion without sufficient carbohydrate consumption. This phenomenon is referred to as "hitting the wall" in running and "bonking" in cycling. Glycogen depletion can be forestalled in three possible ways: First, during exercise, carbohydrates with the highest possible rate of conversion to blood glucose (high glycemic index) are ingested continuously. The best possible outcome of this strategy replaces about 35% of glucose consumed at heart rates above about 80% of maximum. Second, through endurance training adaptations and specialized regimens (e.g. fasting and low-intensity endurance training), the body can condition type I muscle fibers to improve both fuel-use efficiency and workload capacity, increasing the percentage of fatty acids used as fuel and sparing carbohydrate use from all sources.
Third, by consuming large quantities of carbohydrates after depleting glycogen stores as a result of exercise or diet, the body can increase storage capacity of intramuscular glycogen stores. This process is known as carbohydrate loading. In general, glycemic index of carbohydrate source does not matter since muscular insulin sensitivity is increased as a result of temporary glycogen depletion. When athletes ingest both carbohydrate and caffeine following exhaustive exercise, their glycogen stores tend to be replenished more rapidly; however, the minimum dose of caffeine at which there is a clinically significant effect on glycogen repletion has not been established. Nanomedicine Glycogen nanoparticles have been investigated as potential drug delivery systems. See also Bioenergetic systems Chitin Peptidoglycan References External links Exercise physiology Glycobiology Hepatology Nutrition Polysaccharides
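As a rough illustration of the storage figures quoted at the top of this entry (roughly 100–120 g of glycogen in the liver, roughly 400 g in skeletal muscle, and about 4 g of glucose in the blood), the sketch below totals the body's glucose reserve and converts it to an energy figure. The value of 4 kcal per gram is the standard textbook approximation for carbohydrate energy density, not a number taken from this article, and the class and variable names are illustrative only.

```java
/** Back-of-the-envelope glycogen energy estimate (illustrative values only). */
public class GlycogenEstimate {
    public static void main(String[] args) {
        double liverGrams  = 110;  // article quotes roughly 100-120 g for an adult liver
        double muscleGrams = 400;  // article quotes roughly 400 g for a 70 kg adult
        double bloodGrams  = 4;    // free glucose circulating in the blood
        double kcalPerGram = 4.0;  // standard approximation for carbohydrate energy density

        double totalGrams = liverGrams + muscleGrams + bloodGrams;
        double totalKcal  = totalGrams * kcalPerGram;

        System.out.printf("Total glucose reserve: %.0f g (~%.0f kcal)%n", totalGrams, totalKcal);
        // Prints roughly 514 g (~2056 kcal), i.e. on the order of one day's energy intake,
        // which is consistent with glycogen being a short-term reserve.
    }
}
```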
Glycogen
[ "Chemistry", "Biology" ]
3,462
[ "Biochemistry", "Glycobiology", "Carbohydrates", "Polysaccharides" ]
169,055
https://en.wikipedia.org/wiki/Apache%20POI
Apache POI, a project run by the Apache Software Foundation, and previously a sub-project of the Jakarta Project, provides pure Java libraries for reading and writing files in Microsoft Office formats, such as Word, PowerPoint and Excel. History and roadmap The name was originally an acronym for "Poor Obfuscation Implementation", referring humorously to the fact that the file formats seemed to be deliberately obfuscated, but poorly, since they were successfully reverse-engineered. This explanation – and those of the similar names for the various sub-projects – was removed from the official web pages in order to better market the tools to businesses that would not consider such humor appropriate. The original authors (Andrew C. Oliver and Marc Johnson) also noted the existence of the Hawaiian poi dish, made of mashed taro root, which had similarly derogatory connotations. Office Open XML support POI has supported the ISO/IEC 29500:2008 Office Open XML file formats since version 3.5. A significant contribution for OOXML support came from Sourcesense, an open source company that was commissioned by Microsoft to develop this contribution. This link spurred controversy, with some POI contributors questioning POI's OOXML patent protection with regard to Microsoft's Open Specification Promise patent license. Architecture The Apache POI project contains the following subcomponents (the meanings of the acronyms are taken from old documentation): POIFS (Poor Obfuscation Implementation File System) – This component reads and writes Microsoft's OLE 2 Compound document format. Since all Microsoft Office files are OLE 2 files, this component is the basic building block of all the other POI elements. POIFS can therefore be used to read a wider variety of files, beyond those whose explicit decoders are already written in POI. HSSF (Horrible SpreadSheet Format) – reads and writes Microsoft Excel (XLS) format files. It can read files written by Excel 97 onwards; this file format is known as the BIFF 8 format. As the Excel file format is complex and contains a number of tricky characteristics, some of the more advanced features cannot be read. XSSF (XML SpreadSheet Format) – reads and writes Office Open XML (XLSX) format files. Similar feature set to HSSF, but for Office Open XML files. HPSF (Horrible Property Set Format) – reads "Document Summary" information from Microsoft Office files. This is essentially the information that one can see by using the File|Properties menu item within an Office application. HWPF (Horrible Word Processor Format) – aims to read and write Microsoft Word 97 (DOC) format files. This component is in the initial stages of development. XWPF (XML Word Processor Format) – similar feature set to HWPF, but for Office Open XML files. HSLF (Horrible Slide Layout Format) – a pure Java implementation for Microsoft PowerPoint files. This provides the ability to read, create and edit presentations (though some things are easier to do than others). XSLF (Open Office XML Slideshow Format) HDGF (Horrible DiaGram Format) – an initial pure Java implementation for Microsoft Visio binary files. It provides the ability to read the low-level contents of the files. HPBF (Horrible PuBlisher Format) – a pure Java implementation for Microsoft Publisher files. HSMF (Horrible Stupid Mail Format) – a pure Java implementation for Microsoft Outlook MSG files. DDF (Dreadful Drawing Format) – a package for decoding the Microsoft Office Drawing format.
XDDF (XML Dreadful Drawing Format) The HSSF component is the most advanced feature of the library. Other components (HPSF, HWPF, and HSLF) are usable, but less full-featured. The POI library is also provided as a Ruby or ColdFusion extension. There are modules for Big Data platforms (e.g. Apache Hive/Apache Flink/Apache Spark), which provide certain functionality of Apache POI, such as the processing of Excel files. Version history See also Open Packaging Conventions Office Open XML software References External links POI Microsoft Office-related software Java platform Java (programming language) libraries Cross-platform free software
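To make the component descriptions above concrete, here is a minimal sketch of creating a spreadsheet with the XSSF component (the Office Open XML counterpart of HSSF). The workbook/sheet/row/cell calls are the library's documented usermodel API; the class name and output file name are illustrative assumptions, not anything specified by this entry.

```java
import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class PoiXssfDemo {
    public static void main(String[] args) throws Exception {
        // XSSFWorkbook targets .xlsx; swapping in HSSFWorkbook would target legacy .xls,
        // since both implement the common Workbook interface.
        try (Workbook wb = new XSSFWorkbook()) {
            Sheet sheet = wb.createSheet("demo");
            Row row = sheet.createRow(0);
            row.createCell(0).setCellValue("component");
            row.createCell(1).setCellValue("XSSF");
            try (FileOutputStream out = new FileOutputStream("demo.xlsx")) {
                wb.write(out);  // serialize the workbook to disk
            }
        }
    }
}
```

Writing against the shared Workbook/Sheet/Row interfaces, rather than the XSSF classes directly, is what lets the same code handle both the binary and the Office Open XML formats.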
Apache POI
[ "Technology" ]
849
[ "Computing platforms", "Java platform" ]
169,056
https://en.wikipedia.org/wiki/Martinus%20Beijerinck
Martinus Willem Beijerinck (16 March 1851 – 1 January 1931) was a Dutch microbiologist and botanist who was one of the founders of virology and environmental microbiology. He is credited with the co-discovery of viruses (1898), which he called "contagium vivum fluidum". Life Early life and education Born in Amsterdam, Beijerinck studied at the Technical School of Delft, where he was awarded a degree in biology in 1872. He obtained his Doctor of Science degree from the University of Leiden in 1877. At the time Delft, then a Polytechnic, did not have the right to confer doctorates, so Leiden did this for them. He became a teacher in microbiology at the Agricultural School in Wageningen (now Wageningen University) and later at the Polytechnische Hogeschool Delft (Delft Polytechnic, currently Delft University of Technology) (from 1895). He established the Delft School of Microbiology. His studies of agricultural and industrial microbiology yielded fundamental discoveries in the field of biology. His achievements have been perhaps unfairly overshadowed by those of his contemporaries Robert Koch and Louis Pasteur, because, unlike them, Beijerinck never studied human disease. In 1877, he wrote his first notable research paper, discussing plant galls. The paper later became the basis for his doctoral dissertation. In 1885 he became a member of the Royal Netherlands Academy of Arts and Sciences. Scientific career He is considered one of the founders of virology. In 1898, he published results of filtration experiments demonstrating that tobacco mosaic disease is caused by an infectious agent smaller than a bacterium. His results were in accordance with a similar observation made by Dmitri Ivanovsky in 1892. Like Ivanovsky before him, and like Adolf Mayer, his predecessor at Wageningen, Beijerinck could not culture the filterable infectious agent; however, he concluded that the agent can replicate and multiply in living plants. He named the new pathogen "virus" to indicate its non-bacterial nature. Beijerinck asserted that the virus was somewhat liquid in nature, calling it "contagium vivum fluidum" (contagious living fluid). It was not until the first crystals of the tobacco mosaic virus (TMV) were obtained by Wendell Stanley in 1935, the first electron micrographs of TMV were produced in 1939, and the first X-ray crystallographic analysis of TMV was performed in 1941 that the virus was shown to be particulate. Nitrogen fixation, the process by which diatomic nitrogen gas is converted to ammonium ions and becomes available to plants, was also investigated by Beijerinck. Bacteria perform nitrogen fixation while dwelling inside root nodules of certain plants (legumes). In addition to having discovered a biochemical reaction vital to soil fertility and agriculture, Beijerinck revealed this archetypical example of symbiosis between plants and bacteria. Beijerinck discovered the phenomenon of bacterial sulfate reduction, a form of anaerobic respiration.
He showed that bacteria could use sulfate as a terminal electron acceptor, instead of oxygen. This discovery has had an important impact on our current understanding of biogeochemical cycles. Spirillum desulfuricans, now known as Desulfovibrio desulfuricans, the first known sulfate-reducing bacterium, was isolated and described by Beijerinck. Beijerinck invented the enrichment culture, a fundamental method of studying microbes from the environment. He is often incorrectly credited with framing the microbial ecology idea that "everything is everywhere, but, the environment selects", which was in fact stated by Lourens Baas Becking. Personal life Beijerinck was a socially eccentric figure. He was verbally abusive to students, never married, and had few professional collaborations. He was also known for his ascetic lifestyle and his view that science and marriage were incompatible. His low popularity with his students and their parents periodically depressed him, as he very much loved spreading his enthusiasm for biology in the classroom. After his retirement from the Delft School of Microbiology in 1921, at age 70, he moved to Gorssel, where he lived for the rest of his life together with his two sisters. Recognition Beijerinckia (a genus of bacteria), Beijerinckiaceae (a family of Hyphomicrobiales), and the Beijerinck crater are named after him. The M.W. Beijerinck Virology Prize (M.W. Beijerinck Virologie Prijs) is awarded in his honor. See also History of virology Nitrification Clostridium beijerinckii Sergei Winogradsky References External links Beijerinck and the Delft School of Microbiology Viruses and the Prokaryotic World 1851 births 1931 deaths Corresponding Members of the Russian Academy of Sciences (1917–1925) Corresponding Members of the USSR Academy of Sciences Delft University of Technology alumni Academic staff of the Delft University of Technology Dutch microbiologists 19th-century Dutch botanists 20th-century Dutch botanists Dutch phytopathologists Environmental microbiology Foreign members of the Royal Society Honorary members of the USSR Academy of Sciences Leeuwenhoek Medal winners Leiden University alumni Members of the Royal Netherlands Academy of Arts and Sciences Nitrogen cycle Scientists from Amsterdam Dutch soil scientists Academic staff of Wageningen University and Research
Martinus Beijerinck
[ "Chemistry", "Environmental_science" ]
1,259
[ "Environmental microbiology", "Nitrogen cycle", "Metabolism" ]
169,146
https://en.wikipedia.org/wiki/Cold%20cathode
A cold cathode is a cathode that is not electrically heated by a filament. A cathode may be considered "cold" if it emits more electrons than can be supplied by thermionic emission alone. It is used in gas-discharge lamps, such as neon lamps, discharge tubes, and some types of vacuum tube. The other type of cathode is a hot cathode, which is heated by electric current passing through a filament. A cold cathode does not necessarily operate at a low temperature: it is often heated to its operating temperature by other methods, such as the current passing from the cathode into the gas. Cold-cathode devices A cold-cathode vacuum tube does not rely on external heating of an electrode to provide thermionic emission of electrons. Early cold-cathode devices included the Geissler tube and Plucker tube, and early cathode-ray tubes. Study of the phenomena in these devices led to the discovery of the electron. Neon lamps are used both to produce light as indicators and for special-purpose illumination, and also as circuit elements displaying negative resistance. Addition of a trigger electrode to a device allowed the glow discharge to be initiated by an external control circuit; Bell Laboratories developed a "trigger tube" cold-cathode device in 1936. Many types of cold-cathode switching tube were developed, including various types of thyratron, the krytron, cold-cathode displays (Nixie tube) and others. Voltage regulator tubes rely on the relatively constant voltage of a glow discharge over a range of current and were used to stabilize power-supply voltages in tube-based instruments. A Dekatron is a cold-cathode tube with multiple electrodes that is used for counting. Each time a pulse is applied to a control electrode, a glow discharge moves to a step electrode; by providing ten electrodes in each tube and cascading the tubes, a counter system can be developed and the count observed by the position of the glow discharges. Counter tubes were used widely before development of integrated circuit counter devices. The flash tube is a cold-cathode device filled with xenon gas, used to produce an intense short pulse of light for photography or to act as a stroboscope to examine the motion of moving parts. Lamps Cold-cathode lamps include cold-cathode fluorescent lamps (CCFLs) and neon lamps. Neon lamps primarily rely on excitation of gas molecules to emit light; CCFLs use a discharge in mercury vapor to develop ultraviolet light, which in turn causes a fluorescent coating on the inside of the lamp to emit visible light. Cold-cathode fluorescent lamps were used for backlighting of LCDs, for example computer monitors and television screens. In the lighting industry, “cold cathode” historically refers to luminous tubing larger than 20 mm in diameter and operating on a current of 120 to 240 milliamperes. This larger-diameter tubing is often used for interior alcove and general lighting. The term "neon lamp" refers to tubing that is smaller than 15 mm in diameter and typically operates at approximately 40 milliamperes. These lamps are commonly used for neon signs. Details The cathode is the negative electrode. Any gas-discharge lamp has a positive (anode) and a negative electrode. Both electrodes alternate between acting as an anode and a cathode when these devices run with alternating current. A cold cathode is distinguished from a hot cathode that is heated to induce thermionic emission of electrons. Discharge tubes with hot cathodes have an envelope filled with low-pressure gas and containing two electrodes. 
Hot cathode devices include common vacuum tubes, fluorescent lamps, high-pressure discharge lamps and vacuum fluorescent displays. The surface of cold cathodes can emit secondary electrons at a ratio greater than unity (breakdown). An electron that leaves the cathode will collide with neutral gas molecules. The collision may just excite the molecule, but sometimes it will knock an electron free to create a positive ion. The original electron and the freed electron continue toward the anode and may create more positive ions (see Townsend avalanche). The result is that for each electron that leaves the cathode, several positive ions are generated that eventually crash onto the cathode. Some of these impinging positive ions may generate a secondary electron. The discharge is self-sustaining when, for each electron that leaves the cathode, enough positive ions hit the cathode to free, on average, another electron. External circuitry limits the discharge current. Cold-cathode discharge lamps use higher voltages than hot-cathode ones. The resulting strong electric field near the cathode accelerates ions to a sufficient velocity to create free electrons from the cathode material. Another mechanism to generate free electrons from a cold metallic surface is field electron emission. It is used in some x-ray tubes, the field-electron microscope (FEM), and field-emission displays (FEDs). Cold cathodes sometimes have a rare-earth coating to enhance electron emission. Some types contain a source of beta radiation to start ionization of the gas that fills the tube. In some tubes, the glow discharge around the cathode is minimized; instead there is a so-called positive column filling the tube. Examples are the neon lamp and Nixie tubes. Nixie tubes, too, are cold-cathode neon displays; they are in-line, but not in-plane, display devices. Cold-cathode devices typically use a complex high-voltage power supply with some mechanism for limiting current. Although creating the initial space charge and the first arc of current through the tube may require a very high voltage, once the tube begins to heat up, the electrical resistance drops, thus increasing the electric current through the lamp. To offset this effect and maintain normal operation, the supply voltage is gradually lowered. In the case of tubes with an ionizing gas, the gas can become a very hot plasma, and electrical resistance is greatly reduced. If operated from a simple power supply without current limiting, this reduction in resistance would lead to damage to the power supply and overheating of the tube electrodes. Applications Cold cathodes are used in cold-cathode rectifiers, such as the crossatron and mercury-arc valves, and cold-cathode amplifiers, such as in automatic message accounting and other pseudospark switching applications. Other examples include the thyratron, krytron, sprytron, and ignitron tubes. A common cold-cathode application is in neon signs and other locations where the ambient temperature is likely to drop well below freezing. The Clock Tower, Palace of Westminster (Big Ben) uses cold-cathode lighting behind the clock faces, where continual striking and failure to strike in cold weather would be undesirable. Large cold-cathode fluorescent lamps (CCFLs) have been produced in the past and are still used today when shaped, long-life linear light sources are required. For many years, miniature CCFLs were extensively used as backlights for computer and television liquid-crystal displays.
CCFL lifespans in LCD televisions vary depending on transient voltage surges and on temperature levels in the usage environment. Due to its efficiency, CCFL technology has expanded into room lighting. Costs are similar to those of traditional fluorescent lighting, but the technology has several advantages: it has a long life, bulbs turn on instantly to full output, and they are also dimmable. Effects of internal heating In systems using alternating current but without separate anode structures, the electrodes alternate as anodes and cathodes, and the impinging electrons can cause substantial localized heating, often to red heat. The electrode may take advantage of this heating to facilitate the thermionic emission of electrons when it is acting as a cathode. (Instant-start fluorescent lamps employ this aspect; they start as cold-cathode devices, but soon localized heating of the fine tungsten-wire cathodes causes them to operate in the same mode as hot-cathode lamps.) This aspect is problematic in the case of backlights used for LCD TV displays. New energy-efficiency regulations being proposed in many countries will require variable backlighting; variable backlighting also improves the perceived contrast range, which is desirable for LCD TV sets. However, CCFLs are strictly limited in the degree to which they can be dimmed, both because a lower plasma current will lower the temperature of the cathode, causing erratic operation, and because running the cathode at too low a temperature drastically shortens the life of the lamps. Much research is being directed to this problem, but high-end manufacturers are now turning to high-efficiency white LEDs as a better solution. References and notes Notes Citations Electrodes Gas discharge lamps Types of lamp Vacuum Vacuum tubes
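The self-sustainment condition stated qualitatively in the Details section above has a standard quantitative form, the Townsend breakdown criterion; the notation below is conventional discharge physics, not something given in this entry. With α the first Townsend coefficient (ionizing collisions per electron per unit length), γ the secondary-emission coefficient (electrons released from the cathode per incident positive ion), and d the electrode spacing, the discharge is self-sustaining when

```latex
\gamma \left( e^{\alpha d} - 1 \right) \geq 1 .
```

Each electron crossing the gap produces e^{αd} − 1 positive ions; at equality, those ions on average free exactly one replacement electron from the cathode, which is precisely the self-sustainment condition described above.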
Cold cathode
[ "Physics", "Chemistry" ]
1,852
[ "Vacuum tubes", "Electrodes", "Vacuum", "Electrochemistry", "Matter" ]
169,169
https://en.wikipedia.org/wiki/Cryopump
A cryopump or "cryogenic pump" is a vacuum pump that traps gases and vapours by condensing them on a cold surface; it is, however, only effective on some gases. The effectiveness depends on the freezing and boiling points of the gas relative to the cryopump's temperature. Cryopumps are sometimes used to block particular contaminants, for example in front of a diffusion pump to trap backstreaming oil, or in front of a McLeod gauge to keep out water. In this function, they are called a cryotrap, waterpump or cold trap, even though the physical mechanism is the same as for a cryopump. Cryotrapping can also refer to a somewhat different effect, where molecules increase their residence time on a cold surface without actually freezing (supercooling). There is a delay between the molecule impinging on the surface and rebounding from it, as kinetic energy is lost while the molecules slow down. For example, hydrogen does not condense at 8 kelvins, but it can be cryotrapped. This effectively traps molecules for an extended period and thereby removes them from the vacuum environment, just like cryopumping. History Early experiments into the cryotrapping of gases in activated charcoal were conducted as far back as 1874. The first cryopumps mainly used liquid helium to cool the pump, either in a large liquid helium reservoir or by continuous flow into the cryopump. However, over time most cryopumps were redesigned to use gaseous helium, enabled by the invention of better cryocoolers. The key refrigeration technology was discovered in the 1950s by two employees of the Massachusetts-based company Arthur D. Little Inc., William E. Gifford and Howard O. McMahon. This technology came to be known as the Gifford-McMahon cryocooler. In the 1970s, the Gifford-McMahon cryocooler was used to make a vacuum pump by Helix Technology Corporation and its subsidiary company Cryogenic Technology Inc. In 1976, cryopumps began to be used in IBM's manufacturing of integrated circuits. The use of cryopumps became common in semiconductor manufacturing worldwide, with expansions such as a cryogenics company founded jointly by Helix and ULVAC in 1981. Operation Cryopumps are commonly cooled by compressed helium, though they may also be cooled by dry ice or liquid nitrogen, and stand-alone versions may include a built-in cryocooler. Baffles are often attached to the cold head to expand the surface area available for condensation, but these also increase the radiative heat uptake of the cryopump. Over time, the surface eventually saturates with condensate and thus the pumping speed gradually drops to zero. The pump will hold the trapped gases as long as it remains cold, but it will not condense fresh gases from leaks or backstreaming until it is regenerated. Saturation happens very quickly in low vacuums, so cryopumps are usually only used in high or ultrahigh vacuum systems. The cryopump provides fast, clean pumping of all gases in the 10−3 to 10−9 Torr range. The cryopump operates on the principle that gases can be condensed and held at extremely low vapor pressures, achieving high speeds and throughputs. The cold head consists of a two-stage cold head cylinder (part of the vacuum vessel) and a drive unit displacer assembly. These together produce closed-cycle refrigeration at temperatures that typically range from 60 to 80 K for the first-stage cold station to 10 to 20 K for the second-stage cold station. Some cryopumps have multiple stages at various low temperatures, with the outer stages shielding the coldest inner stages.
The outer stages condense high boiling point gases such as water and oil, thus saving the surface area and refrigeration capacity of the inner stages for lower boiling point gases such as nitrogen. As cooling temperatures decrease when using dry ice, liquid nitrogen, and then compressed helium, lower molecular-weight gases can be trapped. Trapping nitrogen, helium, and hydrogen requires extremely low temperatures (~10 K) and a large surface area, as described below. Even at this temperature, the lighter gases helium and hydrogen have very low trapping efficiency and are the predominant molecules in ultra-high vacuum systems. Cryopumps are often combined with sorption pumps by coating the cold head with highly adsorbing materials such as activated charcoal or a zeolite. As the sorbent saturates, the effectiveness of a sorption pump decreases, but it can be recharged by heating the zeolite material (preferably under conditions of low pressure) to outgas it. The breakdown temperature of the zeolite material's porous structure may limit the maximum temperature to which it may be heated for regeneration. Sorption pumps are a type of cryopump often used as roughing pumps to reduce pressures from the atmospheric range to on the order of 0.1 Pa (10−3 Torr), while lower pressures are achieved using a finishing pump (see vacuum). Regeneration Regeneration of a cryopump is the process of evaporating the trapped gases. During a regeneration cycle, the cryopump is warmed to room temperature or higher, allowing trapped gases to change from a solid state to a gaseous state and thereby be released from the cryopump through a pressure relief valve into the atmosphere. Most production equipment utilizing a cryopump has a means to isolate the cryopump from the vacuum chamber, so regeneration takes place without exposing the vacuum system to released gases such as water vapor. Water vapor is the hardest natural substance to remove from vacuum chamber walls upon exposure to the atmosphere, due to monolayer formation and hydrogen bonding. Adding heat to the dry nitrogen purge gas will speed the warm-up and reduce the regeneration time. When regeneration is complete, the cryopump will be roughed to 50 μm (50 milliTorr, or about 6.7 Pa), isolated, and the rate-of-rise (ROR) will be monitored to test for complete regeneration. If the ROR exceeds 10 μm/min, the cryopump requires additional purge time. References Vacuum pumps Gases Gas technologies
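As a small illustration of the rate-of-rise test described above, the sketch below flags an incomplete regeneration using the 10 μm/min threshold quoted in the text. The function names, readings, and measurement interval are hypothetical, not values from any real pump controller.

```java
/** Rate-of-rise (ROR) check after cryopump regeneration (illustrative only). */
public class RorCheck {
    /** Pressure rise rate in microns (milliTorr) per minute between two readings. */
    static double rateOfRise(double startMicron, double endMicron, double minutes) {
        return (endMicron - startMicron) / minutes;
    }

    public static void main(String[] args) {
        // Pump roughed to 50 micron, isolated, then the gauge read again 5 minutes later.
        double ror = rateOfRise(50.0, 95.0, 5.0);  // (95 - 50) / 5 = 9 micron/min
        System.out.printf("ROR = %.1f micron/min -> %s%n", ror,
                ror > 10.0 ? "additional purge time required" : "regeneration complete");
    }
}
```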
Cryopump
[ "Physics", "Chemistry", "Engineering" ]
1,300
[ "Matter", "Vacuum pumps", "Vacuum systems", "Phases of matter", "Vacuum", "Statistical mechanics", "Gases" ]
169,176
https://en.wikipedia.org/wiki/Clearance%20rate
In criminal justice, the clearance rate is calculated by dividing the number of crimes that are "cleared" (typically by a criminal charge being laid, or by a conviction) by the total number of crimes recorded. Various groups use clearance rates as a measure of crimes solved by the police. Clearance rates can be problematic for measuring the performance of police services and for comparing various police services, because each police force may employ a different way of measuring clearance rates. For example, each police force may have a different method of recording when a "crime" has occurred and different criteria for determining when a crime has been "cleared". A given police force may appear to have a much better clearance rate solely because of its calculation methodology. In system conflict theory, it is argued that clearance rates cause the police to focus on appearing to solve crimes (generating high clearance rate scores) rather than actually solving crimes. A further focus on clearance rates may result in effort being expended to attribute crimes (correctly or incorrectly) to a criminal, which may not result in retribution, compensation, rehabilitation or deterrence. Homicide clearance rate The homicide clearance rate in the USA declined from 93% in 1962 to 54% in 2020. Some U.S. police forces have been criticized for overuse of "exceptional clearance", which is intended to classify as "cleared" cases where probable cause to arrest a suspect exists, but police are unable to do so for reasons outside their control (such as death or incarceration in a foreign country). While homicide clearance rates differ between countries, at around 98% in Finland and around 24% in Trinidad and Tobago, a direct comparison is limited by differing definitions and criminal justice procedures. See also Conviction rate Crime harm index Crime statistics Criminal investigation Dark figure of crime Fear of crime List of unsolved deaths Under-reporting References External links - "The Post has mapped more than 52,000 homicides in major American cities over the past decade and found that across the country, there are areas where murder is common but arrests are rare." Criminology Law enforcement Crime statistics Ratios Social statistics indicators
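For illustration, the definition above reduces to a single division; the sketch below computes it for made-up figures (the numbers are hypothetical, not real statistics, and the class and method names are illustrative).

```java
/** Clearance rate = crimes cleared / crimes recorded (illustrative figures only). */
public class ClearanceRate {
    static double clearanceRate(int cleared, int recorded) {
        if (recorded == 0) throw new IllegalArgumentException("no crimes recorded");
        return (double) cleared / recorded;
    }

    public static void main(String[] args) {
        // Hypothetical jurisdiction: 1,200 homicides recorded in a year, 648 cleared.
        double rate = clearanceRate(648, 1200);
        System.out.printf("Clearance rate: %.0f%%%n", rate * 100);  // prints 54%
    }
}
```

Note that the result says nothing about whether the clearances were sound; as the system-conflict critique above points out, the same arithmetic rewards attributing crimes incorrectly just as much as solving them.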
Clearance rate
[ "Mathematics" ]
420
[ "Arithmetic", "Ratios" ]
169,188
https://en.wikipedia.org/wiki/Color%20confinement
In quantum chromodynamics (QCD), color confinement, often simply called confinement, is the phenomenon that color-charged particles (such as quarks and gluons) cannot be isolated, and therefore cannot be directly observed in normal conditions below the Hagedorn temperature of approximately 2 terakelvin (corresponding to energies of approximately 130–140 MeV per particle). Quarks and gluons must clump together to form hadrons. The two main types of hadron are the mesons (one quark, one antiquark) and the baryons (three quarks). In addition, colorless glueballs formed only of gluons are also consistent with confinement, though difficult to identify experimentally. Quarks and gluons cannot be separated from their parent hadron without producing new hadrons. Origin There is not yet an analytic proof of color confinement in any non-abelian gauge theory. The phenomenon can be understood qualitatively by noting that the force-carrying gluons of QCD have color charge, unlike the photons of quantum electrodynamics (QED). Whereas the electric field between electrically charged particles decreases rapidly as those particles are separated, the gluon field between a pair of color charges forms a narrow flux tube (or string) between them. Because of this behavior of the gluon field, the strong force between the particles is constant regardless of their separation. Therefore, as two color charges are separated, at some point it becomes energetically favorable for a new quark–antiquark pair to appear, rather than extending the tube further. As a result of this, when quarks are produced in particle accelerators, instead of seeing the individual quarks in detectors, scientists see "jets" of many color-neutral particles (mesons and baryons), clustered together. This process is called hadronization, fragmentation, or string breaking. The confining phase is usually defined by the behavior of the action of the Wilson loop, which is simply the path in spacetime traced out by a quark–antiquark pair created at one point and annihilated at another point. In a non-confining theory, the action of such a loop is proportional to its perimeter. However, in a confining theory, the action of the loop is instead proportional to its area. Since the area is proportional to the separation of the quark–antiquark pair, free quarks are suppressed. Mesons are allowed in such a picture, since a loop containing another loop with the opposite orientation has only a small area between the two loops. At non-zero temperatures, the order parameters for confinement are thermal versions of Wilson loops known as Polyakov loops. Confinement scale The confinement scale or QCD scale is the scale at which the perturbatively defined strong coupling constant diverges. This is known as the Landau pole. The definition and value of the confinement scale therefore depend on the renormalization scheme used; for example, a world-average value of the scale in the MS-bar scheme, with the running of the strong coupling computed at 4-loop order, is quoted for the 3-flavour case. When the renormalization group equation is solved exactly, the scale is not defined at all. It is therefore customary to quote the value of the strong coupling constant at a particular reference scale instead. It is sometimes believed that the sole origin of confinement is the very large value of the strong coupling near the Landau pole. This is sometimes referred to as infrared slavery (a term chosen to contrast with ultraviolet freedom).
It is, however, incorrect, since in QCD the Landau pole is unphysical; this can be seen from the fact that its position at the confinement scale depends largely on the chosen renormalization scheme, i.e., on a convention. Most evidence points to a moderately large coupling, typically of value 1–3 depending on the choice of renormalization scheme. In contrast to the simple but erroneous mechanism of infrared slavery, a large coupling is but one ingredient for color confinement, the other being that gluons are color-charged and can therefore collapse into gluon tubes. Models exhibiting confinement In addition to QCD in four spacetime dimensions, the two-dimensional Schwinger model also exhibits confinement. Compact Abelian gauge theories also exhibit confinement in 2 and 3 spacetime dimensions. Confinement has been found in elementary excitations of magnetic systems called spinons. If the electroweak symmetry breaking scale were lowered, the unbroken SU(2) interaction would eventually become confining. Alternative models, where SU(2) becomes confining above that scale, are quantitatively similar to the Standard Model at lower energies but dramatically different above symmetry breaking. Models of fully screened quarks Besides the quark confinement idea, there is a possibility that the color charge of quarks gets fully screened by the gluonic color surrounding the quark. Exact solutions of SU(3) classical Yang–Mills theory which provide full screening (by gluon fields) of the color charge of a quark have been found. However, such classical solutions do not take into account non-trivial properties of the QCD vacuum. Therefore, the significance of such full gluonic screening solutions for a separated quark is not clear. See also Lund string model Gluon field strength tensor Asymptotic freedom Beta function (physics) Yang–Mills existence and mass gap Lattice gauge theory Dual superconductor model Center vortex References Gluons Quantum chromodynamics Quark matter Unsolved problems in physics
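The Wilson-loop criterion sketched in the Origin section is conventionally summarized as follows; this is the standard textbook formulation rather than an equation appearing in this entry. With σ the string tension, A(C) the minimal area spanned by the loop C, and P(C) its perimeter,

```latex
\langle W(C) \rangle \sim e^{-\sigma\, A(C)} \quad \text{(area law: confining)}, \qquad
\langle W(C) \rangle \sim e^{-\mu\, P(C)} \quad \text{(perimeter law: non-confining)}.
```

For a rectangular loop of spatial extent r and temporal extent T, the area law A = rT implies a static quark–antiquark potential V(r) ≈ σr at large separation, so pulling the pair apart costs energy without bound until string breaking produces a new quark–antiquark pair, exactly as described above.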
Color confinement
[ "Physics" ]
1,149
[ "Astrophysics", "Unsolved problems in physics", "Quark matter", "Nuclear physics" ]
169,191
https://en.wikipedia.org/wiki/Shape
A shape is a graphical representation of an object's form or its external boundary, outline, or external surface. It is distinct from other object properties, such as color, texture, or material type. In geometry, shape excludes information about the object's position, size, orientation and chirality. A figure is a representation including both shape and size (as in, e.g., figure of the Earth). A plane shape or plane figure is constrained to lie on a plane, in contrast to solid 3D shapes. A two-dimensional shape or two-dimensional figure (also: 2D shape or 2D figure) may lie on a more general curved surface (a two-dimensional space). Classification of simple shapes Some simple shapes can be put into broad categories. For instance, polygons are classified according to their number of edges as triangles, quadrilaterals, pentagons, etc. Each of these is divided into smaller categories; triangles can be equilateral, isosceles, obtuse, acute, scalene, etc. while quadrilaterals can be rectangles, rhombi, trapezoids, squares, etc. Other common shapes are points, lines, planes, and conic sections such as ellipses, circles, and parabolas. Among the most common 3-dimensional shapes are polyhedra, which are shapes with flat faces; ellipsoids, which are egg-shaped or sphere-shaped objects; cylinders; and cones. If an object falls into one of these categories exactly or even approximately, we can use it to describe the shape of the object. Thus, we say that the shape of a manhole cover is a disk, because it is approximately the same geometric object as an actual geometric disk. In geometry A geometric shape consists of the geometric information which remains when location, scale, orientation and reflection are removed from the description of a geometric object. That is, the result of moving a shape around, enlarging it, rotating it, or reflecting it in a mirror is the same shape as the original, and not a distinct shape. Many two-dimensional geometric shapes can be defined by a set of points or vertices and lines connecting the points in a closed chain, as well as the resulting interior points. Such shapes are called polygons and include triangles, squares, and pentagons. Other shapes may be bounded by curves such as the circle or the ellipse. Many three-dimensional geometric shapes can be defined by a set of vertices, lines connecting the vertices, and two-dimensional faces enclosed by those lines, as well as the resulting interior points. Such shapes are called polyhedrons and include cubes as well as pyramids such as tetrahedrons. Other three-dimensional shapes may be bounded by curved surfaces, such as the ellipsoid and the sphere. A shape is said to be convex if all of the points on a line segment between any two of its points are also part of the shape. Properties There are multiple ways to compare the shapes of two objects: Congruence: Two objects are congruent if one can be transformed into the other by a sequence of rotations, translations, and/or reflections. Similarity: Two objects are similar if one can be transformed into the other by a uniform scaling, together with a sequence of rotations, translations, and/or reflections. Isotopy: Two objects are isotopic if one can be transformed into the other by a sequence of deformations that do not tear the object or put holes in it. Sometimes, two similar or congruent objects may be regarded as having a different shape if a reflection is required to transform one into the other. 
For instance, the letters "b" and "d" are a reflection of each other, and hence they are congruent and similar, but in some contexts they are not regarded as having the same shape. Sometimes, only the outline or external boundary of the object is considered to determine its shape. For instance, a hollow sphere may be considered to have the same shape as a solid sphere. Procrustes analysis is used in many sciences to determine whether or not two objects have the same shape, or to measure the difference between two shapes. In advanced mathematics, quasi-isometry can be used as a criterion to state that two shapes are approximately the same. Simple shapes can often be classified into basic geometric objects such as a line, a curve, a plane, a plane figure (e.g. square or circle), or a solid figure (e.g. cube or sphere). However, most shapes occurring in the physical world are complex. Some, such as plant structures and coastlines, may be so complicated as to defy traditional mathematical description – in which case they may be analyzed by differential geometry, or as fractals. Some common shapes include: Circle, Square, Triangle, Rectangle, Oval, Star (polygon), Rhombus, Semicircle. Regular polygons starting at pentagon follow the naming convention of the Greek derived prefix with '-gon' suffix: Pentagon, Hexagon, Heptagon, Octagon, Nonagon, Decagon... See polygon Equivalence of shapes In geometry, two subsets of a Euclidean space have the same shape if one can be transformed to the other by a combination of translations, rotations (together also called rigid transformations), and uniform scalings. In other words, the shape of a set of points is all the geometrical information that is invariant to translations, rotations, and size changes. Having the same shape is an equivalence relation, and accordingly a precise mathematical definition of the notion of shape can be given as being an equivalence class of subsets of a Euclidean space having the same shape. Mathematician and statistician David George Kendall writes: In this paper ‘shape’ is used in the vulgar sense, and means what one would normally expect it to mean. [...] We here define ‘shape’ informally as ‘all the geometrical information that remains when location, scale and rotational effects are filtered out from an object.’ Shapes of physical objects are equal if the subsets of space these objects occupy satisfy the definition above. In particular, the shape does not depend on the size and placement in space of the object. For instance, a "d" and a "p" have the same shape, as they can be perfectly superimposed if the "d" is translated to the right by a given distance, rotated upside down and magnified by a given factor (see Procrustes superimposition for details). However, a mirror image could be called a different shape. For instance, a "b" and a "p" have a different shape, at least when they are constrained to move within a two-dimensional space like the page on which they are written. Even though they have the same size, there's no way to perfectly superimpose them by translating and rotating them along the page. Similarly, within a three-dimensional space, a right hand and a left hand have a different shape, even if they are the mirror images of each other. Shapes may change if the object is scaled non-uniformly. For example, a sphere becomes an ellipsoid when scaled differently in the vertical and horizontal directions. In other words, preserving axes of symmetry (if they exist) is important for preserving shapes. 
Also, shape is determined by only the outer boundary of an object. Congruence and similarity Objects that can be transformed into each other by rigid transformations and mirroring (but not scaling) are congruent. An object is therefore congruent to its mirror image (even if it is not symmetric), but not to a scaled version. Two congruent objects always have either the same shape or mirror image shapes, and have the same size. Objects that have the same shape or mirror image shapes are called geometrically similar, whether or not they have the same size. Thus, objects that can be transformed into each other by rigid transformations, mirroring, and uniform scaling are similar. Similarity is preserved when one of the objects is uniformly scaled, while congruence is not. Thus, congruent objects are always geometrically similar, but similar objects may not be congruent, as they may have different size. Homeomorphism A more flexible definition of shape takes into consideration the fact that realistic shapes are often deformable, e.g. a person in different postures, a tree bending in the wind or a hand with different finger positions. One way of modeling non-rigid movements is by homeomorphisms. Roughly speaking, a homeomorphism is a continuous stretching and bending of an object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a donut are not. An often-repeated mathematical joke is that topologists cannot tell their coffee cup from their donut, since a sufficiently pliable donut could be reshaped to the form of a coffee cup by creating a dimple and progressively enlarging it, while preserving the donut hole in a cup's handle. A described shape has an explicit outline: plotting points on a coordinate graph and joining them with lines can trace out a shape, but an arbitrary set of plotted points does not by itself make a shape; a shape has an outline and a boundary, not merely scattered points on the page. Shape analysis The above-mentioned mathematical definitions of rigid and non-rigid shape have arisen in the field of statistical shape analysis. In particular, Procrustes analysis is a technique used for comparing shapes of similar objects (e.g. bones of different animals), or measuring the deformation of a deformable object. Other methods are designed to work with non-rigid (bendable) objects, e.g. for posture independent shape retrieval (see for example Spectral shape analysis). Similarity classes All similar triangles have the same shape. These shapes can be classified using complex numbers u, v, w for the vertices, in a method advanced by J.A. Lester and Rafael Artzy. For example, an equilateral triangle can be expressed by the complex numbers 0, 1, (1 + i√3)/2 representing its vertices. Lester and Artzy call the ratio S(u, v, w) = (u − w)/(u − v) the shape of triangle (u, v, w). Then the shape of the equilateral triangle is (0 − (1 + i√3)/2)/(0 − 1) = (1 + i√3)/2 = cos 60° + i sin 60° = exp(iπ/3). For any affine transformation of the complex plane, z ↦ az + b with a ≠ 0, a triangle is transformed but does not change its shape. Hence shape is an invariant of affine geometry. The shape depends on the order of the arguments of the function S, but permutations lead to related values. For instance, S(v, w, u) = 1/(1 − S(u, v, w)). Also S(u, w, v) = 1/S(u, v, w). Combining these permutations gives S(w, u, v) = 1 − 1/S(u, v, w). Furthermore, S(v, u, w) = 1 − S(u, v, w). These relations are "conversion rules" for the shape of a triangle. The shape of a quadrilateral is associated with two complex numbers p, q. If the quadrilateral has vertices u, v, w, x, then p = S(u, v, w) and q = S(v, w, x).
Artzy proves propositions about quadrilateral shapes; stated in terms of the definitions above: If (p − 1)(1 − q) = 1, then the quadrilateral is a parallelogram. If a parallelogram has |p − 1| = 1, then it is a rhombus. When p = 1 + i and q = 1 + i, then the quadrilateral is a square. If (p − 1)(q − 1) is real, then the quadrilateral is a trapezoid. A polygon (z1, z2, ..., zn) has a shape defined by the n − 2 complex numbers S(zj, zj+1, zj+2), j = 1, ..., n − 2. The polygon bounds a convex set when all these shape components have imaginary components of the same sign. Human perception of shapes Human vision relies on a wide range of shape representations. Some psychologists have theorized that humans mentally break down images into simple geometric shapes (e.g., cones and spheres) called geons. Meanwhile, others have suggested shapes are decomposed into features or dimensions that describe the way shapes tend to vary, like their segmentability, compactness and spikiness. When comparing shape similarity, however, at least 22 independent dimensions are needed to account for the way natural shapes vary. There is also clear evidence that shapes guide human attention. See also Area Glossary of shapes with metaphorical names Lists of shapes Shape factor Size Skew polygon Solid geometry Region (mathematics) References External links Elementary geometry Morphology Structure
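A small sketch of the shape function S defined in the Similarity classes section above, using a minimal complex-number type (Java has none built in; all names here are illustrative, and a Java version with records, 16 or later, is assumed). It checks that a triangle and a scaled, rotated, translated copy of it have the same shape value, the affine invariance stated above.

```java
/** Lester-Artzy triangle shape S(u,v,w) = (u-w)/(u-v), sketched with a tiny complex type. */
public class TriangleShape {
    record Complex(double re, double im) {
        Complex plus(Complex o)  { return new Complex(re + o.re, im + o.im); }
        Complex minus(Complex o) { return new Complex(re - o.re, im - o.im); }
        Complex times(Complex o) { return new Complex(re * o.re - im * o.im, re * o.im + im * o.re); }
        Complex div(Complex o) {
            double d = o.re * o.re + o.im * o.im;
            return new Complex((re * o.re + im * o.im) / d, (im * o.re - re * o.im) / d);
        }
    }

    static Complex shape(Complex u, Complex v, Complex w) {
        return u.minus(w).div(u.minus(v));  // S(u,v,w) = (u-w)/(u-v)
    }

    public static void main(String[] args) {
        Complex u = new Complex(0, 0), v = new Complex(1, 0),
                w = new Complex(0.5, Math.sqrt(3) / 2);        // equilateral triangle
        Complex a = new Complex(2, 1), b = new Complex(3, -2); // affine map z -> a*z + b
        // Both calls print Complex[re=0.5, im=0.866...], i.e. exp(i*pi/3),
        // because the factor a cancels in the ratio and b cancels in the differences.
        System.out.println(shape(u, v, w));
        System.out.println(shape(u.times(a).plus(b), v.times(a).plus(b), w.times(a).plus(b)));
    }
}
```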
Shape
[ "Mathematics" ]
2,513
[ "Geometric shapes", "Mathematical objects", "Elementary mathematics", "Elementary geometry", "Geometric objects" ]
169,193
https://en.wikipedia.org/wiki/Twig
A twig is a thin, often short, branch of a tree or bush. The buds on the twig are an important diagnostic characteristic, as are the abscission scars where the leaves have fallen away. The color, texture, and patterning of the twig bark are also important, in addition to the thickness and nature of any pith of the twig. There are two types of twigs: vegetative twigs and fruiting spurs. Fruiting spurs are specialized twigs that generally branch off the sides of branches and are stubby and slow-growing, with many annular ring markings from seasons past. The age and rate of growth of a twig can be determined by counting the winter terminal bud scale scars, or annular ring markings, down the length of the twig. Uses Twigs can be useful in starting a fire. They can be used as kindling wood, bridging the gap between highly flammable tinder (dry grass and leaves) and firewood, because their small diameter allows them to dry out and ignite more readily than larger pieces of wood. Twigs are a feature of tool use by non-humans. For example, chimpanzees have been observed using twigs to go "fishing" for termites, and elephants have been reported using twigs to scratch parts of their ears and mouths which could not be reached by rubbing against a tree. References Plant morphology Building materials
Twig
[ "Physics", "Engineering", "Biology" ]
286
[ "Plants", "Building engineering", "Plant morphology", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
169,197
https://en.wikipedia.org/wiki/Acrux
Acrux is the brightest star in the southern constellation of Crux. It has the Bayer designation α Crucis, which is Latinised to Alpha Crucis and abbreviated Alpha Cru or α Cru. With a combined visual magnitude of +0.76, it is the 13th-brightest star in the night sky. It is the most southerly star of the asterism known as the Southern Cross and is the southernmost first-magnitude star, 2.3 degrees more southerly than Alpha Centauri. This system is located at a distance of 321 light-years from the Sun. To the naked eye Acrux appears as a single star, but it is actually a multiple star system containing six components. Through optical telescopes, Acrux appears as a triple star, whose two brightest components are visually separated by about 4 arcseconds and are known as Acrux A and Acrux B, α1 Crucis and α2 Crucis, or α Crucis A and α Crucis B. Both components are B-type stars, and are many times more massive and luminous than the Sun. This system was the second ever to be recognized as a binary, in 1685 by a Jesuit priest. α1 Crucis is itself a spectroscopic binary with components designated α Crucis Aa (officially named Acrux, historically the name of the entire system) and α Crucis Ab. Its two component stars orbit every 76 days at a separation of about 1 astronomical unit (AU). HR 4729, also known as Acrux C, is a more distant companion, making the system appear as a triple star through small telescopes. C is also a spectroscopic binary, which brings the total number of stars in the system to at least five. Nomenclature α Crucis (Latinised to Alpha Crucis) is the system's Bayer designation; α1 and α2 Crucis are those of its two main component stars. The designations of these two constituents as Acrux A and Acrux B, and those of A's components—Acrux Aa and Acrux Ab—derive from the convention used by the Washington Multiplicity Catalog (WMC) for multiple star systems, and adopted by the International Astronomical Union (IAU). The historical name Acrux for α1 Crucis is an "Americanism" coined in the 19th century, but entering common use only by the mid 20th century. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN states that in the case of multiple stars the name should be understood to be attributed to the brightest component by visual brightness. The WGSN approved the name Acrux for the star Acrux Aa on 20 July 2016 and it is now so entered in the IAU Catalog of Star Names. Since Acrux is at −63° declination, making it the southernmost first-magnitude star, it is only visible south of latitude 27° North. It barely rises from cities such as Miami, United States, or Karachi, Pakistan (both around 25°N) and not at all from New Orleans, United States, or Cairo, Egypt (both about 30°N). Because of Earth's axial precession, the star was visible to ancient Hindu astronomers in India, who named it Tri-shanku. It was also visible to the ancient Romans and Greeks, who regarded it as part of the constellation of Centaurus. In Chinese, the asterism consisting of Acrux, Mimosa, Gamma Crucis and Delta Crucis is known by a name meaning "Cross"; consequently, Acrux itself is known by a name meaning "the Second Star of the Cross". This star is known as Estrela de Magalhães ("Star of Magellan") in Portuguese. Stellar properties The two components, α1 and α2 Crucis, are separated by 4 arcseconds. α1 is magnitude 1.40 and α2 is magnitude 2.09, both early B-class stars, with surface temperatures of about 28,000 K and 26,000 K, respectively. Their luminosities are 25,000 and 16,000 times that of the Sun. α1 and α2 orbit over such a long period that motion is only barely seen. From their minimum separation of 430 astronomical units, the period is estimated to be around 1,500 years, consistent with the rough calculation sketched below.
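That time scale can be sanity-checked with Kepler's third law in solar-system units, P² = a³/M (P in years, a in AU, M in solar masses). In the sketch below, the 430 AU minimum separation stands in for the semi-major axis, and the total mass of roughly 34 solar masses is an assumption for illustration (the 14 and 10 solar masses quoted for the α1 pair in the next paragraph, plus a rough allowance for α2), not a catalogued figure:

import math

# Kepler's third law: P [years] = sqrt(a[AU]**3 / M[solar masses]).
def orbital_period_years(a_au: float, total_mass_msun: float) -> float:
    return math.sqrt(a_au ** 3 / total_mass_msun)

# 430 AU is the observed minimum separation, used as a stand-in for the
# semi-major axis; ~34 solar masses is an assumed combined mass.
print(round(orbital_period_years(430, 34)))   # ~1530, matching "around 1,500 years"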
α1 is itself a spectroscopic binary star, with its components thought to be around 14 and 10 times the mass of the Sun and orbiting in only 76 days at a separation of about 1 AU. The masses of α2 and the brighter component of α1 suggest that the stars will someday expand into blue and red supergiants (similar to Betelgeuse and Antares) before exploding as supernovae. Component Ab may undergo electron capture in its degenerate O+Ne+Mg core and trigger a supernova explosion; otherwise it will become a massive white dwarf. Photometry with the TESS satellite has shown that one of the stars in the α Crucis system is a β Cephei variable, although α1 and α2 Crucis are too close for TESS to resolve and determine which one is the pulsator. Rizzuto and colleagues determined in 2011 that the α Crucis system was 66% likely to be a member of the Lower Centaurus–Crux sub-group of the Scorpius–Centaurus association. It had not previously been seen to be a member of the group. A bow shock is present around α Crucis and is visible in the infrared spectrum, but it is not aligned with α Crucis; the bow shock likely formed from large-scale motions in the interstellar matter. The cooler, less-luminous B-class star HR 4729 (HD 108250) lies 90 arcseconds away from the triple star system α Crucis and shares its motion through space, suggesting it may be gravitationally bound to it, and it is therefore generally assumed to be physically associated. It is itself a spectroscopic binary system, sometimes catalogued as component C (Acrux C) of the Acrux multiple system. Another, fainter visual companion is listed as component D or Acrux D. A further seven faint stars are also listed as companions out to a distance of about two arcminutes. On 2 October 2008, the Cassini–Huygens spacecraft resolved three of the components (A, B and C) of the multiple star system as Saturn's disk occulted it. In culture Acrux is represented in the flags of Australia, New Zealand, Samoa, and Papua New Guinea as one of five stars that compose the Southern Cross. It is also featured in the flag of Brazil, along with 26 other stars, each of which represents a state; Acrux represents the state of São Paulo. As of 2015, it is also represented on the cover of the Brazilian passport. The Brazilian oceanographic research vessel Alpha Crucis is named after the star. See also Lists of stars Notes References External links http://jumk.de/astronomie/big-stars/acrux.shtml http://www.daviddarling.info/encyclopedia/A/Acrux.html Double stars B-type main-sequence stars B-type subgiants Spectroscopic binaries Triple star systems 6 Lower Centaurus Crux Crux Crucis, Alpha 4730 1 Durchmusterung objects 060718 108248 9 Crucis, 26 7 Acrux
Acrux
[ "Astronomy" ]
1,582
[ "Crux", "Constellations" ]
169,208
https://en.wikipedia.org/wiki/Marsh
In ecology, a marsh is a wetland that is dominated by herbaceous plants rather than by woody plants. More generally, the word can be used for any low-lying and seasonally waterlogged terrain. In Europe and in agricultural literature, low-lying meadows that require draining and embanked polderlands are also referred to as marshes or marshland. Marshes can often be found at the edges of lakes and streams, where they form a transition between the aquatic and terrestrial ecosystems. They are often dominated by grasses, rushes or reeds. If woody plants are present they tend to be low-growing shrubs, and the marsh is sometimes called a carr. This form of vegetation is what differentiates marshes from other types of wetland such as swamps, which are dominated by trees, and mires, which are wetlands that have accumulated deposits of acidic peat. Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. This biological productivity means that marshes contain 0.1% of global sequestered terrestrial carbon. Moreover, they have an outsized influence on the climate resilience of coastal areas and waterways, absorbing high tides and other water changes due to extreme weather. Though some marshes are expected to migrate upland, most natural marshlands will be threatened by sea level rise and associated erosion. Basic information Marshes provide a habitat for many species of plants, animals, and insects that have adapted to living in flooded conditions. The plants must be able to survive in wet mud with low oxygen levels. Many of these plants, therefore, have aerenchyma, channels within the stem that allow air to move from the leaves into the rooting zone. Marsh plants also tend to have rhizomes for underground storage and reproduction. Common examples include cattails, sedges, papyrus and sawgrass. Aquatic animals, from fish to salamanders, are generally able to live with a low amount of oxygen in the water. Some can obtain oxygen from the air instead, while others can live indefinitely in conditions of low oxygen. The pH in marshes tends to be neutral to alkaline, as opposed to bogs, where peat accumulates under more acid conditions. Values and ecosystem services Marshes provide habitats for many kinds of invertebrates, fish, amphibians, waterfowl and aquatic mammals. Marshes have extremely high levels of biological production, some of the highest in the world, and therefore are important in supporting fisheries. Marshes also improve water quality by acting as a sink to filter pollutants and sediment from the water that flows through them, and they contribute to water purification by consuming nutrients and pollutants. Marshes (and other wetlands) are able to absorb water during periods of heavy rainfall and slowly release it into waterways, and therefore reduce the magnitude of flooding. Marshes also provide the services of tourism, recreation, education, and research. Types of marshes Marshes differ depending mainly on their location and salinity. These factors greatly influence the range and scope of animal and plant life that can survive and reproduce in these environments. The three main types of marsh are salt marshes, freshwater tidal marshes, and freshwater marshes. These three can be found worldwide, and each contains a different set of organisms. Salt marshes Saltwater marshes are found around the world in mid to high latitudes, wherever there are sections of protected coastline.
They are located close enough to the shoreline that the motion of the tides affects them, and, sporadically, they are covered with water. They flourish where the rate of sediment buildup is greater than the rate at which the land level is sinking. Salt marshes are dominated by specially adapted rooted vegetation, primarily salt-tolerant grasses. Salt marshes are most commonly found in lagoons, estuaries, and on the sheltered side of a shingle or sandspit. The currents there carry the fine particles around to the quiet side of the spit, and sediment begins to build up. These locations allow the marshes to absorb the excess nutrients from the water running through them before they reach the oceans and estuaries. These marshes are slowly declining. Coastal development and urban sprawl have caused significant loss of these essential habitats. Freshwater tidal marshes Although considered freshwater marshes, these marshes are affected by the ocean tides. However, without the stresses of salinity at work in their saltwater counterparts, the diversity of the plants and animals that live in and use freshwater tidal marshes is much higher than in salt marshes. The most severe threats to this form of marsh are the increasing size and pollution of the cities surrounding them. Freshwater marshes Ranging greatly in size and geographic location, freshwater marshes make up North America's most common form of wetland. They are also the most diverse of the three types of marsh. Some examples of freshwater marsh types in North America are: Wet meadows Wet meadows occur in shallow lake basins, low-lying depressions, and the land between shallow marshes and upland areas. They also occur on the edges of large lakes and rivers. Wet meadows often have very high plant diversity and high densities of buried seeds. They are regularly flooded but are often dry in the summer. Vernal pools Vernal pools are a type of marsh found only seasonally in shallow depressions in the land. They can be covered in shallow water, but in the summer and fall, they can be completely dry. In western North America, vernal pools tend to form in open grasslands, whereas in the east, they often occur in forested landscapes. Further south, vernal pools form in pine savannas and flatwoods. Many amphibian species depend upon vernal pools for spring breeding; these ponds provide a habitat free from fish, which eat the eggs and young of amphibians. An example is the endangered gopher frog. Similar temporary ponds occur in other world ecosystems, where they may have local names. However, the term vernal pool can be applied to all such temporary pool ecosystems. Playa lakes Playa lakes are a form of shallow freshwater marsh in the southern high plains of the United States. Like vernal pools, they are only present at certain times of the year and generally have a circular shape. As the playa dries during the summer, conspicuous plant zonation develops along the shoreline. Prairie potholes Prairie potholes are found in northern North America, such as the Prairie Pothole Region. Glaciers once covered these landscapes, and as a result, shallow depressions were formed in great numbers. These depressions fill with water in the spring. They provide important breeding habitats for many species of waterfowl. Some pools only occur seasonally, while others retain enough water to be present all year. Riverine wetlands Many kinds of marsh occur along the fringes of large rivers. The different types are produced by factors such as water level, nutrients, ice scour, and waves.
Embanked marshlands Large tracts of tidal marsh have been embanked and artificially drained. They are usually known by the Dutch name of polders. In Northern Germany and Scandinavia they are called Marschland, Marsch or marsk; in France marais maritime. In the Netherlands and Belgium, they are designated as marine clay districts. In East Anglia, a region in the East of England, the embanked marshes are also known as Fens. Restoration Some areas have already lost 90% of their wetlands, including marshes. They have been drained to create agricultural land or filled to accommodate urban sprawl. Restoration is returning marshes to the landscape to replace those lost in the past. Restoration can be done on a large scale, such as by allowing rivers to flood naturally in the spring, or on a small scale by returning wetlands to urban landscapes. See also References External links Marshes of the Lowcountry (South Carolina) – Beaufort County Library Fluvial landforms Pedology Wetlands
Marsh
[ "Environmental_science" ]
1,593
[ "Hydrology", "Wetlands" ]
169,250
https://en.wikipedia.org/wiki/Lipid-anchored%20protein
Lipid-anchored proteins (also known as lipid-linked proteins) are proteins located on the surface of the cell membrane that are covalently attached to lipids embedded within the cell membrane. These proteins insert and assume a place in the bilayer structure of the membrane alongside the similar fatty acid tails. The lipid-anchored protein can be located on either side of the cell membrane. Thus, the lipid serves to anchor the protein to the cell membrane. They are a type of proteolipids. The lipid groups play a role in protein interaction and can contribute to the function of the protein to which they are attached. Furthermore, the lipid serves as a mediator of membrane associations or as a determinant for specific protein-protein interactions. For example, lipid groups can play an important role in increasing molecular hydrophobicity. This allows for the interaction of proteins with cellular membranes and protein domains. In a dynamic role, lipidation can sequester a protein away from its substrate to inactivate the protein and then activate it by substrate presentation. Overall, there are three main types of lipid-anchored proteins: prenylated proteins, fatty acylated proteins and glycosylphosphatidylinositol-linked proteins (GPI). A protein can have multiple lipid groups covalently attached to it, but the site where the lipids bind to the protein depends both on the lipid group and the protein. Prenylated proteins Prenylated proteins are proteins with covalently attached hydrophobic isoprene polymers (i.e. branched five-carbon hydrocarbons) at cysteine residues of the protein. More specifically, these isoprenoid groups, usually farnesyl (15-carbon) and geranylgeranyl (20-carbon), are attached to the protein via thioether linkages at cysteine residues near the C terminal of the protein. This prenylation of lipid chains to proteins facilitates their interaction with the cell membrane. The prenylation motif "CaaX box" is the most common prenylation site in proteins, that is, the site where farnesyl or geranylgeranyl covalently attach. In the CaaX box sequence, the C represents the cysteine that is prenylated, the A represents any aliphatic amino acid and the X determines the type of prenylation that will occur. If the X is an Ala, Met, Ser or Gln the protein will be farnesylated via the farnesyltransferase enzyme, and if the X is a Leu then the protein will be geranylgeranylated via the geranylgeranyltransferase I enzyme; a toy restatement of this rule appears at the end of this section. Both of these enzymes are similar, with each containing two subunits. Roles and function Prenylated proteins are particularly important for eukaryotic cell growth, differentiation and morphology. Furthermore, protein prenylation is a reversible post-translational modification to the cell membrane. This dynamic interaction of prenylated proteins with the cell membrane is important for their signalling functions and is often deregulated in disease processes such as cancer. More specifically, Ras is the protein that undergoes prenylation via farnesyltransferase, and when it is switched on it can turn on genes involved in cell growth and differentiation. Thus, overactive Ras signalling can lead to cancer. An understanding of these prenylated proteins and their mechanisms has been important for drug development efforts in combating cancer. Other prenylated proteins include members of the Rab and Rho families as well as lamins. Some important prenylation chains that are involved in the HMG-CoA reductase metabolic pathway are geranylgeraniol, farnesol and dolichol. These isoprene polymers (e.g. geranyl pyrophosphate and farnesyl pyrophosphate) are involved in condensations via enzymes such as prenyltransferase, ultimately cyclizing to form cholesterol.
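The CaaX rule described above can be restated compactly in code. The following Python sketch is illustrative only: the function name is mine, and real prenyltransferase specificity (including the aliphatic requirement at the two middle positions, not checked here) is broader than this simple lookup:

FARNESYL_X = {"A", "M", "S", "Q"}    # Ala, Met, Ser, Gln -> farnesyltransferase
GERANYLGERANYL_X = {"L"}             # Leu -> geranylgeranyltransferase I

def prenylation_type(seq: str) -> str:
    """Classify a protein by its C-terminal CaaX box (one-letter codes)."""
    caax = seq[-4:]
    if len(caax) < 4 or caax[0] != "C":
        return "no CaaX box"
    x = caax[-1]
    if x in FARNESYL_X:
        return "farnesylation (15-carbon)"
    if x in GERANYLGERANYL_X:
        return "geranylgeranylation (20-carbon)"
    return "unclassified by this simple rule"

# CVLS is the C-terminal CaaX box of HRas, which is indeed farnesylated;
# the rest of this example sequence is hypothetical.
print(prenylation_type("MTEFKLVCVLS"))   # farnesylation (15-carbon)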
Fatty acylated proteins Fatty acylated proteins are proteins that have been post-translationally modified to include the covalent attachment of fatty acids at certain amino acid residues. The most common fatty acids that are covalently attached to the protein are the saturated myristic acid (14-carbon) and palmitic acid (16-carbon). Proteins can be modified to contain either one or both of these fatty acids. N-myristoylation N-myristoylation (i.e. attachment of myristic acid) is generally an irreversible protein modification that typically occurs during protein synthesis, in which the myristic acid is attached to the α-amino group of an N-terminal glycine residue through an amide linkage. This reaction is facilitated by N-myristoyltransferase. These proteins usually begin with a Met-Gly sequence, with either a serine or threonine at position 5. Proteins that have been myristoylated are involved in signal transduction cascades, protein-protein interactions and mechanisms that regulate protein targeting and function; a toy check of this sequence pattern closes this section. An example in which the myristoylation of a protein is important is in apoptosis, programmed cell death. After the protein BH3 interacting-domain death agonist (Bid) has been myristoylated, it is targeted to the mitochondrial membrane, where it promotes the release of cytochrome c, which then ultimately leads to cell death. Other proteins that are myristoylated and involved in the regulation of apoptosis are actin and gelsolin. S-palmitoylation S-palmitoylation (i.e. attachment of palmitic acid) is a reversible protein modification in which a palmitic acid is attached to a specific cysteine residue via a thioester linkage. The term S-acylation can also be used when other medium and long fatty acid chains are also attached to palmitoylated proteins. No consensus sequence for protein palmitoylation has been identified. Palmitoylated proteins are mainly found on the cytoplasmic side of the plasma membrane, where they play a role in transmembrane signaling. The palmitoyl group can be removed by palmitoyl thioesterases. It is believed that this reversible palmitoylation may regulate the interaction of the protein with the membrane and thus have a role in signaling processes. Furthermore, this allows for the regulation of protein subcellular localization, stability and trafficking. An example in which palmitoylation of a protein plays a role in cell signaling pathways is in the clustering of proteins in the synapse. When the postsynaptic density protein 95 (PSD-95) is palmitoylated, it is restricted to the membrane, which allows it to bind to and cluster ion channels in the postsynaptic membrane. Thus, palmitoylation can play a role in the regulation of neurotransmitter release. Palmitoylation mediates the affinity of a protein for lipid rafts and facilitates the clustering of proteins. The clustering can increase the proximity of two molecules. Alternatively, clustering can sequester a protein away from a substrate. For example, palmitoylation of phospholipase D (PLD) sequesters the enzyme away from its substrate phosphatidylcholine. When cholesterol levels decrease or PIP2 levels increase, the palmitate-mediated localization is disrupted and the enzyme traffics to PIP2, where it encounters its substrate and is activated by substrate presentation.
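The N-myristoylation sequence pattern described above (a Met-Gly start with serine or threonine at position 5 of the mature, Met-cleaved chain) can be expressed as a small filter. This is a sketch under that stated simplification; the function name and example sequences are mine, and real N-myristoyltransferase specificity is considerably more complex, so this is not a predictor:

def myristoylation_candidate(seq: str) -> bool:
    """Flag sequences matching the simplified Met-Gly / position-5 Ser-Thr rule."""
    if not seq.startswith("MG"):
        return False
    mature = seq[1:]   # the initiator Met is removed, exposing Gly at position 1
    return len(mature) >= 5 and mature[4] in ("S", "T")

print(myristoylation_candidate("MGSSKSKPKD"))   # True: Gly exposed, Ser at position 5
print(myristoylation_candidate("MAEDSTKHGL"))   # False: no N-terminal Met-Gly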
GPI proteins Glycosylphosphatidylinositol-anchored proteins (GPI-anchored proteins) are attached to a GPI complex molecular group via an amide linkage to the protein's C-terminal carboxyl group. This GPI complex consists of several main components that are all interconnected: a phosphoethanolamine, a linear tetrasaccharide (composed of three mannose and a glucosaminyl) and a phosphatidylinositol. The phosphatidylinositol group is glycosidically linked to the non-N-acetylated glucosamine of the tetrasaccharide. A phosphodiester bond is then formed between the mannose at the nonreducing end of the tetrasaccharide and the phosphoethanolamine. The phosphoethanolamine is then amide-linked to the C-terminal carboxyl group of the respective protein. The GPI attachment occurs through the action of the GPI-transamidase complex. The fatty acid chains of the phosphatidylinositol are inserted into the membrane and thus are what anchor the protein to the membrane. These proteins are only located on the exterior surface of the plasma membrane. Roles and function The sugar residues in the tetrasaccharide and the fatty acid residues in the phosphatidylinositol group vary depending on the protein. This great diversity is what allows the GPI proteins to have a wide range of functions, including acting as hydrolytic enzymes, adhesion molecules, receptors, protease inhibitors and complement regulatory proteins. Furthermore, GPI proteins play an important role in embryogenesis, development, neurogenesis, the immune system and fertilization. More specifically, the GPI protein IZUMO1R (also named JUNO after the Roman goddess of fertility) on the egg plasma membrane has an essential role in sperm-egg fusion. Releasing the IZUMO1R (JUNO) GPI protein from the egg plasma membrane prevents sperm from fusing with the egg, and it is suggested that this mechanism may contribute to the polyspermy block at the plasma membrane in eggs. Other roles that GPI modification allows for include association with membrane microdomains, transient homodimerization, and apical sorting in polarized cells. References External links Membrane biology Membrane proteins Lipoproteins Post-translational modification
Lipid-anchored protein
[ "Chemistry", "Biology" ]
2,097
[ "Lipid biochemistry", "Membrane biology", "Gene expression", "Protein classification", "Biochemical reactions", "Membrane proteins", "Post-translational modification", "Molecular biology", "Lipoproteins" ]
169,262
https://en.wikipedia.org/wiki/Intuitionistic%20logic
Intuitionistic logic, sometimes more generally called constructive logic, refers to systems of symbolic logic that differ from the systems used for classical logic by more closely mirroring the notion of constructive proof. In particular, systems of intuitionistic logic do not assume the law of the excluded middle and double negation elimination, which are fundamental inference rules in classical logic. Formalized intuitionistic logic was originally developed by Arend Heyting to provide a formal basis for L. E. J. Brouwer's programme of intuitionism. From a proof-theoretic perspective, Heyting's calculus is a restriction of classical logic in which the law of excluded middle and double negation elimination have been removed. Excluded middle and double negation elimination can still be proved for some propositions on a case-by-case basis, but they do not hold universally as they do in classical logic. The standard explanation of intuitionistic logic is the BHK interpretation. Several systems of semantics for intuitionistic logic have been studied. One of these semantics mirrors classical Boolean-valued semantics but uses Heyting algebras in place of Boolean algebras. Another semantics uses Kripke models. These, however, are technical means for studying Heyting's deductive system rather than formalizations of Brouwer's original informal semantic intuitions. Semantical systems claiming to capture such intuitions, due to offering meaningful concepts of "constructive truth" (rather than merely validity or provability), are Kurt Gödel's dialectica interpretation, Stephen Cole Kleene's realizability, Yurii Medvedev's logic of finite problems, and Giorgi Japaridze's computability logic. Yet such semantics persistently induce logics properly stronger than Heyting's logic. Some authors have argued that this might be an indication of inadequacy of Heyting's calculus itself, deeming the latter incomplete as a constructive logic. Mathematical constructivism In the semantics of classical logic, propositional formulae are assigned truth values from the two-element set {⊤, ⊥} ("true" and "false" respectively), regardless of whether we have direct evidence for either case. This is referred to as the 'law of excluded middle', because it excludes the possibility of any truth value besides 'true' or 'false'. In contrast, propositional formulae in intuitionistic logic are not assigned a definite truth value and are only considered "true" when we have direct evidence, hence proof. We can also say, instead of the propositional formula being "true" due to direct evidence, that it is inhabited by a proof in the Curry–Howard sense. Operations in intuitionistic logic therefore preserve justification, with respect to evidence and provability, rather than truth-valuation. Intuitionistic logic is a commonly-used tool in developing approaches to constructivism in mathematics. The use of constructivist logics in general has been a controversial topic among mathematicians and philosophers (see, for example, the Brouwer–Hilbert controversy). A common objection to their use is the above-cited lack of two central rules of classical logic, the law of excluded middle and double negation elimination. David Hilbert considered them to be so important to the practice of mathematics that he wrote that taking the principle of excluded middle from the mathematician would be the same as denying the telescope to the astronomer or the boxer the use of his fists. Intuitionistic logic has found practical use in mathematics despite the challenges presented by the inability to utilize these rules.
One reason for this is that its restrictions produce proofs that have the disjunction and existence properties, making it also suitable for other forms of mathematical constructivism. Informally, this means that if there is a constructive proof that an object exists, that constructive proof may be used as an algorithm for generating an example of that object, a principle known as the Curry–Howard correspondence between proofs and algorithms. One reason that this particular aspect of intuitionistic logic is so valuable is that it enables practitioners to utilize a wide range of computerized tools, known as proof assistants. These tools assist their users in the generation and verification of large-scale proofs, whose size usually precludes the usual human-based checking that goes into publishing and reviewing a mathematical proof. As such, the use of proof assistants (such as Agda or Coq) is enabling modern mathematicians and logicians to develop and prove extremely complex systems, beyond those that are feasible to create and check solely by hand. One example of a proof that was impossible to satisfactorily verify without formal verification is the famous proof of the four color theorem. This theorem stumped mathematicians for more than a hundred years, until a proof was developed that ruled out large classes of possible counterexamples, yet still left open enough possibilities that a computer program was needed to finish the proof. That proof was controversial for some time, but, later, it was verified using Coq. Syntax The syntax of formulas of intuitionistic logic is similar to propositional logic or first-order logic. However, intuitionistic connectives are not definable in terms of each other in the same way as in classical logic, hence their choice matters. In intuitionistic propositional logic (IPL) it is customary to use →, ∧, ∨, ⊥ as the basic connectives, treating ¬A as an abbreviation for (A → ⊥). In intuitionistic first-order logic both quantifiers ∃, ∀ are needed. Hilbert-style calculus Intuitionistic logic can be defined using the following Hilbert-style calculus. This is similar to a way of axiomatizing classical propositional logic. In propositional logic, the inference rule is modus ponens MP: from φ and φ → ψ infer ψ and the axioms are
THEN-1: φ → (χ → φ)
THEN-2: (φ → (χ → ψ)) → ((φ → χ) → (φ → ψ))
AND-1: (φ ∧ χ) → φ
AND-2: (φ ∧ χ) → χ
AND-3: φ → (χ → (φ ∧ χ))
OR-1: φ → (φ ∨ χ)
OR-2: χ → (φ ∨ χ)
OR-3: (φ → ψ) → ((χ → ψ) → ((φ ∨ χ) → ψ))
FALSE: ⊥ → φ
To make this a system of first-order predicate logic, the generalization rules
∀-GEN: from ψ → φ infer ψ → (∀x φ), if x is not free in ψ
∃-GEN: from φ → ψ infer (∃x φ) → ψ, if x is not free in ψ
are added, along with the axioms
PRED-1: (∀x φ(x)) → φ(t), if the term t is free for substitution for the variable x in φ (i.e., if no occurrence of any variable in t becomes bound in φ(t))
PRED-2: φ(t) → (∃x φ(x)), with the same restriction as for PRED-1
Negation If one wishes to include a connective ¬ for negation rather than consider it an abbreviation for φ → ⊥, it is enough to add:
NOT-1': (φ → ⊥) → ¬φ
NOT-2': ¬φ → (φ → ⊥)
There are a number of alternatives available if one wishes to omit the connective ⊥ (false). For example, one may replace the three axioms FALSE, NOT-1', and NOT-2' with the two axioms
NOT-1: (φ → χ) → ((φ → ¬χ) → ¬φ)
NOT-2: φ → (¬φ → χ)
Alternatives to NOT-1 are (φ → ¬χ) → (χ → ¬φ) or (φ → ¬φ) → ¬φ. Equivalence The connective ↔ for equivalence may be treated as an abbreviation, with φ ↔ χ standing for (φ → χ) ∧ (χ → φ). Alternatively, one may add the axioms
IFF-1: (φ ↔ χ) → (φ → χ)
IFF-2: (φ ↔ χ) → (χ → φ)
IFF-3: (φ → χ) → ((χ → φ) → (φ ↔ χ))
IFF-1 and IFF-2 can, if desired, be combined into a single axiom using conjunction. A small executable illustration of this calculus follows.
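To make the Hilbert-style presentation concrete, here is a minimal Python sketch that derives A → A from instances of THEN-1 and THEN-2 by modus ponens. Formulas are nested tuples, and only the needed axiom instances are written out rather than a full schema matcher; all names are mine:

def imp(a, b):
    # Build the formula a -> b.
    return ("->", a, b)

def modus_ponens(premise, implication):
    """From p and p -> q, conclude q."""
    op, p, q = implication
    assert op == "->" and p == premise, "not an instance of MP"
    return q

A = "A"
then1_a = imp(A, imp(imp(A, A), A))                       # THEN-1 instance
then2 = imp(then1_a, imp(imp(A, imp(A, A)), imp(A, A)))   # THEN-2 instance
step3 = modus_ponens(then1_a, then2)                      # (A -> (A -> A)) -> (A -> A)
then1_b = imp(A, imp(A, A))                               # THEN-1 instance
step5 = modus_ponens(then1_b, step3)                      # A -> A
print(step5)                                              # ('->', 'A', 'A')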
Sequent calculus Gerhard Gentzen discovered that a simple restriction of his system LK (his sequent calculus for classical logic) results in a system that is sound and complete with respect to intuitionistic logic. He called this system LJ. In LK any number of formulas is allowed to appear on the conclusion side of a sequent; in contrast LJ allows at most one formula in this position. Other derivatives of LK are limited to intuitionistic derivations but still allow multiple conclusions in a sequent. LJ' is one example. Theorems The theorems of the pure logic are the statements provable from the axioms and inference rules. For example, using THEN-1 in THEN-2 reduces it to (φ → (χ → ψ)) → (χ → (φ → ψ)). A formal proof of the latter using the Hilbert system is given on that page. With ⊥ for ψ, this in turn implies (φ → ¬χ) → (χ → ¬φ). In words: "If φ being the case implies that χ is absurd, then if χ does hold, one has that φ is not the case." Due to the symmetry of the statement, one in fact obtains (φ → ¬χ) ↔ (χ → ¬φ). When explaining the theorems of intuitionistic logic in terms of classical logic, it can be understood as a weakening thereof: it is more conservative in what it allows a reasoner to infer, while not permitting any new inferences that could not be made under classical logic. Each theorem of intuitionistic logic is a theorem in classical logic, but not conversely. Many tautologies in classical logic are not theorems in intuitionistic logic. In particular, as said above, one of intuitionistic logic's chief aims is to not affirm the law of the excluded middle so as to vitiate the use of non-constructive proof by contradiction, which can be used to furnish existence claims without providing explicit examples of the objects that it proves exist. Double negations A double negation does not affirm the law of the excluded middle (PEM); while it is not necessarily the case that PEM is upheld in any context, no counterexample can be given either. Such a counterexample would be an inference (inferring the negation of the law for a certain proposition) disallowed under classical logic and thus PEM is not allowed in a strict weakening like intuitionistic logic. Formally, it is a simple theorem that ((φ ∨ (φ → χ)) → χ) → χ for any two propositions. By considering any χ established to be false, this indeed shows that the double negation of the law, ¬¬(φ ∨ ¬φ), is retained as a tautology already in minimal logic. This means any ¬(φ ∨ ¬φ) is established to be inconsistent, and the propositional calculus is in turn always compatible with classical logic. When assuming the law of excluded middle implies a proposition, then by applying contraposition twice and using the double-negated excluded middle, one may prove double-negated variants of various strictly classical tautologies. The situation is more intricate for predicate logic formulas, when some quantified expressions are being negated. Double negation and implication Akin to the above, from modus ponens in the form (φ ∧ (φ → χ)) → χ follows (φ ∧ ¬χ) → ¬(φ → χ). The relation between a proposition and its double negation may always be used to obtain new formulas: a weakened premise makes for a strong implication, and vice versa. For example, note that if ¬¬φ → χ holds, then so does φ → χ, but the schema in the other direction would imply the double-negation elimination principle. Propositions for which double-negation elimination is possible are also called stable. Intuitionistic logic proves stability only for restricted types of propositions. A formula for which excluded middle holds can be proven stable using the disjunctive syllogism, which is discussed more thoroughly below. The converse does however not hold in general, unless the excluded middle statement at hand is stable itself. An implication ψ → ¬φ can be proven to be equivalent to ¬¬ψ → ¬φ, whatever the propositions. As a special case (taking ¬φ for ψ), it follows that propositions of negated form (¬φ here) are stable, i.e. ¬¬¬φ → ¬φ is always valid; short machine-checked proofs of these last two facts follow.
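Both the double-negated excluded middle and the stability of negated propositions can be witnessed by short mechanically checked proofs. The following Lean 4 sketch (the theorem names are mine) uses no classical axioms, so the terms themselves are constructive proofs:

-- Double negation of the law of excluded middle, provable constructively.
theorem nn_em (φ : Prop) : ¬¬(φ ∨ ¬φ) :=
  fun h => h (Or.inr (fun a => h (Or.inl a)))

-- Negated propositions are stable: a triple negation collapses to a single one.
theorem neg_stable (φ : Prop) : ¬¬¬φ → ¬φ :=
  fun h a => h (fun k => k a)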
In general, is stronger than , which is stronger than , which itself implies the three equivalent statements , and . Using the disjunctive syllogism, the previous four are indeed equivalent. This also gives an intuitionistically valid derivation of , as it is thus equivalent to an identity. When φ expresses a claim, then its double negation merely expresses the claim that a refutation of φ would be inconsistent. Having proven such a mere double negation also still aids in negating other statements through negation introduction, as then any proposition that would imply ¬φ can itself be rejected. A double-negated existential statement does not denote existence of an entity with a property, but rather the absurdity of assumed non-existence of any such entity. Also, all the principles in the next section involving quantifiers explain the use of implications with hypothetical existence as premise. Formula translation Weakening statements by adding two negations before existential quantifiers (and atoms) is also the core step in the double-negation translation. It constitutes an embedding of classical first-order logic into intuitionistic logic: a first-order formula is provable in classical logic if and only if its Gödel–Gentzen translation is provable intuitionistically. For example, any theorem φ of classical propositional logic has a proof consisting of an intuitionistic proof of its double negation ¬¬φ followed by one application of double-negation elimination. Intuitionistic logic can thus be seen as a means of extending classical logic with constructive semantics. Non-interdefinability of operators Already minimal logic easily proves the following theorems, relating conjunction resp. disjunction to the implication using negation: , a weakened variant of the disjunctive syllogism resp. and similarly Indeed, stronger variants of these still do hold - for example the antecedents may be double-negated, as noted, or all may be replaced by on the antecedent sides, as will be discussed. However, neither of these five implications can be reversed without immediately implying excluded middle resp. double-negation elimination. Hence, the left hand sides do not constitute a possible definition of the right hand sides. In contrast, in classical propositional logic it is possible to take one of those three connectives plus negation as primitive and define the other two in terms of it. Such is done, for example, in Łukasiewicz's three axioms of propositional logic. It is even possible to define all in terms of a sole sufficient operator such as the Peirce arrow (NOR) or Sheffer stroke (NAND). Similarly, in classical first-order logic, one of the quantifiers can be defined in terms of the other and negation. These are fundamentally consequences of the law of bivalence, which makes all such connectives merely Boolean functions. The law of bivalence is not required to hold in intuitionistic logic. As a result, none of the basic connectives can be dispensed with, and the above axioms are all necessary. So most of the classical identities between connectives and quantifiers are only theorems of intuitionistic logic in one direction. Some of the theorems go in both directions, i.e. are equivalences, as subsequently discussed. Existential vs. universal quantification Firstly, when x is not free in the proposition ψ, then (∃x (φ(x) → ψ)) → ((∀x φ(x)) → ψ). When the domain of discourse is empty, then by the principle of explosion, an existential statement implies anything.
When the domain contains at least one term, then assuming excluded middle, the inverse of the above implication becomes provable too, meaning the two sides become equivalent. This inverse direction is equivalent to the drinker's paradox (DP). Moreover, an existential and dual variant of it is given by the independence of premise principle (IP). Classically, the statement above is moreover equivalent to a more disjunctive form discussed further below. Constructively, existence claims are however generally harder to come by. If the domain of discourse is not empty and ψ is moreover independent of x, such principles are equivalent to formulas in the propositional calculus. Here, the formula then just expresses an identity. This is the curried form of modus ponens, which in the special case with a false proposition results in the law of non-contradiction principle. Considering a false proposition for the original implication results in the important statement, in words: "If there exists an entity that does not have the property, then the following is refuted: each entity has the property." The quantifier formula with negations also immediately follows from the non-contradiction principle derived above, each instance of which itself already follows from the more particular instance. To derive a contradiction given a proposition, it suffices to establish its negation (as opposed to its stronger double negation), and this makes proving double negations valuable also. By the same token, the original quantifier formula in fact still holds with the antecedent weakened accordingly. And so, in fact, a stronger theorem holds, in words: "If there exists an entity that does not have the property, then the following is refuted: for each entity, one is not able to prove that it does not have the property". Secondly, where similar considerations apply. Here the existential part is always a hypothesis and this is an equivalence. Considering the special case again, the proven conversion can be used to obtain two further implications. Of course, variants of such formulas can also be derived that have the double negations in the antecedent. A special case of the first formula here is stronger than the one direction of the equivalence bullet point listed above. For simplicity of the discussion here and below, the formulas are generally presented in weakened forms without all possible insertions of double negations in the antecedents. More general variants hold. Incorporating the predicate and currying, the following generalization also entails the relation between implication and conjunction in the predicate calculus, discussed below. If the predicate is decidedly false for all x, then this equivalence is trivial. If the predicate is decidedly true for all x, the schema simply reduces to the previously stated equivalence. In the language of classes, the special case of this equivalence with a false consequent equates two characterizations of disjointness. Disjunction vs. conjunction There are finite variations of the quantifier formulas, with just two propositions: (¬φ ∨ ¬ψ) → ¬(φ ∧ ψ) and (¬φ ∧ ¬ψ) ↔ ¬(φ ∨ ψ). The first principle cannot be reversed: considering ¬φ for ψ would imply the weak excluded middle, i.e. the statement ¬φ ∨ ¬¬φ. But intuitionistic logic alone does not even prove ¬φ ∨ ¬¬φ. So in particular, there is no distributivity principle for negations deriving the claim ¬φ ∨ ¬ψ from ¬(φ ∧ ψ).
For an informal example of the constructive reading, consider the following: from conclusive evidence that it is not the case that both Alice and Bob showed up to their date, one cannot derive conclusive evidence, tied to either of the two persons, that this person did not show up. Negated propositions are comparably weak, in that the classically valid De Morgan's law, granting a disjunction from a single negative hypothetical, does not automatically hold constructively. The intuitionistic propositional calculus and some of its extensions exhibit the disjunction property instead, implying that one of the disjuncts of any provable disjunction would individually have to be derivable as well. The converse variants of those two, and the equivalent variants with double-negated antecedents, had already been mentioned above. Implications towards the negation of a conjunction can often be proven directly from the non-contradiction principle. In this way one may also obtain mixed forms of the implications. Concatenating the theorems, we also find further implications; the reverse is not provable, as it would prove weak excluded middle. In predicate logic, the constant domain principle is not valid: ∀x (φ ∨ ψ(x)) does not imply the stronger φ ∨ ∀x ψ(x). The distributive property does, however, hold for any finite number of propositions. For a variant of the De Morgan law concerning two existentially closed decidable predicates, see LLPO. Conjunction vs. implication From the general equivalence ((φ ∧ χ) → ψ) ↔ (φ → (χ → ψ)) also follows import-export, expressing incompatibility of two predicates using two different connectives: ¬(φ ∧ χ) ↔ (φ → ¬χ). Due to the symmetry of the conjunction connective, this again implies the already established (φ → ¬χ) ↔ (χ → ¬φ). The equivalence formula for the negated conjunction may be understood as a special case of currying and uncurrying. Many more considerations regarding double negations again apply. And both non-reversible theorems relating conjunction and implication mentioned in the introduction follow from this equivalence. One is a converse, and holds simply because the conjunction is stronger than either conjunct. Now when using the principle in the next section, a variant of the latter with more negations on the left also holds, with further consequences. Disjunction vs. implication Already minimal logic proves excluded middle equivalent to consequentia mirabilis, an instance of Peirce's law. Now akin to modus ponens, clearly (φ ∨ χ) → ((φ → χ) → χ) already in minimal logic, which is a theorem that does not even involve negations. In classical logic, this implication is in fact an equivalence. With antecedents taken of this form, excluded middle together with explosion is seen to entail Peirce's law. In intuitionistic logic, one obtains variants of the stated theorem, as follows. Firstly, note that two different formulas mentioned above can be combined to imply forms of the disjunctive syllogism for negated propositions. A strengthened form still holds in intuitionistic logic. As in previous sections, the positions of the two propositions may be switched, giving a stronger principle than the one mentioned in the introduction. So, for example, intuitionistically "Either φ or χ" is a stronger propositional formula than "If not φ, then χ", whereas these are classically interchangeable. The implication cannot generally be reversed, as that immediately implies excluded middle. Non-contradiction and explosion together also prove a stronger variant. And this shows how excluded middle for φ implies double-negation elimination for it. For a fixed φ, this implication cannot generally be reversed.
However, as ¬¬(φ ∨ ¬φ) is always constructively valid, it follows that assuming double-negation elimination for all such disjunctions implies classical logic also. Of course the formulas established here may be combined to obtain yet more variations. For example, the disjunctive syllogism as presented generalizes to the quantified setting. If some term exists at all, the antecedent here even implies the existential statement, which in turn itself also implies the conclusion here (this is again the very first formula mentioned in this section). The bulk of the discussion in these sections applies just as well to minimal logic. But as for the disjunctive syllogism with general χ, minimal logic can at most prove ¬φ → ((φ ∨ χ) → ¬¬χ), where ¬¬χ denotes (χ → ⊥) → ⊥. The conclusion here can only be simplified to χ using explosion. Equivalences The above lists also contain equivalences. The equivalence involving a conjunction and a disjunction, ((φ ∨ χ) → ψ) ↔ ((φ → ψ) ∧ (χ → ψ)), stems from ((φ ∨ χ) → ψ) actually being stronger than each of (φ → ψ) and (χ → ψ) individually. Both sides of the equivalence can be understood as conjunctions of independent implications. Above, absurdity ⊥ is used for ψ. In functional interpretations, it corresponds to if-clause constructions. So e.g. "Not (φ or χ)" is equivalent to "Not φ, and also not χ". An equivalence itself is generally defined as, and then equivalent to, a conjunction (∧) of implications (→), as follows: φ ↔ χ is (φ → χ) ∧ (χ → φ). With it, other connectives become in turn definable from it: {∨, ↔, ⊥} and {∨, ↔, ¬} are complete bases of intuitionistic connectives, for example. Functionally complete connectives As shown by Alexander V. Kuznetsov, either of two particular connectives – the first one ternary, the second one quinary – is by itself functionally complete: either one can serve the role of a sole sufficient operator for intuitionistic propositional logic, thus forming an analog of the Sheffer stroke from classical propositional logic. Semantics The semantics are rather more complicated than for the classical case. A model theory can be given by Heyting algebras or, equivalently, by Kripke semantics. In 2014, a Tarski-like model theory was proved complete by Bob Constable, but with a different notion of completeness than classically. Unproved statements in intuitionistic logic are not given an intermediate truth value (as is sometimes mistakenly asserted). One can prove that such statements have no third truth value, a result dating back to Glivenko in 1928. Instead they remain of unknown truth value, until they are either proved or disproved. Statements are disproved by deducing a contradiction from them. A consequence of this point of view is that intuitionistic logic has no interpretation as a two-valued logic, nor even as a finite-valued logic, in the familiar sense. Although intuitionistic logic retains the trivial propositions from classical logic, each proof of a propositional formula is considered a valid propositional value, thus by Heyting's notion of propositions-as-sets, propositional formulae are (potentially non-finite) sets of their proofs. Heyting algebra semantics In classical logic, we often discuss the truth values that a formula can take. The values are usually chosen as the members of a Boolean algebra. The meet and join operations in the Boolean algebra are identified with the ∧ and ∨ logical connectives, so that the value of a formula of the form A ∧ B is the meet of the value of A and the value of B in the Boolean algebra. Then we have the useful theorem that a formula is a valid proposition of classical logic if and only if its value is 1 for every valuation—that is, for any assignment of values to its variables. A small two-valued checker illustrating this classical criterion follows.
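The classical criterion just stated can be checked by brute force over all Boolean assignments. The Python sketch below (names are mine) confirms that the three schemas later used to recover classical logic are all classical tautologies, even though, as the Heyting algebra semantics below shows, they are not intuitionistically valid:

from itertools import product

# A formula is a classical tautology iff it evaluates to True under every
# assignment of Boolean values to its variables.
def is_classical_tautology(formula, n_vars):
    return all(formula(*vals) for vals in product((False, True), repeat=n_vars))

def imp(p, q):
    # Classical (material) implication.
    return (not p) or q

print(is_classical_tautology(lambda a: a or not a, 1))                    # True: excluded middle
print(is_classical_tautology(lambda a: imp(not (not a), a), 1))           # True: double negation elimination
print(is_classical_tautology(lambda a, b: imp(imp(imp(a, b), a), a), 2))  # True: Peirce's law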
A corresponding theorem is true for intuitionistic logic, but instead of assigning each formula a value from a Boolean algebra, one uses values from a Heyting algebra, of which Boolean algebras are a special case. A formula is valid in intuitionistic logic if and only if it receives the value of the top element for any valuation on any Heyting algebra. It can be shown that to recognize valid formulas, it is sufficient to consider a single Heyting algebra whose elements are the open subsets of the real line R. In this algebra we have: Value[⊥] = ∅, Value[⊤] = R, Value[A ∧ B] = Value[A] ∩ Value[B], Value[A ∨ B] = Value[A] ∪ Value[B], and Value[A → B] = int(Value[A]∁ ∪ Value[B]), where int(X) is the interior of X and X∁ its complement. The last identity concerning A → B allows us to calculate the value of ¬A: Value[¬A] = Value[A → ⊥] = int(Value[A]∁). With these assignments, intuitionistically valid formulas are precisely those that are assigned the value of the entire line. For example, the formula ¬(A ∧ ¬A) is valid, because no matter what set X is chosen as the value of the formula A, the value of ¬(A ∧ ¬A) can be shown to be the entire line: Value[A ∧ ¬A] = X ∩ int(X∁) is empty, since int(X∁) ⊆ X∁, and so Value[¬(A ∧ ¬A)] = int(∅∁) = int(R) = R. So the valuation of this formula is true, and indeed the formula is valid. But the law of the excluded middle, A ∨ ¬A, can be shown to be invalid by using a specific value of the set of positive real numbers for A: with Value[A] = {x : x > 0}, one has Value[¬A] = int({x : x ≤ 0}) = {x : x < 0}, so Value[A ∨ ¬A] = {x : x ≠ 0}, which is not the entire line. The interpretation of any intuitionistically valid formula in the infinite Heyting algebra described above results in the top element, representing true, as the valuation of the formula, regardless of what values from the algebra are assigned to the variables of the formula. Conversely, for every invalid formula, there is an assignment of values to the variables that yields a valuation that differs from the top element. No finite Heyting algebra has the second of these two properties, although a small finite Heyting algebra already suffices to refute individual classical laws, as the following sketch illustrates.
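For instance, the three-element chain 0 < 1/2 < 1, with meet and join as min and max and with the relative pseudo-complement a → b defined as the largest x such that min(a, x) ≤ b, is a Heyting algebra that is not Boolean. The following Python sketch (all names mine) checks a few of the laws discussed above on this algebra:

ELEMENTS = (0.0, 0.5, 1.0)

def meet(a, b): return min(a, b)
def join(a, b): return max(a, b)
def imp(a, b):  return max(x for x in ELEMENTS if meet(a, x) <= b)
def neg(a):     return imp(a, 0.0)   # negation is a -> bottom

a = 0.5
print(join(a, neg(a)))        # 0.5, not 1.0: excluded middle is not valid
print(imp(neg(neg(a)), a))    # 0.5, not 1.0: double negation elimination fails
print(neg(meet(a, neg(a))))   # 1.0: non-contradiction is still valid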
Kripke semantics Building upon his work on semantics of modal logic, Saul Kripke created another semantics for intuitionistic logic, known as Kripke semantics or relational semantics. Tarski-like semantics It was discovered that Tarski-like semantics for intuitionistic logic were not possible to prove complete. However, Robert Constable has shown that a weaker notion of completeness still holds for intuitionistic logic under a Tarski-like model. In this notion of completeness we are concerned not with all of the statements that are true of every model, but with the statements that are true in the same way in every model. That is, a single proof that the model judges a formula to be true must be valid for every model. In this case, there is not only a proof of completeness, but one that is valid according to intuitionistic logic. Metalogic Admissible rules In intuitionistic logic or a fixed theory using the logic, the situation can occur that an implication always holds metatheoretically, but not in the language. For example, in the pure propositional calculus, if ¬φ → (χ ∨ ψ) is provable, then so is (¬φ → χ) ∨ (¬φ → ψ). Further examples of such admissible rules are known. One says the system is closed under these implications as rules, and they may be adopted. Theories' features Theories over constructive logics can exhibit the disjunction property. The pure intuitionistic propositional calculus does so as well. In particular, it means the excluded middle disjunction φ ∨ ¬φ for an un-rejectable statement (one whose double negation ¬¬φ is provable) is provable exactly when φ is provable. This also means, for example, that the excluded middle disjunctions for some statements are not provable. Relation to other logics Paraconsistent logic Intuitionistic logic is related by duality to a paraconsistent logic known as Brazilian, anti-intuitionistic or dual-intuitionistic logic. The subsystem of intuitionistic logic with the FALSE (resp. NOT-2) axiom removed is known as minimal logic, and some differences have been elaborated on above. Intermediate logics In 1932, Kurt Gödel defined a system of logics intermediate between classical and intuitionistic logic. Indeed, any finite Heyting algebra that is not equivalent to a Boolean algebra defines (semantically) an intermediate logic. On the other hand, validity of formulae in pure intuitionistic logic is not tied to any individual Heyting algebra but relates to any and all Heyting algebras at the same time. So for example, for a schema not involving negations, consider the classically valid (φ → χ) ∨ (χ → φ). Adopting this over intuitionistic logic gives the intermediate logic called Gödel–Dummett logic. Relation to classical logic The system of classical logic is obtained by adding any one of the following axioms:
φ ∨ ¬φ (law of the excluded middle)
¬¬φ → φ (double negation elimination)
(¬φ → φ) → φ (consequentia mirabilis, see also Peirce's law)
Various reformulations, or formulations as schemata in two variables (e.g. Peirce's law), also exist. One notable one is the (reverse) law of contraposition (¬χ → ¬φ) → (φ → χ). Such are detailed on the intermediate logics article. In general, one may take as the extra axiom any classical tautology that is not valid in the two-element Kripke frame (in other words, that is not included in Smetanich's logic). Many-valued logic Kurt Gödel's work involving many-valued logic showed in 1932 that intuitionistic logic is not a finite-valued logic. (See the section titled Heyting algebra semantics above for an infinite-valued logic interpretation of intuitionistic logic.) Modal logic Any formula of the intuitionistic propositional logic (IPC) may be translated into the language of the normal modal logic S4 as follows: writing φ* for the translation of φ, an atom A translates to □A, ⊥ translates to ⊥, (φ ∧ χ) translates to φ* ∧ χ*, (φ ∨ χ) translates to φ* ∨ χ*, and (φ → χ) translates to □(φ* → χ*); and it has been demonstrated that the translated formula is valid in the propositional modal logic S4 if and only if the original formula is valid in IPC. The above set of formulae is called the Gödel–McKinsey–Tarski translation. There is also an intuitionistic version of modal logic S4 called Constructive Modal Logic CS4. Lambda calculus There is an extended Curry–Howard isomorphism between IPC and simply-typed lambda calculus. See also BHK interpretation Computability logic Constructive analysis Constructive proof Constructive set theory Curry–Howard correspondence Game semantics Harrop formula Heyting arithmetic Inhabited set Intermediate logics Intuitionistic type theory Kripke semantics Linear logic Paraconsistent logic Realizability Relevance theory Smooth infinitesimal analysis Notes References External links Logic in computer science Non-classical logic Constructivism (mathematics) Systems of formal logic Intuitionism
Intuitionistic logic
[ "Mathematics" ]
6,390
[ "Mathematical logic", "Logic in computer science", "Constructivism (mathematics)" ]
169,269
https://en.wikipedia.org/wiki/Family%20tree
A family tree, also called a genealogy or a pedigree chart, is a chart representing family relationships in a conventional tree structure. More detailed family trees, used in medicine and social work, are known as genograms. Representations of family history Genealogical data can be represented in several formats, for example, as a pedigree or an ahnentafel (ancestor table). Family trees are often presented with the oldest generations at the top of the tree and the younger generations at the bottom. An ancestry chart, which is a tree showing the ancestors of an individual and not all members of a family, will more closely resemble a tree in shape, being wider at the top than at the bottom. In some ancestry charts, an individual appears on the left and his or her ancestors appear to the right. Conversely, a descendant chart, which depicts all the descendants of an individual, will be narrowest at the top. Beyond these formats, some family trees might include all members of a particular surname (e.g., male-line descendants). Yet another approach is to include all holders of a certain office, such as the Kings of Germany, which represents the reliance on marriage to link dynasties together. The passage of time can also be included to illustrate ancestry and descent. A time scale is often used, expanding radially across the center, divided into decades. Children of the parent form branches around the center and their names are plotted in their birth year on the time scale. Spouses' names join children's names, and nuclear families of parents and children branch off to grandchildren, and so on. Great-grandparents are often in the center to portray four or five generations, which reflects the natural growth pattern of a tree as seen from the top, but sometimes there can be great-great-grandparents or more. In a descendant tree, living relatives are common on the outer branches and contemporary cousins appear adjacent to each other. Privacy should be considered when preparing a living family tree. The image of the tree probably originated with that of the Tree of Jesse in medieval art, used to illustrate the Genealogy of Christ in terms of a prophecy of Isaiah (Isaiah 11:1). Possibly the first non-biblical use, and the first to show full family relationships rather than a purely patrilineal scheme, was that involving family trees of the classical gods in Boccaccio's Genealogia Deorum Gentilium ("On the Genealogy of the Gods of the Gentiles"), whose first version dates to 1360. Common formats In addition to familiar representations of family history and genealogy as a tree structure, there are other notable systems used to illustrate and document ancestry and descent. Ahnentafel An Ahnentafel (German for "ancestor table") is a genealogical numbering system for listing a person's direct ancestors in a fixed sequence of ascent: the subject (or proband) is number 1, the father is 2, the mother is 3, the paternal grandfather is 4, the paternal grandmother is 5, the maternal grandfather is 6, the maternal grandmother is 7, and so on, back through the generations. Apart from the subject or proband, who can be male or female, all even-numbered persons are male, and all odd-numbered persons are female. In this scheme, the number of any person's father is double the person's number, and a person's mother is double the person's number plus one. This system can also be displayed as a tree; a small sketch of the numbering rule follows.
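The numbering rule lends itself to simple arithmetic. The following Python sketch (the function names are mine) is one way to express it:

def father(n: int) -> int:
    # A person numbered n has father 2n.
    return 2 * n

def mother(n: int) -> int:
    # A person numbered n has mother 2n + 1.
    return 2 * n + 1

def ancestor_path(n: int) -> list[str]:
    """Describe position n as a chain of father/mother steps from the subject."""
    steps = []
    while n > 1:
        steps.append("father" if n % 2 == 0 else "mother")
        n //= 2
    return list(reversed(steps))

print(father(1), mother(1))   # 2 3: the parents of the subject
print(ancestor_path(6))       # ['mother', 'father']: the maternal grandfather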
Fan charts depict paternal and maternal ancestors. Graph theory While family trees are depicted as trees, family relations do not in general form a tree in the strict sense used in graph theory, since distant relatives can mate. Therefore, a person can have a common ancestor on both their mother's and father's side. However, because a parent must be born before their child, an individual cannot be their own ancestor, and thus there are no loops. In this regard, ancestry forms a directed acyclic graph. Nevertheless, graphs depicting matrilineal descent (mother-daughter relationships) and patrilineal descent (father-son relationships) do form trees. Assuming no common ancestor, an ancestry chart is a perfect binary tree, as each person has exactly one mother and one father; these charts thus have a regular structure. A descendant chart, on the other hand, does not in general have a regular structure, as a person can have any number of children or none at all. Notable examples Family trees have been used to document family histories across time and cultures throughout the world. Africa In Africa, the ruling dynasty of Ethiopia claimed descent from King Solomon via the Queen of Sheba. Through this claim, the family traced their descent back to the House of David. The genealogy of Ancient Egyptian ruling dynasties was recorded from the beginnings of the Pharaonic era to the end of the Ptolemaic Kingdom, although this is not a record of one continuously linked family lineage, and surviving records are incomplete. Elsewhere in Africa, oral traditions of genealogical recording predominate. Members of the Keita dynasty of Mali, for example, have had their pedigrees sung by griots during annual ceremonies since the 14th century. Meanwhile, in Nigeria, many ruling clans—most notably those descended from Oduduwa—claim descent from the legendary King Kisra. Here too, pedigrees are recited by griots attached to the royal courts. The Americas In some pre-contact Native American civilizations, genealogical records of ruling and priestly families were kept, some of which extended over several centuries or longer. East Asia There are extensive genealogies for the ruling dynasties of China, but these do not form a single, unified family tree. Additionally, it is unclear at which point(s) the most ancient historical figures named become mythological. In Japan, the ancestry of the Imperial Family is traced back to the mythological origins of Japan. The connection to persons from the established historical record only begins in the mid-first millennium AD. The longest family tree in the world is that of the Chinese philosopher and educator Confucius (551–479 BC), who is descended from King Tang (1675–1646 BC). The tree spans more than 80 generations from him and includes more than 2 million members. An international effort involving more than 450 branches around the world was started in 1998 to retrace and revise this family tree. A new edition of the Confucius genealogy was printed in September 2009 by the Confucius Genealogy Compilation Committee, to coincide with the 2560th anniversary of the birth of the Chinese thinker. This latest edition was expected to include some 1.3 million living members who are scattered around the world today. Europe and West Asia Before the Dark Ages, in the Greco-Roman world, some reliable pedigrees dated back at least as far as the first half of the first millennium BC, with claimed or mythological origins reaching back further.
Roman clan and family lineages played an important part in the structure of Roman society and were the basis of its intricate system of personal names. However, there was a break in the continuity of record-keeping at the end of Classical Antiquity. Records of the lines of succession of the Popes and the Eastern Roman Emperors through this transitional period have survived, but these are not continuous genealogical histories of single families. Refer to descent from antiquity. Many noble and aristocratic families of European and West Asian origin can reliably trace their ancestry back as far as the mid to late first millennium AD, with some claiming undocumented descent from Classical Antiquity or mythological ancestors. In Europe, for example, the pedigree of Niall Noígíallach would be a contender for the longest, through Conn of the Hundred Battles (fl. 123 AD); in the legendary history of Ireland, he is further descended from Breogán, and ultimately from Adam, through the sons of Noah. Another very old and extensive tree is that of the Lurie lineage—which includes Sigmund Freud and Martin Buber—and traces back to Lurie, a 13th-century rabbi in Brest-Litovsk, and from there to Rashi and purportedly back to the legendary King David, as documented by Neil Rosenstein in his book The Lurie Legacy. The 1999 edition of the Guinness Book of Records recorded the Lurie family in the "longest lineage" category as one of the oldest-known living families in the world today. Family trees and representations of lineages are also important in religious traditions. The biblical genealogies of Jesus also claim descent from the House of David, covering a period of approximately 1000 years. In the Torah and Old Testament, genealogies are provided for many biblical persons, including a record of the descendants of Adam. Also according to the Torah, the Kohanim are descended from Aaron. Genetic testing performed at the Technion has shown that most modern Kohanim share common Y-chromosome origins, although there is no complete family tree of the Kohanim. In the Islamic world, claimed descent from Muhammad greatly enhanced the status of political and religious leaders; new dynasties often used claims of such descent to help establish their legitimacy. Elsewhere Elsewhere, in many human cultures, clan and tribal associations are based on claims of common ancestry, although detailed documentation of those origins is often very limited. Global Forms of family trees are also used in genetic genealogy. In 2022, scientists reported the largest detailed human genetic genealogy, which unifies human genomes from many sources for insights about human history, ancestry and evolution and demonstrates a novel computational method for estimating how human DNA is related via a series of 13 million linked trees along the genome, a structure known as a tree sequence, which has been described as the largest "human family tree". See also GEDCOM List of family trees Kinship terminology References External links Genealogy Charts
Family tree
[ "Biology" ]
2,018
[ "Phylogenetics", "Genealogy" ]
169,270
https://en.wikipedia.org/wiki/Macrophage
Macrophages (abbreviated Mφ, MΦ or MP) are a type of white blood cell of the innate immune system that engulf and digest pathogens, such as cancer cells, microbes, cellular debris and foreign substances, which do not have proteins that are specific to healthy body cells on their surface. This process is called phagocytosis, which acts to defend the host against infection and injury. Macrophages are found in essentially all tissues, where they patrol for potential pathogens by amoeboid movement. They take various forms (with various names) throughout the body (e.g., histiocytes, Kupffer cells, alveolar macrophages, microglia, and others), but all are part of the mononuclear phagocyte system. Besides phagocytosis, they play a critical role in nonspecific defense (innate immunity) and also help initiate specific defense mechanisms (adaptive immunity) by recruiting other immune cells such as lymphocytes. For example, they are important as antigen presenters to T cells. In humans, dysfunctional macrophages cause severe diseases such as chronic granulomatous disease that result in frequent infections. Beyond increasing inflammation and stimulating the immune system, macrophages also play an important anti-inflammatory role and can decrease immune reactions through the release of cytokines. Macrophages that encourage inflammation are called M1 macrophages, whereas those that decrease inflammation and encourage tissue repair are called M2 macrophages. This difference is reflected in their metabolism; M1 macrophages have the unique ability to metabolize arginine to the "killer" molecule nitric oxide, whereas M2 macrophages have the unique ability to metabolize arginine to the "repair" molecule ornithine. However, this dichotomy has been recently questioned as further complexity has been discovered. Human macrophages are about 21 micrometres (μm) in diameter and are produced by the differentiation of monocytes in tissues. They can be identified using flow cytometry or immunohistochemical staining by their specific expression of proteins such as CD14, CD40, CD11b, CD64, F4/80 (mice)/EMR1 (human), lysozyme M, MAC-1/MAC-3 and CD68. Macrophages were first discovered and named by Élie Metchnikoff, a zoologist of the Russian Empire, in 1884. Structure Types A majority of macrophages are stationed at strategic points where microbial invasion or accumulation of foreign particles is likely to occur. These cells together as a group are known as the mononuclear phagocyte system and were previously known as the reticuloendothelial system. Each type of macrophage, determined by its location, has a specific name. Investigations concerning Kupffer cells are hampered because in humans, Kupffer cells are only accessible for immunohistochemical analysis from biopsies or autopsies. They are difficult to isolate from rats and mice, and after purification only approximately 5 million cells can be obtained from one mouse. Macrophages can express paracrine functions within organs that are specific to the function of that organ. In the testis, for example, macrophages have been shown to be able to interact with Leydig cells by secreting 25-hydroxycholesterol, an oxysterol that can be converted to testosterone by neighbouring Leydig cells. Also, testicular macrophages may participate in creating an immune privileged environment in the testis, and in mediating infertility during inflammation of the testis. Cardiac resident macrophages participate in electrical conduction via gap junction communication with cardiac myocytes.
Macrophages can be classified on the basis of their fundamental function and activation. According to this grouping, there are classically activated (M1) macrophages, wound-healing macrophages (also known as alternatively-activated (M2) macrophages), and regulatory macrophages (Mregs). Development Macrophages that reside in adult healthy tissues either derive from circulating monocytes or are established before birth and then maintained during adult life independently of monocytes. By contrast, most of the macrophages that accumulate at diseased sites typically derive from circulating monocytes. Monocytes enter damaged tissue through the endothelium of blood vessels, a process known as leukocyte extravasation, and differentiate into macrophages. Monocytes are attracted to a damaged site by chemical substances through chemotaxis, triggered by a range of stimuli including damaged cells, pathogens and cytokines released by macrophages already at the site. At some sites such as the testis, macrophages have been shown to populate the organ through proliferation. Unlike short-lived neutrophils, macrophages survive longer in the body, up to several months. Function Phagocytosis Macrophages are professional phagocytes and are highly specialized in removal of dying or dead cells and cellular debris. This role is important in chronic inflammation, as the early stages of inflammation are dominated by neutrophils, which are ingested by macrophages if they come of age (see CD31 for a description of this process). The neutrophils are at first attracted to a site, where they perform their function and die, before they or their neutrophil extracellular traps are phagocytized by the macrophages. Once at the site, the first wave of neutrophils, as they age beyond the first 48 hours, stimulate the arrival of macrophages, which then ingest these aged neutrophils. The removal of dying cells is, to a greater extent, handled by fixed macrophages, which will stay at strategic locations such as the lungs, liver, neural tissue, bone, spleen and connective tissue, ingesting foreign materials such as pathogens and recruiting additional macrophages if needed. When a macrophage ingests a pathogen, the pathogen becomes trapped in a phagosome, which then fuses with a lysosome. Within the phagolysosome, enzymes and toxic peroxides digest the pathogen. However, some bacteria, such as Mycobacterium tuberculosis, have become resistant to these methods of digestion. Typhoidal Salmonellae induce their own phagocytosis by host macrophages in vivo, and inhibit digestion by lysosomal action, thereby using macrophages for their own replication and causing macrophage apoptosis. Macrophages can digest more than 100 bacteria before they finally die due to their own digestive compounds. Role in innate immune response When a pathogen invades, tissue resident macrophages are among the first cells to respond. Two of the main roles of the tissue resident macrophages are to phagocytose incoming antigen and to secrete proinflammatory cytokines that induce inflammation and recruit other immune cells to the site. Phagocytosis of pathogens Macrophages can internalize antigens through receptor-mediated phagocytosis. Macrophages have a wide variety of pattern recognition receptors (PRRs) that can recognize microbe-associated molecular patterns (MAMPs) from pathogens. Many PRRs, such as toll-like receptors (TLRs), scavenger receptors (SRs), C-type lectin receptors, among others, recognize pathogens for phagocytosis.
Macrophages can also recognize pathogens for phagocytosis indirectly through opsonins, which are molecules that attach to pathogens and mark them for phagocytosis. Opsonins can cause a stronger adhesion between the macrophage and pathogen during phagocytosis, hence opsonins tend to enhance macrophages' phagocytic activity. Both complement proteins and antibodies can bind to antigens and opsonize them. Macrophages have complement receptor 1 (CR1) and 3 (CR3) that recognize pathogen-bound complement proteins C3b and iC3b, respectively, as well as fragment crystallizable γ receptors (FcγRs) that recognize the fragment crystallizable (Fc) region of antigen-bound immunoglobulin G (IgG) antibodies. When phagocytosing and digesting pathogens, macrophages go through a respiratory burst, where more oxygen is consumed to supply the energy required for producing reactive oxygen species (ROS) and other antimicrobial molecules that digest the consumed pathogens. Chemical secretion Recognition of MAMPs by PRRs can activate tissue resident macrophages to secrete proinflammatory cytokines that recruit other immune cells. Among the PRRs, TLRs play a major role in signal transduction leading to cytokine production. The binding of MAMPs to TLR triggers a series of downstream events that eventually activates the transcription factor NF-κB and results in transcription of the genes for several proinflammatory cytokines, including IL-1β, IL-6, TNF-α, IL-12B, and type I interferons such as IFN-α and IFN-β. Systemically, IL-1β, IL-6, and TNF-α induce fever and initiate the acute phase response in which the liver secretes acute phase proteins. Locally, IL-1β and TNF-α cause vasodilation, where the gaps between blood vessel endothelial cells widen, and upregulation of cell surface adhesion molecules on endothelial cells to induce leukocyte extravasation. Additionally, activated macrophages have been found to have delayed synthesis of prostaglandins (PGs), which are important mediators of inflammation and pain. Among the PGs, anti-inflammatory PGE2 and pro-inflammatory PGD2 increase the most after activation, with PGE2 increasing expression of IL-10 and inhibiting production of TNFs via the COX-2 pathway. Neutrophils are among the first immune cells recruited by macrophages to exit the blood via extravasation and arrive at the infection site. Macrophages secrete many chemokines such as CXCL1, CXCL2, and CXCL8 (IL-8) that attract neutrophils to the site of infection. After neutrophils have finished phagocytosing and clearing the antigen at the end of the immune response, they undergo apoptosis, and macrophages are recruited from blood monocytes to help clear apoptotic debris. Macrophages also recruit other immune cells such as monocytes, dendritic cells, natural killer cells, basophils, eosinophils, and T cells through chemokines such as CCL2, CCL4, CCL5, CXCL8, CXCL9, CXCL10, and CXCL11. Along with dendritic cells, macrophages help activate natural killer (NK) cells through secretion of type I interferons (IFN-α and IFN-β) and IL-12. IL-12 acts with IL-18 to stimulate the production of the proinflammatory cytokine interferon gamma (IFN-γ) by NK cells, which serves as an important source of IFN-γ before the adaptive immune system is activated. IFN-γ enhances the innate immune response by inducing a more aggressive phenotype in macrophages, allowing macrophages to more efficiently kill pathogens. Some of the T cell chemoattractants secreted by macrophages include CCL5, CXCL9, CXCL10, and CXCL11.
Role in adaptive immunity Interactions with CD4+ T Helper Cells Macrophages are professional antigen-presenting cells (APCs), meaning they can present peptides from phagocytosed antigens on major histocompatibility complex (MHC) II molecules on their cell surface for T helper cells. Macrophages are not primary activators of naïve T helper cells that have never been previously activated, since tissue resident macrophages do not travel to the lymph nodes where naïve T helper cells reside. Although macrophages are also found in secondary lymphoid organs like the lymph nodes, they do not reside in T cell zones and are not effective at activating naïve T helper cells. The macrophages in lymphoid tissues are more involved in ingesting antigens and preventing them from entering the blood, as well as taking up debris from apoptotic lymphocytes. Therefore, macrophages interact mostly with previously activated T helper cells that have left the lymph node and arrived at the site of infection, or with tissue resident memory T cells. Macrophages supply both signals required for T helper cell activation: 1) macrophages present antigen peptide-bound MHC class II molecules to be recognized by the corresponding T cell receptor (TCR), and 2) recognition of pathogens by PRRs induces macrophages to upregulate the co-stimulatory molecules CD80 and CD86 (also known as B7), which bind to CD28 on T helper cells to supply the co-stimulatory signal. These interactions allow T helper cells to achieve full effector function and provide T helper cells with continued survival and differentiation signals, preventing them from undergoing apoptosis due to lack of TCR signaling. For example, IL-2 signaling in T cells upregulates the expression of the anti-apoptotic protein Bcl-2, but T cell production of IL-2 and the high-affinity IL-2 receptor IL-2RA both require continued signal from TCR recognition of MHC-bound antigen. Activation Macrophages can achieve different activation phenotypes through interactions with different subsets of T helper cells, such as TH1 and TH2. Although there is a broad spectrum of macrophage activation phenotypes, there are two major phenotypes that are commonly acknowledged. They are the classically activated macrophages, or M1 macrophages, and the alternatively activated macrophages, or M2 macrophages. M1 macrophages are proinflammatory, while M2 macrophages are mostly anti-inflammatory. Classical TH1 cells play an important role in classical macrophage activation as part of the type 1 immune response against intracellular pathogens (such as intracellular bacteria) that can survive and replicate inside host cells, especially those pathogens that replicate even after being phagocytosed by macrophages. After the TCR of TH1 cells recognizes specific antigen peptide-bound MHC class II molecules on macrophages, TH1 cells 1) secrete IFN-γ and 2) upregulate the expression of CD40 ligand (CD40L), which binds to CD40 on macrophages. These two signals activate the macrophages and enhance their ability to kill intracellular pathogens through increased production of antimicrobial molecules such as nitric oxide (NO) and superoxide (O2−). This enhancement of macrophages' antimicrobial ability by TH1 cells is known as classical macrophage activation, and the activated macrophages are known as classically activated macrophages, or M1 macrophages. The M1 macrophages in turn upregulate B7 molecules and antigen presentation through MHC class II molecules to provide signals that sustain T cell help.
The activation of TH1 cells and M1 macrophages is a positive feedback loop: IFN-γ from TH1 cells upregulates CD40 expression on macrophages; the interaction between CD40 on the macrophages and CD40L on T cells activates macrophages to secrete IL-12; and IL-12 promotes more IFN-γ secretion from TH1 cells. The initial contact between macrophage antigen-bound MHC II and the TCR serves as the contact point between the two cells toward which most of the IFN-γ secretion and CD40L on T cells is concentrated, so only macrophages directly interacting with TH1 cells are likely to be activated. In addition to activating M1 macrophages, TH1 cells express Fas ligand (FasL) and lymphotoxin beta (LT-β) to help kill chronically infected macrophages that can no longer kill pathogens. The killing of chronically infected macrophages releases pathogens to the extracellular space, where they can then be killed by other activated macrophages. TH1 cells also help recruit more monocytes, the precursors to macrophages, to the infection site: they secrete TNF-α and LT-α, which make blood vessels easier for monocytes to bind to and exit; they secrete CCL2 as a chemoattractant for monocytes; and the IL-3 and GM-CSF they release stimulate more monocyte production in the bone marrow. When intracellular pathogens cannot be eliminated, such as in the case of Mycobacterium tuberculosis, the pathogen is contained through the formation of a granuloma, an aggregation of infected macrophages surrounded by activated T cells. The macrophages bordering the activated lymphocytes often fuse to form multinucleated giant cells that appear to have increased antimicrobial ability due to their proximity to TH1 cells, but over time, the cells in the center start to die and form necrotic tissue. Alternative TH2 cells play an important role in alternative macrophage activation as part of the type 2 immune response against large extracellular pathogens like helminths. TH2 cells secrete IL-4 and IL-13, which activate macrophages to become M2 macrophages, also known as alternatively activated macrophages. M2 macrophages express arginase-1, an enzyme that converts arginine to ornithine and urea. Ornithine helps increase smooth muscle contraction to expel the worm and also participates in tissue and wound repair. Ornithine can be further metabolized to proline, which is essential for synthesizing collagen. M2 macrophages can also decrease inflammation by producing the IL-1 receptor antagonist (IL-1RA) and IL-1 receptors that do not lead to downstream inflammatory signaling (IL-1RII). Interactions with CD8+ cytotoxic T cells Another part of adaptive immunity activation involves stimulating CD8+ T cells via cross-presentation of antigen peptides on MHC class I molecules. Studies have shown that proinflammatory macrophages are capable of cross-presentation of antigens on MHC class I molecules, but whether macrophage cross-presentation plays a role in naïve or memory CD8+ T cell activation is still unclear. Interactions with B cells Macrophages have been shown to secrete the cytokines BAFF and APRIL, which are important for plasma cell isotype switching. APRIL and IL-6 secreted by macrophage precursors in the bone marrow help maintain the survival of plasma cells homed to the bone marrow. Subtypes There are several activated forms of macrophages. In spite of a spectrum of ways to activate macrophages, there are two main groups designated M1 and M2.
M1 macrophages: as mentioned earlier (previously referred to as classically activated macrophages), M1 "killer" macrophages are activated by LPS and IFN-gamma, and secrete high levels of IL-12 and low levels of IL-10. M1 macrophages have pro-inflammatory, bactericidal, and phagocytic functions. In contrast, the M2 "repair" designation (also referred to as alternatively activated macrophages) broadly refers to macrophages that function in constructive processes like wound healing and tissue repair, and those that turn off damaging immune system activation by producing anti-inflammatory cytokines like IL-10. M2 is the phenotype of resident tissue macrophages, and can be further elevated by IL-4. M2 macrophages produce high levels of IL-10 and TGF-beta and low levels of IL-12. Tumor-associated macrophages are mainly of the M2 phenotype, and seem to actively promote tumor growth. Macrophages exist in a variety of phenotypes which are determined by the role they play in wound maturation. Phenotypes can be predominantly separated into two major categories: M1 and M2. M1 macrophages are the dominating phenotype observed in the early stages of inflammation and are activated by key mediators including interferon-γ (IFN-γ), tumor necrosis factor (TNF), and damage-associated molecular patterns (DAMPs). These mediator molecules create a pro-inflammatory response that in turn produces pro-inflammatory cytokines like interleukin-6 and TNF. Unlike M1 macrophages, M2 macrophages mount an anti-inflammatory response upon exposure to interleukin-4 or interleukin-13. They also play a role in wound healing and are needed for revascularization and reepithelialization. M2 macrophages are divided into four major types based on their roles: M2a, M2b, M2c, and M2d. How M2 phenotypes are determined is still up for discussion, but studies have shown that their environment allows them to adjust to whichever phenotype is most appropriate to efficiently heal the wound. M2 macrophages are needed for vascular stability. They produce vascular endothelial growth factor-A and TGF-β1. There is a phenotype shift from M1 to M2 macrophages in acute wounds; however, this shift is impaired in chronic wounds. This dysregulation results in insufficient M2 macrophages and their corresponding growth factors that aid in wound repair. With a lack of these growth factors and anti-inflammatory cytokines, and an overabundance of pro-inflammatory cytokines from M1 macrophages, chronic wounds are unable to heal in a timely manner. Normally, after neutrophils ingest debris and pathogens, they undergo apoptosis and are removed. At this point, inflammation is not needed and M1 undergoes a switch to M2 (anti-inflammatory). However, dysregulation occurs when the M1 macrophages are unable to, or do not, phagocytose neutrophils that have undergone apoptosis, leading to increased macrophage migration and inflammation. Both M1 and M2 macrophages play a role in the promotion of atherosclerosis. M1 macrophages promote atherosclerosis through inflammation. M2 macrophages can remove cholesterol from blood vessels, but when the cholesterol is oxidized, the M2 macrophages become apoptotic foam cells, contributing to the atheromatous plaque of atherosclerosis. Role in muscle regeneration The first step to understanding the importance of macrophages in muscle repair, growth, and regeneration is that there are two "waves" of macrophages with the onset of muscle use intense enough to cause damage: subpopulations that do and do not directly influence muscle repair.
The initial wave is a phagocytic population that comes along during periods of increased muscle use sufficient to cause muscle membrane lysis and membrane inflammation; these macrophages can enter and degrade the contents of injured muscle fibers. These early-invading, phagocytic macrophages reach their highest concentration about 24 hours following the onset of some form of muscle cell injury or reloading. Their concentration rapidly declines after 48 hours. The second group is the non-phagocytic types that are distributed near regenerative fibers. These peak between two and four days and remain elevated for several days while muscle tissue is rebuilding. The first subpopulation has no direct benefit to repairing muscle, while the second non-phagocytic group does. It is thought that macrophages release soluble substances that influence the proliferation, differentiation, growth, repair, and regeneration of muscle, but at this time the factor that is produced to mediate these effects is unknown. It is known that macrophages' involvement in promoting tissue repair is not muscle specific; they accumulate in numerous tissues during the healing process phase following injury. Role in wound healing Macrophages are essential for wound healing. They replace polymorphonuclear neutrophils as the predominant cells in the wound by day two after injury. Attracted to the wound site by growth factors released by platelets and other cells, monocytes from the bloodstream enter the area through blood vessel walls. Numbers of monocytes in the wound peak one to one and a half days after the injury occurs. Once they are in the wound site, monocytes mature into macrophages. The spleen contains half the body's monocytes in reserve, ready to be deployed to injured tissue. The macrophages' main role is to phagocytize bacteria and damaged tissue, and they also debride damaged tissue by releasing proteases. Macrophages also secrete a number of factors such as growth factors and other cytokines, especially during the third and fourth post-wound days. These factors attract cells involved in the proliferation stage of healing to the area. Macrophages may also restrain the contraction phase. Macrophages are stimulated by the low oxygen content of their surroundings to produce factors that induce and speed angiogenesis, and they also stimulate cells that re-epithelialize the wound, create granulation tissue, and lay down a new extracellular matrix. By secreting these factors, macrophages contribute to pushing the wound healing process into the next phase. Role in limb regeneration Scientists have shown that, as well as clearing up material debris, macrophages are involved in typical limb regeneration in the salamander. They found that removing the macrophages from a salamander resulted in failure of limb regeneration and a scarring response. Role in iron homeostasis As described above, macrophages play a key role in removing dying or dead cells and cellular debris. Erythrocytes have an average lifespan of 120 days and so are constantly being destroyed by macrophages in the spleen and liver. Macrophages will also engulf macromolecules, and so play a key role in the pharmacokinetics of parenteral irons. The iron that is released from the haemoglobin is either stored internally in ferritin or is released into the circulation via ferroportin.
In cases where systemic iron levels are raised, or where inflammation is present, raised levels of hepcidin act on macrophage ferroportin channels, leading to iron remaining within the macrophages. Role in pigment retainment Melanophages are a subset of tissue-resident macrophages able to absorb pigment, either native to the organism or exogenous (such as tattoos), from extracellular space. In contrast to dendritic junctional melanocytes, which synthesize melanosomes and contain various stages of their development, the melanophages only accumulate phagocytosed melanin in lysosome-like phagosomes. This occurs repeatedly as the pigment from dead dermal macrophages is phagocytosed by their successors, preserving the tattoo in the same place. Role in tissue homeostasis Every tissue harbors its own specialized population of resident macrophages, which maintain reciprocal interconnections with the stroma and functional tissue. These resident macrophages are sessile (non-migratory), provide essential growth factors to support the physiological function of the tissue (e.g. macrophage-neuronal crosstalk in the gut), and can actively protect the tissue from inflammatory damage. Nerve-associated macrophages Nerve-associated macrophages, or NAMs, are those tissue-resident macrophages that are associated with nerves. Some of them are known to have an elongated morphology of up to 200 μm. Clinical significance Due to their role in phagocytosis, macrophages are involved in many diseases of the immune system. For example, they participate in the formation of granulomas, inflammatory lesions that may be caused by a large number of diseases. Some disorders, mostly rare ones, of ineffective phagocytosis and macrophage function have been described. As a host for intracellular pathogens In their role as a phagocytic immune cell, macrophages are responsible for engulfing pathogens to destroy them. Some pathogens subvert this process and instead live inside the macrophage. This provides an environment in which the pathogen is hidden from the immune system and allows it to replicate. Diseases with this type of behaviour include tuberculosis (caused by Mycobacterium tuberculosis) and leishmaniasis (caused by Leishmania species). In order to minimize the possibility of becoming host to intracellular bacteria, macrophages have evolved defense mechanisms such as induction of nitric oxide and reactive oxygen intermediates, which are toxic to microbes. Macrophages have also evolved the ability to restrict the microbe's nutrient supply and induce autophagy. Tuberculosis Once engulfed by a macrophage, the causative agent of tuberculosis, Mycobacterium tuberculosis, avoids cellular defenses and uses the cell to replicate. Recent evidence suggests that in response to pulmonary infection with Mycobacterium tuberculosis, peripheral macrophages mature into the M1 phenotype. The macrophage M1 phenotype is characterized by increased secretion of pro-inflammatory cytokines (IL-1β, TNF-α, and IL-6) and increased glycolytic activities essential for clearance of infection. Leishmaniasis Upon phagocytosis by a macrophage, the Leishmania parasite finds itself in a phagocytic vacuole. Under normal circumstances, this phagocytic vacuole would develop into a lysosome and its contents would be digested. Leishmania alter this process and avoid being destroyed; instead, they make a home inside the vacuole.
Chikungunya Infection of macrophages in joints is associated with local inflammation during and after the acute phase of Chikungunya (caused by CHIKV, the Chikungunya virus). Others Adenovirus (the most common cause of pink eye) can remain latent in a host macrophage, with continued viral shedding 6–18 months after initial infection. Brucella spp. can remain latent in a macrophage via inhibition of phagosome–lysosome fusion; they cause brucellosis (undulant fever). Legionella pneumophila, the causative agent of Legionnaires' disease, also establishes residence within macrophages. Heart disease Macrophages are the predominant cells involved in creating the progressive plaque lesions of atherosclerosis. Focal recruitment of macrophages occurs after the onset of acute myocardial infarction. These macrophages function to remove debris and apoptotic cells and to prepare for tissue regeneration. Macrophages protect against ischemia-induced ventricular tachycardia in hypokalemic mice. HIV infection Macrophages also play a role in human immunodeficiency virus (HIV) infection. Like T cells, macrophages can be infected with HIV, and even become a reservoir of ongoing virus replication throughout the body. HIV can enter the macrophage through binding of gp120 to CD4 and a second membrane receptor, CCR5 (a chemokine receptor). Both circulating monocytes and macrophages serve as a reservoir for the virus. Macrophages are better able to resist infection by HIV-1 than CD4+ T cells, although susceptibility to HIV infection differs among macrophage subtypes. Cancer Macrophages can contribute to tumor growth and progression by promoting tumor cell proliferation and invasion, fostering tumor angiogenesis and suppressing antitumor immune cells. Inflammatory compounds, such as tumor necrosis factor (TNF)-alpha released by macrophages, activate the gene switch nuclear factor-kappa B. NF-κB then enters the nucleus of a tumor cell and turns on production of proteins that stop apoptosis and promote cell proliferation and inflammation. Moreover, macrophages serve as a source for many pro-angiogenic factors, including vascular endothelial growth factor (VEGF), tumor necrosis factor-alpha (TNF-alpha), macrophage colony-stimulating factor (M-CSF/CSF1) and IL-1 and IL-6, contributing further to tumor growth. Macrophages have been shown to infiltrate a number of tumors. Their number correlates with poor prognosis in certain cancers, including cancers of the breast, cervix, bladder, brain and prostate. Some tumors can also produce factors, including M-CSF/CSF1, MCP-1/CCL2 and angiotensin II, that trigger the amplification and mobilization of macrophages in tumors. Additionally, subcapsular sinus macrophages in tumor-draining lymph nodes can suppress cancer progression by containing the spread of tumor-derived materials. Cancer therapy Experimental studies indicate that macrophages can affect all therapeutic modalities, including surgery, chemotherapy, radiotherapy, immunotherapy and targeted therapy. Macrophages can influence treatment outcomes both positively and negatively. Macrophages can be protective in different ways: they can remove dead tumor cells (in a process called phagocytosis) following treatments that kill these cells; they can serve as drug depots for some anticancer drugs; and they can be activated by some therapies to promote antitumor immunity. Macrophages can also be deleterious in several ways: for example, they can suppress various chemotherapies, radiotherapies and immunotherapies.
Because macrophages can regulate tumor progression, therapeutic strategies to reduce the number of these cells, or to manipulate their phenotypes, are currently being tested in cancer patients. However, macrophages are also involved in antibody-dependent cellular cytotoxicity (ADCC), and this mechanism has been proposed to be important for certain cancer immunotherapy antibodies. Similarly, studies have identified macrophages genetically engineered to express chimeric antigen receptors as a promising therapeutic approach to lowering tumor burden. Obesity It has been observed that an increased number of pro-inflammatory macrophages within obese adipose tissue contributes to obesity complications, including insulin resistance and type 2 diabetes. The modulation of the inflammatory state of adipose tissue macrophages has therefore been considered a possible therapeutic target to treat obesity-related diseases. Although adipose tissue macrophages are subject to anti-inflammatory homeostatic control by sympathetic innervation, experiments using ADRB2 gene knockout mice indicate that this effect is indirectly exerted through the modulation of adipocyte function, and not through direct beta-2 adrenergic receptor activation, suggesting that adrenergic stimulation of macrophages may be insufficient to impact adipose tissue inflammation or function in obesity. Within the fat (adipose) tissue of CCR2-deficient mice, there is an increased number of eosinophils, greater alternative macrophage activation, and a propensity towards type 2 cytokine expression. Furthermore, this effect was exaggerated when the mice became obese from a high-fat diet. This is partially caused by a phenotype switch of macrophages induced by necrosis of fat cells (adipocytes). In obese individuals, some adipocytes burst and undergo necrotic death, which causes the resident M2 macrophages to switch to the M1 phenotype. This is one of the causes of the low-grade systemic chronic inflammatory state associated with obesity. Intestinal macrophages Though very similar in structure to tissue macrophages, intestinal macrophages have evolved specific characteristics and functions given their natural environment, which is in the digestive tract. Macrophages and intestinal macrophages have high plasticity, causing their phenotype to be altered by their environments. Like macrophages, intestinal macrophages are differentiated monocytes, though intestinal macrophages have to coexist with the microbiome in the intestines. This is a challenge, considering the bacteria found in the gut are not recognized as "self" and could be potential targets for phagocytosis by the macrophage. To prevent the destruction of the gut bacteria, intestinal macrophages have developed key differences compared to other macrophages. Primarily, intestinal macrophages do not induce inflammatory responses. Whereas tissue macrophages release various inflammatory cytokines, such as IL-1, IL-6 and TNF-α, intestinal macrophages do not produce or secrete inflammatory cytokines. This change is directly caused by the intestinal macrophages' environment. Surrounding intestinal epithelial cells release TGF-β, which induces the change from proinflammatory macrophage to noninflammatory macrophage. Even though the inflammatory response is downregulated in intestinal macrophages, phagocytosis is still carried out. There is no drop-off in phagocytosis efficiency, as intestinal macrophages are able to effectively phagocytize bacteria such as S. typhimurium and E.
coli, but intestinal macrophages still do not release cytokines, even after phagocytosis. Also, intestinal macrophages do not express lipopolysaccharide (LPS), IgA, or IgG receptors. The lack of LPS receptors is important for the gut, as the intestinal macrophages do not detect the microbe-associated molecular patterns (MAMPs/PAMPs) of the intestinal microbiome. Nor do they express IL-2 and IL-3 growth factor receptors. Role in disease Intestinal macrophages have been shown to play a role in inflammatory bowel disease (IBD), such as Crohn's disease (CD) and ulcerative colitis (UC). In a healthy gut, intestinal macrophages limit the inflammatory response in the gut, but in a disease state, intestinal macrophage numbers and diversity are altered. This leads to inflammation of the gut and the disease symptoms of IBD. Intestinal macrophages are critical in maintaining gut homeostasis. The presence of inflammation or pathogens alters this homeostasis, and concurrently alters the intestinal macrophages. It has yet to be determined whether this alteration occurs through the recruitment of new monocytes or through changes in the intestinal macrophages already present. Additionally, a new study reveals that macrophages limit iron access to bacteria by releasing extracellular vesicles, improving sepsis outcomes. History Macrophages were first discovered late in the 19th century by the zoologist Élie Metchnikoff. Metchnikoff revolutionized the study of macrophages by combining philosophical insight with the evolutionary study of life. During the 1960s, van Furth proposed that all tissue macrophages in adults derive from circulating blood monocytes. More recent publications, however, have shown that many resident tissue macrophage populations are independent of blood monocytes, being established during the embryonic stage of development. In the 21st century, these ideas about the origin of tissue macrophages were brought together, suggesting that physiologically complex organisms can form macrophages independently, by mechanisms that do not depend on blood monocytes. See also Bacteriophage Dendritic cell Histiocyte List of distinct cell types in the adult human body References Phagocytes Cell biology Immune system Human cells Articles containing video clips Connective tissue cells Lymphatic system
Macrophage
[ "Biology" ]
8,375
[ "Immune system", "Organ systems", "Cell biology" ]
169,283
https://en.wikipedia.org/wiki/CHSH%20inequality
In physics, the Clauser–Horne–Shimony–Holt (CHSH) inequality can be used in the proof of Bell's theorem, which states that certain consequences of entanglement in quantum mechanics cannot be reproduced by local hidden-variable theories. Experimental verification of the inequality being violated is seen as confirmation that nature cannot be described by such theories. CHSH stands for John Clauser, Michael Horne, Abner Shimony, and Richard Holt, who described it in a much-cited paper published in 1969. They derived the CHSH inequality, which, as with John Stewart Bell's original inequality, is a constraint on the statistical occurrence of "coincidences" in a Bell test that is necessarily true if an underlying local hidden-variable theory exists. In practice, the inequality is routinely violated by modern experiments in quantum mechanics. Statement The usual form of the CHSH inequality is |S| ≤ 2, with S = E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′), where a and a′ are detector settings on side A, and b and b′ on side B, the four combinations being tested in separate subexperiments. The terms E(a, b) etc. are the quantum correlations of the particle pairs, where the quantum correlation is defined to be the expectation value of the product of the "outcomes" of the experiment, i.e. the statistical average of A·B, where A and B are the separate outcomes, using the coding +1 for the '+' channel and −1 for the '−' channel. Clauser et al.'s 1969 derivation was oriented towards the use of "two-channel" detectors, and indeed it is for these that it is generally used, but under their method the only possible outcomes were +1 and −1. In order to adapt to real situations, which at the time meant the use of polarised light and single-channel polarisers, they had to interpret '−' as meaning "non-detection in the '+' channel", i.e. either '−' or nothing. They did not in the original article discuss how the two-channel inequality could be applied in real experiments with real imperfect detectors, though it was later proved that the inequality itself was equally valid. The occurrence of zero outcomes, though, means it is no longer so obvious how the values of E are to be estimated from the experimental data. The mathematical formalism of quantum mechanics predicts that the value of |S| exceeds 2 for systems prepared in suitable entangled states and the appropriate choice of measurement settings (see below). The maximum violation predicted by quantum mechanics is 2√2 (Tsirelson's bound) and can be obtained from a maximally entangled Bell state. Experiments Many Bell tests conducted subsequent to Alain Aspect's second experiment in 1982 have used the CHSH inequality, estimating the terms E(a, b) from the observed coincidence counts (as described below) and assuming fair sampling. Some dramatic violations of the inequality have been reported. In practice most actual experiments have used light rather than the electrons that Bell originally had in mind. The property of interest is, in the best known experiments, the polarisation direction, though other properties can be used. In a typical optical experiment, coincidences (simultaneous detections) are recorded, the results being categorised as '++', '+−', '−+' or '−−' and corresponding counts accumulated. Four separate subexperiments are conducted, corresponding to the four terms E(a, b) in the test statistic S. The settings a, a′, b and b′ are generally in practice chosen to be 0°, 45°, 22.5° and 67.5° respectively—the "Bell test angles"—these being the ones for which the quantum mechanical formula gives the greatest violation of the inequality. For each selected pair of settings, the numbers of coincidences in each category (N++, N−−, N+− and N−+) are recorded.
The experimental estimate for E(a, b) is then calculated as:

E(a, b) = (N++ − N+− − N−+ + N−−) / (N++ + N+− + N−+ + N−−)

Once all four E's have been estimated, an experimental estimate of S can be found. If it is numerically greater than 2 it has infringed the CHSH inequality and the experiment is declared to have supported the quantum mechanics prediction and ruled out all local hidden-variable theories. The CHSH paper lists many preconditions (or "reasonable and/or presumable assumptions") to derive the simplified theorem and formula. For example, for the method to be valid, it has to be assumed that the detected pairs are a fair sample of those emitted. In actual experiments, detectors are never 100% efficient, so that only a sample of the emitted pairs are detected. A subtle, related requirement is that the hidden variables do not influence or determine detection probability in a way that would lead to different samples at each arm of the experiment. The CHSH inequality has been violated with photon pairs, beryllium ion pairs, ytterbium ion pairs, rubidium atom pairs, whole rubidium-atom cloud pairs, nitrogen vacancies in diamonds, and Josephson phase qubits. Derivation The original 1969 derivation will not be given here since it is not easy to follow and involves the assumption that the outcomes are all +1 or −1, never zero. Bell's 1971 derivation is more general. He effectively assumes the "Objective Local Theory" later used by Clauser and Horne. It is assumed that any hidden variables associated with the detectors themselves are independent on the two sides and can be averaged out from the start. Another derivation of interest is given in Clauser and Horne's 1974 paper, in which they start from the CH74 inequality. Bell's 1971 derivation The following is based on page 37 of Bell's Speakable and Unspeakable, the main change being to use the symbol 'E' instead of 'P' for the expected value of the quantum correlation. This avoids any suggestion that the quantum correlation is itself a probability. We start with the standard assumption of independence of the two sides, enabling us to obtain the joint probabilities of pairs of outcomes by multiplying the separate probabilities, for any selected value of the "hidden variable" λ. λ is assumed to be drawn from a fixed distribution of possible states of the source, the probability of the source being in the state λ for any particular trial being given by the density function ρ(λ), the integral of which over the complete hidden variable space is 1. We thus assume we can write:

E(a, b) = ∫ A(a, λ) B(b, λ) ρ(λ) dλ

where A and B are the outcomes (averaged over any instrument hidden variables). Since the possible values of A and B are −1, 0 and +1, it follows that:

|A(a, λ)| ≤ 1 and |B(b, λ)| ≤ 1

Then, if a, a′, b and b′ are alternative settings for the detectors,

E(a, b) − E(a, b′) = ∫ [A(a, λ) B(b, λ) − A(a, λ) B(b′, λ)] ρ(λ) dλ
= ∫ A(a, λ) B(b, λ) [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ − ∫ A(a, λ) B(b′, λ) [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ

Taking absolute values of both sides, and applying the triangle inequality to the right-hand side, we obtain

|E(a, b) − E(a, b′)| ≤ |∫ A(a, λ) B(b, λ) [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ| + |∫ A(a, λ) B(b′, λ) [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ|

We use the fact that [1 ± A(a′, λ) B(b′, λ)] ρ(λ) and [1 ± A(a′, λ) B(b, λ)] ρ(λ) are both non-negative to rewrite the right-hand side of this as

∫ |A(a, λ) B(b, λ)| [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ + ∫ |A(a, λ) B(b′, λ)| [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ

By |A| ≤ 1 and |B| ≤ 1, this must be less than or equal to

∫ [1 ± A(a′, λ) B(b′, λ)] ρ(λ) dλ + ∫ [1 ± A(a′, λ) B(b, λ)] ρ(λ) dλ

which, using the fact that the integral of ρ(λ) is 1, is equal to

2 ± [E(a′, b′) + E(a′, b)]

Putting this together with the left-hand side, we have:

|E(a, b) − E(a, b′)| ≤ 2 ± [E(a′, b′) + E(a′, b)]

which means that the left-hand side is less than or equal to both 2 + [E(a′, b′) + E(a′, b)] and 2 − [E(a′, b′) + E(a′, b)]. That is:

|E(a, b) − E(a, b′)| ≤ 2 − |E(a′, b′) + E(a′, b)|

from which we obtain

2 ≥ |E(a, b) − E(a, b′)| + |E(a′, b′) + E(a′, b)| ≥ |E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′)| = |S|

(by the triangle inequality again), which is the CHSH inequality. Derivation from Clauser and Horne's 1974 inequality In their 1974 paper, Clauser and Horne show that the CHSH inequality can be derived from the CH74 one.
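Before following the CH74 route in detail, the bound just derived is easy to check numerically. The Python sketch below is illustrative rather than part of either paper: it enumerates every deterministic local assignment of outcomes to the four settings, confirming that |S| never exceeds 2, and then evaluates the quantum prediction at the Bell test angles, assuming the usual polarisation correlation E(a, b) = −cos 2(a − b) for a singlet-type state:

```python
import itertools
import numpy as np

# Each deterministic local model fixes an outcome of +1 or -1 for every
# setting on each side, independently of the far side's setting.
best = max(
    abs(Aa * Bb - Aa * Bbp + Aap * Bb + Aap * Bbp)
    for Aa, Aap, Bb, Bbp in itertools.product([+1, -1], repeat=4)
)
print(best)  # 2: the CHSH bound for deterministic local models

# Quantum prediction for a singlet-type polarisation state at the Bell
# test angles a = 0, a' = 45 deg, b = 22.5 deg, b' = 67.5 deg.
E = lambda u, v: -np.cos(2 * (u - v))
a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
print(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))  # about -2.828, i.e. |S| = 2*sqrt(2)
```

Since any local hidden-variable model is a mixture (an integral over λ) of such deterministic assignments, averaging cannot push |S| above 2, exactly as the integral argument above shows.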
As they tell us, in a two-channel experiment the CH74 single-channel test is still applicable and provides four sets of inequalities governing the probabilities p of coincidences. Working from the inhomogeneous version of the inequality, we can write:

−1 ≤ pjk(a, b) − pjk(a, b′) + pjk(a′, b) + pjk(a′, b′) − pj(a′) − pk(b) ≤ 0

where j and k are each '+' or '−', indicating which detectors are being considered. To obtain the CHSH test statistic S, all that is needed is to multiply the inequalities for which j is different from k by −1 and add these to the inequalities for which j and k are the same. Optimal violation by a general quantum state In experimental practice, the two particles are not an ideal EPR pair. There is a necessary and sufficient condition for a two-qubit density matrix ρ to violate the CHSH inequality, expressed in terms of the maximum attainable polynomial Smax defined below. This is important in entanglement-based quantum key distribution, where the secret key rate depends on the degree of measurement correlations. Let us introduce a 3×3 real matrix T with elements tij = Tr[ρ (σi ⊗ σj)], where σ1, σ2, σ3 are the Pauli matrices. Then we find the eigenvalues λi and eigenvectors ei of the real symmetric matrix TᵀT, with the indices sorted so that λ1 ≥ λ2 ≥ λ3. Then, the maximal CHSH polynomial is determined by the two greatest eigenvalues,

Smax(ρ) = 2√(λ1 + λ2)

Optimal measurement bases There exists an optimal configuration of the measurement bases a, a′, b, b′ for a given ρ that yields Smax, with at least one free parameter. A projective measurement that yields either +1 or −1 for two orthogonal states respectively can be expressed by an operator of the form a·σ, parametrized by a real unit vector a and the Pauli vector σ. Then, the expected correlation for measurement directions a and b is

E(a, b) = aᵀ T b

The numerical values of the basis vectors, when found, can be directly translated to the configuration of the projective measurements. The optimal set of bases for the state ρ is found by taking the two greatest eigenvalues and the corresponding eigenvectors of TᵀT and constructing from them auxiliary unit vectors, with one free parameter remaining; an acute angle computed from the two eigenvalues then fixes the bases that maximize S. In entanglement-based quantum key distribution, there is another measurement basis used to communicate the secret key (assuming Alice uses side A). The bases then need to minimize the quantum bit error rate Q, which is the probability of obtaining different measurement outcomes (+1 on one particle and −1 on the other), while the CHSH polynomial S is maximized as well; together these requirements constrain the choice of measurement bases. CHSH game The CHSH game is a thought experiment involving two parties separated at a great distance (far enough to preclude classical communication at the speed of light), each of whom has access to one half of an entangled two-qubit pair. Analysis of this game shows that no classical local hidden-variable theory can explain the correlations that can result from entanglement. Since this game is indeed physically realizable, this gives strong evidence that classical physics is fundamentally incapable of explaining certain quantum phenomena, at least in a "local" fashion. In the CHSH game, there are two cooperating players, Alice and Bob, and a referee, Charlie. These agents will be abbreviated A, B and C respectively. At the start of the game, Charlie chooses bits x, y ∈ {0, 1} uniformly at random, and then sends x to Alice and y to Bob. Alice and Bob must then each respond to Charlie with bits a and b respectively.
Now, once Alice and Bob send their responses back to Charlie, Charlie tests if a ⊕ b = x ∧ y, where ∧ denotes a logical AND operation and ⊕ denotes a logical XOR operation. If this equality holds, then Alice and Bob win, and if not then they lose. It is also required that Alice and Bob's responses can only depend on the bits they see: so Alice's response a depends only on x, and similarly for Bob. This means that Alice and Bob are forbidden from directly communicating with each other about the values of the bits sent to them by Charlie. However, Alice and Bob are allowed to decide on a common strategy before the game begins. In the following sections, it is shown that if Alice and Bob use only classical strategies involving their local information (and potentially some random coin tosses), it is impossible for them to win with a probability higher than 75%. However, if Alice and Bob are allowed to share a single entangled qubit pair, then there exists a strategy which allows Alice and Bob to succeed with a probability of ~85%. Optimal classical strategy We first establish that any deterministic classical strategy has success probability at most 75% (where the probability is taken over Charlie's uniformly random choice of x, y). By a deterministic strategy, we mean a pair of functions fA and fB, where fA determines Alice's response a = fA(x) as a function of the message she receives from Charlie, and fB determines Bob's response b = fB(y) based on what he receives. To prove that any deterministic strategy fails at least 25% of the time, we can simply consider all possible pairs of strategies for Alice and Bob, of which there are at most 16 (for each party, there are 4 functions from one bit to one bit). It can be verified that for each of those strategies there is always at least one out of the four possible input pairs (x, y) which makes the strategy fail. For example, in the strategy where both players always answer 0, we have that Alice and Bob win in all cases except for when x = y = 1, so using this strategy their win probability is exactly 75%. Now, consider the case of randomized classical strategies, where Alice and Bob have access to correlated random numbers. These can be produced by jointly flipping a coin several times before the game has started, while Alice and Bob are still allowed to communicate. The output they give at each round is then a function of both Charlie's message and the outcome of the corresponding coin flip. Such a strategy can be viewed as a probability distribution over deterministic strategies, and thus its success probability is a weighted sum over the success probabilities of the deterministic strategies. But since every deterministic strategy has a success probability of at most 75%, this weighted sum cannot exceed 75% either. Optimal quantum strategy Now, imagine that Alice and Bob share the two-qubit entangled state (|00⟩ + |11⟩)/√2, commonly referred to as an EPR pair. Alice and Bob will use this entangled pair in their strategy as described below. The optimality of this strategy then follows from Tsirelson's bound. Upon receiving the bit x from Charlie, Alice will measure her qubit in the computational basis {|0⟩, |1⟩} if x = 0, or in the Hadamard basis {|+⟩, |−⟩} if x = 1. She will then label the output a = 0 if the first state in the measurement basis is observed, and a = 1 otherwise. Bob also uses the bit y received from Charlie to decide which measurement to perform: if y = 0 he measures in the basis obtained by rotating the computational basis by π/8, while if y = 1 he measures in the basis obtained by rotating it by −π/8, labelling his output b in the same way.
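The success probability of this strategy can be computed directly. The short Python sketch below is an illustration under the basis choices just described (real rotations of the computational basis; the helper names are ours):

```python
import numpy as np

# Rows of basis(t) are the measurement basis vectors rotated by angle t:
# cos(t)|0> + sin(t)|1> and -sin(t)|0> + cos(t)|1>.
def basis(t):
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

epr = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

alice = {0: 0.0, 1: np.pi / 4}       # computational basis, Hadamard basis
bob = {0: np.pi / 8, 1: -np.pi / 8}  # Bob's two rotated bases

def win_probability(x, y):
    A, B = basis(alice[x]), basis(bob[y])
    total = 0.0
    for a in (0, 1):
        for b in (0, 1):
            amplitude = np.kron(A[a], B[b]) @ epr  # amplitude of outcome (a, b)
            if (a ^ b) == (x & y):                 # winning condition a XOR b = x AND y
                total += amplitude ** 2
    return total

print(np.mean([win_probability(x, y) for x in (0, 1) for y in (0, 1)]))
print(np.cos(np.pi / 8) ** 2)  # both print ~0.8536
```

The printed average equals cos²(π/8) ≈ 0.854, the value analyzed next.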
These measurement states, arranged in the order that puts each state between the two most similar, could correspond, for example, to photons polarized at angles of 0°, 22.5°, 45°, ... 180° (with 180° and 0° being the same state).

To analyze the success probability, it suffices to analyze the probability that they output a winning value pair on each of the four possible inputs (x, y), and then take the average. We analyze the case where x = y = 0 here: In this case the winning response pairs are a = b = 0 and a = b = 1. On input (x, y) = (0, 0), we know that Alice will measure in the basis {|0⟩, |1⟩}, and Bob will measure in the basis {cos(π/8)|0⟩ + sin(π/8)|1⟩, −sin(π/8)|0⟩ + cos(π/8)|1⟩}. Then the probability that they both output 0 is the same as the probability that their measurements yield |0⟩ and cos(π/8)|0⟩ + sin(π/8)|1⟩ respectively, so precisely (1/2)cos²(π/8). Similarly, the probability that they both output 1 is exactly (1/2)cos²(π/8). So the probability that either of these successful outcomes happens is cos²(π/8).

In the case of the 3 other possible input pairs, essentially identical analysis shows that Alice and Bob will have the same win probability of cos²(π/8), so overall the average win probability for a randomly chosen input is cos²(π/8) ≈ 0.854. Since cos²(π/8) > 3/4, this is strictly better than what was possible in the classical case.

Modeling general quantum strategies An arbitrary quantum strategy for the CHSH game can be modeled as a triple S = (|ψ⟩, A, B), where |ψ⟩ is a bipartite state in C^d ⊗ C^d for some dimension d; A₀ and A₁ are Alice's observables, each corresponding to her receiving x = 0 or x = 1 from the referee; and B₀ and B₁ are Bob's observables, each corresponding to his receiving y = 0 or y = 1 from the referee. The optimal quantum strategy described above can be recast in this notation as follows: |ψ⟩ is the EPR pair (|00⟩ + |11⟩)/√2, the observable A₀ = Z (corresponding to Alice measuring in the computational basis), the observable A₁ = X (corresponding to Alice measuring in the Hadamard basis), where Z and X are Pauli matrices. The observables are B₀ = (Z + X)/√2 and B₁ = (Z − X)/√2 (corresponding to each of Bob's choices of basis to measure in). We will denote the success probability of a strategy S in the CHSH game by ω(S), and we define the bias of the strategy as β(S) = 2ω(S) − 1, which is the difference between the winning and losing probabilities of S. In particular, we have

β(S) = (1/4) ⟨ψ| (A₀ ⊗ B₀ + A₀ ⊗ B₁ + A₁ ⊗ B₀ − A₁ ⊗ B₁) |ψ⟩.

The bias of the quantum strategy described above is √2/2.
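Both the 75% classical bound and the cos²(π/8) quantum success probability are easy to verify numerically. A minimal sketch (the basis helper and the strategy encodings are illustrative assumptions, not from the original text):

```python
import itertools
import numpy as np

# Classical bound: brute-force all 16 deterministic strategy pairs.
# Each party's strategy is one of the 4 functions {0,1} -> {0,1}.
funcs = [lambda v, t=t: t[v] for t in itertools.product([0, 1], repeat=2)]
best = max(
    sum((fa(x) ^ fb(y)) == (x & y) for x in (0, 1) for y in (0, 1))
    for fa in funcs for fb in funcs
) / 4
print(best)  # 0.75

# Quantum strategy: EPR pair; Alice measures at angles 0 or pi/4,
# Bob at angles +pi/8 or -pi/8, as described above.
def basis(theta):
    """Rows are the two orthonormal measurement states at angle theta."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

epr = np.array([1, 0, 0, 1]) / np.sqrt(2)       # (|00> + |11>)/sqrt(2)
alice = {0: basis(0), 1: basis(np.pi / 4)}      # computational / Hadamard
bob = {0: basis(np.pi / 8), 1: basis(-np.pi / 8)}

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        for a in (0, 1):
            for b in (0, 1):
                amp = np.kron(alice[x][a], bob[y][b]) @ epr
                if (a ^ b) == (x & y):
                    win += abs(amp) ** 2 / 4    # inputs are uniform
print(win, np.cos(np.pi / 8) ** 2)              # both ~0.8536
```

The projection probabilities come out as |⟨a ⊗ b|Φ⁺⟩|², and the loop reproduces the analysis of the four input pairs given above.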
Tsirelson's inequality and CHSH rigidity Tsirelson's inequality, discovered by Boris Tsirelson in 1980, states that for any quantum strategy S for the CHSH game, the bias satisfies β(S) ≤ √2/2. Equivalently, it states that the success probability satisfies ω(S) ≤ 1/2 + √2/4 = cos²(π/8) for any quantum strategy S for the CHSH game. In particular, this implies the optimality of the quantum strategy described above for the CHSH game.

Tsirelson's inequality establishes that the maximum success probability of any quantum strategy is cos²(π/8), and we saw that this maximum success probability is achieved by the quantum strategy described above. In fact, any quantum strategy that achieves this maximum success probability must be isomorphic (in a precise sense) to the canonical quantum strategy described above; this property is called the rigidity of the CHSH game, first attributed to Summers and Werner. Informally, rigidity means that given an arbitrary optimal strategy for the CHSH game, there exists a local change of basis (given by local isometries for Alice and Bob) such that their shared state factors into the tensor product of an EPR pair and an additional auxiliary state. Furthermore, Alice and Bob's observables behave, up to unitary transformations, like the Z and X observables on their respective qubits from the EPR pair. An approximate or quantitative version of CHSH rigidity was obtained by McKague et al., who proved that if a quantum strategy S satisfies ω(S) ≥ cos²(π/8) − ε for some ε > 0, then there exist local isometries under which the strategy is δ(ε)-close to the canonical quantum strategy, where δ(ε) → 0 as ε → 0. Representation-theoretic proofs of approximate rigidity are also known.

Applications Note that the CHSH game can be viewed as a test for quantum entanglement and quantum measurements, and that the rigidity of the CHSH game lets us test for a specific entanglement as well as specific quantum measurements. This in turn can be leveraged to test or even verify entire quantum computations; in particular, the rigidity of CHSH games has been harnessed to construct protocols for verifiable quantum delegation, certifiable randomness expansion, and device-independent cryptography.

See also Correlation does not imply causation Leggett–Garg inequality Quantum game theory

References

External links Bell inequality - Virtual Lab by Quantum Flytrap, an interactive simulation of the CHSH Bell inequality violation

Quantum measurement Inequalities
CHSH inequality
[ "Physics", "Mathematics" ]
3,840
[ "Mathematical theorems", "Quantum game theory", "Quantum mechanics", "Binary relations", "Game theory", "Quantum measurement", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems" ]
169,311
https://en.wikipedia.org/wiki/Dragan%20Maru%C5%A1i%C4%8D
Dragan Marušič (born 1953, Koper, Slovenia) is a Slovene mathematician. Marušič obtained his BSc in technical mathematics from the University of Ljubljana in 1976, and his PhD from the University of Reading in 1981 under the supervision of Crispin Nash-Williams. Marušič has published extensively, and has supervised seven PhD students (as of 2013). He served as the third rector of the University of Primorska from 2011 to 2019, a university he lobbied to have established in his home town of Koper. His research focuses on topics in algebraic graph theory, particularly the symmetry of graphs and the action of finite groups on combinatorial objects. He is regarded as the founder of the Slovenian school of research in algebraic graph theory and permutation groups. Education and career From 1968 to 1972 Marušič attended gymnasium in Koper. He studied undergraduate mathematics at the University of Ljubljana, graduating in 1976. He completed his PhD in 1981 in England, at the University of Reading under the supervision of Crispin Nash-Williams. After completing a post-doctoral fellowship at the University of Reading in 1983, Marušič spent a year teaching high school mathematics in Koper. He worked for one year at the University of Minnesota Duluth as an assistant professor, and then spent three years at the University of California, Santa Cruz, from 1985 to 1988. In 1988, he returned to Slovenia to work at the University of Ljubljana, where he rose quickly through the ranks, becoming a full professor in 1994. He also held the post of vice-rector of student affairs there from 1989 to 1991. In 1991-92 he spent a year as a Fulbright scholar at the University of California, Santa Cruz. Marušič maintains his post at the University of Ljubljana, although he has also held an appointment at the University of Primorska since 2004, shortly after its founding. He has increasingly devoted his time to the newer university, where he established the Faculty of Mathematics, Natural Sciences, and Information Technologies (UP FAMNIT). He served as the dean of that faculty from 2007 to 2011. He was elected in 2011 as the third rector of the University of Primorska, a position which he held until 2019. Marušič has supervised seven PhD students, and has supervised or co-supervised six post-doctoral fellows, in addition to numerous master's and honours students. He is one of the two founding editors and editors-in-chief (with Tomaž Pisanski) of the journal Ars Mathematica Contemporanea. Achievements and honours Marušič is regarded as the founder of the Slovenian school of research in algebraic graph theory and permutation groups. In 2002 he received the Zois Award, the highest scientific award in Slovenia, for his achievements in the field of graph theory and algebra. Since 2010, he has been a member of the committee that selects the Zois Award recipients, as well as the recipients of other scientific honours from the government of Slovenia. Research In his research, Marušič has focused on the actions of permutation groups on graphs. Some of his major contributions have been on the topics of the existence of semiregular automorphisms (see group action for an explanation of this) of vertex-transitive graphs, the existence of Hamiltonian paths and cycles in vertex-transitive graphs, and the structures of semi-symmetric graphs and half-transitive graphs. With co-authors, he proved that the Gray graph on 54 vertices is the smallest cubic semi-symmetric graph. He has well over 100 publications. 
Personal life Marušič is married and has two sons. His brother, Dorijan Marušič was the Minister of Health for Slovenia. External links Selected publications References 1953 births Living people Graph theorists Alumni of the University of Reading 20th-century Slovenian mathematicians 21st-century Slovenian mathematicians Academic staff of the University of Primorska People from Koper Academic staff of the University of Ljubljana University of Ljubljana alumni Yugoslav mathematicians
Dragan Marušič
[ "Mathematics" ]
802
[ "Mathematical relations", "Graph theory", "Graph theorists" ]
169,312
https://en.wikipedia.org/wiki/Monoceros
Monoceros (Greek: , "unicorn") is a faint constellation on the celestial equator. Its definition is attributed to the 17th-century cartographer Petrus Plancius. It is bordered by Orion to the west, Gemini to the north, Canis Major to the south, and Hydra to the east. Other bordering constellations include Canis Minor, Lepus, and Puppis. Features Stars Monoceros contains only a few fourth magnitude stars, making it difficult to see with the naked eye. Alpha Monocerotis has a visual magnitude of 3.93, while for Gamma Monocerotis it is 3.98. Beta Monocerotis is a triple star system; the three stars form a fixed triangle. The visual magnitudes of the stars are 4.7, 5.2, and 6.1. William Herschel discovered it in 1781 and called it "one of the most beautiful sights in the heavens". Epsilon Monocerotis is a fixed binary, with visual magnitudes of 4.5 and 6.5. S Monocerotis, or 15 Monocerotis, is a bluish white variable star and is located at the center of NGC 2264. The variation in its magnitude is slight (4.2–4.6). It has a companion star of visual magnitude 8. V838 Monocerotis, a variable red supergiant star, had an outburst starting on January 6, 2002; in February of that year, its brightness increased by a factor of 10,000 in one day. After the outburst was over, the Hubble Space Telescope was able to observe a light echo, which illuminated the dust surrounding the star. Monoceros also contains Plaskett's Star, a massive binary system whose combined mass is estimated, per 2008 calculations, to be almost 100 solar masses. Monoceros is the location of the binary system Scholz's Star, host to a red dwarf primary and brown dwarf secondary; the system performed a close flypast of the Solar System approximately 70,000 years ago, travelling within 120,000 astronomical units of the Sun within the Oort cloud. One of the nearest known black holes to the Solar System is in this constellation. The binary star system A0620-00 in the constellation of Monoceros is at a distance of roughly 3,300 light-years (1,000 parsecs) away. The black hole is estimated to be 6.6 solar masses. Planets Monoceros contains two super-Earth exoplanets in one planetary system: CoRoT-7b was detected by the CoRoT satellite and CoRoT-7c was detected by the High Accuracy Radial Velocity Planet Searcher from ground-based telescopes. Until the announcement of Kepler-10b in January 2011, CoRoT-7b was the smallest exoplanet to have its diameter measured, at 1.58 times that of the Earth (which would give it a volume 3.95 times Earth's). Both planets in this system were discovered in 2009. Deep-sky objects Part of the galactic plane goes through Monoceros, so background galaxies are concealed by interstellar dust. Monoceros contains many clusters and nebulae; most notable among them are: Messier 50, an open cluster The Rosette Nebula (NGC 2237, 2238, 2239, and 2246) is a diffuse nebula in Monoceros. It has an overall magnitude of 6.0 and is 4900 light-years from Earth. The Rosette Nebula, over 100 light-years in diameter, has an associated star cluster and possesses many Bok globules in its dark areas. It was independently discovered in the 1880s by Lewis Swift (early 1880s) and Edward Emerson Barnard (1883) as they hunted for comets. The Christmas Tree Cluster (NGC 2264) is another open cluster in Monoceros. Named for its resemblance to a Christmas tree, it is fairly bright at an overall magnitude of 3.9; it is 2400 light-years from Earth. 
The variable star S Monocerotis represents the tree's trunk, while the variable star V429 Monocerotis represents its top. The Cone Nebula (NGC 2264), associated with the Christmas Tree Cluster, is a very dim nebula that contains a dark conic structure. It appears clearly in photographs, but is very elusive in a telescope. The nebula contains several Herbig–Haro objects, which are small irregularly variable nebulae. They are associated with protostars. NGC 2254 is an open cluster with an overall magnitude of 9.7, 7100 light-years from Earth. It is a Shapley class f and Trumpler class I 2 p cluster, meaning that it appears to be a fairly rich cluster overall, though it has fewer than 50 stars. It appears distinct from the background star field and is very concentrated at its center; its stars range moderately in brightness. Hubble's Variable Nebula (NGC 2261) is a nebula with an approximate magnitude of 10, 2500 light-years from Earth. It is named for Edwin Hubble, and was discovered in 1783 by Herschel. Hubble's Variable Nebula is illuminated by R Monocerotis, a young variable star embedded in the nebula; the star's unique interaction with the material in the nebula makes it both an emission nebula and a reflection nebula. One hypothesis regarding their interaction is that the nebula and its illuminating star are a very early stage planetary system. IC 447, a reflection nebula. History In Western astronomy, Monoceros is a relatively modern constellation, not one of Ptolemy's 48 in the Almagest. Its first certain appearance was on a globe created by the cartographer Petrus Plancius in 1612 or 1613 and it was later charted by German astronomer Jakob Bartsch as Unicornu on his star chart of 1624. German astronomers Heinrich Wilhelm Olbers and Ludwig Ideler indicate (according to Richard Hinckley Allen's allegations) that the constellation may be older, quoting an astrological work from 1564 that mentioned "the second horse between the Twins and the Crab has many stars, but not very bright"; these references may ultimately be due to the 13th century Scotsman Michael Scot, but refer to a horse and not a unicorn, and its position does not quite match. Joseph Scaliger (died 1609) is reported to have found Monoceros on an ancient Persian sphere. Astronomer Camille Flammarion (died 1925) believed that a former constellation, Neper (the "Auger"), occupied the part of that sky now deemed Monoceros and Microscopium, but this is disputed. Chinese asterisms Sze Fūh, the Four Great Canals; Kwan Kew; and Wae Choo, the Outer Kitchen, all lay within the boundaries of Monoceros. See also Monoceros in Chinese astronomy References Sources External links "VISTA Reveals the Secret of the Unicorn" — European Southern Observatory "The Deep Photographic Guide to the Constellations: Monoceros", allthesky.com "Monoceros", Dibon-Smith "Monoceros", Ian Ridpath's Star Tales Equatorial constellations Constellations listed by Petrus Plancius Unicorns
Monoceros
[ "Astronomy" ]
1,491
[ "Equatorial constellations", "Constellations listed by Petrus Plancius", "Monoceros", "Constellations" ]
3,050,716
https://en.wikipedia.org/wiki/Mesh%20generation
Mesh generation is the practice of creating a mesh, a subdivision of a continuous geometric space into discrete geometric and topological cells. Often these cells form a simplicial complex. Usually the cells partition the geometric input domain. Mesh cells are used as discrete local approximations of the larger domain. Meshes are created by computer algorithms, often with human guidance through a GUI, depending on the complexity of the domain and the type of mesh desired. A typical goal is to create a mesh that accurately captures the input domain geometry, with high-quality (well-shaped) cells, and without so many cells as to make subsequent calculations intractable. The mesh should also be fine (have small elements) in areas that are important for the subsequent calculations.

Meshes are used for rendering to a computer screen and for physical simulation such as finite element analysis or computational fluid dynamics. Meshes are composed of simple cells like triangles because, e.g., we know how to perform operations such as finite element calculations (engineering) or ray tracing (computer graphics) on triangles, but we do not know how to perform these operations directly on complicated spaces and shapes such as a roadway bridge. We can simulate the strength of the bridge, or draw it on a computer screen, by performing calculations on each triangle and calculating the interactions between triangles.

A major distinction is between structured and unstructured meshing. In structured meshing the mesh is a regular lattice, such as an array, with implied connectivity between elements. In unstructured meshing, elements may be connected to each other in irregular patterns, and more complicated domains can be captured. This page is primarily about unstructured meshes.

While a mesh may be a triangulation, the process of meshing is distinguished from point set triangulation in that meshing includes the freedom to add vertices not present in the input. "Facetting" (triangulating) CAD models for drafting has the same freedom to add vertices, but the goal is to represent the shape accurately using as few triangles as possible, and the shape of individual triangles is not important. Computer graphics renderings of textures and realistic lighting conditions use meshes instead.

Much mesh generation software is coupled to a CAD system that defines its input, and to simulation software that takes its output. The input can vary greatly, but common forms are Solid modeling, Geometric modeling, NURBS, B-rep, STL or a point cloud.

Terminology The terms "mesh generation," "grid generation," "meshing," and "gridding" are often used interchangeably, although strictly speaking the latter two are broader and encompass mesh improvement: changing the mesh with the goal of increasing the speed or accuracy of the numerical calculations that will be performed over it. In computer graphics rendering, and in mathematics, a mesh is sometimes referred to as a tessellation.

Mesh faces (cells, entities) have different names depending on their dimension and the context in which the mesh will be used. In finite elements, the highest-dimensional mesh entities are called "elements," "edges" are 1D and "nodes" are 0D. If the elements are 3D, then the 2D entities are "faces." In computational geometry, the 0D points are called vertices. Tetrahedra are often abbreviated as "tets"; triangles are "tris", quadrilaterals are "quads" and hexahedra (topological cubes) are "hexes."
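The node/element terminology above maps directly onto the usual array-based representation of an unstructured mesh. A minimal sketch in Python (the names and the tiny example mesh are illustrative, not from the original text):

```python
import numpy as np

# A tiny unstructured triangle mesh of the unit square: 4 nodes, 2 cells.
# Nodes (vertices) are the 0D entities; each row of `cells` lists the node
# indices of one 2D element (a "tri"), the highest-dimensional entity here.
vertices = np.array([[0.0, 0.0],
                     [1.0, 0.0],
                     [1.0, 1.0],
                     [0.0, 1.0]])
cells = np.array([[0, 1, 2],
                  [0, 2, 3]])

def cell_areas(verts, tris):
    """Signed triangle areas via the cross product; positive means the
    nodes are listed counterclockwise (a common quality convention)."""
    p0, p1, p2 = (verts[tris[:, i]] for i in range(3))
    return 0.5 * np.cross(p1 - p0, p2 - p0)

print(cell_areas(vertices, cells))  # [0.5 0.5]
```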
Techniques Many meshing techniques are built on the principles of the Delaunay triangulation, together with rules for adding vertices, such as Ruppert's algorithm. A distinguishing feature is that an initial coarse mesh of the entire space is formed, then vertices and triangles are added. In contrast, advancing front algorithms start from the domain boundary and add elements incrementally, filling up the interior. Hybrid techniques do both. A special class of advancing front techniques creates thin boundary layers of elements for fluid flow. In structured mesh generation the entire mesh is a lattice graph, such as a regular grid of squares. In block-structured meshing, the domain is divided into large subregions, each of which is a structured mesh. Some direct methods start with a block-structured mesh and then move the mesh to conform to the input; see Automatic Hex-Mesh Generation based on polycube. Another direct method is to cut the structured cells by the domain boundary; see sculpt based on Marching cubes.

Some types of meshes are much more difficult to create than others. Simplicial meshes tend to be easier than cubical meshes. An important category is generating a hex mesh conforming to a fixed quad surface mesh; a research subarea is studying the existence and generation of meshes of specific small configurations, such as the tetragonal trapezohedron. Because of the difficulty of this problem, the existence of combinatorial hex meshes has been studied apart from the problem of generating good geometric realizations; see Combinatorial Techniques for Hexahedral Mesh Generation. While known algorithms generate simplicial meshes with guaranteed minimum quality, such guarantees are rare for cubical meshes, and many popular implementations generate inverted (inside-out) hexes from some inputs.

Meshes are often created in serial on workstations, even when subsequent calculations over the mesh will be done in parallel on super-computers. This is both because of the limitation that most mesh generators are interactive, and because mesh generation runtime is typically insignificant compared to solver time. However, if the mesh is too large to fit in the memory of a single serial machine, or the mesh must be changed (adapted) during the simulation, meshing is done in parallel.

Algebraic methods Grid generation by algebraic methods is based on mathematical interpolation functions. It is done by using known functions in one, two or three dimensions to map arbitrarily shaped regions. The computational domain might not be rectangular, but for the sake of simplicity, the domain is taken to be rectangular. The main advantage of these methods is that they provide explicit control of physical grid shape and spacing. The simplest procedure that may be used to produce a boundary-fitted computational mesh is the normalization transformation. For a nozzle whose wall is described by a function y_s(x), the grid can easily be generated using a uniform division in the y-direction with equally spaced increments in the x-direction, described by ξ = x, η = y / y_s(x), where y_s(x) denotes the y-coordinate of the nozzle wall. For given values of (ξ, η), the values of (x, y) can be easily recovered; a short numerical sketch of this transformation is given below.

Differential equation methods Like algebraic methods, differential equation methods are also used to generate grids. The advantage of using partial differential equations (PDEs) is that the solution of the grid-generating equations can be exploited to generate the mesh. Grid construction can be done using all three classes of partial differential equations.
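As referenced in the algebraic-methods paragraph above, a minimal sketch of the normalization transformation for a nozzle-like domain (the wall shape y_s(x) used here is an arbitrary illustrative choice, not from the article):

```python
import numpy as np

def nozzle_grid(y_s, x0, x1, ni, nj):
    """Algebraic (normalization) grid: xi = x, eta = y / y_s(x).
    Uniform spacing in (xi, eta) yields a boundary-fitted grid whose
    constant-eta lines follow the nozzle wall."""
    xi = np.linspace(x0, x1, ni)
    eta = np.linspace(0.0, 1.0, nj)
    X, ETA = np.meshgrid(xi, eta, indexing="ij")
    Y = ETA * y_s(X)                 # recover physical y from (xi, eta)
    return X, Y

# Illustrative converging-diverging wall: y_s(x) = 1 + 0.5*(x - 1)^2
X, Y = nozzle_grid(lambda x: 1.0 + 0.5 * (x - 1.0) ** 2,
                   x0=0.0, x1=2.0, ni=41, nj=21)
print(X.shape, Y.min(), Y.max())     # (41, 21) 0.0 1.5
```

The inverse map is immediate, which is the stated advantage of the algebraic approach: given (ξ, η), one recovers x = ξ and y = η·y_s(ξ) directly, with no equations to solve.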
Elliptic schemes Elliptic PDEs generally have very smooth solutions, leading to smooth contours. Exploiting this smoothness, Laplace's equation is often preferred, because the Jacobian of the mapping turns out to be positive as a consequence of the maximum principle for harmonic functions. Following extensive work by Crowley (1962) and Winslow (1966), who transformed the physical domain into the computational plane by mapping with Poisson's equation, Thompson et al. (1974) worked extensively on elliptic PDEs to generate grids. In Poisson grid generators, the mapping is accomplished by marking the desired grid points on the boundary of the physical domain, with the interior point distribution determined through the solution of the equations

ξ_xx + ξ_yy = P(ξ, η), η_xx + η_yy = Q(ξ, η),

where (ξ, η) are the co-ordinates in the computational domain, while P and Q are responsible for point spacing within D. Transforming the above equations into computational space yields a set of two elliptic PDEs of the form

α x_ξξ − 2β x_ξη + γ x_ηη = −J²(P x_ξ + Q x_η),
α y_ξξ − 2β y_ξη + γ y_ηη = −J²(P y_ξ + Q y_η),

where

α = x_η² + y_η², β = x_ξ x_η + y_ξ y_η, γ = x_ξ² + y_ξ², J = x_ξ y_η − x_η y_ξ.

These systems of equations are solved in the computational plane on a uniformly spaced grid, which provides us with the co-ordinates of each point in physical space. The advantage of using elliptic PDEs is that the solution linked to them is smooth, and the resulting grid is smooth. However, specifying P and Q is a difficult task, which counts among the disadvantages. Moreover, the grid has to be recomputed after each time step, which adds to the computational time.

Hyperbolic schemes This grid generation scheme is generally applicable to problems with open domains, consistent with the type of PDE describing the physical problem. The advantage associated with hyperbolic PDEs is that the governing equations need to be solved only once for generating the grid. The initial point distribution, along with the approximate boundary conditions, forms the required input, and the solution is then marched outward. Steger and Sorenson (1980) proposed a volume orthogonality method that uses hyperbolic PDEs for mesh generation. For a 2-D problem, considering computational space to be given by (ξ, η), the inverse of the Jacobian is given by

x_ξ y_η − x_η y_ξ = F,

where F represents the area in physical space for a given area in computational space. The second equation,

x_ξ x_η + y_ξ y_η = 0,

expresses the orthogonality of grid lines at the boundary in physical space: for the ξ and η surfaces to be perpendicular, this inner product must vanish. The problem associated with such a system of equations is the specification of F. Poor selection of F may lead to shocks and discontinuous propagation of this information throughout the mesh. Meanwhile, the orthogonal mesh is generated very rapidly, which is an advantage of this method.

Parabolic schemes The solving technique is similar to that of hyperbolic PDEs: the solution is advanced away from the initial data surface, satisfying the boundary conditions at the end. Nakamura (1982) and Edwards (1985) developed the basic ideas for parabolic grid generation. The idea uses either Laplace's or Poisson's equation, especially treating the parts that control elliptic behavior. The initial values are given as the coordinates of the points along the surface, and the solution is advanced to the outer surface of the object, satisfying the boundary conditions along the edges. Control of the grid spacing was not addressed at first; in the work of Nakamura and Edwards, grid control was accomplished using non-uniform spacing.
Parabolic grid generation shows an advantage over hyperbolic grid generation in that no shocks or discontinuities occur and the grid is relatively smooth. Specifying the initial values and selecting the step size to control the grid points is, however, time-consuming, but these techniques can be effective when familiarity and experience are gained.

Variational methods This method minimizes a functional combining measures of grid smoothness, orthogonality and volume variation. It forms a mathematical platform for solving grid generation problems. In this method an alternative grid is generated by a new mesh after each iteration, and the grid speed is computed using the backward difference method. This technique is a powerful one, with the disadvantage that effort is required to solve the equations related to the grid. Further work is needed on minimizing the integrals, which would reduce the CPU time.

Unstructured grid generation The main importance of this scheme is that it provides a method that will generate the grid automatically. Using this method, grids are segmented into blocks according to the surface of the element, and a structure is provided to ensure appropriate connectivity. To interpret the data, a flow solver is used. In an unstructured scheme the information is stored cell to cell instead of grid to grid, and hence more memory space is needed. Due to random cell location, the solver efficiency in the unstructured scheme is lower than in the structured scheme.

Some points need to be kept in mind at the time of grid construction. Grid points in regions requiring high resolution create difficulty for both structured and unstructured schemes. For example, in the case of a boundary layer, a structured scheme produces an elongated grid in the direction of flow. On the other hand, unstructured grids require a higher cell density in the boundary layer, because the cells need to be as equilateral as possible to avoid errors.

We must identify what information is required to identify each cell and all the neighbors of the cell in the computational mesh. We can choose to locate the arbitrary points anywhere we want for the unstructured grid. A point insertion scheme is used to insert the points independently, and the cell connectivity is determined. This requires that the points be identified as they are inserted. Logic for establishing new connectivity is determined once the points are inserted. Data identifying the grid points that form each grid cell are needed. In addition, the neighbor-cell information is needed.

Adaptive grid A problem in solving partial differential equations using the previous methods is that the grid is constructed, and the points are distributed in the physical domain, before the details of the solution are known. So the grid may or may not be the best for the given problem. Adaptive methods are used to improve the accuracy of the solutions. The adaptive method is referred to as the 'h' method if mesh refinement is used, the 'r' method if the number of grid points is fixed but the points are redistributed, and the 'p' method if the order of the solution scheme is increased in finite-element theory. Multi-dimensional problems using the equidistribution scheme can be accomplished in several ways.
The simplest to understand are the Poisson grid generators with a control function based on the equidistribution of a weight function, with the diffusion set as a multiple of the desired cell volume. The equidistribution scheme can also be applied to the unstructured problem. The problem is that the connectivity is hampered if the mesh point movement is very large. Steady-flow and time-accurate flow calculations can both be handled through this adaptive method. In a steady-flow problem, the grid is refined after a predetermined number of iterations in order to adapt it. The grid will stop adjusting to the changes once the solution converges. In the time-accurate case, coupling of the partial differential equations of the physical problem and those describing the grid movement is required.

Image-based meshing

Cell topology Usually the cells are polygonal or polyhedral and form a mesh that partitions the domain. Important classes of two-dimensional elements include triangles (simplices) and quadrilaterals (topological squares). In three dimensions the most common cells are tetrahedra (simplices) and hexahedra (topological cubes). Simplicial meshes may be of any dimension and include triangles (2D) and tetrahedra (3D) as important instances. Cubical meshes are the pan-dimensional category that includes quads (2D) and hexes (3D). In 3D, 4-sided pyramids and 3-sided prisms appear in conformal meshes of mixed cell type.

Cell dimension The mesh is embedded in a geometric space that is typically two or three dimensional, although sometimes the dimension is increased by one by adding the time dimension. Higher-dimensional meshes are used in niche contexts. One-dimensional meshes are useful as well. A significant category is surface meshes, which are 2D meshes embedded in 3D to represent a curved surface.

Duality Dual graphs have several roles in meshing. One can make a polyhedral Voronoi diagram mesh by dualizing a Delaunay triangulation simplicial mesh. One can create a cubical mesh by generating an arrangement of surfaces and dualizing the intersection graph; see spatial twist continuum. Sometimes both the primal mesh and its dual mesh are used in the same simulation; see Hodge star operator. This arises from physics involving divergence and curl (mathematics) operators, such as flux & vorticity or electricity & magnetism, where one variable naturally lives on the primal faces and its counterpart on the dual faces.

Mesh type by use Three-dimensional meshes created for finite element analysis need to consist of tetrahedra, pyramids, prisms or hexahedra. Those used for the finite volume method can consist of arbitrary polyhedra. Those used for finite difference methods consist of piecewise structured arrays of hexahedra known as multi-block structured meshes. 4-sided pyramids are useful to conformally connect hexes to tets. 3-sided prisms are used for boundary layers conforming to a tet mesh of the far-interior of the object.

Surface meshes are useful in computer graphics where the surfaces of objects reflect light (also subsurface scattering) and a full 3D mesh is not needed. Surface meshes are also used to model thin objects such as sheet metal in auto manufacturing and building exteriors in architecture. High (e.g., 17) dimensional cubical meshes are common in astrophysics and string theory.

Mathematical definition and variants What is the precise definition of a mesh? There is not a universally accepted mathematical description that applies in all contexts.
However, some mathematical objects are clearly meshes: a simplicial complex is a mesh composed of simplices. Most polyhedral (e.g. cubical) meshes are conformal, meaning they have the cell structure of a CW complex, a generalization of a simplicial complex. A mesh need not be simplicial because an arbitrary subset of nodes of a cell is not necessarily a cell: e.g., three nodes of a quad do not define a cell. However, two cells intersect in cells: e.g., a quad does not have a node in its interior. The intersection of two cells may be several cells: e.g., two quads may share two edges. An intersection being more than one cell is sometimes forbidden and rarely desired; the goal of some mesh improvement techniques (e.g. pillowing) is to remove these configurations. In some contexts, a distinction is made between a topological mesh and a geometric mesh whose embedding satisfies certain quality criteria.

Important mesh variants that are not CW complexes include non-conformal meshes, where cells do not meet strictly face-to-face, but the cells nonetheless partition the domain. An example of this is an octree, where an element face may be partitioned by the faces of adjacent elements. Such meshes are useful for flux-based simulations. In overset grids, there are multiple conformal meshes that overlap geometrically and do not partition the domain; see e.g., Overflow, the OVERset grid FLOW solver. So-called meshless or meshfree methods often make use of some mesh-like discretization of the domain, and have basis functions with overlapping support. Sometimes a local mesh is created near each simulation degree-of-freedom point, and these meshes may overlap and be non-conformal to one another. Implicit triangulations are based on a delta complex: each triangle is specified by the lengths of its edges, together with a gluing map between face edges.

High-order elements Many meshes use linear elements, where the mapping from the abstract to realized element is linear, and mesh edges are straight segments. Higher-order polynomial mappings are common, especially quadratic. A primary goal for higher-order elements is to more accurately represent the domain boundary, although they have accuracy benefits in the interior of the mesh as well. One of the motivations for cubical meshes is that linear cubical elements have some of the same numerical advantages as quadratic simplicial elements. In the isogeometric analysis simulation technique, the mesh cells containing the domain boundary use the CAD representation directly instead of a linear or polynomial approximation.

Mesh improvement Improving a mesh involves changing its discrete connectivity, the continuous geometric position of its cells, or both. For discrete changes, for simplicial elements one swaps edges and inserts/removes nodes. The same kinds of operations are done for cubical (quad/hex) meshes, although there are fewer possible operations and local changes have global consequences. E.g., for a hexahedral mesh, merging two nodes creates cells that are not hexes, but if diagonally-opposite nodes on a quadrilateral are merged and this is propagated into collapsing an entire face-connected column of hexes, then all remaining cells will still be hexes. In adaptive mesh refinement, elements are split (h-refinement) in areas where the function being calculated has a high gradient; a one-step sketch of uniform splitting is given below. Meshes are also coarsened, removing elements for efficiency.
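As referenced above, a minimal sketch of one uniform h-refinement step, splitting every triangle into four by its edge midpoints (an illustrative toy, not a production refiner):

```python
import numpy as np

def refine_uniform(vertices, tris):
    """Split each triangle into 4 via edge midpoints (one h-refinement
    step). Midpoints are shared between neighboring triangles so the
    refined mesh stays conformal."""
    verts = list(map(tuple, vertices))
    index = {v: i for i, v in enumerate(verts)}

    def midpoint(i, j):
        m = tuple((np.asarray(verts[i]) + np.asarray(verts[j])) / 2)
        if m not in index:          # reuse a midpoint created by a neighbor
            index[m] = len(verts)
            verts.append(m)
        return index[m]

    new_tris = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(new_tris)

v = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
t = np.array([[0, 1, 2]])
v2, t2 = refine_uniform(v, t)
print(len(v2), len(t2))   # 6 vertices, 4 triangles
```

Uniform splitting preserves element shape exactly; adaptive refiners apply the same split only where an error indicator is large, then patch the transition to neighbors.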
The multigrid method does something similar to refinement and coarsening to speed up the numerical solve, but without actually changing the mesh. For continuous changes, nodes are moved, or the higher-dimensional faces are moved by changing the polynomial order of elements. Moving nodes to improve quality is called "smoothing" or "r-refinement" and increasing the order of elements is called "p-refinement." Nodes are also moved in simulations where the shape of objects change over time. This degrades the shape of the elements. If the object deforms enough, the entire object is remeshed and the current solution mapped from the old mesh to the new mesh. Research community Practitioners The field is highly interdisciplinary, with contributions found in mathematics, computer science, and engineering. Meshing R&D is distinguished by an equal focus on discrete and continuous math and computation, as with computational geometry, but in contrast to graph theory (discrete) and numerical analysis (continuous). Mesh generation is deceptively difficult: it is easy for humans to see how to create a mesh of a given object, but difficult to program a computer to make good decisions for arbitrary input a priori. There is an infinite variety of geometry found in nature and man-made objects. Many mesh generation researchers were first users of meshes. Mesh generation continues to receive widespread attention, support and funding because the human-time to create a mesh dwarfs the time to set up and solve the calculation once the mesh is finished. This has always been the situation since numerical simulation and computer graphics were invented, because as computer hardware and simple equation-solving software have improved, people have been drawn to larger and more complex geometric models in a drive for greater fidelity, scientific insight, and artistic expression. Journals Meshing research is published in a broad range of journals. This is in keeping with the interdisciplinary nature of the research required to make progress, and also the wide variety of applications that make use of meshes. About 150 meshing publications appear each year across 20 journals, with at most 20 publications appearing in any one journal. There is no journal whose primary topic is meshing. The journals that publish at least 10 meshing papers per year are in bold. 
Advances in Engineering Software American Institute of Aeronautics and Astronautics Journal (AIAAJ) Algorithmica Applied Computational Electromagnetics Society Journal Applied Numerical Mathematics Astronomy and Computing Computational Geometry: Theory and Applications Computer-Aided Design, often including a special issue devoted to extended papers from the IMR (see conferences below) Computer Aided Geometric Design (CAGD) Computer Graphics Forum (Eurographics) Computer Methods in Applied Mechanics and Engineering Discrete and Computational Geometry Engineering with Computers Finite Elements in Analysis and Design International Journal for Numerical Methods in Engineering (IJNME) International Journal for Numerical Methods in Fluids International Journal for Numerical Methods in Biomedical Engineering International Journal of Computational Geometry & Applications Journal of Computational Physics (JCP) Journal on Numerical Analysis Journal on Scientific Computing (SISC) Transactions on Graphics (ACM TOG) Transactions on Mathematical Software (ACM TOMS) Transactions on Visualization and Computer Graphics (IEEE TVCG) Lecture Notes in Computational Science and Engineering (LNCSE) Computational Mathematics and Mathematical Physics (CMMP)

Conferences (Conferences whose primary topic is meshing are in bold.) Aerospace Sciences Meeting AIAA (15 meshing talks/papers) Canadian Conference on Computational Geometry CCCG CompIMAGE: International Symposium Computational Modeling of Objects Represented in Images Computational Fluid Dynamics Conference AIAA Computational Fluid Dynamics Conference ECCOMAS Computational Science & Engineering CS&E Conference on Numerical Grid Generation ISGG Eurographics Annual Conference (Eurographics) (proceedings in Computer Graphics Forum) Geometric & Physical Modeling SIAM International Conference on Isogeometric Analysis IGA International Symposium on Computational Geometry SoCG Numerical Geometry, Grid Generation and Scientific Computing (NUMGRID) (proceedings in Lecture Notes in Computational Science and Engineering) International Meshing Roundtable, SIAM IMR workshop. (Refereed proceedings and special journal issue.) SIGGRAPH (proceedings in ACM Transactions on Graphics) Symposium on Geometry Processing SGP (Eurographics) (proceedings in Computer Graphics Forum) World Congress on Engineering

Workshops Workshops whose primary topic is meshing are in bold. Conference on Geometry: Theory and Applications CGTA European Workshop on Computational Geometry EuroCG Fall Workshop on Computational Geometry Finite Elements in Fluids FEF MeshTrends Symposium (in WCCM or USNCCM alternate years) Polytopal Element Methods in Mathematics and Engineering Tetrahedron workshop

See also Grid classification Mesh parameterization Meshfree methods Parallel mesh generation Principles of grid generation Polygon mesh Regular grid Stretched grid method Tessellation (computer graphics) Types of mesh Unstructured grid

References

Bibliography CGAL The Computational Geometry Algorithms Library Jan Brandts, Sergey Korotov, Michal Krizek: "Simplicial Partitions with Applications to the Finite Element Method", Springer Monographs in Mathematics, (2020). https://www.springer.com/gp/book/9783030556761 Grid Generation Methods - Liseikin, Vladimir D.
External links Periodic Table of the Finite Elements Literature on Mesh Generation Conferences, Workshops, Summerschools Mesh generators Many commercial product descriptions emphasize simulation rather than the meshing technology that enables simulation. Lists of mesh generators (external): Free/open source mesh generators Public domain and commercial mesh generators ANSA Pre-processor ANSYS CD-adapco and Siemens DISW Comet Solutions CGAL Computational Geometry Algorithms Library Mesh generation 2D Conforming Triangulations and Meshes 3D Mesh Generation CUBIT Ennova Gmsh Hextreme meshes MeshLab MSC Software Omega_h Tri/Tet Adaptivity Open FOAM Mesh generation and conversion Salome Mesh module TetGen TetWild TRIANGLE Mesh generation and Delaunay triangulation Multi-domain partitioned mesh generators These tools generate the partitioned meshes required for multi-material finite element modelling. MDM(Multiple Domain Meshing) generates unstructured tetrahedral and hexahedral meshes for a composite domain made up of heterogeneous materials, automatically and efficiently QMDM (Quality Multi-Domain Meshing) produces a high quality, mutually consistent triangular surface meshes for multiple domains QMDMNG, (Quality Multi-Domain Meshing with No Gap), produces a quality meshes with each one a two-dimensional manifold and no gap between two adjacent meshes. SOFA_mesh_partitioning_tools generates partitioned tetrahedral meshes for multi-material FEM, based on CGAL. Articles Another Fine Mesh, MeshTrends Blog, Pointwise Mesh Generation & Grid Generation on the Web Mesh Generation group on LinkedIn Research groups and people Mesh Generation people on Google Scholar David Bommes, Computer Graphics Group, University of Bern David Eppstein's Geometry in Action, Mesh Generation Jonathan Shewchuk's Meshing and Triangulation in Graphics, Engineering, and Modeling Scott A. Mitchell Robert Schneiders Models and meshes Useful models (inputs) and meshes (outputs) for comparing meshing algorithms and meshes. HexaLab has models and meshes that have been published in research papers, reconstructed or from the original paper. Princeton Shape Benchmark Shape Retrieval Contest SHREC has different models each year, e.g., Shape Retrieval Contest of Non-rigid 3D Watertight Meshes 2011 Thingi10k meshed models from the Thingiverse CAD models Modeling engines linked with mesh generation software to represent the domain geometry. ACIS by Spatial Open Cascade Mesh file formats Common (output) file formats for describing meshes. NetCDF Genesis/Exodus XDMF VTK/VTU MEDIT MED/Salome Gmsh ANSYS mesh OFF Wavefront OBJ PLY STL meshio can convert between all of the above formats. Mesh visualizers Blender Mesh Viewer Paraview Tutorials Cubit tutorials Mesh generation people Mesh generators Geometric algorithms Computer-aided design Triangulation (geometry) Numerical analysis Numerical differential equations Computational fluid dynamics 3D computer graphics
Mesh generation
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
5,806
[ "Triangulation (geometry)", "Mesh generation", "Design engineering", "Computer-aided design", "Computational fluid dynamics", "Tessellation", "Computational mathematics", "Planar graphs", "Computational physics", "Mathematical relations", "Numerical analysis", "Planes (geometry)", "Approxima...
3,050,742
https://en.wikipedia.org/wiki/Polyisobutene
Polyisobutene (polyisobutylene) is a class of organic polymers prepared by polymerization of isobutene. The polymers often have the formula Me3C[CH2CMe2]nH (Me = CH3). They are typically colorless gummy solids. Cationic polymerization, initiated with a strong Brønsted or Lewis acid, is the typical method for their production. The molecular weight (MW) of the resulting polymer determines the applications. Low-MW polyisobutene, a mixture of oligomers with Mn values of about 500, is used as a plasticizer. Medium- and high-MW polyisobutenes, with Mn ≥ 20,000, are components of commercial adhesives.

See also Butyl rubber Polybutene

References

Organic polymers
Polyisobutene
[ "Chemistry" ]
168
[ "Organic compounds", "Organic polymers" ]
3,050,954
https://en.wikipedia.org/wiki/Fredholm%20alternative
In mathematics, the Fredholm alternative, named after Ivar Fredholm, is one of Fredholm's theorems and is a result in Fredholm theory. It may be expressed in several ways: as a theorem of linear algebra, a theorem of integral equations, or as a theorem on Fredholm operators. Part of the result states that a non-zero complex number in the spectrum of a compact operator is an eigenvalue.

Linear algebra If V is an n-dimensional vector space and T : V → V is a linear transformation, then exactly one of the following holds: either for each vector v in V there is a vector u in V so that T(u) = v (in other words, T is surjective, and so also bijective, since V is finite-dimensional), or there is a non-zero vector v in V with T(v) = 0, i.e., T has a non-trivial kernel.

A more elementary formulation, in terms of matrices, is as follows. Given an m×n matrix A and an m×1 column vector b, exactly one of the following must hold: Either: A x = b has a solution x. Or: Aᵀy = 0 has a solution y with yᵀb ≠ 0. In other words, A x = b has a solution if and only if for any y such that Aᵀy = 0, it follows that yᵀb = 0.

Integral equations Let K(x, y) be an integral kernel, and consider the homogeneous equation, the Fredholm integral equation,

λφ(x) − ∫_a^b K(x, y) φ(y) dy = 0,

and the inhomogeneous equation

λφ(x) − ∫_a^b K(x, y) φ(y) dy = f(x).

The Fredholm alternative is the statement that, for every non-zero fixed complex number λ, either the first equation has a non-trivial solution, or the second equation has a solution for all f(x). A sufficient condition for this statement to be true is for K(x, y) to be square integrable on the rectangle [a, b] × [a, b] (where a and/or b may be minus or plus infinity). The integral operator defined by such a K is called a Hilbert–Schmidt integral operator.

Functional analysis Results about Fredholm operators generalize these results to complete normed vector spaces of infinite dimensions; that is, Banach spaces. The integral equation can be reformulated in terms of operator notation as follows. Write (somewhat informally) T = λ − K to mean T(x, y) = λ δ(x − y) − K(x, y), with δ the Dirac delta function, considered as a distribution, or generalized function, in two variables. Then by convolution, T induces a linear operator acting on a Banach space of functions φ(x), also denoted T, given by

(Tφ)(x) = ∫_a^b T(x, y) φ(y) dy = λφ(x) − ∫_a^b K(x, y) φ(y) dy.

In this language, the Fredholm alternative for integral equations is seen to be analogous to the Fredholm alternative for finite-dimensional linear algebra. The operator given by convolution with an L² kernel, as above, is known as a Hilbert–Schmidt integral operator. Such operators are always compact. More generally, the Fredholm alternative is valid when K is any compact operator. The Fredholm alternative may be restated in the following form: a nonzero λ either is an eigenvalue of K, or lies in the domain of the resolvent (K − λ Id)⁻¹.

Elliptic partial differential equations The Fredholm alternative can be applied to solving linear elliptic boundary value problems. The basic result is: if the equation and the appropriate Banach spaces have been set up correctly, then either (1) the homogeneous equation has a nontrivial solution, or (2) the inhomogeneous equation can be solved uniquely for each choice of data.

The argument goes as follows. A typical simple-to-understand elliptic operator L would be the Laplacian plus some lower-order terms. Combined with suitable boundary conditions and expressed on a suitable Banach space X (which encodes both the boundary conditions and the desired regularity of the solution), L becomes an unbounded operator from X to itself, and one attempts to solve L u = f, where f ∈ X is some function serving as data for which we want a solution. The Fredholm alternative, together with the theory of elliptic equations, will enable us to organize the solutions of this equation; a quick numerical check of the finite-dimensional alternative is sketched below.
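The matrix form of the alternative is easy to verify computationally. A minimal sketch (the singular example matrix and helper name are illustrative assumptions):

```python
import numpy as np

def alternative(A, b, tol=1e-10):
    """Exactly one holds: Ax = b is solvable, or some y with A^T y = 0
    has y^T b != 0 (i.e., b has a component outside the range of A)."""
    x, _res, rank, _sv = np.linalg.lstsq(A, b, rcond=None)
    if np.linalg.norm(A @ x - b) < tol:
        return "Ax = b has a solution", x
    # Certificate y: project b onto the null space of A^T (= range(A)-perp),
    # so A^T y = 0 while y^T b = ||projection||^2 > 0.
    U, _s, _Vt = np.linalg.svd(A)
    y = U[:, rank:] @ (U[:, rank:].T @ b)
    return "A^T y = 0 has a solution with y^T b != 0", y

A = np.array([[1.0, 2.0], [2.0, 4.0]])           # rank 1: range is a line
print(alternative(A, np.array([1.0, 2.0]))[0])   # b in range: solvable
print(alternative(A, np.array([1.0, 0.0]))[0])   # b not in range: certificate
```

The two branches are mutually exclusive because range(A) and the null space of Aᵀ are orthogonal complements, which is exactly the content of the finite-dimensional statement above.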
A concrete example would be an elliptic boundary-value problem like

(*) L u := −Δu + h(x) u = f in Ω,

supplemented with the boundary condition

(**) u = 0 on ∂Ω,

where Ω ⊆ R^n is a bounded open set with smooth boundary and h(x) is a fixed coefficient function (a potential, in the case of a Schrödinger operator). The function f ∈ X is the variable data for which we wish to solve the equation. Here one would take X to be the space L²(Ω) of all square-integrable functions on Ω, and dom(L) is then the Sobolev space W^{2,2}(Ω) ∩ W_0^{1,2}(Ω), which amounts to the set of all square-integrable functions on Ω whose weak first and second derivatives exist and are square-integrable, and which satisfy a zero boundary condition on ∂Ω.

If X has been selected correctly (as it has in this example), then for μ₀ >> 0 the operator L + μ₀ is positive, and then, employing elliptic estimates, one can prove that L + μ₀ : dom(L) → X is a bijection, and its inverse is a compact, everywhere-defined operator K from X to X, with image equal to dom(L). We fix one such μ₀, but its value is not important, as it is only a tool.

We may then transform the Fredholm alternative, stated above for compact operators, into a statement about the solvability of the boundary-value problem (*)–(**). The Fredholm alternative, as stated above, asserts: For each λ ∈ R, either λ is an eigenvalue of K, or the operator K − λ is bijective from X to itself.

Let us explore the two alternatives as they play out for the boundary-value problem. Suppose λ ≠ 0. Then either

(A) λ is an eigenvalue of K ⇔ there is a solution h ∈ dom(L) of (L + μ₀) h = λ⁻¹ h ⇔ −μ₀ + λ⁻¹ is an eigenvalue of L.

(B) The operator K − λ : X → X is a bijection ⇔ (K − λ)(L + μ₀) = Id − λ(L + μ₀) : dom(L) → X is a bijection ⇔ L + μ₀ − λ⁻¹ : dom(L) → X is a bijection.

Replacing −μ₀ + λ⁻¹ by λ, and treating the case λ = −μ₀ separately, this yields the following Fredholm alternative for an elliptic boundary-value problem: For each λ ∈ R, either the homogeneous equation (L − λ) u = 0 has a nontrivial solution, or the inhomogeneous equation (L − λ) u = f possesses a unique solution u ∈ dom(L) for each given datum f ∈ X. The latter function u solves the boundary-value problem (*)–(**) introduced above. This is the dichotomy that was claimed in (1)–(2) above. By the spectral theorem for compact operators, one also obtains that the set of λ for which the solvability fails is a discrete subset of R (the eigenvalues of L). The eigenvalues' associated eigenfunctions can be thought of as "resonances" that block the solvability of the equation.

See also Spectral theory of compact operators Farkas' lemma

References A. G. Ramm, "A Simple Proof of the Fredholm Alternative and a Characterization of the Fredholm Operators", American Mathematical Monthly, 108 (2001) p. 855.

Fredholm theory Linear algebra
Fredholm alternative
[ "Mathematics" ]
1,530
[ "Linear algebra", "Algebra" ]
3,050,976
https://en.wikipedia.org/wiki/Pilot%20plant
A pilot plant is a pre-commercial production system that employs new production technology and/or produces small volumes of new technology-based products, mainly for the purpose of learning about the new technology. The knowledge obtained is then used for design of full-scale production systems and commercial products, as well as for identification of further research objectives and support of investment decisions. Other (non-technical) purposes include gaining public support for new technologies and questioning government regulations. Pilot plant is a relative term in the sense that pilot plants are typically smaller than full-scale production plants, but are built in a range of sizes. Also, as pilot plants are intended for learning, they typically are more flexible, possibly at the expense of economy. Some pilot plants are built in laboratories using stock lab equipment, while others require substantial engineering efforts, cost millions of dollars, and are custom-assembled and fabricated from process equipment, instrumentation and piping. They can also be used to train personnel for a full-scale plant. Pilot plants tend to be smaller compared to demonstration plants. Terminology A word similar to pilot plant is pilot line. Essentially, pilot plants and pilot lines perform the same functions, but 'pilot plant' is used in the context of (bio)chemical and advanced materials production systems, whereas 'pilot line' is used for new technology in general. The term 'kilo lab' is also used for small pilot plants referring to the expected output quantities. Risk management Pilot plants are used to reduce the risk associated with construction of large process plants. They do so in several ways: Computer simulations and semi-empirical methods are used to determine the limitations of the pilot scale system. These mathematical models are then tested in a physical pilot-scale plant. Various modeling methods are used for scale-up. These methods include: Chemical similitude studies Mathematical modeling Chemical process simulation Finite Elemental Analysis (FEA) Computational Fluid Dynamics (CFD) These theoretical modeling methods return the following: Finalized mass and energy balances Optimized system design and capacity Equipment requirements System limitations The basis for determining the cost to build the pilot module They are substantially less expensive to build than full-scale plants. The business does not put as much capital at risk on a project that may be inefficient or unfeasible. Further, design changes can be made more cheaply at the pilot scale and kinks in the process can be worked out before the large plant is constructed. They provide valuable data for design of the full-scale plant. Scientific data about reactions, material properties, corrosiveness, for instance, may be available, but it is difficult to predict the behavior of a process of any complexity. Engineering data from other process may be available, but this data can not always be clearly applied to the process of interest. Designers use data from the pilot plant to refine their design of the production scale facility. If a system is well defined and the engineering parameters are known, pilot plants are not used. For instance, a business that wants to expand production capacity by building a new plant that does the same thing as an existing plant may choose to not use a pilot plant. 
Additionally, advances in process simulation on computers have increased the confidence of process designers and reduced the need for pilot plants. However, they are still used as even state-of-the-art simulation cannot accurately predict the behavior of complex systems. Scale dependence of plant properties As a system increases in size, system properties that depend on quantity of matter (with extensive properties) may change. The surface area to liquid ratio in a chemical plant is a good example of such a property. On a small chemical scale, in a flask, say, there is a relatively large surface area to liquid ratio. However, if the reaction in question is scaled up to fit in a 500-gallon tank, the surface area to liquid ratio becomes much smaller. As a result of this difference in surface area to liquid ratio, the exact nature of the thermodynamics and the reaction kinetics of the process change in a non-linear fashion. This is why a reaction in a beaker can behave vastly differently from the same reaction in a large-scale production process. Other factors Other factors that may change during the transformation to a production scale include: Reaction kinetics Chemical equilibrium Material properties Fluid dynamics Thermodynamics Equipment selection Agitation Uniformity / homogeneity After data has been collected from operation of a pilot plant, a larger production-scale facility may be built. Alternatively, a demonstration plant, which is typically bigger than a pilot plant, but smaller than a full-scale production plant, may be built to demonstrate the commercial feasibility of the process. Businesses sometimes continue to operate the pilot plant in order to test ideas for new products, new feedstocks, or different operating conditions. Alternatively, they may be operated as production facilities, augmenting production from the main plant. Bench scale vs pilot vs demonstration The differences between bench scale, pilot scale and demonstration scale are strongly influenced by industry and application. Some industries use pilot plant and demonstration plant interchangeably. Some pilot plants are built as portable modules that can be easily transported as a contained unit. For batch processes, in the pharmaceutical industry for example, bench scale is typically conducted on samples 1–20 kg or less, whereas pilot scale testing is performed with samples of 20–100 kg. Demonstration scale is essentially operating the equipment at full commercial feed rates over extended time periods to prove operational stability. For continuous processes, in the petroleum industry for example, bench scale systems are typically microreactor or CSTR systems with less than 1000 mL of catalyst, studying reactions and/or separations on a once-through basis. Pilot plants will typically have reactors with catalyst volume between 1 and 100 litres, and will often incorporate product separation and gas/liquid recycle with the goal of closing the mass balance. Demonstration plants, also referred to as semi-works plants, will study the viability of the process on a pre-commercial scale, with typical catalyst volumes in the 100 - 1000 litre range. The design of a demonstration scale plant for a continuous process will closely resemble that of the anticipated future commercial plant, albeit at a much lower throughput, and its goal is to study catalyst performance and operating lifetime over an extended period, while generating significant quantities of product for market testing. 
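The non-linear scale dependence discussed above is easy to quantify for the simplest idealized geometry. A short sketch comparing surface-area-to-volume ratios (the spherical-vessel idealization and the specific sizes are illustrative assumptions, not from the original text):

```python
import math

def sa_to_vol(volume_litres):
    """Surface-area-to-volume ratio (1/m) of a sphere of the given volume,
    an idealized stand-in for a reaction vessel."""
    v = volume_litres / 1000.0              # litres -> cubic metres
    r = (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 3.0 / r                          # (4*pi*r^2) / ((4/3)*pi*r^3)

for label, litres in [("1 L lab flask", 1.0),
                      ("500 gal tank", 500 * 3.785)]:
    print(f"{label}: {sa_to_vol(litres):.1f} m^2 per m^3")

# ~48 1/m for the flask vs ~4 1/m for the tank: the flask has roughly
# 12x the specific surface area, so heat transfer per unit volume drops
# sharply on scale-up, changing the effective thermal behavior.
```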
In the development of new processes, the design and operation of the pilot and demonstration plant will often run in parallel with the design of the future commercial plant, and the results from pilot testing programs are key to optimizing the commercial plant flowsheet. In cases where process technology has been successfully implemented, it is common for the savings at the commercial scale resulting from pilot testing to significantly outweigh the cost of the pilot plant itself.

Steps to creating a custom pilot plant

Custom pilot plants are commonly designed either for research or commercial purposes. They can range in size from a small system with no automation and low flow to a highly automated system producing relatively large amounts of product per day. No matter the size, the steps to designing and fabricating a working pilot plant are the same. They are:

Pre-engineering - completing a process flow diagram (PFD), basic piping and instrumentation diagrams (P&IDs) and initial equipment layouts.
Engineering modeling and optimization - 2D and 3D models are created, using simulation software to model the process parameters and scale the chemical processes. This software helps determine system limitations, non-linear chemical and physical changes, and potential equipment sizing. Mass and energy balances, finalized P&IDs and general arrangement drawings are produced. Automation strategies for the system are developed (if needed). Control system programming begins and continues through fabrication and assembly.
Fabrication and assembly - after an optimized design has been determined, the custom pilot plant is fabricated and assembled. Pilot plants can be assembled either on-site or off-site as modular skids that are constructed and tested in a controlled environment.
Testing - testing of completed systems, including system controls, is conducted to ensure proper system function.
Installation and startup - if constructed off-site, pilot skids are installed on-site. After all equipment is in place, full system startup is completed by integrating the system with existing plant utilities and controls. Full operation is tested and affirmed.
Training - operator training is completed and full system documentation is handed over.

See also

Chemical engineering
Operations research
Process engineering

References

Bibliography

M. Levin (editor), Pharmaceutical Process Scale-Up (Drugs and the Pharmaceutical), Informa Healthcare, 3rd edition, 2011.
M. Lackner (editor), Scale-up in Combustion, ProcessEng Engineering GmbH, Wien, 2009.
M. Zlokarnik, Scale-up in Chemical Engineering, Wiley-VCH Verlag GmbH & Co. KGaA, 2nd edition, 2006.
Richard Palluzi, Pilot Plants: Design, Construction and Operation, McGraw-Hill, February 1992.
Richard Palluzi, "Pilot Plants", Chemical Engineering, March 1990.

Industrial engineering Industrial processes
Pilot plant
[ "Engineering" ]
1,840
[ "Industrial engineering" ]
3,051,050
https://en.wikipedia.org/wiki/Runaway%20breakdown
Runaway breakdown is a theory of lightning initiation proposed by Alex Gurevich in 1992. Electrons in air have a mean free path of ~1 cm. Fast electrons, which move at a large fraction of the speed of light, have a mean free path up to 100 times longer. Given the longer free paths, an electric field can accelerate these electrons to energies far higher than those of initially static electrons. If they strike air molecules, more relativistic electrons will be released, creating an avalanche multiplication of "runaway" electrons. This process, called the relativistic runaway electron avalanche, has been hypothesized to lead to electrical breakdown in thunderstorms, but only when a source of high-energy electrons from a cosmic ray is present to start the "runaway" process. The resulting conductive plasma trail, many tens of meters long, is suggested to supply the "seed" which triggers a lightning flash.

See also

Spark gap
Avalanche breakdown
Electron avalanche

References

External links

How cosmic rays trigger lightning strikes
"Runaway Breakdown and the Mysteries of Lightning", Physics Today, May 2005. Also available online at: http://www.phy.olemiss.edu/~jgladden/phys510/spring06/Gurevich.pdf
Nova Science Now segment on lightning, aired on PBS October 18, 2005

Lightning
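The avalanche multiplication described above is, at its simplest, exponential growth of the runaway-electron population with distance. The sketch below is a toy model only; the seed count and the e-folding (avalanche) length are assumptions, since the real values depend on field strength and air density.

```python
# Toy model of relativistic runaway electron avalanche growth:
#   N(z) = N0 * exp(z / L), with L the avalanche (e-folding) length.
# N0 and L below are assumptions for illustration, not measured values.
import math

def runaway_population(n0: float, z_m: float, efold_m: float) -> float:
    """Runaway-electron count after the avalanche advances z metres."""
    return n0 * math.exp(z_m / efold_m)

seed = 100.0   # electrons knocked loose by a cosmic-ray shower (assumed)
print(f"{runaway_population(seed, z_m=700.0, efold_m=100.0):.2e}")
# -> ~1.10e+05: seven e-foldings turn a small seed into a large avalanche
```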
Runaway breakdown
[ "Physics", "Materials_science", "Astronomy" ]
275
[ "Physical phenomena", "Materials science stubs", "Plasma physics", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Electrical phenomena", "Plasma physics stubs", "Lightning", "Electromagnetism stubs" ]
3,051,078
https://en.wikipedia.org/wiki/Air%20entrainment
Air entrainment in concrete is the intentional creation of tiny air bubbles in a batch by adding an air-entraining agent during mixing. The agent, a form of surfactant (a surface-active substance that in this instance reduces the surface tension between water and solids), allows bubbles of a desired size to form. These are created during concrete mixing (while the slurry is in its liquid state), with most surviving to remain part of it when hardened. Air entrainment makes concrete more workable during placement, and increases its durability when hardened, particularly in climates subject to freeze-thaw cycles.

Air-entrained concrete differs from foam concrete, which is made by introducing stable air bubbles through the use of a foaming agent, is lightweight (has lower density), and is commonly used for insulation or filling voids. Air-entrained concrete instead contains evenly distributed tiny air voids introduced through admixtures to enhance durability, workability, and resistance to freeze-thaw cycles without significantly reducing its overall density and without negative impact on its mechanical properties, allowing it to be used in objects such as bridges or roads built using roller-compacted concrete. Another difference is the manufacturing process: foam concrete involves the creation of a foam mixture separately, which is then mixed with cement, sand, and water to form the final product, while air-entrained concrete is produced by adding specialized admixtures or additives directly into the concrete mix during mixing to create small air bubbles throughout the mixture. Approximately 85% of concrete manufactured in the United States contains air-entraining agents, which are considered the fifth ingredient in concrete manufacturing technology.

Benefits

Air entrainment is beneficial for the properties of both fresh and hardened concrete. In fresh concrete, air entrainment improves workability and makes the material easier to handle and pump. It also helps prevent bleeding and segregation, unwanted processes that can occur during mixing. In hardened concrete, air entrainment strengthens the material by making it better able to withstand freeze-thaw cycles. It also increases its resistance to cracking, improves durability against fire damage, and enhances overall strength. In short, adding air to concrete during mixing makes it easier to handle at first, and later helps it stay strong even under harsh conditions such as freezing temperatures or fire exposure. Tiny air bubbles in air-entrained concrete act as internal cushioning, absorbing energy during impact and increasing resistance to physical forces such as shock or vibration. This improved impact resistance helps minimize surface damage and prevent the propagation of cracks or breaks, thereby increasing overall durability. Additionally, the air voids, acting as pressure relief zones, allow water or moisture to expand during freeze-thaw cycles without causing internal stresses and subsequent cracking.

Process

Though hardened concrete appears as a compact solid, it is actually highly porous (typical concrete porosity: ~6–12 vol.%), having small capillaries resulting from the evaporation of water beyond that required for the hydration reaction. A water-to-cement ratio (w/c) of approximately 0.38 (that is, 38 lbs of water for every 100 lbs of cement) is required for all the cement particles to hydrate. Water beyond that is surplus and is used to make the plastic concrete more workable, that is, more easily flowing and less viscous.
To achieve a suitable slump to be workable, most concrete has a w/c of 0.45 to 0.60 at the time of placement, which means there is substantial excess water that will not react with cement. When the excess water evaporates it leaves little pores in its place. Environmental water can later fill these voids through capillary action. During freeze-thaw cycles, the water occupying those pores expands and creates tensile stresses which lead to tiny cracks. These cracks allow more water into the concrete and the cracks enlarge. Eventually the concrete spalls – chunks break off. The failure of reinforced concrete is most often due to this cycle, which is accelerated by moisture reaching the reinforcing steel, causing it to rust, expand, create more cracks, let in more water, and aggravate the decomposition cycle.

Air entrainment must be tightly controlled to avoid naturally occurring entrainment, that is, the unintentional or undesirable presence of air voids in concrete caused by factors such as improper mixing or insufficient consolidation. Such uncontrolled voids may reduce strength and durability because of their inconsistent sizes and placement, making them undesirable for achieving specific concrete performance properties.

Various materials can impact the properties of air-entraining admixtures in several ways. Fly ash, a supplementary cementitious material, improves paste packing due to its smaller particles, resulting in better flow and finishing of the concrete. Fly ash's lower specific gravity increases the paste content for a given water-to-cementitious-material ratio (w/cm) compared to ordinary Portland cement. Different types of fly ash require adjustments in air-entraining admixture dosage due to variations in their chemical compositions and air-loss characteristics. Class F fly ash typically demands higher levels of admixture to maintain desired entrained-air levels compared to Class C fly ash. Silica fume is another material that influences air-entrained concrete. Its fine particle size and smoothness necessitate higher dosages of air-entraining admixture than traditional concretes without silica fume. Slag cement contributes improved packing and increased paste volume fraction due to its lower specific gravity than ordinary Portland cement. Including natural pozzolans like rice husk ash or metakaolin affects fineness and composition, which further influence the required dosage of air-entraining admixtures in mixed concretes containing these materials.

Size

The air bubbles typically have a diameter of and are closely spaced. The voids they create can be compressed a little, acting to reduce or absorb stresses from freezing. Air entraining was introduced in the 1930s, and most modern concrete, especially if subjected to freezing temperatures, is air-entrained. The bubbles contribute to workability by acting as a sort of lubricant for all the aggregates and large sand particles in a concrete mix.

Entrapped air

In addition to intentionally entrained air, hardened concrete also typically contains some amount of entrapped air. These are larger bubbles, creating larger voids, known as "honeycombing", and generally are less evenly distributed than entrained air. Proper concrete placement, which often includes vibration to settle the concrete into place and drive out entrapped air, particularly in wall forms, is essential to minimizing deleterious entrapped air.
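The excess-water mechanism described in the Process section lends itself to a quick back-of-envelope check: any mix water above the roughly 0.38 w/c needed for full hydration is surplus and will leave capillary pores behind. The cement content below is an assumed, illustrative figure.

```python
# Back-of-envelope surplus-water estimate; the ~0.38 hydration requirement
# comes from the text above, the cement content is an assumed figure.

def surplus_water(cement_kg: float, wc: float, wc_hydration: float = 0.38) -> float:
    """Mass of mix water (kg) that will not react with the cement."""
    return cement_kg * max(wc - wc_hydration, 0.0)

cement = 350.0   # kg of cement per cubic metre of concrete (assumed)
for wc in (0.45, 0.50, 0.60):
    print(f"w/c = {wc:.2f}: {surplus_water(cement, wc):.1f} kg surplus water per m^3")
# -> 24.5, 42.0 and 77.0 kg of eventual pore-forming water, respectively
```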
Interference of carbon-containing fly ash

Using fly ash, a byproduct of coal combustion, as an additive in concrete production is a common practice due to its environmental and cost benefits. Still, residual carbon in fly ash can interfere with the air-entraining admixtures (AEAs) added to enhance air entrainment in concrete for improved workability and resistance against freezing and thawing conditions. This issue has become more pronounced with the implementation of low-NOx combustion technologies. The interactions between AEAs and fly ash in concrete mixtures are driven largely by this residual carbon. The amount of carbon and its properties, such as particle size and surface chemistry, determine the adsorption capacity for AEAs. The type of fuel used during combustion affects both the amount and properties of the residual carbon present. Fly ash derived from bituminous coal generally has a higher carbon content than that produced from sub-bituminous coal or lignite, but exhibits lower AEA adsorption capacity per mass of carbon. Different post-treatment methods are used to improve fly ash quality for concrete utilization. Techniques such as ozonation, thermal treatment, and physical cleaning have shown promise in enhancing performance.

History

Air entrainment was discovered by accident in the mid-1930s. At that time, cement manufacturers were using a grinding aid to enhance the process of grinding cement. This grinding aid was a mixture of various chemicals, including salts of wood resin, which were added to the cement during the grinding process. During the experiments, researchers noticed that adding this grinding aid caused the resulting concrete to exhibit unique properties. Specifically, they observed that the concrete contained tiny, dispersed air bubbles throughout its structure, significantly improving its durability and resistance to freezing and thawing. Further investigations and research were conducted to understand this phenomenon, leading to the realization that the grinding aid was responsible for entraining air into the concrete. This accidental discovery eventually led to intentional air entrainment becoming a standard practice in concrete production. Since then, air-entrained concrete has been the rule rather than the exception, especially in cold climates. Air-entraining agents (AEAs) have been developed and extensively studied to improve resistance against freezing and thawing damage caused by both internal distress and salt scaling.

Future directions

Superabsorbent polymers (SAP) have the potential to replace traditional air-entraining agents (AEAs) in concrete, as they can create stable pore systems that function similarly to the air voids introduced by AEAs. SAP particles absorb water during mixing and form stable, water-filled inclusions in fresh concrete. As cement hydrates and undergoes chemical shrinkage, the pores of the hardening cement paste empty out their water content. The SAP particles then release their absorbed water to compensate for this shrinkage, effectively mitigating autogenous shrinkage and reducing the risk of cracking. The pores created by SAP act as voids similar to those generated by AEAs, improving freeze-thaw resistance and durability. Unlike AEAs, which may lose a portion of entrained air due to factors like long hauling durations or high ambient temperatures, SAP's pore system remains stable regardless of consistency, superplasticizer addition, or placement method.
SAP is thus a reliable alternative for achieving controlled air entrainment in concrete construction. By using SAP instead of traditional AEAs, construction practitioners can enhance freeze-thaw resistance without losing a significant portion of the entrained air bubbles during mixing or placement.

References

Chemistry of construction methods Concrete
Air entrainment
[ "Engineering" ]
2,118
[ "Structural engineering", "Concrete" ]
3,051,419
https://en.wikipedia.org/wiki/Ethylene%20glycol%20dimethacrylate
Ethylene glycol dimethacrylate (EGDMA) is a diester formed by condensation of two equivalents of methacrylic acid and one equivalent of ethylene glycol. EGDMA can be used in free-radical copolymer crosslinking reactions. When used with methyl methacrylate, it reaches the gel point at relatively low concentrations because of the nearly equivalent reactivities of all the double bonds involved. It is used as a monomer to prepare hydroxyapatite/poly(methyl methacrylate) composites. Its toxicity profile has been fairly well studied. It is sometimes called ethylene dimethacrylate.

References

Monomers Methacrylate esters Glycol esters
Ethylene glycol dimethacrylate
[ "Chemistry", "Materials_science" ]
184
[ "Monomers", "Polymer chemistry" ]
3,051,710
https://en.wikipedia.org/wiki/Nano-thermite
Nano-thermite or super-thermite is a metastable intermolecular composite (MIC) characterized by a particle size of its main constituents, a metal fuel and an oxidizer, under 100 nanometers. This allows for high and customizable reaction rates. Nano-thermites contain an oxidizer and a reducing agent, which are intimately mixed on the nanometer scale. MICs, including nano-thermitic materials, are a type of reactive material investigated for military use, as well as for general applications involving propellants, explosives, and pyrotechnics. What distinguishes MICs from traditional thermites is that the oxidizer and the reducing agent, normally iron oxide and aluminium, are in the form of extremely fine powders (nanoparticles). This dramatically increases the reactivity relative to micrometre-sized powder thermite. As the mass-transport mechanisms that slow down the burning rates of traditional thermites are not so important at these scales, the reaction proceeds much more quickly.

Potential uses

Historically, pyrotechnic or explosive applications for traditional thermites have been limited due to their relatively slow energy release rates. Because nanothermites are created from reactant particles with proximities approaching the atomic scale, energy release rates are far greater. MICs or super-thermites are generally developed for military use, propellants, explosives, incendiary devices, and pyrotechnics. Research into military applications of nano-sized materials began in the early 1990s. Because of their highly increased reaction rate, nano-thermitic materials are being studied by the U.S. military with the aim of developing new types of bombs several times more powerful than conventional explosives. Nanoenergetic materials can store more energy than conventional energetic materials and can be used in innovative ways to tailor the release of this energy. Thermobaric weapons are one potential application of nanoenergetic materials.

Types

There are many possible thermodynamically stable fuel-oxidizer combinations. Some of them are:

Aluminium-molybdenum(VI) oxide
Aluminium-copper(II) oxide
Aluminium-iron(II,III) oxide
Antimony-potassium permanganate
Aluminium-potassium permanganate
Aluminium-bismuth(III) oxide
Aluminium-tungsten(VI) oxide hydrate
Aluminium-fluoropolymer (typically Viton)
Titanium-boron (burns to titanium diboride, which belongs to a class of compounds called intermetallic composites)

In military research, aluminium-molybdenum oxide, aluminium-Teflon and aluminium-copper(II) oxide have received considerable attention. Other compositions tested were based on nanosized RDX combined with thermoplastic elastomers. PTFE or another fluoropolymer can be used as a binder for the composition; its reaction with the aluminium, similar to magnesium/Teflon/Viton thermite, adds energy to the reaction. Of the listed compositions, the one with potassium permanganate has the highest pressurization rate. The most common method of preparing nanoenergetic materials is ultrasonication, in quantities of less than 2 g. Some research has been directed at increasing production scales. Due to the very high electrostatic discharge (ESD) sensitivity of these materials, sub-gram scales are currently typical.

Production

Nanoaluminum, or ultra-fine-grain (UFG) aluminum, powders are a key component of most nano-thermitic materials. A method for producing this material is the dynamic gas-phase condensation method, pioneered by Wayne Danen and Steve Son at Los Alamos National Laboratory.
A variant of the method is being used at the Indian Head Division of the Naval Surface Warfare Center. Another method of production is electrothermal synthesis, developed by NovaCentrix, which uses a pulsed plasma arc to vaporize the aluminum. The powders made by the dynamic gas-phase condensation and electrothermal synthesis processes are indistinguishable. A critical aspect of production is the ability to produce particles with sizes in the tens-of-nanometers range, as well as with a limited distribution of particle sizes. In 2002, the production of nano-sized aluminum particles required considerable effort, and commercial sources for the material were limited. An application of the sol-gel method, developed by Randall Simpson, Alexander Gash and others at the Lawrence Livermore National Laboratory, can be used to make the actual mixtures of nano-structured composite energetic materials. Depending on the process, MICs of different densities can be produced. Highly porous and uniform products can be achieved by supercritical extraction. The most common types of production are in liquids or via resonant acoustic mixing; however, more complicated methods like the ones previously mentioned are also used.

Ignition

As with all explosives, a goal of research into nanoscale explosives has been ignition control combined with simplicity. Some can be ignited with laser pulses. MICs have been investigated as a possible replacement for lead compounds (e.g. lead styphnate, lead azide) in percussion caps and electric matches. Compositions based on Al-Bi2O3 tend to be used, and PETN may optionally be added. Aluminium powder can be added to nano explosives; aluminium has a relatively low combustion rate and a high enthalpy of combustion. The products of a thermite reaction, resulting from ignition of the nano-thermitic mixture, are usually metal oxides and elemental metals. At the temperatures prevailing during the reaction, the products can be solid, liquid or gaseous, depending on the components of the mixture.

Hazards

Like conventional thermite, super-thermite reacts at very high temperature and is difficult to extinguish. The reaction produces dangerous ultraviolet (UV) light, requiring that the reaction not be viewed directly or that special eye protection (for example, a welder's mask) be worn. In addition, super-thermites are very sensitive to electrostatic discharge (ESD). Surrounding the metal oxide particles with carbon nanofibers may make nanothermites safer to handle.

See also

Thermate
Pyrotechnic composition

References

External links

Synthesis and Reactivity of a Super-Reactive Metastable Intermolecular Composite Formulation of Al/KMnO4
Metastable Intermolecular Composites for Small Caliber Cartridges and Cartridge Actuated Devices
Performance of Nanocomposite Energetic Materials Al-MoO3

Pyrotechnic compositions Incendiary weapons Explosives Nanoparticles
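For a sense of the energy content behind the fuel-oxidizer pairs listed above, the sketch below estimates the heat released by the classic aluminium/iron(III) oxide pair from standard enthalpies of formation. Nanostructuring changes the release rate, not this thermodynamic total; the enthalpy figures are standard textbook values and the calculation is purely illustrative.

```python
# Stoichiometric energy estimate for 2 Al + Fe2O3 -> Al2O3 + 2 Fe using
# standard enthalpies of formation (kJ/mol); illustrative only.

H_F = {"Fe2O3": -824.2, "Al2O3": -1675.7}   # standard enthalpies of formation
M = {"Al": 26.98, "Fe2O3": 159.69}          # molar masses, g/mol

dH = H_F["Al2O3"] - H_F["Fe2O3"]            # kJ per mole of Fe2O3 consumed
mix_mass = 2 * M["Al"] + M["Fe2O3"]         # grams of stoichiometric mixture

print(f"dH = {dH:.1f} kJ/mol -> {-dH / mix_mass:.2f} kJ per gram of mixture")
# -> dH = -851.5 kJ/mol, about 3.99 kJ/g of thermite mixture
```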
Nano-thermite
[ "Chemistry" ]
1,370
[ "Pyrotechnic compositions", "Explosives", "Explosions" ]
3,051,739
https://en.wikipedia.org/wiki/Exploding%20tree
A tree may explode when stresses in its trunk increase due to extreme cold, heat, or lightning, causing it to split suddenly.

Causes

Cold

Cold weather will cause some trees to shatter by freezing the sap: because the sap contains water, it expands as it freezes, and the bark splits with a sound like a gunshot as the wood contracts while the sap expands. John Claudius Loudon described this effect of cold on trees in his Encyclopaedia of Gardening, in the entry for frosts. Henry Ward Beecher records anecdotal evidence of the wood of instrument cases and carrying boxes splitting in temperatures of during Captain Bach's travels near the Great Slave Lake. Linda Runyon, author of books on wilderness living, recounts her experience of the effect of cold on maple trees. Wally and Shirley Loudon reported the effect of the freeze of December 1968 upon their orchard in Carlton, Washington. To the Sioux of the Dakotas and the Cree, the first new moon of the new year is known, in various dialects, as the "Moon of the Cold-Exploding Trees". Tree sap is a supercooled liquid in cold temperatures. John Hunter observed, in his Treatise on the Blood, that sap within a tree freezes some 17 degrees Fahrenheit below its nominal freezing point.

Lightning

Trees can explode when struck by lightning. The strong electric current is carried mostly by the water-conducting sapwood below the bark, heating it and boiling the water. The pressure of the steam can make the trunk burst. This happens especially with trees whose trunks are already dying or rotting. The more usual result of lightning striking a tree, however, is a lightning scar running down the bark, or simply root damage, whose only visible sign above ground is the dying back of branches that were fed by the damaged root.

Fire

Exploding trees also occur during forest fires and are a risk to smokejumpers. Eucalyptus trees are known to explode during bush fires due to vaporised eucalyptus oils producing an explosive mixture with air. Explosive behaviour of eucalyptus trunks has been observed in both laboratory tests and wildfires in Australia. Aspen trees have also been observed to explode in wildfires. A build-up of steam pressure in tree trunks is theoretically unlikely to lead to an explosion in a rapidly moving fire front, although trees exploding after the initial front has passed, or exploding through other mechanisms, is entirely possible.

April Fools' Day hoax

Exploding trees were the subject of a 2005 April Fools' Day hoax in the United States, covered by National Public Radio, stating that maple trees in New England had been exploding due to a failure to collect their sap, causing pressure to build from the inside. The root pressure in a maple tree is approximately 0.1 MPa (one standard atmosphere), which is insufficient to cause a tree to explode.

See also

Frost crack
Sandbox tree (Hura crepitans), also known as the "dynamite tree"

Footnotes

Similar text can be found in the entry for Frost in Charles Hutton's 1795 Mathematical and Philosophical Dictionary.

References

External links

YouTube video with cold weather and bursting tree bark at 43:53, Wild Russia, Episode 6, Primeval Valleys

Tree Trees
Exploding tree
[ "Chemistry" ]
655
[ "Explosions" ]
3,051,956
https://en.wikipedia.org/wiki/Explosive%20eruption
In volcanology, an explosive eruption is a volcanic eruption of the most violent type. A notable example is the 1980 eruption of Mount St. Helens. Such eruptions result when sufficient gas has dissolved under pressure within a viscous magma, such that expelled lava violently froths into volcanic ash when pressure is suddenly lowered at the vent. Sometimes a lava plug will block the conduit to the summit, and when this occurs, eruptions are more violent. Explosive eruptions can expel as much as per second of rocks, dust, gas and pyroclastic material, averaged over the duration of eruption, that travels at several hundred meters per second as high as into the atmosphere. This cloud may subsequently collapse, creating a fast-moving pyroclastic flow of hot volcanic matter.

Physics

Viscous magmas cool beneath the surface before they erupt. As they do, bubbles exsolve from the magma. Because the magma is viscous, the bubbles remain trapped in it. As the magma nears the surface, the bubbles, and thus the magma, increase in volume. The pressure of the magma builds until the blockage is blasted out in an explosive eruption through the weakest point in the cone, usually the crater. (However, in the case of the eruption of Mount St. Helens, the pressure was released on the side of the volcano rather than at the crater.) The release of pressure causes more gas to exsolve, and it does so explosively. The gas may expand at hundreds of metres per second, expanding upward and outward. As the eruption progresses, a chain reaction causes the magma to be ejected at higher and higher speeds.

Volcanic ash formation

The violently expanding gas disperses and breaks up magma, forming an emulsion of gas and magma called volcanic ash. The cooling of the gas in the ash as it expands chills the magma fragments, often forming tiny glass shards recognisable as portions of the walls of former liquid bubbles. In more fluid magmas the bubble walls may have time to reform into spherical liquid droplets. The final state of the emulsion depends strongly on the ratio of liquid to gas. Gas-poor magmas end up cooling into rocks with small cavities, becoming vesicular lava. Gas-rich magmas cool to form rocks with cavities that nearly touch, with an average density less than that of water, forming pumice. Meanwhile, other material can be accelerated with the gas, becoming volcanic bombs. These can travel with so much energy that large ones can create craters when they hit the ground.

Pyroclastic flows

When an emulsion of volcanic gas and magma falls back to the ground, it can create a density current called a pyroclastic flow. The emulsion is somewhat fluidised by the gas, allowing it to spread. These flows can often climb over obstacles and devastate human life. Earthly pyroclastic flows can travel at up to per hour and reach temperatures of . The high temperatures can burn flammable materials in the flow's path, including wood, vegetation, and buildings. Alternatively, when an eruption comes into contact with large amounts of snow, crater lakes, or wet soil, water mixing into the flow can create lahars, which pose significant known risks worldwide.

Types

Vulcanian eruption
Peléan eruption
Plinian eruption

Consequences:

Eruption column
Pyroclastic flow
Pyroclastic fall
Pyroclastic surge

Other mechanisms

An explosive eruption is usually triggered by exsolution of volatiles, but there are other ways to create an explosive eruption.

Phreatic eruption

A phreatic eruption can occur when hot water under pressure is depressurised.
Depressurisation reduces the boiling point of the water, so when depressurised the water suddenly boils. A phreatic eruption may also happen when groundwater is suddenly heated and flashes to steam. When the water turns into steam, it expands at supersonic speeds, up to 1,700 times its original volume. This can be enough to shatter solid rock and hurl rock fragments hundreds of metres. A phreatomagmatic eruption contains magmatic material, in contrast to a phreatic eruption, which does not.

Clathrate hydrates

One mechanism for explosive cryovolcanism is cryomagma making contact with clathrate hydrates. Clathrate hydrates, if exposed to warm temperatures, readily decompose. A 1982 article pointed out the possibility that the production of pressurised gas upon destabilisation of clathrate hydrates making contact with warm rising magma could produce an explosion that breaks through the surface, resulting in explosive cryovolcanism.

Water vapor in a vacuum

If a fracture reaches the surface of an icy body and the column of rising water is exposed to the near-vacuum at the surface of most icy bodies, it will immediately start to boil, because its vapor pressure is much greater than the ambient pressure. In addition, any volatiles in the water will exsolve. The combination of these processes will release droplets and vapor, which can rise up the fracture, creating a plume. This is thought to be partially responsible for Enceladus's ice plumes.

See also

Effusive eruption
Volcanic explosivity index

References

External links
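The "up to 1,700 times" expansion figure for flashing water can be checked with an ideal-gas estimate, comparing the molar volumes of liquid water and of steam at 100 °C and atmospheric pressure. A minimal sketch:

```python
# Ideal-gas check of the steam expansion factor quoted above.
R = 8.314            # J/(mol*K)
T = 373.15           # K, boiling point of water at 1 atm
P = 101_325.0        # Pa, atmospheric pressure
M_WATER = 18.015e-3  # kg/mol
RHO_LIQ = 958.4      # kg/m^3, density of liquid water near 100 C

v_steam = R * T / P            # molar volume of steam, m^3/mol
v_liquid = M_WATER / RHO_LIQ   # molar volume of liquid water, m^3/mol

print(f"expansion factor ~ {v_steam / v_liquid:.0f}x")
# -> ~1600x at 100 C; somewhat higher for superheated water, consistent
#    with the "up to 1,700 times" figure in the text
```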
Explosive eruption
[ "Chemistry" ]
1,068
[ "Explosive eruptions", "Explosions" ]
3,051,962
https://en.wikipedia.org/wiki/Homography
In projective geometry, a homography is an isomorphism of projective spaces, induced by an isomorphism of the vector spaces from which the projective spaces derive. It is a bijection that maps lines to lines, and thus a collineation. In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that this is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity, projective transformation, and projective collineation.

Historically, homographies (and projective spaces) were introduced to study perspective and projections in Euclidean geometry, and the term homography, which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which extended Euclidean and affine spaces by the addition of new points called points at infinity. The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not involve explicitly any field (incidence geometry, see also synthetic geometry); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations".

For the sake of simplicity, unless otherwise stated, the projective spaces considered in this article are supposed to be defined over a (commutative) field. Equivalently, Pappus's hexagon theorem and Desargues's theorem are supposed to be true. A large part of the results remain true, or may be generalized, for projective geometries in which these theorems do not hold.

Geometric motivation

Historically, the concept of homography was introduced to understand, explain and study visual perspective, and, specifically, the difference in appearance of two plane objects viewed from different points of view. In three-dimensional Euclidean space, a central projection from a point O (the center) onto a plane P that does not contain O is the mapping that sends a point A to the intersection (if it exists) of the line OA and the plane P. The projection is not defined if the point A belongs to the plane passing through O and parallel to P. The notion of projective space was originally introduced by extending the Euclidean space, that is, by adding points at infinity to it, in order to define the projection for every point except O.

Given another plane Q, which does not contain O, the restriction to Q of the above projection is called a perspectivity. With these definitions, a perspectivity is only a partial function, but it becomes a bijection if extended to projective spaces. Therefore, this notion is normally defined for projective spaces. The notion is also easily generalized to projective spaces of any dimension, over any field, in the following way: if f is a perspectivity from P to Q, and g a perspectivity from Q to P with a different center, then g ∘ f is a homography from P to itself, which is called a central collineation when the dimension of P is at least two. (See below.)
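As a tiny numeric illustration of the central projection just described, the sketch below projects a point A from a center O onto the plane z = 0 by intersecting the line OA with that plane. The coordinates are arbitrary assumptions.

```python
# Central projection from O onto the plane z = 0: the image of A is the
# point of line OA with vanishing z-coordinate (undefined if OA is
# parallel to the plane). Coordinates below are arbitrary.

def central_projection_z0(o, a):
    """Intersection of line OA with the plane z = 0, or None if parallel."""
    ox, oy, oz = o
    ax, ay, az = a
    if oz == az:                 # direction parallel to the plane
        return None              # image is a point at infinity
    t = oz / (oz - az)           # parameter where the z-coordinate vanishes
    return (ox + t * (ax - ox), oy + t * (ay - oy), 0.0)

print(central_projection_z0(o=(0.0, 0.0, 5.0), a=(1.0, 2.0, 3.0)))
# -> (2.5, 5.0, 0.0)
```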
Originally, a homography was defined as the composition of a finite number of perspectivities. It is a part of the fundamental theorem of projective geometry (see below) that this definition coincides with the more algebraic definition sketched in the introduction and detailed below.

Definition and expression in homogeneous coordinates

A projective space P(V) of dimension n over a field K may be defined as the set of the lines through the origin in a K-vector space V of dimension n + 1. If a basis of V has been fixed, a point of V may be represented by a point (x_0, ..., x_n) of K^(n+1). A point of P(V), being a line in V, may thus be represented by the coordinates of any nonzero point of this line, which are thus called homogeneous coordinates of the projective point.

Given two projective spaces P(V) and P(W) of the same dimension, a homography is a mapping from P(V) to P(W) which is induced by an isomorphism of vector spaces f : V → W. Such an isomorphism induces a bijection from P(V) to P(W), because of the linearity of f. Two such isomorphisms, f and g, define the same homography if and only if there is a nonzero element a of K such that g = af.

This may be written in terms of homogeneous coordinates in the following way: a homography φ may be defined by a nonsingular (n+1) × (n+1) matrix [a_i,j], called the matrix of the homography. This matrix is defined up to the multiplication by a nonzero element of K. The homogeneous coordinates [x_0 : ... : x_n] of a point and the coordinates [y_0 : ... : y_n] of its image by φ are related by

y_i = a_i,0 x_0 + ... + a_i,n x_n,   for i = 0, ..., n.

When the projective spaces are defined by adding points at infinity to affine spaces (projective completion), so that an affine point (x_1, ..., x_n) has homogeneous coordinates [1 : x_1 : ... : x_n], the preceding formulas become, in affine coordinates,

y_i = (a_i,0 + a_i,1 x_1 + ... + a_i,n x_n) / (a_0,0 + a_0,1 x_1 + ... + a_0,n x_n),   for i = 1, ..., n,

which generalizes the expression of the homographic function of the next section. This defines only a partial function between affine spaces, which is defined only outside the hyperplane where the denominator is zero.

Homographies of a projective line

The projective line over a field K may be identified with the union of K and a point, called the "point at infinity" and denoted by ∞ (see Projective line). With this representation of the projective line, the homographies are the mappings

z ↦ (az + b) / (cz + d),   with a, b, c, d in K and ad − bc ≠ 0,

which are called homographic functions or linear fractional transformations.

In the case of the complex projective line, which can be identified with the Riemann sphere, the homographies are called Möbius transformations. These correspond precisely with those bijections of the Riemann sphere that preserve orientation and are conformal.

In the study of collineations, the case of projective lines is special due to the small dimension. When the line is viewed as a projective space in isolation, any permutation of the points of a projective line is a collineation, since every set of points is collinear. However, if the projective line is embedded in a higher-dimensional projective space, the geometric structure of that space can be used to impose a geometric structure on the line. Thus, in synthetic geometry, the homographies and the collineations of the projective line that are considered are those obtained by restrictions to the line of collineations and homographies of spaces of higher dimension. This means that the fundamental theorem of projective geometry (see below) remains valid in the one-dimensional setting. A homography of a projective line may also be properly defined by insisting that the mapping preserves cross-ratios.
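A small numeric sketch of the formulas above: a homography of the projective plane acts on homogeneous coordinates through a nonsingular 3 × 3 matrix, defined only up to a nonzero scalar. The matrix and points here are arbitrary examples.

```python
# Applying a plane homography in homogeneous coordinates; the matrix is an
# arbitrary nonsingular example, defined only up to a nonzero scale factor.
import numpy as np

H = np.array([[1.0, 0.2, 3.0],
              [0.0, 1.0, 1.0],
              [0.1, 0.0, 1.0]])

def apply_homography(H, pt):
    """Image of the affine point pt = (x, y); None if sent to infinity."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    if np.isclose(w, 0.0):
        return None              # image lies on the line at infinity
    return (x / w, y / w)        # back to affine coordinates

print(apply_homography(H, (2.0, 4.0)))        # finite image point
print(apply_homography(2.5 * H, (2.0, 4.0)))  # same: H is defined up to scale
```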
Projective frame and coordinates

A projective frame or projective basis of a projective space of dimension n is an ordered set of n + 2 points such that no hyperplane contains n + 1 of them. A projective frame is sometimes called a simplex, although a simplex in a space of dimension n has at most n + 1 vertices.

Projective spaces over a commutative field are considered in this section, although most results may be generalized to projective spaces over a division ring.

Let P(V) be a projective space of dimension n, where V is a K-vector space of dimension n + 1, and let p : V ∖ {0} → P(V) be the canonical projection that maps a nonzero vector to the vector line that contains it.

For every frame of P(V), there exists a basis e_0, ..., e_n of V such that the frame is (p(e_0), ..., p(e_n), p(e_0 + ... + e_n)), and this basis is unique up to the multiplication of all its elements by the same nonzero element of K. Conversely, if e_0, ..., e_n is a basis of V, then (p(e_0), ..., p(e_n), p(e_0 + ... + e_n)) is a frame of P(V).

It follows that, given two frames, there is exactly one homography mapping the first one onto the second. In particular, the only homography fixing the points of a frame is the identity map. This result is much more difficult in synthetic geometry (where projective spaces are defined through axioms). It is sometimes called the first fundamental theorem of projective geometry.

Every frame allows one to define projective coordinates, also known as homogeneous coordinates: every point may be written as p(v); the projective coordinates of p(v) on this frame are the coordinates of v on the basis (e_0, ..., e_n). It is not difficult to verify that changing the e_i and v, without changing the frame nor p(v), results in multiplying the projective coordinates by the same nonzero element of K.

The projective space P_n(K) = P(K^(n+1)) has a canonical frame consisting of the images by p of the elements of the canonical basis of K^(n+1) (the tuples having only one nonzero entry, equal to 1), together with p((1, 1, ..., 1)). On this basis, the homogeneous coordinates of p(v) are simply the entries of the tuple v. Given another projective space P(W) of the same dimension, and a frame F of it, there is one and only one homography h mapping F onto the canonical frame of P_n(K). The projective coordinates of a point a on the frame F are the homogeneous coordinates of h(a) on the canonical frame of P_n(K).
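The "exactly one homography between two frames" statement can be made computational: in the projective plane a frame is four points, no three collinear, and the unique homography between two frames can be recovered numerically with a standard direct linear transform (an SVD null-space solve). The points below are arbitrary assumptions.

```python
# Recovering the unique plane homography that maps one projective frame
# (4 points, no 3 collinear) onto another, via the direct linear transform.
import numpy as np

def homography_from_frames(src, dst):
    """3x3 matrix (up to scale) sending the 4 src points to the 4 dst points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)   # null vector of the 8x9 system

src = [(0, 0), (1, 0), (0, 1), (1, 1)]     # one frame of the plane
dst = [(0, 0), (2, 0), (0, 2), (3, 3)]     # its assumed image
H = homography_from_frames(src, dst)

p = H @ np.array([1.0, 1.0, 1.0])
print(p[:2] / p[2])   # -> approximately [3. 3.], as required
```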
Central collineations

In the above sections, homographies have been defined through linear algebra. In synthetic geometry, they are traditionally defined as the composition of one or several special homographies called central collineations. It is a part of the fundamental theorem of projective geometry that the two definitions are equivalent.

In a projective space, P, of dimension n ≥ 2, a collineation of P is a bijection from P onto P that maps lines onto lines. A central collineation (traditionally these were called perspectivities, but this term may be confusing, having another meaning; see Perspectivity) is a bijection α from P to P, such that there exists a hyperplane H (called the axis of α), which is fixed pointwise by α (that is, α(X) = X for all points X in H), and a point O (called the center of α), which is fixed linewise by α (any line through O is mapped to itself by α, but not necessarily pointwise). There are two types of central collineations: elations are the central collineations in which the center is incident with the axis, and homologies are those in which the center is not incident with the axis. A central collineation is uniquely defined by its center, its axis, and the image α(P) of any given point P that differs from the center O and does not belong to the axis. (The image α(Q) of any other point Q is the intersection of the line defined by O and Q and the line passing through α(P) and the intersection with the axis of the line defined by P and Q.)

A central collineation is a homography defined by an (n+1) × (n+1) matrix that has an eigenspace of dimension n. It is a homology if the matrix has another eigenvalue and is therefore diagonalizable. It is an elation if all the eigenvalues are equal and the matrix is not diagonalizable.

The geometric view of a central collineation is easiest to see in a projective plane. Given a central collineation α, consider a line ℓ that does not pass through the center O, and its image under α, ℓ′ = α(ℓ). Setting R = ℓ ∩ ℓ′, the axis of α is some line M through R. The image of any point A of ℓ under α is the intersection of OA with ℓ′. The image of a point B that does not belong to ℓ may be constructed in the following way: let S = AB ∩ M; then α(B) is the intersection of the line Sα(A) with OB.

The composition of two central collineations, while still a homography in general, is not a central collineation. In fact, every homography is the composition of a finite number of central collineations. In synthetic geometry, this property, which is a part of the fundamental theorem of projective geometry, is taken as the definition of homographies.

Fundamental theorem of projective geometry

There are collineations besides the homographies. In particular, any field automorphism σ of a field F induces a collineation of every projective space over F by applying σ to all homogeneous coordinates (over a projective frame) of a point. These collineations are called automorphic collineations.

The fundamental theorem of projective geometry consists of the three following theorems.

Given two projective frames of a projective space P, there is exactly one homography of P that maps the first frame onto the second one.
If the dimension of a projective space P is at least two, every collineation of P is the composition of an automorphic collineation and a homography. In particular, over the reals, every collineation of a projective space of dimension at least two is a homography.
Every homography is the composition of a finite number of perspectivities. In particular, if the dimension of the implied projective space is at least two, every homography is the composition of a finite number of central collineations.

If projective spaces are defined by means of axioms (synthetic geometry), the third part is simply a definition. On the other hand, if projective spaces are defined by means of linear algebra, the first part is an easy corollary of the definitions. Therefore, the proof of the first part in synthetic geometry, and the proof of the third part in terms of linear algebra, are both fundamental steps of the proof of the equivalence of the two ways of defining projective spaces.

Homography groups

As every homography has an inverse mapping and the composition of two homographies is another homography, the homographies of a given projective space form a group. For example, the Möbius group is the homography group of any complex projective line.

As all the projective spaces of the same dimension over the same field are isomorphic, the same is true for their homography groups. They are therefore considered as a single group acting on several spaces, and only the dimension and the field appear in the notation, not the specific projective space.

Homography groups, also called projective linear groups, are denoted PGL(n + 1, F) when acting on a projective space of dimension n over a field F. The above definition of homographies shows that PGL(n + 1, F) may be identified with the quotient group GL(n + 1, F) / F^×I, where GL(n + 1, F) is the general linear group of the invertible (n + 1) × (n + 1) matrices, and F^×I is the group of the products of the identity matrix of size n + 1 by the nonzero elements of F.

When F is a Galois field GF(q), the homography group is written PGL(n + 1, q). For example, PGL(2, 7) acts on the eight points of the projective line over the finite field GF(7), while PGL(2, 4), which is isomorphic to the alternating group A5, is the homography group of the projective line with five points.

The homography group PGL(n + 1, F) is a subgroup of the collineation group of the collineations of a projective space of dimension n. When the points and lines of the projective space are viewed as a block design, whose blocks are the sets of points contained in a line, it is common to call the collineation group the automorphism group of the design.

Cross-ratio

The cross-ratio of four collinear points is an invariant under homographies that is fundamental for the study of the homographies of the lines.

Three distinct points a, b and c on a projective line over a field F form a projective frame of this line. There is therefore a unique homography h of this line onto F ∪ {∞} that maps a to ∞, b to 0, and c to 1. Given a fourth point d on the same line, the cross-ratio of the four points a, b, c and d, denoted (a, b; c, d), is the element h(d) of F ∪ {∞}. In other words, if d has homogeneous coordinates [k : 1] over the projective frame (a, b, c), then (a, b; c, d) = k.
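The cross-ratio convention just defined, h(d) for the homography sending a → ∞, b → 0, c → 1, works out over the reals to (a, b; c, d) = ((d − b)(c − a)) / ((d − a)(c − b)), and its invariance under homographies is easy to check numerically. A minimal sketch with arbitrary sample points:

```python
# Cross-ratio (a, b; c, d): image of d under the homography taking
# a -> infinity, b -> 0, c -> 1. Invariance is checked against an
# arbitrary linear fractional map.

def cross_ratio(a, b, c, d):
    return ((d - b) * (c - a)) / ((d - a) * (c - b))

a, b, c, d = 0.0, 1.0, 2.0, 3.0
print(cross_ratio(a, b, c, d))                 # -> 1.333... (= 4/3)

f = lambda x: (2 * x + 1) / (x + 3)            # a homography (ad - bc = 5)
print(cross_ratio(f(a), f(b), f(c), f(d)))     # -> 1.333..., unchanged
```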
Over a ring

Suppose A is a ring and U is its group of units. Homographies act on a projective line over A, written P(A), consisting of points with projective coordinates. The homographies on P(A) are described by matrix mappings acting on these coordinates. When A is a commutative ring, the homography may be written as a fraction of affine expressions in the variable; otherwise the linear fractional transformation is seen as an equivalence of projective coordinates.

The homography group of the ring of integers Z is the modular group PSL(2, Z). Ring homographies have been used in quaternion analysis, and with dual quaternions to facilitate screw theory. The conformal group of spacetime can be represented with homographies where A is the composition algebra of biquaternions.

Periodic homographies

A homography is periodic when the ring is Z/nZ (the integers modulo n): for instance, the matrix [[1, 1], [0, 1]] induces the map z ↦ z + 1, whose nth iterate is the identity modulo n. Arthur Cayley was interested in periodicity when he calculated iterates in 1879. In his review of a brute force approach to periodicity of homographies, H. S. M. Coxeter gave this analysis: a real homography is involutory (of period 2) if and only if a + d = 0. If it is periodic with period m > 2, then it is elliptic, and no loss of generality occurs by assuming that ad − bc = 1. Since the characteristic roots are exp(±hπi/m), where gcd(h, m) = 1, the trace is a + d = 2 cos(hπ/m).

See also

W-curve

Notes

References

Translated from the 1977 French original by M. Cole and S. Levy; fourth printing of the 1987 English translation.

Further reading

Patrick du Val (1964), Homographies, Quaternions and Rotations, Oxford Mathematical Monographs, Clarendon Press, Oxford.
Gunter Ewald (1971), Geometry: An Introduction, page 263, Belmont: Wadsworth Publishing.

External links

Projective geometry Transformation (function)
Homography
[ "Mathematics" ]
3,607
[ "Geometry", "Transformation (function)" ]
3,051,997
https://en.wikipedia.org/wiki/Flood%20barrier
A flood barrier, surge barrier or storm surge barrier is a specific type of floodgate, designed to prevent a storm surge or spring tide from flooding the protected area behind the barrier. A surge barrier is almost always part of a larger flood protection system consisting of floodwalls, levees (also known as dikes), and other constructions and natural geographical features. Flood barrier may also refer to barriers placed around or at individual buildings to keep floodwaters from entering those buildings.

Examples

Delta Works

The Delta Works in the Netherlands is the largest flood protection project in the world. This project consists of a number of surge barriers, the Oosterscheldekering being the largest surge barrier in the world, long. Other examples include the Maeslantkering, Haringvlietdam and the Hartelkering.

Thames Barrier

The Thames Barrier is the world's second largest movable flood barrier (after the Oosterscheldekering and the Haringvlietdam) and is located downstream of central London. Its purpose is to prevent London from being flooded by exceptionally high tides and storm surges moving up from the North Sea. It needs to be raised (closed) only during high tide; at ebb tide it can be lowered to release the water that backs up behind it.

New Orleans

In 2007 the United States Army Corps of Engineers started construction of an ambitious project that aimed to protect the city from storm surge flooding by 2011. The IHNC Lake Borgne Surge Barrier, at the confluence of these waterways, is the largest in the United States; it prevents storm surge from the Gulf of Mexico from flooding the area. The new Seabrook floodgate prevents a storm surge from entering from Lake Pontchartrain. The GIWW West Closure Complex closes the Gulf Intracoastal Waterway to protect the west side of the city. This complex is unique in that it contains the world's largest pumping station, necessary to pump out the rainwater that is discharged into the protected side of the canal during a hurricane.

Eider Barrage

The Eider Barrage is located at the mouth of the river Eider near Tönning on Germany's North Sea coast. Its main purpose is protection from storm surges from the North Sea. It is Germany's largest coastal protection structure.

St. Petersburg Dam

The Saint Petersburg Dam (officially the Saint Petersburg Flood Prevention Facility Complex) is a barrier separating the Gulf of Finland from Neva Bay to protect the city of Saint Petersburg, Russia, from coastal flooding. The Soviet Union started construction of the barrier in 1978, and it was completed and made operational in 2011.

New England

The New Bedford Harbor Hurricane Barrier protects the city of New Bedford, Massachusetts, with a mostly immovable barrier of stone and fill. It has three land doors and one marine door for access in calm seas. The nearby Fox Point Hurricane Barrier protects the city of Providence, Rhode Island. The US Army Corps of Engineers also owns and operates the hurricane barrier at Stamford, CT.

Venice

The MOSE Project is intended to protect the city of Venice, Italy, and the Venetian Lagoon from flooding.

River Foss Barrier

The River Foss in York, UK, has a barrier to control the inflow of fast-moving water from the River Ouse, which may overspill its banks, back up the Foss, and flood surrounding properties.

Proposed flood barriers

New York Harbor

The New York Harbor Storm-Surge Barrier is a proposed regional flood barrier system that would protect the harbor and the New York – New Jersey metropolitan region.
Ike Dike

The Ike Dike is a proposed flood barrier that would protect Houston, Texas.

Perimeter flood barriers

Flood barriers may be placed temporarily or permanently around individual buildings or at building entrances to keep floodwaters from entering those buildings. A wall constructed of sandbags is an example of a temporary barrier. A reinforced concrete wall is an example of a permanent barrier.

Temporary barriers

Sandbags have traditionally been used as temporary flood barriers.

References

Dams Hydrology Water transport infrastructure
Flood barrier
[ "Chemistry", "Engineering", "Environmental_science" ]
804
[ "Hydrology", "Environmental engineering" ]
3,052,185
https://en.wikipedia.org/wiki/Bust/waist/hip%20measurements
Bust/waist/hip measurements (informally called 'body measurements' or 'vital statistics') are a common method of specifying clothing sizes. They match the three inflection points of the female body shape. In human body measurement, these three sizes are the circumferences of the bust, waist and hips, usually rendered as xx–yy–zz in inches or centimeters. The three sizes are used mostly in fashion, and almost exclusively in reference to women, who, compared to men, are more likely to have a narrow waist relative to their hips.

Measurements and perception

Breast volume will have an effect on the perception of a woman's figure even when bust/waist/hip measurements are nominally the same. Brassière band size is measured below the breasts, not at the bust. A woman with measurements of 36A–27–38 will have a different presentation than a woman with measurements of 34C–27–38. These women have ribcage circumferences differing by 2 inches, but when breast tissue is included the measurements are the same at 38 inches. The result is that the latter woman will appear "bustier" than the former due to the apparent difference in bust-to-hip ratios (narrower shoulders, more prominent breasts), even though both are technically 38–27–38.

Height will also affect the presentation of the figure. A woman who is 36–24–36 (91.5–61–91.5) at tall looks different from a woman who is 36–24–36 at tall. Since the latter woman's figure has greater distance between measuring points, she will likely appear thinner than her former counterpart, even though they share the same measurements.

The specific proportions of 36–24–36 inches (90–60–90 centimeters) have frequently been given as the "hourglass" proportions for women since at least the 1960s.

See also

Female body shape
Physical attractiveness
Waist–hip ratio
Waist-to-height ratio

Notes

References

Sizes in clothing Body shape Waist
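The figures in this article are simple circumferences, so the derived quantities mentioned under "See also" follow directly; the sketch below computes the waist-hip ratio for the oft-quoted 36–24–36 figure and its centimeter conversion. Purely illustrative.

```python
# Waist-hip ratio and inch-to-centimetre conversion for a BWH triple.
IN_TO_CM = 2.54

bust, waist, hips = 36, 24, 36          # the oft-quoted "hourglass" figure
print(f"waist-hip ratio: {waist / hips:.2f}")   # -> 0.67
print([round(x * IN_TO_CM, 1) for x in (bust, waist, hips)])
# -> [91.4, 61.0, 91.4] cm, matching the 90-60-90 centimetre rendering
```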
Bust/waist/hip measurements
[ "Physics", "Mathematics" ]
415
[ "Sizes in clothing", "Quantity", "Physical quantities", "Size" ]
3,052,305
https://en.wikipedia.org/wiki/Polyspermy
In biology, polyspermy describes the fertilization of an egg by more than one sperm. Diploid organisms normally contain two copies of each chromosome, one from each parent. The cell resulting from polyspermy, in contrast, contains three or more copies of each chromosome: one from the egg and one each from multiple sperm. Usually, the result is an unviable zygote. Polyspermy may occur because sperm are too efficient at reaching and fertilizing eggs due to the selective pressures of sperm competition. Such a situation is often deleterious to the female: in other words, the male–male competition among sperm spills over to create sexual conflict.

Physiological polyspermy

Physiological polyspermy occurs when the egg normally accepts more than one sperm but only one of the multiple sperm fuses its nucleus with the nucleus of the egg. Physiological polyspermy is present in some species of vertebrates and invertebrates. Some species utilize physiological polyspermy as the proper mechanism for developing their offspring; these include birds, ctenophora, reptiles and amphibians. Some vertebrates, both amniote and anamniote, including urodele amphibians, cartilaginous fish, birds and reptiles, undergo physiological polyspermy because of the internal fertilization of their yolky eggs.

Sperm triggers egg activation by inducing a rise in the free calcium ion concentration in the cytoplasm of the egg. This induction plays a critical role in both physiologically polyspermic and monospermic species. The rise in calcium causes activation of the egg, which is then altered on both a biochemical and a morphological level. In mammals as well as sea urchins, the sudden rise in calcium concentration occurs because of an influx of calcium ions into the egg. These calcium ions are responsible for the cortical granule reaction, and are also stored in the egg's endoplasmic reticulum. In monospermic fertilization, the typical mode of reproduction in most species, egg activation is likewise studied through analysis of the egg's calcium waves. Species that undergo physiological polyspermy have polyploidy-preventing mechanisms that act inside the egg. This is quite different from the normal polyspermy block on the outside of the egg.

Chicken and zebra finch eggs require multiple sperm

In a study in the journal Proceedings of the Royal Society B, reported on in the New York Times, Dr. Nicola Hemmings, an evolutionary biologist at the University of Sheffield and one of the study's authors, reported that the eggs of zebra finches and chickens require multiple sperm (from 10 to hundreds) to penetrate the egg to ensure successful fertilization and growth of the bird embryo.

Blocking polyspermy

Polyspermy is very rare in human reproduction. The decline in the number of sperm that swim to the oviduct is one of two mechanisms that prevent polyspermy in humans. The other is the blocking of sperm in the fertilized egg. According to Developmental Biology Interactive, if an egg becomes fertilized by multiple sperm, the embryo will gain various paternal centrioles. When this happens, there is a struggle for the extra chromosomes; this competition disrupts cleavage furrow formation, and the normal consequence is death of the zygote. Only two cases of human polyspermy leading to the birth of children have been reported.

Fast block of polyspermy

The eggs of sexually reproducing organisms are adapted to avoid this situation.
The defenses are particularly well characterized in the sea urchin, which responds to the acceptance of one sperm by inhibiting the successful penetration of the egg by subsequent sperm. Similar defenses exist in other eukaryotes. The prevention of polyspermy in sea urchins depends on a change in the electrical charge across the surface of the egg, which is caused by the fusion of the first sperm with the egg. Unfertilized sea urchin eggs have a negative charge inside, but the charge becomes positive upon fertilization. When sea urchin sperm encounter an egg with a positive charge, sperm–egg fusion is blocked. Thus, after the first sperm contacts the egg and causes the change, subsequent sperm are prevented from fusing. This "electrical polyspermy block" is thought to result because a positively charged molecule in the sperm surface membrane is repelled by the positive charge at the egg surface. Electrical polyspermy blocks operate in many animal species, including frogs, clams, and marine worms, but not in the several mammals that have been studied (hamster, rabbit, mouse). In species without an electrical block, polyspermy is usually prevented by secretion of materials that establish a mechanical barrier to polyspermy. Animals such as sea urchins have a two-step polyspermy prevention strategy, with the fast, but transient, electrical block superseded after the first minute or so by a more slowly developing permanent mechanical block. Electrical blocks are helpful in species where a very fast block to polyspermy is needed, due to the presence of many sperm arriving simultaneously at the egg surface, as occurs in animals such as sea urchins. In sea urchins, fertilization occurs externally in the ocean, such that hundreds of sperm can encounter the egg within several seconds. In mammals, a fast block has been described that occurs 2–3 seconds after the first sperm enters the egg: an influx of sodium ions changes the membrane potential of the egg from a resting potential of about −70 mV to about +10 mV, so the egg membrane shifts from negative to positive and sperm cannot enter the positively charged egg. This positive charge lasts for only about 60 seconds. Slow block of polyspermy In mammals, in which fertilization occurs internally, fewer sperm reach the fertilization site in the oviduct. This may be the result of the female genital tract being adapted to minimize the number of sperm reaching the egg. Nevertheless, polyspermy-preventing mechanisms are essential in mammals; a secretion reaction, the "cortical reaction", modifies the extracellular coat of the egg (the zona pellucida), and additional mechanisms that are not well understood modify the egg's plasma membrane. The zona pellucida is modified by serine proteases that are released from the cortical granules. The proteases destroy the protein link between the cell membrane and the vitelline envelope, remove any receptors that other sperm have bound to, and help to form the fertilization envelope from the cortical granules. The cortical reaction occurs due to calcium oscillations inside the oocyte. What triggers such oscillations is PLC-zeta, a phospholipase unique to sperm that is very sensitive to calcium concentrations. 
When the first spermatozoon enters the oocyte, it brings in PLC-zeta, which is activated by the oocyte's basal calcium concentration, initiates the formation of IP3, and causes calcium release from endoplasmic reticulum stores, generating the oscillations in calcium concentration that activate the oocyte and block polyspermy. Evolutionary advantage Female defenses select for ever more aggressive male sperm, however, leading to an evolutionary arms race. On the one hand, polyspermy creates inviable zygotes and lowers female fitness; on the other, defenses may prevent fertilization altogether. This leads to a delicate compromise between the two, and has been suggested as one possible cause of the relatively high infertility rates seen in mammalian species. In some species, polyspermy is tolerated: more than one sperm enters the egg, yet viable offspring develop without detrimental effects. See also Cortical reaction References Further reading Ginzberg, A. S. 1972. Fertilization in Fishes and the Problem of Polyspermy, Israel Program for Scientific Translations, Jerusalem. Jaffe, L. A. & M. Gould. 1985. Polyspermy-preventing mechanisms. In C. B. Metz & A. Monroy (editors) Biology of Fertilization. Academic, New York, pp. 223–250. External links Animation of polyspermy Reproduction
Polyspermy
[ "Biology" ]
1,703
[ "Biological interactions", "Behavior", "Reproduction" ]
3,052,574
https://en.wikipedia.org/wiki/KCalc
KCalc is a scientific software calculator that is part of KDE Gear. Functions KCalc includes trigonometric functions, logic operations, a history of previous results, copy and paste, a configurable user interface, and statistical computations. The history function uses a stack. KCalc also supports boolean operations. KCalc does not support graphing. Since version 2 (included in KDE 3.5), KCalc has offered arbitrary precision. Gallery See also Comparison of software calculators GNOME Calculator External links The KCalc Handbook References Free educational software KDE Applications Software calculators
KCalc
[ "Mathematics" ]
128
[ "Software calculators", "Mathematical software" ]
3,052,970
https://en.wikipedia.org/wiki/Glan%E2%80%93Thompson%20prism
A Glan–Thompson prism is a type of polarizing prism similar to the Nicol prism and Glan–Foucault prism. Design A Glan–Thompson prism consists of two right-angled calcite prisms that are cemented together by their long faces. The optical axes of the calcite crystals are parallel and aligned perpendicular to the plane of reflection. Birefringence splits light entering the prism into two rays, experiencing different refractive indices; the p-polarized ordinary ray is totally internally reflected from the calcite–cement interface, leaving the s-polarized extraordinary ray to be transmitted. The prism can therefore be used as a polarizing beam splitter. Traditionally Canada balsam was used as the cement in assembling these prisms, but this has largely been replaced by synthetic polymers. Characteristics Compared to the similar Glan–Foucault prism, the Glan–Thompson has a wider acceptance angle, but a much lower limit of maximal irradiance (due to optical damage limitations of the cement layer). See also Glan–Taylor prism References Polarization (waves) Prisms (optics)
Glan–Thompson prism
[ "Physics" ]
234
[ "Polarization (waves)", "Astrophysics" ]
3,052,975
https://en.wikipedia.org/wiki/MINDO
MINDO, or Modified Intermediate Neglect of Differential Overlap, is a semi-empirical method for the quantum calculation of molecular electronic structure in computational chemistry. It is based on the Intermediate Neglect of Differential Overlap (INDO) method of John Pople. It was developed by the group of Michael Dewar and was the original method in the MOPAC program. The method in its final, widely used form is properly referred to as MINDO/3. It was later replaced by the MNDO method, which in turn was replaced by the PM3 and AM1 methods. References Semiempirical quantum chemistry methods
MINDO
[ "Chemistry" ]
116
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Computational chemistry", "Physical chemistry stubs", "Semiempirical quantum chemistry methods" ]
3,052,983
https://en.wikipedia.org/wiki/Glan%E2%80%93Foucault%20prism
A Glan–Foucault prism (also called a Glan–air prism) is a type of prism which is used as a polarizer. It is similar in construction to a Glan–Thompson prism, except that two right-angled calcite prisms are spaced with an air gap instead of being cemented together. Total internal reflection of p-polarized light at the air gap means that only s-polarized light is transmitted straight through the prism. Design Compared to the Glan–Thompson prism, the Glan–Foucault has a narrower acceptance angle over which it works, but because it uses an air gap rather than cement, much higher irradiances can be used without damage. The prism can thus be used with laser beams. The prism is also shorter (for a given usable aperture) than the Glan–Thompson design, and the deflection angle of the rejected beam can be made close to 90°, which is sometimes useful. Glan–Foucault prisms are not typically used as polarizing beamsplitters because while the transmitted beam is completely polarized, the reflected beam is not. Polarization The Glan–Taylor prism is similar, except that the crystal axes and transmitted polarization direction are orthogonal to the Glan–Foucault design. This yields higher transmission and better polarization of the reflected light. Calcite Glan–Foucault prisms are now rarely used, having been mostly replaced by Glan–Taylor polarizers and other more recent designs. Yttrium orthovanadate (YVO4) prisms based on the Glan–Foucault design have superior polarization of the reflected beam and higher damage threshold, compared with calcite Glan–Foucault and Glan–Taylor prisms. YVO4 prisms are more expensive, however, and can accept beams over a very limited range of angles of incidence. References Polarization (waves) Prisms (optics)
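The acceptance-angle contrast between the cemented Glan–Thompson and the air-spaced Glan–Foucault designs follows directly from the total-internal-reflection condition. The following Python sketch illustrates the idea; the refractive-index values are typical textbook figures assumed here for illustration, not taken from these articles:

```python
import math

# Critical angle for total internal reflection (TIR) at an interface:
# sin(theta_c) = n_low / n_high. In Glan-type prisms the cut angle is
# chosen between the critical angles of the ordinary and extraordinary
# rays, so one ray is totally reflected while the other passes.
N_O = 1.658       # calcite, ordinary ray (assumed typical value)
N_E = 1.486       # calcite, extraordinary ray (assumed typical value)
N_AIR = 1.000     # air gap (Glan-Foucault)
N_CEMENT = 1.55   # roughly Canada balsam / polymer cement (assumed)

def critical_angle_deg(n_high, n_low):
    """Critical angle in degrees; TIR is impossible if n_low >= n_high."""
    if n_low >= n_high:
        return None
    return math.degrees(math.asin(n_low / n_high))

# Air gap: the usable window between o-ray TIR (~37.1 deg) and loss of
# the e-ray (~42.3 deg) is narrow, hence a narrow acceptance angle.
print(critical_angle_deg(N_O, N_AIR), critical_angle_deg(N_E, N_AIR))

# Cement: the o-ray still undergoes TIR (above ~69.2 deg) while the
# e-ray can never be totally reflected (n_cement > n_e), which is why
# the cemented Glan-Thompson design accepts a wider cone of input light.
print(critical_angle_deg(N_O, N_CEMENT), critical_angle_deg(N_E, N_CEMENT))
```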
Glan–Foucault prism
[ "Physics" ]
413
[ "Polarization (waves)", "Astrophysics" ]
3,053,039
https://en.wikipedia.org/wiki/Bateman%20Manuscript%20Project
The Bateman Manuscript Project was a major effort at collation and encyclopedic compilation of the mathematical theory of special functions. It resulted in the eventual publication of five important reference volumes, under the editorship of Arthur Erdélyi. Overview The theory of special functions was a core activity of the field of applied mathematics from the middle of the nineteenth century to the advent of high-speed electronic computing. The intricate properties of spherical harmonics, elliptic functions and other staples of problem-solving in mathematical physics, astronomy and right across the physical sciences are not easy to document completely, absent a theory explaining the inter-relationships. Mathematical tables used to perform actual calculations needed to mesh with an adequate theory of how functions could be transformed into those already tabulated. Harry Bateman, a distinguished applied mathematician, undertook the somewhat quixotic task of trying to collate the content of the very large literature. On his death in 1946, his papers on this project were still in a uniformly rough state. The publication of the edited version provided special functions texts more up to date than, for example, the classic Whittaker & Watson. The volumes were out of print for many years, and copyright in the works reverted to the California Institute of Technology, which renewed it in the early 1980s. In 2011, the California Institute of Technology gave permission for scans of the volumes to be made publicly available. Other mathematicians involved in the project include Wilhelm Magnus and Francesco Tricomi. Askey–Bateman project In 2007, the Askey–Bateman project was announced by Mourad Ismail as a five- or six-volume encyclopedic book series on special functions, based on the works of Harry Bateman and Richard Askey. In 2020, Cambridge University Press began publishing volumes 1 and 2 of this Encyclopedia of Special Functions, with series editors including Mourad Ismail: Volume 1: Univariate Orthogonal Polynomials (editor: Mourad Ismail) Volume 2: Multivariable Special Functions (editors: Tom H. Koornwinder, Jasper V. Stokman) Volume 3: Hypergeometric and Basic Hypergeometric Functions (editor: Mourad Ismail) Further volumes were considered to include topics such as: Continued fractions, number theory, and elliptic and theta functions Equations of mathematical physics, including continuous and discrete Painlevé, Lamé and Heun equations Transforms and related topics See also Abramowitz and Stegun (AS) Jahnke and Emde (JE) Magnus and Oberhettinger (MO) NIST Handbook of Mathematical Functions Digital Library of Mathematical Functions (DLMF)
References Mathematics literature Special functions History of mathematics
Bateman Manuscript Project
[ "Mathematics" ]
794
[ "Special functions", "Combinatorics" ]
3,053,158
https://en.wikipedia.org/wiki/Gantry%20crane
A gantry crane is a crane built atop a gantry, which is a structure used to straddle an object or workspace. They can range from enormous "full" gantry cranes, capable of lifting some of the heaviest loads in the world, to small shop cranes, used for tasks such as lifting automobile engines out of vehicles. They are also called portal cranes, the "portal" being the empty space straddled by the gantry. The terms gantry crane and overhead crane (or bridge crane) are often used interchangeably, as both types of crane straddle their workload. The distinction most often drawn between the two is that with gantry cranes, the entire structure (including gantry) is usually wheeled (often on rails). By contrast, the supporting structure of an overhead crane is fixed in location, often in the form of the walls or ceiling of a building, to which is attached a movable hoist running overhead along a rail or beam (which may itself move). Further confusing the issue is that gantry cranes may also incorporate a movable beam-mounted hoist in addition to the entire structure being wheeled, and some overhead cranes are suspended from a freestanding gantry. Variants Ship-to-shore gantry crane Ship-to-shore gantry cranes are imposing, multi-story structures prominent at most container terminals, used to load intermodal containers on and off container ships. They operate along two rails (designated waterside and landside) spaced according to the size of the crane. Ship-to-shore crane elements Lateral movement system: A combination of two sets of typically ten rail wheels. The lateral movement is controlled from a cabin on the landside wheel set. During any lateral movement, lights and sirens operate to ensure the safety of the crew working adjacent to the crane. The wheels are mounted to the bottom of the vertical frame/bracing system. Vertical frame and braces: A structurally designed system of beams assembled to support the boom, cabin, operating machinery, and the cargo being lifted. They display signage describing restrictions, requirements and identifiers. Crane boom: A horizontal beam that runs transversely to the berth. It spans from the landside of the landside rail wheels to a length over the edge of the berth. The waterside span is based on the size of ship that the crane can successfully load/unload. The boom can also be raised for storage purposes. Hook: A device which moves vertically to raise and lower cargo, as well as horizontally along the boom's length. For container cranes, a spreader is attached to span the container and lock it safely in place during movement. Operating cabin: An enclosed cab with glass-paneled flooring that lets the operator view the cargo being moved. Elevators located along the vertical frame members carry crew to and from the cabin. Storage equipment: For temporary storage between vessel operations, a steel pin is inserted through an anchorage arm dropped from each wheel set into a stow pin assembly. This setup is designed to prevent lateral movement along the rails. During hurricanes and other emergency shutdown situations, tie-down assemblies are used: two angled arms are anchored at each end of each set of wheels. This setup prevents longitudinal movement along the rails as well as tipping of the crane due to uplift from high-velocity winds. Ship-to-shore gantry cranes are often used in pairs or teams of cranes in order to minimize the time required to load and unload vessels. 
As container ship sizes and widths have increased throughout the 20th century, ship-to-shore gantry cranes and their deployment have become more individualized in order to load and unload vessels effectively while maximizing profitability and minimizing time in port. One example is a system in which specialized berths are built to accommodate one vessel at a time, with ship-to-shore gantry cranes on both sides of the vessel. This allows more cranes, and double the workspace under the cranes, to be used for transporting cargo off dock. The first quayside container gantry crane was developed in 1959 by Paceco Corporation. Full gantry crane Full gantry cranes (where the load remains beneath the gantry structure, supported from a beam) are well suited to lifting massive objects such as ships' engines, as the entire structure can resist the torque created by the load, and counterweights are generally not required. These are often found in shipyards, where they are used to move large ship components together for construction. They use a complex system of cables and attachments to support the massive loads undertaken by the full gantry cranes. Some full gantry cranes of note are Samson and Goliath, and Taisun. Samson and Goliath are two full gantry cranes located in the Harland and Wolff shipyard in Belfast. They have spans of 140 metres and can lift loads of up to 840 tonnes to a height of 70 metres. In 2008, the world's strongest gantry crane, Taisun, which can lift 20,000 metric tons, was installed in Yantai, China at the Yantai Raffles Shipyard. In 2012, a crane of even greater capacity, the "Honghai Crane", was planned for construction in Qidong City, China, and it was finished in 2014. Rubber-tyred gantry crane Smaller gantry cranes are also available running on rubber tyres so that tracks are not needed. Rubber tyred gantry cranes are essential for moving containers from berths throughout the rest of the yard. For this task they come in large sizes, able to straddle multiple lanes of rail, road, or container storage. They are also capable of lifting fully loaded containers to great heights. Smaller rubber tyred gantry cranes come in the form of straddle carriers, which are used when moving individual containers or vertical stacks of containers. Portable gantry crane systems, such as rubber tyred gantry cranes, are in high demand in terminals and ports that are restricted in size, rely on maximizing vertical space, and do not need to haul containers long distances. This is due to the relatively slow speed yet high reach of rubber tyred gantry cranes compared to other forms of container terminal equipment. Portable gantry crane Portable gantry cranes are used to lift and transport smaller, lighter items. They are widely used in the HVAC, machinery moving and fine art installation industries. Some portable gantry cranes are equipped with an enclosed track, while others use an I-beam, or other extruded shapes, for the running surface. Most workstation gantry cranes are intended to be stationary when loaded, and mobile when unloaded. Workstation gantry cranes can be outfitted with either a wire rope hoist or a lower-capacity chain hoist. Gallery See also References Port infrastructure Cranes (machines)
Gantry crane
[ "Engineering" ]
1,378
[ "Engineering vehicles", "Cranes (machines)" ]
3,053,322
https://en.wikipedia.org/wiki/Capsule%20%28pharmacy%29
In the manufacture of pharmaceuticals, encapsulation refers to a range of dosage forms (techniques used to enclose medicines) in a relatively stable shell known as a capsule, allowing them to, for example, be taken orally or be used as suppositories. The two main types of capsules are: Hard-shelled capsules, which contain dry, powdered ingredients or miniature pellets made by, for example, extrusion and spheronization. These are made in two halves: a smaller-diameter "body" that is filled and then sealed using a larger-diameter "cap". Soft-shelled capsules, primarily used for oils and for active ingredients that are dissolved or suspended in oil. Both of these classes of capsules are made from aqueous solutions of gelling agents, such as animal protein (mainly gelatin) or plant polysaccharides or their derivatives (such as carrageenans and modified forms of starch and cellulose). Other ingredients can be added to the gelling agent solution, including plasticizers such as glycerin or sorbitol to decrease the capsule's hardness, coloring agents, preservatives, disintegrants, lubricants and surface treatments. Since their inception, capsules have been viewed by consumers as the most efficient method of taking medication. For this reason, producers of drugs such as OTC analgesics wanting to emphasize the strength of their product developed the "caplet", a portmanteau of "capsule-shaped tablet", to tie this positive association to more efficiently produced tablet pills, the caplet also being an easier-to-swallow shape than the usual disk-shaped tablet. Single-piece gel encapsulation ("soft capsules") In 1833, Mothes and Dublanc were granted a patent for a method to produce a single-piece gelatin capsule that was sealed with a drop of gelatin solution. They used individual iron molds for their process, filling the capsules individually with a medicine dropper. Later on, methods were developed that used sets of plates with pockets to form the capsules. Although some companies still use this method, the equipment is no longer produced commercially. All modern soft-gel encapsulation uses variations of a process developed by R. P. Scherer in 1933. His innovation used a rotary die to produce the capsules, which were then filled by blow molding. This method was high-yield and consistent, and it reduced waste. Softgels can be an effective delivery system for oral drugs, especially poorly soluble drugs. This is because the fill can contain liquid ingredients that help increase the solubility or permeability of the drug across the membranes in the body. Liquid ingredients are difficult to include in any other solid dosage form, such as a tablet. Softgels are also highly suited to potent drugs (for example, where the dose is <100 μg), where the highly reproducible filling process helps ensure each softgel has the same drug content, and because the operators are not exposed to any drug dust during the manufacturing process. In 1949, the Lederle Laboratories division of the American Cyanamid Company developed the "Accogel" process, allowing powders to be accurately filled into soft gelatin capsules. Two-piece gel encapsulation ("hard capsules") James Murdoch of London patented the two-piece telescoping gelatin capsule in 1847. The capsules are made in two parts by dipping metal pins in the gelling agent solution. The capsules are supplied as closed units to the pharmaceutical manufacturer. 
Before use, the two halves are separated, the capsule is filled with powder or, more usually, pellets made by extrusion and spheronization (either by placing a compressed slug of powder into one half of the capsule or by filling one half of the capsule with loose powder), and the other half of the capsule is pressed on. With the compressed slug method, weight varies less between capsules; however, the machinery required to manufacture them is more complex. The powder or spheroids inside the capsule contain the active ingredients and any excipients, such as binders, disintegrants, fillers, glidants, and preservatives. Manufacturing materials Gelatin capsules, informally called gel caps or gelcaps, are composed of gelatin manufactured from the collagen of animal skin or bone. Vegetable capsules, introduced in 1989, are made from cellulose, a structural component in plants. The main ingredient of vegetarian capsules is hydroxypropyl methyl cellulose. In the 21st century, gelatin capsules are more broadly used than vegetarian capsules because the cost of production is lower. Manufacturing equipment The encapsulation of hard gelatin capsules can be done on manual, semi-automatic, and automatic capsule filling machines. Hard gelatin capsules are manufactured by the dipping method, which comprises dipping, rotation, drying, stripping, trimming, and joining. Softgels are filled at the same time as they are produced and sealed on the rotary die of a fully automatic machine. Capsule fill weight is a critical attribute in encapsulation, and various real-time fill weight monitoring techniques, such as near-infrared spectroscopy (NIR) and vibrational spectroscopy, are used alongside in-line weight checks to ensure product quality. Volume is measured to the fill line, customarily the top of the smaller-diameter body half. After capping, some ullage volume (airspace) remains in the finished capsule. Standard sizes of two-piece capsules See also Capsule endoscopy OROS Pharmacy Automation - The Tablet Counter Pharmaceutical formulation Pill splitting Tablet Oblaat References Pharmaceutical industry Drug delivery devices Dosage forms Articles containing video clips
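As a toy illustration of the in-line fill-weight checks mentioned above, the following Python sketch flags capsules outside an acceptance band. The target weight, tolerance, measurements, and function are invented for illustration and are not from the article; real systems use spectroscopic monitoring as described:

```python
# Hypothetical in-line fill-weight check for filled capsules.
# Target weight and tolerance are invented example values.
TARGET_MG = 350.0   # nominal fill weight, mg (assumed)
TOLERANCE = 0.075   # +/- 7.5% acceptance band (assumed)

def check_fill_weights(weights_mg):
    """Return (index, weight) for capsules outside the acceptance band."""
    low = TARGET_MG * (1 - TOLERANCE)
    high = TARGET_MG * (1 + TOLERANCE)
    return [(i, w) for i, w in enumerate(weights_mg)
            if not (low <= w <= high)]

batch = [348.2, 351.9, 322.0, 349.5, 381.4]  # example measurements, mg
print(check_fill_weights(batch))  # -> [(2, 322.0), (4, 381.4)]
```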
Capsule (pharmacy)
[ "Chemistry", "Biology" ]
1,219
[ "Pharmaceutical industry", "Pharmacology", "Drug delivery devices", "Life sciences industry" ]
3,053,507
https://en.wikipedia.org/wiki/Crystal%20growth
A crystal is a solid material whose constituent atoms, molecules, or ions are arranged in an orderly repeating pattern extending in all three spatial dimensions. Crystal growth is a major stage of a crystallization process, and consists of the addition of new atoms, ions, or polymer strings into the characteristic arrangement of the crystalline lattice. The growth typically follows an initial stage of either homogeneous or heterogeneous (surface catalyzed) nucleation, unless a "seed" crystal, purposely added to start the growth, was already present. The action of crystal growth yields a crystalline solid whose atoms or molecules are close packed, with fixed positions in space relative to each other. The crystalline state of matter is characterized by a distinct structural rigidity and very high resistance to deformation (i.e. changes of shape and/or volume). Most crystalline solids have high values both of Young's modulus and of the shear modulus of elasticity. This contrasts with most liquids or fluids, which have a low shear modulus, and typically exhibit the capacity for macroscopic viscous flow. Overview After successful formation of a stable nucleus, a growth stage ensues in which free particles (atoms or molecules) adsorb onto the nucleus and propagate its crystalline structure outwards from the nucleating site. This process is significantly faster than nucleation. The reason for such rapid growth is that real crystals contain dislocations and other defects, which act as a catalyst for the addition of particles to the existing crystalline structure. By contrast, perfect crystals (lacking defects) would grow exceedingly slowly. On the other hand, impurities can act as crystal growth inhibitors and can also modify crystal habit. Nucleation Nucleation can be either homogeneous, without the influence of foreign particles, or heterogeneous, with the influence of foreign particles. Generally, heterogeneous nucleation takes place more quickly since the foreign particles act as a scaffold for the crystal to grow on, thus eliminating the necessity of creating a new surface and the incipient surface energy requirements. Heterogeneous nucleation can take place by several methods. Some of the most typical are small inclusions, or cuts, in the container the crystal is being grown on. This includes scratches on the sides and bottom of glassware. A common practice in crystal growing is to add a foreign substance, such as a string or a rock, to the solution, thereby providing nucleation sites for facilitating crystal growth and reducing the time to fully crystallize. The number of nucleating sites can also be controlled in this manner. If a brand-new piece of glassware or a plastic container is used, crystals may not form because the container surface is too smooth to allow heterogeneous nucleation. On the other hand, a badly scratched container will result in many lines of small crystals. To achieve a moderate number of medium-sized crystals, a container which has a few scratches works best. Likewise, adding small previously made crystals, or seed crystals, to a crystal growing project will provide nucleating sites to the solution. The addition of only one seed crystal should result in a larger single crystal. Mechanisms of growth The interface between a crystal and its vapor can be molecularly sharp at temperatures well below the melting point. An ideal crystalline surface grows by the spreading of single layers, or equivalently, by the lateral advance of the growth steps bounding the layers. 
For perceptible growth rates, this mechanism requires a finite driving force (or degree of supercooling) in order to lower the nucleation barrier sufficiently for nucleation to occur by means of thermal fluctuations. In the theory of crystal growth from the melt, Burton and Cabrera have distinguished between two major mechanisms: Non-uniform lateral growth The surface advances by the lateral motion of steps which are one interplanar spacing in height (or some integral multiple thereof). An element of surface undergoes no change and does not advance normal to itself except during the passage of a step, and then it advances by the step height. It is useful to consider the step as the transition between two adjacent regions of a surface which are parallel to each other and thus identical in configuration, displaced from each other by an integral number of lattice planes. Note here the distinct possibility of a step in a diffuse surface, even though the step height would be much smaller than the thickness of the diffuse surface. Uniform normal growth The surface advances normal to itself without the necessity of a stepwise growth mechanism. This means that in the presence of a sufficient thermodynamic driving force, every element of surface is capable of a continuous change contributing to the advancement of the interface. For a sharp or discontinuous surface, this continuous change may be more or less uniform over large areas for each successive new layer. For a more diffuse surface, a continuous growth mechanism may require changes over several successive layers simultaneously. Non-uniform lateral growth is thus a geometrical motion of steps, as opposed to motion of the entire surface normal to itself. Uniform normal growth, by contrast, is described by the time sequence of an element of surface: in the lateral mode an element undergoes no motion or change except when a step passes, whereas in the normal mode it changes continually. The prediction of which mechanism will be operative under any set of given conditions is fundamental to the understanding of crystal growth. Two criteria have been used to make this prediction: Whether or not the surface is diffuse: a diffuse surface is one in which the change from one phase to another is continuous, occurring over several atomic planes. This is in contrast to a sharp surface for which the major change in property (e.g. density or composition) is discontinuous, and is generally confined to a depth of one interplanar distance. Whether or not the surface is singular: a singular surface is one in which the surface tension as a function of orientation has a pointed minimum. Growth of singular surfaces is known to require steps, whereas it is generally held that non-singular surfaces can continuously advance normal to themselves. Driving force Consider next the necessary requirements for the appearance of lateral growth. It is evident that the lateral growth mechanism will be found when any area in the surface can reach a metastable equilibrium in the presence of a driving force. It will then tend to remain in such an equilibrium configuration until the passage of a step. Afterward, the configuration will be identical except that each part of the step will have advanced by the step height. If the surface cannot reach equilibrium in the presence of a driving force, then it will continue to advance without waiting for the lateral motion of steps. Thus, Cahn concluded that the distinguishing feature is the ability of the surface to reach an equilibrium state in the presence of the driving force. 
He also concluded that for every surface or interface in a crystalline medium, there exists a critical driving force, which, if exceeded, will enable the surface or interface to advance normal to itself, and, if not exceeded, will require the lateral growth mechanism. Thus, for sufficiently large driving forces, the interface can move uniformly without the benefit of either a heterogeneous nucleation or screw dislocation mechanism. What constitutes a sufficiently large driving force depends upon the diffuseness of the interface, so that for extremely diffuse interfaces, this critical driving force will be so small that any measurable driving force will exceed it. Alternatively, for sharp interfaces, the critical driving force will be very large, and most growth will occur by the lateral step mechanism. Note that in a typical solidification or crystallization process, the thermodynamic driving force is dictated by the degree of supercooling. Morphology It is generally believed that the mechanical and other properties of the crystal are also pertinent to the subject matter, and that crystal morphology provides the missing link between growth kinetics and physical properties. The necessary thermodynamic apparatus was provided by Josiah Willard Gibbs' study of heterogeneous equilibrium. He provided a clear definition of surface energy, by which the concept of surface tension is made applicable to solids as well as liquids. He also appreciated that an anisotropic surface free energy implied a non-spherical equilibrium shape, which should be thermodynamically defined as the shape which minimizes the total surface free energy. It may be instructional to note that whisker growth provides the link between the mechanical phenomenon of high strength in whiskers and the various growth mechanisms which are responsible for their fibrous morphologies. (Prior to the discovery of carbon nanotubes, single-crystal whiskers had the highest tensile strength of any materials known.) Some mechanisms produce defect-free whiskers, while others may have single screw dislocations along the main axis of growth, producing high-strength whiskers. The mechanism behind whisker growth is not well understood, but seems to be encouraged by compressive mechanical stresses, including mechanically induced stresses, stresses induced by diffusion of different elements, and thermally induced stresses. Metal whiskers differ from metallic dendrites in several respects. Dendrites are fern-shaped like the branches of a tree, and grow across the surface of the metal. In contrast, whiskers are fibrous and project at a right angle to the surface of growth, or substrate. Diffusion-control Very commonly when the supersaturation (or degree of supercooling) is high, and sometimes even when it is not high, growth kinetics may be diffusion-controlled, which means the transport of atoms or molecules to the growing nucleus limits the velocity of crystal growth. Assuming the nucleus in such a diffusion-controlled system is a perfect sphere, the growth velocity, corresponding to the change of the radius $R$ with time $t$, can be determined with Fick's laws. 1. Fick's first law: $J = -D \frac{\partial c}{\partial x}$, where $J$ is the flux of atoms (in atoms per unit area per unit time), $D$ is the diffusion coefficient and $\partial c / \partial x$ is the concentration gradient. 2. Fick's second law: $\frac{\partial c}{\partial t} = D \nabla^{2} c$, where $\partial c / \partial t$ is the change of the concentration with time. 
The first law can be adjusted to give the flux of matter onto a specific surface, in this case the surface of the spherical nucleus: $J_{s} = 4\pi R^{2} D \left(\frac{\partial c}{\partial r}\right)_{r=R}$, where $J_{s}$ is now the flux onto the spherical surface (in atoms per unit time) and $A = 4\pi R^{2}$ is the area of the spherical nucleus. $J_{s}$ can also be expressed as the change of the number of atoms in the nucleus over time, with the number of atoms in the nucleus being $N = \frac{4}{3}\pi R^{3} / \Omega$, where $\frac{4}{3}\pi R^{3}$ is the volume of the spherical nucleus and $\Omega$ is the atomic volume. Therefore, the change in the number of atoms in the nucleus over time will be $\frac{dN}{dt} = \frac{4\pi R^{2}}{\Omega}\frac{dR}{dt}$. Combining both equations, the following expression for the growth velocity is obtained: $\frac{dR}{dt} = \Omega D \left(\frac{\partial c}{\partial r}\right)_{r=R}$. From Fick's second law for spheres, the equation below can be obtained: $\frac{\partial c}{\partial t} = D\,\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial c}{\partial r}\right)$. Assuming that the diffusion profile does not change over time but is only shifted with the growing radius, it can be said that $\partial c/\partial t = 0$, which leads to $r^{2}\,\partial c/\partial r$ being constant. This constant can be indicated with the letter $B$, and integrating $\partial c/\partial r = B/r^{2}$ between $R$ and $L$ results in the following equation: $c_{0} - c_{R} = B\left(\frac{1}{R} - \frac{1}{L}\right)$, where $R$ is the radius of the nucleus, $L$ is the distance from the nucleus at which the equilibrium concentration $c_{0}$ is recovered, and $c_{R}$ is the concentration right at the surface of the nucleus. Now the expression for $B$ can be found: $B = \frac{c_{0} - c_{R}}{1/R - 1/L}$. Therefore, the growth velocity for a diffusion-controlled system can be described as $\frac{dR}{dt} = \frac{\Omega D B}{R^{2}} = \frac{\Omega D}{R^{2}}\,\frac{c_{0} - c_{R}}{1/R - 1/L}$, which for $L \gg R$ reduces to $\frac{dR}{dt} \approx \frac{\Omega D (c_{0} - c_{R})}{R}$. Under such diffusion-controlled conditions, the polyhedral crystal form will be unstable; it will sprout protrusions at its corners and edges, where the degree of supersaturation is at its highest level. The tips of these protrusions will clearly be the points of highest supersaturation. It is generally believed that the protrusion will become longer (and thinner at the tip) until the effect of interfacial free energy in raising the chemical potential slows the tip growth and maintains a constant value for the tip thickness. In the subsequent tip-thickening process, there should be a corresponding instability of shape. Minor bumps or "bulges" should be exaggerated, and develop into rapidly growing side branches. In such an unstable (or metastable) situation, minor degrees of anisotropy should be sufficient to determine directions of significant branching and growth. The most appealing aspect of this argument, of course, is that it yields the primary morphological features of dendritic growth. See also Abnormal grain growth Chvorinov's rule Cloud condensation nuclei Crystal structure Czochralski process Dendrite (metal) Diana's Tree Fractional crystallization Ice nucleus Laser-heated pedestal growth Manganese nodule Micro-pulling-down Monocrystalline whisker Protocrystalline Recrystallization (chemistry) Seed crystal Single crystal Whisker (metallurgy) Simulation Kinetic Monte Carlo surface growth method References Crystallography Crystals Materials science Mineralogy Articles containing video clips
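To make the diffusion-controlled growth law derived above concrete, the following Python sketch integrates dR/dt = ΩD(c0 − cR)/R (the far-field limit L ≫ R) and checks it against the closed-form parabolic law R(t) = sqrt(R0² + 2ΩD(c0 − cR)t). All parameter values are assumed purely for illustration:

```python
import math

# Illustrative (assumed) parameters for diffusion-controlled growth
# in the far-field limit dR/dt = OMEGA * D * (C0 - CR) / R.
OMEGA = 1.0e-29   # atomic volume, m^3 (assumed)
D = 1.0e-12       # diffusion coefficient, m^2/s (assumed)
C0 = 1.0e27       # equilibrium (far-field) concentration, atoms/m^3 (assumed)
CR = 0.8e27       # concentration at the nucleus surface, atoms/m^3 (assumed)
R0 = 1.0e-9       # initial nucleus radius, m (assumed)

def radius_numeric(t, steps=100_000):
    """Integrate dR/dt = OMEGA*D*(C0-CR)/R with forward Euler."""
    r, dt = R0, t / steps
    for _ in range(steps):
        r += OMEGA * D * (C0 - CR) / r * dt
    return r

def radius_analytic(t):
    """Closed form of the same ODE: R(t) = sqrt(R0^2 + 2*OMEGA*D*(C0-CR)*t)."""
    return math.sqrt(R0**2 + 2 * OMEGA * D * (C0 - CR) * t)

for t in (0.1, 1.0, 10.0):
    print(f"t={t:5.1f} s  numeric={radius_numeric(t):.3e} m  "
          f"analytic={radius_analytic(t):.3e} m")
```

The agreement of the two columns illustrates the parabolic (R proportional to the square root of t) character of diffusion-limited growth.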
Crystal growth
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,594
[ "Applied and interdisciplinary physics", "Materials science", "Crystallography", "Crystals", "Condensed matter physics", "nan" ]
3,053,533
https://en.wikipedia.org/wiki/Strong%20Interest%20Inventory
The Strong Interest Inventory (SII) is an interest inventory used in career assessment. As such, career assessments may be used in career counseling. The goal of this assessment is to give insight into a person's interests, so that they may have less difficulty in deciding on an appropriate career choice for themselves. It is also frequently used for educational guidance as one of the most popular career assessment tools. The test was developed in 1927 by psychologist Edward K. Strong Jr. to help people exiting the military find suitable jobs. It was revised later by Jo-Ida Hansen and David P. Campbell. The modern version of 2004 is based on the Holland Codes typology of psychologist John L. Holland. The Strong is designed for high school students, college students, and adults, and was found to be at about the ninth-grade reading level. Background and history Before he created the inventory, Strong was the head of the Bureau of Educational Research at the Carnegie Institute of Technology. Strong attended a seminar at the Carnegie Institute of Technology where Clarence S. Yoakum introduced the use of questionnaires in differentiating between people of various occupations. This later sparked Strong's interest in developing a better way of measuring people's occupational interests. The test started off as the "Strong Vocational Interest Blank"; the name changed when the test was revised in 1974 to the Strong-Campbell Interest Inventory, and later to the Strong Interest Inventory. The inventory has been revised six times over the years to reflect continued development in the field. Strong based his empirical approach on the idea that interests lie on a dimension from liking to disliking that could be used to discriminate among various occupational groups. In other words, Strong developed several scales that contrasted groups of people, based on their answers. This method of scaling, developed by Strong, has been very influential and has been used in several different questionnaires, including the Minnesota Multiphasic Personality Inventory (MMPI). Strong's original inventory had 10 occupational scales. The original inventory was created with men in mind, so in 1933 Strong came out with a women's form of the Strong Vocational Interest Blank. In 1974, when the Strong-Campbell Interest Inventory came out, Campbell combined both the men's and the women's forms into a single form. Other improvements that Campbell made to earlier versions include: the use of 124 occupational scales, the continued use of 23 Basic Interest Scales, and the addition of 2 special scales to measure academic comfort and introversion/extroversion dimensions. The Strong Interest Inventory is high in both predictive and concurrent validity. Components The newly revised inventory consists of 291 items that measure an individual's interests in six areas. The first 282 items are answered by the examinee choosing one of the following options: "strongly like", "like", "indifferent", "dislike", or "strongly dislike", while the remaining 9 items in the "Your Characteristics" section are answered the same way but with different options: "strongly like me", "like me", "don't know", "unlike me", or "strongly unlike me". It is an assessment of interests, and is not to be confused with personality assessments or aptitude tests. Scoring The newly revised version of this test can typically be taken in 30–45 minutes, after which the results must be scored by computer. 
After scoring, an individual can then view how their personal interests compare with the interests of people in a specific career field. Access to the comparison database and interpretation of the results usually incurs a fee. Strong Interest Inventory is a registered trademark of The Myers-Briggs Company (formerly CPP, Inc.) of Mountain View, California. The results include: Scores on the level of interest in each of the six Holland Codes or General Occupational Themes (GOTs). The six GOTs are: Realistic, Investigative, Artistic, Social, Enterprising, and Conventional (RIASEC). Scores on 30 Basic Interest Scales (e.g. art, science, and public speaking). Scores on 244 Occupational Scales which indicate the similarity between the respondent's interests and those of people working in each of 122 occupations. Scores on 5 Personal Style Scales (learning, working, leadership, risk-taking and team orientation). Scores on 3 Administrative Scales used to identify test errors or unusual profiles. Interpretation When an individual takes and completes the assessment, the resulting data is reflected by scores in each of the six General Occupational Themes (GOTs) or interest areas: Realistic, Investigative, Artistic, Social, Enterprising, and Conventional (RIASEC). Typically, a Theme Code that reflects the top three RIASEC interest areas is reported. For some individuals, however, only one or two RIASEC interest areas may be reported, this being the case if their scores in the remaining interest areas are not considered high enough or significant enough to be identified as major interest areas. The RIASEC GOT interest area results for any particular individual can be correlated or uncorrelated, differentiated or undifferentiated. As an example, in his 1998 case study of Ms. Flood, Jeffrey R. Prince reported in the Career Development Quarterly, 46, "Environments that are purely Artistic usually reflect values of independence and self-expression through loosely structured activities, whereas Enterprising environments frequently include organizational structures that value status." Ms. Flood's RIASEC results reflected that her main GOT occupational theme code was AE, with S as a possible third theme code, but at a lower intensity. In this case, and according to Holland's RIASEC hexagon, these theme codes may not be entirely congruent or correlated with Ms. Flood's interests. Theme codes that fall closer to each other on Holland's RIASEC hexagon generally reflect greater congruence and correlation. Therefore, Ms. Flood may experience some psychological challenges in trying to integrate her interests related to the Artistic and Enterprising theme codes into her work, because those interest areas are not considered to be highly correlated. Ms. Flood would benefit from working in careers and occupations that support her main GOTs of Artistic and Enterprising, with some Social component. Reception A 2008 study by Jan Case of the Louisiana State University Health Sciences Center and Terry L. Blackwell of Montana State University-Billings, published in Rehabilitation Counseling Bulletin, 51, came to favorable conclusions about the inventory. Many of the jobs the Strong Interest Inventory covers did not exist prior to the latest version; because of this, the test is constantly being updated as new jobs are created and technology advances. 
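As a rough illustration of how a Theme Code is assembled from GOT scores, consider the following Python sketch. The RIASEC scale names come from the article; the example scores, cutoff, and function are invented for illustration and do not reproduce the publisher's actual scoring algorithm:

```python
# Hypothetical sketch: derive a RIASEC "Theme Code" from General
# Occupational Theme (GOT) scores. Scale names are from the article;
# the example scores and the cutoff value are invented.
GOTS = ["Realistic", "Investigative", "Artistic",
        "Social", "Enterprising", "Conventional"]

def theme_code(scores, cutoff=55):
    """Return up to three initials for the highest-scoring GOTs.

    Interest areas scoring below `cutoff` are dropped, mirroring the
    article's note that only one or two letters may be reported when
    the remaining areas are not high enough.
    """
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    top = [name for name, score in ranked[:3] if score >= cutoff]
    return "".join(name[0] for name in top)

# Example loosely resembling the article's "Ms. Flood" case
# (all scores invented):
flood = {"Realistic": 40, "Investigative": 48, "Artistic": 68,
         "Social": 56, "Enterprising": 63, "Conventional": 42}
print(theme_code(flood))  # -> "AES"
```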
See also Career development Holland Codes References External links Publisher website Personal development Personality tests
Strong Interest Inventory
[ "Biology" ]
1,382
[ "Personal development", "Behavior", "Human behavior" ]
3,053,725
https://en.wikipedia.org/wiki/Glass%20recycling
Glass recycling is the processing of waste glass into usable products. Glass that is crushed or imploded and ready to be remelted is called cullet. There are two types of cullet: internal and external. Internal cullet is composed of defective products detected and rejected by a quality control process during the industrial process of glass manufacturing, transition phases of product changes (such as thickness and colour changes) and production offcuts. External cullet is waste glass that has been collected or reprocessed with the purpose of recycling. External cullet (which can be pre- or post-consumer) is classified as waste. The word "cullet", when used in the context of end-of-waste, will always refer to external cullet. To be recycled, glass waste needs to be purified and cleaned of contamination. Then, depending on the end use and local processing capabilities, it might also have to be separated into different sizes and colours. Many recyclers collect different colours of glass separately since glass tends to retain its colour after recycling. The most common colours used for consumer containers are clear (flint) glass, green glass, and brown (amber) glass. Glass is ideal for recycling since none of the material is degraded by normal use. Many collection points have separate bins for clear (flint), green and brown (amber). Glass re-processors intending to make new glass containers require separation by colour. If the recycled glass is not going to be made into more glass, or if the glass re-processor uses newer optical sorting equipment, separation by colour at the collection point may not be required. Heat-resistant glass, such as Pyrex or borosilicate glass, must not be part of the glass recycling stream, because even a small piece of such material will alter the viscosity of the fluid in the furnace at remelt. Processing of external cullet To use external cullet in production, as much contamination should be removed as possible. Typical contaminations are: Organics: Paper labels, and corks Inorganics: Plastic caps and rings, metal caps, stones, ceramics, porcelains, PVB (Polyvinyl butyral) and EVA (Ethylene-Vinyl Acetate) foils in flat/laminated glass Metals: Ferrous and non-ferrous metals Heat resistant (ex: Pyrex dishes) and lead glass (ex: crystal with lead content) Manpower or machinery can be used in different stages of purification. Since they melt at higher temperatures than glass, separation of inorganics, the removal of heat resistant glass and lead glass is critical. In the modern recycling facilities, dryer systems and optical sorting machines are used. The input material should be sized and cleaned for the highest efficiency in automatic sorting. More than one free fall or conveyor belt sorter can be used, depending on the requirements of the process. Different colors can be sorted by optical sorting machines. Recycling into glass containers Glass bottles and jars are infinitely recyclable. The use of recycled glass in manufacturing conserves raw materials and reduces energy consumption. Because the chemical energy required to melt the raw materials has already been expended, the use of cullet can significantly reduce energy consumption compared with manufacturing new glass from silica (SiO2), soda ash (Na2CO3), and calcium carbonate (CaCO3). Soda lime glass from virgin raw materials theoretically requires approximately 2.671 GJ/tonne compared to 1.886 GJ/tonne to melt 100% glass cullet. 
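The energy figures just quoted can be turned into a simple estimate of the saving at intermediate cullet fractions, and they also reproduce the rule of thumb stated next. The following Python sketch assumes a straight linear interpolation between the two endpoint figures, which is an idealization:

```python
# Theoretical melting energy as a linear mix of virgin batch and
# cullet, using the GJ-per-tonne figures quoted above. Treating the
# blend as a straight linear interpolation is an assumption.
E_VIRGIN = 2.671  # GJ/tonne, 100% virgin raw materials
E_CULLET = 1.886  # GJ/tonne, 100% cullet

def melt_energy(cullet_fraction):
    return E_VIRGIN + (E_CULLET - E_VIRGIN) * cullet_fraction

for pct in (0, 10, 50, 100):
    e = melt_energy(pct / 100)
    saving = (E_VIRGIN - e) / E_VIRGIN * 100
    print(f"{pct:3d}% cullet: {e:.3f} GJ/t ({saving:4.1f}% saving)")
# 10% cullet saves ~2.9%, consistent with the 2-3% rule of thumb,
# and 100% cullet gives the ~30% theoretical maximum noted below.
```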
As a general rule, every 10% increase in cullet usage results in an energy savings of 2–3% in the melting process, with a theoretical maximum potential of 30% energy saving. Every metric ton (1,000 kg) of waste glass recycled into new items saves approximately 315 kilograms of carbon dioxide from being released into the atmosphere during the manufacture of new glass. But recycling glass does not avoid the remelting process, which accounts for 75% of the energy consumption during production. Recycling into other products The use of recycled glass as aggregate in concrete has become popular, with large-scale research on that application being carried out at Columbia University in New York. Recycled glass greatly enhances the aesthetic appeal of the concrete. Recent research has shown that concrete made with recycled glass aggregates has better long-term strength and better thermal insulation, due to the thermal properties of the glass aggregates. Glass which is not recycled but merely crushed still reduces the volume of waste sent to landfill. Waste glass may also be kept out of landfill by using it for roadbed aggregate. Glass aggregate, a mix of colors crushed to a small size, is substituted for pea gravel or crushed rock in many construction and utility projects, reducing costs to a degree that varies depending on the size of the project. Glass aggregate is not sharp to handle. In many cases, the state Department of Transportation has specifications for its use, size, and the percentage of the quantity that may be used. Common applications are as pipe bedding, placed around sewer, storm water or drinking water pipes to transfer weight from the surface and protect the pipe. Another common use is as fill to bring the level of a concrete floor even with a foundation. Foam glass gravel provides a lighter aggregate with other useful properties. Other uses for recycled glass include: Fiberglass insulation products Ceramic production As a flux in brick manufacture Astroturf Agriculture and landscape applications, such as top dressing, root zone material or golf bunker sand Recycled glass countertops As water filtration media Abrasives Mixed waste streams may be collected from materials recovery facilities or mechanical biological treatment systems. Some facilities can sort mixed waste streams into different colours using electro-optical sorting units. Recycled glass in construction The alternative markets for recycled glass waste include the construction sector (using glass waste for road pavement construction, as an aggregate in asphalt, pipe bedding material, drainage or filler aggregate), the production of cement and concrete (using glass waste as aggregate, as a partial replacement for cement, as a partial replacement for both cement and aggregate in the same mixture, or as a raw material for cement production), as well as decorative aggregate, abrasives, or filtration media. Recycled glass in road pavement In one study, three samples of recycled glass with different gradation curves, produced from residential and industrial waste glass streams in Victoria, were investigated for use as a construction material in geotechnical applications. The Fine Recycled Glass (FRG) and Medium Recycled Glass (MRG) were classified as well-graded (SW-SM), while Coarse Recycled Glass (CRG) was poorly graded (GP) according to the Unified Soil Classification System (USCS). The specific gravity of recycled glass was approximately 10% lower than that of natural aggregate. 
MRG exhibited higher maximum dry unit weight and lower optimum water content compared to FRG. LA abrasion tests showed FRG and MRG to have abrasion resistance similar to construction and demolition material, while CRG had higher abrasion values. Post-compaction analysis indicated stability for FRG and MRG, but CRG displayed poor compaction behavior due to particle shape and moisture absorption issues. CBR and direct shear tests revealed MRG's superior shear resistance and slightly higher internal friction angle compared to FRG. Consolidated drained triaxial shear tests confirmed these findings, suggesting FRG and MRG behave similarly to natural sand and gravel mixtures in geotechnical applications. Hydraulic conductivity tests demonstrated medium permeability and good drainage characteristics for FRG and MRG. Compliance with EPA Victoria requirements for fill material was also confirmed. Overall, the study supports using recycled glass in various geotechnical engineering applications. Recycled glass in bricks Polymer concrete, a material commonly used in industrial flooring, uses polymers, typically resins, to replace lime-type cements as a binder. Researchers have found that ground recycled glass can be used as a substitute for sand when making polymer concrete. According to research, using recycled glass instead of sand produces a high-strength, water-resistant material suitable for industrial flooring and infrastructure drainage, particularly in areas subject to heavy traffic such as service stations, forklift operating areas and airports. Challenges Despite all the improvement in waste and recovery processes, challenges include: Lack of incentive to recycle when inconvenient; opt-in and subscription models lead to low participation Rising material recovery facility fees and pressure from the waste management industry have caused some municipalities to remove glass from curbside recycling Lack of recycling mandates and high levels of contamination cause a significant portion of materials to be disposed of in landfills. Low landfill tip fees for many MRFs (material recovery facilities) incentivize sending glass to the landfill. Lack of capacity in certain areas hinders the ability to meet market demand and reduces the incentive to invest in materials recovery facilities. In some regions, strong demand for cullet from other end markets reduces potential supply for glass containers. Distance between the sources of and markets for cullet requires long-haul shipping. Virgin materials are often cheaper than cullet, sometimes by as much as 20%. By country Europe Germany In 2004, Germany recycled 2.116 million tons of glass. Reusable glass or plastic (PET) bottles are available for many drinks, especially beer and carbonated water as well as soft drinks (Mehrwegflaschen). The deposit per bottle (Pfand) is €0.08-€0.15, compared to €0.25 for recyclable but not reusable plastic bottles. There is no deposit for glass bottles which do not get refilled. India In 2021, India recycled 78.6 million tons of glass. Many drinks are packaged in reusable glass and plastic (PET) bottles, especially beer and carbonated water. Recycled waste glass has many uses, such as cement and paint additives, as well as remanufacture into glass tiles. In particular, Indian recycling companies (Faizal Ahamed & Co, AMB Traders, Abubakkar Sons and JBA Groups in Tamil Nadu, India) collect 5 million tons of glass for recycling per month. 
Non-deposit bottles are typically collected in three colors: clear, green and brown. Netherlands The first bottle bank for non-deposit bottles (glasbak) was installed in Zeist in 1972. Glass is collected in three colours: white, green and brown. There is a deposit for refillable beer bottles when returned to supermarkets. United Kingdom Glass collection points, known as bottle banks, are very common near shopping centres, at civic amenity sites and in local neighborhoods in the United Kingdom. The first bottle bank was introduced by Stanley Race CBE, then president of the Glass Manufacturers' Federation, and Ron England in Barnsley on 6 June 1977. Development work was done by the DoE at Warren Spring Laboratory, Stevenage, (now AERA at Harwell) and Nazeing Glass Works, Broxbourne to determine whether a usable glass product could be made from over 90% recycled glass. It was found necessary to use magnets to remove unwanted metal closures in the mixture. Bottle banks commonly stand beside collection points for other recyclable waste like paper, metals and plastics. Local, municipal waste collectors usually have one central point for all types of waste in which large glass containers are located. In 2007 there were over 50,000 bottle banks in the United Kingdom, and 752,000 tons of glass were being recycled annually. Asia India Approximately 45% of glass waste is recycled each year. North America United States Rates of recycling and methods of waste collection vary substantially across the United States because laws are written at the state or local level and large municipalities often have their own unique systems. Many cities do curbside recycling, meaning they collect household recyclable waste that residents set out in special containers in front of their homes on a weekly or bi-weekly basis and transport it to a materials recovery facility. This is typically single-stream recycling, which creates an impure product and partly explains why, as of 2019, the US has a recycling rate of around 33% versus 90% in some European countries. European countries have requirements for minimum recycled glass content, and more widespread deposit-return systems that provide more uniform material streams. The lower population density and long distances in much of the United States, together with the cost of shipping heavy glass, also mean that recycling is not inherently economical in places where there are no nearby buyers. Apartment dwellers usually use shared containers that may be collected by the city or by private recycling companies which can have their own recycling rules. In some cases, glass is specifically separated into its own container because broken glass is a hazard to the people who later manually sort the co-mingled recyclables. Sorted recyclables are later sold to companies to be used in the manufacture of new products. In 1971, the state of Oregon passed a law requiring buyers of carbonated beverages (such as beer and soda) to pay five cents per container (increased to ten cents in April 2017) as a deposit which would be refunded to anyone who returned the container for recycling. This law has since been copied in nine other states, including New York and California. The abbreviations of states with deposit laws are printed on all qualifying bottles and cans.
In states with these container deposit laws, most supermarkets automate the deposit refund process by providing machines which count containers as they are inserted and then print credit vouchers that can be redeemed at the store for the number of containers returned. Small glass bottles (mostly beer) are broken, one by one, inside these deposit refund machines as the bottles are inserted. A large, wheeled hopper (very roughly 1.5 m by 1.5 m by 0.5 m) inside the machine collects the broken glass until it can be emptied by an employee. Nationwide, bottle refunds recover 80% of glass containers that require a deposit. Major companies in the space include Strategic Materials, which purchases post-consumer glass for 47 facilities across the country. Strategic Materials has worked to correct misconceptions about glass recycling. Glass manufacturers such as Owens-Illinois ultimately include recycled glass in their product. The Glass Recycling Coalition is a group of companies and stakeholders working to improve glass recycling. Oceania Australia In 2019, many Australian cities, after decades of poor planning and minimal investment, were winding back their glass recycling programmes in favour of plastic usage. For many years, there was only one state in Australia with a return deposit scheme on glass containers. Other states had unsuccessfully tried to lobby for glass deposit schemes. More recently this situation has changed dramatically, with the original scheme in South Australia now joined by legislated container deposit schemes in New South Wales, Queensland, the Australian Capital Territory, and the Northern Territory, with schemes planned in Western Australia (2020), Tasmania (2022) and Victoria (2023). Africa South Africa South Africa has an efficient returnable bottle system which includes beer, spirit and liquor bottles. Bottles and jars manufactured in South Africa contain at least 40% recycled glass. Life Cycle Analysis Life Cycle Analysis (LCA) is an important tool for the ecological evaluation of products or processes. LCA is an internationally accepted standard (ISO 14040 & ISO 14044) and scientific tool used to quantify the environmental performance attributable to the different life stages of a product, including upstream stages such as raw material production and energy supply. Results are benchmarked against LCA indicators with the final aim of identifying operational efficiencies and optimising product design while providing a higher level of environmental transparency. The life cycle of glass runs from the extraction of raw materials, through distribution and use by final consumers, to disposal or landfilling. To benefit both the economy and the environment, researchers are working to eliminate the linearity of this life cycle in favour of a circular, closed-loop life cycle in which the extraction of raw materials and landfilling after final consumption are eliminated. Glass can take up to millions of years to decompose in the environment, and even longer in landfill. Glass is, however, 100% recyclable, making it a sustainable resource for producing new forms of packaging without relying on raw materials. The problem is that only about 70% of glass is collected for recycling in the EU, and about 30% in the US. Its recyclability can hence be improved by raising collection rates around the world.
Raising the collection rate depends on educating consumers to dispose of glass properly and to discourage improper disposal. Cradle to cradle Analysis The Cradle-to-Cradle analysis is an approach which evaluates a product's overall sustainability across its entire life cycle. It expands the definition of design quality to include positive effects on economic, ecological and social health. Cradle-to-cradle analysis of glass shows that the most impactful phase of the glass life cycle is its raw material usage. Efforts to improve the sustainability of glass therefore focus on eliminating this stage of production by recycling used glass into secondary raw materials. Regulatory Framework The Waste Framework Directive (2008/98/EC) establishes specific targets for the re-use and recycling of building waste, including glass, and defines high levels of recycling as key to Europe's resource efficiency. A ban on landfill disposal of single clear glass panes and insulating glass units should be introduced in the revised version of Directive 1999/31/EC. ISO The International Organization for Standardization (ISO) is a non-governmental institution (established under the aegis of the UN) bridging public and private sectors. ISO is an international standard setter for "business, government and society," through its pursuit of voluntary standards. These standards range from those dealing with size, clarity, and weights and measures to the systems businesses ought to put in place to enhance customer satisfaction. Its work thus has an intimate impact on daily life by shaping and molding the way in which commerce is conducted, the operating procedures of business, and the way in which consumers engage with markets. Some of this standard setting was the result of government and business agreement on product development; others were the consequence of commercial battles fought over the most appropriate format. The organization reports having developed more than 17,000 international standards in its 60-year history and producing an additional 1,100 standards each year. ISO standards are usually taken into consideration in life-cycle assessments of products. ISO 81.040 contains the international standards for glass. It is divided into four chapters: 81.040.01 Glass in general; 81.040.10 Raw materials and raw glass; 81.040.20 Glass in building; 81.040.30 Glass products. Other related ISO standards: 55.100 Bottles, pots, jars; 71.040.20 Laboratory glassware. See also Baler Castlemaine Tooheys Ltd v South Australia; Container-deposit legislation Glass crusher Reuse of bottles Waste management References External links "Plant Chops Old Bottles For New", August 1949, Popular Science article on the basics of glass recycling Glass chemistry Recycling Recycling by material Glass production
Glass recycling
[ "Chemistry", "Materials_science", "Engineering" ]
3,920
[ "Glass production", "Glass engineering and science", "Glass chemistry", "Materials science" ]
3,053,980
https://en.wikipedia.org/wiki/Slooh
Slooh is a robotic telescope service that can be viewed live through a web browser. It was not the first robotic telescope, but it was the first to offer "live" viewing through a telescope via the web. Other online telescopes traditionally email a picture to the recipient. The site has a patent on its live image processing method. Slooh is an online astronomy platform with live views and telescope rental for a fee. Observations come from a global network of telescopes located in places including Spain and Chile. The name Slooh comes from the word "slew", indicating the movement of a telescope, modified with "ooh" to express pleasure and surprise. Slooh, LLC is based in Washington, Connecticut. History The service was founded in 2002 by Michael Paolucci. Its Canary Islands telescope went online December 25, 2003, but was not available to the public until 2004. Participating observatories The original astronomical observatory is located on the island of Tenerife in the Canary Islands, on the volcano called Teide. The site is at high elevation and situated away from city light pollution. This Canary Islands site includes two domes, each housing two telescopes and offering two telescopic views: a high-magnification (narrow field) view through a Celestron Schmidt-Cassegrain telescope, and a wide-field view through either a telephoto lens or an APO refractor. One dome is optimized for planetary views (e.g., more magnification and a different CCD), and the other is optimized for deep sky objects (e.g., less magnification, a more light-sensitive CCD). In 2012, the Slooh.com Canary Islands Observatory was assigned observatory code G40. On February 14, 2009, Slooh launched a second observatory in the hills above La Dehesa, Chile. This site offers views from the Southern Hemisphere. In 2014, the Slooh.com Chile Observatory was assigned observatory code W88. Unlike Google Sky, which features images from the Hubble Space Telescope, Slooh can take new images of the sky with its telescopes. See also Amateur astronomy LightBuckets Lists of telescopes References Further reading External links Astronomical observatories in Chile Astronomical observatories in Spain Telescopes Education companies established in 2003 2003 establishments in the Canary Islands
Slooh
[ "Astronomy" ]
502
[ "Telescopes", "Astronomical instruments" ]
3,054,008
https://en.wikipedia.org/wiki/Sensitization
Sensitization is a non-associative learning process in which repeated administration of a stimulus results in the progressive amplification of a response. Sensitization is often characterized by an enhancement of response to a whole class of stimuli in addition to the one that is repeated. For example, repetition of a painful stimulus may make one more responsive to a loud noise. History Eric Kandel was one of the first to study the neural basis of sensitization, conducting experiments in the 1960s and 1970s on the gill withdrawal reflex of the sea slug Aplysia. Kandel and his colleagues first habituated the reflex, weakening the response by repeatedly touching the animal's siphon. They then paired a noxious electrical stimulus to the tail with a touch to the siphon, causing the gill withdrawal response to reappear. After this sensitization, a light touch to the siphon alone produced a strong gill withdrawal response, and this sensitization effect lasted for several days. (After Squire and Kandel, 1999). In 2000, Eric Kandel was awarded the Nobel Prize in Physiology or Medicine for his research in neuronal learning processes. Neural substrates The neural basis of behavioral sensitization is often not known, but it typically seems to result from a cellular receptor becoming more likely to respond to a stimulus. Several examples of neural sensitization include: Electrical or chemical stimulation of the rat hippocampus causes strengthening of synaptic signals, a process known as long-term potentiation or LTP. LTP of AMPA receptors is a potential mechanism underlying memory and learning in the brain. In "kindling", repeated stimulation of hippocampal or amygdaloid neurons in the limbic system eventually leads to seizures in laboratory animals. After sensitization, very little stimulation may be required to produce seizures. Thus, kindling has been suggested as a model for temporal lobe epilepsy in humans, where stimulation of a repetitive type (flickering lights, for instance) can cause epileptic seizures. Often, people suffering from temporal lobe epilepsy report negative effects such as anxiety and depression that might result from limbic dysfunction. In "central sensitization", nociceptive neurons in the dorsal horns of the spinal cord become sensitized by peripheral tissue damage or inflammation. This type of sensitization has been suggested as a possible causal mechanism for chronic pain conditions. The changes of central sensitization occur after repeated exposure to pain. Research in animals has consistently shown that when an animal is repeatedly exposed to a painful stimulus, its pain threshold will change, resulting in a stronger pain response. Researchers believe that there are parallels that can be drawn between these animal trials and persistent pain in people. For example, after back surgery to remove a herniated disc that was causing a pinched nerve, the patient may still continue to feel pain. Also, newborns who are circumcised without anesthesia have shown tendencies to react more strongly to future injections, vaccinations, and other similar procedures. These children respond with increased crying and a greater hemodynamic response (tachycardia and tachypnea). Drug sensitization occurs in drug addiction, and is defined as an increased effect of a drug following repeated doses (the opposite of drug tolerance).
Such sensitization involves changes in brain mesolimbic dopamine transmission, as well as a protein inside mesolimbic neurons called delta FosB. An associative process may contribute to addiction, since environmental stimuli associated with drug taking may increase craving. This process may increase the risk of relapse in addicts attempting to quit. Cross-sensitization Cross-sensitization is a phenomenon in which sensitization to a stimulus is generalized to a related stimulus, resulting in the amplification of a particular response to both the original stimulus and the related stimulus. For example, cross-sensitization to the neural and behavioral effects of addictive drugs is well characterized, such as sensitization to the locomotor response of one stimulant resulting in cross-sensitization to the motor-activating effects of other stimulants. Similarly, reward sensitization to a particular addictive drug often results in reward cross-sensitization, which entails sensitization to the rewarding property of other addictive drugs in the same drug class or even certain natural rewards. In animals, cross-sensitization has been established between the consumption of many different types of drugs of abuse – in line with the gateway drug theory – and also between sugar consumption and the self-administration of drugs of abuse. As a causal factor in pathology Sensitization has been implicated as a causal or maintaining mechanism in a wide range of apparently unrelated pathologies including addiction, allergies, asthma, overactive bladder and some medically unexplained syndromes such as fibromyalgia and multiple chemical sensitivity. Sensitization may also contribute to psychological disorders such as post-traumatic stress disorder, panic anxiety and mood disorders. See also Long-term potentiation Multiple chemical sensitivity Neuroplasticity Synaptic plasticity References Behaviorism Learning
Sensitization
[ "Biology" ]
1,069
[ "Behavior", "Behaviorism" ]
3,054,267
https://en.wikipedia.org/wiki/Freestone%20%28masonry%29
A freestone is a type of stone used in masonry for molding, tracery and other replication work required to be worked with the chisel. Freestone, so named because it can be freely cut in any direction, must be fine-grained, uniform and soft enough to be cut easily without shattering or splitting. Some sources, including numerous nineteenth-century dictionaries, say that the stone has no grain, but this is incorrect. Oolitic stones are generally used, although in some countries soft sandstones are used; in some churches an indurated chalk called clunch is employed for internal lining and for carving. Some have believed that the word "freemason" originally referred, from the 14th century, to a person capable of carving freestone. See also Aquia Creek sandstone Hummelstown brownstone References Masonry
Freestone (masonry)
[ "Engineering" ]
173
[ "Construction", "Masonry" ]
3,054,440
https://en.wikipedia.org/wiki/Flight%20airspeed%20record
An air speed record is the highest airspeed attained by an aircraft of a particular class. The rules for all official aviation records are defined by the Fédération Aéronautique Internationale (FAI), which also ratifies any claims. Speed records are divided into a number of classes with sub-divisions. There are three classes of aircraft: landplanes, seaplanes, and amphibians, and within these classes there are records for aircraft in a number of weight categories. There are still further subdivisions for piston-engined, turbojet, turboprop, and rocket-engined aircraft. Within each of these groups, records are defined for speed over a straight course and for closed circuits of various sizes carrying various payloads. Timeline Gray text indicates unofficial records, including unconfirmed or unpublicized war secrets. Official records versus unofficial The Lockheed SR-71 Blackbird holds the official air speed record for a crewed airbreathing jet engine aircraft, at 3,529.6 km/h (2,193.2 mph). The record was set on 28 July 1976 by Eldon W. Joersz and George T. Morgan Jr. near Beale Air Force Base, California, USA. It was able to take off and land unassisted on conventional runways. SR-71 pilot Brian Shul claimed in The Untouchables that he flew in excess of Mach 3.5 on 15 April 1986, over Libya, in order to avoid a missile. Although the official record for the fastest piston-engined aeroplane in level flight was held by a Grumman F8F Bearcat, the Rare Bear, at 528.33 mph (850.26 km/h), the unofficial record for the fastest piston-engined aeroplane in level flight is held by a British Hawker Sea Fury. Both were demilitarised and modified fighters, while the fastest stock (original, factory-built) piston-engined aeroplane was unofficially the Supermarine Spiteful F Mk 16, which "achieved a speed of 494m.p.h. at 28,500ft during official tests at Boscombe Down" in level flight. The unofficial record for the fastest piston-engined aeroplane (not in level flight) is held by a Supermarine Spitfire Mk.XIX, based on a speed calculated from a dive on 5 February 1952. The last new speed record ratified before the outbreak of World War II was set on 26 April 1939 with a Me 209 V1, at 755 km/h (469 mph). The chaos and secrecy of World War II meant that new speed breakthroughs were neither publicized nor ratified. In October 1941, an unofficial speed record of 1,004 km/h (624 mph) was secretly set by a Messerschmitt Me 163A "V4" rocket aircraft. Continued research during the war extended the secret, unofficial speed record to roughly 1,130 km/h (702 mph) by July 1944, achieved by a Messerschmitt Me 163B "V18". The first new official record in the post-war period was achieved by a Gloster Meteor F Mk.4 in November 1945, at 975 km/h (606 mph). The first aircraft to exceed the unofficial October 1941 record of the Me 163A V4 was the Douglas D-558-1 Skystreak, in August 1947. The July 1944 unofficial record of the Me 163B V18 was officially surpassed in November 1947, when Chuck Yeager flew the Bell X-1 past that speed. The official speed record for a piston-engined seaplane is 709.2 km/h (440.7 mph), attained on 23 October 1934 by Francesco Agello in the Macchi-Castoldi M.C.72 seaplane ("idrocorsa"), and it remains the current record. It was equipped with the 1934 version of the Fiat AS.6 engine, which developed its maximum power at 3,300 rpm and drove coaxial counter-rotating propellers. The original record-holding Macchi-Castoldi M.C.72 MM.181 seaplane is at the Air Force Museum at Vigna di Valle in Italy.
Other air speed records Flying between any two airports allows a large number of combinations, so setting a speed record ("speed over a recognised course") is fairly easy with an ordinary aircraft, although there are many administrative requirements for recognition. See also Flight altitude record Fastest propeller-driven aircraft List of vehicle speed records Lockheed X-7 - Mach 4.31 (2,881 mph) in the 1950s Messerschmitt Me 163 Komet World record References Allward, Maurice. Modern Combat Aircraft 4: F-86 Sabre. London: Ian Allan, 1978. Andrews, C.F. and E.B. Morgan. Supermarine Aircraft since 1914. London: Putnam, 1987. Belyakov, R.A. and J. Marmain. MiG: Fifty Years of Secret Aircraft Design. Shrewsbury, UK: Airlife, 1994. Bowers, Peter M. Curtiss Aircraft 1907–1947. London: Putnam, 1979. Cooper, H.J. "The World's Speed Record". Flight, 25 May 1951, pp. 617–619. "Eighteen Years of World's Records". Flight, 7 February 1924, pp. 73–75. Francillon, René J. McDonnell Douglas Aircraft since 1920. London: Putnam, 1979. James, Derek N. Gloster Aircraft since 1917. London: Putnam, 1971. Mason, Francis K. The British Fighter since 1912. Annapolis, Maryland, US: Naval Institute Press, 1992. Munson, Kenneth and John William Ransom Taylor. Jane's Pocket Book of Record-breaking Aircraft. New York, New York, US: Macmillan, 1978. Organ, Richard. Avro Arrow: The Story of the Avro Arrow From Its Evolution To Its Extinction. Erin, ON, Canada: Boston Mills Press, 1980. Taylor, H. A. Fairey Aircraft since 1915. London: Putnam, 1974. Taylor, John W. R. Jane's All The World's Aircraft 1965–66. London: Sampson Low, Marston & Company, 1965. Taylor, John W. R. Jane's All The World's Aircraft 1976–77. London: Jane's Yearbooks, 1976. Taylor, John W. R. Jane's All The World's Aircraft 1988–89. Coulsdon, UK: Jane's Defence Data, 1988. External links Web site of the Fédération Aéronautique Internationale (FAI) Speed records time line Speed Record Club - The Speed Record Club seeks to promote an informed and educated enthusiast identity, reporting accurately and impartially to the best of its ability on record-breaking engineering, events, attempts and history. Ground Speed Records - Breakdown of speed records by aircraft type Aviation records Air racing Airspeed Flight
Flight airspeed record
[ "Physics" ]
1,354
[ "Wikipedia categories named after physical quantities", "Airspeed", "Physical quantities" ]
3,054,460
https://en.wikipedia.org/wiki/Kasanka%20National%20Park
Kasanka National Park is a park located in the Chitambo District of Zambia's Central Province. Kasanka is one of Zambia's smallest national parks. Kasanka was the first of Zambia's national parks to be managed by a private-public partnership. The privately funded Kasanka Trust Ltd has been in operation since 1986 and undertakes all management responsibilities, in partnership with the Department of National Parks and Wildlife (DNPW - previously ZAWA). The park sits at a fairly uniform elevation above mean sea level. It has a number of permanent shallow lakes and water bodies, the largest being Wasa. There are five perennial rivers in the park, the largest being the Luwombwa River. The Luwombwa is the only river that drains the national park, flowing out at its northwestern corner. It is a tributary of the Luapula, which further upstream also drains the Bangweulu Swamp and forms the main source of the Congo River. Although Kasanka NP is part of the Greater Bangweulu Ecosystem, there is no direct hydrological connection between the park and the Bangweulu Wetlands. A total of 114 mammal species have been recorded in the park, including elephant, hippopotamus and sitatunga. A number of species have been reintroduced in the park by the Kasanka Trust - the most successful of which are zebra and buffalo. Close to ten million Eidolon helvum (African straw-coloured fruit bats) migrate to the Mushitu swamp evergreen forest in the park for three months from October to December, making it the largest mammal migration in the world. Over 470 bird species have been identified in the park. An airfield lies there. Topography and vegetation Kasanka's altitude above mean sea level varies only modestly. The park is located in the Serenje District of Zambia. While sources differ slightly on the exact area of the park, it is one of the smaller national parks in the country. It has a relatively flat topography with few noteworthy relief features, with the exception of the Mambilima Falls located close to the Kasanka Conservation Centre and the rocky Mpululwe and Bwalya Bemba hills. Nine permanent lakes are found in the park and it is dissected by a network of rivers and streams. The larger rivers are the Luwombwa, Mulembo, Kasanka, Mulaushi and the swampy Musola River. The river streams and the lagoons have reed and papyrus beds. All of these rivers eventually shed their water via one another into the Luapula River, the only drainage outlet for the Bangweulu basin, and a major tributary of the Congo River. Habitats There are a variety of habitats in the park. Brachystegia woodland, otherwise known as Miombo Woodland, covers around 70% of Kasanka's surface area, interspersed with grassy dambos. It is very rich in tree species and in many places forms a half-closed canopy, but also supports a well-developed herbaceous stratum. A high frequency of fires removes this stratum and young saplings and leads to Miombo Woodland with large, widely separated trees. Decades of "early burning" in the park have resulted in more natural Miombo with a strong presence of young trees and thicket species. Evergreen forests of three kinds occur within Kasanka: Mushitu or swamp forest, riverine forests and very small patches of Mateshe (dry evergreen forest). The Mushitu is characterised by huge red mahoganies, waterberries and quinine trees, among others, and is fairly well represented.
The largest tract of intact Mushitu, in the Fibwe area, hosts the annual gathering of straw-coloured fruit bats from October to December, making it the largest fruit bat roost on Earth. Riverine forests are found along most rivers in Kasanka, with the largest stretches being found along the Luwombwa. True Mateshe probably was common in historic times but is rare now, as a result of centuries of frequent fires. All forest types are at risk from frequent wildfires, as the tree species they support are not resistant to fire. Chipya, also known as Lake Basin Woodland, has interspersed trees that do not form a closed canopy. This admits sunlight that allows tall grasses to grow. Chipya is prone to very hot fires in the dry season, and this gives these woodlands their name, as 'chiya' means 'burnt' in the local language. Chipya typically occurs on certain soil types and is thought to be a fire-derived form of Mateshe. Dambos are grassy drainage channels and basins with little to no woody vegetation but very palatable grasses. Most woody species grow on exposed termitaria, as dambos tend to retain water very well. Dambos are of vital importance to grazing mammal species as well as several woodland mammals that choose to graze on the fringes, especially during the dry season. Several large (several square km) grassy plains occur within the park, such as Chinyangali close to Fibwe and the Chikufwe plain east of the Luwombwa River. Papyrus swamps are considered the crown jewels of Kasanka, with vast marshes supporting large tracts of thick papyrus swamp, home to the elusive sitatunga. Kasanka has nine permanent lakes and many kilometres of rivers flowing through the park. Many of the rivers, especially the Luwombwa in the west, support riparian fringe forests on their banks. Large areas of grassy floodplains are found along the Kasanka, Mulembo and Luwombwa rivers. The rivers and lakes host a variety of fish and are rich in other forms of aquatic and semi-aquatic wildlife. Fauna Mammals A total of 114 mammal species have been recorded in the park. Although severely depleted in the past, game populations in Kasanka have recovered due to an ongoing anti-poaching presence. Puku are the most plentiful antelope and graze on the grassy floodplains and dambos throughout the park. Common duiker, bushbuck, warthog, vervet monkey and Kinda baboon (related to the yellow baboon (Papio cynocephalus)) are common throughout the park, and hippo can frequently be encountered in Kasanka's rivers and lakes, including in Lake Wasa, opposite the main lodge. Kasanka is perhaps the best place in the world to spot the shy and reclusive sitatunga, of which the park holds an estimated 500-1,000 animals, and offers great opportunities for sightings of the rare blue monkey. Elephant are faring increasingly well, and several breeding herds and bachelor bulls traverse the park and the surrounding game management area. Several of the plains, like Chikufwe, are home to common reedbuck, buffalo, sable antelope and Lichtenstein's hartebeest, which are often encountered in the dry season. A small population of plains zebra occurs in the park. Roan antelope, defassa waterbuck and Sharpe's grysbok occur but are rare and seldom seen, whereas warthog numbers are increasing and they are commonly sighted. Yellow-backed duiker and Moloney's monkey, which are poached elsewhere, have also increased steadily in the park. The largest resident predator in the park is the leopard.
Lions and hyenas are no longer resident, but hyenas are seasonal visitors. Side-striped jackal are common and often spotted in the early mornings. A range of smaller carnivores occur, of which water mongoose, white-tailed mongoose, African civet and large spotted genet are commonly encountered at night, and slender, banded and dwarf mongoose can often be seen crossing pathways during the day. Caracal, serval, honey badger and the rare Meller's mongoose occur but are very seldom sighted. Two species of otter live in Kasanka's rivers, marshes and lakes. Bats The first of Kasanka's famous straw-coloured fruit bats (Eidolon helvum) start arriving towards the middle of October each year. By mid-November the roost has reached its highest density, with numbers estimated at around eight to ten million. It is believed to be the highest density of mammalian biomass on the planet, as well as the greatest known mammal migration. The arrival of the bats normally coincides with the start of the first rains and the ripening of many local fruit and berry species such as the masuku (wild loquat) and waterberry, on which the bats feed. It is estimated that 330,000 tonnes of fruit are consumed by the bats during the three months. The bat roost is centred on one of the largest remaining patches of Mushitu (indigenous forest) in Kasanka, along the Musola stream. There are a number of excellent 'hides' in trees at the edge of the forest which provide fantastic sightings of the bats in flight at dawn and dusk. The high concentration of bats attracts an incredible variety of predators and scavengers to the bat forest. Martial eagles, pythons, fish eagles, lesser-spotted and African hawk-eagles, kites, vultures and hobby falcons are amongst the raptors that concentrate on the roost for easy pickings, whereas leopard, water monitors and crocodiles make off with those bats unfortunate enough to drop to the forest floor. The origin of the various colonies that make up this 'mega-colony' has never been fully established; however, it is known that bats travel from other parts of Africa, including the Congo. Studies indicate that the abundance of fruit during the season is the major reason for the migration. The bats' arrival starts gradually during the first week of October; numbers peak in November and early December, and start to decrease around the second week of December. The departure of the last bats has been getting later over recent years, with the final bats now generally leaving Kasanka in early January. A study published in the African Journal of Ecology indicated that the migratory impact of the bats could ultimately threaten the viability of the seasonal roost, as it accelerates tree mortality. The Kasanka Trust undertakes an extensive fire management program to protect the forest from the devastating fires which over time have reduced the size of the forest. A 'Fire Exclusion Zone' has been established which over time will allow natural regeneration of the bats' forest in order to protect the unique phenomenon. Birds 'Kasanka holds undoubtedly some of the finest birding in Africa', according to Dr Ian Sinclair, one of Africa's leading ornithologists. With over 330 species recorded in this relatively small area without altitudinal variation, the claim is difficult to dispute. Kasanka is blessed with a wide variety of habitats, each hosting its own community of bird species, many of which are rare or uncommon.
A boat trip along the Luwombwa River, or any other major river in the park, may reveal Pel's fishing owl, African finfoot, half-collared kingfisher, Ross's turaco and Böhm's bee-eater. The vast wetlands of Kasanka support some species not easily seen elsewhere, such as rufous-bellied heron, lesser jacana and African pygmy goose. The shoebill was confirmed for the first time in 20 years at the end of 2010, and a breeding pair of wattled cranes and their offspring are often encountered. Marsh tchagra, coppery-tailed coucal, Fülleborn's longclaw, locustfinch, pale-crowned, croaking and short-winged cisticola, chestnut-headed and streaky-breasted flufftail, harlequin and blue quail, black-rumped buttonquail and fawn-breasted waxbill are amongst the other specials on the wetland fringes and in the large dambos. The Mushitu is host to a wide range of other species; the sought-after Narina trogon can often be heard and seen in the small patches of forest close to Pontoon and Fibwe. A range of other species occur, such as blue-mantled crested flycatcher, Schalow's turaco, brown-headed apalis, black-backed barbet, grey waxbill, Bocage's robin, West African (olive) thrush, dark-backed weaver, red-throated twinspot, green twinspot, red-backed mannikin, green-headed sunbird, yellow-rumped tinkerbird, scaly-throated honeyguide, pallid honeyguide, purple-throated cuckooshrike, black-throated wattle-eye, yellow-throated leaflove and little, grey-olive, yellow-bellied and Cabanis's greenbul. However, perhaps the richest birding areas of Kasanka are the extensive tracts of miombo woodland. A variety of specialist species occur here, many of which are not found outside the sub-region; these include black-collared and green-capped eremomelas, racket-tailed roller, rufous-bellied and Miombo grey tits, grey penduline tit, woodland and bushveld pipit, spotted creeper, white-tailed blue flycatcher, Böhm's flycatcher, yellow-bellied hyliota, red-capped crombec, Cabanis's bunting, Reichard's and black-eared seedeater, miombo scrub robin, miombo rock thrush, thick-billed cuckoo, Anchieta's sunbird, and Anchieta's, Whyte's and Miombo pied barbets. Other wildlife In addition to the large and more visible game and wildlife, Kasanka is home to an incredible variety of insects and other arthropods. The many rivers and marshes are home to a wide range of reed frogs and other amphibians. Large crocodiles dwell in the rivers, and huge specimens can be seen along the Kasanka and Luwombwa Rivers. Large Nile monitors occur as well, as do Speke's hinged tortoise. Common snake species include African rock python, forest cobra, lined olympic snake, olive marsh snake and herald snake. Three geckos, one agama, five skinks, one worm-lizard and two lizard species are known to occur as well. Administration and management Zambia's wildlife management is considered among the best managed and administered in Africa. The management faces ever increasing challenges from a rapidly growing human population in the country. This manifests itself in increasing poaching and illegal harvesting from national parks, encroachment, and deforestation due to charcoal production and farming activities. During the 1980s, the Zambian administration faced a shortage of trained staff, transport and patrols, leading the government to seek the support of NGOs and private bodies.
After visiting the neglected and completely undeveloped Kasanka National Park for the first time in 1985 and hearing gunshots, the late David Lloyd, a British colonial officer, was impressed by the wide range of habitats and amazing scenery and concluded that if there was still poaching, there must still be wildlife. He made it his life's mission to develop the park and safeguard the biodiversity of Kasanka. In 1987 the Kasanka Trust (KTL) was founded as a non-profit charitable institution with tax exemption within Zambia. It has since become a registered charity in the United Kingdom and the Netherlands. The Kasanka Trust has a Memorandum of Understanding with the Department of National Parks and Wildlife (previously ZAWA - Zambian Wildlife Authority) and under this agreement is responsible for infrastructure and habitat management, community outreach and tourism. Kasanka thus became the first national park in Zambia to be managed by a private body. DNPW retains the responsibility for anti-poaching work in the park and surrounding Game Management Area in conjunction with the Trust. The Kasanka Trust aims to cover its costs through tourism-generated revenue; however, it is still reliant on external funding from grants and donations for part of its budget. Since KTL's involvement at Kasanka, significant progress has been made: a vast network of roads has been created, as well as an excellent tourist infrastructure, a community conservation centre and effective anti-poaching measures. The Trust employs about 60 local staff and has an ongoing community outreach program within the surrounding communities including, amongst other things, sponsorship of secondary school students, promotion of conservation farming techniques, a human/elephant conflict program and promotion of the conservation message to local villages. In 2011 the Kasanka Trust began operations in the undeveloped and depleted 1,600 km2 Lavushi Manda National Park with assistance from the World Bank. References Bibliography External links Kasanka National Park website Kasanka Trust website National parks of Zambia Geography of Northern Province, Zambia Geography of Central Province, Zambia Miombo Tourist attractions in Northern Province, Zambia Tourist attractions in Central Province, Zambia Animal migration Central Zambezian miombo woodlands Important Bird Areas of Zambia
Kasanka National Park
[ "Biology" ]
3,564
[ "Ethology", "Behavior", "Animal migration" ]
3,054,516
https://en.wikipedia.org/wiki/Acquired%20situational%20narcissism
Acquired situational narcissism (ASN) is a form of narcissism that develops in late adolescence or adulthood, brought on by wealth, fame and the other trappings of celebrity. The term was coined by Robert B. Millman, professor of psychiatry at the Weill Cornell Medical College of Cornell University. ASN differs from conventional narcissism in that it develops after childhood and is triggered and supported by a celebrity-obsessed society: fans, assistants and tabloid media all play into the idea that the person really is vastly more important than other people, triggering a narcissistic problem that might have been only a tendency, or latent, and helping it to become a full-blown personality disorder. "Millman says that what happens to celebrities is that they get so used to people looking at them that they stop looking back at other people." In its presentation and symptoms, it is indistinguishable from narcissistic personality disorder, differing only in its late onset and its support by large numbers of others. "The lack of social norms, controls, and of people telling them how life really is, also makes these people believe they're invulnerable," so the person with ASN may suffer from unstable relationships, substance abuse and erratic behaviour. A famous fictional character with ASN is Norma Desmond, the main character of Sunset Boulevard. References Narcissism
Acquired situational narcissism
[ "Biology" ]
291
[ "Behavior", "Narcissism", "Human behavior" ]
3,054,565
https://en.wikipedia.org/wiki/Apodization
In signal processing, apodization (from Greek "removing the foot") is the modification of the shape of a mathematical function. The function may represent an electrical signal, an optical transmission, or a mechanical structure. In optics, it is primarily used to remove Airy disks caused by diffraction around an intensity peak, improving the focus. Apodization in electronics Apodization in signal processing The term apodization is used frequently in publications on Fourier-transform infrared (FTIR) signal processing. An example of apodization is the use of the Hann window in the fast Fourier transform analyzer to smooth the discontinuities at the beginning and end of the sampled time record (a minimal numerical sketch of this windowing step appears later in this article). Apodization in digital audio An apodizing filter can be used in digital audio processing instead of the more common brick-wall filters, in order to reduce the pre- and post-ringing that the latter introduces. Apodization in mass spectrometry During oscillation within an Orbitrap, the ion transient signal may not be stable until the ions settle into their oscillations. Toward the end, subtle ion collisions have added up to cause noticeable dephasing. This presents a problem for the Fourier transformation, as it averages the oscillatory signal across the length of the time-domain measurement. Analysis software typically allows "apodization", the removal of the front and back sections of the transient signal from consideration in the FT calculation. Thus, apodization improves the resolution of the resulting mass spectrum. Another way to improve the quality of the transient is to wait to collect data until ions have settled into stable oscillatory motion within the trap. Apodization in nuclear magnetic resonance spectroscopy Apodization is applied to NMR signals before discrete Fourier transformation. Typically, NMR signals are truncated due to time constraints (indirect dimension) or to obtain a higher signal-to-noise ratio. In order to reduce truncation artifacts, the signals are subjected to apodization with different types of window functions. Apodization in optics In optical design jargon, an apodization function is used to purposely change the input intensity profile of an optical system, and it may be a complicated function to tailor the system to certain properties. Usually, it refers to a non-uniform illumination or transmission profile that approaches zero at the edges. Apodization in imaging Since the side lobes of the Airy disk are responsible for degrading the image, techniques for suppressing them are utilized. If the imaging beam has a Gaussian distribution, when the truncation ratio (the ratio of the diameter of the Gaussian beam to the diameter of the truncating aperture) is set to 1, the side-lobes become negligible and the beam profile becomes purely Gaussian. In medical ultrasonography, the effect of grating lobes can be reduced by activating ultrasonic transducer elements using variable voltages in an apodization process. Apodization in photography Most camera lenses contain diaphragms which decrease the amount of light coming into the camera. These are not strictly an example of apodization, since the diaphragm does not produce a smooth transition to zero intensity, nor does it provide shaping of the intensity profile (beyond the obvious all-or-nothing, "top hat" transmission of its aperture). Some lenses use other methods to reduce the amount of light let in.
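Returning to the signal-processing sense of apodization described above, the following minimal Python sketch illustrates the Hann-window step before a fast Fourier transform; the sample rate, record length and test tone are invented for the illustration and are not taken from any particular FTIR instrument:

import numpy as np

fs = 1000.0                                # sample rate in Hz (assumed)
t = np.arange(1024) / fs                   # 1024-sample time record
signal = np.sin(2 * np.pi * 123.4 * t)     # tone that does not fit the record exactly

window = np.hanning(len(signal))           # Hann apodization function
spectrum_raw = np.abs(np.fft.rfft(signal))                # spectrum with edge discontinuities
spectrum_apodized = np.abs(np.fft.rfft(signal * window))  # spectrum after apodization

# The apodized spectrum shows far lower side lobes ("leakage") around the
# 123.4 Hz peak, at the cost of a slightly broader main lobe.

The trade-off visible in this sketch, lower side lobes in exchange for a broader main lobe, is the same one noted for optical apodization in the astronomy section below.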
One such method is found in the Minolta/Sony STF 135mm f/2.8 T4.5 lens, a special design introduced in 1999, which accomplishes this by utilizing a concave neutral-gray tinted lens element as an apodization filter, thereby producing a pleasant bokeh. The same optical effect can be achieved by combining depth-of-field bracketing with multi-exposure, as implemented in the Minolta Maxxum 7's STF function. In 2014, Fujifilm announced a lens utilizing a similar apodization filter in the Fujinon XF 56mm F1.2 R APD lens. In 2017, Sony introduced the E-mount full-frame lens Sony FE 100mm F2.8 STF GM OSS (SEL-100F28GM), based on the same optical Smooth Trans Focus principle. Simulation of a Gaussian laser beam input profile is also an example of apodization. Photon sieves provide a relatively easy way to achieve tailored optical apodization. Apodization in astronomy Apodization is used in telescope optics in order to improve the dynamic range of the image. For example, stars with low intensity in the close vicinity of very bright stars can be made visible using this technique, and even images of planets can be obtained when otherwise obscured by the bright atmosphere of the star they orbit. Generally, apodization reduces the resolution of an optical image; however, because it reduces diffraction edge effects, it can actually enhance certain small details. In fact, the notion of resolution, as it is commonly defined with the Rayleigh criterion, is in this case partially irrelevant. The image formed in the focal plane of a lens (or a mirror) is modeled through the Fresnel diffraction formalism. The classical diffraction pattern, the Airy disk, is connected to a circular pupil, without any obstruction and with a uniform transmission. Any change in the shape of the pupil (for example a square instead of a circle), or in its transmission, results in an alteration of the associated diffraction pattern. See also Apodization function References Signal processing
Apodization
[ "Technology", "Engineering" ]
1,141
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
3,054,665
https://en.wikipedia.org/wiki/K%C3%A9gresse%20track
A Kégresse track is a kind of rubber or canvas continuous track which uses a flexible belt rather than interlocking metal segments. It can be fitted to a conventional car or truck to turn it into a half-track, suitable for use over rough or soft ground. Conventional front wheels and steering are used, although skis may also be fitted. A snowmobile is a smaller ski-only type. The mechanism incorporates an articulated bogie, fitted to the rear of the vehicle, with a large drive wheel at one end, a large unpowered idler wheel at the other, and several small guide wheels in between, over which runs a reinforced flexible belt. The belt is fitted with metal or rubber treads to grip the ground. History Russia Adolphe Kégresse designed the original system whilst working for Tsar Nicholas II of Russia between 1906 and 1916. He applied it to several cars in the royal garage, including Rolls-Royce cars and Packard trucks. The Russian army also fitted the system to a number of their Austin Armoured Cars. France Following the Russian Revolution, Kégresse returned to his native France, where the system was used on Citroën cars between 1921 and 1937 for off-road and military vehicles. Expeditions across undeveloped parts of Asia, America, and Africa were undertaken by Citroën, demonstrating all-terrain capabilities. During World War II, the Wehrmacht captured many Citroën half-track vehicles and armored them for their own use. Great Britain The British firm Burford developed the Burford-Kégresse, an armoured personnel carrier conversion of their 30 cwt trucks. The rear-axle-powered Kégresse tracks were produced under license from Citroën. A 1921 prototype passed trials and the British Army placed an order, but in continuous operation the tracks wore and broke. By 1929, the vehicles were taken out of service and later scrapped. Poland Citroën-Kégresse vehicles served in the Polish motorized artillery during the 1930s. Domestically produced Kégresse half-track trucks included the 1934 Półgąsienicowy ("half-track car"), better known as the C4P, derived from the 4.5-ton Polski Fiat 621 truck. The C4P was designed by the BiRZ Badań Technicznych Broni Pancernych (Warsaw Armored Weapons and Technical Research Bureau) in 1934. The engine and cab received some modifications and the front axle was reinforced to integrate the 4x4 transmission. Production began in 1936 at Państwowe Zakłady Inżynierii's Warsaw plant. By 1939, more than 400 were produced, including at least 80 artillery tractors. Belgium The FN-Kégresse 3T was a half-track vehicle used by the Belgian armed forces as an artillery tractor between 1934 and 1940. 130 were built, with some 100 in service before the German invasion. United States In the late 1920s, the US Army purchased Citroën-Kégresse vehicles for evaluation, followed by a licence to produce the tracks. A 1939 prototype went into production with M2 and M3 half-track versions. More than 41,000 vehicles in over 70 versions were produced between 1940 and 1944. Sources External links Citroën-Kegresse halftracks in the Polish Army Informationen über Leben und Werk von Adolphe Kégresse (in German) Register of Kegresse Track vehicles Automotive suspension technologies Engineering vehicles
Kégresse track
[ "Engineering" ]
687
[ "Engineering vehicles" ]
3,054,711
https://en.wikipedia.org/wiki/Software%20license%20server
A software license server is a centralized computer software system which provides access tokens, or keys, to client computers in order to enable licensed software to run on them. In 1989, Sassafras Software Inc developed its trademarked KeyServer software license management tool. Since that time, other computing technology firms have used the phrase "key server" interchangeably with "software license server." It is the job of a software license server to determine and control the number of copies of a program permitted to be used, based on the license entitlements that an organization owns (a toy sketch of this counting logic appears at the end of this article). Typically, an end-user customer organization will install a software license server on a host computer to provide licensing services to an enterprise computing environment. Publisher-specific license servers are commonly provided by software publishers, or through third-party providers, to manage software licensing for a specific software publisher's products. Publisher-specific license servers are more commonly used for industry-specialized software products than for common software products, due to the high value of the managed software products. The server component of a client–server application may also contain an internal license server. References See also Floating licensing Product activation Digital rights management Information technology management Software licensing
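To make the counting role above concrete, here is a minimal, illustrative Python sketch of an in-memory floating-license counter. The class name, method names and the example product name are all invented for this sketch; they do not correspond to KeyServer or any other real product's API:

class LicenseServer:
    # Toy floating-license counter (illustrative only, not a real product's API).
    def __init__(self, entitlements):
        # entitlements maps a product name to the number of seats the organization owns
        self.entitlements = dict(entitlements)
        self.checked_out = {product: 0 for product in entitlements}

    def checkout(self, product):
        # Grant a token if a seat is free; otherwise the client may not run.
        if self.checked_out.get(product, 0) < self.entitlements.get(product, 0):
            self.checked_out[product] += 1
            return True
        return False

    def release(self, product):
        # Return a seat when the client exits.
        if self.checked_out.get(product, 0) > 0:
            self.checked_out[product] -= 1

server = LicenseServer({"cad-suite": 2})   # organization owns two seats
print(server.checkout("cad-suite"))        # True: first seat granted
print(server.checkout("cad-suite"))        # True: second seat granted
print(server.checkout("cad-suite"))        # False: all seats in use
server.release("cad-suite")                # a client exits, freeing a seat

A production license server would add authentication, a network protocol and token expiry, but this entitlement bookkeeping is the essential counting role described above.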
Software license server
[ "Technology" ]
242
[ "Information technology", "Information technology management" ]
3,054,735
https://en.wikipedia.org/wiki/Upset%20welding
Upset welding (UW), or resistance butt welding, is a welding technique that produces coalescence simultaneously over the entire area of abutting surfaces, or progressively along a joint, by the heat obtained from resistance to electric current through the area where those surfaces are in contact. Pressure is applied before heating is started and is maintained throughout the heating period. The equipment used for upset welding is very similar to that used for flash welding. It can be used only if the parts to be welded are equal in cross-sectional area. The abutting surfaces must be very carefully prepared to provide for proper heating. The difference from flash welding is that the parts are clamped in the welding machine and force is applied, bringing them tightly together. High-amperage current is then passed through the joint, which heats the abutting surfaces. When they have been heated to a suitable forging temperature, an upsetting force is applied and the current is stopped. The high temperature of the work at the abutting surfaces plus the high pressure causes coalescence to take place (a rough numerical illustration of the resistive heating follows at the end of this article). After cooling, the force is released and the weld is completed. References Welding Handbook, American Welding Society, 1950, pp. 438–442. See also Welding Flash welding Spot welding Percussion welding Gas tungsten arc welding Welding
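Since the joint heat in upset welding comes from electrical resistance, a rough feel for the energy involved can be had from the Joule heating relation Q = I²Rt. The short Python sketch below uses invented, illustrative values for current, contact resistance and heating time; they are not taken from the handbook cited above:

# Back-of-envelope Joule heating at the abutting surfaces: Q = I^2 * R * t
current_a = 5000.0               # welding current in amperes (assumed)
contact_resistance_ohm = 1e-4    # contact resistance in ohms (assumed)
time_s = 2.0                     # heating time in seconds (assumed)

heat_j = current_a ** 2 * contact_resistance_ohm * time_s
print(f"Heat delivered to the joint: {heat_j:.0f} J")  # 5000 J for these values

In practice the contact resistance falls as the surfaces heat and deform, so real process control is more involved than this constant-resistance estimate suggests.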
Upset welding
[ "Engineering" ]
253
[ "Welding", "Mechanical engineering" ]
3,054,853
https://en.wikipedia.org/wiki/Three-dimensional%20space
In geometry, a three-dimensional space (3D space, 3-space or, rarely, tri-dimensional space) is a mathematical space in which three values (coordinates) are required to determine the position of a point. Most commonly, it is the three-dimensional Euclidean space, that is, the Euclidean space of dimension three, which models physical space. More general three-dimensional spaces are called 3-manifolds. The term may also refer colloquially to a subset of space, a three-dimensional region (or 3D domain), a solid figure. Technically, a tuple of n numbers can be understood as the Cartesian coordinates of a location in an n-dimensional Euclidean space. The set of these n-tuples is commonly denoted Rⁿ and can be identified with the pair formed by an n-dimensional Euclidean space and a Cartesian coordinate system. When n = 3, this space is called the three-dimensional Euclidean space (or simply "Euclidean space" when the context is clear). In classical physics, it serves as a model of the physical universe, in which all known matter exists. When relativity theory is considered, it can be considered a local subspace of space-time. While this space remains the most compelling and useful way to model the world as it is experienced, it is only one example of a 3-manifold. In this classical example, when the three values refer to measurements in different directions (coordinates), any three directions can be chosen, provided that these directions do not lie in the same plane. Furthermore, if these directions are pairwise perpendicular, the three values are often labeled by the terms width/breadth, height/depth, and length. History Books XI to XIII of Euclid's Elements dealt with three-dimensional geometry. Book XI develops notions of orthogonality and parallelism of lines and planes, and defines solids including parallelepipeds, pyramids, prisms, spheres, octahedra, icosahedra and dodecahedra. Book XII develops notions of similarity of solids. Book XIII describes the construction of the five regular Platonic solids in a sphere. In the 17th century, three-dimensional space was described with Cartesian coordinates, with the advent of analytic geometry developed by René Descartes in his work La Géométrie and Pierre de Fermat in the manuscript Ad locos planos et solidos isagoge (Introduction to Plane and Solid Loci), which was unpublished during Fermat's lifetime. However, only Fermat's work dealt with three-dimensional space. In the 19th century, developments of the geometry of three-dimensional space came with William Rowan Hamilton's development of the quaternions. In fact, it was Hamilton who coined the terms scalar and vector, and they were first defined within his geometric framework for quaternions. Three-dimensional space could then be described by quaternions with vanishing scalar component, that is, quaternions of the form q = xi + yj + zk. While not explicitly studied by Hamilton, this indirectly introduced notions of basis, here given by the quaternion elements i, j, k, as well as the dot product and cross product, which correspond to (the negative of) the scalar part and the vector part of the product of two vector quaternions. It was not until Josiah Willard Gibbs that these two products were identified in their own right, and the modern notation for the dot and cross product was introduced in his classroom teaching notes, found also in the 1901 textbook Vector Analysis written by Edwin Bidwell Wilson based on Gibbs' lectures.
Also during the 19th century came developments in the abstract formalism of vector spaces, with the work of Hermann Grassmann and Giuseppe Peano, the latter of whom first gave the modern definition of vector spaces as an algebraic structure. In Euclidean geometry Coordinate systems In mathematics, analytic geometry (also called Cartesian geometry) describes every point in three-dimensional space by means of three coordinates. Three coordinate axes are given, each perpendicular to the other two at the origin, the point at which they cross. They are usually labeled x, y, and z. Relative to these axes, the position of any point in three-dimensional space is given by an ordered triple of real numbers, each number giving the distance of that point from the origin measured along the given axis, which is equal to the distance of that point from the plane determined by the other two axes. Other popular methods of describing the location of a point in three-dimensional space include cylindrical coordinates and spherical coordinates, though there are an infinite number of possible methods. For more, see Euclidean space. Lines and planes Two distinct points always determine a (straight) line. Three distinct points are either collinear or determine a unique plane. On the other hand, four distinct points can either be collinear, coplanar, or determine the entire space. Two distinct lines can either intersect, be parallel or be skew. Two parallel lines, or two intersecting lines, lie in a unique plane, so skew lines are lines that do not meet and do not lie in a common plane. Two distinct planes can either meet in a common line or are parallel (i.e., do not meet). Three distinct planes, no pair of which are parallel, can either meet in a common line, meet in a unique common point, or have no point in common. In the last case, the three lines of intersection of each pair of planes are mutually parallel. A line can lie in a given plane, intersect that plane in a unique point, or be parallel to the plane. In the last case, there will be lines in the plane that are parallel to the given line. A hyperplane is a subspace of one dimension less than the dimension of the full space. The hyperplanes of a three-dimensional space are the two-dimensional subspaces, that is, the planes. In terms of Cartesian coordinates, the points of a hyperplane satisfy a single linear equation, so planes in this 3-space are described by linear equations. A line can be described by a pair of independent linear equations, each representing a plane having this line as a common intersection. Varignon's theorem states that the midpoints of any quadrilateral in ℝ³ form a parallelogram, and hence are coplanar. Spheres and balls A sphere in 3-space (also called a 2-sphere because it is a 2-dimensional object) consists of the set of all points in 3-space at a fixed distance r from a central point P. The solid enclosed by the sphere is called a ball (or, more precisely, a 3-ball). The volume of the ball is given by V = (4/3)πr³, and the surface area of the sphere is A = 4πr². Another type of sphere arises from a 4-ball, whose three-dimensional surface is the 3-sphere: the set of points equidistant from the origin of the Euclidean space ℝ⁴. If a point has coordinates P(x, y, z, w), then x² + y² + z² + w² = 1 characterizes those points on the unit 3-sphere centered at the origin. This 3-sphere is an example of a 3-manifold: a space which 'looks locally' like 3-D space. In precise topological terms, each point of the 3-sphere has a neighborhood which is homeomorphic to an open subset of 3-D space.
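The volume and area formulas above, together with the defining equation of the unit 3-sphere, can be checked numerically. Below is a minimal Python sketch (an illustration of the stated formulas; the function names are mine, not from the article):

```python
import math

def ball_volume(r):
    # V = (4/3) * pi * r^3
    return 4.0 / 3.0 * math.pi * r**3

def sphere_area(r):
    # A = 4 * pi * r^2
    return 4.0 * math.pi * r**2

def on_unit_3_sphere(x, y, z, w, tol=1e-12):
    # A point of R^4 lies on the unit 3-sphere iff x^2 + y^2 + z^2 + w^2 = 1
    return abs(x*x + y*y + z*z + w*w - 1.0) < tol

print(ball_volume(2.0))                      # ~33.51
print(sphere_area(2.0))                      # ~50.27
print(on_unit_3_sphere(0.5, 0.5, 0.5, 0.5))  # True: 4 * 0.25 = 1
```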
Polytopes In three dimensions, there are nine regular polytopes: the five convex Platonic solids and the four nonconvex Kepler-Poinsot polyhedra. Surfaces of revolution A surface generated by revolving a plane curve about a fixed line in its plane as an axis is called a surface of revolution. The plane curve is called the generatrix of the surface. A section of the surface, made by intersecting the surface with a plane that is perpendicular (orthogonal) to the axis, is a circle. Simple examples occur when the generatrix is a line. If the generatrix line intersects the axis line, the surface of revolution is a right circular cone with vertex (apex) the point of intersection. However, if the generatrix and axis are parallel, then the surface of revolution is a circular cylinder. Quadric surfaces In analogy with the conic sections, the set of points whose Cartesian coordinates satisfy the general equation of the second degree, namely Ax² + By² + Cz² + Dxy + Exz + Fyz + Gx + Hy + Iz + J = 0, where A, B, …, J are real numbers and not all of A, B, C, D, E, F are zero, is called a quadric surface. There are six types of non-degenerate quadric surfaces: Ellipsoid Hyperboloid of one sheet Hyperboloid of two sheets Elliptic cone Elliptic paraboloid Hyperbolic paraboloid The degenerate quadric surfaces are the empty set, a single point, a single line, a single plane, a pair of planes or a quadratic cylinder (a surface consisting of a non-degenerate conic section in a plane π and all the lines of ℝ³ through that conic that are normal to π). Elliptic cones are sometimes considered to be degenerate quadric surfaces as well. Both the hyperboloid of one sheet and the hyperbolic paraboloid are ruled surfaces, meaning that they can be made up from a family of straight lines. In fact, each has two families of generating lines; the members of each family are disjoint, and each member of one family intersects, with just one exception, every member of the other family. Each family is called a regulus. In linear algebra Another way of viewing three-dimensional space is found in linear algebra, where the idea of independence is crucial. Space has three dimensions because the length of a box is independent of its width or breadth. In the technical language of linear algebra, space is three-dimensional because every point in space can be described by a linear combination of three independent vectors. Dot product, angle, and length A vector can be pictured as an arrow. The vector's magnitude is its length, and its direction is the direction the arrow points. A vector in ℝ³ can be represented by an ordered triple of real numbers. These numbers are called the components of the vector. The dot product of two vectors A = (A₁, A₂, A₃) and B = (B₁, B₂, B₃) is defined as: A · B = A₁B₁ + A₂B₂ + A₃B₃. The magnitude of a vector A is denoted by ‖A‖. The dot product of a vector A with itself is A · A = ‖A‖² = A₁² + A₂² + A₃², which gives ‖A‖ = √(A₁² + A₂² + A₃²), the formula for the Euclidean length of the vector. Without reference to the components of the vectors, the dot product of two non-zero Euclidean vectors A and B is given by A · B = ‖A‖ ‖B‖ cos θ, where θ is the angle between A and B.
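Before turning to the cross product, here is a short Python sketch of the dot product formulas just given: the component form, the induced Euclidean length, and the angle recovered from A · B = ‖A‖ ‖B‖ cos θ. Function names are illustrative only:

```python
import math

def dot(a, b):
    # A . B = A1*B1 + A2*B2 + A3*B3
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    # ||A|| = sqrt(A . A)
    return math.sqrt(dot(a, a))

def angle(a, b):
    # A . B = ||A|| ||B|| cos(theta)  =>  theta = arccos(A.B / (||A|| ||B||))
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

A = (1.0, 2.0, 2.0)
B = (3.0, 0.0, 4.0)
print(dot(A, B))                   # 11.0
print(norm(A))                     # 3.0
print(math.degrees(angle(A, B)))   # ~42.8 degrees
```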
Cross product The cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. The cross product A × B of the vectors A and B is a vector that is perpendicular to both and therefore normal to the plane containing them. It has many applications in mathematics, physics, and engineering. In function language, the cross product is a function ℝ³ × ℝ³ → ℝ³. The components of the cross product are A × B = (A₂B₃ − A₃B₂, A₃B₁ − A₁B₃, A₁B₂ − A₂B₁), which can also be written, using Einstein summation convention, as (A × B)ᵢ = εᵢⱼₖ Aⱼ Bₖ, where εᵢⱼₖ is the Levi-Civita symbol. It has the property that A × B = −B × A. Its magnitude is related to the angle θ between A and B by the identity ‖A × B‖ = ‖A‖ ‖B‖ |sin θ|. The space ℝ³ together with the cross product forms an algebra over a field, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Specifically, ℝ³ together with the product is isomorphic to the Lie algebra of three-dimensional rotations, denoted so(3). In order to satisfy the axioms of a Lie algebra, instead of associativity the cross product satisfies the Jacobi identity: for any three vectors A, B and C, A × (B × C) + B × (C × A) + C × (A × B) = 0. One can in n dimensions take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions.
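A companion Python sketch for the cross product (illustrative only, with `dot` repeated from the previous sketch) computes the component formula and numerically confirms perpendicularity, antisymmetry, and the Jacobi identity:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # A x B = (A2*B3 - A3*B2, A3*B1 - A1*B3, A1*B2 - A2*B1)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

A, B, C = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (2.0, 3.0, 4.0)
AxB = cross(A, B)
print(AxB)                       # (0.0, 0.0, 1.0)
print(dot(A, AxB), dot(B, AxB))  # 0.0 0.0: A x B is perpendicular to A and B

# Antisymmetry: B x A = -(A x B)
print(cross(B, A))               # (0.0, 0.0, -1.0)

# Jacobi identity: A x (B x C) + B x (C x A) + C x (A x B) = (0, 0, 0)
def add3(u, v, w):
    return tuple(x + y + z for x, y, z in zip(u, v, w))

print(add3(cross(A, cross(B, C)),
           cross(B, cross(C, A)),
           cross(C, cross(A, B))))  # (0.0, 0.0, 0.0)
```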
Abstract description It can be useful to describe three-dimensional space as a three-dimensional vector space V over the real numbers. This differs from ℝ³ in a subtle way. By definition, there exists a basis B for V, and this corresponds to an isomorphism between V and ℝ³. However, there is no 'preferred' or 'canonical' basis for V. On the other hand, there is a preferred basis for ℝ³, which is due to its description as a Cartesian product of copies of ℝ, that is, ℝ³ = ℝ × ℝ × ℝ. This allows the definition of canonical projections πᵢ : ℝ³ → ℝ, where 1 ≤ i ≤ 3. For example, π₁(x₁, x₂, x₃) = x₁. This then allows the definition of the standard basis {E₁, E₂, E₃}, defined by πⱼ(Eᵢ) = δᵢⱼ, where δᵢⱼ is the Kronecker delta. Written out in full, the standard basis is E₁ = (1, 0, 0), E₂ = (0, 1, 0), E₃ = (0, 0, 1). Therefore ℝ³ can be viewed as the abstract vector space, together with the additional structure of a choice of basis. Conversely, V can be obtained by starting with ℝ³ and 'forgetting' the Cartesian product structure, or equivalently the standard choice of basis. As opposed to a general vector space V, the space ℝ³ is sometimes referred to as a coordinate space. Physically, it is conceptually desirable to use the abstract formalism in order to assume as little structure as possible if it is not given by the parameters of a particular problem. For example, in a problem with rotational symmetry, working with the more concrete description ℝ³ assumes a choice of basis, corresponding to a set of axes. But in rotational symmetry, there is no reason why one set of axes should be preferred to, say, the same set of axes which has been rotated arbitrarily. Stated another way, a preferred choice of axes breaks the rotational symmetry of physical space. Computationally, it is necessary to work with the more concrete description ℝ³ in order to do concrete computations. Affine description A more abstract description still is to model physical space as a three-dimensional affine space over the real numbers. This is unique up to affine isomorphism. It is sometimes referred to as three-dimensional Euclidean space. Just as the vector space description came from 'forgetting the preferred basis' of ℝ³, the affine space description comes from 'forgetting the origin' of the vector space. Euclidean spaces are sometimes called Euclidean affine spaces for distinguishing them from Euclidean vector spaces. This is physically appealing as it makes the translation invariance of physical space manifest. A preferred origin breaks the translational invariance. Inner product space The above discussion does not involve the dot product. The dot product is an example of an inner product. Physical space can be modelled as a vector space which additionally has the structure of an inner product. The inner product defines notions of length and angle (and therefore in particular the notion of orthogonality). For any inner product, there exist bases under which the inner product agrees with the dot product, but again, there are many different possible bases, none of which are preferred. They differ from one another by a rotation, an element of the group of rotations SO(3). In calculus Gradient, divergence and curl In a rectangular coordinate system, the gradient of a (differentiable) function f : ℝ³ → ℝ is given by ∇f = (∂f/∂x) i + (∂f/∂y) j + (∂f/∂z) k, and in index notation is written (∇f)ᵢ = ∂ᵢ f. The divergence of a (differentiable) vector field F = U i + V j + W k, that is, a function F : ℝ³ → ℝ³, is equal to the scalar-valued function ∇ · F = ∂U/∂x + ∂V/∂y + ∂W/∂z. In index notation, with Einstein summation convention, this is ∇ · F = ∂ᵢ Fᵢ. Expanded in Cartesian coordinates (see Del in cylindrical and spherical coordinates for spherical and cylindrical coordinate representations), the curl ∇ × F is, for F composed of [Fx, Fy, Fz]: ∇ × F = (∂Fz/∂y − ∂Fy/∂z) i + (∂Fx/∂z − ∂Fz/∂x) j + (∂Fy/∂x − ∂Fx/∂y) k, where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. In index notation, with Einstein summation convention, this is (∇ × F)ᵢ = εᵢⱼₖ ∂ⱼ Fₖ, where εᵢⱼₖ is the totally antisymmetric symbol, the Levi-Civita symbol. Line, surface, and volume integrals For some scalar field f : U ⊆ ℝⁿ → ℝ, the line integral along a piecewise smooth curve C ⊂ U is defined as ∫_C f ds = ∫ₐᵇ f(r(t)) |r′(t)| dt, where r : [a, b] → C is an arbitrary bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C and a < b. For a vector field F : U ⊆ ℝⁿ → ℝⁿ, the line integral along a piecewise smooth curve C ⊂ U, in the direction of r, is defined as ∫_C F · dr = ∫ₐᵇ F(r(t)) · r′(t) dt, where · is the dot product and r : [a, b] → C is a bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C. A surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analog of the line integral. To find an explicit formula for the surface integral, we need to parameterize the surface of interest, S, by considering a system of curvilinear coordinates on S, like the latitude and longitude on a sphere. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. Then, the surface integral is given by ∬_S f dS = ∬_T f(x(s, t)) |∂x/∂s × ∂x/∂t| ds dt, where the expression between bars on the right-hand side is the magnitude of the cross product of the partial derivatives of x(s, t), and is known as the surface element. Given a vector field v on S, that is a function that assigns to each x in S a vector v(x), the surface integral can be defined component-wise according to the definition of the surface integral of a scalar field; the result is a vector. A volume integral is an integral over a three-dimensional domain or region. When the integrand is trivial (unity), the volume integral is simply the region's volume. It can also mean a triple integral within a region D in ℝ³ of a function f(x, y, z), and is usually written as ∭_D f(x, y, z) dx dy dz.
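To make the coordinate formulas for gradient, divergence, and curl concrete, the following Python sketch approximates them with central finite differences. The sample fields, step size, and function names are arbitrary choices of mine for illustration:

```python
h = 1e-5  # step size for central differences

def partial(f, i, p):
    # central-difference approximation of the i-th partial derivative of f at p
    q_plus = list(p); q_plus[i] += h
    q_minus = list(p); q_minus[i] -= h
    return (f(q_plus) - f(q_minus)) / (2 * h)

def grad(f, p):
    # grad f = (df/dx, df/dy, df/dz)
    return tuple(partial(f, i, p) for i in range(3))

def div(F, p):
    # div F = dU/dx + dV/dy + dW/dz
    return sum(partial(lambda q, i=i: F(q)[i], i, p) for i in range(3))

def curl(F, p):
    # curl F = (dFz/dy - dFy/dz, dFx/dz - dFz/dx, dFy/dx - dFx/dy)
    d = lambda comp, i: partial(lambda q: F(q)[comp], i, p)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

f = lambda p: p[0]**2 * p[1] + p[2]              # scalar field x^2*y + z
F = lambda p: (p[1]*p[2], p[0]*p[2], p[0]*p[1])  # vector field (yz, xz, xy)

p = (1.0, 2.0, 3.0)
print(grad(f, p))  # ~ (2xy, x^2, 1) = (4, 1, 1)
print(div(F, p))   # ~ 0 for this particular F
print(curl(F, p))  # ~ (x - x, y - y, z - z) = (0, 0, 0): F is a gradient field
```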
Fundamental theorem of line integrals The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve. Let φ : U ⊆ ℝⁿ → ℝ. Then φ(q) − φ(p) = ∫_{γ[p,q]} ∇φ(r) · dr, where γ[p, q] is any piecewise smooth curve in U from p to q. Stokes' theorem Stokes' theorem relates the surface integral of the curl of a vector field F over a surface Σ in Euclidean three-space to the line integral of the vector field over its boundary ∂Σ: ∬_Σ (∇ × F) · dΣ = ∮_{∂Σ} F · dr. Divergence theorem Suppose V is a subset of ℝⁿ (in the case of n = 3, V represents a volume in 3D space) which is compact and has a piecewise smooth boundary S (also indicated with ∂V). If F is a continuously differentiable vector field defined on a neighborhood of V, then the divergence theorem says: ∭_V (∇ · F) dV = ∬_{∂V} (F · n) dS. The left side is a volume integral over the volume V; the right side is the surface integral over the boundary of the volume V. The closed manifold ∂V is quite generally the boundary of V oriented by outward-pointing normals, and n is the outward-pointing unit normal field of the boundary ∂V. (dS may be used as a shorthand for n dS.) In topology Three-dimensional space has a number of topological properties that distinguish it from spaces of other dimension numbers. For example, at least three dimensions are required to tie a knot in a piece of string. In differential geometry the generic three-dimensional spaces are 3-manifolds, which locally resemble ℝ³. In finite geometry Many ideas of dimension can be tested with finite geometry. The simplest instance is PG(3,2), which has Fano planes as its 2-dimensional subspaces. It is an instance of Galois geometry, a study of projective geometry using finite fields. Thus, for any Galois field GF(q), there is a projective space PG(3,q) of three dimensions. For example, any three skew lines in PG(3,q) are contained in exactly one regulus. See also 3D rotation Rotation formalisms in three dimensions Dimensional analysis Distance from a point to a plane Four-dimensional space Three-dimensional graph Solid geometry Terms of orientation Notes References Arfken, George B. and Hans J. Weber. Mathematical Methods For Physicists, Academic Press; 6th edition (June 21, 2005). External links Elementary Linear Algebra - Chapter 8: Three-dimensional Geometry Keith Matthews from University of Queensland, 1991 Analytic geometry Multi-dimensional geometry Three-dimensional coordinate systems 3 (number) Space
Three-dimensional space
[ "Physics", "Mathematics" ]
4,055
[ "Spacetime", "Space", "Geometry", "Euclidean solid geometry" ]
3,054,856
https://en.wikipedia.org/wiki/Detonation%20velocity
Explosive velocity, also known as detonation velocity or velocity of detonation (VoD), is the velocity at which the shock wave front travels through a detonated explosive. Explosive velocities are always higher than the local speed of sound in the material. If the explosive is confined before detonation, such as in an artillery shell, the force produced is focused on a much smaller area, and the pressure is significantly intensified. This results in an explosive velocity that is higher than if the explosive had been detonated in open air. Unconfined velocities are often approximately 70 to 80 percent of confined velocities. Explosive velocity is increased with smaller particle size (i.e., increased spatial density), increased charge diameter, and increased confinement (i.e., higher pressure). Typical detonation velocities for organic dust mixtures range from 1400 to 1650 m/s. Gas explosions can either deflagrate or detonate based on confinement; detonation velocities are generally around 1700 m/s but can be as high as 3000 m/s. Solid explosives often have detonation velocities ranging from beyond 4000 m/s up to 10300 m/s. Detonation velocity can be measured by the Dautriche method. In essence, this method relies on the time lag between the initiation of the two ends of a detonating fuse of known detonation velocity, inserted radially at two points into the explosive charge at a known distance apart. When the explosive charge is detonated, it triggers first one end of the fuse, then the second end. This causes two detonation fronts travelling in opposite directions along the length of the detonating fuse, which meet at a specific point offset from the centre of the fuse. Knowing the distance along the explosive charge between the two ends of the fuse, the position of the collision of the detonation fronts, and the detonation velocity of the detonating fuse, the detonation velocity of the explosive can be calculated; it is usually expressed in km/s (a worked numerical sketch is given at the end of this article). In other words, the VoD is the velocity or rate of propagation of the chemical decomposition reaction; for high explosives it is generally above 1000 m/s. In cases where a material has not received dedicated testing, rough predictions based upon gas behavior theory are sometimes used (see Chapman–Jouguet condition). The detonation velocity can be effectively determined by the Chapman–Jouguet (CJ) state, which represents the minimum sustainable steady detonation speed. See also Table of explosive detonation velocities Brisance Burn rate Detonation Explosion Deflagration Flame speed Gurney equations References Explosives engineering
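To make the Dautriche calculation above concrete: if the two fuse insertion points are a distance L apart along the charge, the fuse has known velocity v_fuse, and the fronts meet a distance d from the fuse midpoint, then the initiation lag is Δt = 2d/v_fuse, so the explosive's detonation velocity is L/Δt = L·v_fuse/(2d). A minimal Python sketch with hypothetical numbers (not measured data):

```python
def detonation_velocity(L, d, v_fuse):
    """Dautriche method.

    L      -- distance between the two fuse insertion points along the charge (m)
    d      -- offset of the fronts' meeting point from the fuse midpoint (m)
    v_fuse -- known detonation velocity of the detonating fuse (m/s)
    """
    dt = 2.0 * d / v_fuse  # time lag between initiation of the two fuse ends
    return L / dt          # equivalent to L * v_fuse / (2 * d)

# Hypothetical measurement: insertion points 0.30 m apart, fuse VoD 6500 m/s,
# witness mark found 0.14 m from the fuse midpoint.
v = detonation_velocity(L=0.30, d=0.14, v_fuse=6500.0)
print(f"{v:.0f} m/s = {v/1000:.2f} km/s")  # 6964 m/s = 6.96 km/s
```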
Detonation velocity
[ "Engineering" ]
560
[ "Explosives engineering" ]
3,055,133
https://en.wikipedia.org/wiki/Serous%20gland
Serous glands secrete serous fluid. They contain serous acini, groupings of serous cells that secrete serous fluid, which is isotonic with blood plasma and contains enzymes such as alpha-amylase. Serous glands are most common in the parotid gland and lacrimal gland but are also present in the submandibular gland and, to a far lesser extent, the sublingual gland. See also Mucous gland References External links - "Tongue: Mucous and Serous Glands" - "Lingual Glands" - "Epithelial Tissue, Surface Specializations, and Glands multicellular; pure serous gland" Overview at siumed.edu Glands
Serous gland
[ "Biology" ]
144
[ "Exocrine system", "Organ systems" ]
3,055,177
https://en.wikipedia.org/wiki/Timothy%20Williamson
Timothy Williamson (born 6 August 1955) is a British philosopher whose main research interests are in philosophical logic, philosophy of language, epistemology and metaphysics. He is the former Wykeham Professor of Logic at the University of Oxford, and a fellow of New College, Oxford. Education and career Born on 6 August 1955, Williamson's education began at Leighton Park School and continued at Henley Grammar School (now the Henley College). He then went to Balliol College, Oxford University. He graduated in 1976 with a Bachelor of Arts degree with first-class honours in mathematics and philosophy, and in 1980 with a doctorate in philosophy (DPhil) for a thesis entitled The Concept of Approximation to the Truth. Williamson was Professor of Logic and Metaphysics at the University of Edinburgh (1995–2000), fellow and lecturer in philosophy at University College, Oxford (1988–1994), and lecturer in philosophy at Trinity College, Dublin (1980–1988). He took up the Wykeham Professorship in 2000 and retired in 2023, when he took up a Senior Research and Teaching Fellowship in Philosophy. He has been visiting professor at Yale University, Princeton University, MIT, the University of Michigan, and the Chinese University of Hong Kong. He was president of the Aristotelian Society from 2004 to 2005. He is a Fellow of the British Academy (FBA), a member of the Norwegian Academy of Science and Letters, a Fellow of the Royal Society of Edinburgh (FRSE), and a Foreign Honorary Fellow of the American Academy of Arts & Sciences. Since 2022 he has been a visiting professor at the Università della Svizzera Italiana. Philosophical work Williamson has contributed to analytic philosophy of language, logic, metaphysics and epistemology. On vagueness, he holds a position known as epistemicism, which states that every seemingly vague predicate (like "bald" or "thin") actually has a sharp cutoff, which is impossible for us to know. For instance, there is some number of hairs such that anyone with that number is bald, and anyone with even one more hair is not. In actuality, this condition will be spelled out only partly in terms of numbers of hairs, but whatever measures are relevant will have some sharp cutoff. This solution to the difficult Sorites paradox was initially considered an astonishing and unacceptable consequence, but it has become a relatively mainstream view since his defence of it. Williamson is fond of using the statement "no one knows whether I am thin" to illustrate his view. In epistemology, Williamson suggests that the concept of knowledge is unanalysable. This went against the common trend in philosophical literature up to that point, which was to argue that knowledge could be analysed into constituent concepts. (Typically this would be justified true belief plus an extra factor.) He agrees that knowledge entails justification, truth and belief, but argues that it is conceptually primitive. He accounts for the importance of belief by discussing its connections with knowledge, but avoids the disjunctivist position of saying that belief can be analysed as the disjunction of knowledge with some distinct, non-factive mental state. Williamson also argues against the traditional distinction between knowing-how and knowing-that. He says that knowledge-how is a type of knowledge-that, and argues that knowledge-how need not involve a corresponding ability. As an example, he gives a ski instructor who knows how to perform a complex move without having the ability to do it himself. 
In metaphysics, Williamson defends necessitism, according to which necessarily everything is necessarily something, in short, that everything exists of necessity. Necessitism is associated with the Barcan formula: it is possible for something to have a property only if there is something which has that property. Thus, since it is possible for Wittgenstein to have had a child, there is something which is a possible child of Wittgenstein. However, Williamson has also developed an ontology of bare possibilia which he argues alleviates the worst consequences of necessitism and of the Barcan formula. It's not that Wittgenstein's possible child is concrete; rather, it is contingently non-concrete. Publications Identity and Discrimination, Oxford: Blackwell, 1990. Vagueness, London: Routledge, 1994. Knowledge and Its Limits, Oxford: Oxford University Press, 2000. The Philosophy of Philosophy, Oxford: Blackwell, 2007. The Philosophy of Philosophy (2nd ed.), Hoboken: Wiley Blackwell, 2022. Modal Logic as Metaphysics, Oxford: Oxford University Press, 2013. Tetralogue: I'm Right, You're Wrong, Oxford: Oxford University Press, 2015. Doing Philosophy: From Common Curiosity to Logical Reasoning, Oxford University Press, 2017. Suppose and Tell: The Semantics and Heuristics of Conditionals, Oxford University Press, 2020. Debating the A Priori (with Paul Boghossian), Oxford University Press, 2020. Overfitting and Heuristics in Philosophy, Oxford University Press, 2024. Williamson has also published more than 120 articles in peer-reviewed scholarly journals. References External links Profile at University of Oxford An in-depth autobiographical interview with Timothy Williamson Interview at 3:AM Magazine Università della Svizzera Italiana professors 1955 births 20th-century British philosophers 20th-century British essayists 21st-century British philosophers 21st-century British essayists Academics of the University of Edinburgh Academics of Trinity College Dublin Analytic philosophers Aristotelian philosophers Atheist philosophers British atheists British male essayists British epistemologists Fellows of New College, Oxford Fellows of the British Academy Fellows of the Royal Society of Edinburgh Fellows of University College, Oxford Living people Members of the Norwegian Academy of Science and Letters British metaphysicians Ontologists People from Uppsala British philosophers of language British philosophers of logic Philosophers of mathematics British philosophers of mind British philosophy academics Philosophy writers Presidents of the Aristotelian Society British social philosophers Wykeham Professors of Logic Henry Wilde Prize winners
Timothy Williamson
[ "Mathematics" ]
1,237
[]
3,055,402
https://en.wikipedia.org/wiki/Ironmaster
An ironmaster is the manager, and usually owner, of a forge or blast furnace for the processing of iron. It is a term mainly associated with the period of the Industrial Revolution, especially in Great Britain. The ironmaster was usually a large-scale entrepreneur and thus an important member of a community. He would have a large country house or mansion as his residence. The organization of operations surrounding the smelting, refining, and casting of iron was labour-intensive, and so there would be numerous workers reliant on the furnace works. There were ironmasters (possibly not called such) from the 17th century onward, but they became more prominent with the great expansion in the British iron industry during the Industrial Revolution. 17th-century ironmasters (examples) An early ironmaster was John Winter (about 1600–1676), who owned substantial holdings in the Forest of Dean. During the English Civil War he cast cannons for Charles I. Following the Restoration, Winter developed his interest in the iron industry, and experimented with a new type of coking oven. This was a precursor to the later work of Abraham Darby I, who successfully used coke to smelt iron. 18th-century ironmasters (examples) Abraham Darby Three successive generations of the same family, all bearing the name Abraham Darby, are renowned for their contributions to the development of the English iron industry. Their works at Coalbrookdale in Shropshire nurtured the start of improvements in metallurgy that allowed large-scale production of the iron that made the development of steam engines and railways possible, although their most notable innovation was The Iron Bridge. John Wilkinson One of the best-known ironmasters of the early part of the industrial revolution was John Wilkinson (1728–1808), who was considered to have "iron madness", extending even to making cast iron coffins. Wilkinson's patented method for boring iron cylinders was first used to create cannons, but later provided the precision needed to create James Watt's first steam engines. Samuel Van Leer Samuel Van Leer was a well-known ironmaster and a United States Army officer during the American Revolutionary War. He began his military career in 1775 alongside his neighbor General Anthony Wayne. His furnace, Reading Furnace in Pennsylvania, supplied cannon and cannonballs for the Continental Army. Van Leer's furnace was a center of colonial ironmaking and is associated with the introduction of the Franklin Stove, and with the retreat of George Washington's army following its defeat at the Battle of Brandywine, when the army came to the furnace for musket repairs. The location is listed as a temporary George Washington headquarters. Van Leer's children all joined the iron business as well. 19th-century ironmasters (examples) Lowthian Bell Lowthian Bell (1816–1904) was, like Abraham Darby, the forceful patriarch of an ironmaking dynasty. Both his son Hugh Bell and his grandson Maurice Bell were directors of the Bell iron and steel company. His father, Thomas Bell, was a founder of Losh, Wilson and Bell, an iron and alkali company. The firm had works at Walker, near Newcastle upon Tyne, and at Port Clarence, Middlesbrough, contributing largely to the growth of those towns and of the economy of the northeast of England. Bell accumulated a large fortune, with mansions including Washington New Hall, Rounton Grange near Northallerton, and the mediaeval Mount Grace Priory near Osmotherley. 
Henry Bolckow and John Vaughan Henry Bolckow (1806–1878) and John Vaughan (1799–1868) were lifelong business partners, friends, and brothers-in-law. They established what became the largest of all Victorian era iron and steel companies, Bolckow Vaughan, in Middlesbrough. Bolckow brought financial acumen, and Vaughan brought ironmaking and engineering expertise. The two men trusted each other implicitly and "never interfered in the slightest degree with each other's work. Mr. Bolckow had the entire management of the financial department, while Mr. Vaughan as worthily controlled the practical work of the establishment." At its peak the firm was the largest steel producer in Britain, possibly in the world. Andrew Handyside Andrew Handyside (1805–1887) was born in Edinburgh and set up works in Derby where he made ornamental items, bridges and pillar boxes, many of which survive today. Samuel Richards Samuel Richards (1769–1842) was born in Philadelphia to William Richards, the manager of the Batsto Iron Works beginning in 1784. Samuel Richards was heavily involved with the early 19th century iron industry in southern New Jersey. His most notable enterprise was the management of the iron works at Atsion, New Jersey from 1824 until his death in 1842. He was also involved with Martha Furnace, and Weymouth Furnace. See also Pig iron Wrought iron References Industrial Revolution History of metallurgy Metalworking occupations
Ironmaster
[ "Chemistry", "Materials_science" ]
983
[ "Metallurgists", "Metallurgy", "History of metallurgy", "Ironmasters" ]
3,055,861
https://en.wikipedia.org/wiki/Wagner%E2%80%93Meerwein%20rearrangement
A Wagner–Meerwein rearrangement is a class of carbocation 1,2-rearrangement reactions in which a hydrogen, alkyl or aryl group migrates from one carbon to a neighboring carbon. They can be described as cationic [1,2]-sigmatropic rearrangements, proceeding suprafacially and with stereochemical retention. As such, a Wagner–Meerwein shift is a thermally allowed pericyclic process with the Woodward-Hoffmann symbol [ω0s + σ2s]. They are usually facile, and in many cases, they can take place at temperatures as low as –120 °C. The reaction is named after the Russian chemist Yegor Yegorovich Vagner (who was of German descent and published in German journals as Georg Wagner) and the German chemist Hans Meerwein. Several reviews have been published. The rearrangement was first discovered in bicyclic terpenes, for example the conversion of isoborneol to camphene. The history of the rearrangement shows that many scientists were puzzled by this and related reactions, which are closely tied to the discovery of carbocations as intermediates. In a simple demonstration, reaction of 1,4-dimethoxybenzene with either 2-methyl-2-butanol or 3-methyl-2-butanol in sulfuric acid and acetic acid yields the same disubstituted product, the latter via a hydride shift of the cationic intermediate. More recent work applies this type of skeletal rearrangement to the synthesis of bridged azaheterocycles, and plausible mechanisms have been proposed for the Wagner–Meerwein rearrangement of diepoxyisoindoles. The related Nametkin rearrangement, named after Sergey Namyotkin, involves the rearrangement of methyl groups in certain terpenes. In some cases the reaction type is also called a retropinacol rearrangement (see pinacol rearrangement). References Rearrangement reactions Name reactions
Wagner–Meerwein rearrangement
[ "Chemistry" ]
438
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
3,055,918
https://en.wikipedia.org/wiki/PBR322
pBR322 is a plasmid and was one of the first widely used E. coli cloning vectors. Created in 1977 in the laboratory of Herbert Boyer at the University of California, San Francisco, it was named after Francisco Bolivar Zapata, a postdoctoral researcher, and Raymond L. Rodriguez. The p stands for "plasmid," and BR for "Bolivar" and "Rodriguez." pBR322 is 4361 base pairs in length and has two antibiotic resistance genes – the gene bla encoding the ampicillin resistance (AmpR) protein, and the gene tetA encoding the tetracycline resistance (TetR) protein. It contains the origin of replication of pMB1, and the rop gene, which encodes a restrictor of plasmid copy number. The plasmid has unique restriction sites for more than forty restriction enzymes. Eleven of these forty sites lie within the TetR gene. There are two sites for restriction enzymes HindIII and ClaI within the promoter of the TetR gene. There are six key restriction sites inside the AmpR gene. The sources of these antibiotic resistance genes are pSC101 (for tetracycline resistance) and RSF2124 (for ampicillin resistance). The circular sequence is numbered such that 0 is the middle of the unique EcoRI site and the count increases through the TetR gene (a small illustration of locating such sites appears at the end of this article). If, for instance, the ampicillin resistance is to be eliminated, foreign DNA can be inserted at the PstI site, which lies within the AmpR gene; the disrupted gene no longer confers resistance, so the plasmid becomes sensitive to ampicillin. The same process of insertional inactivation can be applied to the tetracycline resistance gene. The AmpR gene is penicillin beta-lactamase. Promoters P1 and P3 are for the beta-lactamase gene. P3 is the natural promoter, and P1 is artificially created by the ligation of two different DNA fragments to create pBR322. P2 is in the same region as P1, but it is on the opposite strand and initiates transcription in the direction of the tetracycline resistance gene. Background Early cloning experiments may be conducted using natural plasmids such as the ColE1 and pSC101. Each of these plasmids has its advantages and disadvantages. For example, the ColE1 plasmid and its derivatives have the advantage of higher copy number and allow for chloramphenicol amplification of plasmid to produce a high yield of plasmid; however, screening for immunity to colicin E1 is not technically simple. The plasmid pSC101, a natural plasmid from Salmonella panama, confers tetracycline resistance, which allows for a simpler screening process with antibiotic selection, but it is a low copy number plasmid which does not give a high yield of plasmid. Another plasmid, RSF 2124, which is a derivative of ColE1, confers ampicillin resistance but is larger. Many other plasmids were artificially constructed to create one that would be ideal for cloning purposes, and pBR322 was found to be most versatile by many and was therefore the one most popularly used. It has two antibiotic resistance genes, as selectable markers, and a number of convenient unique restriction sites that made it suitable as a cloning vector. The plasmid was constructed with genetic material from 3 main sources – the tetracycline resistance gene of pSC101, the ampicillin resistance gene of RSF 2124, and the replication elements of pMB1, a close relative of the ColE1 plasmid. A large number of other plasmids based on pBR322 have since been constructed, specifically designed for a wide variety of purposes. Examples include the pUC series of plasmids. 
Most expression vectors for extrachromosomal protein expression and shuttle vectors contain the pBR322 origin of replication, and fragments of pBR322 are very popular in the construction of intraspecies shuttle or binary vectors and vectors for targeted integration and excision of DNA from chromosomes. DNA sequence The complete 4361 bp nucleotide sequence of pBR322 has been determined. See also List of restriction enzyme cutting sites References External links pBR322 Map pBR322 Features pBR322 DNA DNA mobile genetic elements Molecular biology techniques Plasmids
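As a computational aside to the restriction-map discussion above: locating recognition sites on a circular plasmid sequence is plain string matching with wrap-around. The Python sketch below uses the standard EcoRI (GAATTC) and PstI (CTGCAG) recognition sequences, but the 60 bp input is a made-up placeholder, not the real pBR322 sequence:

```python
ENZYMES = {"EcoRI": "GAATTC", "PstI": "CTGCAG"}  # standard recognition sites

def find_sites(seq, site):
    """Return 0-based start positions of a recognition site on a circular sequence."""
    seq = seq.upper()
    extended = seq + seq[:len(site) - 1]  # wrap around the origin of the circle
    return [i for i in range(len(seq)) if extended.startswith(site, i)]

# Placeholder sequence for illustration only -- NOT the real pBR322 sequence.
plasmid = "GAATTCGCATGCATCGATCCTGCAGGTACCGAGCTCGAATTCACTGGCCGTCGTTTTACA"

for name, site in ENZYMES.items():
    hits = find_sites(plasmid, site)
    unique = "unique" if len(hits) == 1 else f"{len(hits)} sites"
    print(f"{name}: positions {hits} ({unique})")
# EcoRI: positions [0, 36] (2 sites)
# PstI: positions [19] (unique)
```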
PBR322
[ "Chemistry", "Biology" ]
922
[ "Plasmids", "Molecular biology techniques", "Bacteria", "Molecular biology" ]
3,055,924
https://en.wikipedia.org/wiki/Return%20on%20capital%20employed
Return on capital employed (ROCE) is an accounting ratio used in finance, valuation, and accounting. It is a useful measure for comparing the relative profitability of companies after taking into account the amount of capital used. The formula Expressed as a percentage, ROCE = EBIT / Capital Employed, where EBIT is earnings before interest and tax (a worked example is given at the end of this article). It is similar to return on assets (ROA), but takes into account sources of financing. Capital employed In the denominator we have net assets or capital employed instead of total assets (as is the case with return on assets). Capital employed has many definitions. In general it is the capital investment necessary for a business to function. It is commonly represented as total assets less current liabilities (or fixed assets plus working capital requirement). ROCE uses the reported (period end) capital numbers; if one instead uses the average of the opening and closing capital for the period, one obtains return on average capital employed (ROACE). Application ROCE is used to demonstrate the value the business gains from its assets and liabilities. Companies create value whenever they are able to generate returns on capital above the weighted average cost of capital (WACC). A business which owns much land will have a smaller ROCE compared to a business which owns little land but makes the same profit. It can be used to show how much a business gains from its assets, or how much it loses from its liabilities. Drawbacks The main drawback of ROCE is that it measures return against the book value of assets in the business. As these are depreciated the ROCE will increase even though cash flow has remained the same. Thus, older businesses with depreciated assets will tend to have higher ROCE than newer, possibly better businesses. In addition, while cash flow is affected by inflation, the book value of assets is not. Consequently, revenues increase with inflation while capital employed generally does not (as the book value of assets is not affected by inflation). See also Cash flow return on investment (CFROI) Cash surplus value added (CsVA) index Economic value added (EVA) Return on assets (ROA) Return on equity (ROE) Return on invested capital (ROIC) References Financial ratios Yield (finance)
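A worked example of the formula above, in Python; capital employed is taken as total assets minus current liabilities, and all figures are hypothetical:

```python
def roce(ebit, total_assets, current_liabilities):
    """Return on capital employed, as a percentage.

    Capital employed is taken here as total assets minus current liabilities.
    """
    capital_employed = total_assets - current_liabilities
    return 100.0 * ebit / capital_employed

def roace(ebit, opening_capital, closing_capital):
    """Return on *average* capital employed (ROACE), as a percentage."""
    return 100.0 * ebit / ((opening_capital + closing_capital) / 2.0)

# Hypothetical company: EBIT 120, total assets 1,000, current liabilities 200.
print(f"ROCE  = {roce(120, 1000, 200):.1f}%")  # 120 / 800 -> 15.0%
print(f"ROACE = {roace(120, 700, 800):.1f}%")  # 120 / 750 -> 16.0%
```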
Return on capital employed
[ "Mathematics" ]
446
[ "Financial ratios", "Quantity", "Metrics" ]
16,129,890
https://en.wikipedia.org/wiki/Shoin
A shoin is a type of audience hall in Japanese architecture that was developed during the Muromachi period. The term originally meant a study and a place for lectures on the sūtra within a temple, but later it came to mean just a drawing room or study. The shoin-zukuri style takes its name from this room. In a shoin-zukuri building, the shoin is the zashiki, a tatami-room dedicated to the reception of guests. The emerging architecture of the Muromachi period was subsequently influenced by the increasing use and appearance of shoin. One of the most noticeable changes in architecture to arise from the shoin came from the practice of lining their floors with tatami mats. Since tatami mats have a standardized size, the floor plans for shoin rooms had to be developed around the proportions of the tatami mat; this in turn affected the proportions of doors, the height of rooms, and other aspects of the structure. Before the shoin popularized the practice of lining floors with tatami mats, it had been standard to bring out only a single tatami mat for the highest-ranking person in the room to sit on. The architecture surrounding and influenced by the shoin quickly developed many other distinguishing features. Since the guests sat on the floor instead of on furniture, they were positioned at a lower vantage point than their Chinese counterparts, who were accustomed to using furniture. This lower vantage point generated such developments as the suspended ceiling, which functioned to make the room feel less expansive and also meant that the ceiling rafters were no longer visible, as they were in China. The new suspended ceilings also allowed for more elaborate decoration, resulting in many highly ornate suspended ceilings in addition to the much simpler ones. Another characteristic development to arise from the lower vantage point was the pairing of the tokonoma and chigaidana. The tokonoma was an elevated recess built into the wall to create a space for displaying, at a comfortable eye level, the Chinese art which was popular at the time. The chigaidana, or "staggered shelves", were shelving structures built into the tokonoma to display smaller objects. Occurring at the same time as the development of the shoin architecture, the fusuma, or "sliding doors", were becoming a popular means to divide rooms. As a result, columns began to be made square-shaped to accommodate the sliding doors. The asymmetry of the tokonoma and chigaidana pair, as well as the squared pillars, differentiated the shoin design from the contemporary Chinese design, which preferred symmetric pairs of furniture and round pillars. Soon after its advent, shoin architecture became associated with these evolving elements as it became the predominant format for formal gathering rooms. References Japanese architectural features Rooms
Shoin
[ "Engineering" ]
553
[ "Rooms", "Architecture" ]
16,130,126
https://en.wikipedia.org/wiki/New%20digraph%20reconstruction%20conjecture
The reconstruction conjecture of Stanisław Ulam is one of the best-known open problems in graph theory. Using the terminology of Frank Harary it can be stated as follows: If G and H are two graphs on at least three vertices and ƒ is a bijection from V(G) to V(H) such that G\{v} and H\{ƒ(v)} are isomorphic for all vertices v in V(G), then G and H are isomorphic. In 1964 Harary extended the reconstruction conjecture to directed graphs on at least five vertices as the so-called digraph reconstruction conjecture. Many results supporting the digraph reconstruction conjecture appeared between 1964 and 1976. However, this conjecture was proved to be false when P. K. Stockmeyer discovered several infinite families of counterexample pairs of digraphs (including tournaments) of arbitrarily large order. The falsity of the digraph reconstruction conjecture caused doubt about the reconstruction conjecture itself. Stockmeyer even observed that “perhaps the considerable effort being spent in attempts to prove the (reconstruction) conjecture should be balanced by more serious attempts to construct counterexamples.” In 1979, Ramachandran revived the digraph reconstruction conjecture in a slightly weaker form called the new digraph reconstruction conjecture. In a digraph, the number of arcs incident from (respectively, to) a vertex v is called the outdegree (indegree) of v and is denoted by od(v) (respectively, id(v)). The new digraph conjecture may be stated as follows: The new digraph reconstruction conjecture reduces to the reconstruction conjecture in the undirected case, because if all the vertex-deleted subgraphs of two graphs are isomorphic, then the corresponding vertices must have the same degree. Thus, the new digraph reconstruction conjecture is stronger than the reconstruction conjecture, but weaker than the disproved digraph reconstruction conjecture. Several families of digraphs have been shown to satisfy the new digraph reconstruction conjecture and these include all the digraphs in the known counterexample pairs to the digraph reconstruction conjecture. Reductions All digraphs are N-reconstructible if all digraphs with 2-connected underlying graphs are N-reconstructible. All digraphs are N-reconstructible if and only if either of the following two classes of digraphs are N-reconstructible, where diam(D) and radius(D) are defined to be the diameter and radius of the underlying graph of D. Digraphs with diam(D) ≤ 2 or diam(D) = diam(Dc) = 3 Digraphs D with 2-connected underlying graphs and radius(D) ≤ 2 Present status As of 2024, no counterexample to the new digraph reconstruction conjecture is known. This conjecture is now also known as the degree associated reconstruction conjecture. References Conjectures Unsolved problems in graph theory Directed graphs
New digraph reconstruction conjecture
[ "Mathematics" ]
614
[ "Unsolved problems in mathematics", "Mathematical problems", "Conjectures", "Unsolved problems in graph theory" ]
16,130,377
https://en.wikipedia.org/wiki/Victor%20von%20Richter
Victor von Richter (April 15, 1841 – October 8, 1891) was a German chemist who discovered the von Richter reaction. Works Chemie der Kohlenstoffverbindungen. Band 1: Die Chemie der Fettkoerper. Cohen, Bonn 1894. Digital edition by the University and State Library Düsseldorf. Chemie der Kohlenstoffverbindungen. Band 2: Carbocyclische und heterocyclische Verbindungen. Cohen, Bonn 1896. Digital edition by the University and State Library Düsseldorf. V. v. Richter's Lehrbuch der anorganischen Chemie: mit 90 Holzschnitten u. 1 Spectraltaf. 8. Aufl. / neu bearb. von H. Klinger. Cohen, Bonn 1895. Digital edition by the University and State Library Düsseldorf. Traité de chimie organique. Vol. 1 & 2. éd. française trad. d'après la 11e éd. allemande, Béranger, Paris 1910. Digital edition by the University and State Library Düsseldorf. References External links 1841 births 1891 deaths 19th-century German chemists People involved with the periodic table
Victor von Richter
[ "Chemistry" ]
240
[ "Periodic table", "People involved with the periodic table" ]
16,130,469
https://en.wikipedia.org/wiki/Ute%20meridian
The Ute meridian, also known as the Grand River meridian, was established in 1880 and is a principal meridian of Colorado. The initial point lies inside the boundaries of Grand Junction Regional Airport, Grand Junction, Colorado. See also List of principal and guide meridians and base lines of the United States References External links Surveying Named meridians Geography of Colorado Meridians and base lines of the United States
Ute meridian
[ "Engineering" ]
80
[ "Surveying", "Civil engineering" ]
16,130,885
https://en.wikipedia.org/wiki/Disinfection%20by-product
Disinfection by-products (DBPs) are organic and inorganic compounds resulting from chemical reactions between disinfection agents and organic and inorganic substances, such as contaminants, present in water during water disinfection processes. Chlorination disinfection byproducts Chlorinated disinfection agents such as chlorine and monochloramine are strong oxidizing agents introduced into water in order to destroy pathogenic microbes, to oxidize taste/odor-forming compounds, and to form a disinfectant residual so water can reach the consumer tap safe from microbial contamination. These disinfectants may react with naturally present fulvic and humic acids, amino acids, and other natural organic matter, as well as iodide and bromide ions, to produce a range of DBPs such as the trihalomethanes (THMs), haloacetic acids (HAAs), bromate, and chlorite (which are regulated in the US), and so-called "emerging" DBPs such as halonitromethanes, haloacetonitriles, haloamides, halofuranones, iodo-acids such as iodoacetic acid, iodo-THMs (iodotrihalomethanes), nitrosamines, and others. Chloramine has become a popular disinfectant in the US, and it has been found to produce N-nitrosodimethylamine (NDMA), which is a possible human carcinogen, as well as highly genotoxic iodinated DBPs, such as iodoacetic acid, when iodide is present in source waters. Residual chlorine and other disinfectants may also react further within the distribution network – both by further reactions with dissolved natural organic matter and with biofilms present in the pipes. In addition to being highly influenced by the types of organic and inorganic matter in the source water, the different species and concentrations of DBPs vary according to the type of disinfectant used, the dose of disinfectant, the concentration of natural organic matter and bromide/iodide, the time since dosing (i.e. water age), temperature, and pH of the water. Swimming pools using chlorine have been found to contain trihalomethanes, although generally they are below the current EU standard for drinking water (100 micrograms per litre). Concentrations of trihalomethanes (mainly chloroform) of up to 0.43 ppm have been measured. In addition, trichloramine has been detected in the air above swimming pools, and it is suspected in the increased asthma observed in elite swimmers. Trichloramine is formed by the reaction of urea (from urine and sweat) with chlorine and gives the indoor swimming pool its distinctive odor. Byproducts from non-chlorinated disinfectants Several powerful oxidizing agents are used in disinfecting and treating drinking water, and many of these also cause the formation of DBPs. Ozone, for example, produces ketones, carboxylic acids, and aldehydes, including formaldehyde. Bromide in source waters can be converted by ozone into bromate, a potent carcinogen that is regulated in the United States, as well as other brominated DBPs. As regulations are tightened on established DBPs such as THMs and HAAs, drinking water treatment plants may switch to alternative disinfection methods. This change will alter the distribution of classes of DBPs. Occurrence DBPs are present in most drinking water supplies that have been subject to chlorination, chloramination, ozonation, or treatment with chlorine dioxide. Many hundreds of DBPs exist in treated drinking water and at least 600 have been identified. 
The low levels of many of these DBPs, coupled with the analytical costs of testing water samples for them, mean that in practice only a handful of DBPs are actually monitored. It is increasingly recognized that the genotoxicities and cytotoxicities of many of the DBPs not subject to regulatory monitoring (particularly iodinated and nitrogenous DBPs) are comparatively much higher than those of the DBPs commonly monitored in the developed world (THMs and HAAs). In 2021, a new group of DBPs known as halogenated pyridinols was discovered, containing at least 8 previously unknown heterocyclic nitrogenous DBPs. They were found to require low-pH treatment (pH 3.0) to be removed effectively. When their developmental and acute toxicity was tested on zebrafish embryos, it was found to be slightly lower than that of halogenated benzoquinones, but dozens of times higher than that of commonly known DBPs such as tribromomethane and iodoacetic acid. Health effects Epidemiological studies have looked at the associations between exposure to DBPs in drinking water and cancers, adverse birth outcomes and birth defects. Meta-analyses and pooled analyses of these studies have demonstrated consistent associations for bladder cancer and for babies being born small for gestational age, but not for congenital anomalies (birth defects). Early-term miscarriages have also been reported in some studies. The exact putative agent remains unknown, however, since the number of DBPs in a water sample is high and exposure surrogates, such as monitoring data for a specific by-product (often total trihalomethanes), are used in lieu of more detailed exposure assessment. The World Health Organization has stated that "the risk of death from pathogens is at least 100 to 1000 times greater than the risk of cancer from disinfection by-products (DBPs)" and that the "risk of illness from pathogens is at least 10 000 to 1 million times greater than the risk of cancer from DBPs". Regulation and monitoring The United States Environmental Protection Agency has set Maximum Contaminant Levels (MCLs) for bromate, chlorite, haloacetic acids and total trihalomethanes (TTHMs). In Europe, the level of TTHMs has been set at 100 micrograms per litre, and the level for bromate at 10 micrograms per litre, under the Drinking Water Directive. No guideline values have been set for HAAs in Europe. The World Health Organization has established guidelines for several DBPs, including bromate, bromodichloromethane, chlorate, chlorite, chloroacetic acid, chloroform, cyanogen chloride, dibromoacetonitrile, dibromochloromethane, dichloroacetic acid, dichloroacetonitrile, NDMA, and trichloroacetic acid. See also Stuart W. Krasner References Chlorine Drinking water Water pollution Water treatment
Disinfection by-product
[ "Chemistry", "Engineering", "Environmental_science" ]
1,454
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
16,131,177
https://en.wikipedia.org/wiki/Watari%20Museum%20of%20Contemporary%20Art
The Watari Museum of Contemporary Art, commonly referred to as Watari-um, is a museum of contemporary art located in Shibuya, Tokyo. Founded by Shizuko Watari and opened in 1990, the museum is near Gaienmae Station on the Tokyo Metro Ginza Line. The institution promotes conceptual art and other non-commercial art and artists in Japan. It began as a commercial venue known as the Galerie Watari, which showcased a range of artists such as Sol LeWitt and Nam June Paik, as well as famous pop artists Andy Warhol and Keith Haring. The Watari-um became noted for its exhibitions of international and Japanese artists, while also reflecting on the position of Japanese art in the international context. The museum also organizes lectures, learning workshops for children, and small project room exhibitions. History From 1972 to 1989, Shizuko Watari was the director of the Galerie Watari in Tokyo, which organized exhibitions for Japanese and international artists including Nam June Paik, Keith Haring, Marcel Broodthaers, On Kawara and Shinro Ohtake. The gallery was expanded and became the Watari Museum of Contemporary Art, also known as the Watari-um, a name derived from the combination of Watari and museum. The 6-story museum was designed by Swiss architect Mario Botta and opened in September 1990. The first floor is devoted entirely to the museum shop. The fourth floor offers a bird's eye view of works displayed below, and the glass-walled mezzanine of the third floor makes for visual correspondence between artworks displayed in the exhibition rooms of the art gallery. The top floors accommodate the offices and the owner's residence. Beginning with an exhibition of Joseph Beuys in 1991, the Watari-um became noted for its exhibitions of significant international artists. The institution also helped to establish connections between Japanese and Asian artists, through projects such as Chinese Contemporary Art 1997, which included a large-scale performance by Zhang Huan. Large retrospective exhibitions of the artists Larry Clark, Henry Darger, Jan Fabre, Federico Herrero, Mike Kelley, John Lurie, Barry McGee, and Gerda Steiner & Jörg Lenzlinger have been held at the museum over the last few decades. References External links Official website Art museums and galleries in Tokyo Contemporary art galleries in Japan Buildings and structures in Shibuya Art museums and galleries established in 1990 Buildings and structures completed in 1990 1990 establishments in Japan Mario Botta buildings Modernist architecture in Japan Postmodern architecture
Watari Museum of Contemporary Art
[ "Engineering" ]
503
[ "Postmodern architecture", "Architecture" ]
16,133,434
https://en.wikipedia.org/wiki/Optically%20active%20additive
Optically active additive (OAA) is an organic or inorganic material which, when added to a coating, makes that coating react to ultraviolet light. This effect enables quick, non-invasive inspection of very large coated areas during the application process, allowing the coating inspector to identify and concentrate on defective areas, thus reducing inspection time while assuring the probability of good application and coverage. It works by highlighting holidays and pin-holes and areas of over- and under-application, as well as giving the opportunity for crack detection and identification of early coating deterioration through life. The use of optically active additives or fluorescing additives is specified in US Military Specification MIL-SPEC-23236C. The use of OAAs and the inspection technique is described in the SSPC document Technology Up-date 11. Inorganic versus organic There are two common types of optically active additives available commercially: inorganic and organic. Inorganic OAAs exhibit large particle sizes (and hence no mobility), are light-stable, can have a choice of colours, are useful in a wide range of coating systems, and are more expensive. Some inorganic OAAs can exhibit some degree of afterglow, aiding inspection. Organic OAAs require low addition levels, are soluble in solvents and organic liquids (mobile), are blue under UV (emitting the same colour as lint, oil, grease etc.), can fade quickly, have limited use in a range of coating systems and are less expensive. They are also indistinguishable from old tar epoxy-type coatings still seen on some structures and vessels. Organic OAAs have no afterglow. Physics of optically active technology If a single photon approaches an atom which is receptive to it, the photon can be absorbed by the atom in a manner very similar to a radio wave being picked up by an aerial. At the moment of absorption the photon ceases to exist and the total energy contained within the atom increases. This increase in energy is usually described symbolically by saying that one of the outermost electrons "jumps" to a "higher orbit". This new atomic configuration is unstable and the tendency is for the electron to fall back to its lower orbit or energy level, emitting a new photon as it goes. The entire process may take no more than 10⁻⁹ seconds. The result is much the same as with reflective colour, but because of the process of absorption and emission, the substance emits a glow. According to Planck, the energy of each photon is given by multiplying its frequency by a constant (the Planck constant, h). It follows that the wavelength of a photon emitted from a luminescent system is directly related to the difference between the energy of the two atomic levels involved. In terms of wavelength, this relationship is an inverse one, so that if an emitted photon is to be of short wavelength (high energy), the gap to be jumped by the electron must be a large one. The numerical relationship between these two aspects is the reciprocal of the Planck constant. Chemical engineers are able to devise molecules with these energy levels in mind, so as to adjust the wavelength of the emitted photons to produce a specific colour (a numerical sketch follows the references below). References Sources Buckhurst and Bowry. "An optically-active coating system for coating ballast tanks". 
Paper T-44, presented at the Paint and Coatings Expo 2005, SSPC, Pittsburgh, 2005 Department of Defense Single Stocking Point for Specifications and Standards (DoDSSP), Standardisation Document Order Desk, 700 Robbins Avenue, Bldg 4D, Philadelphia, PA 19111–5094 Technology Update 11 – Inspection of Fluorescent Coating Systems, SSPC, Pittsburgh October 2006 Planck, M. "On the law of distribution of energy in the normal spectrum", Annalen der Physik, 4, 553, 1901 Paint&Coatings.com; 28 November 2000. "Scottish company develops additive to revolutionize coating inspection process" Coatings
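The photon-energy argument in the physics section can be made concrete with the Planck relation; the following worked equation is standard physics offered for illustration, not material taken from the cited sources:

E = h\nu = \frac{hc}{\lambda}
\qquad\Longrightarrow\qquad
\lambda = \frac{hc}{E_2 - E_1}

where E_2 − E_1 is the gap between the two atomic energy levels. Numerically, hc ≈ 1240 eV·nm, so a 3 eV gap corresponds to emission near 410 nm; the ultraviolet photons used for inspection carry more energy than this, which is why they can excite states that re-emit at longer, visible wavelengths.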
Optically active additive
[ "Chemistry" ]
813
[ "Coatings" ]
16,135,470
https://en.wikipedia.org/wiki/Magnetic%20isotope%20effect
Magnetic isotope effects arise when a chemical reaction involves spin-selective processes, such as the radical pair mechanism. The result is that some isotopes react preferentially, depending on their nuclear spin quantum number I. This is in contrast to more familiar mass-dependent isotope effects. References Physical chemistry
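As a hedged illustration of the mechanism (standard radical-pair theory, not drawn from this stub): singlet–triplet interconversion in a radical pair is driven partly by the isotropic hyperfine term in the spin Hamiltonian,

\mathcal{H}_{\mathrm{hf}} = \sum_{k} a_k \,\hat{I}_k \cdot \hat{S},

which vanishes for nuclei with I = 0. A radical pair containing ¹³C (I = 1/2) therefore interconverts between singlet and triplet states faster than one containing only ¹²C (I = 0), so the two isotopes are sorted into different recombination products even though their masses are nearly identical.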
Magnetic isotope effect
[ "Physics", "Chemistry" ]
60
[ "Physical chemistry", "Applied and interdisciplinary physics", "Physical chemistry stubs", "nan" ]
16,136,171
https://en.wikipedia.org/wiki/List%20of%20Ecma%20standards
This is a list of standards published by Ecma International, formerly the European Computer Manufacturers Association. ECMA-1 – ECMA-99 ECMA-100 – ECMA-199 ECMA-200 – ECMA-299 ECMA-205 – Commercially Oriented Functionality Class for Security Evaluation (COFC) ECMA-206 – Association Context Management including Security Context Management ECMA-208 – System-Independent Data Format - SIDF (same as ISO/IEC 14863) ECMA-209 ECMA-219 – Authentication and Privilege Attribute Security Application with related Key Distribution Functions - Part 1, 2 and 3 ECMA-231 ECMA-234 – Application Programming Interface for Windows ECMA-235 – The Ecma GSS-API Mechanism ECMA-246 – Specification of AIT-1 ECMA-258 ECMA-259 ECMA-262 – ECMAScript (ISO/IEC 16262) (standardised JavaScript) ECMA-286 ECMA-291 – Specification of AIT-1 with MIC Format ECMA-292 – Specification of AIT-2 with MIC Format ECMA-300 – ECMA-399 ECMA-307 – Corporate Telecommunication Networks - Signalling Interworking between QSIG and H.323 - Generic Functional Protocol for the Support of Supplementary Services ECMA-308 – Corporate Telecommunication Networks - Signalling Interworking between QSIG and H.323 - Call Transfer Supplementary Services ECMA-309 – Corporate Telecommunication Networks - Signalling Interworking between QSIG and H.323 - Call Diversion Supplementary Services ECMA-316 – VXA ECMA-319 – Ultrium-1 ECMA-320 – Super DLT ECMA-326 – Corporate Telecommunication Networks - Signalling Interworking between QSIG and H.323 - Call Completion Supplementary Services ECMA-329 – Specification of AIT-3 ECMA-332 – Corporate Telecommunication Networks - Signalling Interworking between QSIG and H.323 - Basic Services ECMA-334 – C# programming language (ISO/IEC 23270) ECMA-335 – Common Language Infrastructure (ISO/IEC 23271) ECMA-355 – Corporate Telecommunication Networks - Tunnelling of QSIG over SIP ECMA-357 – ECMAScript for XML (E4X) (withdrawn) ECMA-360 – Corporate telecommunication networks - Signalling interworking between QSIG and SIP - Call diversion ECMA-361 – Corporate telecommunication networks - Signalling interworking between QSIG and SIP - Call transfer ECMA-363 – Universal 3D file format ECMA-365 – Universal Media Disc (UMD) ECMA-367 – Eiffel: Analysis, Design and Programming Language ECMA-368 – Ultra-wideband physical and MAC layers ECMA-369 – Ultra-wideband MAC-PHY interface ECMA-370 – TED - The Eco Declaration ECMA-372 – C++/CLI ECMA-376 – Office Open XML ECMA-377 – Holographic Versatile Disc 200 GB recordable cartridge ECMA-378 – Holographic Versatile Disc 100 GB HVD-ROM ECMA-379 – Test Method for the Estimation of the Archival Lifetime of Optical Media ECMA-380 – Ultra Density Optical (UDO) ECMA-381 – Procedure for the Registration of Assigned Numbers for ECMA-368 and ECMA-369 ECMA-388 – Open XML Paper Specification ECMA-400 – ECMA-499 ECMA-402 – ECMAScript Internationalization API Specification ECMA-404 – The JSON Data Interchange Format ECMA-407 – Scalable Sparse Spatial Sound System (S5) – Base S5 Coding ECMA-408 – Dart Programming Language Specification See also List of ISO standards External links List of Ecma standards (Ecma International) Index of Ecma Standards (Ecma International) List of Ecma withdrawn Standards (Ecma International) Ecma standards
List of Ecma standards
[ "Technology" ]
811
[ "Computer standards", "Ecma standards" ]
16,136,723
https://en.wikipedia.org/wiki/Discursive%20dilemma
Discursive dilemma or doctrinal paradox is a paradox in social choice theory. The paradox is that aggregating judgments with majority voting can result in self-contradictory judgments. Consider a community voting on road repairs that is asked three questions; the repairs go ahead if all three answers are 'Yes'. The questions are: "Are the roads important?", "Is the weather right for road repair?" and "Are there available funds for repairs?" Imagine that three (non-overlapping) groups of 20% of people vote 'No' for each question, and everyone else votes 'Yes'. Then each question has an 80% agreement of 'Yes', so the repairs go ahead. However, now consider the situation where the community is asked one question: "Are all three conditions (importance, weather and funds) met?" Now 60% of people disagree with one of these conditions, so only 40% agree on a 'Yes' vote. In this case, the repairs do not go ahead. Thus the road repair team gets different feedback depending on how they poll their community. In general, whenever opinions are not unanimous, aggregating them question by question can yield a logically self-contradictory collective position. The paradox is closely related to the Condorcet paradox. Overview The doctrinal paradox shows it is difficult to construct a model of public opinion simply by identifying the majority opinion on multiple questions. This is because contradictory conceptions of a group can emerge depending on the type of questioning that is chosen. To see how, imagine that a three-member court must decide whether someone is liable for a breach of contract. For example, a lawn caretaker is accused of violating a contract not to mow over the land-owner's roses. The jurors must decide which of the following propositions are true: P: the defendant performed a certain action (i.e. did the caretaker mow over the roses?); Q: the defendant had a contractual obligation not to do that action (i.e. was there a contract not to mow over the roses?); C: the defendant is liable. Additionally, all judges accept the proposition C ↔ (P ∧ Q). In other words, the judges agree that a defendant should be liable if and only if the two propositions, P and Q, are both true. Each judge could make consistent (non-contradictory) judgments, and the paradox will still emerge. Most judges could think P is true, and most judges could think Q is true. In this example, that means they would vote that the caretaker probably mowed over the roses, and that the contract did indeed forbid that action. This suggests the caretaker is liable. At the same time, most judges may think that P and Q are not both true at once. In this example, that means most judges conclude the caretaker is not liable. Such a voting pattern illustrates how majority decisions can contradict each other: the judges vote in favor of the premises, and yet reject the conclusion. Explanation This dilemma results because an actual decision-making procedure might be premise-based or conclusion-based. In a premise-based procedure, the judges decide by voting whether the conditions for liability are met. In a conclusion-based procedure, the judges decide directly whether the defendant should be liable. In the above formulation, the paradox is that the two procedures do not necessarily lead to the same result; the two procedures can even lead to opposite results. Pettit believes that the lesson of this paradox is that there is no simple way to aggregate individual opinions into a single, coherent "group entity". These ideas are relevant to sociology, which endeavors to understand and predict group behaviour.
Pettit warns that we need to understand groups because they can be very powerful, can effect greater change, and yet the group as a whole may not have a strong conscience (see Diffusion of responsibility). He says we sometimes fail to hold groups (e.g. corporations) responsible because of the difficulties described above. Collective responsibility is important to sort out, and Pettit insists that groups should have limited rights, and various obligations and checks on their power. The discursive dilemma can be seen as a special case of the Condorcet paradox. List and Pettit argue that the discursive dilemma can be likewise generalized to a sort of "List–Pettit theorem". Their theorem states that the inconsistencies remain for any aggregation method which meets a few natural conditions. See also Multi-issue voting - similar to judgement aggregation in that voters have to decide on several related issues; different in that they vote according to their preferences, rather than according to their beliefs. Belief merging - similar to judgement aggregation in that there are several conflicting beliefs (represented as logical formulae) that have to be combined into a consistent database. References List, C. and Pettit, P.: Aggregating Sets of Judgments: Two Impossibility Results Compared, Synthese 140 (2004) 207–235 External links Judgment aggregation: an introduction and bibliography of the discursive dilemma by Christian List Social choice theory Paradoxes Dilemmas Social epistemology
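The gap between premise-based and conclusion-based procedures is easy to reproduce numerically. A minimal Python sketch follows (the profile of judgments is invented for illustration, mirroring the court example above):

# Discursive dilemma: premise-based vs conclusion-based aggregation.
# Each judge is individually consistent: C is True iff P and Q both hold.
judges = [
    {"P": True,  "Q": True},   # judge 1: would find the defendant liable
    {"P": True,  "Q": False},  # judge 2: not liable
    {"P": False, "Q": True},   # judge 3: not liable
]

def majority(votes):
    return sum(votes) > len(votes) / 2

# Premise-based: aggregate P and Q separately, then apply C <-> (P and Q).
p_major = majority([j["P"] for j in judges])   # True (2 of 3)
q_major = majority([j["Q"] for j in judges])   # True (2 of 3)
premise_based = p_major and q_major            # -> liable

# Conclusion-based: each judge decides C individually; take the majority.
conclusion_based = majority([j["P"] and j["Q"] for j in judges])  # 1 of 3 -> not liable

print(premise_based, conclusion_based)  # True False: the procedures disagree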
Discursive dilemma
[ "Technology" ]
1,032
[ "Social epistemology", "Science and technology studies" ]
16,137,162
https://en.wikipedia.org/wiki/Ackermann%20ordinal
In mathematics, the Ackermann ordinal is a certain large countable ordinal, named after Wilhelm Ackermann. The term "Ackermann ordinal" is also occasionally used for the small Veblen ordinal, a somewhat larger ordinal. There is no standard notation for ordinals beyond the Feferman–Schütte ordinal Γ0. Most systems of notation use symbols such as ψ(α), θ(α), ψα(β), some of which are modifications of the Veblen functions to produce countable ordinals even for uncountable arguments, and some of which are "collapsing functions". The smaller Ackermann ordinal is the limit of a system of ordinal notations invented by Ackermann, and is sometimes denoted by ψ(Ω^(Ω²)), θ(Ω²), or φ(1,0,0,0), where Ω is the smallest uncountable ordinal; the last of these notations is an extension of the Veblen functions to more than 2 arguments. Ackermann's system of notation is weaker than the system introduced much earlier by Veblen, which he seems to have been unaware of. References Ordinal numbers
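For orientation, here is a sketch of where this ordinal sits in the multi-argument Veblen hierarchy; these are standard definitions given for illustration, not Ackermann's original notation system:

% phi_0(b) = omega^b, and phi_{a+1} enumerates the common fixed points
% of phi_a; each additional argument iterates this fixed-point construction.
\varphi(1,0) = \varepsilon_0, \qquad
\varphi(1,0,0) = \Gamma_0, \qquad
\varphi(1,0,0,0) = \text{Ackermann ordinal}, \qquad
\sup_n \varphi(1,\underbrace{0,\dots,0}_{n}) = \text{small Veblen ordinal}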
Ackermann ordinal
[ "Mathematics" ]
240
[ "Ordinal numbers", "Mathematical objects", "Number stubs", "Order theory", "Numbers" ]
16,137,416
https://en.wikipedia.org/wiki/Trimagnesium%20phosphate
Trimagnesium phosphate describes inorganic compounds with formula Mg3(PO4)2·xH2O. They are magnesium salts of phosphoric acid, with varying amounts of water of crystallization: x = 0, 5, 8, 22. The octahydrate forms upon reaction of stoichiometric quantities of monomagnesium phosphate (tetrahydrate) with magnesium hydroxide. Mg(H2PO4)2•4H2O + 2 Mg(OH)2 → Mg3(PO4)2•8H2O The octahydrate is found in nature as the mineral bobierrite. The anhydrous compound is obtained by heating the hydrates to 400 °C. It is isostructural with cobalt(II) phosphate. The metal ions occupy both octahedral (six-coordinate) and pentacoordinate sites in a 1:2 ratio. Safety Magnesium phosphate tribasic is listed on the FDA's generally recognized as safe, or GRAS, list of substances. See also Magnesium phosphate References Acid salts Phosphates Magnesium compounds E-number additives
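The preparation equation above balances, which is easy to verify mechanically; a small illustrative Python check (not from the article's sources):

# Atom-balance check for: Mg(H2PO4)2·4H2O + 2 Mg(OH)2 -> Mg3(PO4)2·8H2O
from collections import Counter

def add(counter, atoms, times=1):
    for element, n in atoms.items():
        counter[element] += n * times

left, right = Counter(), Counter()
add(left, {"Mg": 1, "H": 4, "P": 2, "O": 8})   # Mg(H2PO4)2
add(left, {"H": 8, "O": 4})                    # ·4H2O
add(left, {"Mg": 1, "O": 2, "H": 2}, times=2)  # 2 Mg(OH)2
add(right, {"Mg": 3, "P": 2, "O": 8})          # Mg3(PO4)2
add(right, {"H": 16, "O": 8})                  # ·8H2O

assert left == right       # both sides: 3 Mg, 2 P, 16 H, 16 O
print(dict(left))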
Trimagnesium phosphate
[ "Chemistry" ]
239
[ "Acid salts", "Phosphates", "Salts" ]
16,137,794
https://en.wikipedia.org/wiki/Star%20position
Star position is the apparent angular position of any given star in the sky, which seems fixed onto an arbitrary sphere centered on Earth. The location is defined by a pair of angular coordinates relative to the celestial equator: right ascension (α) and declination (δ). This pair forms the basis of the equatorial coordinate system. While declination is given in degrees (from +90° at the north celestial pole to −90° at the south), right ascension is usually given in hours (0 to 24 h). This is due to the observation technique of star transits, which cross the field of view of telescope eyepieces due to Earth's rotation. The observation techniques are topics of positional astronomy and of astrogeodesy. Ideally, the Cartesian coordinate system refers to an inertial frame of reference. The third coordinate is the star's distance, which is normally used as an attribute of the individual star. The following factors change star positions over time: axial precession and nutation – slow tilts of Earth's axis with rates of 50 arcseconds and 2 arcseconds respectively, per year; the aberration and parallax – effects of Earth's orbit around the Sun; and the proper motion of the individual stars. The first and second effects are accounted for by so-called mean places of stars, as opposed to their apparent places as seen from the moving Earth. Usually the mean places refer to a special epoch, e.g. 1950.0 or 2000.0. The third effect has to be handled individually. The star positions are compiled in several star catalogues of different volume and accuracy. Absolute and very precise coordinates of 1000-3000 stars are collected in fundamental catalogues, starting with the FK (Berlin ~1890) up to the modern FK6. Relative coordinates of numerous stars are collected in catalogues like the Bonner Durchmusterung (Germany 1859-1863, 342,198 rough positions), the SAO catalogue (USA 1966, 250,000 astrometric stars) or the Hipparcos and Tycho catalogues (110,000 and 2 million stars by space astrometry). See also Star catalogue, FK4, FK6 Equatorial coordinates, Ecliptic coordinates annual aberration, proper motion Geodetic astronomy, transit instruments References External links How Astronomers describe the position of stars Apparent Places of Fundamental Stars Astrometry Position
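As a minimal illustration of how the (α, δ) pair is used in practice (a sketch, not from the article's sources): converting a catalogue position to a Cartesian unit vector, the form needed for precession or proper-motion computations.

import math

def radec_to_unit_vector(ra_hours: float, dec_deg: float):
    """Convert right ascension (hours) and declination (degrees)
    to a Cartesian unit vector in the equatorial frame."""
    ra = math.radians(ra_hours * 15.0)   # 24 h = 360 deg, so 1 h = 15 deg
    dec = math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

# Example: Sirius at epoch J2000.0, roughly alpha = 6h 45.1m, delta = -16deg 42.8'
x, y, z = radec_to_unit_vector(6.0 + 45.1 / 60.0, -(16.0 + 42.8 / 60.0))
print(x, y, z)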
Star position
[ "Physics", "Astronomy", "Mathematics" ]
484
[ "Geometric measurement", "Point (geometry)", "Physical quantities", "Astrometry", "Position", "Space", "Vector physical quantities", "Spacetime", "Wikipedia categories named after physical quantities", "Astronomical sub-disciplines" ]
16,137,869
https://en.wikipedia.org/wiki/Curvelet
Curvelets are a non-adaptive technique for multi-scale object representation. Being an extension of the wavelet concept, they are becoming popular in similar fields, namely in image processing and scientific computing. Wavelets generalize the Fourier transform by using a basis that represents both location and spatial frequency. For 2D or 3D signals, directional wavelet transforms go further, by using basis functions that are also localized in orientation. A curvelet transform differs from other directional wavelet transforms in that the degree of localisation in orientation varies with scale. In particular, fine-scale basis functions are long ridges; the shape of the basis functions at scale j is 2^(−j) by 2^(−j/2), so the fine-scale bases are skinny ridges with a precisely determined orientation. Curvelets are an appropriate basis for representing images (or other functions) which are smooth apart from singularities along smooth curves, where the curves have bounded curvature, i.e. where objects in the image have a minimum length scale. This property holds for cartoons, geometrical diagrams, and text. As one zooms in on such images, the edges they contain appear increasingly straight. Curvelets take advantage of this property, by defining the higher resolution curvelets to be more elongated than the lower resolution curvelets. However, natural images (photographs) do not have this property; they have detail at every scale. Therefore, for natural images, it is preferable to use some sort of directional wavelet transform whose wavelets have the same aspect ratio at every scale. When the image is of the right type, curvelets provide a representation that is considerably sparser than other wavelet transforms. This can be quantified by considering the best approximation of a geometrical test image that can be represented using only n wavelets, and analysing the approximation error as a function of n. For a Fourier transform, the squared error decreases only as O(1/√n). For a wide variety of wavelet transforms, including both directional and non-directional variants, the squared error decreases as O(1/n). The extra assumption underlying the curvelet transform allows it to achieve O((log n)³/n²). Efficient numerical algorithms exist for computing the curvelet transform of discrete data. The computational cost of the discrete curvelet transforms proposed by Candès et al. (Discrete curvelet transform based on unequally-spaced fast Fourier transforms and based on the wrapping of specially selected Fourier samples) is approximately 6–10 times that of an FFT, and has the same O(n² log n) dependence for an image of size n × n. Curvelet construction To construct a basic curvelet and provide a tiling of the 2-D frequency space, two main ideas should be followed: Consider polar coordinates in frequency domain Construct curvelet elements being locally supported near wedges The number of wedges is N_j = 4·2^⌈j/2⌉ at the scale 2^(−j), i.e., it doubles in each second circular ring. Let ξ be the variable in the frequency domain, and (r, ω) be the corresponding polar coordinates. We use the ansatz for the dilated basic curvelets in polar coordinates: γ_(j,0,0)(r, ω) = 2^(−3j/4) W(2^(−j) r) Ṽ_(N_j)(ω). To construct a basic curvelet with compact support near a ″basic wedge″, the two windows W and Ṽ_(N_j) need to have compact support. Here, we can simply take W to cover the positive real axis with its dilates, and Ṽ_(N_j) such that each circular ring is covered by the translations of Ṽ_(N_j).
Then the admissibility condition yields Σ_(j∈ℤ) W²(2^(−j) r) = 1 for all r > 0 (see window functions for more information). For tiling a circular ring into N wedges, where N is an arbitrary positive integer, we need a 2π-periodic nonnegative window Ṽ_N with support inside [−2π/N, 2π/N] such that Σ_(l=0..N−1) Ṽ_N²(ω − 2πl/N) = 1 for all ω; Ṽ_N can be simply constructed as a 2π-periodization of a scaled window V(Nω/(2π)). Then, it follows that the N rotated copies of Ṽ_N tile each circular ring. For a complete covering of the frequency plane including the region around zero, we need to define a low pass element W_0 with W_0²(r) + Σ_(j≥0) W²(2^(−j) r) = 1 that is supported on the unit circle, and where we do not consider any rotation. Applications Image processing Seismic exploration Fluid mechanics PDEs solving Compressed sensing See also Bandelet transform Chirplet transform Contourlet transform Fresnelet transform Noiselet transform Scale space Shearlet transform References E. Candès and D. Donoho, "Curvelets – a surprisingly effective nonadaptive representation for objects with edges." In: A. Cohen, C. Rabut and L. Schumaker, Editors, Curves and Surface Fitting: Saint-Malo 1999, Vanderbilt University Press, Nashville (2000), pp. 105–120. Angshul Majumdar, Bangla Basic Character Recognition using Digital Curvelet Transform, Journal of Pattern Recognition Research (JPRR), Vol. 2 (1), 2007, pp. 17–26. Emmanuel Candès, Laurent Demanet, David Donoho and Lexing Ying, Fast Discrete Curvelet Transforms. Jianwei Ma, Gerlind Plonka, The Curvelet Transform, IEEE Signal Processing Magazine, 2010, 27 (2), 118–133. Jean-Luc Starck, Emmanuel J. Candès, and David L. Donoho, The Curvelet Transform for Image Denoising, IEEE Transactions on Image Processing, Vol. 11, No. 6, June 2002. External links Curvelet.org homepage Image processing Time–frequency analysis Wavelets
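The approximation rates quoted above can be summarized side by side; this is a hedged recap of standard results for the "cartoon" image model (piecewise-smooth regions separated by C² edges), not a new claim:

\|f - f_n^{\text{Fourier}}\|_2^2 \sim n^{-1/2}, \qquad
\|f - f_n^{\text{wavelet}}\|_2^2 \sim n^{-1}, \qquad
\|f - f_n^{\text{curvelet}}\|_2^2 \lesssim C\, n^{-2} (\log n)^3,

where f_n denotes the best n-term approximation of f in each system. The anisotropic (parabolic) scaling, width ≈ length², is what lets few elongated curvelets hug a smooth edge that wavelets must cover with many isotropic squares.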
Curvelet
[ "Physics" ]
1,022
[ "Frequency-domain analysis", "Spectrum (physical sciences)", "Time–frequency analysis" ]
16,138,478
https://en.wikipedia.org/wiki/Enzymatic%20biofuel%20cell
An enzymatic biofuel cell is a specific type of fuel cell that uses enzymes as a catalyst to oxidize its fuel, rather than precious metals. Enzymatic biofuel cells, while currently confined to research facilities, are widely prized for the promise they hold in terms of their relatively inexpensive components and fuels, as well as their potential as a power source for bionic implants. Operation Enzymatic biofuel cells work on the same general principles as all fuel cells: use a catalyst to separate electrons from a parent molecule and force them to go around an electrolyte barrier and through a wire to generate an electric current. What makes the enzymatic biofuel cell distinct from more conventional fuel cells are the catalysts they use and the fuels that they accept. Whereas most fuel cells use metals like platinum and nickel as catalysts, the enzymatic biofuel cell uses enzymes derived from living cells (although not within living cells; fuel cells that use whole cells to catalyze fuel are called microbial fuel cells). This offers a couple of advantages for enzymatic biofuel cells: Enzymes are relatively easy to mass-produce and so benefit from economies of scale, whereas precious metals must be mined and so have an inelastic supply. Enzymes are also specifically designed to process organic compounds such as sugars and alcohols, which are extremely common in nature. Most organic compounds cannot be used as fuel by fuel cells with metal catalysts because the carbon monoxide formed by the interaction of the carbon molecules with oxygen during the fuel cell's functioning will quickly "poison" the precious metals that the cell relies on, rendering it useless. Because sugars and other biofuels can be grown and harvested on a massive scale, the fuel for enzymatic biofuel cells is extremely cheap and can be found in nearly any part of the world, thus making it an extraordinarily attractive option from a logistics standpoint, and even more so for those concerned with the adoption of renewable energy sources. Enzymatic biofuel cells also have operating requirements not shared by traditional fuel cells. What is most significant is that the enzymes that allow the fuel cell to operate must be "immobilized" near the anode and cathode in order to work properly; if not immobilized, the enzymes will diffuse into the cell's fuel and most of the liberated electrons will not reach the electrodes, compromising its effectiveness. Even with immobilization, a means must also be provided for electrons to be transferred to and from the electrodes. This can be done either directly from the enzyme to the electrode ("direct electron transfer") or with the aid of other chemicals that transfer electrons from the enzyme to the electrode ("mediated electron transfer"). The former technique is possible only with certain types of enzymes whose active sites are close to the enzyme's surface, but doing so presents fewer toxicity risks for fuel cells intended to be used inside the human body. Finally, completely processing the complex fuels used in enzymatic biofuel cells requires a series of different enzymes for each step of the 'metabolism' process; producing some of the required enzymes and maintaining them at the required levels can pose problems. History Early work with biofuel cells, which began in the early 20th century, was purely of the microbial variety. Research on using enzymes directly for oxidation in biofuel cells began in the early 1960s, with the first enzymatic biofuel cell being produced in 1964.
This research began as a product of NASA's interest in finding ways to recycle human waste into usable energy on board spacecraft, as well as a component of the quest for an artificial heart, specifically as a power source that could be put directly into the human body. These two applications – use of animal or vegetable products as fuel and development of a power source that can be directly implanted into the human body without external refueling – remain the primary goals for developing these biofuel cells. Initial results, however, were disappointing. While the early cells did successfully produce electricity, there was difficulty in transporting the electrons liberated from the glucose fuel to the fuel cell's electrode and further difficulties in keeping the system stable enough to produce electricity at all due to the enzymes’ tendency to move away from where they needed to be in order for the fuel cell to function. These difficulties led to an abandonment by biofuel cell researchers of the enzyme-catalyst model for nearly three decades in favor of the more conventional metal catalysts (principally platinum), which are used in most fuel cells. Research on the subject did not begin again until the 1980s after it was realized that the metallic-catalyst method was not going to be able to deliver the qualities desired in a biofuel cell, and since then work on enzymatic biofuel cells has revolved around the resolution of the various problems that plagued earlier efforts at producing a successful enzymatic biofuel cell. However, many of these problems were resolved in 1998. In that year, it was announced that researchers had managed to completely oxidize methanol using a series (or “cascade”) of enzymes in a biofuel cell. Previous to this time, the enzyme catalysts had failed to completely oxidize the cell's fuel, delivering far lower amounts of energy than what was expected given what was known about the energy capacity of the fuel. While methanol is now far less relevant in this field as a fuel, the demonstrated method of using a series of enzymes to completely oxidize the cell's fuel gave researchers a way forward, and much work is now devoted to using similar methods to achieve complete oxidation of more complicated compounds, such as glucose. In addition, and perhaps what is more important, 1998 was the year in which enzyme “immobilization” was successfully demonstrated, which increased the usable life of the methanol fuel cell from just eight hours to over a week. Immobilization also provided researchers with the ability to put earlier discoveries into practice, in particular the discovery of enzymes that can be used to directly transfer electrons from the enzyme to the electrode. This process had been understood since the 1980s but depended heavily on placing the enzyme as close to the electrode as possible, which meant that it was unusable until after immobilization techniques were devised. In addition, developers of enzymatic biofuel cells have applied some of the advances in nanotechnology to their designs, including the use of carbon nanotubes to immobilize enzymes directly. Other research has gone into exploiting some of the strengths of the enzymatic design to dramatically miniaturize the fuel cells, a process that must occur if these cells are ever to be used with implantable devices. 
One research team took advantage of the extreme selectivity of the enzymes to completely remove the barrier between anode and cathode, which is an absolute requirement in fuel cells not of the enzymatic type. This allowed the team to produce a fuel cell that produces 1.1 microwatts operating at over half a volt in a space of just 0.01 cubic millimeters. While enzymatic biofuel cells are not currently in use outside of the laboratory, as the technology has advanced over the past decade non-academic organizations have shown an increasing amount of interest in practical applications for the devices. In 2007, Sony announced that it had developed an enzymatic biofuel cell that can be linked in sequence and used to power an mp3 player, and in 2010 an engineer employed by the US Army announced that the Defense Department was planning to conduct field trials of its own "bio-batteries" in the following year. In explaining their pursuit of the technology, both organizations emphasized the extraordinary abundance (and extraordinarily low expense) of fuel for these cells, a key advantage of the technology that is likely to become even more attractive if the price of portable energy sources goes up, or if they can be successfully integrated into electronic human implants. Feasibility of enzymes as catalysts With respect to fuel cells, enzymes offer several advantages when incorporated. An important enzymatic property to consider is the driving force or potential necessary for successful reaction catalysis. Many enzymes operate at potentials close to those of their substrates, which is most suitable for fuel cell applications. Furthermore, the protein matrix surrounding the active site provides many vital functions: selectivity for the substrate, internal electron coupling, acidic/basic properties and the ability to bind to other proteins (or the electrode). Enzymes are more stable in the absence of proteases, while heat-resistant enzymes can be extracted from thermophilic organisms, thus offering a wider range of operational temperatures. Operating conditions are generally between 20 and 50 °C and pH 4.0 to 8.0. A drawback with the use of enzymes is their size: given the large size of enzymes, they yield a low current density per unit electrode area due to the limited space. Since it is not possible to reduce enzyme size, it has been argued that these types of cells will be lower in activity. One solution has been to use three-dimensional electrodes or immobilization on conducting carbon supports which provide high surface area. These electrodes are extended into three-dimensional space which greatly increases the surface area for enzymes to bind, thus increasing the current. Hydrogenase-based biofuel cells As per the definition of biofuel cells, enzymes are used as electrocatalysts at both the cathode and anode. In hydrogenase-based biofuel cells, hydrogenases are present at the anode for H2 oxidation, in which molecular hydrogen is split into electrons and protons. In the case of H2/O2 biofuel cells, the cathode is coated with oxidase enzymes, which reduce oxygen, combining it with the protons and electrons to form water. Hydrogenase as an energy source In recent years, research on hydrogenases has grown significantly due to scientific and technological interest in hydrogen. The bidirectional or reversible reaction catalyzed by hydrogenase is a solution to the challenge in the development of technologies for the capture and storage of renewable energy as fuel with use on demand.
This can be demonstrated through the chemical storage of electricity obtained from a renewable source (e.g. solar, wind, hydrothermal) as H2 during periods of low energy demand. When energy is desired, H2 can be oxidized to produce electricity, a process which is very efficient. The use of hydrogen in energy-converting devices has gained interest due to being a clean energy carrier and potential transportation fuel. Feasibility of hydrogenase as catalysts In addition to the advantages previously mentioned associated with incorporating enzymes in fuel cells, hydrogenase is a very efficient catalyst for H2 consumption forming electrons and protons. Platinum is typically the catalyst for this reaction; however, the activity of hydrogenases is comparable, without the issue of catalyst poisoning by H2S and CO. In the case of H2/O2 fuel cells, there is no production of greenhouse gases where the product is water. With regards to structural advantages, hydrogenase is highly selective for its substrate. The lack of need for a membrane simplifies the biofuel cell design to be small and compact, given that hydrogenase does not react with oxygen (an inhibitor) and the cathode enzymes (typically laccase) do not react with the fuel. The electrodes are preferably made from carbon, which is abundant, renewable and can be modified in many ways or adsorb enzymes with high affinity. The hydrogenase is attached to a surface which also extends the lifetime of the enzyme. Challenges There are several difficulties to consider associated with the incorporation of hydrogenase in biofuel cells. These factors must be taken into account to produce an efficient fuel cell. Enzyme immobilization Since the hydrogenase-based biofuel cell hosts a redox reaction, hydrogenase must be immobilized on the electrode in such a way that it can exchange electrons directly with the electrode to facilitate the transfer of electrons. This proves to be a challenge in that the active site of hydrogenase is buried in the center of the enzyme, where the FeS clusters are used as an electron relay to exchange electrons with its natural redox partner. Possible solutions for greater efficiency of electron delivery include the immobilization of hydrogenase with the most exposed FeS cluster close enough to the electrode, or the use of a redox mediator to carry out the electron transfer. Direct electron transfer is also possible through the adsorption of the enzyme on graphite electrodes or covalent attachment to the electrode. Another solution includes the entrapment of hydrogenase in a conductive polymer. Enzyme size Immediate comparison of the size of hydrogenase with standard inorganic molecular catalysts reveals that hydrogenase is very bulky. It is approximately 5 nm in diameter compared to 1-5 nm for Pt catalysts. This limits the possible electrode coverage by capping the maximum current density. Since altering the size of hydrogenase is not a possibility, to increase the density of enzyme present on the electrode to maintain fuel cell activity, a porous electrode can be used instead of one that is planar. This increases the electroactive area, allowing more enzyme to be loaded onto the electrode. An alternative is to form films with graphite particles adsorbed with hydrogenase inside a polymer matrix. The graphite particles then can collect and transport electrons to the electrode surface. Oxidative damage In a biofuel cell, hydrogenase is exposed to two oxidizing threats.
O2 inactivates most hydrogenases, with the exception of [NiFe], through diffusion of O2 to the active site followed by destructive modification of the active site. O2 is the fuel at the cathode and therefore must be physically separated or else the hydrogenase enzymes at the anode would be inactivated. Secondly, there is a positive potential imposed on hydrogenase at the anode by the enzyme on the cathode. This further enhances the inactivation of hydrogenase by O2, causing even [NiFe], which was previously O2-tolerant, to be affected. To avoid inactivation by O2, a proton exchange membrane can be used to separate the anode and cathode compartments such that O2 is unable to diffuse to and destructively modify the active site of hydrogenase. Applications Entrapment of hydrogenase in polymers There are many ways to adsorb hydrogenases onto carbon electrodes that have been modified with polymers. An example is a study done by Morozov et al., where they inserted NiFe hydrogenase into polypyrrole films and, to provide proper contact to the electrode, entrapped redox mediators into the film. This was successful because the hydrogenase density was high in the films and the redox mediator helped to connect all enzyme molecules for catalysis, giving about the same power output as hydrogenase in solution. Immobilizing hydrogenase on carbon nanotubes Carbon nanotubes can also be used as a support for hydrogenase on the electrode due to their ability to assemble in large porous and conductive networks. These hybrids have been prepared using [FeFe] and [NiFe] hydrogenases. The [NiFe] hydrogenase isolated from A. aeolicus (a thermophilic bacterium) was able to oxidize H2 with direct electron transfer without a redox mediator, with a 10-fold higher catalytic current with stationary CNT-coated electrodes than with bare electrodes. Another way of coupling hydrogenase to the nanotubes was to covalently bind them to avoid a time delay. Hydrogenase isolated from D. gigas (the sulfate-reducing bacterium Desulfovibrio gigas) was coupled to multiwalled carbon nanotube (MWCNT) networks and produced a current ~30 times higher than the graphite-hydrogenase anode. A slight drawback to this method is that the ratio of hydrogenase covering the surface of the nanotube network leaves hydrogenase to cover only the scarce defective spots in the network. It is also found that some adsorption procedures tend to damage the enzymes, whereas covalently coupling them stabilizes the enzyme and allows it to remain stable for longer. The catalytic activity of hydrogenase-MWCNT electrodes provided stability for over a month, whereas the hydrogenase-graphite electrodes only lasted about a week. Hydrogenase-based biofuel cell applications A fully enzymatic hydrogen fuel cell was constructed by the Armstrong group, who used the cell to power a watch. The fuel cell consisted of a graphite anode with hydrogenase isolated from R. metallidurans and a graphite cathode modified with fungal laccase. The electrodes were placed in a single chamber with a mixture of 3% H2 gas in air, and there was no membrane due to the tolerance of the hydrogenase to oxygen. The fuel cell produced a voltage of 950 mV and generated 5.2 µW/cm² of electricity. Although this system was very functional, it was still not at optimum output due to the low accessible H2 levels, the lower catalytic activity of the oxygen-tolerant hydrogenases and the lower density of catalysts on the flat electrodes. This system was then later improved by adding a MWCNT network to increase the electrode area.
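For reference, the half-reactions in an H2/O2 cell of the kind described above, with standard potentials; this is textbook electrochemistry, not data from the cited studies:

\text{anode: } \mathrm{H_2 \rightarrow 2H^+ + 2e^-} \qquad E^\circ = 0\ \mathrm{V}
\text{cathode: } \mathrm{O_2 + 4H^+ + 4e^- \rightarrow 2H_2O} \qquad E^\circ = +1.23\ \mathrm{V}
\text{overall: } \mathrm{2H_2 + O_2 \rightarrow 2H_2O} \qquad E^\circ_{\mathrm{cell}} = 1.23\ \mathrm{V}

The 950 mV reported for the watch-powering cell therefore sits, as it must, below the 1.23 V thermodynamic ceiling; losses come from overpotentials at both enzymes and from the low H2 partial pressure in the 3% mixture.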
Applications Self-powered biosensors The concept of applying enzymatic biofuel cells to self-powered biosensing was first introduced in 2001. With continued efforts, several types of self-powered enzyme-based biosensors have been demonstrated. In 2016, the first example of stretchable textile-based biofuel cells, acting as wearable self-powered sensors, was described. The smart textile device utilized a lactate oxidase-based biofuel cell, allowing real-time monitoring of lactate in sweat for on-body applications. See also Bioelectrochemical reactor Biobattery Electrochemical reduction of carbon dioxide Electromethanogenesis Microbial fuel cell References Fuel cells Bioelectrochemistry
Enzymatic biofuel cell
[ "Chemistry" ]
3,705
[ "Electrochemistry", "Bioelectrochemistry" ]
16,138,812
https://en.wikipedia.org/wiki/PFA-100
The PFA-100 (Platelet Function Assay or Platelet Function Analyser) is a platelet function analyser that aspirates blood in vitro from a blood specimen into disposable test cartridges through a microscopic aperture cut into a biologically active membrane at the end of a capillary. The membranes of the cartridges are coated with collagen and adenosine diphosphate (ADP) or collagen and epinephrine, inducing a platelet plug to form which closes the aperture. The PFA test result is dependent on platelet function, plasma von Willebrand Factor level, platelet number, and (to some extent) the hematocrit (that is, the percent composition of red blood cells in the sample). The PFA test is initially performed with the Collagen/Epinephrine membrane. A normal Col/Epi closure time (<180 seconds) excludes the presence of a significant platelet function defect. If the Col/Epi closure time is prolonged (>180 seconds), the Col/ADP test is automatically performed. If the Col/ADP result is normal (<120 seconds), aspirin-induced platelet dysfunction is most likely. Prolongation of both test results (Col/Epi >180 seconds, Col/ADP >120 seconds) may indicate the following: Anemia (hematocrit <0.28) Thrombocytopenia (platelet count < 100 × 10⁹/L) A significant platelet function defect other than aspirin Once anemia and thrombocytopenia have been excluded, further evaluation to exclude von Willebrand disease and inherited/acquired platelet dysfunction can be made. References External links Practical Haemostasis Medical testing equipment
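The stepwise interpretation above maps naturally onto a small decision routine. A minimal Python sketch for illustration only (thresholds are taken from the text; the function and field names are invented, and real laboratory interpretation involves more clinical context than this):

def interpret_pfa100(col_epi_s, col_adp_s, hematocrit, platelets_e9_per_l):
    """Rough triage of PFA-100 closure times (seconds), following the
    stepwise logic described in the article text."""
    if col_epi_s <= 180:
        return "Normal Col/Epi: significant platelet function defect excluded"
    if col_adp_s is not None and col_adp_s <= 120:
        return "Prolonged Col/Epi, normal Col/ADP: aspirin effect most likely"
    # Both closure times prolonged: exclude anemia and thrombocytopenia first
    if hematocrit < 0.28:
        return "Consider anemia as the cause"
    if platelets_e9_per_l < 100:
        return "Consider thrombocytopenia as the cause"
    return ("Evaluate for von Willebrand disease or other "
            "inherited/acquired platelet dysfunction")

print(interpret_pfa100(210, 95, 0.41, 250))  # -> aspirin effect most likely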
PFA-100
[ "Chemistry", "Biology" ]
372
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
16,139,161
https://en.wikipedia.org/wiki/Transit%20instrument
In astronomy, a transit instrument is a small telescope with an extremely precisely graduated mount used for the precise observation of star positions. They were previously widely used in astronomical observatories and naval observatories to measure star positions in order to compile nautical almanacs for use by mariners for celestial navigation, and to observe star transits to set extremely accurate clocks (astronomical regulators), which were used to set marine chronometers carried on ships to determine longitude and served as primary time standards before atomic clocks. The instruments can be divided into three groups: meridian, zenith, and universal instruments. Types Meridian instruments For observation of star transits in the exact direction of South or North: Meridian circles, Mural quadrants etc. Passage instruments (transportable, also for prime vertical transits) Zenith instruments Zenith telescope Photozenith tube (PZT) zenith cameras Danjon astrolabe, Zeiss Ni2 astrolabe, Circumzenital Universal instruments Allow transit measurements in any direction Theodolite (Describing a theodolite as a transit may refer to the ability to turn the telescope a full rotation on the horizontal axis, which provides a convenient way to reverse the direction of view, or to sight the same object with the yoke in opposite directions, which causes some instrumental errors to cancel.) Altaz telescopes with graduated eyepieces (also for satellite transits) Cinetheodolites Observation techniques and accuracy Depending on the type of instrument, the measurements are carried out visually with manual time registration (stopwatch, eye-and-ear method, chronograph), visually by impersonal micrometer (moving thread with automatic registration), by photographic registration, or by CCD or other electro-optic sensors. The accuracy ranges from 0.2″ (theodolites, small astrolabes) to 0.01″ (modern meridian circles, Danjon). Early instruments (like the mural quadrants of Tycho Brahe) had no telescope and were limited to about 0.01°. See also Astronomical transit Latitude/longitude observation, vertical deflection Positional astronomy, astro-geodesy References Further reading Karl Ramsayer: Geodätische Astronomie, Vol.2a of Handbuch der Vermessungskunde, 900 p., J.B.Metzler, Stuttgart 1969 Chauvenet's and Brünnow's handbooks of spherical astronomy External links Great Transit at Lick Observatory, +Photo Modern robotic telescopes The Carlsberg Automatic Meridian Circle Photo of a 19th-century transit instrument (Jones 1826) Transit instruments used by the Survey of India, 1867 Astrometry Geodesy
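The core of the timekeeping use case is simple: at the instant a star crosses the local meridian, the local apparent sidereal time equals the star's right ascension. A hedged Python sketch of the resulting clock correction (illustrative only; real reductions also apply refraction, instrument, and catalogue corrections):

def clock_correction_seconds(ra_hours, observed_sidereal_hours):
    """Clock error from one meridian transit: at transit, local sidereal
    time should equal the star's right ascension. A positive result means
    the observatory clock is running fast."""
    error = observed_sidereal_hours - ra_hours
    # wrap into [-12 h, +12 h) so a transit near 0 h / 24 h is handled sanely
    error = (error + 12.0) % 24.0 - 12.0
    return error * 3600.0  # sidereal hours -> sidereal seconds

# Example: star with RA 5h 30m observed to transit at 5h 30m 02s on the clock
print(clock_correction_seconds(5.5, 5.5 + 2.0 / 3600.0))  # ~ +2.0 s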
Transit instrument
[ "Astronomy", "Mathematics" ]
539
[ "Applied mathematics", "Geodesy", "Astrometry", "Astronomical sub-disciplines" ]
16,140,112
https://en.wikipedia.org/wiki/Resolution%20%28structural%20biology%29
Resolution in the context of structural biology is the ability to distinguish the presence or absence of atoms or groups of atoms in a biomolecular structure. Usually, the structure originates from methods such as X-ray crystallography, electron crystallography, or cryo-electron microscopy. Resolution is measured on the "map" of the structure produced by the experiment, into which an atomic model is then fitted. Due to their different natures and interactions with matter, in X-ray methods the map produced is of the electron density of the system (usually a crystal), whereas in electron methods the map is of the electrostatic potential of the system. In both cases, atomic positions are inferred similarly. Qualitative measures In structural biology, resolution can be broken down into 4 groups: (1) sub-atomic, when information about the electron density is obtained and quantum effects can be studied, (2) atomic, individual atoms are visible and an accurate three-dimensional model can be constructed, (3) helical, secondary structure, such as alpha helices and beta sheets; RNA helices (in ribosomes), (4) domain, no secondary structure is resolvable. X-ray crystallography As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray crystallography becomes less well-resolved (more "fuzzy") for a given number of observed reflections. Two limiting cases of X-ray crystallography are often discerned, "small-molecule" and "macromolecular" crystallography. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that their atoms can be discerned as isolated "blobs" of electron density. By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved (more "smeared out"); the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses with hundreds of thousands of atoms. Cryo-electron microscopy In cryo-electron microscopy (cryoEM), resolution is typically measured by the Fourier shell correlation (FSC), a three-dimensional extension of the Fourier ring correlation (FRC), which is also known as the spatial frequency correlation function. The FSC is a comparison of the Fourier transforms of two different constructed electrostatic potential maps, each map constructed from a random half of the original dataset. Historically, there was much disagreement on which cutoff in the FSC would provide a good estimation of resolution, but the emerging gold standard is the FSC cutoff of 0.143. This cutoff is derived from equivalencies to the X-ray crystallography standards of resolution definition. Historical measurements Many other criteria for determining resolution using the FSC curve exist, including the 3-σ criterion, 5-σ criterion, and 0.5 threshold. However, fixed-value thresholds (like 0.5, or 0.143) were argued to be based on incorrect statistical assumptions, though 0.143 has been shown to be strict enough so as to likely not overestimate resolution.
The half-bit criterion indicates at which resolution there exists enough information to reliably interpret the volume, and the (modified) 3-σ criterion indicates where the FSC systematically emerges above the expected random correlations of the background noise. In 2007, a resolution criterion independent of the FSC, Fourier Neighbor Correlation (FNC), was developed using the correlation between neighboring Fourier voxels to distinguish signal from noise. The FNC can be used to predict a less-biased FSC. See also Structural biology X-ray crystallography Cryogenic electron microscopy Image resolution Notes References External links PDB 101 Looking at Structures: Resolution EMstats Trends and distributions of maps in EM Data Bank (EMDB), e.g. resolution trends Structural resolution and electron density Learning crystallography Diffraction
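A minimal numpy sketch of the half-map FSC computation described above (illustrative only; production packages add masking, phase-randomization corrections, and calibrated frequency axes):

import numpy as np

def fourier_shell_correlation(map1, map2, n_shells=32):
    """FSC between two half-maps: normalized cross-correlation of their
    Fourier transforms, accumulated over spherical shells of spatial frequency."""
    f1 = np.fft.fftshift(np.fft.fftn(map1))
    f2 = np.fft.fftshift(np.fft.fftn(map2))
    # radial frequency index of every voxel, measured from the box centre
    grids = np.indices(map1.shape)
    center = (np.array(map1.shape) - 1) / 2.0
    radius = np.sqrt(sum((g - c) ** 2 for g, c in zip(grids, center)))
    shells = np.minimum((radius / radius.max() * n_shells).astype(int), n_shells - 1)
    fsc = np.zeros(n_shells)
    for s in range(n_shells):
        m = shells == s
        num = np.sum(f1[m] * np.conj(f2[m]))
        den = np.sqrt(np.sum(np.abs(f1[m]) ** 2) * np.sum(np.abs(f2[m]) ** 2))
        fsc[s] = (num / den).real if den > 0 else 0.0
    return fsc  # the resolution is read off where the curve drops through 0.143

# Example with synthetic half-maps (real data would come from a reconstruction)
rng = np.random.default_rng(0)
vol = rng.normal(size=(32, 32, 32))
print(fourier_shell_correlation(vol + rng.normal(size=vol.shape),
                                vol + rng.normal(size=vol.shape))[:5])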
Resolution (structural biology)
[ "Physics", "Chemistry", "Materials_science" ]
874
[ "Crystallography", "Diffraction", "Spectroscopy", "Spectrum (physical sciences)" ]
16,141,471
https://en.wikipedia.org/wiki/Gaiter%20%28vehicle%29
On a vehicle, a gaiter or boot is a protective flexible sleeve covering a moving part, intended to keep the part clean. On motorcycles and bicycles Gaiters are pleated rubber tubes enclosing the front suspension tubes of some motorcycles and mountain bikes with telescopic front forks. Gaiters protect the sliding parts of the front suspension from dirt and water. On cars and other vehicles Similar gaiters to those described above find multiple uses on most vehicles. They are used at both ends of driveshafts, protecting constant-velocity joints from the ingress of dirt, and retaining the grease. They also prevent the ingress of dirt where one component slides within another, for example, on suspension struts or the ends of steering racks. Finally, they are also usually used to perform the same function on ball joints, which appear on suspension wishbones and steering tie rod ends. The gear stick gaiter prevents dirt from entering the ball joint at the bottom of the stick and keeps oil or grease from the joint from being exposed to passengers. Gaiters are commonly leather, faux leather, rubber or a waterproof cloth. Vehicle technology
Gaiter (vehicle)
[ "Engineering" ]
234
[ "Vehicle technology", "Mechanical engineering by discipline" ]
16,141,600
https://en.wikipedia.org/wiki/Syntelog
A syntelog is a special case of gene homology in which sets of genes are derived from the same ancestral genomic region. This may arise from speciation events, or through whole or partial genome duplication events (e.g. polyploidy). The term is distinct from ortholog, paralog, in-paralog, out-paralog, and xenolog because it refers only to genes' evolutionary history as evidenced by sequence similarity and relative genomic position. Example A comparison between two genomic regions of Arabidopsis thaliana derived from its most recent genome duplication event identifies syntelogs through regions of sequence similarity at corresponding positions. Sequence analysis and visualization of syntelogs can be performed with GEvo in the CoGe platform; in this example, sequences were compared using the BlastZ algorithm. See also Synteny Homology (biology) Polyploidy Comparative genomics References External links Evolutionary biology Genetics
Syntelog
[ "Biology" ]
208
[ "Evolutionary biology", "Genetics" ]
16,142,167
https://en.wikipedia.org/wiki/History%20of%20personal%20computers
The history of the personal computer as a mass-market consumer electronic device began with the microcomputer revolution of the 1970s. A personal computer is one intended for interactive individual use, as opposed to a mainframe computer where the end user's requests are filtered through operating staff, or a time-sharing system in which one large processor is shared by many individuals. After the development of the microprocessor, individual personal computers were low enough in cost that they eventually became affordable consumer goods. Early personal computers – generally called microcomputers – were often sold in electronic kit form and in limited numbers, and were of interest mostly to hobbyists and technicians. Etymology There are several competing claims as to the origins of the term "personal computer". Yale Law School librarian Fred Shapiro notes an early published use of the phrase in a 1968 Hewlett-Packard advertisement for a programmable calculator, which they called "The new Hewlett-Packard 9100A personal computer." Other claims include computer pioneer Alan Kay's purported use of the term in a 1972 paper, Whole Earth Catalog publisher Stewart Brand's usage in a 1974 book, MITS co-founder Ed Roberts' usage in 1975, and Byte magazine's May 1976 usage of "[in] the personal computing field" in its first edition. In 1975, Creative Computing defined the personal computer as a "non-(time)shared system containing sufficient processing power and storage capabilities to satisfy the needs of an individual user." Overview The history of the personal computer as a mass-market consumer electronic device effectively began in 1977 with the introduction of microcomputers, although some mainframe and minicomputers had been applied as single-user systems much earlier. Mainframes, minicomputers, and microcomputers Computer terminals were used for time sharing access to central computers. Before the introduction of the microprocessor in the early 1970s, computers were generally large, costly systems owned by large corporations, universities, government agencies, and similar-sized institutions. End users generally did not directly interact with the machine, but instead would prepare tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be gathered up and processed in batch mode. After the job had completed, users could collect the results. In some cases, it could take hours or days between submitting a job to the computing center and receiving the output. A more interactive form of computer use developed commercially by the middle 1960s. In a time-sharing system, multiple computer terminals let many people share the use of one mainframe computer processor. This was common in business applications and in science and engineering.
A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor. In places such as Carnegie Mellon University and MIT, students with access to some of the first computers experimented with applications that would today be typical of a personal computer; for example, computer-aided design and drafting was foreshadowed by T-square, a program written in 1961, and an ancestor of today's computer games was found in Spacewar! in 1962. Some of the first computers that might be called "personal" were early minicomputers such as the LINC and PDP-8, and later on VAX and larger minicomputers from Digital Equipment Corporation (DEC), Data General, Prime Computer, and others. By today's standards, they were very large (about the size of a refrigerator) and cost-prohibitive (typically tens of thousands of US dollars). However, they were much smaller, less expensive, and generally simpler to operate than many of the mainframe computers of the time. Therefore, they were accessible for individual laboratories and research projects. Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center. In addition, minicomputers were relatively interactive and soon had their own operating systems. The minicomputer Xerox Alto (1973) was a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software. In 1945, Vannevar Bush published an essay called "As We May Think" in which he outlined a possible solution to the growing problem of information storage and retrieval. In 1968, SRI researcher Douglas Engelbart gave what was later called "The Mother of All Demos", in which he offered a preview of things that have become the staples of daily working life in the 21st century: e-mail, hypertext, word processing, video conferencing, and the mouse. The demo was the culmination of research in Engelbart's Augmentation Research Center laboratory, which concentrated on applying computer technology to facilitate creative human thought. Microprocessor and cost reduction The minicomputer ancestors of the modern personal computer used early integrated circuit (microchip) technology, which reduced size and cost, but they contained no microprocessor. This meant that they were still large and difficult to manufacture just like their mainframe predecessors. After the "computer-on-a-chip" was commercialized, the cost to manufacture a computer system dropped dramatically. The arithmetic, logic, and control functions that previously occupied several costly circuit boards were now available in one integrated circuit, making it possible to produce them in high volume. Concurrently, advances in the development of solid-state memory eliminated the bulky, costly, and power-hungry magnetic-core memory used in prior generations of computers. The single-chip microprocessor was made possible by an improvement in MOS technology, the silicon-gate MOS chip, developed in 1968 by Federico Faggin, who later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971. A few researchers at places such as SRI and Xerox PARC were working on computers that a single person could use and that could be connected by fast, versatile networks: not home computers, but personal ones.
At RCA, Joseph Weisbecker designed and built a true home computer known as FRED, but this saw mixed interest from management. The CPU design was released as the COSMAC in 1974 and several experimental machines using it were built in 1975, but RCA declined to market any of these until introducing the COSMAC ELF in 1976, in kit form. By this time a number of other machines had entered the market. After the introduction of the Intel 4004 in 1971, microprocessor costs declined rapidly. In 1974 the American electronics magazine Radio-Electronics described the Mark-8 computer kit, based on the Intel 8008 processor. In January of the following year, Popular Electronics magazine published an article describing a kit based on the Intel 8080, a somewhat more powerful and easier-to-use processor. The Altair 8800 sold remarkably well even though initial memory size was limited to a few hundred bytes and there was no software available. However, the Altair kit was much less costly than an Intel development system of the time and so was purchased by companies interested in developing microprocessor control for their own products. Expansion memory boards and peripherals were soon listed by the original manufacturer, and later by plug-compatible manufacturers. The very first Microsoft product was a 4 kilobyte paper tape BASIC interpreter, which allowed users to develop programs in a higher-level language. The alternative was to hand-assemble machine code that could be directly loaded into the microcomputer's memory using a front panel of toggle switches, push buttons and LED displays. While the hardware front panel emulated those used by early mainframe and minicomputers, after a very short time I/O through a terminal was the preferred human/machine interface, and front panels became extinct. The beginnings of the personal computer industry Simon Simon was a small electro-mechanical computer project developed by Edmund Berkeley and presented in a thirteen-article series in Radio-Electronics magazine beginning in October 1950. The Simon was in some sense a personal computer, although it was not of much practical use: the four-function ALU was only 2 bits wide, meaning that it was not capable of operating on any number greater than 3. There were far more sophisticated and practical computers available at the time (such as EDSAC) and the kit was intended only as an educational machine for hobbyists to learn about the operation and design of a digital computer. The value in Simon was that the digital principles learnt could be scaled up to the task of building a larger and more useful machine. LGP-30 The LGP-30 was an off-the-shelf vacuum-tube computer manufactured by the Librascope company of Glendale, California. The LGP-30 was first manufactured in 1956, at a retail price of $47,000. The LGP-30 was commonly referred to as a desk computer, as it was the size of a desk. It weighed about 800 pounds (360 kg). It was a binary, 31-bit word computer with a 4096-word drum memory. Standard inputs were the Flexowriter keyboard and paper tape. The standard output was the Flexowriter typewriter. Up to 493 units were produced. IBM 610 The IBM 610 was a vacuum-tube computer designed by John Lentz at the Watson Lab of Columbia University. It was announced by IBM as the 610 Auto-Point in 1957. The machine consisted of a large cabinet but could fit in a conventional office and required no specialist arrangements for air conditioning or power.
It was intended to be used by a single operator and was symbolically programmable using a keyboard. Priced at $55,000, only 180 units were produced before it was quickly displaced by the transistorized IBM 1620.
LINC
The LINC (Laboratory INstrument Computer) was an early minicomputer first produced in 1962 at MIT's Lincoln Laboratory. The machine was intended for use in biomedical research applications. The LINC consisted of a unit that could fit on a desk, with keyboard input and a monitor screen constructed from an oscilloscope, although it required a second chassis about the size of a wardrobe that contained the CPU and memory. Despite its size, the LINC had the nascent characteristics of a personal computer, being one of the first machines specifically intended to serve a single user as opposed to being a shared resource. Most machines of the day were extremely large fixed installations, but the LINC was (just barely) portable: a single person could disassemble the apparatus, fit all of it into a car, and assemble it in a reasonable time for use elsewhere without the support of a computer lab.
Olivetti Programma 101
The Programma 101, released in 1965 by the Italian company Olivetti, was one of the first printing programmable calculators. The desktop-sized device supported conditional jumps and included a 240-byte delay-line memory, which enabled real software to be written. Some of the design was based on a preceding experimental computer produced by a young Federico Faggin, who would later design the first commercial microprocessor at Intel. The Programma 101 was presented at the 1965 New York World's Fair after two years' work (1962–1964) and was a commercial success, with over 44,000 units sold worldwide; in the US its cost at launch was $3,200. It was targeted at offices and scientific institutions for their daily work because it offered capable computing in a small space at a relatively low cost; NASA was amongst its first owners. Built without integrated circuits or microprocessors, using only transistors, resistors and capacitors for its processing, the Programma 101 had features found in modern personal computers, such as memory, keyboard, printing unit, magnetic card reader/recorder, and control and arithmetic unit. Hewlett-Packard was later ordered to pay Olivetti $900,000 for patent infringement of this design in its HP 9100 series.
Datapoint 2200
Released in June 1970, the programmable terminal called the Datapoint 2200 is among the earliest known devices that bear significant resemblance to the modern personal computer, with a CRT screen, keyboard, programmability, and program storage. It was made by CTC (later known as Datapoint after the success of this machine) and was a complete system in a case with the approximate footprint of an IBM Selectric typewriter. The system's CPU was constructed from roughly a hundred (mostly TTL) logic components, packages containing groups of gates, latches, counters, and so on. The company had commissioned Intel to develop a single-chip solution with similar functionality, but in the end the chip did not meet CTC's requirements and was not used. A deal was made that, in return for not charging CTC for the development work, Intel could instead sell the processor as its own product (along with the supporting ICs it had developed). This became the Intel 8008.
Although the design of the Datapoint 2200's TTL-based bit-serial CPU and that of the Intel 8008 were technically very different, they were largely software-compatible. From a software perspective, the Datapoint 2200 therefore functioned as if it were using an 8008.
Kenbak-1
The Kenbak-1, released in early 1971, is considered by the Computer History Museum to be the world's first personal computer. It was designed and invented by John Blankenbaker of Kenbak Corporation in 1970, and was first sold in early 1971. Unlike a modern personal computer, the Kenbak-1 was built of small-scale integrated circuits and did not use a microprocessor. The system first sold for US$750. Only 44 machines were ever sold, though reportedly 50 to 52 were built. In 1973, production of the Kenbak-1 stopped as Kenbak Corporation folded. With a fixed 256 bytes of memory, input and output restricted to lights and switches (no ports or serial output), and no possible way to extend its capabilities, the Kenbak-1 was really only suited to educational use. 256 bytes of memory, an 8-bit word size, and I/O limited to switches and lights on the front panel are also characteristics of the 1975 Altair 8800, whose fate was diametrically opposed to that of the Kenbak. However, there were three major differentiating factors between the two machines, which led to the later Altair 8800 selling over 25,000 units and influencing many, while the Kenbak-1 sold only 44 and influenced almost no one.
First, the Kenbak-1, designed before the invention of the microprocessor, had a limited instruction set, described as "incompatible with microcomputer application goals" in the February 1974 issue of RCA Engineer magazine, citing the KENBAK-1 programming manual.
Second, the Kenbak-1 had no ability for expansion. There were no expansion slots, and no serial port or any other way to get data out of the machine (other than the 8 lamps on the front). There was also no way to load data into the machine other than its physical switches. There was no ability to upgrade the capacity of the RAM, and even if there had been, there would have been no way to address more than 256 bytes of RAM at once, due to limitations of the machine code language.
Third, the Kenbak-1 was not advertised outside of the educational market. It was advertised in Science magazine and in person at a local teachers' convention; there was no attempt to market the machine to the hobbyist market as later successful computers did. John Blankenbaker would later cite this as the reason his machine failed, as the educational market was "too slow" to adopt it while it could have been relevant. It is also worth noting, however, that in the educational market the Kenbak-1 was competing against timeshared access to more capable and established computers such as the PDP-8.
Had the Kenbak-1 been advertised more widely, and had the machine included at least one serial port to make it more useful, it might have done very well at its price point of $750 in 1971, which no other Turing-complete computer on the market came close to. However, it would not be very long before personal computers based on the much more capable Intel 8008 came to market, followed shortly afterwards by the ten-times-as-fast Intel 8080 in the highly expandable Altair 8800.
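The addressing ceiling mentioned above is simple arithmetic, and it also explains the expandability gap with the later 8080-based machines. A minimal Python sketch of the point (illustrative only, not code for either machine):

    # Why the Kenbak-1's 8-bit machine code caps usable memory at 256 bytes:
    ADDRESS_BITS = 8
    print(2 ** ADDRESS_BITS)   # 256 distinct addresses; extra RAM would be unreachable

    # The Intel 8080 in the Altair 8800 used 16-bit addresses:
    print(2 ** 16)             # 65536 addresses (64 KB), leaving room for expansion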
The single-chip CPU
In 1967, Italian engineer Federico Faggin joined SGS-Fairchild of Italy, where he worked on metal-oxide-semiconductor (MOS) integrated circuits, which had higher switching speeds and lower power consumption than the alternatives. He was later sent to a California division. There he developed self-aligned silicon-gate technology (from an original idea by Bell Labs), which improved the reliability of MOS transistors and assisted with the commercial viability of the process. He also developed important technologies that improved the circuit density of chips, such as the "buried contact" technique. Fairchild was not using Faggin's work, and in 1970 he moved to the recently founded Intel. There he joined a team developing the first commercially available microprocessor, the 4-bit Intel 4004.
Japanese electronics company Busicom had approached Intel in 1969 for a set of several separate chips to implement a CPU for their calculator products. They had the idea of building the calculator around a standardised general-purpose computer that they could re-program to implement different models for different markets. According to Tadashi Sasaki of Sharp, the concept for what would become the 4004 architecture came from a woman whose name he could not recall, a graduate of Japan's Nara Women's University. Intel employee Marcian Hoff realised the design would be too expensive. He suggested combining the multiple CPU chips requested by Busicom into a single chip (a technique which generally reduces product cost, because there are fixed costs associated with producing each individual device) and also reducing the complexity. This design was later improved by Stanley Mazor. Faggin brought silicon design expertise to the team and, using his latest density-improving techniques, managed to squeeze it all into a single chip. Busicom was in financial trouble, and Intel arranged a deal that enabled it to sell the CPU as a product in exchange for lowering costs to Busicom. This was marketed as the Intel 4004 in 1971 and was the first commercial single-chip microprocessor.
Coincidentally, Computer Terminal Corporation (CTC) had also approached Intel in 1969, with a request to reduce the chip count in its Datapoint terminal range, resulting in a similar single-chip CPU design. CTC ultimately declined the device, leaving Intel with the intellectual property. This resulted in the Intel 8008, released in 1972, which would eventually become the foundation of Intel's personal computer CPU range. It was followed by the Intel 8080 in 1974, the Intel 8086 in 1978 and the cost-reduced 8088 in 1979, used in the original IBM PC. The single-chip CPU went on to significantly reduce the costs (and size) of computers and place the devices within the purchasing power of individuals. In only a few years, a number of other manufacturers were producing competing single-chip CPUs, including the Motorola 6800 (1974), the Fairchild F8 (1974), the MOS Technology 6502 (1975) and the National Semiconductor SC/MP (1976). Federico Faggin would go on to co-found Zilog, which produced the Z80 in 1976. Some of these CPUs would be widely used in early personal computers and other applications.
Q1
On December 11, 1972, Q1 Corporation sold the first Q1 microcomputer, based on the Intel 8008 microprocessor. The first-generation 8008-based Q1 and Q1/c had a QWERTY keyboard, a one-line 80-character display, a built-in printer, and the capability to interface with an under-desk floppy drive.
(From this point on, the names "Q1" and "Q1/Lite" seem to be used interchangeably on the computer's enclosure and in its marketing.) The second-generation 1974 Q1/Lite ran on an Intel 8080, integrated two floppy drives into the computer's enclosure, and included an updated multi-line flat-panel plasma display. Around this time, a Q1 MicroLite was also introduced, incorporating the Lite's plasma display and printer, but only one of its two floppy drives, into an identical case. There also seems to have been a Q1 model with an enclosure, printer, and display identical to those of the second-generation Lite/MicroLite but lacking both floppy drives; and yet another, with a slightly modified case, which lacked both the integrated printer and the floppy drives entirely. (The former of these two was incorrectly labeled as a first-generation 1972 Q1 when a unit of this model was found in early 2024.) The third-generation Q1/Lite system removed the integrated printer and floppy drives, kept the plasma display, and introduced a Zilog Z80 CPU and 16 KB of memory. At some point, a Q1 "Basic Office Machine" was also introduced, bearing a resemblance to the third-generation Q1/Lite but re-integrating a printer. Several Q1s were ordered for use in various NASA bases in 1979.
Micral N
The French company R2E was formed by two former engineers of the Intertechnique company to sell their microcomputer designs based around the Intel 8008, beginning with the Micral N in 1973. The system was originally developed for the Institut national de la recherche agronomique to automate crop hygrometric measurements. The Micral N is credited as being the first commercially available microcomputer to feature a single-chip CPU. It was possible to use the Micral as a personal computer, but most were sold for use as controllers in automation applications. The Micral N ran at 500 kHz, included 16 KB of memory and sold for 8,500 francs. A bus, called Pluribus, was introduced that allowed connection of up to 14 boards; boards for digital I/O, analog I/O, memory and floppy disk were available from R2E. The Micral operating system was initially called Sysmic and was later renamed Prologue. R2E was absorbed by Groupe Bull in 1978. Although Groupe Bull continued the production of Micral computers, it was not interested in the personal computer market, and Micral computers were mostly confined to highway toll gates (where they remained in service until 1992) and similar niche markets.
Intel Intellec
The Intellec was a microcomputer released by Intel in 1973, intended as a platform to support software development for its newly released series of microprocessors from the 4004 to the 8080. The machines were not openly marketed to the public and were aimed mainly at developers. The Intellec featured a ZIF socket on the front panel for programming EPROM chips, the intent being that the EPROM chips would be used in embedded devices. The Intellec bore a resemblance to the Altair 8800, which would be released about two years later. While it could be used as a general-purpose microcomputer, this was not Intel's intent.
The Front Panel
A front panel, consisting of LEDs and toggle switches, was a distinctive feature of many 1970s personal computers that is not usually seen on modern machines. The panel could be used to enter programs into the machine, but the process was particularly laborious: a memory address was set on the toggle switches in binary, followed by the data to be stored at that memory location, also set in binary via the switches.
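The procedure can be made concrete with a small simulation. This is a hypothetical Python sketch, not the panel logic of any particular machine; the switch encoding and function names are invented for illustration:

    # Hypothetical front-panel simulator: rows of toggle switches set an address
    # and a data byte in binary; DEPOSIT stores the byte, EXAMINE reads one back.
    memory = bytearray(256)  # a small memory, entered one location at a time

    def switches(bits: str) -> int:
        """Interpret a row of toggle switches ('1' = up, '0' = down) as a number."""
        return int(bits, 2)

    def deposit(address_bits: str, data_bits: str) -> None:
        memory[switches(address_bits)] = switches(data_bits)

    def examine(address_bits: str) -> str:
        """Return the pattern the LEDs would show for the byte at this address."""
        return format(memory[switches(address_bits)], "08b")

    # An operator keying in a two-byte program, one laborious location at a time:
    deposit("00000000", "00111010")
    deposit("00000001", "11111111")
    print(examine("00000001"))   # 11111111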
The rows of LEDs could be used to display the contents of a specific memory address. In some respects the front panel was simply a continuation of a common feature supplied on minicomputers of the time, but it did have some practical uses. Keyboards, displays and terminals were very expensive peripherals, and the front panel represented a cheap way to interact with the computer that could be supplied with the machine as standard; a hobbyist could then immediately make some use of their new purchase out of the box. Additionally, the front panel was often used for bootstrapping. When switched on, a computer needs to know how to read its initial program (such as an operating system) from disk. The earliest machines did not contain the relevant software to do this and would do nothing at all when powered on; a simple program informing the machine how to read a larger program from attached storage could be entered on the front panel.
Xerox Alto and Star
The Xerox Alto, developed at Xerox PARC in 1973, was the first computer to use a mouse, the desktop metaphor, and a graphical user interface (GUI), concepts first introduced by Douglas Engelbart while at SRI International. It was the first example of what would today be recognized as a complete personal computer. The first machines were introduced on 1 March 1973. While its use was limited to the engineers at Xerox PARC, the Alto had features years ahead of its time.
In 1981, Xerox Corporation introduced the Xerox Star workstation, officially known as the "8010 Star Information System". Drawing upon its predecessor, the Xerox Alto, it was the first commercial system to incorporate various technologies that have today become commonplace in personal computers, including a bit-mapped display, a windows-based graphical user interface, icons, folders, mouse, Ethernet networking, file servers, print servers and e-mail. Both the Xerox Alto and the Xerox Star would inspire the Apple Lisa and the Apple Macintosh.
IBM SCAMP
In 1972–1973, a team led by Dr. Paul Friedl at the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable), based on the IBM PALM processor with a Philips compact cassette drive, small CRT and full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL\1130. In 1973 APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because it was the first to emulate APL\1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". The prototype is in the Smithsonian Institution.
CP/M
Into the late 1970s, an operating system (O/S) was an optional extra for personal computers. It was common to run software directly on the machine with no O/S loaded at all: an application would load from disk or tape, and when a different application was required, the machine would be reset and a different disk inserted. Games software in particular would operate in this way, to reclaim the memory that the O/S used. Disk drives were relatively complicated to control, and for cross-compatibility between business software it was helpful for files to always be written in the same way by each program. The user would also need a suite of tools to copy, rename and move files between disks.
A large number of entirely different personal computers would emerge with incompatible hardware, and it was helpful to have a unified platform where software could be written once and would run across all of them. As the cost of memory fell, demand would also increase to have more than one program loaded simultaneously. An operating system would help solve these problems.
Operating systems were common on mainframes and minicomputers, but in the nascent personal computer space little was available other than a monitor. While this term now most commonly refers to display hardware, it referred at the time to a small program, specific to each computer, that was capable of starting other software, some debugging, and perhaps saving memory contents at specified locations to a storage device.
Gary Kildall developed CP/M in 1974 as an O/S for the Intel Intellec and established his company, Digital Research, in the same year. CP/M originally stood for "Control Program/Monitor" but later became "Control Program for Microcomputers". It was first licensed to a small manufacturer called Gnat Computers in 1977. By that year the number of computer vendors was increasing and demand for a standardised operating system grew. CP/M 2.0 was developed in 1978. By 1981, over 250,000 copies had been sold and the O/S had a large library of compatible software written for it.
IBM 5100
The IBM 5100 was a desktop computer introduced in September 1975, six years before the IBM PC. It was the evolution of SCAMP (Special Computer APL Machine Portable), which IBM demonstrated in 1973. In January 1978 IBM announced the IBM 5110, its larger cousin. The 5100 was withdrawn in March 1982. When the PC was introduced in 1981, it was originally designated the IBM 5150, putting it in the "5100" series, though its architecture was not directly descended from the IBM 5100.
Kit Computers
Development of the single-chip microprocessor was the gateway to the popularization of cheap, easy-to-use, and truly personal computers. In the mid-to-late 1970s, interest was gaining momentum amongst hobbyists around the idea that it was now becoming economically feasible for an individual to own a personal computer. To serve this demand, a number of publications and organisations began producing computer designs that hobbyists could construct. The designs were sometimes available assembled but were less commonly finished products, ranging from circuit diagrams supplied purely on paper, through provision of a PCB with or without a selection of parts, to partially completed boards with some components soldered.
Through kits, personal computers were now in theory easily available to the general public, but in practice considerable expertise was required to construct and use these products, which restricted uptake to the enthusiast market. Assembling a kit computer necessitated soldering skills, the ability to identify electronic components, access to test equipment and fault-finding knowledge. The kit format, though, enabled small organisations with no manufacturing facility or experience to release a machine with little capital required and low commercial risk.
A precursor to the kit computer appeared in the form of the TV Typewriter, which was published in Radio-Electronics magazine in 1973. The device was not a computer, but it demonstrated how many techniques that would shortly become standard features of personal computers could be implemented at an affordable price.
The device showed how characters could be typed on a keyboard, rendered on a domestic television set and edited interactively. Expensive commercial terminals could do this, but the TV Typewriter held the promise of considerably lower cost and was an instant hit with electronics enthusiasts. Several thousand copies of the plans were sold.
A further evolution of kit computers came in the form of educational machines such as the MOS KIM-1, released in 1976 and later rebranded as a Commodore product. These were not kits intended to be general-purpose personal computers, but would instead demonstrate the basics of computer programming and hardware to enthusiasts, hobbyists and commercial users. Educational kits included limited or no facilities for the connection of peripherals, but might instead include LED displays and calculator-style keyboards for interaction with the machine.
Kits then moved on to more capable designs. Dozens of kit computer designs were produced, including the Mark-8 (1974) published in Radio-Electronics magazine, the Altair 8800 (1975), the SWTPC 6800 (1975), the COSMAC ELF (1976) in Popular Electronics magazine, the Newbear 77-68 (1977) and the Transam Triton (1978) from Electronics Today International magazine. The first product of Apple, the Apple I (1976), was partially a kit computer, requiring some additional components to be supplied, although the main board was available assembled. By the end of the 1970s more machines were becoming available as finished products. The Sinclair ZX80 (1980) was available in both formats, as either an assembled product or an electronics kit at a lower price. By the early 1980s many choices of professionally manufactured personal computer were available at affordable prices, and kit machines largely disappeared.
Altair 8800
It was only a matter of time before one personal computer design was able to hit a sweet spot in terms of pricing and performance, and that machine is generally considered to be the Altair 8800, from MITS, a small company that produced electronics kits for hobbyists. The Altair 8800 was introduced in a Popular Electronics magazine article in the January 1975 issue (published in late 1974). In keeping with MITS' earlier projects, the Altair was sold in kit form, although a relatively complex one consisting of four circuit boards and many parts. Priced at only $400, the Altair tapped into pent-up demand and surprised its creators when it generated thousands of orders in the first month. Unable to keep up with demand, MITS sold the design after about 10,000 kits had shipped.
The introduction of the Altair spawned an entire industry based on its basic layout and internal design. New companies like Cromemco started up to supply add-on kits, and soon after, a number of complete "clone" designs, typified by the IMSAI 8080, appeared on the market. This led to a wide variety of systems based on the S-100 bus introduced with the Altair, machines of generally improved performance, quality and ease of use.
The Altair was relatively difficult to use. No peripherals were supplied with the machine; it did not include a keyboard or display, or even any circuitry that might control such devices. One possible mode of operation was via a teletype terminal with the addition of a serial interface card. The Teletype Corporation model ASR 33 was a popular choice, being commonly used with minicomputers of the era, but it was both expensive and difficult for an individual to obtain.
The teletype cost several times the price of the Altair, and the manufacturer was not used to retail sales, so the teletypes often had to be acquired through secondary markets. The Altair contained no operating system or other software in ROM, so starting it up required a machine language program to be entered by hand via the front-panel switches, one location at a time. The program was typically a small driver for an attached cassette tape reader, which would then be used to read in a larger program. Later systems added bootstrapping code to improve this process and would run the CP/M operating system loaded from floppy disk.
Homebrew Computer Club
The Homebrew Computer Club was formed by Gordon French to gather together hobbyists interested in computing so that they could trade information. The first meeting was in March 1975 in Menlo Park, California, and included a demonstration of the Altair 8800. The club was relatively influential in the development of early personal computers, with several attendees subsequently having an impact on the industry. Noted members included Steve Wozniak, who attended the first meeting and later demonstrated the Apple I at the club, and Ron Nicholson, who was one of the designers of the Amiga. Lee Felsenstein moderated club meetings and would design the Sol-20 and the early portable Osborne 1 computer, released by another club member, Adam Osborne. Attendee Jerry Lawson would design the Fairchild Channel F game console.
Sol-20
The Altair 8800 was not an easy machine to use and did not ship as standard with peripherals or interfaces to enable interactive use as would be expected from a personal computer. The Sol-20 computer (released in 1976) corrected many of these deficiencies and assembled the required parts into a finished unit. The machine placed an entire S-100 system, including QWERTY keyboard, CPU, display card, memory and ports, into a convenient single box. The systems were packaged with a cassette tape interface for storage and a 12" monochrome monitor. Complete with a copy of BASIC, the system was priced at US$2,100, and as many as 12,000 were sold.
BASIC and Microsoft
The BASIC programming language had been created in 1963 at Dartmouth College as a language intended for students not in scientific fields of study, in order to widen the appeal of computers. Previous languages were often quite difficult to learn, and only those with a particular interest in computer science would do so. BASIC was well suited to minicomputers due to its very low memory requirements and became a widely known language. Its ease of use, combined with low memory demands and widespread adoption, made BASIC an attractive language for the microcomputers that would follow.
Paul Allen heard about the upcoming MITS Altair 8800 microcomputer kit in late 1974 from a magazine and showed it to his friend Bill Gates. The duo had experience with computers and had previously formed an enterprise that built a computer for processing city traffic data. They recognised the relevance that the BASIC language might have and boldly offered to demonstrate a BASIC for the Altair to MITS, despite having neither a BASIC nor access to an Altair. Allen developed an Altair emulator for a minicomputer, and with the assistance of their friend Monte Davidoff they wrote a BASIC interpreter, delivered on punched tape, which had 25 commands and fit in 4 KB of memory. Allen flew to a meeting with MITS and, remarkably, the interpreter worked despite never having been tested on a real Altair.
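To give a sense of what even a minimal interpreter involves, here is a toy line-numbered BASIC in Python, supporting only LET, PRINT, IF...THEN, GOTO and END. It is an illustration of the general shape of the problem, far simpler than the 25-command Altair BASIC, and it cheats by delegating expression evaluation to Python:

    # Toy line-numbered BASIC: a program store keyed by line number, a variable
    # table, and a dispatch loop over statements in line-number order.
    def run(source: str) -> None:
        program = {}
        for line in source.strip().splitlines():
            number, statement = line.split(maxsplit=1)
            program[int(number)] = statement
        order = sorted(program)          # execution proceeds in line-number order
        variables = {}
        i = 0
        while i < len(order):
            keyword, _, rest = program[order[i]].partition(" ")
            if keyword == "LET":                         # LET X = expression
                name, _, expr = rest.partition("=")
                variables[name.strip()] = eval(expr, {}, variables)
                i += 1
            elif keyword == "PRINT":                     # PRINT expression
                print(eval(rest, {}, variables))
                i += 1
            elif keyword == "IF":                        # IF expression THEN line
                expr, _, target = rest.partition("THEN")
                i = order.index(int(target)) if eval(expr, {}, variables) else i + 1
            elif keyword == "GOTO":                      # GOTO line
                i = order.index(int(rest))
            elif keyword == "END":
                break

    run("""
    10 LET X = 1
    20 PRINT X
    30 LET X = X + 1
    40 IF X > 5 THEN 60
    50 GOTO 20
    60 END
    """)                                                 # prints 1 through 5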
MITS then agreed to distribute BASIC, and Microsoft would produce three different versions, adding an edition that required 8K of memory and an "expanded" edition with more features. Microsoft was co-founded by Allen and Gates in 1975 to sell BASIC products to the personal computer market. New versions of Microsoft BASIC were produced with greater sophistication, and BASIC was ported to several CPUs and architectures. Microsoft BASIC was widely used in many machines of the 1970s and 1980s, including the Apple II and Commodore 64, although it was sometimes branded under a different name. In many home computers BASIC was supplied on a ROM chip, and it was commonplace for machines to start up in BASIC as standard, making it the first program the user saw on switching the machine on.
While it was a popular implementation, Microsoft BASIC was not the only variation in use. The free Tiny BASIC was designed in 1975 as a direct response to Microsoft, and many more would follow. Several computer manufacturers chose to supply their own BASIC, with varying degrees of compatibility between them. BASIC software listings were commonly supplied with the changes required for different microcomputer versions; in particular, methods of producing graphics and sound were rarely standard across BASIC implementations.
Other machines of the era
Other 1977 machines that were important within the hobbyist community at the time included the Exidy Sorcerer, the NorthStar Horizon, the Cromemco Z-2, and the Heathkit H8.
Video game consoles and the personal computer
Programmable video game consoles emerged around the same era as personal computers started to become widely available. Game consoles had been on the market previously (such as the Magnavox Odyssey, released in 1972, and Home Pong in 1975), but usually as custom devices that could only play a few games hardwired into the electronics. Increased levels of integration placed computer component prices within the affordable range for home entertainment devices, and game consoles gained new capabilities. The Fairchild Channel F, which was CPU-based and used interchangeable ROM cartridges, was released in 1976, followed by the more widely sold Atari 2600 in 1977.
Game consoles differ from personal computers in that they are not usually intended to be user-programmable, do not include a keyboard input device (at least as standard) and are mainly intended for a single class of application rather than general-purpose software. However, in other respects the internal architecture of the machines is not usually dissimilar, and game consoles can almost always, in theory, be used as a general-purpose personal computer with the addition of relevant software and peripherals. In fact a few later game consoles were essentially identical to released models of personal computers, just presented in a different format: the 1990 Commodore C64GS was similar to a Commodore 64, and the 1993 Amiga CD32 was similar to an Amiga 1200.
Peripherals have been made available for several game consoles to convert the devices into a full personal computer. Even as far back as the Atari 2600, an adapter called the Atari Graduate was completed in 1983 that expanded the machine with a keyboard and I/O capabilities for use with storage devices. The Graduate was never sold, but a similar 1983 adapter from a third party, called the CompuMate, was released.
An expansion module for the ColecoVision called Adam was released in 1983 that included a keyboard and storage capabilities (it was also available as a dedicated personal computer). A product called Family Basic was released for the Nintendo NES in 1984 with a similar aim. The concept of converting consoles into personal computers has not disappeared in more recent times: the 2006 Sony PlayStation 3 was originally intended to have the option of being converted into a general-purpose personal computer using the OtherOS feature, although the capability was controversially removed in later models. Hardware hackers have routinely prided themselves on demonstrating that dedicated game consoles can be persuaded to run personal computer operating systems such as Linux, in combination with general-purpose software, despite not being intended to do so. The line between dedicated game consoles and personal computers has also blurred as the devices have gained subsets of popular personal computer features such as video playback and internet browsing.
Cassette tape storage
The floppy disk drive was available through most of the history of the widely available personal computer, and some form of floppy disk interface could usually be found for most machines. However, the peripherals were initially expensive, often costing as much as or significantly more than the computer itself. The cost of disk drives remained prohibitive to many potential buyers, particularly individual customers in the home computer market, and without a storage device a computer was of little use. Audio cassette tape filled the void for computer users on a budget, whilst providing an improvement in storage density and convenience over the punched paper tape that had been used previously. Even the first IBM PC model, aimed at the relatively well-financed business market, originally included a cassette interface out of concern that purchasers might be dissuaded by the price of a disk drive.
Audio cassette recorders were already found in the majority of homes and could be simply interfaced to a variety of home computers; many low-end machines featured audio jacks for a cassette recorder. Some manufacturers chose to provide their own branded cassette unit (or provision a built-in device), but these were little different from any other audio tape recorder, other than being pre-configured and sometimes transferring part of the necessary electronic interface from the computer into the cassette unit.
Loading software from audio tape was a slow process. Speeds of 300–1200 baud were typical, and filling a tiny computer memory of only a few dozen kilobytes was a process lasting many minutes. Computer tape protocols often lacked the sophistication of forward error correction, and loading errors were commonplace, resulting in the already frustratingly slow process needing to be repeated. Audio tape is also not random access: when storing multiple files on a cassette, the user would need to manually fast-forward the tape to the relevant location of the file, to avoid waiting for the device to read the entire length of the cassette (which might be 30 minutes or more). To assist in the process, a tape counter was provided so that the user could note down the position of a file on the tape. To make use of audio tape reliably, data was stored within the range of conventional audio bandwidth, which meant the same signal could be stored on any other audio storage medium.
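As a concrete example of such an encoding, the Kansas City standard of 1975 recorded a '0' bit as four cycles of a 1200 Hz tone and a '1' bit as eight cycles of 2400 Hz, giving 300 baud entirely within ordinary audio bandwidth. A minimal Python sketch of the idea (illustrative; real systems added leader tones and checksums):

    import math

    # Sketch of Kansas City-style FSK encoding: each bit becomes a short burst
    # of audible tone, so the data survives on anything that stores ordinary sound.
    SAMPLE_RATE = 44100            # a modern sample rate, for clarity
    BAUD = 300                     # 300 bits per second
    ZERO_HZ, ONE_HZ = 1200, 2400   # '0' = 4 cycles of 1200 Hz, '1' = 8 cycles of 2400 Hz

    def encode_bits(bits: str) -> list[float]:
        per_bit = SAMPLE_RATE // BAUD        # samples in one bit period (1/300 s)
        samples = []
        for bit in bits:
            freq = ONE_HZ if bit == "1" else ZERO_HZ
            samples.extend(math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
                           for n in range(per_bit))
        return samples

    def encode_byte(value: int) -> list[float]:
        # Asynchronous framing: a start bit (0), 8 data bits LSB first, 2 stop bits (1).
        return encode_bits("0" + format(value, "08b")[::-1] + "11")

    audio = encode_byte(ord("A"))
    print(len(audio) / SAMPLE_RATE)   # ~0.037 s per byte, i.e. roughly 27 bytes/s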
Audio tape data was occasionally found as a novelty on vinyl records and audio CDs (as audio, not as digital data, and unrelated to CD-ROM). In modern times, retro enthusiasts replay audio data into vintage systems using more reliable digital audio players. By the end of the 1980s the cost of a disk drive had fallen within the budget of even the most cost-conscious buyer, and computer memories had grown beyond the practical capacity of audio cassettes; tape use consequently declined as a primary storage medium. Tape technology continued to advance, though, and high-density digital tape units remain in use in specialist data backup applications, but less usually with personal computers.
1977 and the emergence of the "Trinity"
By 1976, several firms were racing to introduce the first truly successful commercial personal computers. Three machines, the Apple II, PET 2001 and TRS-80, were all released in 1977, becoming the most popular by late 1978. Byte magazine later referred to Commodore, Apple, and Tandy as the "1977 Trinity".
Apple II
Steve Wozniak (known as "Woz"), a regular visitor to Homebrew Computer Club meetings, designed the single-board Apple I computer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$500 each from the Byte Shop, Woz, Steve Jobs and associate Ronald Wayne founded Apple Computer. About 200 of the machines sold before the company announced the Apple II as a complete computer. The Apple II had color graphics, a full QWERTY keyboard, and internal slots for expansion, all mounted in a high-quality, streamlined plastic case. The monitor and I/O devices were sold separately. The original Apple II operating system was only the built-in BASIC interpreter contained in ROM; Apple DOS was added to support the diskette drive, the last version being "Apple DOS 3.3".
The Apple II's high price and lack of floating-point BASIC, along with limited retail distribution, caused it to lag in sales behind the other Trinity machines. However, in 1979 it surpassed the Commodore PET, receiving a sales boost attributed to the release of the extremely popular VisiCalc spreadsheet, which was initially exclusive to the platform. It was pushed into fourth place again when Atari, Inc. introduced the Atari 400 and Atari 800 computers. Despite slow initial sales, the Apple II's lifetime was about eight years longer than that of the other machines, and so it accumulated the highest total sales: by 1985, 2.1 million had been sold, and more than 4 million Apple IIs were shipped by the end of its production in 1993.
PET
Chuck Peddle designed the Commodore PET (short for Personal Electronic Transactor) around his MOS 6502 processor. It was essentially a single-board computer with a simple TTL-based CRT driver circuit driving a small built-in monochrome monitor with 40×25 character graphics. The processor card, keyboard, monitor and cassette drive were all mounted in a single metal case. In 1982, Byte referred to the PET design as "the world's first personal computer". The PET shipped in two models: the 2001-4 with 4 KB of RAM and the 2001-8 with 8 KB. The machine also included a built-in Datassette for data storage, located on the front of the case, which left little room for the keyboard. The 2001 was announced in June 1977 and the first 100 units were shipped in mid-October 1977. However, the machines remained back-ordered for months, and to ease deliveries Commodore eventually cancelled the 4 KB version early the next year.
Although the machine was fairly successful, there were frequent complaints about the tiny calculator-like keyboard, often referred to as a "chiclet keyboard" due to the keys' resemblance to the popular gum candy. This was addressed in the upgraded "dash N" and "dash B" versions of the 2001, which put the cassette outside the case and included a much larger keyboard with a full-stroke, non-click motion. Internally, a newer and simpler motherboard was used, along with an upgrade in memory to 8, 16, or 32 KB, known as the 2001-N-8, 2001-N-16 or 2001-N-32, respectively. The PET line ended earlier than the other 1977 Trinity machines because Commodore had moved on to newer product lines, which culminated in the first computer model ever to sell 1 million units, the VIC-20, and the best-selling single model of computer of all time, the Commodore 64.
TRS-80
Tandy Corporation (Radio Shack) introduced the TRS-80, retroactively known as the Model I as the company expanded the line with more powerful models. The Model I combined the motherboard and keyboard into one unit, with a separate black-and-white monitor and power supply. Tandy's 3,000-plus Radio Shack storefronts ensured the computer would have widespread distribution and support (repair, upgrade and training services) that neither Apple nor Commodore could touch.
The Model I used a Zilog Z80 processor clocked at 1.77 MHz (later units shipped with the Z80A). The basic model originally shipped with 4 KB of RAM and Level 1 BASIC produced in-house. The first 4 KB machines were upgradeable to 16 KB of RAM and Level 2 Microsoft BASIC, which became the standard basic configuration. An Expansion Interface provided sockets for further RAM expansion to 48 KB. Its other strong features were its full-stroke QWERTY keyboard with numeric keypad (lacking in the very first units but upgradeable), small size, well-written Microsoft floating-point BASIC and the inclusion of a 64-column monitor and tape deck, all for approximately half the cost of the Apple II.
Eventually, 5.25-inch floppy drives and megabyte-capacity hard disks were made available by Tandy and third parties. The Expansion Interface provided for up to four floppy drives and hard drives to be daisy-chained, a slot for an RS-232 serial port, and a parallel port for printers. With the (later) LDOS operating system, double-sided 80-track floppy drives were supported, along with features such as Disk Basic with support for overlays and suspended/background programs, device-independent data redirection, Job Control Language (batch processing), flexible backup and file maintenance, typeahead and keyboard macros.
The Model I could not meet FCC regulations on radio interference due to its plastic case and exterior cables. Apple resolved the issue with an interior metallic foil, but the solution would not work for Tandy with the Model I. The Model I also suffered from problems with the cabling between its CPU and Expansion Interface (spontaneous reboots) and from keyboard bounce (keystrokes would randomly repeat), and the earliest versions of TRSDOS similarly had technical troubles. Though these issues were quickly or eventually resolved, the computer suffered in some quarters from a reputation for poor build quality; nevertheless, all the early microcomputer manufacturers experienced similar difficulties. Since the Model II and Model III were already in production by 1981, Tandy decided to stop manufacturing the Model I. Radio Shack sold some 1.5 million Model I's.
The line continued until late 1991, when the TRS-80 Model 4 was at last retired.
The Japanese Trinity
Similarly to the American trinity, Japan has a term for its own most important machines of that era: "the eight-bit gosanke" (8ビット御三家, hachi-bitto gosanke). It consists of the Hitachi Basic Master (1978–09), the Sharp MZ-80K (1978–12) and the NEC PC-8001 (announced 1979–05, shipped 1979–09). Each of these was the first of a series of machines from its manufacturer; NEC and Sharp continued these 8-bit lines into the late 1980s, but Hitachi ended its series in 1984 as it was replaced in the gosanke by Fujitsu (see below).
VisiCalc and the killer app
Through the 1970s, personal computers had proven popular with electronics enthusiasts and hobbyists; however, it was unclear why the general public might want to own one. This perception changed in 1979 with the release of VisiCalc from VisiCorp (originally Personal Software), the first spreadsheet application. Spreadsheets were a common business tool prior to the personal computer but up until this time were created by hand: updating a cell necessitated the manual re-calculation of all of the referencing cells. Dan Bricklin was watching a lecture at Harvard Business School where a spreadsheet was being tediously redrawn by hand and realised that the process could be automated with a computer. VisiCalc proved exceptionally popular with the business community, and ultimately 700,000 copies were sold.
VisiCalc was initially released for the Apple II and was credited as a key reason for the ultimate commercial success of the machine and, by extension, of Apple itself, as the software was exclusive to the platform for its first 12 months, giving the hardware a sales lead over competitors. VisiCalc was retroactively described as the first "killer app", as it was in itself so useful to business that it justified the purchase of personal computer hardware regardless of all other applications.
In an anecdote recounted by Chuck Peddle, VisiCalc could easily have been released for the Commodore PET first instead of the Apple II. VisiCorp owned four Commodore PET machines but had decided to try the Apple II market and purchased a single machine. In a twist of fate that may have changed the course of personal computer history, at the exact moment Dan Bricklin arrived there was no PET available for use, and so one of the founders of VisiCorp, Dan Fylstra, suggested that he try the Apple machine, as it had a similar BASIC.
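The automatic re-calculation that Bricklin envisaged is easy to sketch. The following Python fragment is a minimal illustration of the idea of cells that reference other cells, not a description of VisiCalc's actual implementation:

    # Minimal spreadsheet sketch: cells hold either numbers or formulas over
    # other cells; changing one number re-evaluates everything that refers to
    # it -- the manual drudgery Bricklin watched being done by hand.
    cells = {
        "A1": 120,
        "A2": 80,
        "A3": lambda get: get("A1") + get("A2"),   # a sum formula
        "B1": lambda get: get("A3") * 2,           # a formula referencing a formula
    }

    def value(name):
        cell = cells[name]
        return cell(value) if callable(cell) else cell  # recursion follows references

    print(value("B1"))    # 400
    cells["A2"] = 100     # change one number...
    print(value("B1"))    # 440 -- every referencing cell reflects the update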
The early 1980s and home computers
Byte in January 1980 announced in an editorial that "the era of off-the-shelf personal computers has arrived". The magazine stated that "a desirable contemporary personal computer has 64 K of memory, about 500 K bytes of mass storage on line, any old competently designed computer architecture, upper and lowercase video terminal, printer, and high-level languages". The author reported that when he needed to purchase such a computer quickly he did so at a local store for $6000 in cash, and cited it as an example of "what the state of the art is at present ... as a mass-produced product". By early that year Radio Shack, Commodore, and Apple manufactured the vast majority of the half-million microcomputers then in existence. As component prices continued to fall, many companies entered the computer business. This led to an explosion of low-cost machines known as home computers that sold millions of units before the market imploded in a price war in the early 1980s.
Atari 8-bit computers
Atari, Inc. was a well-known brand in the late 1970s, both due to hit arcade video games like Pong and to the hugely successful Atari Video Computer System game console. Realizing that the VCS would have a limited lifetime in the market before a technically advanced competitor came along, Atari decided they would be that competitor, and started work on a new console design that was much more advanced. While these designs were being developed, the Trinity machines hit the market with considerable fanfare, and Atari's management decided to redirect the work towards a home computer system instead. Their knowledge of the home market through the VCS resulted in machines that were almost indestructible and just as easy to use as a games machine: simply plug in a cartridge and go.
The new machines were first introduced as the Atari 400 and 800 in 1979, but production problems prevented widespread sales until the following year. With a trio of custom graphics and sound co-processors and a 6502 CPU clocked about 80% faster than in most competitors, the Atari machines had capabilities that no other microcomputer could match. In spite of a promising start, with about 600,000 sold by 1981, they were unable to compete effectively with Commodore's introduction of the Commodore 64 in 1982, and only about 2 million machines were produced by the end of their production run. The 400 and 800 were tweaked into superficially improved models (the 1200XL, 600XL, 800XL and 65XE), as well as the 130XE with 128 KB of bank-switched RAM.
Sinclair
Sinclair Research Ltd was a British consumer electronics company founded by Sir Clive Sinclair in Cambridge. Clive Sinclair had originally founded Sinclair Radionics but by 1976 was beginning to lose control over the company, and he started a new independent venture to pursue projects under his own direction. The new company was originally staffed by Chris Curry (later to co-found Acorn), who had interested Clive Sinclair in the computer market. Following the commercial success of a 1977 kit computer aimed at electronics enthusiasts, the MK14, Sinclair Research (then trading as Science of Cambridge) entered the home computer market in 1980 with the ZX80, at £99.95 then the cheapest personal computer for sale in the UK. This was succeeded by the better-known ZX81 in the following year (sold as the Timex Sinclair 1000 in the United States). The ZX81 was one of the first computers in the UK to be aimed at the general public and was offered for sale via major high-street retail channels. It became a significant success, selling 1.5 million units.
In 1982 the ZX Spectrum was released, later becoming Britain's best-selling computer, competing aggressively against Commodore and Amstrad. It was followed by enhanced models in the form of the ZX Spectrum+ and the 128. The ZX Spectrum series sold more than 5 million units and was widely used as a home gaming platform, with more than 3,500 games titles eventually released. The ZX Spectrum was extensively cloned in many countries, some machines with the involvement of Sinclair Research but large numbers with no official approval at all. The clones were particularly popular in Eastern Bloc countries, which at the time had only limited access to Western markets, with dozens of Spectrum variants being produced.
The Sinclair QL was released in 1984, aimed at the serious home user and professional markets.
It was a departure from the architecture of the ZX Spectrum series, using a Motorola 68008 processor, and was not compatible with its predecessor. The QL was particularly distinctive due to an unusual choice of storage device: it shipped with two integrated Microdrives, a continuous-loop tape system intended to offer performance somewhere between disk and cassette at a lower price than a disk drive. The QL was not a commercial success, in part due to issues with reliability, but also due to the ascent in business of the IBM PC, with which it was not compatible. Production had been discontinued by 1985. The same technology was subsequently re-used in the ICL One Per Desk integrated telephone/computer device. The QL achieved some note due to Linus Torvalds crediting it as the machine on which he became practiced at programming prior to developing Linux, in part because its limited software support encouraged him to write his own.
The combination of the market failure of the Sinclair QL and a portable TV product called the TV80 led Sinclair Research into financial difficulties in 1985, and a year later Sinclair sold the rights to its computer products to Amstrad. Four further models in the Spectrum range would be released by Amstrad using the Sinclair brand name; the ZX Spectrum +2 included an integrated tape recorder and the +3 model incorporated a 3-inch CF2 disk drive. Production continued under Amstrad until 1992. Miles Gordon Technology released a successor to the Spectrum known as the SAM Coupé in 1989, but this was not a commercial success.
Sinclair Radionics also independently designed a personal computer, unrelated to Sinclair Research but with the involvement of Clive Sinclair. Following the sale of the assets of Sinclair Radionics, this design would eventually see production in 1982 as the Grundy NewBrain.
TI-99/4A
Texas Instruments, at the time the world's largest chip manufacturer, decided to enter the home computer market with the TI-99/4. The first home computer designed around a 16-bit microprocessor, its specifications on paper were far ahead of the competition, and Texas Instruments had enormous cash reserves and development capability. When it was released in late 1979, Texas Instruments initially focused on schools. Despite the 16-bit processor and a custom video processor with sprite support, architectural restrictions prevented the machine from living up to expectations. It was updated to the TI-99/4A in 1981. A total of 2.8 million units were shipped between the two models, many at bargain-basement prices resulting from a price war with Commodore in 1982–83, before the TI-99/4A was discontinued in March 1984.
VIC-20 and Commodore 64
Realizing that the monochrome PET could not easily compete with color machines like the Apple II and Atari 8-bit computers, Commodore introduced the color VIC-20 in 1980 to address the home market. The machine offered only 5 KB of memory, which was small even for the time; however, the use of more expensive SRAM reduced the complexity of the design compared to cheaper DRAM, which was more difficult to use. The distribution of games on ROM cartridge helped offset the small memory, and RAM expansion cartridges were also released. The machine was sold at a competitive price and achieved strong sales, becoming the first personal computer to sell 1 million units and ultimately selling at least 2.5 million.
Commodore addressed the memory limitations with the 1982 release of the Commodore 64, the name advertising its 64 KB of memory, which matched and in many cases exceeded that of contemporary home computers. The machine was based around the MOS 6510 and was particularly praised for its relatively advanced audio chip (the MOS 6581, or "SID") and competent graphics performance in a low-cost package, which made it an ideal platform for games releases; at least 5,600 titles were ultimately released. Commodore's ownership at the time of the chip manufacturer MOS Technology was a factor in enabling the C64 to undercut the competition on price, as Commodore was able to act as its own supplier of several key components. The machine was estimated by the Guinness Book of World Records to be the best-selling desktop personal computer model of all time, with somewhere between 12.5 million and 17 million units sold, a record still standing in 2023.
BBC Micro
The BBC became interested in running a series of educational computer-literacy TV programmes in the early 1980s and issued an invitation to tender for a personal computer to accompany the project. After examining several entrants, the BBC selected what was then known as the Proton from Acorn Computers. A number of minor changes were made, resulting in the BBC Micro (released 1981). The machine was MOS 6502-based and was originally sold in a choice of the Model A with 16 KB of memory or the more popular Model B with 32 KB. The machine achieved widespread success in the UK education sector, and the system sold more than 1.5 million units. A cost-reduced version called the Electron was released in 1983, and several other models would appear in the range, including the BBC Master in 1986.
The BBC Micro included a number of innovative features less commonly seen on other contemporary home computers, including local area networking in the form of Econet, which was widely deployed in education. A Teletext chip was standard and, when combined with an adapter, enabled software (known as Telesoftware) and other information to be downloaded from broadcast television signals. The BBC Micro was also architected to be multi-processor as standard: additional CPUs could be connected via the included "Tube" interface, which was designed for this purpose, and adapters for the Z80 and the 32-bit NS32016 CPUs were released for the machine. Acorn subsequently used the Tube interface to develop the ARM processor (which then stood for Acorn RISC Machine) to power future projects, and released an ARM development system for the BBC Micro. The CPU saw its first commercial use in the Acorn Archimedes; ARM CPUs are now deployed in the majority of smartphones and tablet computers, amongst a vast collection of other devices.
Commodore price war and crash
In 1982, the TI-99/4A and Atari 400 were both $349, Radio Shack's Color Computer sold at $379, and Commodore had reduced the price of the VIC-20 to $199 and the Commodore 64 to $499 shortly after the C64's release. In the early 1970s, Texas Instruments had forced Commodore from the calculator market by dropping the price of its own-brand calculators to less than the cost of the chipsets it sold to third parties to make the same design. Commodore's CEO, Jack Tramiel, vowed that this would not happen again, and purchased MOS Technology in 1976 to ensure a supply of chips.
With his supply guaranteed, and good control over component pricing, Tramiel launched a war against Texas Instruments soon after the introduction of the Commodore 64. By 1983 the VIC-20 could be purchased for as little as $90. With a decreasing price and strong production, the VIC-20 continued to dominate Commodore sales through 1983 as interest in the C64 built. Commodore lowered the retail price of the C64 to $300 at the June 1983 Consumer Electronics Show, and stores sold it for as little as $199. At one point the company was selling as many computers as the rest of the industry combined. Commodore, which even discontinued list prices, could make a profit when selling the C64 for a retail price of $200 because of vertical integration, the practice of a company owning many of its suppliers. In particular, the ownership of MOS Technology put Commodore in a dominating position, as it was also a chip supplier to some of its competitors, including Atari and Apple. Commodore's sharp and sudden price drops did not always please retailers, who were left holding stock on which they would make a loss, and Commodore was sometimes forced into compensating them.
Competitors also reduced prices in response to Commodore. The Atari 800's price in July was $165, and by the time Texas Instruments was ready in 1983 to introduce the 99/2 computer (designed to sell for $99), the TI-99/4A was itself selling for $99 in June. The 99/4A had sold for $400 as recently as the fall of 1982, and the price cuts caused Texas Instruments losses of hundreds of millions of dollars. A Service Merchandise executive stated, "I've been in retailing 30 years and I have never seen any category of goods get on a self-destruct pattern like this." Such low prices probably hurt home computers' reputation; one retail executive said of the 99/4A, "When they went to $99, people started asking 'What's wrong with it?'" The founder of Compute! stated in 1986 that "our market dropped from 300 percent growth per year to 20 percent".
While Tramiel's target was TI, many competitors in the home computer market suffered financial difficulties as a result of Commodore. Even Commodore's own finances showed strain under the demands of financing the massive building expansion needed to deliver the machines. Price cutting was one factor in Tramiel developing a rocky relationship with the main Commodore investor, Irving Gould. Following a combination of further disagreements with Gould about management style, Tramiel surprised many by leaving Commodore in early 1984, despite his business strategy with the C64 ultimately proving a substantial success.
Japanese computers
From the late 1970s to the early 1990s, Japan's personal computer market was largely dominated by domestic computer products. NEC became the market leader following the release of the PC-8001 in 1979, continuing with the 8-bit PC-88 and 16-bit PC-98 series in the 1980s, but had early competition from the Sharp MZ and Hitachi Basic Master series, and later competition from the 8-bit Fujitsu FM-7, Sharp X1, MSX and MSX2 series and the 16-bit FM Towns and Sharp X68000 series. Several of these systems were also released in Europe, MSX in particular gaining some popularity there. A key difference between early Western and Japanese systems was the latter's higher display resolutions (640×200 from 1979, and 640×400 from 1985) in order to accommodate Japanese text. From the early 1980s, Japanese computers also employed Yamaha FM synthesis sound boards, which produced higher-quality sound.
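FM synthesis builds rich timbres by using one oscillator to modulate the phase of another. The sketch below is a minimal Python illustration of the principle only; actual Yamaha chips implemented multi-operator phase modulation with hardware envelopes:

    import math

    # Minimal FM synthesis sketch: a modulator sine wave wobbles the phase of a
    # carrier sine wave, turning two pure tones into a harmonically rich one.
    SAMPLE_RATE = 44100

    def fm_tone(carrier_hz: float, modulator_hz: float, index: float,
                seconds: float) -> list[float]:
        samples = []
        for n in range(int(SAMPLE_RATE * seconds)):
            t = n / SAMPLE_RATE
            phase = (2 * math.pi * carrier_hz * t
                     + index * math.sin(2 * math.pi * modulator_hz * t))
            samples.append(math.sin(phase))
        return samples

    # With the modulator at twice the carrier frequency, the sidebands fall on
    # the odd harmonics of 440 Hz, giving a hollow, clarinet-like tone.
    tone = fm_tone(carrier_hz=440, modulator_hz=880, index=2.0, seconds=0.5)
    print(len(tone), "samples")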
Japanese computers were widely used to produce video games, though only a small portion of Japanese PC games were released outside of the country. The most successful Japanese personal computer was NEC's PC-98, which sold more than 18 million units by 1999. The IBM PC IBM was one of the largest computer companies in the world, and it was widely expected that it would at some point enter the rapidly expanding personal computer market, which it did by releasing the IBM PC in August 1981. Like the Apple II and S-100 systems, it was based on an open, card-based architecture, which allowed third parties to develop for it. It used the Intel 8088 CPU running at 4.77 MHz, containing 29,000 transistors. The first model used an audio cassette for external storage, though there was an expensive floppy disk option. The cassette option was never popular and was removed in the PC XT of 1983. The XT added a 10 MB hard drive in place of one of the two floppy disks and increased the number of expansion slots from 5 to 8. While the original PC design could accommodate only up to 64 KB on the main board, the architecture was able to accommodate up to 640 KB of RAM, with the rest on cards. Later revisions of the design increased the limit to 256 KB on the main board. In 1980, IBM had approached Digital Research (co-founded by Gary Kildall) for a version of CP/M for its upcoming IBM PC at the suggestion of Bill Gates of Microsoft, who was already providing a BASIC interpreter amongst other software for the PC. CP/M was a popular and widely supported operating system for personal computers at the time and would not have been an unexpected choice for the IBM machine. A long-standing industry myth holds that IBM was unable to negotiate a non-disclosure agreement with Dorothy McEwen, Digital Research co-founder and Kildall's wife, who handled much of the business side of the company, and that IBM consequently departed. In reality the barrier was overcome and Kildall did personally meet with IBM, notwithstanding a second myth that he was away flying his personal plane. He was actually at a pre-arranged meeting with a customer on the morning IBM arrived, but was available later in the day. Kildall offered Digital Research's more advanced MP/M operating system, but IBM was uninterested. Kildall then offered CP/M-86, but negotiations encountered difficulties when IBM demanded a flat licensing fee of $250,000 without a royalty. Digital Research more conventionally offered a deal of $10 per copy. Kildall was concerned about alienating the large number of other CP/M licensees and also with IBM's intent to re-brand CP/M as PC-DOS. Kildall ultimately believed he had reached a deal, but IBM returned to negotiate with Gates, who offered to provide 86-DOS (originally known as QDOS), an operating system similar to CP/M developed by Tim Paterson of Seattle Computer Products. IBM rebranded the Microsoft version as PC-DOS. Critically, IBM did not prevent Microsoft from reselling the DOS product to other vendors, which would be key to Microsoft's future dominance in the operating system market. When Digital Research became aware of the similarities between PC-DOS and CP/M, they raised a dispute with IBM, who offered to resolve the matter by shipping the PC without bundling an operating system. Three operating system choices would be offered for separate purchase: PC-DOS, CP/M-86 and UCSD p-System. Digital Research accepted this deal, believing the market would choose their version given the large existing base of software support. 
However, IBM offered PC-DOS at only $40 while CP/M-86 was marketed at $240, which made the latter uncompetitive, and porting software to PC-DOS was not difficult. Digital Research would later return to compete with MS-DOS in the form of the compatible DR-DOS product in 1988, which achieved some success. IBM PC clones The marketing might of IBM made the PC platform an attractive prospect for clone makers, as it would likely have significant success and other manufacturers could potentially take a slice of the sales. The idea of cloning computers was not new, and the IBM PC was not the only platform that was a target of compatible makers. The Apple II was also copied, as were many successful preceding platforms back to the Altair. Larger and more expensive minicomputers were also regularly cloned by competitors. Although the PC and XT included a version of the BASIC language in read-only memory, most were purchased with disk drives and run with an operating system, the most popular of which was the Microsoft-supplied PC-DOS. Microsoft had retained the rights to resell PC-DOS separately from the IBM PC. When they sold the product it was branded as MS-DOS but was otherwise identical. The IBM PC was based on easily available integrated circuits, and the basic card-slot design was not patented. The only IBM-proprietary portion of the design was the BIOS software embedded in ROM. Discovering how this worked was not difficult; a full copy of the BIOS source code with comments and annotations was helpfully printed in the IBM Technical Reference Manual for the 5150. The only problem for clone makers was that it was copyrighted. A method called clean room design was used to overcome the copyright issue and produce a functionally identical BIOS version that could be legally sold. Not all PC cloners would take this approach, though, with IBM alleging that Corona Data Systems and Eagle Computer, amongst others, used its copyrighted version. With the commercial availability of MS-DOS and the acquisition of a BIOS possible, there were no further barriers to competitors producing IBM PC imitations. Columbia Data Products released the MPC 1600 in 1982, which was the first clone of the original IBM PC model 5150, but with more RAM and more expansion slots. Also in 1982, DEC released the Rainbow 100, which could run several operating systems including MS-DOS. In 1983 Compaq released the Portable, which was a (just about) portable version of the IBM PC specifically designed to fit within the requirements of airline carry-on luggage. The machine resembled the earlier CP/M-based Osborne 1 portable PC. The Compaq Portable sold well and became strongly associated with the IBM clone industry. A large number of other companies would release clones. Through the 1980s, sources for every component of an IBM PC gradually became available at retail, such that they could simply be slotted together without any electronics production being needed. Any organisation of any size could now be in the clone business, leading to a proliferation of tiny brands that came and went amongst the much larger names. IBM's pricing was undercut to the point where it was no longer the significant force in development, leaving only the PC standard it had established. Microsoft was, however, left in a strong commercial position, being a supplier to most of the clones, and in 1986 began offering OEM versions of MS-DOS aimed at small-scale system builders as opposed to larger companies. 
In 1984, IBM introduced the IBM Personal Computer/AT (more often called the PC/AT or AT), built around the Intel 80286 microprocessor. This chip was much faster and could address up to 16 MB of RAM, but only in a mode that largely broke compatibility with the earlier 8086 and 8088. In particular, the MS-DOS operating system was not able to take advantage of this capability. The bus in the PC/AT was given the name Industry Standard Architecture (ISA). IBM's weakening position in the PC market was made clear with Intel's introduction of the 80386, which first appeared in a Compaq machine and not an IBM one, making Compaq now seem to be leading the direction of the architecture. Compaq released the DeskPro 386 featuring the CPU in 1986. IBM would not sit idle and see the PC market disappear to the cloners. They would regularly take legal action (often successfully) against clone makers using their large portfolio of patents. In addition, IBM launched the PS/2 range of computers in 1987 with the proprietary Micro Channel bus in an attempt to recapture control of the market by charging licenses for a key component, but this was not successful. It received only tepid support from third parties, and PC cloners largely stuck with ISA until the short-lived VESA Local Bus and then Peripheral Component Interconnect (PCI), released in 1992. Apple Lisa and Macintosh In 1983 Apple Computer introduced the first mass-marketed microcomputer with a graphical user interface, the Lisa. The Lisa ran on a Motorola 68000 microprocessor and came equipped with 1 megabyte of RAM, a black-and-white monitor, dual 5¼-inch floppy disk drives and a 5 megabyte ProFile hard drive. The Lisa's slow operating speed and high price (US$10,000), however, led to its commercial failure. Drawing upon its experience with the Lisa, Apple launched the Macintosh in 1984, with an advertisement during the Super Bowl. The Macintosh was the first successful mass-market mouse-driven computer with a graphical user interface or 'WIMP' (Windows, Icons, Menus, and Pointers). Based on the Motorola 68000 microprocessor, the Macintosh included many of the Lisa's features at a price of US$2,495. The Macintosh was introduced with 128 KB of RAM, and later that year a 512 KB RAM model became available. To reduce costs compared to the Lisa, the year-younger Macintosh had a simplified motherboard design, no internal hard drive, and a single 3.5-inch floppy drive. Applications that came with the Macintosh included MacPaint, a bit-mapped graphics program, and MacWrite, which demonstrated WYSIWYG word processing. While not a success upon its release, the Macintosh went on to be a successful personal computer for years to come. This was particularly due to the introduction of desktop publishing in 1985 through Apple's partnership with Adobe. This partnership introduced the LaserWriter printer and Aldus PageMaker to users of the personal computer. During Steve Jobs's hiatus from Apple, a number of different models of Macintosh, including the Macintosh Plus and Macintosh II, were released to a great degree of success. The entire Macintosh line of computers was IBM's major competition up until the early 1990s. Amiga The Amiga was a range of computers first released in 1985 by Commodore, with high-performance graphics and audio capabilities. The machines found particular success as gaming platforms and also in video production. The Amiga achieved popularity in several European countries in the late 1980s and early 1990s, selling around 4.8 million units. 
Sales were comparatively muted in North America, but it did find some niches in this region. The Amiga was originally conceived as a game console. Activision games co-founder Larry Kaplan had seen a preview of the Nintendo NES in 1982 and wanted to produce a more capable machine. He first contacted Atari employee Doug Neubauer, who had worked on the sound chip for the Atari 8-bit home computer range, and later recruited Jay Miner, who had worked on the graphics chip for the same machines. They formed the Hi-Toro corporation, later renamed Amiga, to develop the new console. Kaplan and Neubauer would drop out and be replaced by Ron Nicholson from Apple and Joe Decuir, who had worked for Atari. They first sold a range of peripherals to raise money for the new venture. By 1983 Amiga was running out of money and approached Atari for extra financing, which was agreed. By 1984 Atari was losing money, and Amiga had a clause in its contract with Atari that it could buy out control of the machine for half a million dollars. Commodore was looking for a new design to replace the C64 and agreed to finance this. Amiga development subsequently moved to Commodore. The Amiga machines introduced a sophisticated set of custom chips that allowed the relatively slow CPUs of the time to accelerate graphics operations. One of the key functions was the blitter, which was suggested by Nicholson and would be central to the graphical performance of the Amiga. This was a chip for moving blocks of data at high speed, which could also concurrently modify and combine blocks in various ways. This allowed regions of graphics to be copied very rapidly around the display without the involvement of the CPU. The concept wasn't new; it originally came from the Xerox Alto and had been used previously in the Mindset computer (1984), but was popularised by the Amiga. The blitter was used to great effect in a 1984 Amiga industry demo featuring a pseudo-3D rotating checkered ball, later to be known as the "boing ball", which would become associated with the Amiga brand. The fluidity of motion of a graphic of that size had not been seen on any other cost-competitive machines of the era. Competitors allowed only comparatively tiny sprites to be drawn with any speed. Hardware blitters would become a standard feature in the industry going forward. A typical IBM PC clone of the era displayed 16 colors at best and more usually only 4 from a fixed palette. The Amiga offered a palette of 4096 colors with up to 64 colors on screen at once, but through a clever hardware trick called "HAM mode" it was possible to display all 4096 colors on screen simultaneously in some situations. This was an unprecedented feat on any machine in the same price bracket and enabled the novelty of displaying photo-realistic images on an affordable home computer. Another innovation was the inclusion of a genlock capability, which allowed Amiga graphics to be mixed with a television video signal. This allowed the Amiga to be competitive in video production against more expensive rivals. The Amiga also included a complete graphical pre-emptive multi-tasking operating system called Workbench. Competing GUI operating systems were only beginning to gain momentum at the time and often did not allow several programs to be run at once. The Commodore Amiga 1000 was launched as a desktop personal computer in 1985 at an event featuring the artist Andy Warhol and Debbie Harry, singer of the band Blondie. 
The machine featured a Motorola 68000 processor and 256 KB of RAM in combination with the custom chips. The series would reach its height with the release of the cost-reduced A500 version in 1987, which was not discontinued until 1992. The series would be joined by several other models, including versions aimed at the high-end graphics and video markets such as the Amiga 3000 (1990). Into the CD-ROM era, a home appliance version was produced in the form of the CDTV (1991), along with a dedicated game console version known as the CD32 (1993). The Amiga range ended with the bankruptcy of Commodore in 1994, but the series continues to have a cult following amongst enthusiasts. The rise of the graphical user interface The now ubiquitous WIMP (Windows, Icons, Menus and Pointers) graphical user interface style was developed from research ideas (pioneered by Douglas Engelbart) into a fully functional product by Xerox PARC in the early 1970s. In 1981, Xerox released the Xerox Star as a commercial product, which featured a full graphical user interface with a bitmapped display. The Star was developed from the Xerox Alto, built in 1972. With a release price of over $16,000, the Star was not within the reach of most personal computer buyers. The British graphics company Quantel had released the GUI-driven Paintbox product in 1981, which became a staple of the television industry through the 1980s. This was an extremely expensive personal computer intended for a very specific market segment (graphics design) and not for the general public. A number of Apple employees were shown the work of Xerox, and this heavily influenced the GUI-driven Apple Lisa, released in 1983. Concurrently with the Lisa, VisiCorp (noted for the VisiCalc spreadsheet) was working on the Visi On GUI environment (first demonstrated at COMDEX in 1982) and released their version for the IBM PC in 1983. Neither the Lisa nor Visi On was a significant commercial success, as both required expensive hardware. The Apple Lisa was followed by the first of the more successful Macintosh range in 1984, which had cut the hardware costs for running a GUI to a more palatable level. It was commonplace for individual software releases to feature elements of a GUI in the 1980s without requiring any specific GUI-based operating system to be installed. Many programs used a mouse and had menus and icons, but each program would have its own style and there was no consistent look and feel. The user would have to learn how to use each title separately as opposed to intuitively understanding it. There were also no standards or consistency in the types of pointing device supported, and a particular brand of mouse might or might not work. The rise of GUI-based operating systems would bring consistency to the user experience and also deliver a toolbox of useful library routines for software to draw upon. Each individual software title would no longer have to code the routines necessary to draw a window or fonts in varying sizes (see the brief sketch below). X Window System for Unix emerged around June 1984 (derived from the earlier W windowing environment). At the time, Unix personal computers were not commonly encountered by the general public, being mostly in the form of very expensive workstations found in academia and specialist applications. X Window System continued to be the dominant windowing implementation on Unix-like operating systems and is still widely deployed as the underlying windowing system in many Linux distributions today. 
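The difference this made for application authors can be shown with a deliberately anachronistic sketch: a minimal program using a modern GUI toolkit (Python's standard tkinter bindings), in which the window frame, the menu and the font rendering are all supplied by shared system routines rather than coded by the application. The widget names are tkinter's own; the program is purely illustrative and is not drawn from any specific system described in this section.

```python
import tkinter as tk

# The toolkit, not the application, knows how to draw a window,
# render fonts at varying sizes, and track the pointing device.
root = tk.Tk()
root.title("Toolkit-drawn window")

# A menu bar: one of the "M"s in WIMP, supplied as a library routine.
menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

# Text in two sizes; the application never rasterises a glyph itself.
tk.Label(root, text="Heading", font=("TkDefaultFont", 18)).pack(padx=20, pady=(20, 5))
tk.Label(root, text="Body text drawn by the toolkit").pack(padx=20, pady=(0, 20))

root.mainloop()  # the toolkit's event loop dispatches mouse and menu events
```

A pre-GUI-era program wanting comparable behaviour had to implement the equivalent of all of the above itself, which is precisely the duplication the shared toolbox of routines removed.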
Tandy released a primitive GUI for their range of PCs called DeskMate in 1984. Microsoft Windows 1.0 was released in 1985, but at this stage offered only very limited functionality (not even supporting overlapping windows) and had little market impact until version 3.0, released in 1990. Thereafter Windows started to gain ground and become the dominant windowing operating system for IBM clones. Digital Research's GEM (Graphics Environment Manager) was a capable WIMP GUI released in 1985 for multiple platforms, including the Atari ST, which shipped with GEM as standard. A number of Amstrad PCs shipped with GEM for DOS. Despite not being the inventors of the WIMP interface style, Apple would sue Digital Research, resulting in a settlement that reduced the capabilities of GEM in 1986. The final retail release was in 1988. The Commodore Amiga range of machines included a WIMP GUI called Workbench as standard in 1985. A GUI was also available for the Commodore 64, known as GEOS. The Acorn Archimedes range was supplied with the Arthur GUI in 1987, later to be known as RISC OS. IBM developed their own windowing operating system, in collaboration with Microsoft, called OS/2 to coincide with the release of the IBM PS/2 range of PCs. OS/2 was released in 1987 and gained a GUI with version 1.1 in 1988. It would later become known as OS/2 Warp. Microsoft and IBM had ended their collaboration on the project by 1992 due to a commercial dispute, and Microsoft had shifted focus to their own MS Windows platform. With MS Windows usually being bundled with PCs, OS/2 failed to gain significant market traction, with the exception of specific market niches where IBM had dominance, such as in finance. BeOS was a windowing operating system developed by Be Inc., which was founded by Jean-Louis Gassée, a former Apple executive. It was strongly considered as a replacement for the ailing Mac OS System 7 after Apple's internal project Copland (intended to be Mac OS 8) encountered difficulties. Commercial negotiations between Apple and Be Inc. ultimately failed. Apple co-founder Steve Jobs had started a new workstation venture called NeXT and as a result had developed a GUI operating system called NeXTSTEP. Apple ultimately purchased NeXT in 1996 and the OS then formed the foundation of future Apple operating systems. This deal is also notable as the point at which Steve Jobs returned to Apple, and the company found significant new success under his leadership. Workstations Workstations are a class of personal computer. The term workstation is a marketing moniker used to differentiate a personal computer from competing products. There is no strict technical definition as to what comprises a workstation, other than that it will usually be expected to have higher performance than an average business personal computer. Consequently, a workstation is usually more expensive and is marketed towards high-end professional use cases. Workstations are often aimed at very specific market niches, such as scientific computing or graphics production, that have requirements not fulfilled by mainstream business machines. Prior to around the year 2000, it was common for workstations to be based around a unique bespoke architecture, and the machines would be quite distinctive relative to PCs typically seen in business or the home. It was not uncommon for workstation vendors to produce their own CPUs, with architectures such as MIPS, SPARC and Alpha appearing. 
Workstations often had custom-built graphics processors to achieve higher resolutions and colour depth than was typical on IBM PC machines of the era. Vendors would usually offer a proprietary operating system as standard; this was most often a variant of UNIX, which afforded some degree of compatibility between competing workstation vendors, but less so with software for DOS- or Windows-based IBM PCs. Vendors noted for the production of workstations included HP, DEC, Sun Microsystems, Silicon Graphics (SGI) and NeXT. In subsequent years specialist workstation vendors declined as commodity PCs grew powerful enough to be viable in the niches workstations had traditionally occupied. Personal computers described as workstations became more likely to be based around the standard PC architecture and to use components not dissimilar to those found in mainstream business machines. IBM PC clones dominate Towards the end of the 1980s, IBM PC XT clones started to encroach on the home computer market segment, previously the preserve of low-cost manufacturers such as Atari, Inc. and Commodore. IBM PC compatible systems became cheaper and started to sell for under $1000, particularly via mail order rather than a traditional dealer network. These prices were achieved by using older-generation PC technology. Dell began as one of these manufacturers, under its original name of PC's Limited. 1990s NeXT In 1990, the NeXTstation workstation went on sale, for "interpersonal" computing as Steve Jobs described it. The NeXTstation was meant to be a new computer for the 1990s, and cheaper than the previous NeXT Computer. Despite its pioneering use of object-oriented programming concepts, the NeXTstation was something of a commercial failure, and NeXT shut down hardware operations in 1993. CD-ROM In the early 1990s, the CD-ROM became an industry standard, and by the mid-1990s one was built into almost all desktop computers, and toward the end of the 1990s, in laptops as well. Although introduced in 1982, the CD-ROM was mostly used for audio during the 1980s, and then for computer data such as operating systems and applications into the 1990s. Another popular use of CD-ROMs in the 1990s was multimedia, as many desktop computers started to come with built-in stereo speakers capable of playing CD-quality music and sounds with the Sound Blaster sound card on PCs. ThinkPad IBM introduced its successful ThinkPad range at COMDEX 1992 using the series designators 300, 500 and 700 (allegedly analogous to the BMW car range and used to indicate market position), the 300 series being the "budget", the 500 series "midrange" and the 700 series "high end". This designation continued until the late 1990s, when IBM introduced the "T" series as 600/700 series replacements, and the 3, 5 and 7 series model designations were phased out in favour of the A (3&7) and X (5) series. The A series was later partially replaced by the R series. Dell By the mid-1990s, Amiga, Commodore and Atari systems were no longer on the market, pushed out by strong IBM PC clone competition and low prices. Other previous competitors, such as Sinclair and Amstrad, were no longer in the computer market. With less competition than ever before, Dell rose to high profits and success, introducing low-cost systems targeted at consumer and business markets using a direct-sales model. Dell surpassed Compaq as the world's largest computer manufacturer, and held that position until October 2006. 
Power Macintosh, PowerPC In 1994, Apple introduced the Power Macintosh series of high-end professional desktop computers for desktop publishing and graphic designers. These new computers made use of the new IBM PowerPC processors developed as part of the AIM alliance, replacing the previous Motorola 68k architecture used for the Macintosh line. During the 1990s the Macintosh retained a low market share, but remained the primary choice for creative professionals, particularly those in the graphics and publishing industries. Acorn Risc PC In 1994, Acorn Computers launched its Risc PC range of desktop computers as the successor to the Archimedes. The machines had an architecture that was at the time entirely its own, based around the Acorn ARM CPU, and used the proprietary Acorn RISC OS graphical operating system. The Risc PC had a novel stacking design where multiple "slices", that is, complete additional chassis with a full plastic case, could be mounted on top of each other to add additional features such as drives and expansion cards. The machines had a multi-processor architecture, with one CPU provided as standard and at least one additional CPU able to be added. The additional CPUs did not need to be ARM, and entirely alien CPU architectures such as x86 could run in parallel. With the addition of a second CPU, the Risc PC could run alternative operating systems (including Windows 95 and DOS) concurrently with RISC OS in a window, and could seamlessly merge applications from other operating systems into the RISC OS GUI environment as if they were native applications. With Microsoft Windows on the rise and IBM clones becoming the mainstream business choice, a different architecture would always face market challenges against considerably larger rivals. The Risc PC did find some niches in applications such as television production and education; however, popular business applications were not natively available and developer interest in the platform waned. The capability of RISC OS to run entirely from ROM and the use of a low-power CPU made it well suited to embedded applications. RISC OS was used in a number of products independently of the Risc PC hardware, such as in the Oracle Network Computer thin client and a variety of set-top boxes under the name NCOS. Acorn ceased Risc PC production in 1998 following a reorganisation of the company, but Castle Technology independently continued production of Acorn designs under license until 2003. Castle Technology released their own Risc PC-compatible design, the Iyonix PC, which was produced until 2008. RISC OS continued beyond the end of the Risc PC in a limited form and was used in a small number of other machines and embedded devices. RISC OS is still available, having ultimately become an open-source product in 2018, and there is a version that can run on the Raspberry Pi. The Acorn ARM CPU went on to substantial success in other markets, with billions of chips manufactured. IBM clones, Apple back into profitability Due to the sales growth of IBM clones in the '90s, they became the industry standard for business and home use. This growth was augmented by the introduction of Microsoft's Windows 3.0 operating environment in 1990, and followed by Windows 3.1 in 1992 and the Windows 95 operating system in 1995. The Macintosh was sent into a period of decline by these developments, coupled with Apple's own inability to come up with a successor to the Macintosh operating system, and by 1996 Apple was almost bankrupt. 
In December 1996 Apple bought NeXT and, in what has been described as a "reverse takeover", Steve Jobs returned to Apple in 1997. The NeXT purchase and Jobs' return brought Apple back to profitability, first with the release of Mac OS 8, a major new version of the operating system for Macintosh computers, and then with the Power Mac G3 and iMac computers for the professional and home markets. The iMac was notable for its translucent Bondi blue casing in an ergonomic shape, as well as its discarding of legacy devices such as a floppy drive and serial ports in favor of Ethernet and USB connectivity. The iMac sold several million units, and a subsequent model using a different form factor remains in production as of August 2017. In 2001, Mac OS X, the long-awaited "next generation" Mac OS based on the NeXT technologies, was finally introduced by Apple, cementing its comeback. Writable CDs, MP3, P2P file sharing The ROM in CD-ROM stands for Read Only Memory. In the late 1990s, CD-R and, later, rewritable CD-RW drives were included instead of standard CD-ROM drives. This gave the personal computer user the capability to copy and "burn" standard audio CDs which were playable in any CD player. As computer hardware grew more powerful and the MP3 format became pervasive, "ripping" CDs into small, compressed files on a computer's hard drive became popular. Peer-to-peer networks such as Napster, Kazaa and Gnutella arose to be used almost exclusively for sharing music files and became a primary computer activity for many individuals. USB, DVD player Since the late 1990s, many more personal computers started shipping with USB (Universal Serial Bus) ports for easy plug-and-play connectivity to devices such as digital cameras, video cameras, personal digital assistants, printers, scanners, USB flash drives and other peripheral devices. By the early 21st century, all shipping computers for the consumer market included at least two USB ports. Also during the late 1990s, DVD players started appearing on high-end, usually more expensive, desktop and laptop computers, and eventually on consumer computers into the first decade of the 21st century. Hewlett-Packard In 2002, Hewlett-Packard (HP) purchased Compaq. Compaq itself had bought Tandem Computers in 1997 (which had been started by ex-HP employees), and Digital Equipment Corporation in 1998. Following this strategy, HP became a major player in desktops, laptops, and servers for many different markets. The buyout made HP the world's largest manufacturer of personal computers, until Dell later surpassed HP. 64 bits In 2003, AMD shipped its 64-bit microprocessor lines for desktop computers, the Opteron and Athlon 64. Also in 2003, IBM released the 64-bit PowerPC 970 for Apple's high-end Power Mac G5 systems. Intel, in 2004, reacted to AMD's success with 64-bit processors, releasing updated versions of its Xeon and Pentium 4 lines. 64-bit processors were first common in high-end systems, servers and workstations, and then gradually replaced 32-bit processors in consumer desktop and laptop systems from about 2005. Lenovo In 2004, IBM announced the proposed sale of its PC business to Chinese computer maker Lenovo Group, which is partially owned by the Chinese government, for US$650 million in cash and US$600 million in Lenovo stock. The deal was approved by the Committee on Foreign Investment in the United States in March 2005, and completed in May 2005. 
IBM took a 19% stake in Lenovo, which moved its headquarters to New York State and appointed an IBM executive as its chief executive officer. Lenovo retained the right to use certain IBM brand names for an initial period of five years. As a result of the purchase, Lenovo inherited a product line that featured the ThinkPad, a line of laptops that had been one of IBM's most successful products. Wi-Fi, LCD monitor, flash memory In the early 21st century, Wi-Fi began to become increasingly popular as many consumers started installing their own wireless home networks. Many of today's laptops and desktop computers are sold pre-installed with wireless cards and antennas. Also in the early 21st century, LCD monitors became the most popular technology for computer monitors, and CRT production slowed. LCD monitors are typically sharper, brighter, and more economical than CRT monitors. The first decade of the 21st century also saw the rise of multi-core processors (see following section) and flash memory. Once limited to high-end industrial use due to expense, these technologies are now mainstream and available to consumers. In 2008, the MacBook Air and Asus Eee PC were released, laptops that dispensed with an optical drive and hard drive entirely, relying on flash memory for storage. Local area networks The invention in the late 1970s of local area networks (LANs), notably Ethernet, allowed PCs to communicate with each other (peer-to-peer) and with shared printers. As the microcomputer revolution continued, more robust versions of the same technology were used to produce microprocessor-based servers that could also be linked to the LAN. This was facilitated by the development of server operating systems to run on the Intel architecture, including several versions of both Unix and Microsoft Windows. Multiprocessing In May 2005, Intel and AMD released their first dual-core 64-bit processors, the Pentium D and the Athlon 64 X2 respectively. Multi-core processors can be programmed and reasoned about using symmetric multiprocessing (SMP) techniques known since the 1960s (see the SMP article for details). Apple switched to Intel in 2006, thereby also gaining multiprocessing. In 2013, a Xeon Phi extension card was released with 57 x86 cores, at a price of $1695, equalling roughly 30 dollars per core. PCI-E PCI Express was released in 2003 and became the most commonly used bus in PC-compatible desktop computers. Cheap 3D graphics Silicon Graphics (SGI) was a major 3D business that had grown annual revenues from $5.4 million to $3.7 billion between 1984 and 1997. The addition of 3D graphics capabilities to PCs, and the ability of clusters of Linux- and BSD-based PCs to take on many of the tasks of larger SGI servers, ate into SGI's core markets. The rise of cheap 3D accelerators displaced the low-end products of Silicon Graphics, which went bankrupt in 2009. Three former SGI employees had founded 3dfx in 1994. Their Voodoo Graphics extension card relied on PCI to provide cheap 3D graphics for PCs. Towards the end of 1996, the cost of EDO DRAM dropped significantly. A card consisted of a DAC, a frame buffer processor and a texture mapping unit, along with 4 MB of EDO DRAM. The RAM and graphics processors operated at 50 MHz. It provided only 3D acceleration, and as such the computer also needed a traditional video controller for conventional 2D software. NVIDIA bought 3dfx in 2000, a year in which NVIDIA grew its revenues by 96%. SGI had created OpenGL. 
Control of the OpenGL specification was passed to the Khronos Group in 2006. SDRAM In 1993, Samsung introduced its KM48SL2000 synchronous DRAM, and by 2000, SDRAM had replaced virtually all other types of DRAM in modern computers because of its greater performance. Double data rate synchronous dynamic random-access memory (DDR SDRAM) was introduced in 2000. Compared to its predecessor in PC clones, single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible by stricter control of the timing of the electrical data and clock signals, transferring data on both the rising and falling edges of the clock. ACPI Released in December 1996, ACPI replaced Advanced Power Management (APM), the MultiProcessor Specification, and the Plug and Play BIOS (PnP) Specification. Internally, ACPI advertises the available components and their functions to the operating system kernel using instruction lists ("methods") provided through the system firmware (Unified Extensible Firmware Interface (UEFI) or BIOS), which the kernel parses. ACPI then executes the desired operations (such as the initialization of hardware components) using an embedded minimal virtual machine. First-generation ACPI hardware had issues; Windows 98 First Edition disabled ACPI by default except on a whitelist of systems. 2010s Semiconductor fabrication In 2011, Intel announced the commercialisation of its Tri-Gate transistor. The Tri-Gate design is a variant of the FinFET 3D structure; FinFET was developed in the 1990s by Chenming Hu and his colleagues at UC Berkeley. Through-silicon vias are used in High Bandwidth Memory (HBM), a successor to DDR SDRAM. HBM was released in 2013. In 2016 and 2017, Intel, TSMC and Samsung began releasing 10 nanometer chips. At the ≈10 nm scale, quantum tunneling (especially through gaps) becomes a significant phenomenon. 2020s In May 2022, Chinese officials ordered government agencies and state-backed companies to remove personal computers produced by American corporations and replace them with equipment from domestic companies. The state-mandated order is expected to result in the removal of about 50 million computers, with HP and Dell expected to lose the most future business from the mandate. Market size In 2001, 125 million personal computers were shipped, in comparison to 48,000 in 1977. More than 500 million PCs were in use in 2002, and one billion personal computers had been sold worldwide from the mid-1970s to that time. Of the latter figure, 75 percent were professional or work related, while the rest were sold for personal or home use. About 81.5 percent of PCs shipped had been desktop computers, 16.4 percent laptops and 2.1 percent servers. The United States had received 38.8 percent (394 million) of the computers shipped, Europe 25 percent, and 11.7 percent had gone to the Asia-Pacific region, the fastest-growing market as of 2002. Almost half of all households in Western Europe had a personal computer, and a computer could be found in 40 percent of homes in the United Kingdom, compared with only 13 percent in 1985. The third quarter of 2008 marked the first time laptops outsold desktop PCs in the United States. As of June 2008, the number of personal computers worldwide in use hit one billion. Mature markets like the United States, Western Europe and Japan accounted for 58 percent of the worldwide installed PCs. About 180 million PCs (16 percent of the existing installed base) were expected to be replaced and 35 million to be dumped into landfill in 2008. 
The whole installed base grew 12 percent annually. See also History of laptops History of mobile phones History of software Timeline of electrical and electronic engineering Computer museum and Personal Computer Museum Expensive Desk Calculator MIT Computer Science and Artificial Intelligence Laboratory Educ-8, a 1974 pre-microprocessor "micro-computer" Mark-8, a 1974 microprocessor-based microcomputer SCELBI, another 1974 microcomputer Simon (computer), a 1949 demonstration of computing principles List of pioneers in computer science References Further reading External links A history of the personal computer: the people and the technology (PDF) BlinkenLights Archaeological Institute – Personal Computer Milestones Personal Computer Museum – A publicly viewable museum in Brantford, Ontario, Canada Old Computers Museum – Displaying over 100 historic machines. Chronology of Personal Computers – a chronology of computers from 1947 on "Total share: 30 years of personal computer market share figures" Obsolete Technology – Old Computers Personal computers
History of personal computers
[ "Technology" ]
22,117
[ "History of computing hardware", "History of computing" ]
16,142,363
https://en.wikipedia.org/wiki/Techa
The Techa is an eastward-flowing river on the eastern flank of the southern Ural Mountains, noted for its nuclear contamination. It is long, and its basin covers . It begins by the once-secret nuclear processing town of Ozyorsk about northwest of Chelyabinsk and flows east then northeast to the small town of Dalmatovo to flow into the mid-part of the Iset, a tributary of the Tobol. Its basin is close to and north of the Miass, longer than these rivers apart from the Tobol. Water pollution From 1949 to 1956 the Mayak complex dumped an estimated volume of radioactive waste water into the Techa River, a cumulative dispersal of radioactivity. As many as forty villages, with a combined population of about 28,000 residents, lined the river at the time. For 24 of them, the Techa was a major source of water; 23 of them were eventually evacuated. In the past 45 years, about half a million people in the region have been irradiated in one or more of the incidents, exposing them to as much as 20 times the radiation suffered by the Chernobyl disaster victims. The Tobol is a sub-tributary of the Ob, being linked by the final part of the Irtysh; all three flow generally north. See also Pollution of Lake Karachay List of most-polluted rivers Water pollution Plutopia Ozyorsk, Chelyabinsk Oblast Semipalatinsk Test Site References Rivers of Chelyabinsk Oblast Rivers of Kurgan Oblast Nuclear accidents and incidents Water pollution in Russia Disasters in the Soviet Union Radioactive waste Waste disposal incidents 1949 disasters in the Soviet Union 1956 disasters in the Soviet Union 1940s disasters in the Soviet Union 1950s disasters in the Soviet Union
Techa
[ "Chemistry", "Technology" ]
355
[ "Nuclear accidents and incidents", "Environmental impact of nuclear power", "Hazardous waste", "Radioactivity", "Radioactive waste" ]
16,142,457
https://en.wikipedia.org/wiki/Base%20One%20Foundation%20Component%20Library
The Base One Foundation Component Library (BFC) is a rapid application development toolkit for building secure, fault-tolerant, database applications on Windows and ASP.NET. In conjunction with Microsoft's Visual Studio integrated development environment, BFC provides a general-purpose web application framework for working with databases from Microsoft, Oracle, IBM, Sybase, and MySQL, running under Windows, Linux/Unix, or IBM iSeries or z/OS. BFC includes facilities for distributed computing, batch processing, queuing, and database command scripting, and these run under Windows or Linux with Wine. Design BFC is based on a database-centric architecture whose cross-DBMS data dictionary plays a central role in supporting data security, validation, optimization, and maintainability features. Some of BFC's core technologies are based on underlying U.S. patents in database communication and high-precision arithmetic. BFC supports a distinctive model of large-scale distributed computing, intended to reduce the vulnerability and performance impact of either depending on a centralized process to distribute tasks or communicating directly between nodes through messages. Deutsche Bank used the initial version of BFC to build its securities custody system, one of the earliest successful examples of commercial grid computing. BFC implements a grid computing architecture that revolves around the model of a "virtual supercomputer" composed of loosely coupled "batch job servers". These perform tasks that are specified and coordinated through database-resident control structures and queues. The model is virtual, as it uses the available processing power and resources of ordinary servers and database systems, which can also continue to work in their previous roles. The result is termed a virtual supercomputer because it presents itself as a single, unified computational resource that can be scaled both in capacity and processing power. History BFC was originally developed by Base One International Corp., funded by projects done for Marsh & McLennan and Deutsche Bank that started in the mid-1990s. Beginning in 1994, Johnson & Higgins (later acquired by Marsh & McLennan) built Stars, an insurance risk management system, using components known as ADF (Application Development Framework). ADF was the predecessor of BFC and was jointly developed by Johnson & Higgins and Base One programmers, with Base One retaining ownership of ADF, and Johnson & Higgins retaining all rights to Stars risk management software. In 2014, BFC was acquired by Content Galaxy Inc., whose video publishing service was built with BFC. The name "BFC" was a play on MFC (Microsoft Foundation Classes), which BFC extended through Visual C++ class libraries to facilitate the development of large-scale, client/server database applications. Developers can incorporate BFC components into web and Windows applications written in any of the major Microsoft programming languages (C#, ASP.NET, Visual C++, VB.NET). They can also use a variety of older technologies, including COM/ActiveX, MFC, and Crystal Reports. BFC works with both managed and unmanaged code, and it can be used to construct either thin client or rich client applications, with or without browser-based interfaces. References External links Base One. Introduction to BFC Base One. The Base One Grid Computing Architecture ITJungle. Base One Update Brings Grids of Clusters, June 14, 2005. Accessed April 9, 2008. 
.NET object-relational mapping tools .NET programming tools Grid computing products Middleware Scripting languages Web frameworks Web development software
Base One Foundation Component Library
[ "Technology", "Engineering" ]
720
[ "Software engineering", "Middleware", "IT infrastructure" ]
16,142,729
https://en.wikipedia.org/wiki/Monomagnesium%20phosphate
Monomagnesium phosphate is one of the forms of magnesium phosphate. It is a magnesium acid salt of phosphoric acid with the chemical formula Mg(H2PO4)2. Di- and tetrahydrates are also known. It dissolves in water, forming phosphoric acid and depositing a solid precipitate of MgHPO4·3H2O, dimagnesium phosphate trihydrate. As a food additive, it is used as an acidity regulator and has the E number E343. References Phosphates Magnesium compounds Acid salts Food additives E-number additives
Monomagnesium phosphate
[ "Chemistry" ]
131
[ "Inorganic compounds", "Salts", "Inorganic compound stubs", "Phosphates", "Acid salts" ]
16,142,733
https://en.wikipedia.org/wiki/Dimagnesium%20phosphate
Dimagnesium phosphate is a compound with formula MgHPO4. It is a Mg2+ salt of monohydrogen phosphate. The trihydrate is well known, occurring as a mineral. It can be formed by the reaction of stoichiometric quantities of magnesium oxide with phosphoric acid: MgO + H3PO4 → MgHPO4 + H2O Dissolving monomagnesium phosphate in water forms phosphoric acid and deposits a solid precipitate of dimagnesium phosphate trihydrate: Mg(H2PO4)2 + 3 H2O → MgHPO4·3H2O + H3PO4 The compound is used as a nutritional supplement, especially for infants and athletes. Its E number is E343. See also Magnesium phosphate References Acid salts Phosphates Magnesium compounds E-number additives
Dimagnesium phosphate
[ "Chemistry" ]
190
[ "Inorganic compounds", "Salts", "Inorganic compound stubs", "Phosphates", "Acid salts" ]
16,143,231
https://en.wikipedia.org/wiki/SES-2%20Enclosure%20Management
The introduction of Serial Attached SCSI (SAS) as the most recent evolution of SCSI required redefining the related standard for enclosure management, called SCSI Enclosure Services. The first revision of SES-2 (SCSI Enclosure Services - 2) was introduced in 2002, and the draft is now at revision 20. SES-2 SCSI Enclosure Services (SES) permit the management, and the sensing of the state, of power supplies, cooling devices, LED displays, indicators, individual drives, and other non-SCSI elements installed in an enclosure. SES-2 alerts users about drive, temperature and fan failures with an audible alarm and a fan failure LED. SES-2 commands The SES-2 command set uses the SCSI SEND DIAGNOSTIC and RECEIVE DIAGNOSTIC RESULTS commands to obtain configuration information for the enclosure and to set and sense standard bits for each element installed in the enclosure. The SEND DIAGNOSTIC command is used to send control information to internal or external LED indicators or to instruct one enclosure element to change its state or perform an operation. The application client has two mechanisms for accessing the enclosure services process: a) Directly, to a standalone enclosure services process, for example an enclosure controller chip. SCSI conditions communicated directly include hard reset, logical unit reset and I_T nexus loss. b) Indirectly, through a LUN of another peripheral device – for example a drive within the enclosure. The drive will communicate with the enclosure through the Enclosure Services Interface. In this case the only SCSI device condition communicated through the LUN is hard reset. Subenclosures The SES-2 process handles a single primary subenclosure or multiple subenclosures. In the second case, one primary subenclosure will manage all the other secondary subenclosures. Thresholds Like SES, SES-2 establishes two types of thresholds, critical and warning, for elements with limited sensing capability, such as voltage, temperature and current. For example, in the case of temperature we may have: High critical threshold: 57 °C High warning threshold: 50 °C Low warning threshold: 7 °C Low critical threshold: 0 °C When managed values fall within the warning range, the SES-2 processor will communicate a warning signal to the application client, typically a host bus adapter (HBA). When values fall outside acceptable ranges, depending on the commands supported by the device server, the sense code shall be HARDWARE FAILURE or ENCLOSURE FAILURE. (A brief sketch of this threshold logic appears at the end of this article.) Reporting methods SES-2 lists four types of reporting methods: Polling Polling based on the limited completion function CHECK CONDITION status Asynchronous event notification The standard The standard is controlled by the T10 technical committee; members of the T10 working group can obtain the current draft at http://www.t10.org/cgi-bin/ac.pl?t=f&f=ses2r19a.pdf Due to INCITS policy changes, the SCSI T10 drafts for released standards are no longer available online to non-T10 members and must be purchased from INCITS at http://www.incits.org . See the official INCITS policy at http://www.incits.org/rd1/INCITS_RD1.pdf . Alternative technologies SES-2 over I²C is still used for storage backplane enclosure management, although a competing method for enclosure management communication is now becoming prominent. Serial GPIO (SGPIO) provides a simpler, less expensive solution and is now more widespread than SES-2. 
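The warning/critical classification described under Thresholds above can be illustrated with a minimal sketch. The code below is not part of the standard: the threshold values are the example figures quoted above, and the read_temperature() helper is a hypothetical placeholder for whatever mechanism (for example, a RECEIVE DIAGNOSTIC RESULTS request issued through the operating system's SCSI pass-through interface) actually fetches the enclosure status page.

```python
# Minimal sketch of SES-2 style threshold classification (illustrative only).
# Threshold figures match the temperature example in the Thresholds section.
HIGH_CRITICAL = 57  # °C
HIGH_WARNING = 50   # °C
LOW_WARNING = 7     # °C
LOW_CRITICAL = 0    # °C

def classify(temp_c: float) -> str:
    """Map a sensed temperature onto the SES-2 threshold bands."""
    if temp_c >= HIGH_CRITICAL or temp_c <= LOW_CRITICAL:
        return "CRITICAL"  # outside the acceptable range entirely
    if temp_c >= HIGH_WARNING or temp_c <= LOW_WARNING:
        return "WARNING"   # inside the warning band
    return "OK"

def read_temperature() -> float:
    """Hypothetical placeholder: a real implementation would issue a
    RECEIVE DIAGNOSTIC RESULTS command to the enclosure services process
    and decode the temperature sensor element from the returned page."""
    raise NotImplementedError

if __name__ == "__main__":
    # Polling, the first of the four reporting methods listed above.
    state = classify(read_temperature())
    if state != "OK":
        print(f"Enclosure reports {state} temperature condition")
```

In the standard's terms, a WARNING result corresponds to the SES-2 processor signalling the application client (typically the HBA), while a value beyond the critical thresholds would surface as a HARDWARE FAILURE or ENCLOSURE FAILURE sense code.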
Existing products using SES-2 American Megatrends' backplane controllers, the MG9071 and MG9072, can use either SES-2 or SGPIO for enclosure management with a simple configuration selection. References http://www.t10.org/ftp/t10/drafts/ses2/ses2r19a.pdf (old link, broken) http://www.t10.org/cgi-bin/ac.pl?t=f&f=ses2r19a.pdf (new link, access restricted) American Megatrends, Inc. Company Website AMI Backplane Controllers from AMI Computer hardware standards Computer peripherals SCSI
SES-2 Enclosure Management
[ "Technology" ]
873
[ "Computer peripherals", "Computer standards", "Computer hardware standards", "Components" ]
16,143,262
https://en.wikipedia.org/wiki/Echo%20chamber%20%28media%29
In news media and social media, an echo chamber is an environment or ecosystem in which participants encounter beliefs that amplify or reinforce their preexisting beliefs by communication and repetition inside a closed system, insulated from rebuttal. An echo chamber circulates existing views without encountering opposing views, potentially resulting in confirmation bias. Echo chambers may increase social and political polarization and extremism. On social media, it is thought that echo chambers limit exposure to diverse perspectives, and favor and reinforce presupposed narratives and ideologies. The term is a metaphor based on an acoustic echo chamber, in which sounds reverberate in a hollow enclosure. Another emerging term for this echoing and homogenizing effect within social-media communities on the Internet is neotribalism. Many scholars note the effects that echo chambers can have on citizens' stances and viewpoints, and specifically the implications for politics. However, some studies have suggested that the effects of echo chambers are weaker than often assumed. Concept The Internet has expanded the variety and amount of accessible political information. On the positive side, this may create a more pluralistic form of public debate; on the negative side, greater access to information may lead to selective exposure to ideologically supportive channels. In an extreme "echo chamber", one purveyor of information will make a claim, which many like-minded people then repeat, overhear, and repeat again (often in an exaggerated or otherwise distorted form) until most people assume that some extreme variation of the story is true. The echo chamber effect occurs online when a harmonious group of people amalgamate and develop tunnel vision. Participants in online discussions may find their opinions constantly echoed back to them, which reinforces their individual belief systems due to declining exposure to others' opinions. Their individual belief systems are what culminate in a confirmation bias regarding a variety of subjects. When an individual wants something to be true, they often will only gather the information that supports their existing beliefs and disregard any statements they find that are contradictory or speak negatively upon their beliefs. Individuals who participate in echo chambers often do so because they feel more confident that their opinions will be more readily accepted by others in the echo chamber. This happens because the Internet has provided access to a wide range of readily available information. People are receiving their news online more rapidly through less traditional sources, such as Facebook, Google, and Twitter. These and many other social platforms and online media outlets have established personalized algorithms intended to cater specific information to individuals' online feeds. This method of curating content has replaced the function of the traditional news editor. The mediated spread of information through online networks causes a risk of an algorithmic filter bubble, leading to concern regarding how the effects of echo chambers on the internet promote the division of online interaction. Members of an echo chamber are not fully responsible for their convictions. Once part of an echo chamber, an individual might adhere to seemingly acceptable epistemic practices and still be further misled. Many individuals may be stuck in echo chambers due to factors existing outside of their control, such as being raised in one. 
Furthermore, the function of an echo chamber does not entail eroding a member's interest in truth; it focuses upon manipulating their credibility levels so that fundamentally different establishments and institutions will be considered proper sources of authority. Empirical research However, empirical findings that clearly support these concerns are still needed, and the field is very fragmented when it comes to empirical results. There are some studies that do measure echo chamber effects, such as the study of Bakshy et al. (2015). In this study the researchers found that people tend to share news articles they align with. Similarly, they discovered homophily in online friendships, meaning people are more likely to be connected on social media if they have the same political ideology. In combination, this can lead to echo chamber effects. Bakshy et al. found that a person's potential exposure to cross-cutting content (content that is opposite to their own political beliefs) through their own network is only 24% for liberals and 35% for conservatives. Other studies argue that expressing cross-cutting content is an important measure of echo chambers: Bossetta et al. (2023) find that 29% of Facebook comments during Brexit were cross-cutting expressions. Therefore, echo chambers might be present in a person's media diet but not in how they interact with others on social media. Another set of studies suggests that echo chambers exist, but that these are not a widespread phenomenon: based on survey data, Dubois and Blank (2018) show that most people do consume news from various sources, while around 8% consume media with low diversity. Similarly, Rusche (2022) shows that most Twitter users do not show behavior that resembles that of an echo chamber. However, through high levels of online activity, the small group of users that do make up a substantial share of populist politicians' followers, thus creating homogeneous online spaces. Finally, there are other studies which contradict the existence of echo chambers. Some found that people also share news reports that don't align with their political beliefs. Others found that people using social media are being exposed to more diverse sources than people not using social media. In sum, clear and distinct findings which either confirm or falsify the concerns of echo chamber effects remain absent. Research on the social dynamics of echo chambers shows that the fragmented nature of online culture, the importance of collective identity construction, and the argumentative nature of online controversies can generate echo chambers where participants encounter self-reinforcing beliefs. Researchers show that echo chambers are prime vehicles to disseminate disinformation, as participants exploit contradictions against perceived opponents amidst identity-driven controversies. As echo chambers build upon identity politics and emotion, they can contribute to political polarization and neotribalism. Difficulties of researching processes Echo chamber studies fail to achieve consistent and comparable results due to unclear definitions, inconsistent measurement methods, and unrepresentative data. Social media platforms continually change their algorithms, and most studies are conducted in the US, limiting their application to political systems with more parties. Echo chambers vs epistemic bubbles In recent years, closed epistemic networks have increasingly been held responsible for the era of post-truth and fake news. 
However, the media frequently conflate two distinct concepts of social epistemology: echo chambers and epistemic bubbles.

An epistemic bubble is an informational network from which important sources have been excluded by omission, perhaps unintentionally. It is an impaired epistemic framework that lacks strong connectivity; members within epistemic bubbles are unaware of significant information and reasoning. An echo chamber, on the other hand, is an epistemic construct in which voices are actively excluded and discredited. It does not suffer from a lack of connectivity; rather, it depends on a manipulation of trust that methodically discredits all outside sources. According to research conducted at the University of Pennsylvania, members of echo chambers become dependent on the sources within the chamber and highly resistant to any external sources.

An important distinction lies in the strength of the respective epistemic structures. Epistemic bubbles are not particularly robust: relevant information has merely been left out, not discredited, so one can "pop" an epistemic bubble by exposing a member to the information and sources they have been missing. Echo chambers, however, are far more resilient. By creating pre-emptive distrust between members and non-members, they insulate insiders from the force of counter-evidence and continue to reinforce the chamber as a closed loop; outside voices are heard, but dismissed. As such, the two concepts are fundamentally distinct and cannot be used interchangeably. Note, however, that this distinction is conceptual in nature, and an epistemic community can exercise multiple methods of exclusion to varying extents.

Similar concepts

A filter bubble – a term coined by internet activist Eli Pariser – is a state of intellectual isolation that allegedly can result from personalized searches, when a website algorithm selectively guesses what information a user would like to see based on information about the user, such as location, past click behavior, and search history. As a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The choices made by these algorithms are not transparent.

Homophily is the tendency of individuals to associate and bond with similar others, as in the proverb "birds of a feather flock together". The presence of homophily has been detected in a vast array of network studies. For example, a study conducted by Bakshy et al. explored the data of 10.1 million Facebook users. These users identified as politically liberal, moderate, or conservative, and the vast majority of their friends were found to have a political orientation similar to their own. Facebook algorithms recognize this and select information biased toward that political orientation to showcase in users' news feeds.

Recommender systems are information filtering systems deployed on different platforms that provide recommendations based on information gathered from the user. In general, recommendations are provided in three ways: based on content that was previously selected by the user, based on content with properties or characteristics similar to that previously selected, or a combination of both.
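As a concrete illustration of the first of these three approaches, the sketch below ranks unread articles by their similarity to what a user has already read. It is a minimal sketch under assumptions: the feature vectors, article names, and dimensions are hypothetical and not taken from any real platform.

import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(read_items, catalog, top_n=3):
    # Build a user profile as the average feature vector of items read so far.
    dims = len(next(iter(catalog.values())))
    profile = [sum(catalog[item][d] for item in read_items) / len(read_items)
               for d in range(dims)]
    # Rank unread items by their similarity to that profile.
    unread = {name: vec for name, vec in catalog.items() if name not in read_items}
    return sorted(unread, key=lambda name: cosine(profile, unread[name]),
                  reverse=True)[:top_n]

# Hypothetical feature dimensions: [politics, sports, science]
catalog = {
    "article_a": [1.0, 0.0, 0.0],
    "article_b": [0.9, 0.1, 0.0],
    "article_c": [0.0, 1.0, 0.0],
    "article_d": [0.0, 0.2, 0.8],
}
print(recommend({"article_a"}, catalog))
# "article_b" ranks first: content close to past reading keeps rising.

A content-based recommender of this kind never proposes items orthogonal to the user's history, which is the mechanism by which homophily and recommendation can jointly drive the emergence of echo chambers, as noted below.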
Both echo chambers and filter bubbles relate to the ways individuals are exposed to content devoid of clashing opinions, and colloquially the terms may be used interchangeably. However, echo chamber refers to the overall phenomenon by which individuals are exposed only to information from like-minded people, while filter bubbles are a result of algorithms that choose content based on previous online behavior, such as search histories or online shopping activity. Indeed, specific combinations of homophily and recommender systems have been identified as significant drivers of the emergence of echo chambers.

Culture wars are cultural conflicts between social groups that hold conflicting values and beliefs. The term refers to "hot button" topics on which societal polarization occurs, and a culture war is defined as "the phenomenon in which multiple groups of people, who hold entrenched values and ideologies, attempt to contentiously steer public policy." Echo chambers on social media have been identified as playing a role in how multiple social groups, holding distinct values and ideologies, form groups and circulate conversations through conflict and controversy.

Implications of echo chambers

Online communities

Online social communities become fragmented by echo chambers when like-minded people group together and members hear arguments in one specific direction with no counter-argument addressed. On certain online platforms, such as Twitter, echo chambers are more likely to be found when the topic is political in nature than when it is seen as more neutral. Social networking communities are among the most powerful reinforcers of rumors, because members trust the evidence supplied by their own social group and peers over information circulating in the news. In addition, the reduced apprehension users feel when projecting their views on the Internet, compared with face-to-face interaction, encourages further engagement in agreement with their peers. This can create significant barriers to critical discourse within an online medium. Social discussion and sharing can suffer when people have a narrow information base and do not reach outside their network; essentially, the filter bubble can distort one's reality in ways that are not believed to be alterable by outside sources.

Findings by Tokita et al. (2021) suggest that individuals' behavior within echo chambers may dampen their access to information even from desirable sources. In highly polarized information environments, individuals who are highly reactive to socially shared information are more likely than their less reactive counterparts to curate politically homogeneous information environments and to experience decreased information diffusion, in order to avoid overreacting to news they deem unimportant. This makes such individuals more likely to develop extreme opinions and to overestimate the degree to which they are informed. Research has also shown that misinformation can become more viral as a result of echo chambers, as the echo chambers provide an initial seed which can fuel broader viral diffusion.

Offline communities

Many offline communities are also segregated by political beliefs and cultural views. The echo chamber effect may prevent individuals from noticing changes in language and culture involving groups other than their own. Online echo chambers can sometimes influence an individual's willingness to participate in similar discussions offline: a 2016 study found that "Twitter users who felt their audience on Twitter agreed with their opinion were more willing to speak out on that issue in the workplace".
Group polarization can occur as a result of growing echo chambers. The lack of external viewpoints, and the presence of a majority of individuals sharing a similar opinion or narrative, can lead to a more extreme belief set. Group polarization can also aid the spread of fake news and misinformation through social media platforms. This can extend to offline interactions, with data revealing that offline interactions can be as polarizing as online interactions (such as on Twitter), arguably because social media-enabled debates are highly fragmented.

Examples

Echo chambers have existed in many forms. Examples cited since the late 20th century include:

News coverage of the 1980s McMartin preschool trial, criticized as an echo chamber by David Shaw in a series of 1990 Pulitzer Prize-winning articles. Shaw noted that, despite the charges in the trial never being proven, news media reporting on the trial "largely acted in a pack" and "fed on one another", creating an "echo chamber of horrors" in which journalists ultimately abandoned journalistic principles and sensationalized coverage to be "the first with the latest shocking allegation".

Conservative radio host Rush Limbaugh and his radio show, categorized as an echo chamber in the first empirical study of echo chambers, by researchers Kathleen Hall Jamieson and Joseph Cappella in their 2008 book Echo Chamber: Rush Limbaugh and the Conservative Media Establishment.

The Clinton–Lewinsky scandal reporting, chronicled in Time magazine's 16 February 1998 "Trial by Leaks" cover story "The Press And The Dress: The anatomy of a salacious leak, and how it ricocheted around the walls of the media echo chamber" by Adam Cohen. The case was also reviewed in depth by the Project for Excellence in Journalism in "The Clinton/Lewinsky Story: How Accurate? How Fair?"

A New Statesman essay that argued echo chambers were linked to the United Kingdom's Brexit referendum.

The subreddit /r/incels and other online incel communities, which have been described as echo chambers.

Discussion of opioid drugs and whether they should be considered suitable for long-term pain maintenance, which has been described as an echo chamber capable of affecting drug legislation.

The 2016 United States presidential election, described as an echo chamber because information on the campaigns was exchanged primarily among individuals with similar political and ideological views. Donald Trump and Hillary Clinton were extremely vocal on Twitter throughout the electoral campaigns, bringing many vocal opinion leaders to the platform. A study conducted by Guo et al. showed that Twitter communities in support of Trump and Clinton differed significantly, and that the most vocal accounts were responsible for creating echo chambers within those communities.

The network of social media accounts and communities harboring and circulating the Flat Earth theory, which has been described as an echo chamber.

Since the creation of the Internet, scholars have studied changes in political communication. Given the changes in information technology and how it is managed, it is unclear how opposing perspectives can reach common ground in a democracy. The echo chamber effect has been cited largely in relation to politics, for example on Twitter and Facebook during the 2016 United States presidential election, and some believe that echo chambers played a large part in Donald Trump's success in that election.
Countermeasures

From media companies

Some companies have made efforts to combat the effects of echo chambers through algorithmic approaches. A high-profile example is the change Facebook made to its "Trending" page, an on-site news source for its users: Facebook modified the page to display multiple news sources for a topic or event rather than a single one. The intended purpose was to expand the breadth of news sources for any given headline and thereby expose readers to a variety of viewpoints. There are also startups building apps with the mission of encouraging users to step outside their echo chambers, such as UnFound.news. Another example is a beta feature on BuzzFeed News called "Outside Your Bubble", which adds a module to the bottom of BuzzFeed News articles showing reactions from various platforms such as Twitter, Facebook, and Reddit. The concept aims to bring transparency and to prevent one-sided conversations by diversifying the viewpoints readers are exposed to.