Dataset schema: id (int64, 39 to 79M), url (string, 31–227 chars), text (string, 6–334k chars), source (string, 1–150 chars), categories (list, 1–6 items), token_count (int64, 3–71.8k), subcategories (list, 0–30 items).
8,701,933
https://en.wikipedia.org/wiki/Global%20Health%20Security%20Initiative
The Global Health Security Initiative (GHSI) is a collaborative effort among several nations and organizations focused on strengthening global health security. Established in response to the 2001 terrorist attacks, its primary goal is to prepare for and address public health risks related to biological, chemical, and nuclear terrorism, as well as pandemics. The initiative includes members from North America, Europe, and Asia, with the World Health Organization participating as an observer. History The idea on which the Global Health Security Initiative is based was suggested by the then US Secretary of Health and Human Services, Tommy Thompson, after the World Trade Center attacks on 11 September 2001. He proposed that countries fighting bioterrorism should collaborate, share information, and coordinate their efforts in order to best protect global health. GHSI was launched in November 2001 by Canada (which hosted the first meeting in Ottawa), the European Commission, France, Germany, Italy, Japan, Mexico, the United Kingdom, and the United States. The World Health Organization (WHO) would act as observer to the GHSI. The ministers agreed on eight areas in which the partnership could collaborate in order to "strengthen public health preparedness and response to the threat of international biological, chemical and radio-nuclear terrorism." In December 2002, at a meeting in Mexico City, the ministers broadened the scope of the mandate to include the public health threat posed by pandemic influenza. Aims and scope GHSI states that its mandate is "to undertake concerted global action to strengthen public health preparedness and response to chemical, biological, radiological, and nuclear (CBRN) threats, as well as pandemic influenza," including intentional, accidental, and naturally occurring events. Organization The Global Health Security Action Group (GHSAG) is made up of senior officials from each member country. The GHSI Secretariat organises, manages, and administers meetings and committees and sets priorities. Various technical and scientific working groups focus on specific areas of knowledge. Current working groups include: Chemical Events Working Group: focuses on the risk prioritization of chemicals, the identification of research needs and best practices in the area of medical countermeasures, as well as other cross-hazard projects such as early alerting and reporting. Biological Working Group: focuses on addressing existing gaps and research and development needs required for GHSI member countries to prepare for and respond to biological threats, excluding pandemic influenza and other respiratory viruses of pandemic potential. Laboratory Network: focuses on promoting quality assurance in diagnostics, flexibility and adaptability of techniques and technologies, and addressing issues regarding transport of specimens. Radio-Nuclear Threats Working Group: focuses on collaboration with other radiation protection and nuclear safety authorities on emergency preparedness, undertakes projects in areas such as countermeasures and laboratory mapping, and serves as an informal communication network during emergencies. Pandemic Influenza Working Group: focuses on sharing and comparing respective national approaches to pandemic preparedness, including vaccine and anti-viral stockpiling and use, surveillance and epidemiology, diagnostics, and public health measures. Research GHSI conducts research and collaborates to address global health security concerns.
Some of the research GHSI has been involved in includes: Research on mass casualties from the release of opioids: The GHSI participated in research to explore the significant health threats linked to the increasing availability of synthetic opioids. The research highlights the dual risks of these substances contributing to mass casualty incidents, either through unintentional overdose or intentional misuse as weapons. In response to these challenges, the GHSI Chemical Events Working Group organized a workshop in 2018 to evaluate the current state of preparedness for handling large-scale incidents involving opioids. Development of SCRIPT: The Screening Categorization Risk Prioritization Tool (SCRIPT) is designed to assess public health risks associated with the release of airborne chemicals. See also Biosecurity Bioterrorism CBRN defense Centers for Disease Control and Prevention (United States) Council of Europe Convention on the Prevention of Terrorism Emergent virus European Centre for Disease Prevention and Control (EU) Health Threat Unit (EU) Pandemic References Sources External links Working Together to Counter Global Health Threats Event featuring Michael Leavitt, Secretary of the U.S. Department of Health and Human Services, at the Woodrow Wilson Center in October 2007. Public health organizations Nuclear warfare Biological warfare Chemical warfare International medical and health organizations International organizations based in Canada Radiation protection organizations
Global Health Security Initiative
[ "Chemistry", "Engineering", "Biology" ]
902
[ "Nuclear organizations", "Biological warfare", "nan", "Nuclear warfare", "Radiation protection organizations", "Radioactivity" ]
8,702,775
https://en.wikipedia.org/wiki/Zero-forcing%20equalizer
The zero-forcing equalizer is a form of linear equalization algorithm used in communication systems which applies the inverse of the frequency response of the channel. This form of equalizer was first proposed by Robert Lucky. The zero-forcing equalizer applies the inverse of the channel frequency response to the received signal, to restore the signal after the channel. It has many useful applications. For example, it is studied heavily for IEEE 802.11n (MIMO) where knowing the channel allows recovery of the two or more streams which will be received on top of each other on each antenna. The name zero-forcing corresponds to bringing down the intersymbol interference (ISI) to zero in a noise-free case. This will be useful when ISI is significant compared to noise. For a channel with frequency response F(f), the zero-forcing equalizer C(f) is constructed so that C(f) = 1/F(f). Thus the combination of channel and equalizer gives a flat frequency response and linear phase: F(f)C(f) = 1. In reality, zero-forcing equalization does not work in most applications, for the following reasons: Even though the channel impulse response has finite length, the impulse response of the equalizer needs to be infinitely long. At some frequencies the received signal may be weak. To compensate, the magnitude of the zero-forcing filter ("gain") grows very large. As a consequence, any noise added after the channel gets boosted by a large factor and destroys the overall signal-to-noise ratio. Furthermore, the channel may have zeros in its frequency response that cannot be inverted at all. (Gain * 0 still equals 0.) This second item is often the more limiting condition. These problems are addressed in the linear MMSE equalizer by making a small modification to the denominator of C(f): C(f) = 1/(F(f) + k), where k is related to the channel response and the signal SNR. Algorithm If the channel response (or channel transfer function) for a particular channel is H(s), then the input signal is multiplied by the reciprocal of it. This is intended to remove the effect of the channel from the received signal, in particular the intersymbol interference (ISI). The zero-forcing equalizer removes all ISI, and is ideal when the channel is noiseless. However, when the channel is noisy, the zero-forcing equalizer will amplify the noise greatly at frequencies f where the channel response H(j2πf) has a small magnitude (i.e. near zeroes of the channel) in the attempt to invert the channel completely. A more balanced linear equalizer in this case is the minimum mean-square error equalizer, which does not usually eliminate ISI completely but instead minimizes the total power of the noise and ISI components in the output. References Filter theory
Zero-forcing equalizer
[ "Engineering" ]
557
[ "Telecommunications engineering", "Filter theory" ]
8,702,779
https://en.wikipedia.org/wiki/Nanofiltration
Nanofiltration is a membrane filtration process that uses nanometer-sized pores through which particles smaller than about 1–10 nanometers pass. Nanofiltration membranes have pore sizes of about 1–10 nanometers, smaller than those used in microfiltration and ultrafiltration, but slightly bigger than those in reverse osmosis. Membranes used are predominantly polymer thin films. It is used to soften, disinfect, and remove impurities from water, and to purify or separate chemicals such as pharmaceuticals. Membranes Membrane materials that are commonly used are polymer thin films such as polyethylene terephthalate or metals such as aluminium. Pore dimensions are controlled by pH, temperature and time during development, with pore densities ranging from 1 to 10⁶ pores per cm². Membranes made from polyethylene terephthalate (PET) and other similar materials are referred to as "track-etch" membranes, named after the way the pores on the membranes are made. "Tracking" involves bombarding the polymer thin film with high-energy particles. This results in tracks that are chemically developed, or "etched", into the membrane; these tracks are the pores. Membranes created from metal, such as alumina membranes, are made by electrochemically growing a thin layer of aluminum oxide from aluminum in an acidic medium. Range of applications Historically, nanofiltration and other membrane technology used for molecular separation was applied entirely to aqueous systems. The original uses for nanofiltration were water treatment and in particular water softening. Nanofilters "soften" water by retaining scale-forming divalent ions (e.g. Ca²⁺, Mg²⁺). Nanofiltration has been extended into other industries such as milk and juice production, as well as pharmaceuticals, fine chemicals, and flavour and fragrance industries. Advantages and disadvantages One of the main advantages of nanofiltration as a method of softening water is that it retains calcium and magnesium ions while passing smaller hydrated monovalent ions, so filtration is performed without adding extra sodium ions as used in ion exchangers. Many separation processes do not operate at room temperature (e.g. distillation), which greatly increases the cost of the process when continuous heating or cooling is applied. Nanofiltration, by contrast, performs gentle molecular separation that is often not possible with other separation processes (such as centrifugation). These are two of the main benefits associated with nanofiltration. Nanofiltration also has the very favorable benefit of being able to process large volumes and continuously produce streams of products. Still, nanofiltration is the least used method of membrane filtration in industry, as its membrane pore sizes are limited to only a few nanometers: anything smaller calls for reverse osmosis, and anything larger is handled by ultrafiltration. Ultrafiltration can also be used in cases where nanofiltration can be used, because it is more conventional. A main disadvantage associated with nanofiltration, as with all membrane filter technology, is the cost and maintenance of the membranes used. Nanofiltration membranes are an expensive part of the process. Repairs and replacement of membranes depend on total dissolved solids, flow rate and components of the feed. With nanofiltration being used across various industries, only an estimate of replacement frequency can be given.
This causes nanofilters to be replaced a short time before or after their prime usage is complete. Design and operation Industrial applications of membranes require hundreds to thousands of square meters of membranes, and therefore an efficient way to reduce the footprint by packing them is required. Membranes first became commercially viable when low-cost methods of housing them in 'modules' were achieved. Membranes are not self-supporting. They need to be backed by a porous support that can withstand the pressures required to operate the NF membrane without hindering the performance of the membrane. To do this effectively, the module needs to provide a channel to remove the membrane permeate and provide appropriate flow conditions that reduce the phenomenon of concentration polarisation. A good design minimises pressure losses on both the feed side and permeate side and thus energy requirements. Concentration polarisation Concentration polarisation describes the accumulation of the species being retained close to the surface of the membrane, which reduces separation capabilities. It occurs because the particles are convected towards the membrane with the solvent, and its magnitude is the balance between this convection caused by solvent flux and the particle transport away from the membrane due to the concentration gradient (predominantly caused by diffusion). Although concentration polarisation is easily reversible, it can lead to fouling of the membrane. Spiral wound module Spiral wound modules are the most commonly used style of module and are a 'standardized' design, available in a range of standard diameters (2.5", 4" and 8") to fit standard pressure vessels that can hold several modules in series connected by O-rings. The module uses flat sheets wrapped around a central tube. The membranes are glued along three edges over a permeate spacer to form 'leaves'. The permeate spacer supports the membrane and conducts the permeate to the central permeate tube. Between each leaf, a mesh-like feed spacer is inserted. The reason for the mesh-like geometry of the spacer is to provide a hydrodynamic environment near the surface of the membrane that discourages concentration polarisation. Once the leaves have been wound around the central tube, the module is wrapped in a casing layer, and caps are placed on the ends of the cylinder to prevent 'telescoping', which can occur in high flow rate and pressure conditions. Tubular module Tubular modules look similar to shell and tube heat exchangers, with bundles of tubes carrying the active surface of the membrane on the inside. Flow through the tubes is normally turbulent, ensuring low concentration polarisation but also increasing energy costs. The tubes can either be self-supporting or supported by insertion into perforated metal tubes. This module design is limited for nanofiltration by the pressure the tubes can withstand before bursting, which limits the maximum flux possible. Due to both the high energy operating costs of turbulent flow and the limited burst pressure, tubular modules are more suited to 'dirty' applications where feeds have particulates, such as filtering raw water to gain potable water in the Fyne process. The membranes can be easily cleaned through a 'pigging' technique in which foam balls are squeezed through the tubes, scouring the caked deposits. Flux enhancing strategies These strategies work to reduce the magnitude of concentration polarisation and fouling.
A range of techniques is available; however, the most common is the feed channel spacer, as described for spiral wound modules. All of the strategies work by increasing eddies and generating high shear in the flow near the membrane surface. Some of these strategies include vibrating the membrane, rotating the membrane, having a rotor disk above the membrane, pulsing the feed flow rate and introducing gas bubbling close to the surface of the membrane. Characterisation Performance parameters Retention of both charged and uncharged solutes and permeation measurements can be categorised as performance parameters, since the performance of a membrane under natural conditions is based on the ratio of solute retained to solute permeated through the membrane. For charged solutes, the ionic distribution of salts near the membrane-solution interface plays an important role in determining the retention characteristic of a membrane. If the charge of the membrane and the composition and concentration of the solution to be filtered are known, the distribution of the various salts can be found. This in turn can be combined with the known charge of the membrane and the Gibbs–Donnan effect to predict the retention characteristics for that membrane. Uncharged solutes cannot be characterised simply by molecular weight cut-off (MWCO), although in general an increase in molecular weight or solute size leads to an increase in retention. The charge and structure of the solute, and the pH of the solution, influence the retention characteristics. Morphology parameters The morphology of a membrane is usually established by microscopy. Atomic force microscopy (AFM) is one method used to characterise the surface roughness of a membrane, by passing a small sharp tip (<100 Å) across the surface of a membrane and measuring the resulting van der Waals force between the atoms at the end of the tip and the surface. This is useful as a direct correlation between surface roughness and colloidal fouling has been developed. Correlations also exist between fouling and other morphology parameters, such as hydrophobicity, linking a membrane's hydrophobicity to its propensity to fouling. See membrane fouling for more information. Methods to determine the porosity of porous membranes have also been found via permporometry, making use of differing vapour pressures to characterise the pore size and pore size distribution within the membrane. Initially all pores in the membrane are completely filled with a liquid and as such no permeation of a gas occurs, but after reducing the relative vapour pressure some gaps will start to form within the pores, as dictated by the Kelvin equation. Polymeric (non-porous) membranes cannot be subjected to this methodology, as the condensable vapour should have a negligible interaction with the membrane. Solute transport and rejection Unlike membranes with larger and smaller pore sizes, passage of solutes through nanofiltration membranes is significantly more complex. Because of the pore sizes, there are three modes of transport of solutes through the membrane. These include 1) diffusion (molecule travel due to concentration potential gradients, as seen through reverse osmosis membranes), 2) convection (travel with flow, as in larger pore size filtration such as microfiltration), and 3) electromigration (attraction or repulsion from charges within and near the membrane). Additionally, the exclusion mechanisms in nanofiltration are more complex than in other forms of filtration.
Most filtration systems operate solely by size (steric) exclusion, but at the small length scales seen in nanofiltration, important effects include surface charge and hydration (solvation shell). The exclusion due to hydration is referred to as dielectric exclusion, a reference to the dielectric constants (energies) associated with a particle's presence in solution versus within a membrane substrate. Solution pH strongly impacts surface charge, providing a method to understand and better control rejection. The transport and exclusion mechanisms are heavily influenced by membrane pore size, solvent viscosity, membrane thickness, solute diffusivity, solution temperature, solution pH, and membrane dielectric constant. The pore size distribution is also important. Modeling rejection accurately for NF is very challenging. It can be done with applications of the Nernst–Planck equation, although a heavy reliance on fitting parameters to experimental data is usually required. In general, charged solutes are much more effectively rejected in NF than uncharged solutes, and multivalent solutes such as divalent ions (valence of 2) experience very high rejection. Typical figures for industrial applications Keeping in mind that NF is usually part of a composite system for purification, a single unit is chosen based on the design specifications for the NF unit. For drinking water purification many commercial membranes exist, coming from chemical families having diverse structures, chemical tolerances and salt rejections. NF units in drinking water purification range from extremely low salt rejection (<5% in 1001A membranes) to almost complete rejection (99% in 8040-TS80-TSA membranes). Flow rates range from 25 to 60 m³/day for each unit, so commercial filtration requires multiple NF units in parallel to process large quantities of feed water. The pressures required in these units are generally between 4.5 and 7.5 bar. For seawater desalination, a typical process couples NF with reverse osmosis in an NF–RO system. Because NF permeate is rarely clean enough to be used as the final product for drinking water and other water purification, it is commonly used as a pre-treatment step for reverse osmosis (RO). Post-treatment As with other membrane-based separations such as ultrafiltration, microfiltration and reverse osmosis, post-treatment of either permeate or retentate flow streams (depending on the application) is a necessary stage in industrial NF separation prior to commercial distribution of the product. The choice and order of unit operations employed in post-treatment is dependent on water quality regulations and the design of the NF system. Typical NF water purification post-treatment stages include aeration and disinfection and stabilisation. Aeration A polyvinyl chloride (PVC) or fibre-reinforced plastic (FRP) degasifier is used to remove dissolved gases such as carbon dioxide and hydrogen sulfide from the permeate stream. This is achieved by blowing air in a countercurrent direction to the water falling through packing material in the degasifier. The air effectively strips the unwanted gases from the water. Disinfection and stabilisation The permeate water from an NF separation is demineralised and may be susceptible to large changes in pH, thus presenting a substantial risk of corrosion in piping and other equipment components. To increase the stability of the water, chemical addition of alkaline solutions such as lime and caustic soda is employed.
Furthermore, disinfectants such as chlorine or chloramine are added to the permeate, as well as phosphate or fluoride corrosion inhibitors in some cases. Research trends Challenges in nanofiltration (NF) technology include minimising membrane fouling and reducing energy requirements. Thin film composite membranes (TFC), which consist of a number of extremely thin selective layers interfacially polymerized over a microporous substrate, have had commercial success in industrial membrane applications. Electrospun nanofibrous membrane layers (ENMs) enhance permeate flux. Energy-efficient alternatives to the commonly used spiral wound arrangement are hollow fibre membranes, which require less pre-treatment. Titanium dioxide nanoparticles have been used to minimise membrane fouling. See also References External links Project ETAP-ERN, which uses renewable energies for desalination. Nano based methods to improve water quality - Hawk's Perch Technical Writing, LLC Nanotechnology Water treatment Filters Water desalination Membrane technology
Nanofiltration
[ "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
2,946
[ "Water desalination", "Separation processes", "Water treatment", "Chemical equipment", "Filters", "Materials science", "Water pollution", "Membrane technology", "Filtration", "Environmental engineering", "Water technology", "Nanotechnology" ]
8,703,587
https://en.wikipedia.org/wiki/Silvan%20Dam
Silvan Dam is a concrete-face rock-fill embankment dam currently under construction on the Batman River in the district of Silvan, Diyarbakır Province, in southeastern Turkey. It is part of the Southeastern Anatolia Project and located upstream of the Batman Dam. Construction began on 26 July 2011. The purpose of the dam is hydroelectric power production and irrigation. It is designed to irrigate the surrounding plains. The power station will have an installed capacity of 160 MW. In 2014, the dam, as well as others in southeast Turkey such as the Ilisu Dam, became a prime target of Kurdistan Workers' Party (PKK) militants after peace talks with the government collapsed. Attacks on the dam, supporting structures and workers are part of the PKK's efforts to stop construction. These attacks delayed construction by 2 years. The construction of the Silvan Tunnel, which brings the water from the dam to the surrounding plains, started in June 2019. The dam body was completed in January 2021; some sources say the entire project was completed at the end of 2022, while other sources say it is still under construction. References External links Dams in Diyarbakır Province Southeastern Anatolia Project Rock-filled dams Hydroelectric power stations in Turkey Dams on the Batman River Dams under construction in Turkey
Silvan Dam
[ "Engineering" ]
260
[ "Southeastern Anatolia Project", "Irrigation projects" ]
8,704,209
https://en.wikipedia.org/wiki/Outline%20of%20design
The following outline is provided as an overview of and topical guide to design: Design (as a verb: designing, or, to design) is the intentional creation of a plan or specification for the construction or manufacturing of an object or system or for the implementation of an activity or process. Design (as a noun: a design) can refer to such a plan or specification (e.g. a drawing or other document) or to the created object, etc., and features of it such as aesthetic, functional, economic or socio-political. Design professions Architecture – An Architect typically has a B.Arch or M.Arch, as well as professional certification through groups such as the NCARB. Their primary focus is the design of buildings. Engineering – An Engineer typically has a BS or MS degree, as well as professional certification as a Professional Engineer. Their primary focus is applying science to the design of functional systems. Fashion design – A Fashion Designer typically has a BFA or MFA. No professional certification is required. Their primary focus is the design of apparel. Graphic design – A Graphic Designer typically has a BFA or MFA. No professional certification is required. Their primary focus is the design of visual communication. Industrial design – An Industrial Designer typically has a BFA or MFA. No professional certification is required. Their primary focus is the design of physical, functional objects. Interior design – An Interior Designer typically has a Bachelor's degree. No professional certification is required. Their primary focus is the design of human environments, particularly affecting aesthetics and emotions. Software design – A Software Designer typically has a BS or MS degree in computer science. While professional certification is not required, many exist. Their primary focus is the functional design of computer software. Design approaches and methods Co-Design Creative problem solving Creativity techniques Design-build Design for X Design management Design methods Design Science Design thinking Engineering design process Error-tolerant design Fault tolerant design Functional design Metadesign Mind mapping Open-design movement Participatory design Reliable system design Strategic design TRIZ Universal design User innovation Design activities Creativity Design methods Design thinking Designing objects Business New product development Engineering Cellular manufacturing Mechanical engineering New product development System design Fashion Fashion design Graphic design Game design Packaging design Industrial design Automotive design Industrial design New product development Product design Software design Game design New product development Software engineering Software design Software development Other Furniture Floral design System design System design Business New product development Service design Engineering Graphic design Information design Design tools Computer-aided design Graphic organizers Environments and experiences Architects Building design Urban design Graphic design Communication design Motion graphic design User interface design Web design Interior design Experience design Interaction design Software design User experience design Other Garden design Landscape design Sound design Theatrical design Impact of design Creative industries Design classic Design organizations European Design Awards Chartered Society of Designers Studying design Critical design Design research Wicked problem – problem that is difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognize.
The use of the term "wicked" here has come to denote resistance to resolution, rather than evil. Moreover, because of complex interdependencies, the effort to solve one aspect of a wicked problem may reveal or create other problems. See also References External links Design Search Engine Design
Outline of design
[ "Engineering" ]
624
[ "Design" ]
8,704,220
https://en.wikipedia.org/wiki/Outline%20of%20construction
The following outline is provided as an overview of and topical guide to construction: Construction – process of building or assembling infrastructure. A complex activity, large scale construction involves extensive multitasking. Normally, a job is managed by a project manager, and supervised by a construction manager, design engineer, construction engineer or project architect. Essence of construction Building Planning permission Nonbuilding structures including infrastructure Types of construction Building construction Home construction High-rise construction Skyscraper Low-rise construction Industrial construction Factories Refineries Offshore construction Road construction Underground construction Tunnel construction History of construction History of construction History of architecture History of the civil engineering profession History of the science of civil engineering History of structural engineering General construction concepts Architecture Architectural engineering Autonomous building Blueprint Builders' rites Topping out Building automation Building code Building construction Building envelope Building insulation Building material Civil engineering Cladding Construction and demolition waste Construction bidding Construction contract Construction delay Construction engineering Construction equipment theft Construction loan Construction law Construction management Construction site safety Construction worker Deconstruction (building) Demolition Design-bid-build Design-build Engineering, procurement, and construction Fast-track construction Egyptian pyramid construction techniques Fire safety Framing Green building Green roof Industrialization of construction Occupancy Occupational safety Prefabricated buildings Project management Real estate (the product of most construction) Steel frame Sustainability in construction Zoning Components of a building Escalator Electrical wiring Elevator Fireplace Chimney Floor Flooring Foundation Light fixtures Plumbing Plumbing fixtures Roof Stairs Walls Doors Wallcoverings Windows HVAC Construction trades workers List of construction trades Banksman Bricklayer Concrete finisher Construction foreman Electrician Framer Glazier House painter and decorator Ironworker Joiner Laborer Millwright Plasterer Plumber Rigger Roofer Slater Steel fixer Welder Masonry Design elements of a building Halls Entryway Rooms Bathroom Bedroom Dining room Garage Kitchen Living room Utility room Heavy construction projects Bridge Highway Heavy equipment Heavy equipment Bulldozer Compactor Excavator Loader Heavy equipment operator Building construction methods List of construction methods Earthbag construction Ferrocement Lift slab construction Monocrete construction Slip forming Materials and equipment List of building materials Construction equipment Cutting tools Lifting equipment Metalworking tools Power tools Stonemasonry tools Woodworking tools Light tower Living building material Staff Temporary equipment Box crib Dropcloth Falsework Fill trestle Formwork Masking tape Ram board Scaffolding Tube and clamp scaffold Temporary fencing Roles in construction Building engineer Building estimator Building officials Chartered Building Surveyor Chief Construction Adviser to UK Government Civil estimator Clerk of works Construction foreman Master builder Quantity surveyor Site manager Structural engineer Superintendent (construction) See also Index of construction articles Megaproject Megastructure
External links Associated General Contractors of America National Association of Home Builders Construction
Outline of construction
[ "Engineering" ]
561
[ "Construction" ]
8,704,223
https://en.wikipedia.org/wiki/Outline%20of%20chemical%20engineering
The following outline is provided as an overview of and topical guide to chemical engineering: Chemical engineering – deals with the application of physical science (e.g., chemistry and physics), and life sciences (e.g., biology, microbiology and biochemistry) with mathematics and economics, to the process of converting raw materials or chemicals into more useful or valuable forms. In addition to producing useful materials, modern chemical engineering is also concerned with pioneering valuable new materials and techniques – such as nanotechnology, fuel cells and biomedical engineering. Essence of chemical engineering Math Chemistry Physics Fluid Mechanics Chemical Reaction Engineering Thermodynamics Chemical Thermodynamics Engineering Mechanics Fluid Dynamics Heat Transfer Mass Transfer Transport Phenomena Green Chemistry and Sustainability Process Control Process Instrumentation Process Safety Unit Operation Process Design Chemical Process Modeling and Simulation Engineering Economics Branches of chemical engineering Biochemical engineering Biomedical engineering Biotechnology Ceramics Chemical process modeling Chemical Technologist Chemical reactor Chemical reaction engineering Distillation Design Electrochemistry Fluid dynamics Food engineering Heat transfer Mass transfer Materials science Microfluidics Nanotechnology Natural environment Plastics engineering Polymer engineering Process control Process design (chemical engineering) Separation processes (see also: separation of mixture) Crystallization processes Distillation processes Membrane processes Semiconductors Thermodynamics Transport phenomena Unit operations Unit Operations of Chemical Engineering History of chemical engineering History of chemical engineering Batch production General chemical engineering concepts Chemical engineer Chemical reaction Distillation Design Fluid mechanics Heat transfer Mass transfer and equilibrium stages Operations involving particulate solids. Process design Transport Phenomena Unit operations Polymerization 3D Plant Design FEED Leaders in chemical engineering List of chemical engineers See also Outline of chemistry References External links Computer Aids for Chemical Engineering Education (CACHE) Engineering Learning Resources Wiki What is a Chemical Engineer? Chemical Engineers' Resource Page History of Chemical Engineering Timeline American Institute of Chemical Engineers (USA) Institution of Chemical Engineers (UK) Canadian Society for Chemical Engineers Brazilian Association of Chemical Engineering (BRA) Engineers Australia (AUS) Chemical Engineering Information - Turkey (TR) Chemical Engineering Information Exchange Chemical engineering
Outline of chemical engineering
[ "Chemistry", "Engineering" ]
411
[ "Chemical engineering", "nan" ]
16,037,749
https://en.wikipedia.org/wiki/Synantherology
Synantherology is a branch of botany that deals with the study of the plant family Asteraceae (also called Compositae). The name of the field refers to the fused anthers possessed by members of the family, and recalls an old French name, synantherées, for the family. Although many of the plants of the Asteraceae were described for the European community at least as long ago as Theophrastus, an organization of the family into tribes, which remained largely stable throughout the 20th century, was published in 1873 by George Bentham. In a 1970 article titled "The New Synantherology", Harold E. Robinson advocated greater attention to microstructures (studied with the compound light microscope). He was not the first, as Alexandre de Cassini and others of the 19th century split species based on fine distinctions of microstructure, a tendency which Bentham found excessive. Noted United States synantherologists include: T. M. Barkley V. A. Funk D. J. Keil R. M. King Harold E. Robinson J. A. Soule T. F. Stuessy Billie Lee Turner Sr. References Asteraceae Branches of botany
Synantherology
[ "Biology" ]
252
[ "Branches of botany" ]
16,039,139
https://en.wikipedia.org/wiki/Automatic-tracking%20satellite%20dish
Automatic-tracking satellite dishes are satellite dishes used while a vehicle, boat or ship is in motion. Automatic-tracking satellite dishes utilize gyroscopes, GPS position sensors, unique satellite identification data and an integrated DVB decoder to aid in identification of the satellite at which they are pointing. The dishes usually consist of stepper motors to drive and aim the dish, gyroscopes to detect changes in position while the vehicle is in motion, a parabolic reflector, a low-noise block converter, and a control unit. They can also use electronically steered phased arrays (for example, the Starlink dish). Manufacturers Winegard Company KVH Industries Sea Tel Orbit Technology Group Ten-Haaft SpaceX: Starlink Dish See also USALS = Universal Satellites Automatic Location System DiSEqC = Digital Satellite Equipment Control SAT>IP end user consumer equipment that can switch different ip streams from different SAT>IP servers and facilitates selection of reception from different satellites Duo LNB Monoblock LNB DiSEqC Motor-driven satellite dish Starlink Dish Phased array References Radio frequency antenna types Satellite broadcasting Antennas (radio)
Automatic-tracking satellite dish
[ "Engineering" ]
231
[ "Telecommunications engineering", "Satellite broadcasting" ]
16,040,681
https://en.wikipedia.org/wiki/Ronald%20D.%20Macfarlane
Ronald D. Macfarlane (born February 21, 1933, Buffalo, New York) is a distinguished professor of chemistry at Texas A&M University. In 1991, he received the inaugural Distinguished Achievement Award of the American Society for Mass Spectrometry. Early life and education 1954 University at Buffalo, New York - B.A. Chemistry 1957 Carnegie-Mellon University, Pennsylvania - M.S. Chemistry 1959 Carnegie-Mellon University, Pennsylvania - Ph.D. Chemistry Research interests Separations Methods for Medical Diagnosis Ultra-Sensitive Mass Spectrometry New Methods of Conceptual Learning Awards Guggenheim Fellowship, 1968 Distinguished Achievement in Research Award ACS Nuclear Chemistry Award 1990 ASMS Distinguished Contribution in Mass Spectrometry Award References Living people Texas A&M University faculty Thomson Medal recipients Mass spectrometrists 21st-century American chemists University at Buffalo alumni 1933 births
Ronald D. Macfarlane
[ "Physics", "Chemistry" ]
179
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
5,515,464
https://en.wikipedia.org/wiki/Carolina%20Henriette%20MacGillavry
Carolina Henriette MacGillavry (22 January 1904 in Amsterdam – 9 May 1993 in Amsterdam) was a Dutch chemist and crystallographer. She is known for her discoveries on the use of diffraction in crystallography. Biography MacGillavry (nicknamed "Mac") was born the second of six children in an intellectual family (her father was a brain surgeon, her mother a teacher). Education In 1921, MacGillavry began studying chemistry at the University of Amsterdam. After graduating in 1925, she developed an interest in the (then) emerging field of quantum mechanics. In 1928, she gave "a very topical" presentation on quantum mechanical calculations on the hydrogen molecule. She obtained her Master's degree (cum laude) on March 16, 1932, and continued to work as an assistant to a chemist named A. Smits. She became a friend of J. M. Bijvoet and became interested in crystallography, which led to her PhD thesis on the subject, completed cum laude under Prof. A. H. W. Aten on 27 January 1937. She then became an assistant to A. E. van Arkel at Leiden, but Bijvoet asked her to come back to the Amsterdam crystallography laboratory that same year. Together with Bijvoet, she researched electromagnetic diffraction and its use in crystallography. She also did research in inorganic chemistry. Crystallography After World War II, MacGillavry was one of the developers of direct methods, an innovative calculation technique used in crystallography. The method uses the Harker–Kasper inequality, which was first published in 1948 by the crystallographers D. Harker and J. S. Kasper. Due to her work on Harker–Kasper inequalities, she became an international authority on the subject and co-authored the standard text about it in the Netherlands. In 1948 she worked with R. Pepinsky in Auburn, Alabama, for a year. The Dutch company Philips also grew interested in her work on the chemistry of solids. In 1950 she became the first woman to be appointed to the Royal Netherlands Academy of Arts and Sciences. In the same year she became a professor at the University of Amsterdam; she retired in 1972. In the English-speaking world, MacGillavry became famous for her book Symmetry aspects of M. C. Escher's periodic drawings, on the works of the Dutch graphic artist M. C. Escher. The book was instrumental in drawing international attention to the artist. Personal life MacGillavry married the oto-rhino-laryngologist J. H. Nieuwenhuijsen in 1968. She died 9 May 1993 in Amsterdam and is buried in Utrecht. A street in Watergraafsmeer, the Netherlands, is named in her honor. References 1904 births 1993 deaths Crystallographers 20th-century Dutch chemists Dutch women chemists Members of the Royal Netherlands Academy of Arts and Sciences Scientists from Amsterdam University of Amsterdam alumni Academic staff of the University of Amsterdam
Carolina Henriette MacGillavry
[ "Chemistry", "Materials_science" ]
632
[ "Crystallographers", "Crystallography" ]
5,516,086
https://en.wikipedia.org/wiki/Ac%C3%A1mbaro%20figures
The Acámbaro figures are about 33,000 small ceramic figurines allegedly found by Waldemar Julsrud in July 1944, in the Mexican city of Acámbaro, Guanajuato. The figurines are said by some to resemble dinosaurs and are sometimes cited as anachronisms. Some young-Earth creationists have adduced the existence of the figurines as credible evidence for the coexistence of dinosaurs and humans, in an attempt to cast doubt on scientific dating methods and potentially offer support for a literal interpretation of the Genesis creation narrative. However, there is no known reliable evidence for the validity of the Acámbaro figures as actual ancient artifacts, and many have questioned the motives of those who argue for their validity. History The Acámbaro figures were uncovered by a German immigrant and hardware merchant named Waldemar Julsrud. According to Dennis Swift, a young-Earth creationist and major proponent of the figures' authenticity, Julsrud stumbled upon the figures while riding his horse and hired a local farmer to dig up the remaining figures, paying him for each figure he brought back. Eventually, the farmer and his assistants brought him over 32,000 figures which included representations of everything from the supposed dinosaurs to peoples from all over the world, including Egyptians, Sumerians, and "bearded Caucasians". Archaeologist Charles C. Di Peso was working for the Amerind Foundation, an anthropological organization dedicated to preserving Native American culture. Di Peso examined the figures and determined that they were not authentic, and had instead been produced by local modern-day farmers. He concluded that the figurines were indeed fakes: their surfaces displayed no signs of age; no dirt was packed into their crevices; and though some figurines were broken, no pieces were missing and no broken surfaces were worn. Furthermore, the excavation's stratigraphy clearly showed that the artifacts were placed in a recently dug hole filled with a mixture of the surrounding archaeological layers. Di Peso also learned that a local family had been making and selling these figurines to Julsrud for a peso apiece since 1944, presumably inspired by films shown at Acámbaro's cinema, locally available comic books and newspapers, and accessible day trips to Mexico City's Museo Nacional. Charles Hapgood, pioneer of pole shift theory, became one of the figures' most high-profile and devout supporters. The figures continue to draw attention in the present day. They have been cited in some pseudoscientific books such as Atlantis Rising by David Lewis. Another young-Earth creationist, Don Patton, has emerged as one of their staunchest supporters. He has proposed some new lines of evidence, including the figures' resemblance to the dinosaurs depicted in Robert Bakker's book The Dinosaur Heresies. In 1970, Erle Stanley Gardner published his last travel book, Host With the Big Hat, with a chapter on the collection. His biographer Dorothy B. Hughes wrote that "the story of Acámbaro may be the crowning achievement of his archeological investigations". Dating Attempts have been made to date the figures using thermoluminescence (TL) dating. The earliest results, from tests done when TL dating was in its infancy, suggested a date around 2500 BC. However, later tests contradicted these findings. In 1976, Gary W. Carriveau and Mark C. Han attempted to date twenty Acámbaro figures using TL dating.
Their measurements of the figures' firing temperatures contradicted claims that these figures had been fired at temperatures too low for them to be accurately dated. However, all of the samples failed the "plateau test", which indicated that dates obtained for the Acámbaro figures using standard high-temperature TL dating techniques were unreliable and lacked any chronological significance. Based on the degree of signal regeneration found in remeasured samples, they estimated that the figures tested had been fired approximately 30 years prior to 1969. The date estimate, as well as the notion that the artifacts were made by some undiscovered culture, was rejected by archaeologists and paleontologists. See also Ica stones Out-of-place artifact References External links Acámbaro figures and the Julsrud Museum at Municipality of Acámbaro official page. 1944 archaeological discoveries 1944 in Mexico Figurines Creationism Guanajuato Pseudoarchaeology Archaeological forgeries Genesis creation narrative
Acámbaro figures
[ "Biology" ]
923
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
5,516,188
https://en.wikipedia.org/wiki/Moore%20plane
In mathematics, the Moore plane, also sometimes called the Niemytzki plane (or Nemytskii plane, Nemytskii's tangent disk topology), is a topological space. It is a completely regular Hausdorff space (that is, a Tychonoff space) that is not normal. It is an example of a Moore space that is not metrizable. It is named after Robert Lee Moore and Viktor Vladimirovich Nemytskii. Definition If M is the (closed) upper half-plane M = {(x, y) ∈ ℝ² | y ≥ 0}, then a topology may be defined on M by taking a local basis as follows: Elements of the local basis at points (x, y) with y > 0 are the open discs in the plane which are small enough to lie within M. Elements of the local basis at points p = (x, 0) are sets {p} ∪ A where A is an open disc in the upper half-plane which is tangent to the x-axis at p. That is, the local basis at an interior point consists of the usual Euclidean open discs, while the local basis at a boundary point consists of tangent discs together with the point of tangency itself. Thus the subspace topology inherited by the open upper half-plane {(x, y) | y > 0} is the same as the subspace topology inherited from the standard topology of the Euclidean plane. Properties The Moore plane is separable, that is, it has a countable dense subset. The Moore plane is a completely regular Hausdorff space (i.e. Tychonoff space), which is not normal. The subspace L = {(x, 0) | x ∈ ℝ} of M has, as its subspace topology, the discrete topology. Thus, the Moore plane shows that a subspace of a separable space need not be separable. The Moore plane is first countable, but not second countable or Lindelöf. The Moore plane is not locally compact. The Moore plane is countably metacompact but not metacompact. Proof that the Moore plane is not normal The fact that this space M is not normal can be established by the following counting argument (which is very similar to the argument that the Sorgenfrey plane is not normal): On the one hand, the countable set S = {(p, q) ∈ ℚ × ℚ | q > 0} of points with rational coordinates is dense in M; hence every continuous function f : M → ℝ is determined by its restriction to S, so there can be at most |ℝ|^|S| = 2^ℵ₀ many continuous real-valued functions on M. On the other hand, the real line L = {(x, 0) | x ∈ ℝ} is a closed discrete subspace of M with 2^ℵ₀ many points. So there are 2^(2^ℵ₀) many continuous functions from L to ℝ (since L is discrete, every such function is continuous). Not all these functions can be extended to continuous functions on M. Hence M is not normal, because by the Tietze extension theorem all continuous functions defined on a closed subspace of a normal space can be extended to a continuous function on the whole space. In fact, if X is a separable topological space having an uncountable closed discrete subspace, X cannot be normal. See also Hedgehog space References Stephen Willard. General Topology, (1970) Addison-Wesley (Example 82). Topological spaces
Moore plane
[ "Mathematics" ]
553
[ "Topological spaces", "Mathematical structures", "Topology", "Space (mathematics)" ]
5,516,588
https://en.wikipedia.org/wiki/Glaze3D
Glaze3D was a family of graphics cards announced by BitBoys Oy on August 2, 1999, that would have produced substantially better performance than other consumer products available at the time. The family, which would have come in the Glaze3D 1200, Glaze3D 2400 and Glaze3D 4800 models, was supposed to offer full support for DirectX 7, OpenGL 1.2, AGP 4×, 4× anisotropic filtering, full-screen anti-aliasing and a host of other technologies not commonly seen at the time. The 1.5 million gate GPU would have been fabricated by Infineon on a 0.2 μm eDRAM process, later to be reduced to 0.17 μm, with a minimum of 9 MB of embedded DRAM and 128 to 512 MB of external SDRAM. The maximum supported video resolution was 2048×1536 pixels. Development history The Glaze3D family of cards was developed in several generations, beginning with the original Glaze3D "400" with multi-channel RDRAM instead of internal eDRAM. This was offered only as IP, but with no takers. Bitboys revised the design and decided to have it manufactured themselves, in cooperation with Infineon Technologies, the chip fabrication arm of Siemens. They came up with a new Glaze3D pitched for release in Q1 2000. The card promised extremely high performance compared to contemporary consumer GPUs. As bug-hunting, validation and manufacturing problems delayed the launch, new features became necessary, and a DX7 variant with built-in hardware Transform & Lighting was announced but never appeared. The GPU was later redesigned under a new codename, Axe, to take advantage of DirectX 8 and compete with the developing competition. The new version sported such features as an additional 3 MB of eDRAM, proprietary Matrix Antialiasing and a vastly improved fillrate, as well as offering a programmable vertex shader and a widened internal memory bus. The new card was to have been released as Avalanche3D by the end of 2001. The third development, codenamed Hammer, started development as Axe lost viability toward the end of 2001. This new card was to be a high-end DirectX 9 part, offering new features such as occlusion culling, improved rendering performance and various other innovations. This version, like the ones before it, never shipped commercially. Bitboys turned to mobile graphics and developed an accelerator licensed and probably used by at least one flat panel display manufacturer, although it was intended and designed primarily for higher-end handhelds. Later, ATI bought Bitboys as an extra research and development unit, so as of 2008 Bitboys was owned by AMD. In 2009, Bitboys was transferred to Qualcomm. Specifications Glaze3D chip Infineon on a 0.2 μm eDRAM process Compatible with OpenGL and DirectX Quad-pixel pipeline at 150 MHz 4.5 million triangles/s 10 million triangles/s with multi-chip 1.5 million logic gates 130 mm² die size 304-pin BGA Thor geometry processor PCI or AGP 2X/4X Fillrate 1.2 GigaTexel/s 4.8 GigaTexel/s with multi-chip 0.6 GigaTexel/s (dual textured) 2.4 GigaTexel/s with multi-chip Memory Embedded RAM 9 MB embedded framebuffer memory 4 modules of 2.25 MB with 3 banks each 150 MHz 9.6 GB/s memory bandwidth 512-bit interface External RAM Up to 128 MB max Texture cache 16 KB for even mipmaps and surface textures 8 KB for odd mipmaps and lightmaps Two-way associative Performance claims The Glaze3D family was well known for the bold performance claims that were associated with it. The low-end 1200 model was purported to achieve a fillrate of 1.2 billion texels per second, with a geometry throughput of 15 million triangles per second.
Most importantly, the card was originally claimed to achieve over 200 frames per second in id Software's Quake III Arena at maximum visual quality. The 1200 model's claimed specifications would place it as the rough equivalent of the GeForce FX 5200 Ultra or Radeon 9200 Pro (very low performance GPUs of 2002 vintage), while its claimed performance would place it at the same level as the GeForce 3 Ti 500 or Radeon 8500 (high-end GPUs from 2000 to 2001). To compound matters, the cards' specifications were later updated to nearly double their original performance levels. While the Glaze3D 1200 was supposed to achieve unheard-of performance in video games, it was claimed that the 2400 and 4800 models would each be substantially more powerful in turn. Using two and four GPU configurations respectively, and including an additional geometry accelerator on the 4800, the higher-end Glaze3D cards were to be aimed at the very highest end of the video-gaming market. See also ATI Technologies Nvidia References External links Glaze3D Announced (link expired as of 09.2019). PDF version of a presentation by Petri Norlund, Chief Architect at BitBoys Oy in 1999. BitBoys at Siggraph - analysis of the Glaze3D cards. (link expired as of 09.2019). A Look Inside BitBoys - a detailed description of the development history of Glaze3D. (archived). Vaporware Graphics cards
Glaze3D
[ "Technology" ]
1,120
[ "Computer industry", "Vaporware" ]
5,517,014
https://en.wikipedia.org/wiki/Mauke%20starling
The Mauke starling or mysterious starling (Aplonis mavornata) is an extinct species of starling found on the island of Mauke, Cook Islands. The binomen is the result of Buller's misreading of the name inornata on the specimen label. As he seems to have genuinely believed this spelling to be correct, the binomial, although it has no meaning, is valid. Description Bill from gape , from anterior margin of nostril, 1.24 cm. Tarsus 2.74 cm, tail 6.4 cm, wing 10.5 cm, wingspan 32 cm. Wing and tarsus measurements are somewhat less than in the living bird due to shrinkage of the specimen. The other measurements are either from the freshly killed bird or are unlikely to have changed. Dull dusky black overall, with lighter brown feather edges which are prominent on the body feathers and less conspicuous on the remiges and tail. Iris yellow. Feet dusky brownish; bill the same colour or somewhat lighter. The geographically closest relative is the Rarotonga starling, which is larger and has a greyish body plumage with light grey feather margins. In overall appearance, A. mavornata is closest to the Polynesian starling's subspecies tenebrosus of Niuatoputapu and Tafahi, Tonga; alternatively, it looks like a much (nearly one-third) smaller, yellow-eyed version of the Samoan starling. Extinction There is a lot of mystery surrounding the Mauke starling. The only known specimen (BMNH Old Vellum Catalog 12.192) was shot "hopping about [on a] tree" by Andrew Bloxam, naturalist of HMS Blonde, roughly between 2:30 and 3:30 pm on August 9, 1825. The island of Mauke was not visited again by ornithologists until 1973, by which time the bird was extinct, presumably due to predation by introduced rats. Bloxam noted that in 1825, only two years after the arrival of the first Europeans, they "saw quantities of rats with long tails, different in appearance from the common South Sea rat and resembling in colour and almost in size the Norway rat". Thus, and considering the vulnerability of other Aplonis species to rat predation, it can be assumed that the species became extinct soon thereafter. The mystery and its resolution There was much uncertainty surrounding the specimen, as it had no information on its place of origin or date of collection. Sharpe is the origin of much of this confusion, but it actually started with Buller's 1887 description, when he misread the name on the label. Sharpe corrected this to inornata, but this was both unjustified (as Buller apparently genuinely believed he had read mavornata) and in any case preoccupied, as Salvadori had already named another starling Calornis inornata in 1880. Thus, although Buller's description – a few throwaway lines in an account of the striated starling referring to the unique specimen – is barely sufficient and his name nonsensical, it is nonetheless valid according to ICZN rules. There exists a drawing by Georg Forster, made on June 1, 1774, and some notes of a bird collected on Rai'atea (formerly known as Ulieta) between May 14 and June 1 (popularised in Martin Davies' 2005 novel The Conjurer's Bird as the "Mysterious Bird of Ulieta"). Sharpe and many subsequent authors claimed that the bird in the painting was the same species as the specimen, despite numerous discrepancies between the specimen and Forster's description. Stresemann debunked this theory thoroughly, but writers did not stop referring A. mavornata to Forster's bird, connecting it with the Society Islands or with Cook's second voyage.
Only in 1986, when Olson published the results of his research, which included analysis of Bloxam's original diary and notes and concluded that his "Sturnus Mautiensis" can be identified with Buller's A. mavornata, was the mystery of Specimen 12.192 resolved. Since Bloxam's notes were originally published in a much bowdlerized and misleading edition where it is only mentioned that they "...saw [...] a starling..." without any details and especially no reference to a specimen, the true origin of the mysterious starling was long overlooked. In an ironic twist, Forster's bird, which had long puzzled ornithologists and was sometimes called "the mysterious bird of Raiatea" and variously considered a thrush or honeyeater is almost certainly another now-extinct species of Aplonis – thus, one could say that there are indeed two, not one species of "mysterious starling" from Pacific islands. References External links Species factsheet - BirdLife International Mauke starling Birds of the Cook Islands Extinct birds of Oceania Bird extinctions since 1500 Mauke starling Mauke starling Species known from a single specimen †
Mauke starling
[ "Biology" ]
1,064
[ "Individual organisms", "Species known from a single specimen" ]
5,517,287
https://en.wikipedia.org/wiki/Deconstructivism
Deconstructivism is a postmodern architectural movement which appeared in the 1980s. It gives the impression of the fragmentation of the constructed building, commonly characterised by an absence of obvious harmony, continuity, or symmetry. Its name is a portmanteau of Constructivism and "Deconstruction", a form of semiotic analysis developed by the French philosopher Jacques Derrida. Architects whose work is often described as deconstructivist (though in many cases the architects themselves reject the label) include Zaha Hadid, Peter Eisenman, Frank Gehry, Rem Koolhaas, Daniel Libeskind, Bernard Tschumi, and Coop Himmelb(l)au. The term does not inherently refer to the style's deconstructed visuals as the English adjective suggests, but instead derives from the movement's foundations in contrast to the Russian Constructivist movement during the First World War that "broke the rules" of classical architecture through the French language. Besides fragmentation, deconstructivism often manipulates the structure's surface skin and deploys non-rectilinear shapes which appear to distort and dislocate established elements of architecture. The finished visual appearance is characterized by unpredictability and controlled chaos. History, context and influences Deconstructivism came to public notice with the 1982 Parc de la Villette architectural design competition, in particular the entry from Jacques Derrida and Peter Eisenman and the winning entry by Bernard Tschumi, as well as the Museum of Modern Art’s 1988 Deconstructivist Architecture exhibition in New York, organized by Philip Johnson and Mark Wigley. Tschumi stated that calling the work of these architects a "movement" or a new "style" was out of context and showed a lack of understanding of their ideas, and believed that Deconstructivism was simply a move against the practice of PoMo, which he said involved "making Doric temple forms out of plywood". Other influential exhibitions include the 1989 opening of the Wexner Center for the Arts in Columbus, designed by Peter Eisenman. The New York exhibition has featured works by Frank Gehry, Daniel Libeskind, Rem Koolhaas, Peter Eisenman, Zaha Hadid, Coop Himmelb(l)au, and Bernard Tschumi. Since their exhibitions, some architects associated with Deconstructivism have distanced themselves from it; nonetheless, the term has stuck and has come to embrace a general trend within Contemporary architecture. Early antecedents of the architectural movement could be found in industrial design, notably in Ettore Sottsass' design for the 1969 Olivetti Valentine typewriter, a non-conformist design that deconstructed what was typically the typewriter's bodywork, revealing elements normally concealed, using 'floating keys' and a body-colored plastic 'rail' ahead of the spacebar, visually detached from the typewriter's main body. Modernism and postmodernism The term Deconstructivism in contemporary architecture is opposed to the ordered rationality of Modernism and Postmodernism. Though postmodernist and nascent deconstructivist architects both published in the journal Oppositions (published between 1973 and 1984), that journal's contents mark a decisive break between the two movements. Deconstructivism took a confrontational stance to architectural history, wanting to "disassemble" architecture. 
While postmodernism returned to embrace the historical references that modernism had shunned, possibly ironically, deconstructivism rejected the postmodern acceptance of such references, as well as the idea of ornament as an after-thought or decoration. In addition to Oppositions, a defining text for both deconstructivism and postmodernism was Robert Venturi's Complexity and Contradiction in Architecture (1966). It argues against the purity, clarity and simplicity of modernism. With its publication, functionalism and rationalism, the two main branches of modernism, were overturned as paradigms. The reading of the postmodernist Venturi was that ornament and historical allusion added a richness to architecture that modernism had foregone. Some Postmodern architects endeavored to reapply ornament even to economical and minimal buildings, described by Venturi as "the decorated shed". Rationalism of design was dismissed but the functionalism of the building was still somewhat intact. This is close to the thesis of Venturi's next major work, that signs and ornament can be applied to a pragmatic architecture, and instill the philosophic complexities of semiology. The deconstructivist reading of Complexity and Contradiction is quite different. The basic building was the subject of problematics and intricacies in deconstructivism, with no detachment for ornament. Rather than separating ornament and function, like postmodernists such as Venturi, the functional aspects of buildings were called into question. Geometry was to deconstructivists what ornament was to postmodernists, the subject of complication, and this complication of geometry was in turn, applied to the functional, structural, and spatial aspects of deconstructivist buildings. One example of deconstructivist complexity is Frank Gehry's Vitra Design Museum in Weil-am-Rhein, which takes the typical unadorned white cube of modernist art galleries and deconstructs it, using geometries reminiscent of cubism and abstract expressionism. This subverts the functional aspects of modernist simplicity while taking modernism, particularly the international style, of which its white stucco skin is reminiscent, as a starting point. Another example of the deconstructivist reading of Complexity and Contradiction is Peter Eisenman's Wexner Center for the Arts. The Wexner Center takes the archetypal form of the castle, which it then imbues with complexity in a series of cuts and fragmentations. A three-dimensional grid runs somewhat arbitrarily through the building. The grid, as a reference to modernism, of which it is an accoutrement, collides with the medieval antiquity of a castle. Some of the grid's columns intentionally do not reach the ground, hovering over stairways creating a sense of neurotic unease and contradicting the structural purpose of the column. The Wexner Center deconstructs the archetype of the castle and renders its spaces and structure with conflict and difference. Deconstructivist philosophy Some Deconstructivist architects were influenced by the French philosopher Jacques Derrida. Eisenman was a friend of Derrida, but even so his approach to architectural design was developed long before he became a Deconstructivist. For him Deconstructivism should be considered an extension of his interest in radical formalism. Some practitioners of deconstructivism were also influenced by the formal experimentation and geometric imbalances of Russian constructivism. 
There are additional references in deconstructivism to 20th-century movements: the modernism/postmodernism interplay, expressionism, cubism, minimalism and contemporary art. Deconstructivism attempts to move away from the supposedly constricting 'rules' of modernism such as "form follows function", "purity of form", and "truth to materials". The main channel from deconstructivist philosophy to architectural theory was through the philosopher Jacques Derrida's influence with Peter Eisenman. Eisenman drew some philosophical bases from the literary movement Deconstruction, and collaborated directly with Derrida on projects including an entry for the Parc de la Villette competition, documented in Chora l Works. Both Derrida and Eisenman, as well as Daniel Libeskind were concerned with the "metaphysics of presence", and this is the main subject of deconstructivist philosophy in architecture theory. The presupposition is that architecture is a language capable of communicating meaning and of receiving treatments by methods of linguistic philosophy. The dialectic of presence and absence, or solid and void occurs in much of Eisenman's projects, both built and unbuilt. Both Derrida and Eisenman believe that the locus, or place of presence, is architecture, and the same dialectic of presence and absence is found in construction and deconstructivism. According to Derrida, readings of texts are best carried out when working with classical narrative structures. Any architectural deconstructivism requires the existence of a particular archetypal construction, a strongly-established conventional expectation to play flexibly against. The design of Frank Gehry’s own Santa Monica residence, (from 1978), has been cited as a prototypical deconstructivist building. His starting point was a prototypical suburban house embodied with a typical set of intended social meanings. Gehry altered its massing, spatial envelopes, planes and other expectations in a playful subversion, an act of "de"construction" In addition to Derrida's concepts of the metaphysics of presence and deconstructivism, his notions of trace and erasure, embodied in his philosophy of writing and arche-writing found their way into deconstructivist memorials. Daniel Libeskind envisioned many of his early projects as a form of writing or discourse on writing and often works with a form of concrete poetry. He made architectural sculptures out of books and often coated the models in texts, openly making his architecture refer to writing. The notions of trace and erasure were taken up by Libeskind in essays and in his project for the Jewish Museum Berlin. The museum is conceived as a trace of the erasure of the Holocaust, intended to make its subject legible and poignant. Memorials such as Maya Lin's Vietnam Veterans Memorial and Peter Eisenman's Memorial to the Murdered Jews of Europe are also said to reflect themes of trace and erasure. Constructivism and Russian Futurism Another major current in deconstructivist architecture takes inspiration from the Constructivist and Russian Futurist movements of the early twentieth century, both in their graphics and in their visionary architecture, little of which was actually constructed. Artists Naum Gabo, El Lissitzky, Kazimir Malevich, and Alexander Rodchenko, have influenced the graphic sense of geometric forms of deconstructivist architects such as Zaha Hadid and Coop Himmelb(l)au. Both Deconstructivism and Constructivism have been concerned with the tectonics of making an abstract assemblage. 
Both were concerned with the radical simplicity of geometric forms as the primary artistic content, expressed in graphics, sculpture and architecture. The Constructivist tendency toward purism, though, is absent in Deconstructivism: form is often deformed when construction is deconstructed. Also lessened or absent is the advocacy of socialist and collectivist causes. The primary graphic motifs of constructivism were the rectangular bar and the triangular wedge, others were the more basic geometries of the square and the circle. In his series Prouns, El Lizzitzky assembled collections of geometries at various angles floating free in space. They evoke basic structural units such as bars of steel or sawn lumber loosely attached, piled, or scattered. They were also often drafted and share aspects with technical drawing and engineering drawing. Similar in composition is the deconstructivist series Micromegas by Daniel Libeskind. Contemporary art Two strains of modern art, minimalism and cubism, have had an influence on deconstructivism. Analytical cubism had a sure effect on deconstructivism, as forms and content are dissected and viewed from different perspectives simultaneously. A synchronicity of disjoined space is evident in many of the works of Frank Gehry and Bernard Tschumi. Synthetic cubism, with its application of found object art, is not as great an influence on deconstructivism as Analytical cubism, but is still found in the earlier and more vernacular works of Frank Gehry. Deconstructivism also shares with minimalism a disconnection from cultural references. With its tendency toward deformation and dislocation, there is also an aspect of expressionism and expressionist architecture associated with deconstructivism. At times deconstructivism mirrors varieties of expressionism, neo-expressionism, and abstract expressionism as well. The angular forms of the Ufa Cinema Center by Coop Himmelb(l)au recall the abstract geometries of the numbered paintings of Franz Kline, in their unadorned masses. The UFA Cinema Center also would make a likely setting for the angular figures depicted in urban German street scenes by Ernst Ludwig Kirchner. The work of Wassily Kandinsky also bears similarities to deconstructivist architecture. His movement into abstract expressionism and away from figurative work, is in the same spirit as the deconstructivist rejection of ornament for geometries. Several artists in the 1980s and 1990s contributed work that influenced or took part in deconstructivism. Maya Lin and Rachel Whiteread are two examples. Lin's 1982 project for the Vietnam Veterans Memorial, with its granite slabs severing the ground plane, is one. Its shard-like form and reduction of content to a minimalist text influenced deconstructivism, with its sense of fragmentation and emphasis on reading the monument. Lin also contributed work for Eisenman's Wexner Center. Rachel Whiteread's cast architectural spaces are another instance where contemporary art is confluent with architecture. Ghost (1990), an entire living space cast in plaster, solidifying the void, alludes to Derrida's notion of architectural presence. Gordon Matta-Clark's Building cuts were deconstructed sections of buildings exhibited in art galleries. 1988 MoMA exhibition Mark Wigley and Philip Johnson curated the 1988 Museum of Modern Art exhibition Deconstructivist architecture, which crystallized the movement, and brought fame and notoriety to its key practitioners. 
The architects presented at the exhibition were Peter Eisenman, Frank Gehry, Zaha Hadid, Coop Himmelblau, Rem Koolhaas, Daniel Libeskind, and Bernard Tschumi. Mark Wigley wrote the accompanying essay and tried to show a common thread among the various architects whose work was usually more noted for their differences. Computer-aided design Computer-aided design is now an essential tool in most aspects of contemporary architecture, but the particular nature of deconstructivism makes the use of computers especially pertinent. Three-dimensional modelling and animation (virtual and physical) assists in the conception of very complicated spaces, while the ability to link computer models to manufacturing jigs (CAM—computer-aided manufacturing) allows the mass production of subtly different modular elements to be achieved at affordable costs. Also, Gehry is noted for producing many physical models as well as computer models as part of his design process. Though the computer has made the designing of complex shapes much easier, not everything that looks odd is "deconstructivist". Gallery Critical responses Since the publication of Kenneth Frampton's Modern Architecture: A Critical History (first edition 1980) there has been a keen consciousness of the role of criticism within architectural theory. Whilst referencing Derrida as a philosophical influence, deconstructivism can also be seen as having as much a basis in critical theory as the other major offshoot of postmodernism, critical regionalism. The two aspects of critical theory, urgency and analysis, are found in deconstructivism. There is a tendency to re-examine and critique other works or precedents in deconstructivism, and also a tendency to set aesthetic issues in the foreground. An example of this is the Wexner Center. Critical Theory, however, had at its core a critique of capitalism and its excess, and from that respect many of the works of the Deconstructivists would fail in that regard if only they are made for an elite and are, as objects, highly expensive, despite whatever critique they may claim to impart on the conventions of design. The difference between criticality in deconstructivism and criticality in critical regionalism is that critical regionalism reduces the overall level of complexity involved and maintains a clearer analysis while attempting to reconcile modernist architecture with local differences. In effect, this leads to a modernist "vernacular". Critical regionalism displays a lack of self-criticism and a utopianism of place. Deconstructivism, meanwhile, maintains a level of self-criticism and a dystopianism of place, as well as external criticism and tends towards maintaining a level of complexity. Some architects identified with the movement, notably Frank Gehry, have actively rejected the classification of their work as deconstructivist. Critics of deconstructivism see it as a purely formal exercise with little social significance. Kenneth Frampton finds it "elitist and detached". Nikos Salingaros calls deconstructivism a "viral expression" that invades design thinking in order to build destroyed forms; while curiously similar to both Derrida's and Philip Johnson's descriptions, this is meant as a harsh condemnation of the entire movement. Other criticisms are similar to those of deconstructivist philosophy—that since the act of deconstructivism is not an empirical process, it can result in whatever an architect wishes, and it thus suffers from a lack of consistency. 
Today there is a sense that the philosophical underpinnings of the beginning of the movement have been lost, and all that is left is the aesthetic of deconstructivism. Other criticisms reject the premise that architecture is a language capable of being the subject of linguistic philosophy, or, if it was a language in the past, critics claim it is no longer. Others question the wisdom and impact on future generations of an architecture that rejects the past and presents no clear values as replacements and which often pursues strategies that are intentionally aggressive to human senses. See also Günter Behnisch Constructivism (art) Deconstruction (fashion) Futurism (art) Khôra Thom Mayne Novelty architecture Reconstruction (architecture) Rooftop Remodeling Falkestrasse Structuralism (architecture) Vorticism Citations General and cited references Derrida, Jacques (1967). Of Grammatology, (hardcover: , paperback: , corrected edition: ) trans. Gayatri Chakravorty Spivak. Johns Hopkins University Press. Derrida, Jacques & Eisenman, Peter (1997). Chora l Works. Monacelli Press. . Derrida, Jacques & Husserl, Edmund (1989). Edmund Husserl's Origin of Geometry: An Introduction. University of Nebraska Press. Frampton, Kenneth (1992). Modern Architecture, a critical history. Thames & Hudson- Third Edition. Johnson, Phillip & Wigley, Mark (1988). Deconstructivist Architecture: The Museum of Modern Art, New York. Little Brown and Company. Hays, K.M. (ed.) (1998). Oppositions Reader. Princeton Architectural Press. Kandinsky, Wassily. Point and Line to Plane. Dover Publications, New York. McLeod, Mary, "Architecture and Politics in the Reagan Era: From Postmodernism to Deconstructivism," "Assemblage," 8 (1989), pp. 23–59. Rickey, George (1995). Constructivism: Origins and Evolution. George Braziller; Revised edition. Salingaros, Nikos (2008). "Anti-Architecture and Deconstruction", 3rd edition. Umbau-Verlag, Solingen, Germany. Tschumi, Bernard (1994). Architecture and Disjunction. The MIT Press. Cambridge. Van der Straeten, Bart. Image and Narrative – The Uncanny and the architecture of Deconstruction Retrieved April, 2006. Venturi, Robert (1966). Complexity and Contradiction in Architecture, The Museum of Modern Art Press, New York. Venturi, Robert (1977). Learning from Las Vegas (with D. Scott Brown and S. Izenour), Cambridge MA, 1972, revised 1977. Wigley, Mark (1995). The Architecture of Deconstruction: Derrida's Haunt. The MIT Press. . Vicente Esteban Medina (2003) Forma y composición en la Arquitectura deconstructivista, © Tesis doctoral, Universidad Politécnica de Madrid. Registro Propiedad Intellectual Madrid Nº 16/2005/3967. Link de descarga de tesis en pdf: http://oa.upm.es/481/ Further reading External links Wiener Postmoderne Vicente Esteban Medina (2003). Forma y composición en la Arquitectura deconstructivista Art movements 20th-century architectural styles 21st-century architectural styles + Architectural design Deconstructivist Architecture
Deconstructivism
[ "Engineering" ]
4,403
[ "Postmodern architecture", "Design", "Architectural design", "Architecture" ]
5,517,376
https://en.wikipedia.org/wiki/Sphenosuchia
Sphenosuchia is a suborder of basal crocodylomorphs that first appeared in the Triassic and occurred into the Middle Jurassic. Most were small, gracile animals with an erect limb posture. They are now thought to be ancestral to crocodyliforms, a group which includes all living crocodilians. Stratigraphic range The earliest known members of the group (i.e. Hesperosuchus) are early Norian in age, found in the Blue Mesa Member of the Chinle Formation. Only one sphenosuchian is currently known from the Middle Jurassic, Junggarsuchus, from the Junggar Basin (Shishugou Formation) of China during either the Bathonian or the Callovian (~165 Ma) age, and the Hallopodidae are known from the Late Jurassic of North America. Phylogeny The monophyly of the group is debated, although several synapomorphies characterize the clade, including extremely slender limbs, a compact carpus and an elongate coracoid process. In 2002, Clark and Sues found a possible sphenosuchian clade of Dibothrosuchus, Sphenosuchus, and possibly Hesperosuchus and Saltoposuchus, with several other genera in unresolved positions (Kayentasuchus, Litargosuchus, Pseudhesperosuchus, and Terrestrisuchus). More recently, however, Clark et al. (2004) argued for the paraphyly of the group, contending that morphological characters were secondarily lost in more highly derived crocodylomorphs. Further analysis and study is required before the group's monophyly is resolved with certainty — a perfect phylogenetic analysis is, at present, impossible due to a paucity of fossil remains demonstrating phylogenetically informative characters. Below is a cladogram modified from Nesbitt (2011). Sphenosuchians are marked by the green bracket. Genera References Terrestrial crocodylomorphs Triassic crocodylomorpha Jurassic crocodylomorphs Paraphyletic groups
Sphenosuchia
[ "Biology" ]
453
[ "Phylogenetics", "Paraphyletic groups" ]
5,517,534
https://en.wikipedia.org/wiki/Frankford%20Avenue%20Bridge
The Frankford Avenue Bridge, also known as the Pennypack Creek Bridge, the Pennypack Bridge, the Holmesburg Bridge, and the King's Highway Bridge, erected in 1697 in the Holmesburg section of Northeast Philadelphia, in the U.S. state of Pennsylvania, is the oldest surviving roadway bridge in the United States. The three-span, twin stone arch bridge carries Frankford Avenue (U.S. Route 13), just north of Solly Avenue, over Pennypack Creek in Pennypack Park. The bridge was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1970. It was listed in the National Register of Historic Places in 1988. Construction The bridge, built at the request of William Penn to connect his mansion with the new city of Philadelphia, was an important link on the King's Highway that linked Philadelphia with cities to the north (Trenton, New York, and Boston). On March 10, 1683, the Pennsylvania General Assembly passed a law requiring the building of bridges across all of the rivers and creeks along all of the King's Highway in Pennsylvania, from the Falls of the Delaware (at Trenton, N.J.) to the southernmost ports of Sussex County (now part of the state of Delaware). The bridges, which were to be completed within 18 months, were to be ten feet wide and include railings along each side. The areas on either side of the bridges were to be cleared to facilitate horse and cart traffic. Each bridge was to be built by male inhabitants of the surrounding area; those who failed to appear were to be fined 20 shillings. In 1970, the bridge earned an award by the American Society of Civil Engineers, Philadelphia Section, as an outstanding engineering achievement and a historic civil engineering landmark. A bronze plaque was placed on the western parapet in commemoration. Notable travelers Anyone who traveled to Philadelphia by horseback or coach from the northern colonies crossed over the bridge, including delegates to the First or Second Continental Congresses, such as John Adams, from Massachusetts. In 1789, George Washington crossed the bridge on his way to his first presidential inauguration in New York. Improvements In 1803, the bridge was paved with macadam, and at its south end a toll booth was erected, remaining in operation until 1892 when the turnpike was purchased by the city of Philadelphia. The bridge was widened in 1893 to accommodate streetcars, which commenced service in 1895, and again in 1950 to better accommodate automobile traffic. It remains in use today. The bridge was reconstructed during 2018. Transportation SEPTA's trackless trolley route 66, which was formerly a streetcar, crosses the bridge on its journey from Frankford Transportation Center to Torresdale. Honors The bridge was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1970. It was listed in the National Register of Historic Places in 1988. See also List of bridges documented by the Historic American Engineering Record in Pennsylvania Frankford Avenue Bridge over Poquessing Creek, built 1904, also on the National Register. References External links Friends of Pennypack Park article American Society of Civil Engineers - Frankford Avenue Bridge Bridges on the National Register of Historic Places in Philadelphia Bridges completed in the 17th century Buildings and structures completed in 1697 Transport infrastructure completed in the 1690s U.S. 
Route 13 Historic Civil Engineering Landmarks Historic American Buildings Survey in Philadelphia Historic American Engineering Record in Philadelphia Philadelphia Register of Historic Places Holmesburg, Philadelphia Former toll bridges in Pennsylvania Bridges of the United States Numbered Highway System Road bridges on the National Register of Historic Places in Pennsylvania 1697 establishments in Pennsylvania Stone arch bridges in the United States Bridges in Philadelphia
Frankford Avenue Bridge
[ "Engineering" ]
725
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
5,517,556
https://en.wikipedia.org/wiki/Extension%20topology
In topology, a branch of mathematics, an extension topology is a topology placed on the disjoint union of a topological space and another set. There are various types of extension topology, described in the sections below. Extension topology Let X be a topological space and P a set disjoint from X. Consider in X ∪ P the topology whose open sets are of the form A ∪ Q, where A is an open set of X and Q is a subset of P. The closed sets of X ∪ P are of the form B ∪ Q, where B is a closed set of X and Q is a subset of P. For these reasons this topology is called the extension topology of X plus P, with which one extends to X ∪ P the open and the closed sets of X. As subsets of X ∪ P the subspace topology of X is the original topology of X, while the subspace topology of P is the discrete topology. As a topological space, X ∪ P is homeomorphic to the topological sum of X and P, and X is a clopen subset of X ∪ P. If Y is a topological space and R is a subset of Y, one might ask whether the extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no. Note the similarity of this extension topology construction and the Alexandroff one-point compactification, in which case, having a topological space X which one wishes to compactify by adding a point ∞ at infinity, one considers the closed sets of X ∪ {∞} to be the sets of the form K, where K is a closed compact set of X, or B ∪ {∞}, where B is a closed set of X. Open extension topology Let X be a topological space and P a set disjoint from X. The open extension topology of X plus P is the topology on X ∪ P whose open sets are the open sets of X together with the sets of the form X ∪ Q, where Q is a subset of P. The subspace topology of X is the original topology of X, while the subspace topology of P is the discrete topology. The closed sets of X ∪ P are of the form B ∪ P, where B is a closed set of X, or Q, where Q is a subset of P. Note that P is closed in X ∪ P and X is open and dense in X ∪ P. If Y is a topological space and R is a subset of Y, one might ask whether the open extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no. Note that the open extension topology of X ∪ P is smaller than the extension topology of X ∪ P. Assuming X and P are not empty to avoid trivialities, here are a few general properties of the open extension topology: X is dense in X ∪ P. If P is finite, X ∪ P is compact. So X ∪ P is a compactification of X in that case. X ∪ P is connected. If P has a single point, X ∪ P is ultraconnected. For a set Z and a point p in Z, one obtains the excluded point topology construction by considering in Z the discrete topology and applying the open extension topology construction to Z – {p} plus p. Closed extension topology Let X be a topological space and P a set disjoint from X. Consider in X ∪ P the topology whose closed sets are of the form X ∪ Q, where Q is a subset of P, or B, where B is a closed set of X. For this reason this topology is called the closed extension topology of X plus P, with which one extends to X ∪ P the closed sets of X. As subsets of X ∪ P the subspace topology of X is the original topology of X, while the subspace topology of P is the discrete topology. The open sets of X ∪ P are of the form Q, where Q is a subset of P, or A ∪ P, where A is an open set of X. Note that P is open in X ∪ P and X is closed in X ∪ P. If Y is a topological space and R is a subset of Y, one might ask whether the closed extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no. Note that the closed extension topology of X ∪ P is smaller than the extension topology of X ∪ P.
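The three constructions above can be verified mechanically on a small finite example. The following Python sketch is an editorial illustration rather than part of the article's sources; the helper names (powerset, is_topology and the three *_topology builders) are invented for this example. It builds each topology for X = {0, 1} with open sets {}, {0}, {0, 1} and P = {2}, checks the topology axioms, and confirms that the subspace topology induced on P is discrete in each case.

from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def is_topology(space, opens):
    # A family of subsets of a finite space is a topology iff it contains the
    # empty set and the whole space and is closed under pairwise unions and
    # intersections (pairwise closure suffices for finite families).
    opens = set(opens)
    if frozenset() not in opens or frozenset(space) not in opens:
        return False
    return all(a | b in opens and a & b in opens for a in opens for b in opens)

def extension_topology(X_opens, X, P):
    # Open sets: A ∪ Q with A open in X and Q a subset of P.
    return {a | q for a in X_opens for q in powerset(P)}

def open_extension_topology(X_opens, X, P):
    # Open sets: the open sets of X together with X ∪ Q for Q a subset of P.
    return set(X_opens) | {frozenset(X) | q for q in powerset(P)}

def closed_extension_topology(X_opens, X, P):
    # Open sets: the subsets Q of P together with A ∪ P for A open in X.
    return set(powerset(P)) | {a | frozenset(P) for a in X_opens}

X, P = {0, 1}, {2}
X_opens = [frozenset(), frozenset({0}), frozenset({0, 1})]  # a topology on X
space = X | P

builders = [("extension", extension_topology),
            ("open extension", open_extension_topology),
            ("closed extension", closed_extension_topology)]
for name, build in builders:
    top = build(X_opens, X, P)
    assert is_topology(space, top), name
    # The subspace topology induced on P is discrete in all three cases.
    assert {u & frozenset(P) for u in top} == set(powerset(P)), name
    print(name + ":", sorted(sorted(u) for u in top))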
For a set Z and a point p in Z, one obtains the particular point topology construction by considering in Z the discrete topology and applying the closed extension topology construction to Z – {p} plus p. Notes Works cited Topological spaces Topology
Extension topology
[ "Physics", "Mathematics" ]
888
[ "Mathematical structures", "Space (mathematics)", "Topological spaces", "Topology", "Space", "Geometry", "Spacetime" ]
5,517,733
https://en.wikipedia.org/wiki/Applied%20Mechanics%20Division
The Applied Mechanics Division (AMD) is a division in the American Society of Mechanical Engineers (ASME). The AMD was founded in 1927, with Stephen Timoshenko being the first chair. The current AMD membership is over 5000, out of about 90,000 members of the ASME. AMD is the largest of the six divisions in the ASME Basic Engineering Technical Group. Mission The mission of the Applied Mechanics Division is to foster fundamental research in, and intelligent application of, applied mechanics. Summer Meeting The Division participates annually in a Summer Meeting by programming Symposia and committee meetings. The principal organisers of the Summer Meetings rotate among several organizations, with a period of four years, as described below. Year 4n (2020, 2024, etc.): International Union of Theoretical and Applied Mechanics (IUTAM). Year 4n + 1 (2017, 2021, etc.): Materials Division of the ASME (joined with the Applied Mechanics Division of ASME, Engineering Mechanics of the American Society of Civil Engineers, and Society of Engineering Sciences). Year 4n + 2 (2018, 2022, etc.): National Committee of Theoretical and Applied Mechanics. Year 4n + 3 (2019, 2023, etc.): Applied Mechanics Division of the ASME (joined with Materials Division of ASME). Publications Newsletters of the Applied Mechanics Division Journal of Applied Mechanics Applied Mechanics Reviews Awards Timoshenko Medal Koiter Medal Drucker Medal Thomas K. Caughey Dynamics Award Ted Belytschko Applied Mechanics Award Thomas J.R. Hughes Young Investigator Award Journal of Applied Mechanics Award These awards are conferred every year at the Applied Mechanics Division Banquet held during the annual ASME (IMECE) conference. Awards other than those mentioned above are also celebrated during this banquet, such as the Haythornthwaite Research Initiation Grant Award and the Eshelby Mechanics Award for Young Faculty. Executive committee The responsibility for guiding the Division, within the framework of the ASME, is vested in an executive committee of five members. The executive committee meets twice a year at the Summer Meeting and Winter Annual Meeting. Members correspond throughout the year by emails and conference calls. Three members shall constitute a quorum, and all action items must be approved by a majority of the committee. Each member serves a term of five years, beginning and ending at the conclusion of the Summer Meeting, spending one year in each of the following positions: Secretary Vice-chair of the Program Committee Chair of the Program Committee Vice-chair of the Division Chair of the Division New members of the executive committee are sought from the entire membership of the Division. Due considerations are given to leadership, technical accomplishment, as well as diversity in geographic locations, sub-disciplines, and genders. At the Winter Annual Meeting each year, the executive committee nominates one new member, who is subsequently appointed by the ASME Council. The executive committee has an additional non-rotating position, the Recording Secretary. The responsibility of the Recording Secretary is to attend and record minutes for the Executive Committee Meeting at the Summer and Winter Annual Meeting and the General Committee Meeting at the Winter Annual Meeting. The Recording Secretary serves a term of two years and is selected from the junior members (i.e. young investigators) of the AMD. 
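As an editorial illustration of the four-year rotation of Summer Meeting organisers described above (the function and dictionary names are invented for this sketch), the schedule can be expressed as a simple lookup on the year modulo 4 in Python:

# Rotation of principal organisers of the Summer Meeting, keyed by year mod 4
# (2020 is a multiple of 4, so years of the form 4n map to remainder 0).
ORGANISERS = {
    0: "International Union of Theoretical and Applied Mechanics (IUTAM)",
    1: "ASME Materials Division (with AMD, ASCE Engineering Mechanics, SES)",
    2: "National Committee of Theoretical and Applied Mechanics",
    3: "ASME Applied Mechanics Division (with ASME Materials Division)",
}

def summer_meeting_organiser(year):
    return ORGANISERS[year % 4]

for year in (2020, 2021, 2022, 2023):
    print(year, "->", summer_meeting_organiser(year))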
Current members of the Executive Committee Yashashree Kulkarni, University of Houston, Houston, TX, United States: Secretary Samantha Daly, University of California at Santa Barbara, Santa Barbara, CA, United States: Vice-chair of the Program Committee Narayana Aluru, University of Texas at Austin, Austin, TX, United States: Chair of the Program Committee Glaucio Paulino, Princeton University, Princeton NJ, United States: Vice-chair of the Division Marco Amabili, McGill University, Montreal, Canada: Chair of the Division Technical Committees The mission of a Technical Committee is to promote a field in Applied Mechanics. The principal approach for a Technical Committee to accomplish this mission is to organize symposia at the Summer and Winter Meetings. Technical Committees generally meet at the Winter Annual Meeting and the Summer Meeting; they may also schedule special meetings. There are 17 Technical Committees in the Applied Mechanics Division. Technical Committees are established and dissolved by the executive committee. Financial History See Naghdi's "A Brief History of the Applied Mechanics Division of ASME" for details of the history from 1927 to 1977. Past chairs of the Applied Mechanics Division Taher Saif (2023), Pradeep Guduru (2022), Yuri Bazilevs (2021), Yonggang Huang (2020), Balakumar Balachandran (2019), Pradeep Sharma (2018), Arun Shukla (2017), Peter Wriggers (2016), Huajian Gao (2015), Lawrence A. Bergman (2014), Ken Liechti (2013), Ares Rosakis (2012), Tayfun Tezduyar (2011), Zhigang Suo (2010), Dan Inman (2009), K. Ravi-Chandar (2008), Thomas N. Farris (2007), Wing Kam Liu (2006), Mary C. Boyce (2005), Pol Spanos (2004), Stelios Kyriakides (2003), Dusan Krajcinovic (2002), Thomas J.R. Hughes (2001), Alan Needleman (2000), Lallit Anand (1999), Stanley A. Berger (1998), Carl T. Herakovich (1997), Thomas A. Cruse (1996), John W. Hutchinson (1995), L.B. Freund (1994), David B. Bogy (1993), William S. Saric (1992), Ted Belytschko (1991), Michael J. Forrestal (1990), Sidney Leibovich (1989), Thomas L. Geers (1988), James R. Rice (1987), Michael M. Carroll (1986), Jan D. Achenbach (1985), Charles R. Steele (1984), William G. Gottenberg (1983), R.C. DiPrima (1982), R.M. Christensen (1981), R.S. Rivlin (1980), Richard Skalak (1979), F. Essenburg (1978), Yuan-Cheng Fung (1977), J. Miklowitz (1976), B.A. Boley (1975), George Herrmann (1974), J. Kestin (1973), Paul M. Naghdi (1972), S. Levy (1971), H.N. Abramson (1970), Stephen H. Crandall (1969), P.G. Hodge Jr. (1968), R. Plunkett (1967), M.V. Barton (1966), George F. Carrier (1965), Daniel C. Drucker (1964), E. Reissner (1963), A.M. Wahl (1961, 1962), S.B. Batdorf (1960), William Prager (1959), W. Ramberg (1958), M. Hetenyl (1957), Raymond D. Mindlin (1956), Nicholas J. Hoff (1955), N.M. Newmark (1954), D. Young (1953), R.E. Peterson (1952), L.H. Donnell (1951), R.P. Kroon (1950), M. Golan (1949), W.M. Murray (1948), H.W. Emmons (1947), H. Poritsky (1946), J.N. Goodier (1945), J.H. Keenan (1943, 1944), H.L. Dryden (1942), J.P. Den Hartog (1940, 1941), C.R. Soderberg (1937,1938), E.O. Waters (1936), J.A. Goff (1935), F.M. Lewis (1934), J.M. Lessells (1933), G.B. Pegram (1932), A.L. Kimball (1931), G.M. Eaton (1928, 1929), Stephen P. Timoshenko (1927, 1930) Relevant websites Homepage of Applied Mechanics Division iMechanica.org, a web of mechanics and mechanicians. References P.M. Naghdi, A brief history of the Applied Mechanics Division of ASME. Journal of Applied Mechanics 46, 723–794. 
Bylaws of Applied Mechanics Division Organizations established in 1927 American Society of Mechanical Engineers
Applied Mechanics Division
[ "Engineering" ]
1,800
[ "American Society of Mechanical Engineers", "Mechanical engineering organizations" ]
5,517,814
https://en.wikipedia.org/wiki/Michael%20Bartosh
Michael Bartosh (September 18, 1977 – June 11, 2006) was president and CTO of 4am Media, Inc., an Apple Certified Trainer, certified member of the Apple Consultants Network, published author and former systems engineer for Apple Computer. Prior to joining Apple full-time he had worked as an Apple campus rep (at Texas A&M) and had the opportunity to meet Steve Jobs after his 1999 MacWorld keynote. His main focus and expertise was directory services and integration, and he was considered by members of the Macintosh support and development community to be one of the foremost experts on the subject, having literally "written the book." His most recent work included Mac OS X Tiger Server Administration (published posthumously), Essential Mac OS X Panther Server Administration, articles published on O'Reilly network (Open Directory and Active Directory parts 1-4 and Panther and Active Directory), as well as presentations and classes at many training centers/events, trade shows and conferences. He was also a regular contributor on several technical mailing lists related to Mac OS X and Mac OS X Server. Death He died as a result of injuries caused by a fall from a balcony at a friend's home in Tokyo in June 2006. Police ruled the death an accident. The Michael Bartosh Memorial Scholarship was created in his honor. Bibliography Mac OS X Tiger Server Administration, O'Reilly Media, September 2006, Essential Mac OS X Panther Server Administration, O'Reilly Media, May 2005, References External links 4AM Media was Michael's training and consulting business. Bio and list of articles at O'Reilly. 1977 births 2006 deaths Accidental deaths from falls Apple Inc. employees Computer systems engineers Technical writers Accidental deaths in Japan
Michael Bartosh
[ "Technology" ]
343
[ "Computer systems engineers", "Computer systems" ]
5,517,983
https://en.wikipedia.org/wiki/Period-after-opening%20symbol
The period-after-opening symbol or PAO symbol is a graphic symbol that identifies the useful lifetime of a cosmetic product after its package has been opened for the first time. It depicts an open cosmetics pot and is used together with a written number of months or years. In the European Union, cosmetics products with a shelf life of at least 30 months are not required to carry a "best used before end of ..." date. Instead, there has to be "an indication of the period of time after opening for which the product can be used without any harm to the consumer". The EU Cosmetics Directive defines in Annex VIIIa the language-neutral open-jar symbol, which manufacturers should use to indicate this period. The time period is most often represented compactly as a number of months, followed by the letter "M", as in "36M" for a period of thirty-six months, written either onto the front side of the depicted pot or to the right or bottom of it. The letter "M" is the initial of the word for month in English and in many other European languages. It is also used in the ISO 8601 duration notation. References Cosmetics Directive 76/768/EEC, Annex VIIIa, as modified by Directive 2003/15/EC. Practical implementation of Article 6(1)(c) of the Cosmetics Directive (76/768/EEC), Labelling of product durability: "Period of time after opening", European Commission, 04/ENTR/COS/28. External links What is the Period of time after opening? – European Commission web site Pictograms Cosmetics Consumer symbols
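As an editorial illustration only (the function name is invented and is not part of the Directive or of ISO 8601), the compact month notation described above can be parsed with a few lines of Python:

import re

def parse_pao_months(label):
    # Parse a period-after-opening label such as "36M" into a number of months.
    # Only the common "<number>M" form described above is handled; anything
    # else raises ValueError.
    match = re.fullmatch(r"\s*(\d+)\s*M\s*", label, flags=re.IGNORECASE)
    if not match:
        raise ValueError(f"not a recognised PAO label: {label!r}")
    return int(match.group(1))

print(parse_pao_months("36M"))   # 36
print(parse_pao_months("12 m"))  # 12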
Period-after-opening symbol
[ "Mathematics" ]
339
[ "Symbols", "Pictograms" ]
5,518,314
https://en.wikipedia.org/wiki/Association%20for%20Computer%20Aided%20Design%20in%20Architecture
The Association for Computer Aided Design In Architecture (ACADIA) is a 501(c)(3) non-profit organization active in the area of computer-aided architectural design (CAAD). Mission statement Begun in 1981, the organization's objectives are recorded in its bylaws: "ACADIA was formed for the purpose of facilitating communication and information exchange regarding the use of computers in architecture, planning and building science. A particular focus is education and the software, hardware and pedagogy involved in education." "The organization is also committed to the research and development of computer aides that enhance design creativity, rather than simply production, and that aim at contributing to the construction of humane physical environments." Membership Membership is open to anyone who subscribes to the objectives of the organization, including architects, educators, and software developers, whether resident in North America or not. An online membership registration form and directory is available via the organization. The organization is primarily governed by the elected Board of Directors. The organization is led by the elected President, who presides over Board of Directors meetings, but does not vote except in the case of a tie. Presidents (elected) Activities Annual conference ACADIA sponsors an annual national conference, held in the autumn of each year at a different site in North America. Papers for the conferences undergo extensive blind review before being accepted for presentation (and publication). Membership is not a prerequisite for submission of a paper. Proceedings Each year the conference papers are gathered into a proceedings publication which is distributed to members, and available to the public via the open access database CumInCAD. Awards Started in 1998, ACADIA Awards of Excellence are "the highest award that can be achieved in the field of architectural computing". The awards are given in areas of practice, teaching, research and service, with at most one award in each category per year. Past awards have recognized various significant contributors to the field of architectural computing. The current awards given annually or biannually are the Lifetime Achievement Award, the Digital Practice Award of Excellence, the Innovative Academic Program Award of Excellence, the Innovative Research Award of Excellence, the Society Award for Leadership, and the Teaching Award of Excellence. Lifetime Achievement Award Innovative Research Award of Excellence Digital Practice Award of Excellence Society Award for Leadership Innovative Academic Program Award of Excellence History ACADIA was founded in 1981 by some of the pioneers in the field of design computation including Bill Mitchell, Chuck Eastman, and Chris Yessios. Since then, ACADIA has hosted over 40 conferences across North America and has grown into a strong network of academics and professionals in the design computation field. Related organizations Sister organizations There are four sister organizations around the world to provide a more accessible regional forum for discussion of computing and design. The major ones are CAADRIA - The Association for Computer Aided Architectural Design in Asia, since 1996. SIGraDi - Iberoamerican Society of Digital Graphics, since 1997. ASCAAD - The Arab Society for Computer Aided Architectural Design, since 2001. eCAADe - The Association for Education and Research in Computer-Aided Architectural Design in Europe. 
Other related organizations CAAD Futures - Computer Aided Architectural Design Futures, since 1985. CUMINCAD - The Cumulative Index of Computer Aided Architectural Design, with public CumInCAD records available via an Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) feed and records are available via multiple bibliographic archives and citation indexes online. References External links Association for Computer Aided Design In Architecture Information technology organizations based in North America Architectural design Non-profit architecture organizations based in the United States Charities based in North Dakota
Association for Computer Aided Design in Architecture
[ "Engineering" ]
737
[ "Design", "Architectural design", "Architecture" ]
5,518,587
https://en.wikipedia.org/wiki/W.%20E.%20P.%20Duncan
Wilfred Eben Pinkerton Duncan (1897 – 28 January 1977) was an important figure in the early period of the Toronto Transit Commission's history. He was born in Glasgow, Scotland, and graduated with a B.Sc. degree in engineering from Glasgow University. He emigrated to Canada and worked from 1910 to 1914 in the construction department of the Canadian Pacific Railway. Between 1915 and 1919 he served overseas in the Great War with the Canadian Expeditionary Force and the Royal Engineers, attaining the rank of Major. After the war he worked as a construction engineer in Toronto. He joined the Toronto Transportation Commission in 1921, and served in various engineering roles. By 1945 he was the TTC's Chief Engineer, and he became General Manager, the senior staff position, in 1952. In 1959, when the senior position was split in two, he became General Manager – Subway Construction, while John G. Inglis assumed the role of General Manager - Operations. Duncan retired in 1961 but remained active as a General Consultant to the TTC until the opening of the University Subway in 1963. He was instrumental in the growth of the system and was in charge of the TTC during the building of the Yonge Subway. The Duncan Shops, a heavy bus maintenance facility at the TTC's Hillcrest Complex, is named in his honour. References TTC Coupler, September 1952 Vol 27 No 9 TTC Coupler, March 1961 Vol 36 No 3 TTC Coupler, March 1977 Vol 52 No 3 Specific 1897 births 1977 deaths Engineers from Glasgow Alumni of the University of Glasgow Canadian civil engineers Toronto Transit Commission general managers Scottish emigrants to Canada Royal Engineers
W. E. P. Duncan
[ "Engineering" ]
330
[ "Civil engineering", "Civil engineering stubs" ]
5,518,588
https://en.wikipedia.org/wiki/1-Chloro-9%2C10-bis%28phenylethynyl%29anthracene
1-Chloro-9,10-bis(phenylethynyl)anthracene is a fluorescent dye used in lightsticks. It emits yellow-green light, used in 30-minute high-intensity Cyalume sticks. See also 9,10-Bis(phenylethynyl)anthracene 2-Chloro-9,10-bis(phenylethynyl)anthracene References Fluorescent dyes Organic semiconductors Anthracenes Alkyne derivatives Chloroarenes
1-Chloro-9,10-bis(phenylethynyl)anthracene
[ "Chemistry" ]
115
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
5,518,616
https://en.wikipedia.org/wiki/5%2C12-Bis%28phenylethynyl%29naphthacene
5,12-Bis(phenylethynyl)naphthacene is a fluorescent dye used in lightsticks. It yields orange light. See also 9,10-bis(phenylethynyl)anthracene References Fluorescent dyes Polycyclic aromatic hydrocarbons Organic semiconductors Alkyne derivatives
5,12-Bis(phenylethynyl)naphthacene
[ "Chemistry" ]
71
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
5,518,657
https://en.wikipedia.org/wiki/Bis%282%2C4%2C5-trichloro-6-%28pentyloxycarbonyl%29phenyl%29oxalate
Bis[2,4,5-trichloro-6-(pentyloxycarbonyl)phenyl]oxalate (also known as bis(2,4,5-trichloro-6-carbopentoxyphenyl) oxalate or CPPO) is an organic compound with the formula . A white solid, it is classified as a diester of oxalic acid. It is an active ingredient for the chemiluminescence in glowsticks. It can be synthesized by reacting 2-carbopentoxy-3,5,6-trichlorophenol with oxalyl chloride. When mixed with hydrogen peroxide in an organic solvent (diethyl phthalate, ethyl acetate, etc.) in the presence of a fluorescent dye, CPPO causes the emission of light via its degradation to 1,2-dioxetanedione. The reaction rate is pH dependent, and slightly alkaline conditions achieved by adding a weak base, e.g. sodium salicylate, will produce brighter light. Developed by American Cyanamid in the 1960s, the formulation containing CPPO, a fluorescer, and a glass capsule containing hydrogen peroxide and a base catalyst, all in dialkyl phthalate solvents, was marketed as Cyalume. Different emission colors can be produced by using different fluorescent dyes. References Oxalate esters Phenol esters Chemiluminescence
Bis(2,4,5-trichloro-6-(pentyloxycarbonyl)phenyl)oxalate
[ "Chemistry" ]
329
[ "Luminescence", "Chemiluminescence" ]
5,518,664
https://en.wikipedia.org/wiki/Acetylcarnitine
Acetyl-L-carnitine, ALCAR or ALC, is an acetylated form of L-carnitine. It is naturally produced by the human body, and it is available as a dietary supplement. Acetylcarnitine is broken down in the blood by plasma esterases to carnitine which is used by the body to transport fatty acids into the mitochondria for breakdown and energy production. Biochemical production and action Carnitine is both obtained as a nutrient and made by the body as needed; it serves as a substrate for important reactions in which it accepts and gives up an acyl group. Acetylcarnitine is the most abundant naturally occurring derivative and is formed in the reaction: acetyl-CoA + carnitine ⇌ CoA + acetylcarnitine where the acetyl group displaces the hydrogen atom in the central hydroxyl group of carnitine. Coenzyme A (CoA) plays a key role in the Krebs cycle in mitochondria, which is essential for the production of ATP, which powers many reactions in cells; acetyl-CoA is the primary substrate for the Krebs cycle, and once it is de-acetylated, it must be re-charged with an acetyl-group in order for the Krebs cycle to keep working. Most cell types appear to have transporters to import carnitine and export acyl-carnitines, which seems to be a mechanism to dispose of longer-chain moieties; however many cell types can also import ALCAR. Within cells, carnitine plays a key role in importing acyl-CoA into mitochondria; the acyl-group of the acyl-CoA is transferred to carnitine, and the acyl-carnitine is imported through both mitochondrial membranes before being transferred to a CoA molecule, which is then beta oxidized to acetyl-CoA. A separate set of enzymes and transporters also plays a buffering role by eliminating acetyl-CoA from inside mitochondria created by the pyruvate dehydrogenase complex that is in excess of its utilization by the Krebs cycle; carnitine accepts the acetyl moiety and becomes ALCAR, which is then transported out of the mitochondria and into the cytosol, leaving free CoA inside the mitochondria ready to accept new import of fatty acid chains. ALCAR in the cytosol can also form a pool of acetyl-groups for CoA, should the cell need it. Excess acetyl-CoA causes more carbohydrates to be used for energy at the expense of fatty acids. This occurs by different mechanisms inside and outside the mitochondria. ALCAR transport decreases acetyl-CoA inside the mitochondria, but increases it outside. Health effects Carnitine and ALCAR supplements carry warnings of a risk that they promote seizures in people with epilepsy, but a 2016 review found this risk to be based only on animal trials. Research Reviews Peripheral neuropathy: Meta-analyses from 2015 and 2017 both conclude that the current evidence suggests ALC reduces pain from peripheral neuropathy with few adverse effects. The 2017 review also suggested ALC improved electromyographic parameters. Both called for more randomized controlled trials. An updated Cochrane review in 2019 of four studies with 907 participants was very uncertain as to whether ALC caused a pain reduction after 6 to 12 months of treatment. Chemotherapy-induced peripheral neuropathy (CIPN): A review of two studies concluded that ALC may be a treatment option for paclitaxel- and cisplatin-induced CIPN, while a clinical trial showed it did not prevent CIPN and appeared to worsen the conditions in taxane therapy. Male infertility: Scientific reviews from 2016 and 2014 showed mixed results, with some studies showing a positive relationship between ALC and sperm motility, and others showing no relationship.
Dementia: A 2003 Cochrane review sought to determine the safety and efficacy of ALCAR in dementia but the reviewers found only clinical trials studies on Alzheimer's disease; the review found that the pharmacology of ALCAR was poorly understood and that based on the lack of efficacy, ALCAR was unlikely to be an important treatment for AD. Depression: One 2014 review assessed the use of ALCAR in fourteen clinical trials for various conditions with depressive symptoms; the trials were small (ranging from 20 to 193 subjects) and their design was so different that results could not be generalized; most studies showed positive results and a lack of adverse effects. The mechanism of action by which ALCAR could treat depression is not known. A meta-analysis from 2014 concluded that ALCAR could only be recommended for the treatment of persistent depressive disorder if publication bias was deemed improbable. A 2018 systematic review and meta-analysis of 12 randomized controlled trials found "supplementation significantly decreases depressive symptoms compared with placebo/no intervention, while offering a comparable effect with that of established antidepressant agents with fewer adverse effects." The review also indicates that "the effect of ALC in younger subjects was not more effective than placebo in improving these symptoms." indicating a need for more research explaining the age/effect relation. Fragile X syndrome: A 2015 Cochrane review of ALCAR in fragile X syndrome found only two placebo-controlled trials, each of low quality, and concluded that ALCAR is unlikely to improve intellectual functioning or hyperactive behavior in children with this condition. Hepatic encephalopathy: ALCAR has been studied in hepatic encephalopathy, a complication of cirrhosis involving neuropsychiatric impairment; ALCAR improves blood ammonia levels and generates a modest improvement in psychometric scores but does not resolve the condition – it may play a minor role in managing the condition. Studies In a small clinical study, when ALCAR was administered intravenously and insulin levels were held steady and a meal low in carnitine but high in carbohydrates was taken by healthy young men, ALCAR appeared to decrease glucose consumption in favor of fat oxidation. Mitochondrial decay and oxidative damage to RNA/DNA increases with age in the rat hippocampus, a region of the brain associated with memory. Memory performance declines with age. These increases in decay and damage, and memory loss itself, can be partially reversed in old rats by feeding acetyl-L-carnitine. References Further reading Carnitine (L-carnitine), University of Maryland Medical Center Antidepressants Acetate esters Amino acid derivatives Anti-aging substances Salts of carboxylic acids Dietary supplements Quaternary ammonium compounds
Acetylcarnitine
[ "Chemistry", "Biology" ]
1,396
[ "Salts of carboxylic acids", "Anti-aging substances", "Senescence", "Salts" ]
5,520,376
https://en.wikipedia.org/wiki/Shock%20factor
Shock factor is a commonly used figure of merit for estimating the amount of shock experienced by a naval target from an underwater explosion as a function of explosive charge weight, slant range, and depression angle (between vessel and charge).

In Equation 1:
R is the slant range in feet
W is the equivalent TNT charge weight in pounds = charge weight (lbs) · relative effectiveness factor
θ is the depression angle between the hull and warhead.

The application scenario for Equation 1 is illustrated by Figure 1. The numeric result from computing the shock factor has no physical meaning, but it does provide a value that can be used to estimate the effect of an underwater blast on a vessel. Table 1 describes the effect of an explosion on a vessel for a range of shock factors.

{| class="wikitable"
|+ Table 1: Shock Factor Table of Effects
|-
! Shock Factor !! Damage
|-
! < 0.1
| Very limited damage. Generally considered insignificant
|-
! 0.1–0.15
| Lighting failures; electrical failures; some pipe leaks; pipe ruptures possible
|-
! 0.15–0.20
| Increase in occurrence of damage above; pipe rupture likely; machinery failures
|-
! 0.2
| General machinery damage
|-
! ≥ 0.5
| Usually considered lethal to a ship
|}

Background
The idea behind the shock factor is that an explosion close to a ship generates a shock wave that can impart sudden vertical motions to a ship's hull and internal systems. Many of the internal mechanical systems (e.g. engine coupling to prop) require precise alignment in order to operate. These vibrations upset these critical alignments and render these systems inoperative. The vibrations can also destroy lighting and electrical components, such as relays. The explosion also generates a gas bubble that undergoes expansion and contraction cycles. These cycles can introduce violent vibrations into a hull, generating structural damage, even to the point of breaking the ship's keel. In fact, this is a goal of many undersea weapon systems. The magnitude of an explosion's effects has been shown through empirical and theoretical analyses to be related to the size of the explosive charge, the distance of the charge from the target, and the angular relationship of the hull to the shock wave.

References Explosives
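Equation 1 is referenced above but not reproduced here. A commonly quoted form of the keel shock factor is sqrt(W)/R · (1 + sin θ)/2; the sketch below uses that form together with the damage bands of Table 1 purely as an illustration. Treat the formula choice, the function names, and the example numbers as assumptions, not as the article's own equation.

```python
import math

def shock_factor(charge_lbs, slant_range_ft, depression_angle_deg, rel_effectiveness=1.0):
    """Keel shock factor, using the commonly quoted form sqrt(W)/R * (1 + sin(theta))/2.

    W is the TNT-equivalent charge weight in pounds (charge weight times the relative
    effectiveness factor), R is the slant range in feet, theta is the depression angle.
    """
    w = charge_lbs * rel_effectiveness
    theta = math.radians(depression_angle_deg)
    return math.sqrt(w) / slant_range_ft * (1.0 + math.sin(theta)) / 2.0

def damage_band(sf):
    """Map a shock factor to the qualitative damage bands of Table 1."""
    if sf < 0.1:
        return "very limited damage"
    if sf < 0.15:
        return "lighting/electrical failures, some pipe leaks"
    if sf < 0.2:
        return "pipe rupture likely, machinery failures"
    if sf < 0.5:
        return "general machinery damage"
    return "usually considered lethal to a ship"

# Illustrative numbers only: a 500 lb TNT-equivalent charge,
# 300 ft slant range, 30 degree depression angle.
sf = shock_factor(500, 300, 30)
print(round(sf, 3), damage_band(sf))   # ~0.056 -> very limited damage
```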
Shock factor
[ "Chemistry" ]
465
[ "Explosives", "Explosions" ]
5,520,917
https://en.wikipedia.org/wiki/Multimedia%20over%20Coax%20Alliance
The Multimedia over Coax Alliance (MoCA) is an international standards consortium that publishes specifications for networking over coaxial cable. The technology was originally developed to distribute IP television in homes using existing cabling, but is now used as a general-purpose Ethernet link where it is inconvenient or undesirable to replace existing coaxial cable with optical fiber or twisted pair cabling. MoCA 1.0 was approved in 2006, MoCA 1.1 in April 2010, MoCA 2.0 in June 2010, and MoCA 2.5 in April 2016. The most recently released version of the standard, MoCA 3.0, supports speeds of up to . This technology is not yet available to customers. Membership The Alliance currently has 45 members including pay TV operators, OEMs, CE manufacturers and IC vendors. MoCA's board of directors consists of Arris, Comcast, Cox Communications, DirecTV, Echostar, Intel, InCoax, MaxLinear and Verizon. Technology Within the scope of the Internet protocol suite, MoCA is a protocol that provides the link layer. In the 7-layer OSI model, it provides definitions within the data link layer (layer 2) and the physical layer (layer 1). DLNA approved of MoCA as a layer 2 protocol. A MoCA network can contain up to 16 nodes for MoCA 1.1 and higher, with a maximum of 8 for MoCA 1.0. The network provides a shared-medium, half-duplex link between all nodes using time-division multiplexing; within each timeslot, any pair of nodes communicates directly with each other using the highest mutually-supported version of the standard. Versions MoCA 1.0 The first version of the standard, MoCA 1.0, was ratified in 2006 and supports transmission speeds of up to 135 Mb/s. MoCA 1.1 MoCA 1.1 provides 175 Mbit/s net throughputs (275 Mbit/s PHY rate) and operates in the 500 to 1500 MHz frequency range. MoCA 2.0 MoCA 2.0 offers actual throughputs (MAC rate) up to 1 Gbit/s. Operating frequency range is 500 to 1650 MHz. Packet error rate is 1 packet error in 100 million. MoCA 2.0 also offers lower power modes of sleep and standby and is backward compatible with MoCA 1.1. In March 2017, SCTE/ISBE society and MoCA consortium began creating a new "standards operational practice" (SCTE 235) to provide MoCA 2.0 with DOCSIS 3.1 interoperability. Interoperability is necessary because both MoCA 2.0 and DOCSIS 3.1 may operate in the frequency range above 1 GHz. The standard "addresses the need to prevent degradation or failure of signals due to a shared frequency range above 1 GHz". MoCA 2.5 MoCA 2.5 (introduced April 13, 2016) offers actual data rates up to 2.5 Gbit/s, continues to be backward compatible with MoCA 2.0 and MoCA 1.1, and adds MoCA protected setup (MPS), Management Proxy, Enhanced Privacy, Network wide Beacon Power, and Bridge detection. MoCA Access is intended for multiple dwelling units (MDUs) such as hotels, resorts, hospitals, or educational facilities. It is based on the current MoCA 2.0 standard which is capable of 1 Gbit/s net throughputs, and MoCA 2.5 which is capable of 2.5 Gbit/s. MoCA 3.0 The MoCA 3.0 standard has been released and increases the maximum throughput to 10 Gbit/s. However, this is not yet available to customers. Performance profiles Frequency band plan Notes: Channel C4 is commonly used for Verizon FiOS for the "WAN" link from the ONT to the router. Channels D1-D8 are commonly used for "LAN" links, between set-top boxes and the router. E band channels are commonly used by DirecTV converter boxes. The DirecTV Ethernet-to-Coax Adapter (DECA) uses MoCA on this "Mid-RF" frequency band. 
D10A is 100 MHz wide and therefore extends up to 1675 MHz, so splitters need to pass 5–1675 MHz. See also Ethernet over coax G.hn Home gateway Home network HomePlug Powerline Alliance HomePNA IEEE 802.3 IEEE 802.11 IEEE 1905 Ultra-high-definition television Wi-Fi over Coax Wireless LAN References External links Computer networking Computer network organizations Consumer electronics Ethernet standards
Multimedia over Coax Alliance
[ "Technology", "Engineering" ]
965
[ "Computer networking", "Computer science", "Computer engineering" ]
5,521,022
https://en.wikipedia.org/wiki/Darwin%20Streaming%20Server
Darwin Streaming Server (DSS) was the first open sourced RTP/RTSP streaming server. It was released March 16, 1999 and is a fully featured RTSP/RTP media streaming server capable of streaming a variety of media types including H.264/MPEG-4 AVC, MPEG-4 Part 2 and 3GP. Development Developed by Apple, it is the open source equivalent of QuickTime Streaming Server, and is based on its code. Ports The initial DSS source code release compiled only on OS X, but external developers quickly ported the code to Linux, FreeBSD, Solaris, Tru64 Unix, Mac OS 9 and Windows. Source code is available as a release download or as development code via CVS. See also HTTP Live Streaming – Apple's video/audio streaming server protocol Helix Universal Server – Multiformat streaming server from RealNetworks Wowza Media Server – a unified streaming server from Wowza Media Systems References External links Darwin Streaming Server QuickTime Free audio software Streaming
Darwin Streaming Server
[ "Technology" ]
208
[ "Multimedia", "Streaming" ]
5,521,184
https://en.wikipedia.org/wiki/National%20Academy%20of%20Engineering
The National Academy of Engineering (NAE) is an American nonprofit, non-governmental organization. It is part of the National Academies of Sciences, Engineering, and Medicine (NASEM), along with the National Academy of Sciences (NAS) and the National Academy of Medicine (NAM). The NAE operates engineering programs aimed at meeting national needs, encourages education and research, and recognizes the superior achievements of engineers. New members are annually elected by current members, based on their distinguished and continuing achievements in original research. The NAE is autonomous in its administration and in the selection of its members, sharing with the rest of the National Academies the role of advising the federal government. History The National Academy of Sciences was created by an Act of Incorporation dated March 3, 1863, which was signed by then president of the United States Abraham Lincoln with the purpose to "...investigate, examine, experiment, and report upon any subject of science or art..." No reference to engineering was in the original act, the first recognition of any engineering role was with the setup of the Academy's standing committees in 1899. At that time, there were six standing committees: (mathematics and astronomy; physics and engineering; chemistry; geology and paleontology; biology; and anthropology. In 1911, this committee structure was again reorganized into eight committees: biology was separated into botany; zoology and animal morphology; and physiology and pathology; anthropology was renamed anthropology and psychology with the remaining committees including physics and engineering, unchanged. In 1913, George Ellery Hale presented a paper on the occasion of the Academy's 50th anniversary, outlining an expansive future agenda for the Academy. Hale proposed a vision of an Academy that interacted with the "whole range of science", one that actively supported newly recognized disciplines, industrial sciences and the humanities. The proposed creation of sections of medicine and engineering was protested by one member because those professions were "mainly followed for pecuniary gain". Hale's suggestions were not accepted. Nonetheless, in 1915, the Section of Physics and Engineering was recommended to be changed to physics only, and a year later the Academy began planning a separate section of engineering. The Academy was requested to investigate the great slide in Culebra Cut late in 1913 which ultimately delayed the opening of the Panama Canal by ten months. The study group, commissioned by the United States Army Corps of Engineers and although composed of both engineers and geologists resulted in a final report prepared by two geologists Charles Whitman Cross and Harry Fielding Reid. The report, submitted to President Wilson in November 1917, concluded that claims of repeated interruptions in canal traffic for years to come were unjustified. During this time, the United States confronted the prospect of war with Germany and the question of preparedness was raised. Engineering societies responded to this crisis by offering technical services to the Federal government such as the Naval Consulting Board of 1915 and the Council of National Defense of 1916. On June 19 of that year, then US President Woodrow Wilson requested the National Academy of Sciences to organize a "National Research Council" albeit with the assistance of the Engineering Foundation. (pg. 
569) The purpose of the Council (at first called the National Research Foundation) was in part to foster and encourage "the increased use of scientific research in the development of American industries... the employment of scientific methods in strengthening the national defense... and such other applications of science as will promote the national security and welfare." During the period of national preparations, an increasing number of engineers were being elected to the physics and engineering section of the Academy, this did not, however, resolve the long-standing issue of where to place applied sciences such as engineering in the Academy. In 1863, the founding members who were prominent military and naval engineers comprised almost a fifth of the membership. during the latter part of the 19th century, this engineering membership steadily declined and by 1912, Henry Larcom Abbot, who had been elected in 1872, was the sole remaining representative of the Corps of Engineers. With the Engineering Division in the wartime National Research Council being used as a precedent, the Academy established its first engineering section with nine members in 1919 with civil war veteran Henry Larcom Abbot as its first chairman. OF those nine members, only two were new members, the others had transferred from existing sections; "... of the 164 members of the Academy that year, only seven chose to identify themselves as engineers." During this period of 1915-1916 activity by engineering societies, the National Academy of Sciences complained that there was a lack of scientists and the predominance of engineers on the Federal government's wartime technical committee, the Naval Consulting Board. One of the mathematicians on the Board, Robert Simpson Woodward, was actually trained and early on practiced as a civil engineer. The Academy's response was to move forward with the idea of achieving Academy control over the provision of technical services to the Government by means of formal recognition of the role played by the National Research Council (NRC) established the next year in 1916. Later in 1918, Wilson formalized the NRC's existence under Executive Order 2859. Wilson's order declared the function of the NRC to be in general: "(T)o stimulate research in the mathematical. physical, and biological sciences. and in the application of these sciences to engineering, agriculture. medicine. and other useful arts. with the object of increasing knowledge, of strengthening the national defense, and of contributing in other ways to the public welfare." In 1960, Augustus Braun Kinzel, an engineer with the Union Carbide Corporation and a member of the Academy, stated that the "..engineering profession was considering the establishment of an academy of engineering..." confirmed by the Engineers Joint Council of the national engineering societies to afford themselves of opportunities and services similar to those the Academy provided in science. The question being, whether to affiliate with the National Academy or set up a separate Academy. During the past century of the Academy's existence, engineers had been part of the founding members and a sixth of its membership, the founding of the National Research Council in 1916 with the assistance of the Engineering Foundation, the contributions of the NRC Division of Engineering in the post-World War I period, the presidency of engineer Frank B. Jewett during World War II. 
In short, "...the ascendancy of science in the public mind since World War I had been partly at the expense of the prestige of the engineering profession." (See also.) The Academy worked with the Engineers Joint Council led by President Eric Arthur Walker as the prime mover, to make plans to establish a new National Academy of Engineering that's independent, with a congressional charter of its own. Walker noted that this moment offered a "...singular opportunity for the engineering profession to participate actively and directly in communicating objective advice to the government..." on engineering matters related to national policy. A secondary function was to recognize distinguished individuals for their engineering contributions. Ultimately, the initial organizers decided to create the Academy of Engineering as part of the National Academy of Sciences (NAS). On December 5, 1964, marking, "a major landmark in the history of the relationships between science and engineering in our country," the Academy approved the Articles of Incorporation of the new academy and its twenty-five charter members met to organize the National Academy of Engineering (NAE) as an autonomous parallel body in the National Academy of Sciences, with Augustus B. Kinzel as its first President. OF the 675 members of the National Academy of Sciences at that time, only about 30 called themselves engineers. The National Academy of Engineering then were a "purposeful compromise" given the fears of the NAS of expanded membership by engineers. The stated objects and purposes of the newly created National Academy of Engineering were to: To advise the Congress and the executive branch... whenever called upon... on matters of national import pertinent to engineering... To cooperate with the National Academy of Sciences on matters involving both science and engineering... To serve the nation... in connection with significant problems in engineering and technology... In 1966, the National Academy of Engineering established the Committee on Public Engineering Policy (COPEP). In 1982, the NAE and NAS committees were merged to become the Committee on Science, Engineering, and Public Policy. In 1967, the NAE formed an aeronautics and space engineering board to advise NASA and other Federal agencies chaired by Horton Guyford Stever. In 1971, the National Academy of Engineering advised the Port Authority of New York and New Jersey not to construct additional runways at JFK airport as part of a $350,000 study commissioned by the Port Authority. The Port Authority accepted the recommendations of the NAE and NAS. In 1975, the NAE added eighty-six new engineer members including noted civil engineer and businessman Stephen Davison Bechtel Jr. In 1986, the NAE issued a report encouraging foreign investment, calling for stronger Federal action. That same year, NAE member Robert W. Rummel (1915-2009), space expert and aerospace engineer, served on The Presidential Commission on the Space Shuttle Challenger Accident. In 1989, the National Academy of Engineering in conjunction with the National Academy of Science advised the Department of Energy on a site location for the then proposed Superconducting Super Collider (SSC) from a number of States proposals. In 1995, the NAE along with the NAS and the National Academy of Medicine reported that the American system of doctoral education in science and engineering, while "...long a world model, should be reshaped to produce more 'versatile scientists,' rather than narrowly specialized researchers". 
Again, in 2000, NAE returned to this education theme with its detailed studies of engineering education as part of its "Engineer of 2020 Studies" project. The reports concluded that engineering education must be reformed, else, American engineers will be poorly prepared for engineering practice. Soon after, the American Society of Civil Engineers adopted a policy, advocating for the reconstruction of the academic foundation of the professional practice of civil engineering. Membership Formally, members of the NAE must be U.S. citizens. The term "international member" is applied to non-citizens who are elected to the NAE. "The NAE has more than 2,000 peer-elected members and international members, senior professionals in business, academia, and government who are among the world's most accomplished engineers", according to the NAE site's About page. Election to the NAE is considered to be among the highest recognitions in engineering-related fields, and it often comes as a recognition of a lifetime's worth of accomplishments. Nomination for membership can only be done by a current member of the NAE for outstanding engineers with identifiable contributions or accomplishments in one or both of the following categories: Engineering research, practice, or education, including, where appropriate, significant contributions to the engineering literature. Pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education. Since its founding, as of late-2024, the Academy has elected around 5,020 members. The Massachusetts Institute of Technology is associated with the most members with 207 members, Stanford University with 172, and the University of California at Berkeley with 127. The top fourteen institutions account for over 20% of all members ever elected. Program areas Greatest Engineering Achievements of the 20th Century In February 2000, a National Press Club luncheon during National Engineers Week 2000 sponsored by the NAE, astronaut/engineer Neil Armstrong announced the 20 top engineering achievements having the greatest impact on the quality of life in the 20th century. Twenty-nine professional engineering societies provided 105 nominations which then selected and ranked the top 20 achievements. The nominations were pared to less than fifty and then combined into 29 larger categories. "Thus, bridges, tunnels, and roads were merged into the interstate highway system, and tractors, combines, robot cotton pickers, and chisel plows were simply lumped into agricultural mechanization." Some of the achievements, though, such as the telephone and the automobile which were not invented in the 20th century were included because of the impact they had were not really apparent until the 20th century. The top achievement, electrification is essential for almost part of modern society and has "...literally lighted the world and impacted countless areas of daily life, including food production and processing, air conditioning and heating, refrigeration, entertainment, transportation, communication, health care, and computers." Later in 2003, the National Academy of Engineering published A Century of Innovation: Twenty Engineering Achievements that Transformed our Lives. 
The ranked list of the top 20 achievements in the 20th century was published as follows: Electrification Automobile Airplane Water Supply and Distribution Electronics Radio and Television Agricultural Mechanization Computers Telephone Air Conditioning and Refrigeration Highways Spacecraft Internet Imaging Household Appliances Health Technologies Petroleum and Petrochemical Technologies Laser and Fiber Optics Nuclear Technologies High-performance Materials Reception The NAE's achievements list was criticized for ranking space technology (listed as "Spacecraft") twelfth instead of number one despite NAE recognizing in its report that the Soviet Union's Sputnik "shocked the world and started a space race that launched the greatest engineering team effort in American history." (NAE, 2000) Time magazine ran a similar poll of 20th-century accomplishments, and its website users ranked the first Moon landing in 1969 in second place versus NAE's 12th. The NAE listing was also criticized for not recognizing the role physics played in laying the foundations for the engineering accomplishments such Michael Faraday and Joseph Henry for electrification. NAE's list ranked electronics based upon two inventions, the transistor and integrated circuits, even it neglected to mention their physicist inventors, John Bardeen, Walter H. Brattain, William B. Shockley, Jack Kilby and Robert Noyce. Another commentator noted that the list ignored the St. Lawrence seaway and power project, built between 1954 and 1959 and by extension the Panama Canal. The St. Lawrence seaway was "...one of the largest transborder projects ever undertaken by two countries and one of the greatest engineering achievements of the 20th century." It was also noted that these 20th-century accomplishments did not come without impacts on the environment or societies. Electrification as an example, resulting in fossil-fuel-burning power plants, airplanes and automobiles which emit greenhouse gases while electronics manufacturing leaves heavy-metal byproducts. Grand Challenges for Engineering The Grand Challenges confront wicked social issues that are inherently global in nature and require technological innovations and applications of systems thinking. Further, NAE argues that the solutions call upon engineers to persuasively influence "...public policy, transfer technical innovation to the market place, and to inform and be informed by social science and the humanities." The NAE's Grand Challenges overlap with the United Nations' Millennium Development Goals and its 2015 successor, the Sustainable Development Goals (SDGs) which all depend upon "a strong engineering component" for success. Development of the Grand Challenges (2008) The Academy introduced its "Grand Challenges for Engineering" project in 2007 with the commissioning of a blue-ribbon committee composed of leading technological thinkers from around the globe. The committee, led by former Secretary of Defense William Perry was charged with the task of identifying "..key engineering challenges for improving life in the 21st century." NAE's intent was to develop a set of challenges of such importance that they warranted serious investment and if successful, would "lead to a marked improvement in our quality of life." The project received "...thousands of inputs from around the world to determine its list of Grand Challenges for Engineering, and its report was reviewed by more than 50 subject-matter experts, making it among the most reviewed of Academy studies." 
In February 2008, the committee announced 14 Engineering Grand Challenges fitting into four broad categories: energy, sustainability, and global climate change; medicine, health informatics and health care delivery systems; reducing our vulnerability to natural and human threats; and advancing the human spirit and capabilities. NAE noted that a number of engineering schools had developed coursework based upon Grand Challenge themes. The 14 Grand Challenges for Engineering developed by the NAE committee were to: Make solar energy economical Provide energy from fusion Develop carbon sequestration Manage the nitrogen cycle Provide access to clean water Restore and Improve urban infrastructure Advance health informatics Engineer better medicines Reverse-engineer the brain Prevent nuclear terror Secure cyberspace Enhance virtual reality Advance personalized learning Engineer the tools of scientific discovery. NAE noted in its report that the Grand Challenges for Engineering were not "...ranked in importance or likelihood of solution, nor was any strategy proposed for solving them. Rather, they were offered as a way to inspire the profession, young people, and the public at large to seek the solutions." NAE also stated that the Grand Challenges were "...not targeted to any one country or corporate sector... (and)... are relevant to everyone in every country. In fact, some of them bear on the very survival of society. If solving these challenges can become an international movement, all will benefit." Reception One writer favorably observed that the Academy's list of 20th-century engineering achievements was dominated by devices and when asked to project advances for the 21st, the result was again, device dominated. With respect to the Grand Challenges, the NAE reframed its discussion from being device-centric to addressing complex or wicked social issues that cannot be solved by technology alone, i.e. more devices. With the Grand Challenges though, NAE "...charted a course for... (engineering)... to move from devices to global social challenges, and has identified a number of exciting ones." One critical reaction to the NAE's challenges noted that engineers today are the "...unacknowledged legislators of the world... (and by)... designing and constructing new structures, processes, and products, they are influencing how we live as much as any laws enacted by politicians. The author argued that NAE's Grand Challenges should have included the "...challenge of thinking about what we are doing as we turn the world into an (engineering) artifact and the appropriate limitations of this engineering power." This is already happening in the Netherlands with its Delta Works as an example of a society being an engineered artifact but also with a community of philosophers of engineering and technology. Another commentator observed that challenges with respect to sustainability concentrated on specific elements of the problem without addressing "... "what level of energy use would be sustainable on a global scale." While India and China are 1000-1500 Watt per person societies, the United States requires 12,000 W per person. An estimate of a sustainable level of power consumption made by a Swiss group is 2,000 W per person. Similar questions were raised on the NAE's challenge for access to clean water. The average daily per capita water consumption in American cities varies from 130 to 2000 liters (35 to 530 gallons). 
Grand Challenge Scholars Program (GCSP) In 2010, NAE developed a plan for preparing engineering students at the undergraduate academic degree level to practice in career fields that emerged as a result of the effort to answer the Grand Challenges. The program had five components, namely: Research experience based upon a project or independent research related to a NAE Grand Challenges. Interdisciplinary curriculum materials inclusive of "..public policy, business, law, ethics, human behavior, risk as well as medicine and the sciences." Entrepreneurship inclusive of skills to translate "...invention to innovation... (and)... develop market ventures that scale to global solutions in the public interest." Global dimension and perspective necessary to "..address challenges that are inherently global as well as to lead innovation in a global economy." Service learning that develops and engages the engineer's social consciousness and its willingness to bring to bear the profession's technical expertise on societal problems through programs such as Engineers Without Borders, or Engineering World Health. STEM education, Technological Literacy and the Grand Challenges While the National Academy of Engineering's GC SCholars (GCSP) program was primarily focused on undergraduate level curriculums, STEM focuses on K–12 education. The question for STEM educators was how to prepare K-12 students to participate in solving the wicked problems associated with the Grand Challenges. One response was to align STEM program theories of learning and International Technology and Engineering Educators Association (ITEEA, formerly ITEA) Technological Literacy Standards with the National Academy of Engineering's Grand Challenges in order to guide current and pending curriculum development. NAE's objective was also to inform instructional practices, particularly dealing with the connections among science, technology, engineering, and mathematics education. The Technological Literacy Standards were funded by the National Science Foundation and NASA and NAE's Technology Education Standards Committee led the Academy's efforts on the standards. Global Grand Challenges Summit As a result of NAE's Grand Challenge efforts, three national engineering academies–The National Academy of Engineering of the United States, The Royal Academy of Engineering of the United Kingdom, and the Chinese Academy of Engineering–organized a joint Global Grand Challenges Summit, held in London on March 12–13, 2013. In September 2015 a second Global Grand Challenges Summit was held in Beijing, with more than 800 attendees invited by the three academies. The third Global Grand Challenges Summit was hosted by the NAE in the United States in 2017. Frontiers of Engineering The Frontiers of Engineering program assembles a group of emerging engineering leaders - usually aged 30–45 - to discuss cutting-edge research in various engineering fields and industry sectors. The goal of the meetings is to bring participants together to collaborate, network, and share ideas. There are three Frontiers of Engineering meetings every year: the U.S. Frontiers of Engineering Symposium, the German-American Frontiers of Engineering Symposium, and the Japan-America Frontiers of Engineering Symposium. The Indo-U.S. Frontiers of Engineering Symposium is held every other year. Diversity in the Engineering Workplace The goal of the diversity office is to participate in studies addressing the issue of increasing and broadening a domestic talent pool. 
Through this effort the NAE convenes workshops, coordinators with other organizations, and identifies program needs and opportunities for improvement. As part of this effort the NAE has launched both the EngineerGirl! and Engineer Your Life webpages. Engineering, Economics, and Society This program area studies connections between engineering, technology, and the economic performance of the United States. Efforts aim to advance the understanding of engineering's contribution to the sectors of the domestic economy and to learn where engineering may enhance economic performance. The project also aims to investigate the best ways to determine levels of technological literacy in the United States among three distinct populations in the United States: K-12 students, K-12 teachers, and out-of-school adults. A report (and associated website), Technically Speaking, explains what "technological literacy" is, why it is important, and what is being done in the U.S. to improve it. Engineering and the Environment This program, recognizing that the engineering profession has often been associated with causing environmental harm, looks to recognize and publicize that the profession is now at the forefront of mitigating negative environmental impacts. The program will provide policy guidance to government, the private sector, and the public on ways to create a more environmentally sustainable future. Center for the Advancement of Scholarship on Engineering Education The Center for the Advancement of Scholarship on Engineering Education. was established to advance engineering education in the United States, aiming for curriculum changes that address the needs of new generations of engineering students and the unique problems they will face with the challenges of the 21st century. The Center worked closely with the Committee on Engineering Education, which works to improve the quality of engineering education by providing advice to policymakers, administrators, employers, and other stakeholders. The Center is no longer active within the National Academy of Engineering. Center for Engineering, Ethics, and Society The Center for Engineering, Ethics, and Society seeks to engage engineers and the engineering profession in identifying and resolving ethical issues associated with engineering research and practice. The Center works is closely linked with the Online Ethics Center. Outreach efforts To publicize the work of both the profession and the NAE, the institution puts considerable efforts into outreach activities. A weekly radio spot produced by the NAE is broadcast on WTOP radio in the Washington, D.C., area and the file and text of the spot can be found on the NAE site. The NAE also distributes a biweekly newsletter focusing on engineering issues and advancements. In addition, NAE has held a series of workshops titled News and Terrorism: Communicating in a Crisis, in which experts from the National Academies and elsewhere provide reporters, state and local public information officers, emergency managers, and representatives from the public sector with important information about weapons of mass destruction and their impact. This project is conducted in collaboration with the Department of Homeland Security and the Radio and Television News Directors Foundation. 
In addition to these efforts, the NAE fosters good relationships with members of the media to ensure coverage of the work of the institution and to serve as a resource for the media to use when they have technical questions or would like to speak to an NAE member on a particular matter. The NAE is also active in "social media," both to reach new and younger audiences and to reach traditional audiences in new ways. Prizes The Academy awards several prizes, with each recipient receiving $500,000. The prizes include the Bernard M. Gordon Prize, the Fritz J. and Dolores H. Russ Prize, and the Charles Stark Draper Prize. They are sometimes referred to collectively as the American version of a Nobel Prize for engineering. Gordon Prize The Bernard M. Gordon Prize was started in 2001 by the NAE. It is named after Bernard Marshall Gordon, the founder of Analogic Corporation. Its purpose is to recognize leaders in academia for the development of new educational approaches to engineering. Each year, the Gordon Prize awards $500,000 to the grantee, of which the recipient may personally use $250,000, and his or her institution receives $250,000 for the ongoing support of academic development. Russ Prize The Fritz J. and Dolores H. Russ Prize is an American national and international award established by the NAE in October 1999 in Athens, Ohio. The prize has been given biennially in odd years since 2001. Named after Fritz Russ, the founder of Systems Research Laboratories, and his wife Dolores Russ, it recognizes a bioengineering achievement that "has had a significant impact on society and has contributed to the advancement of the human condition through widespread use." The award was instigated at the request of Ohio University to honor Fritz Russ, one of its alumni. Charles Stark Draper Prize The NAE annually awards the Charles Stark Draper Prize, which is given for the advancement of engineering and the education of the public about engineering. The recipient receives $500,000. The prize is named for Charles S. Draper, the "father of inertial navigation", an MIT professor and founder of the Draper Laboratory. See also National Academies of Sciences, Engineering, and Medicine List of founding members of the National Academy of Engineering List of members of the National Academy of Engineering List of engineering awards References External links Official NAE website The Engineer of 2020: Visions of Engineering in the New Century (2004) NAE Grand Challenges for Engineering report (2008 Report), (2017 Update of 2008 document) National Academy of Engineering Grand Challenge Scholars Program Plan (2010) and , Committee on Science, Engineering, and Public Policy information Greatest Engineering Achievements ROBERT W. RUMMEL (1915–2009) obituary at NAE site National academies of engineering United States National Academies United States National Academy of Engineering 1964 establishments in Washington, D.C. Organizations established in 1964 History of engineering 20th century in technology
National Academy of Engineering
[ "Engineering" ]
5,700
[ "United States National Academy of Engineering", "National academies of engineering" ]
5,521,498
https://en.wikipedia.org/wiki/Araucania%20%28wasp%29
Araucania is an invalid genus of braconid wasps in the family Braconidae, found in South America. There are at least two described species in Araucania. The valid genus Araucania Pate 1947, in the family Sapygidae, has nomenclatural precedence over the braconid name, published in 1993, so the latter name must be replaced, following the International Code of Zoological Nomenclature Article 52.2. Species Araucania maculipennis Marsh, 1993 Araucania penai Marsh, 1993 References Further reading Notes Parasitic wasps Braconidae Hymenoptera of South America
Araucania (wasp)
[ "Biology" ]
130
[ "Biological hypotheses", "Controversial taxa" ]
5,521,842
https://en.wikipedia.org/wiki/Superghost
In a supersymmetric quantum field theory, a superghost is a fermionic Faddeev–Popov ghost, which is used in the gauge fixing of a fermionic symmetry generator. References Supersymmetric quantum field theory String theory
Superghost
[ "Physics", "Astronomy" ]
55
[ "Astronomical hypotheses", "Supersymmetric quantum field theory", "Theoretical physics", "Quantum physics stubs", "Quantum mechanics", "Theoretical physics stubs", "String theory", "Supersymmetry", "Symmetry" ]
5,521,966
https://en.wikipedia.org/wiki/Super%20Virasoro%20algebra
In mathematical physics, a super Virasoro algebra is an extension of the Virasoro algebra (named after Miguel Ángel Virasoro) to a Lie superalgebra. There are two extensions with particular importance in superstring theory: the Ramond algebra (named after Pierre Ramond) and the Neveu–Schwarz algebra (named after André Neveu and John Henry Schwarz). Both algebras have N = 1 supersymmetry and an even part given by the Virasoro algebra. They describe the symmetries of a superstring in two different sectors, called the Ramond sector and the Neveu–Schwarz sector.

The N = 1 super Virasoro algebras
There are two minimal extensions of the Virasoro algebra with N = 1 supersymmetry: the Ramond algebra and the Neveu–Schwarz algebra. They are both Lie superalgebras whose even part is the Virasoro algebra: this Lie algebra has a basis consisting of a central element C and generators L_m (for integer m) satisfying

[L_m, L_n] = (m − n) L_{m+n} + (C/12)(m^3 − m) δ_{m+n,0},

where δ_{m+n,0} is the Kronecker delta. The odd part of the algebra has basis G_r, where r is either an integer (the Ramond case), or half an odd integer (the Neveu–Schwarz case). In both cases, C is central in the superalgebra, and the additional graded brackets are given by

[L_m, G_r] = (m/2 − r) G_{m+r},
{G_r, G_s} = 2 L_{r+s} + (C/3)(r^2 − 1/4) δ_{r+s,0}.

Note that this last bracket is an anticommutator, not a commutator, because both generators are odd. The Ramond algebra has a presentation in terms of 2 generators and 5 conditions; and the Neveu–Schwarz algebra has a presentation in terms of 2 generators and 9 conditions.

Representations
The unitary highest weight representations of these algebras have a classification analogous to that for the Virasoro algebra, with a continuum of representations together with an infinite discrete series. The existence of these discrete series was conjectured by Daniel Friedan, Zongan Qiu, and Stephen Shenker (1984). It was proven by Peter Goddard, Adrian Kent and David Olive (1986), using a supersymmetric generalisation of the coset construction or GKO construction.

Application to superstring theory
In superstring theory, the fermionic fields on the closed string may be either periodic or anti-periodic on the circle around the string. States in the "Ramond sector" admit one option (periodic conditions are referred to as Ramond boundary conditions), described by the Ramond algebra, while those in the "Neveu–Schwarz sector" admit the other (anti-periodic conditions are referred to as Neveu–Schwarz boundary conditions), described by the Neveu–Schwarz algebra. For a fermionic field, the periodicity depends on the choice of coordinates on the worldsheet. In the w-frame, in which the worldsheet of a single string state is described as a long cylinder, states in the Neveu–Schwarz sector are anti-periodic and states in the Ramond sector are periodic. In the z-frame, in which the worldsheet of a single string state is described as an infinite punctured plane, the opposite is true. The Neveu–Schwarz sector and Ramond sector are also defined in the open string and depend on the boundary conditions of the fermionic field at the edges of the open string.

See also N = 2 superconformal algebra NS–NS sector Ramond–Ramond sector Superconformal algebra Notes References String theory Lie algebras Conformal field theory Boundary conditions
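As an illustration only, and not part of the article, the graded brackets above can be evaluated mechanically. The short Python sketch below encodes the three relations as stated and prints two sample brackets; the data representation and function names are ad hoc choices.

```python
from fractions import Fraction as Fr

# Basis labels: ("L", m) with integer m, ("G", r) with r an integer (Ramond) or a
# half-odd-integer (Neveu-Schwarz), and ("C",) for the central element.
# An algebra element is a dict mapping basis labels to rational coefficients.

def bracket(a, b):
    """Graded bracket of two basis elements, using the relations quoted above."""
    if a[0] == "C" or b[0] == "C":
        return {}                                   # C is central
    if a[0] == "L" and b[0] == "L":                 # [L_m, L_n]
        m, n = a[1], b[1]
        out = {("L", m + n): Fr(m - n)}
        if m + n == 0:
            out[("C",)] = Fr(m**3 - m, 12)
        return out
    if a[0] == "L" and b[0] == "G":                 # [L_m, G_r]
        m, r = a[1], b[1]
        return {("G", m + r): Fr(m, 2) - r}
    if a[0] == "G" and b[0] == "L":                 # [G_r, L_m] = -[L_m, G_r]
        return {k: -v for k, v in bracket(b, a).items()}
    # {G_r, G_s}: an anticommutator, since both generators are odd
    r, s = a[1], b[1]
    out = {("L", r + s): Fr(2)}
    if r + s == 0:
        out[("C",)] = Fr(1, 3) * (r * r - Fr(1, 4))
    return out

# Two sample evaluations (the second uses Neveu-Schwarz half-integer modes):
print(bracket(("L", 2), ("L", -2)))                 # 4*L_0 + (1/2)*C
print(bracket(("G", Fr(1, 2)), ("G", Fr(-1, 2))))   # 2*L_0 (central term vanishes)
```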
Super Virasoro algebra
[ "Astronomy" ]
729
[ "String theory", "Astronomical hypotheses" ]
585,577
https://en.wikipedia.org/wiki/BESM-6
BESM-6 (БЭСМ-6, short for Большая электронно-счётная машина, i.e. 'Large Electronic Calculating Machine') was a Soviet electronic computer of the BESM series.

Overview
The BESM-6 was the most well-known and influential model of the series designed at the Institute of Precision Mechanics and Computer Engineering. The design was completed in 1965. Production started in 1968 and continued for the following 19 years. Like its BESM-3 and BESM-4 predecessors, the original BESM-6 was transistor-based (however, the version used in the 1980s as a component of the Elbrus supercomputer was built with integrated circuits). The machine's 48-bit processor ran at 10 MHz clock speed and featured two instruction pipelines, separate for the control and arithmetic units, and a data cache of sixteen 48-bit words. The system achieved a performance of 1 MIPS. The CDC 6600, a common Western supercomputer when the BESM-6 was released, achieved about 2 MIPS. The system memory was word-addressable using 15-bit addresses. The maximum addressable memory space was thus 32K words (192K bytes). A virtual memory system allowed this to be expanded to 128K words (768K bytes). The BESM-6 was widely used in the USSR in the 1970s for various computation and control tasks. During the 1975 Apollo-Soyuz Test Project the processing of the space mission telemetry data was accomplished by a new computer complex which was based on a BESM-6. The Apollo-Soyuz mission's data processing by Soviet scientists finished half an hour earlier than that of their American colleagues at NASA. A total of 355 of these machines were built. Production ended in 1987. As the first Soviet computer with an installed base that was large for the time, the BESM-6 gathered a dedicated developer community. Over the years several operating systems and compilers for programming languages such as Fortran, ALGOL and Pascal were developed. A modification of the BESM-6 based on integrated circuits, with 2-3 times higher performance than the original machine, was produced in the 1980s under the name Elbrus-1K2 as a component of the Elbrus supercomputer. In 1992, one of the last surviving BESM-6 machines was purchased by the Science Museum in London, England.

Peripherals
The BESM-6 could send output to an АЦПУ-128 (Алфавитно-Цифровое Печатающее Устройство, 'alphanumeric printing device') printer, and read input from punched cards in the GOST 10859 character set. A Consul-254 teletype, made by Zbrojovka Brno in Czechoslovakia, could be used for interactive sessions. When CRT terminals became available, the BESM-6 could be connected to Videoton 340 terminals.

Further reading (NB. Has information on the BESM-6 character set.) References External links BESM-6 Nostalgia Page Soviet computer systems 1965 establishments in Russia Supercomputers
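A quick arithmetic check of the address-space figures quoted above, assuming 48-bit (6-byte) words; the snippet is illustrative and not part of the original article.

```python
word_bits = 48                      # BESM-6 word size
address_bits = 15                   # physical word addressing
phys_words = 2 ** address_bits      # 32,768 words ("32K")
print(phys_words * word_bits // 8)  # 196608 bytes = 192K bytes

virt_words = 128 * 1024             # with virtual memory: 128K words
print(virt_words * word_bits // 8)  # 786432 bytes = 768K bytes
```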
BESM-6
[ "Technology" ]
672
[ "Supercomputers", "Supercomputing", "Computer systems", "Soviet computer systems" ]
585,648
https://en.wikipedia.org/wiki/Modularity
Modularity is the degree to which a system's components may be separated and recombined, often with the benefit of flexibility and variety in use. The concept of modularity is used primarily to reduce complexity by breaking a system into varying degrees of interdependence and independence across components and "hide the complexity of each part behind an abstraction and interface". However, the concept of modularity can be extended to multiple disciplines, each with its own nuances. Despite these nuances, consistent themes concerning modular systems can be identified. Composability is one of the tenets of functional programming; this makes functional programs modular.

Contextual nuances
The meaning of the word "modularity" can vary somewhat based on context. The following are contextual examples of modularity across several fields of science, technology, industry, and culture:

Science
In biology, modularity recognizes that organisms or metabolic pathways are composed of modules.
In ecology, modularity is considered a key factor—along with diversity and feedback—in supporting resilience.
In nature, modularity may refer to the construction of a cellular organism by joining together standardized units to form larger compositions, as for example, the hexagonal cells in a honeycomb.
In cognitive science, the idea of modularity of mind holds that the mind is composed of independent, closed, domain-specific processing modules.
Visual modularity, the various putative visual modules.
Language module, the putative language module.
In the study of complex networks, modularity is a benefit function that measures the quality of a division of a network into groups or communities (a short worked example is given below).

Technology
In modular programming, modularity refers to the compartmentalization and interrelation of the parts of a software package.
In software design, modularity refers to a logical partitioning of the "software design" that allows complex software to be manageable for the purpose of implementation and maintenance. The logic of partitioning may be based on related functions, implementation considerations, data links, or other criteria.
In self-reconfiguring modular robotics, modularity refers to the ability of the robotic system to automatically achieve different morphologies to execute the task at hand.

Industry
In modular construction, modules are a bundle of redundant project components that are produced en masse prior to installation. Building components are often arranged into modules in the industrialization of construction.
In industrial design, modularity refers to an engineering technique that builds larger systems by combining smaller subsystems.
In manufacturing, modularity typically refers to modular design, either as the use of exchangeable parts or options in the fabrication of an object or the design and manufacture of modular components.
In organizational design, Richard L. Daft and Arie Y. Lewin (1993) identified a paradigm called "modular organization", grounded in the need for flexible learning organizations in constant change and the need to solve their problems through coordinated self-organizing processes. This modular organization is characterized by decentralized decision-making, flatter hierarchies, and self-organization of units.

Culture
In The Language of New Media, author Lev Manovich discusses the principle that new media is composed of modules or self-sufficient parts of the overall media object.
In contemporary art and architecture, modularity can refer to the construction of an object by joining together standardized units to form larger compositions, and/or to the use of a module as a standardized unit of measurement and proportion. In modular art, modularity refers to the ability to alter the work by reconfiguring, adding to, and/or removing its parts. Modularity in different research areas Modularity in technology and management The term is widely used in studies of technological and organizational systems. Product systems are deemed "modular", for example, when they can be decomposed into a number of components that may be mixed and matched in a variety of configurations. The components are able to connect, interact, or exchange resources (such as energy or data) in some way, by adhering to a standardized interface. Unlike a tightly integrated product whereby each component is designed to work specifically (and often exclusively) with other particular components in a tightly coupled system, modular products are systems of components that are "loosely coupled." In The Language of New Media, Lev Manovich proposes five "principles of new media"—to be understood "not as absolute laws but rather as general tendencies of a culture undergoing computerization." The five principles are numerical representation, modularity, automation, variability, and transcoding. Modularity within new media represents new media as being composed of several separate self-sufficient modules that can act independently or together in synchronisation to complete the new media object. In Photoshop, modularity is most evident in layers; a single image can be composed of many layers, each of which can be treated as an entirely independent and separate entity. Websites can be defined as being modular, their structure is formed in a format that allows their contents to be changed, removed or edited whilst still retaining the structure of the website. This is because the website's content operates separately to the website and does not define the structure of the site. The entire Web, Manovich notes, has a modular structure, composed of independent sites and pages, and each webpage itself is composed of elements and code that can be independently modified. Organizational systems are said to become increasingly modular when they begin to substitute loosely coupled forms for tightly integrated, hierarchical structures. For instance, when the firm utilizes contract manufacturing rather than in-house manufacturing, it is using an organizational component that is more independent than building such capabilities in-house: the firm can switch between contract manufacturers that perform different functions, and the contract manufacturer can similarly work for different firms. As firms in a given industry begin to substitute loose coupling with organizational components that lie outside of firm boundaries for activities that were once conducted in-house, the entire production system (which may encompass many firms) becomes increasingly modular. The firms themselves become more specialized components. Using loosely coupled structures enables firms to achieve greater flexibility in both scope and scale. This is in line with modularity in the processes of production, which relates to the way that technological artifacts are produced. This consists of the artifact's entire value chain, from the designing of the artifact to the manufacturing and distribution stages. 
In production, modularity is often due to increased design modularity. The firm can switch easily between different providers of these activities (e.g., between different contract manufacturers or alliance partners) compared to building the capabilities for all activities in house, thus responding to different market needs more quickly. However, these flexibility gains come with a price. Therefore, the organization must assess the flexibility gains achievable, and any accompanying loss of performance, with each of these forms. Modularization within firms leads to the disaggregation of the traditional form of hierarchical governance. The firm is decomposed into relatively small autonomous organizational units (modules) to reduce complexity. Modularization leads to a structure, in which the modules integrate strongly interdependent tasks, while the interdependencies between the modules are weak. In this connection the dissemination of modular organizational forms has been facilitated by the widespread efforts of the majority of large firms to re-engineer, refocus and restructure. These efforts usually involve a strong process-orientation: the complete service-provision process of the business is split up into partial processes, which can then be handled autonomously by cross-functional teams within organizational units (modules). The co-ordination of the modules is often carried out by using internal market mechanisms, in particular by the implementation of profit centers. Overall, modularization enables more flexible and quicker reaction to changing general or market conditions. Building on the above principles, many alternative forms of modularization of organizations (for-profit or non-profit) are possible. However, modularization is not an independent and self-contained organizational concept, but rather consists of several basic ideas, which are integral parts of other organizational concepts. These central ideas can be found in every firm. Accordingly, it is not sensible to characterize a firm as "modular" or as "not modular", because firms are always modular to a some degree. Input systems, or "domain specific computational mechanisms" (such as the ability to perceive spoken language) are termed vertical faculties, and according to Jerry Fodor they are modular in that they possess a number of characteristics Fodor argues constitute modularity. Fodor's list of features characterizing modules includes the following: Domain specific (modules only respond to inputs of a specific class, and thus a "species of vertical faculty" (Fodor, 1996 [1983]:37) Innately specified (the structure is inherent and is not formed by a learning process) Not assembled (modules are not put together from a stock of more elementary subprocesses but rather their virtual architecture maps directly onto their neural implementation) Neurologically hardwired (modules are associated with specific, localized, and elaborately structured neural systems rather than fungible neural mechanisms) Autonomous (modules independent of other modules) Fodor does not argue that this is formal definition or an all-inclusive list of features necessary for modularity. He argues only that cognitive systems characterized by some of the features above are likely to be characterized by them all, and that such systems can be considered modular. 
He also notes that the characteristics are not an all-or-nothing proposition, but rather each of the characteristics may be manifest in some degree, and that modularity itself is also not a dichotomous construct—something may be more or less modular: "One would thus expect—what anyhow seems to be desirable—that the notion of modularity ought to admit of degrees" (Fodor, 1996 [1983]:37). Notably, Fodor's "not assembled" feature contrasts sharply with the use of modularity in other fields in which modular systems are seen to be hierarchically nested (that is, modules are themselves composed of modules, which in turn are composed of modules, etc.) However, Max Coltheart notes that Fodor's commitment to the non-assembled feature appears weak, and other scholars (e.g., Block) have proposed that Fodor's modules could be decomposed into finer modules. For instance, while Fodor distinguishes between separate modules for spoken and written language, Block might further decompose the spoken language module into modules for phonetic analysis and lexical forms: "Decomposition stops when all the components are primitive processors—because the operation of a primitive processor cannot be further decomposed into suboperations" Though Fodor's work on modularity is one of the most extensive, there is other work in psychology on modularity worth noting for its symmetry with modularity in other disciplines. For instance, while Fodor focused on cognitive input systems as modules, Coltheart proposes that there may be many different kinds of cognitive modules, and distinguishes between, for example, knowledge modules and processing modules. The former is a body of knowledge that is independent of other bodies of knowledge, while the latter is a mental information-processing system independent from other such systems. However, the data neuroscientists have accumulated have not pointed to an organization system as neat and precise as the modularity theory originally proposed originally by Jerry Fodor. It has been shown to be much messier and different from person to person, even though general patterns exist; through a mixture of neuroimaging and lesion studies, it has been shown that there are certain regions that perform certain functions and other regions that do not perform those functions. Modularity in biology As in some of the other disciplines, the term modularity may be used in multiple ways in biology. For example, it may refer to organisms that have an indeterminate structure wherein modules of various complexity (e.g., leaves, twigs) may be assembled without strict limits on their number or placement. Many plants and sessile (immobile) invertebrates of the benthic zones demonstrate this type of modularity (by contrast, many other organisms have a determinate structure that is predefined in embryogenesis). The term has also been used in a broader sense in biology to refer to the reuse of homologous structures across individuals and species. Even within this latter category, there may be differences in how a module is perceived. For instance, evolutionary biologists may focus on the module as a morphological component (subunit) of a whole organism, while developmental biologists may use the term module to refer to some combination of lower-level components (e.g., genes) that are able to act in a unified way to perform a function. In the former, the module is perceived a basic component, while in the latter the emphasis is on the module as a collective. 
Biology scholars have provided a list of features that should characterize a module (much as Fodor did in The Modularity of Mind). For instance, Rudy Raff provides the following list of characteristics that developmental modules should possess: discrete genetic specification hierarchical organization interactions with other modules a particular physical location within a developing organism the ability to undergo transformations on both developmental and evolutionary time scales To Raff's mind, developmental modules are "dynamic entities representing localized processes (as in morphogenetic fields) rather than simply incipient structures ... (... such as organ rudiments)". Bolker, however, attempts to construct a definitional list of characteristics that is more abstract, and thus more suited to multiple levels of study in biology. She argues that: A module is a biological entity (a structure, a process, or a pathway) characterized by more internal than external integration Modules are biological individuals that can be delineated from their surroundings or context, and whose behavior or function reflects the integration of their parts, not simply the arithmetical sum. That is, as a whole, the module can perform tasks that its constituent parts could not perform if dissociated. In addition to their internal integration, modules have external connectivity, yet they can also be delineated from the other entities with which they interact in some way. Another stream of research on modularity in biology that should be of particular interest to scholars in other disciplines is that of Günter Wagner and Lee Altenberg. Altenberg's work, Wagner's work, and their joint writing explores how natural selection may have resulted in modular organisms, and the roles modularity plays in evolution. Altenberg's and Wagner's work suggests that modularity is both the result of evolution, and facilitates evolution—an idea that shares a marked resemblance to work on modularity in technological and organizational domains. Modularity in the arts The use of modules in the fine arts has a long pedigree among diverse cultures. In the classical architecture of Greco-Roman antiquity, the module was utilized as a standardized unit of measurement for proportioning the elements of a building. Typically the module was established as one-half the diameter of the lower shaft of a classical column; all the other components in the syntax of the classical system were expressed as a fraction or multiple of that module. In traditional Japanese construction, room sizes were often determined by combinations of standard rice mats called tatami; the standard dimension of a mat was around 3 feet by 6 feet, which approximate the overall proportions of a reclining human figure. The module thus becomes not only a proportional device for use with three-dimensional vertical elements but a two-dimensional planning tool as well. Modularity as a means of measurement is intrinsic to certain types of building; for example, brick construction is by its nature modular insofar as the fixed dimensions of a brick necessarily yield dimensions that are multiples of the original unit. Attaching bricks to one another to form walls and surfaces also reflects a second definition of modularity: namely, the use of standardized units that physically connect to each other to form larger compositions. 
With the advent of modernism and advanced construction techniques in the 20th century, this latter definition transforms modularity from a compositional attribute to a thematic concern in its own right. A school of modular constructivism develops in the 1950s among a circle of sculptors who create sculpture and architectural features out of repetitive units cast in concrete. A decade later modularity becomes an autonomous artistic concern of its own, as several important Minimalist artists adopt it as their central theme. Modular building as both an industrial production model and an object of advanced architectural investigation develops from this same period. Modularity has found renewed interest among proponents of ModulArt, a form of modular art in which the constituent parts can be physically reconfigured, removed and/or added to. After a few isolated experiments in ModulArt starting in the 1950s, several artists since the 1990s have explored this flexible, customizable and co-creative form of art. Modularity in fashion Modularity in fashion is the ability to customise garments through adding and removing elements or altering the silhouette, usually via zips, hook and eye closures or other fastenings. Throughout history it has been used to tailor garments, existing even in the 17th century. In recent years, an increasing number of fashion designers – especially those focused on slow or sustainable fashion – are experimenting with this concept. Within the realm of Haute Couture, Yohji Yamamoto and Hussein Chalayan are notable examples, the latter especially for his use of technology to create modular garments. Studies carried out in Finland and the US show favourable attitudes of consumers to modular fashion; despite this, the concept has not yet made it into mainstream fashion. The current emphasis within modular fashion is on the co-designing and customisation factors for consumers, with a goal to combat the swift changes to customers' needs and wants, while also tackling sustainability by extending the life-cycle of garments. Modularity in product design Modularity is a concept that has been widely used in architecture and industry. In interior design, modularity is used in order to achieve customizable products that are economically viable. Examples include some of the customizable creations of IKEA and mostly high-end high-cost concepts. Modularity in interior design, or "modularity in use", refers to the opportunities of combinations and reconfigurations of the modules in order to create an artefact that suits the specific needs of the user and simultaneously grows with them. The evolution of 3D printing technology has enabled customizable furniture to become feasible. Objects can be prototyped, changed depending on the space and customized depending on the user's needs. Designers can prototype and showcase their modules over the internet just by using 3D printing technology. Sofas are a common piece that have modular utilities ranging from an ottoman to a bed, as well as fabrics and textiles that are swappable. The modular sofa originated in the 1940s, when it was invented by Harvey Probber, was refined in the 1970s, and reached mass-scale consumer adoption in the 2010s and 2020s. Modularity in American studies In Modular America, John Blair argues that as Americans began to replace social structures inherited from Europe (predominantly England and France), they evolved a uniquely American tendency towards modularity in fields as diverse as education, music, and architecture. 
Blair observes that when the word module first emerged in the sixteenth and seventeenth centuries, it meant something very close to model. It implied a small-scale representation or example. By the eighteenth and nineteenth centuries, the word had come to imply a standard measure of fixed ratios and proportions. For example, in architecture, the proportions of a column could be stated in modules (i.e., "a height of fourteen modules equaled seven times the diameter measured at the base") and thus multiplied to any size while still retaining the desired proportions. However, in America, the meaning and usage of the word shifted considerably: "Starting with architectural terminology in the 1930s, the new emphasis was on any entity or system designed in terms of modules as subcomponents. As applications broadened after World War II to furniture, hi-fi equipment, computer programs and beyond, modular construction came to refer to any whole made up of self-contained units designed to be equivalent parts of a system, hence, we might say, "systemically equivalent." Modular parts are implicitly interchangeable and/or recombinable in one or another of several senses". Blair defines a modular system as "one that gives more importance to parts than to wholes. Parts are conceived as equivalent and hence, in one or more senses, interchangeable and/or cumulative and/or recombinable" (p. 125). Blair describes the emergence of modular structures in education (the college curriculum), industry (modular product assembly), architecture (skyscrapers), music (blues and jazz), and more. In his concluding chapter, Blair does not commit to a firm view of what causes Americans to pursue more modular structures in the diverse domains in which it has appeared; but he does suggest that it may in some way be related to the American ideology of liberal individualism and a preference for anti-hierarchical organization. Consistent themes Comparing the use of modularity across disciplines reveals several themes: One theme that shows up in psychology and biology study is innately specified. Innately specified (as used here) implies that the purpose or structure of the module is predetermined by some biological mandate. Domain specificity, that modules respond only to inputs of a specific class (or perform functions only of a specific class) is a theme that clearly spans psychology and biology, and it can be argued that it also spans technological and organizational systems. Domain specificity would be seen in the latter disciplines as specialization of function. Hierarchically nested is a theme that recurs in most disciplines. Though originally disavowed by Jerry Fodor, other psychologists have embraced it, and it is readily apparent in the use of modularity in biology (e.g., each module of an organism can be decomposed into finer modules), social processes and artifacts (e.g., we can think of a skyscraper in terms of blocks of floors, a single floor, elements of a floor, etc.), mathematics (e.g., the modulus 6 may be further divided into the moduli 1, 2 and 3), and technological and organizational systems (e.g., an organization may be composed of divisions, which are composed of teams, which are composed of individuals). Greater internal than external integration is a theme that showed up in every discipline but mathematics. Often referred to as autonomy, this theme acknowledged that there may be interaction or integration between modules, but the greater interaction and integration occurs within the module. 
This theme is very closely related to information encapsulation, which shows up explicitly in both the psychology and technology research. Near decomposability (as termed by Simon, 1962) shows up in all of the disciplines, but is manifest in a matter of degrees. For instance, in psychology and biology it may refer merely to the ability to delineate one module from another (recognizing the boundaries of the module). In several of the social artifacts, mathematics, and technological or organizational systems, however, it refers to the ability to actually separate components from one another. In several of the disciplines this decomposability also enables the complexity of a system (or process) to be reduced. This is aptly captured in a quote from David Marr about psychological processes where he notes that, "any large computation should be split up into a collection of small, nearly independent, specialized subprocesses." Reducing complexity is also the express purpose of casting out nines in mathematics. Substitutability and recombinability are closely related constructs. The former refers to the ability to substitute one component for another as in John Blair's "systemic equivalence" while the latter may refer both to the indeterminate form of the system and the indeterminate use of the component. In US college curricula, for example, each course is designed with a credit system that ensures a uniform number of contact hours, and approximately uniform educational content, yielding substitutability. By virtue of their substitutability, each student may create their own curricula (recombinability of the curriculum as a system) and each course may be said to be recombinable with a variety of students' curricula (recombinability of the component within multiple systems). Both substitutability and recombinability are immediately recognizable in Blair's social processes and artifacts, and are also well captured in Garud and Kumaraswamy's discussion of economies of substitution in technological systems. Blair's systemic equivalence also demonstrates the relationship between substitutability and the module as a homologue. Blair's systemic equivalence refers to the ability for multiple modules to perform approximately the same function within a system, while in biology a module as a homologue refers to different modules sharing approximately the same form or function in different organisms. The extreme of the module as homologue is found in mathematics, where (in the simplest case) the modules refer to the reuse of a particular number and thus each module is exactly alike. In all but mathematics, there has been an emphasis that modules may be different in kind. In Fodor's discussion of modular cognitive system, each module performs a unique task. In biology, even modules that are considered homologous may be somewhat different in form and function (e.g., a whale's fin versus a human's hand). In Blair's book, he points out that while jazz music may be composed of structural units that conform to the same underlying rules, those components vary significantly. Similarly in studies of technology and organization, modular systems may be composed of modules that are very similar (as in shelving units that may be piled one atop the other) or very different (as in a stereo system where each component performs unique functions) or any combination in between. See also Configuration design Object-oriented programming Pattern language References Research articles R. Phukan, D. Nam, D. Dong and R. 
Burgos, "Design Considerations for a Modular 2-Stage LCLC Filter for Three Phase AC-DC Interleaved Converters," 2022 IEEE Transportation Electrification Conference & Expo (ITEC), Anaheim, CA, USA, 2022, pp. 517-522. doi: 10.1109/ITEC53557.2022.9813883 S. Ohn et al., "A Scalable Filter Topology for $N$-Parallel Modular Three-Phase AC–DC Converters by an Arrangement of Coupled Inductors," in IEEE Transactions on Power Electronics, vol. 37, no. 11, pp. 13358-13367, Nov. 2022. doi: 10.1109/TPEL.2022.3179396 R. Phukan et al., "Design of an Indirectly Coupled Filter Building Block for Modular Interleaved AC–DC Converters," in IEEE Transactions on Power Electronics, vol. 37, no. 11, pp. 13343-13357, Nov. 2022. doi: 10.1109/TPEL.2022.3179346 R. Phukan, S. Ohn, D. Dong, R. Burgos, G. Mondal and S. Nielebock, "Evaluation of Modular AC Filter Building Blocks for Full SiC based Grid-Tied Three Phase Converters," 2020 IEEE Energy Conversion Congress and Exposition (ECCE), Detroit, MI, USA, 2020, pp. 1835-1841. doi: 10.1109/ECCE44975.2020.9236265 S. Ohn et al., "Modular Filter Building Block for Modular full-SiC AC-DC Converters by an Arrangement of Coupled Inductors," 2020 IEEE Energy Conversion Congress and Exposition (ECCE), Detroit, MI, USA, 2020, pp. 4130-4136. doi: 10.1109/ECCE44975.2020.9236309 R. Phukan, S. Ohn, D. Dong, R. Burgos, G. Mondal and S. Nielebock, "Design and Optimization of a Highly Integrated Modular Filter Building Block for Three-Level Grid Tied Converters," 2020 IEEE Energy Conversion Congress and Exposition (ECCE), Detroit, MI, USA, 2020, pp. 4949-4956. doi: 10.1109/ECCE44975.2020.9235895 Operations management Abstraction Systems Design
Modularity
[ "Engineering" ]
5,867
[ "Design" ]
585,710
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Sweden
In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Sweden (SE), the three levels are: NUTS codes SE SWEDEN (SVERIGE) SE1 EAST SWEDEN (ÖSTRA SVERIGE) SE11 Stockholm (Stockholm) SE110 Stockholm County (Stockholms län) SE12 East Middle Sweden (Östra Mellansverige) SE121 Uppsala County (Uppsala län) SE122 Södermanland County (Södermanlands län) SE123 Östergötland County (Östergötlands län) SE124 Örebro County (Örebro län) SE125 Västmanland County (Västmanlands län) SE2 SOUTH SWEDEN (SÖDRA SVERIGE) SE21 Småland and the islands (Småland med öarna) SE211 Jönköping County (Jönköpings län) SE212 Kronoberg County (Kronobergs län) SE213 Kalmar County (Kalmar län) SE214 Gotland County (Gotlands län) SE22 South Sweden (Sydsverige) SE221 Blekinge County (Blekinge län) SE224 Skåne County (Skåne län) SE23 West Sweden (Västsverige) SE231 Halland County (Hallands län) SE232 Västra Götaland County (Västra Götalands län) SE3 NORTH SWEDEN (NORRA SVERIGE) SE31 North Middle Sweden (Norra Mellansverige) SE311 Värmland County (Värmlands län) SE312 Dalarna County (Dalarnas län) SE313 Gävleborg County (Gävleborgs län) SE32 Middle Norrland (Mellersta Norrland) SE321 Västernorrland County (Västernorrlands län) SE322 Jämtland County (Jämtlands län) SE33 Upper Norrland (Övre Norrland) SE331 Västerbotten County (Västerbottens län) SE332 Norrbotten County (Norrbottens län) NUTS codes prior to 31.12.2007 Prior to 31.12.2007, the codes were as follows: The National Areas of Sweden are 8 second level subdivisions (NUTS-2) of Sweden, created by the European Union for statistical purposes. The 8 riksområden (Singular : Riksområde) includes the 21 counties of Sweden. Only Stockholm (SE01) corresponds simply to the homonymous county. Local administrative units Below the NUTS levels, the two LAU (Local Administrative Units) levels are: The LAU codes of Sweden can be downloaded here: NUTS 1 compared to Lands of Sweden While similar, NUTS 1 doesn't correspond to Lands of Sweden. See also List of Swedish regions by Human Development Index Subdivisions of Sweden ISO 3166-2 codes of Sweden FIPS region codes of Sweden External links Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe Overview map of EU Countries - NUTS level 1 SVERIGE - NUTS level 2 SVERIGE - NUTS level 3 Correspondence between the NUTS levels and the national administrative units List of current NUTS codes Download current NUTS codes (ODS format) Counties of Sweden, Statoids.com References Sweden Nuts
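The hierarchy above is encoded in the codes themselves: each NUTS level appends one character to its parent's code, so SE332 (NUTS 3) sits under SE33 (NUTS 2), SE3 (NUTS 1) and SE (country). A minimal Python sketch of this prefix rule follows; the function name and the small lookup table are illustrative only and not part of any official NUTS tooling.

def nuts_parents(code: str) -> list[str]:
    """Return the chain of parent codes for a Swedish NUTS code, e.g. SE332 -> SE33, SE3, SE."""
    return [code[:length] for length in range(len(code) - 1, 1, -1)]

names = {  # a few entries transcribed from the listing above
    "SE": "SWEDEN (SVERIGE)",
    "SE3": "NORTH SWEDEN (NORRA SVERIGE)",
    "SE33": "Upper Norrland (Övre Norrland)",
    "SE332": "Norrbotten County (Norrbottens län)",
}

for code in ["SE332"] + nuts_parents("SE332"):
    print(code, "-", names[code])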
NUTS statistical regions of Sweden
[ "Mathematics" ]
685
[ "Nomenclature of Territorial Units for Statistics", "Statistical concepts", "Statistical regions" ]
585,797
https://en.wikipedia.org/wiki/Integer-valued%20polynomial
In mathematics, an integer-valued polynomial (also known as a numerical polynomial) is a polynomial whose value $P(n)$ is an integer for every integer n. Every polynomial with integer coefficients is integer-valued, but the converse is not true. For example, the polynomial $P(t) = \tfrac{1}{2}t(t+1)$ takes on integer values whenever t is an integer. That is because one of t and $t + 1$ must be an even number. (The values this polynomial takes are the triangular numbers.) Integer-valued polynomials are objects of study in their own right in algebra, and frequently appear in algebraic topology. Classification The class of integer-valued polynomials was described fully by . Inside the polynomial ring $\mathbb{Q}[t]$ of polynomials with rational number coefficients, the subring of integer-valued polynomials is a free abelian group. It has as basis the polynomials $P_k(t) = \binom{t}{k}$ for $k = 0, 1, 2, \dots$, i.e., the binomial coefficients. In other words, every integer-valued polynomial can be written as an integer linear combination of binomial coefficients in exactly one way. The proof is by the method of discrete Taylor series: binomial coefficients are integer-valued polynomials, and conversely, the discrete difference of an integer series is an integer series, so the discrete Taylor series of an integer series generated by a polynomial has integer coefficients (and is a finite series). Fixed prime divisors Integer-valued polynomials may be used effectively to solve questions about fixed divisors of polynomials. For example, the polynomials P with integer coefficients that always take on even number values are just those such that $P/2$ is integer-valued. Those in turn are the polynomials that may be expressed as a linear combination with even integer coefficients of the binomial coefficients. In questions of prime number theory, such as Schinzel's hypothesis H and the Bateman–Horn conjecture, it is a matter of basic importance to understand the case when P has no fixed prime divisor (this has been called Bunyakovsky's property, after Viktor Bunyakovsky). By writing P in terms of the binomial coefficients, we see the highest fixed prime divisor is also the highest prime common factor of the coefficients in such a representation. So Bunyakovsky's property is equivalent to coprime coefficients. As an example, the pair of polynomials $n$ and $n^2 + 2$ violates this condition at $p = 3$: for every $n$ the product $n(n^2 + 2)$ is divisible by 3, which follows from the representation $n(n^2 + 2) = 6\binom{n}{3} + 6\binom{n}{2} + 3\binom{n}{1}$ with respect to the binomial basis, where the highest common factor of the coefficients—hence the highest fixed divisor of $n(n^2 + 2)$—is 3. Other rings Numerical polynomials can be defined over other rings and fields, in which case the integer-valued polynomials above are referred to as classical numerical polynomials. Applications The K-theory of BU(n) is numerical (symmetric) polynomials. The Hilbert polynomial of a polynomial ring in k + 1 variables is the numerical polynomial $\binom{t+k}{k}$. References Algebra Algebraic topology Further reading Polynomials Number theory Commutative algebra Ring theory
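The binomial-coefficient basis and the discrete Taylor (finite difference) argument sketched above can be checked mechanically. The Python sketch below is only an illustration (the helper function is not from any standard library): it recovers the integer coefficients of the triangular-number polynomial t(t + 1)/2 in the binomial basis, giving t(t + 1)/2 = C(t, 1) + C(t, 2).

from math import comb

def binomial_basis_coeffs(p, degree):
    """Coefficients c_k with p(t) = sum_k c_k * C(t, k), read off as forward differences at t = 0."""
    values = [p(t) for t in range(degree + 1)]
    coeffs = []
    for _ in range(degree + 1):
        coeffs.append(values[0])                       # k-th forward difference at t = 0
        values = [b - a for a, b in zip(values, values[1:])]
    return coeffs

p = lambda t: t * (t + 1) // 2                         # triangular numbers; integer-valued
c = binomial_basis_coeffs(p, 2)
print(c)                                               # [0, 1, 1]

# Sanity check: the integer combination of binomial coefficients reproduces p.
assert all(p(t) == sum(ck * comb(t, k) for k, ck in enumerate(c)) for t in range(10))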
Integer-valued polynomial
[ "Mathematics" ]
583
[ "Discrete mathematics", "Algebra", "Polynomials", "Ring theory", "Fields of abstract algebra", "Commutative algebra", "Number theory" ]
585,826
https://en.wikipedia.org/wiki/Invariant%20subspace
In mathematics, an invariant subspace of a linear mapping T : V → V, i.e. from some vector space V to itself, is a subspace W of V that is preserved by T. More generally, an invariant subspace for a collection of linear mappings is a subspace preserved by each mapping individually. For a single operator Consider a vector space $V$ and a linear map $T : V \to V$. A subspace $W \subseteq V$ is called an invariant subspace for $T$, or equivalently, $T$-invariant, if $T$ transforms any vector $w \in W$ back into $W$. In formulas, this can be written $T(W) \subseteq W$, or $Tw \in W$ for every $w \in W$. In this case, $T$ restricts to an endomorphism $T|_W$ of $W$: $T|_W : W \to W$, $T|_W(w) = T(w)$. The existence of an invariant subspace also has a matrix formulation. Pick a basis C for W and complete it to a basis B of V. With respect to $B$, the operator $T$ has the block upper-triangular form $[T]_B = \begin{bmatrix} [T|_W]_C & A_{12} \\ 0 & A_{22} \end{bmatrix}$ for some matrices $A_{12}$ and $A_{22}$, where $[T|_W]_C$ here denotes the matrix of $T|_W$ with respect to the basis C. Examples Any linear map $T : V \to V$ admits the following invariant subspaces: The vector space $V$, because $T$ maps every vector in $V$ into $V$. The set $\{0\}$, because $T(0) = 0$. These are the improper and trivial invariant subspaces, respectively. Certain linear operators have no proper non-trivial invariant subspace: for instance, rotation of a two-dimensional real vector space. However, the axis of a rotation in three dimensions is always an invariant subspace. 1-dimensional subspaces If $U$ is a 1-dimensional invariant subspace for operator $T$ with nonzero vector $v \in U$, then the vectors $v$ and $Tv$ must be linearly dependent. Thus $Tv = \lambda v$ for some scalar $\lambda$. In fact, the scalar $\lambda$ does not depend on $v$. The equation above formulates an eigenvalue problem. Any eigenvector for $T$ spans a 1-dimensional invariant subspace, and vice-versa. In particular, a nonzero invariant vector (i.e. a fixed point of T) spans an invariant subspace of dimension 1. As a consequence of the fundamental theorem of algebra, every linear operator on a nonzero finite-dimensional complex vector space has an eigenvector. Therefore, every such linear operator in at least two dimensions has a proper non-trivial invariant subspace. Diagonalization via projections Determining whether a given subspace W is invariant under T is ostensibly a problem of geometric nature. Matrix representation allows one to phrase this problem algebraically. Write $V$ as the direct sum $W \oplus W'$; a suitable $W'$ can always be chosen by extending a basis of $W$. The associated projection operator P onto W has matrix representation $P = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}$ with respect to the decomposition $W \oplus W'$. A straightforward calculation shows that W is $T$-invariant if and only if PTP = TP. If 1 is the identity operator, then $1 - P$ is the projection onto $W'$. The equation $PT = TP$ holds if and only if both im(P) and im(1 − P) are invariant under T. In that case, T has the block-diagonal matrix representation $[T] = \begin{bmatrix} T_{11} & 0 \\ 0 & T_{22} \end{bmatrix}$. Colloquially, a projection that commutes with T "diagonalizes" T. Lattice of subspaces As the above examples indicate, the invariant subspaces of a given linear transformation T shed light on the structure of T. When V is a finite-dimensional vector space over an algebraically closed field, linear transformations acting on V are characterized (up to similarity) by the Jordan canonical form, which decomposes V into invariant subspaces of T. Many fundamental questions regarding T can be translated to questions about invariant subspaces of T. The set of $T$-invariant subspaces of $V$ is sometimes called the invariant-subspace lattice of $T$ and written $\operatorname{Lat}(T)$. As the name suggests, it is a (modular) lattice, with meets and joins given by (respectively) set intersection and linear span. A minimal element in $\operatorname{Lat}(T)$ is said to be a minimal invariant subspace. In the study of infinite-dimensional operators, $\operatorname{Lat}(T)$ is sometimes restricted to only the closed invariant subspaces. 
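As a quick numerical illustration of the block-triangular matrix formulation and the PTP = TP criterion above, the following NumPy sketch (the matrices are arbitrary examples, not from any reference) checks that W = span(e1, e2) is invariant under a block upper-triangular operator, while its complement is not:

import numpy as np

# T is block upper-triangular with respect to W = span(e1, e2), so W is T-invariant.
T = np.array([[2., 1., 5.],
              [0., 3., 7.],
              [0., 0., 4.]])

P = np.diag([1., 1., 0.])              # projection onto W along span(e3)

print(np.allclose(P @ T @ P, T @ P))   # True:  W is T-invariant
print(np.allclose(P @ T, T @ P))       # False: span(e3) = im(1 - P) is not T-invariant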
For multiple operators Given a collection $\Sigma$ of operators, a subspace $W$ is called $\Sigma$-invariant if it is invariant under each $T \in \Sigma$. As in the single-operator case, the invariant-subspace lattice of $\Sigma$, written $\operatorname{Lat}(\Sigma)$, is the set of all $\Sigma$-invariant subspaces, and bears the same meet and join operations. Set-theoretically, it is the intersection $\operatorname{Lat}(\Sigma) = \bigcap_{T \in \Sigma} \operatorname{Lat}(T)$. Examples Let $\Sigma = L(V)$ be the set of all linear operators on $V$. Then $\operatorname{Lat}(\Sigma) = \{\{0\}, V\}$. Given a representation of a group G on a vector space V, we have a linear transformation T(g) : V → V for every element g of G. If a subspace W of V is invariant with respect to all these transformations, then it is a subrepresentation and the group G acts on W in a natural way. The same construction applies to representations of an algebra. As another example, let $T \in L(V)$ and let $\Sigma$ be the algebra generated by {1, T }, where 1 is the identity operator. Then Lat(T) = Lat(Σ). Fundamental theorem of noncommutative algebra Just as the fundamental theorem of algebra ensures that every linear transformation acting on a finite-dimensional complex vector space has a non-trivial invariant subspace, the fundamental theorem of noncommutative algebra asserts that Lat(Σ) contains non-trivial elements for certain Σ. One consequence is that every commuting family in L(V) can be simultaneously upper-triangularized. To see this, note that an upper-triangular matrix representation corresponds to a flag of invariant subspaces, that a commuting family generates a commuting algebra, and that $L(V)$ is not commutative when $\dim(V) > 1$. Left ideals If A is an algebra, one can define a left regular representation Φ on A by Φ(a)b = ab; Φ is a homomorphism from A to L(A), the algebra of linear transformations on A. The invariant subspaces of Φ are precisely the left ideals of A. A left ideal M of A gives a subrepresentation of A on M. If M is a left ideal of A then the left regular representation Φ on M now descends to a representation Φ' on the quotient vector space A/M. If [b] denotes an equivalence class in A/M, Φ'(a)[b] = [ab]. The kernel of the representation Φ' is the set {a ∈ A | ab ∈ M for all b}. The representation Φ' is irreducible if and only if M is a maximal left ideal, since a subspace V ⊂ A/M is invariant under {Φ'(a) | a ∈ A} if and only if its preimage under the quotient map, V + M, is a left ideal in A. Invariant subspace problem The invariant subspace problem concerns the case where V is a separable Hilbert space over the complex numbers, of dimension > 1, and T is a bounded operator. The problem is to decide whether every such T has a non-trivial, closed, invariant subspace. It is unsolved. In the more general case where V is assumed to be a Banach space, Per Enflo (1976) found an example of an operator without an invariant subspace. A concrete example of an operator without an invariant subspace was produced in 1985 by Charles Read. Almost-invariant halfspaces Related to invariant subspaces are so-called almost-invariant-halfspaces (AIHS's). A closed subspace $Y$ of a Banach space $X$ is said to be almost-invariant under an operator $T$ if $TY \subseteq Y + E$ for some finite-dimensional subspace $E$; equivalently, $Y$ is almost-invariant under $T$ if there is a finite-rank operator $F$ such that $(T + F)Y \subseteq Y$, i.e. if $Y$ is invariant (in the usual sense) under $T + F$. In this case, the minimum possible dimension of $E$ (or rank of $F$) is called the defect. Clearly, every finite-dimensional and finite-codimensional subspace is almost-invariant under every operator. Thus, to make things non-trivial, we say that $Y$ is a halfspace whenever it is a closed subspace with infinite dimension and infinite codimension. 
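The simultaneous upper-triangularization of a commuting family rests on the fact that commuting operators share invariant subspaces. A small NumPy sketch (example matrices only, not from any reference) verifies this key step: an eigenspace of A is invariant under any B that commutes with A, so Lat({A, B}) contains a non-trivial element.

import numpy as np

A = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 2.]])
B = A @ A + A                                # any polynomial in A commutes with A
assert np.allclose(A @ B, B @ A)

# The eigenspace of A for eigenvalue 1 is W = span(e1); B maps W into W.
e1 = np.array([1., 0., 0.])
print(B @ e1)                                # [2. 0. 0.] -- still a multiple of e1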
The AIHS problem asks whether every operator admits an AIHS. In the complex setting it has already been solved; that is, if $X$ is a complex infinite-dimensional Banach space and $T$ is a bounded operator on $X$, then $T$ admits an AIHS of defect at most 1. It is not currently known whether the same holds if $X$ is a real Banach space. However, some partial results have been established: for instance, any self-adjoint operator on an infinite-dimensional real Hilbert space admits an AIHS, as does any strictly singular (or compact) operator acting on a real infinite-dimensional reflexive space. See also Invariant manifold Lomonosov's invariant subspace theorem References Sources Linear algebra Operator theory Representation theory
Invariant subspace
[ "Mathematics" ]
1,720
[ "Linear algebra", "Representation theory", "Fields of abstract algebra", "Algebra" ]
585,842
https://en.wikipedia.org/wiki/Sabot%20%28firearms%29
A sabot (, ) is a supportive device used in firearm/artillery ammunitions to fit/patch around a projectile, such as a bullet/slug or a flechette-like projectile (such as a kinetic energy penetrator), and keep it aligned in the center of the barrel when fired. It allows a narrower projectile with high sectional density to be fired through a barrel of much larger bore diameter with maximal accelerative transfer of kinetic energy. After leaving the muzzle, the sabot typically separates from the projectile in flight, diverting only a very small portion of the overall kinetic energy. The sabot component in projectile design is the relatively thin, tough and deformable seal known as a driving band or obturation ring needed to trap propellant gases behind a projectile, and also keep the projectile centered in the barrel, when the outer shell of the projectile is only slightly smaller in diameter than the caliber of the barrel. Driving bands and obturators are used to seal these full-bore projectiles in the barrel because of manufacturing tolerances; there always exists some gap between the projectile outer diameter and the barrel inner diameter, usually a few thousandths of an inch; enough of a gap for high pressure gasses to slip by during firing. Driving bands and obturator rings are made from material that will deform and seal the barrel as the projectile is forced from the chamber into the barrel. Sabots use driving bands and obturators, because the same manufacturing tolerance issues exist when sealing the saboted projectile in the barrel, but the sabot itself is a more substantial structural component of the in-bore projectile configuration. Refer to the two armor-piercing fin-stabilized discarding sabot (APFSDS) pictures to see the substantial material nature of a sabot to fill the bore diameter around the sub-caliber arrow-type flight projectile, compared to the very small gap sealed by a driving band or obturator to mitigate what is known classically as windage. More detailed cutaways of the internal structural complexity of advanced APFSDS saboted long rod penetrator projectiles can be found in #External links. Design The function of a sabot is to provide a larger bulkhead structure that fills the entire bore area between an intentionally designed sub-caliber flight projectile and the barrel, giving a larger surface area for propellant gasses to act upon than just the base of the smaller flight projectile. Efficient aerodynamic design of a flight projectile does not always accommodate efficient interior ballistic design to achieve high muzzle velocity. This is especially true for arrow-type projectiles, which are long and thin for low drag efficiency, but too thin to shoot from a gun barrel of equal diameter to achieve high muzzle velocity. The physics of interior ballistics demonstrates why the use of a sabot is advantageous to achieve higher muzzle velocity with an arrow-type projectile. Propellant gasses generate high pressure, and the larger the base area that pressure acts upon the greater the net force on that surface. Force (pressure times area) provides an acceleration to the mass of the projectile. Therefore, for a given pressure and barrel diameter, a lighter projectile can be driven from a barrel to a higher muzzle velocity than a heavier projectile. However, a lighter projectile may not fit in the barrel, because it is too thin. 
To make up this difference in diameter, a properly designed sabot provides less parasitic mass than if the flight projectile were made full-bore, in particular providing dramatic improvement in muzzle velocity for APDS (Armor-piercing discarding sabot) and APFSDS (Armor-piercing fin-stabilized discarding sabot) ammunition. Seminal research on two important sabot configurations for long rod penetrators used in APFSDS ammunition, namely the "saddle-back" and "double-ramp" sabot was performed by the US Army Ballistics Research Laboratory during the development and improvement of modern 105mm and 120mm kinetic energy APFSDS penetrators and published in 1978, permitted by the significant advancement in the computerized finite element method in structural mechanics at that time; and now represents the existing fielded technology standard. (See for example the development of the M829 series of anti-tank projectiles beginning with the base model M829 in the early 1980s, to the 2016 M829A4 model, employing ever longer "double-ramp" sabots). Upon muzzle exit, the sabot is discarded, and the smaller flight projectile flies to the target with less drag resistance than a full-bore projectile. In this manner, very high velocity and slender, low drag projectiles can be fired more efficiently, (see external ballistics and terminal ballistics). Nevertheless, the weight of the sabot represents parasitic mass that must also be accelerated to muzzle velocity, but does not contribute to the terminal ballistics of the flight projectile. For this reason, great emphasis is placed on selecting strong yet lightweight structural materials for the sabot, and configuring the sabot geometry to efficiently employ these parasitic materials at minimum weight penalty. Made of some lightweight material (usually high strength plastic in small caliber rifles, (see SLAP Saboted light armor penetrator), shotguns and muzzle loader ammunition; aluminium, steel, and carbon fiber reinforced plastic for modern anti-tank kinetic energy ammunition; and, in classic times, wood or papier-mâché – in muzzle loading cannons). The sabot usually consists of several longitudinal pieces held in place by the cartridge case, an obturator or driving band. When the projectile is fired, the sabot blocks the gas, provides significant structural support against launch acceleration, and carries the projectile down the barrel. When the sabot reaches the end of the barrel, the shock of hitting still air pulls the parts of the sabot away from the projectile, allowing the projectile to continue in flight. Modern sabots are made from high strength aluminum and graphite fiber reinforced epoxy. They are used primarily to fire long rods of very dense materials, such as tungsten heavy alloy and depleted uranium. (see for example the M829 series of anti-tank projectiles). Sabot-type shotgun slugs were marketed in the United States from about 1985, and became legal for hunting in most U.S. states. When used with a rifled slug barrel, they are very much more accurate than normal shotgun slugs. Types Cup sabot A cup sabot supports the base and rear end of a projectile, and the cup material alone can provide both structural support and barrel obturation. When the sabot and projectile exit the muzzle of the gun, air pressure alone on the sabot forces the sabot to release the projectile. Cup sabots are found typically in small arms ammunition, smooth-bore shotgun and smooth-bore muzzleloader projectiles. 
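The pressure-times-area reasoning and the parasitic sabot mass discussed above can be illustrated with a deliberately crude back-of-the-envelope calculation. The Python sketch below assumes a constant bore pressure along the barrel and uses invented round numbers for no particular gun or cartridge; real interior ballistics is far more involved, but the comparison shows why the lighter saboted package leaves the muzzle faster even after carrying the sabot's dead weight.

import math

pressure = 350e6      # bore pressure in Pa, assumed constant (illustrative value)
bore_d   = 0.120      # 120 mm bore diameter, in metres (illustrative value)
travel   = 6.0        # in-bore travel length, in metres (illustrative value)

force = pressure * math.pi * (bore_d / 2) ** 2          # same driving force in both cases

def muzzle_velocity(mass_kg):
    # constant force over the travel: F * L = 1/2 * m * v^2
    return math.sqrt(2 * force * travel / mass_kg)

full_bore_projectile = 11.0                              # kg, invented
penetrator, sabot = 4.0, 3.0                             # kg, invented

print(f"full-bore projectile : {muzzle_velocity(full_bore_projectile):4.0f} m/s")
print(f"penetrator plus sabot: {muzzle_velocity(penetrator + sabot):4.0f} m/s")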
Expanding cup sabot Used typically in rifled small arms (SLAP, shotguns, and muzzleloaders), an expanding cup sabot has a one piece sabot surrounding the base and sides of a projectile, providing both structural support and obturation. Upon firing, when the sabot and projectile leave the muzzle of the gun, centrifugal force from the rotation of the projectile, due to barrel rifling, opens up the segments surrounding the projectile, rapidly presenting more surface area to air pressure, quickly releasing it. Although the use of cup sabots of various complexity are popular with rifle ammunition hand-loaders, in order to achieve significantly higher muzzle velocity with a lower drag, smaller diameter and lighter bullet, successful saboted projectile design has to include the resulting bullet stability characteristics. For example, simply inserting a commercially available 5.56mm (.224) bullet into a sabot that will fire it from a commercially available 7.62mm (.300) barrel may result in that 5.56mm bullet failing to achieve sufficient gyroscopic stability to fly accurately without tumbling. To achieve gyroscopic stability of longer bullets in smaller diameter requires faster rifling. Therefore, if a bullet requires at least 1 turn in 7 inch twist, (1:7 rifling), in 5.56mm, it will also require at least 1:7 rifling when saboted in 7.62mm. However, larger caliber commercial rifles generally don't need such fast twist rates; 1:10 being a readily available standard in 7.62mm. As a result, the twist rate of the larger barrel will dictate which smaller bullets can be fired with sufficient stability out of a sabot. In this example, using 1:10 rifling in 7.62mm restricts saboting to 5.56mm bullets that require 1:10 twist or slower, and this requirement will tend to restrict saboting to the shorter (and lighter) 5.56mm bullets. Base sabot A base sabot has a one piece base which supports the bottom of the projectile, and separate pieces that surround the sides of the projectile and center it. The base sabot can have better and cleaner sabot/projectile separation than cup or expanding cup sabots for small arms ammunition, but may be more expensive to manufacture and assemble. In larger caliber APDS ammunition, based on the cup, expanding cup, and base sabot concepts, significantly more complex assemblies are required. Spindle sabot A spindle sabot uses a set of at least two and upwards of four matched longitudinal rings or "petals" which have a center section in contact with a long arrow-type projectile; a front section or "bore-rider" which centers that projectile in the barrel and provides an air scoop to assist in sabot separation upon muzzle exit, and a rear section which both centers the projectile, provides a structural "bulkhead", and seals propellant gases with an obturator ring around the outside diameter. Spindle sabots are the standard type used in modern large caliber armor-piercing ammunition. Three-petal spindle-type sabots are shown in the illustrations at the right of this paragraph. The "double-ramp" and "saddle-back" sabots used on modern APFSDS ammunition are a form of spindle sabot. Shotgun slugs often use a cast plastic sabot similar to the spindle sabot. Shotgun sabots in general extend the full length of the projectile and are designed to be used more effectively in rifled barrels. 
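The point that the stability requirement travels with the bullet, not the bore, can be sanity-checked with a rule of thumb such as Greenhill's formula (twist in inches per turn of roughly 150 × d² / L for a lead-core bullet). Greenhill's formula is not mentioned in this article and the bullet dimensions below are approximate, so treat this Python sketch as illustrative only.

def greenhill_twist(diameter_in, length_in, constant=150.0):
    """Greenhill rule of thumb: required twist (inches per turn) ~= C * d^2 / L."""
    return constant * diameter_in ** 2 / length_in

# Roughly the dimensions of a long, heavy .224" (5.56 mm) bullet -- approximate values.
twist = greenhill_twist(0.224, 0.98)
print(f"required twist: about 1 turn in {twist:.1f} inches")   # ~1:7.7

# Saboting this bullet in a .30 cal bore does not relax the requirement:
# a common 1:10" .30 cal barrel would spin it too slowly for gyroscopic stability.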
Ring sabot A ring sabot uses the rear fins on a long rod projectile to help center the projectile and ride the bore, and the multi-petal sabot forms only a single bulkhead ring around the projectile near the front, with an obturator sealing gases from escaping past it, and centering the front of the projectile. The former Soviet Union favored armor-piercing sabot projectiles using ring sabots, which performed acceptably for that era, manufactured from high strength steel for both the long rod penetrator and ring sabot. The strength of the steel ring was sufficient to withstand launch accelerations without the need for sabot ramps to also support the steel flight projectile. See also Shell (projectile) Gas check References Notes External links Shotgun sabot separation photography Detailed cutaways of the internal structural complexity of advanced APFSDS saboted long rod penetrator projectiles Types of Ammunition – Norfolk Tank Museum Artillery ammunition Anti-tank rounds Ballistics Firearm terminology Shotgun cartridges
Sabot (firearms)
[ "Physics" ]
2,329
[ "Applied and interdisciplinary physics", "Ballistics" ]
585,887
https://en.wikipedia.org/wiki/Floor%20plan
In architecture and building engineering, a floor plan is a technical drawing to scale, showing a view from above, of the relationships between rooms, spaces, traffic patterns, and other physical features at one level of a structure. Dimensions are usually drawn between the walls to specify room sizes and wall lengths. Floor plans may also include details of fixtures like sinks, water heaters, furnaces, etc. Floor plans may include notes for construction to specify finishes, construction methods, or symbols for electrical items. It is also called a plan which is a measured plane typically projected at the floor height of , as opposed to an elevation which is a measured plane projected from the side of a building, along its height, or a section or cross section where a building is cut along an axis to reveal the interior structure. Overview Similar to a map, the orientation of the view is downward from above, but unlike a conventional map, a plan is drawn at a particular vertical position (commonly at about four feet above the floor). Objects below this level are seen, objects at this level are shown 'cut' in plan-section, and objects above this vertical position within the structure are omitted or shown dashed. Plan view or planform is defined as a vertical orthographic projection of an object on a horizontal plane, like a map. The term may be used in general to describe any drawing showing the physical layout of objects. For example, it may denote the arrangement of the displayed objects at an exhibition, or the arrangement of exhibitor booths at a convention. Drawings are now reproduced using plotters and large format xerographic copiers. A reflected ceiling plan (RCP) shows a view of the room as if looking from above, through the ceiling, at a mirror installed one foot below the ceiling level, which shows the reflected image of the ceiling above. This convention maintains the same orientation of the floor and ceilings plans – looking down from above. RCPs are used by designers and architects to demonstrate lighting, visible mechanical features, and ceiling forms as part of the documents provided for construction. The art of constructing ground plans (ichnography; Gr. τὸ ἴχνος, íchnos, "track, trace" and γράφειν, gráphein, "to write"; pronounced ik-nog-rəfi) was first described by Vitruvius (i.2) and included the geometrical projection or horizontal section representing the plan of any building, taken at such a level as to show the outer walls, with the doorways, windows, fireplaces, etc., and the correct thickness of the walls; the position of piers, columns or pilasters, courtyards and other features which constitute the design, as to scale. Floor plan topics Building blocks A floor plan is not a top view or bird's-eye view; it is a measured drawing to scale of the layout of a floor in a building. A top view or bird's-eye view does not show an orthogonally projected plane cut at the typical four foot height above the floor level. A floor plan may show any of the following elements: interior walls and hallways restrooms windows and doors appliances (stoves, refrigerators, water heater, etc.) interior features (fireplaces, saunas, whirlpools, etc.) the use of all rooms Plan view A plan view is an orthographic projection of a three-dimensional object from the position of a horizontal plane through the object. In other words, a plan is a section viewed from the top. In such views, the portion of the object above the plane (section) is omitted to reveal what lies beyond. 
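The cut-plane convention described above can be made concrete with a small sketch. The Python below uses an assumed cut height of 1.2 m (roughly the four feet mentioned above) and invented object dimensions; it simply classifies items the way a drafter would when deciding how they appear in the plan.

CUT_HEIGHT_M = 1.2   # assumed cut height above the floor

# (name, bottom height, top height) in metres -- invented example data
objects = [
    ("coffee table",   0.00, 0.45),
    ("wall with door", 0.00, 2.40),
    ("pendant light",  2.10, 2.40),
]

def plan_treatment(z_bottom, z_top, cut=CUT_HEIGHT_M):
    if z_top < cut:
        return "drawn as seen from above"
    if z_bottom <= cut <= z_top:
        return "shown cut in plan-section"
    return "omitted or shown dashed"

for name, zb, zt in objects:
    print(f"{name}: {plan_treatment(zb, zt)}")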
In the case of a floor plan, the roof and upper portion of the walls may typically be omitted. Whenever an interior design project is being approached, a floor plan is the typical starting point for any further design considerations and decisions. Roof plans are orthographic projections, but they are not sections as their viewing plane is outside of the object. A plan is a common method of depicting the internal arrangement of a three-dimensional object in two dimensions. It is often used in technical drawing and is traditionally crosshatched. The style of crosshatching indicates the type of material the section passes through. 3D floor plans A 3D floor plan can be defined as a virtual model of a building floor plan. It is often used to better convey architectural plans to individuals not familiar with floor plans. Despite the purpose of floor plans originally being to depict 3D layouts in a 2D manner, technological expansion has made rendering 3D models much more cost effective. 3D plans show a better depth of image and are often complemented by 3D furniture in the room. This allows a greater appreciation of scale than with traditional 2D floor plans. See also 3D printing 3D scanner Architect's scale Architectural drawing List of floor plan software House House plan Indoor positioning system (IPS) Room number magicplan References External links Renaissance Visual Thinking: Architectural Representation as Medium to Contemplate ‘True Form’, Federica Goffi-Hamilton Technical drawing Architectural terminology
Floor plan
[ "Engineering" ]
1,033
[ "Design engineering", "Technical drawing", "Civil engineering", "Architectural terminology", "Architecture" ]
586,066
https://en.wikipedia.org/wiki/Ballistic%20gelatin
Ballistic gelatin is a testing medium designed to simulate the effects of bullet wounds in animal muscle tissue. It was developed and improved by Martin Fackler and others in the field of wound ballistics. It is calibrated to match pig muscle, which is ballistically similar to human muscle tissue. Ballistic gelatin is traditionally a solution of gelatin powder in water. Ballistic gelatin closely simulates the density and viscosity of human and animal muscle tissue, and is used as a standardized medium for testing the terminal performance of firearms ammunition. While ballistic gelatin does not model the tensile strength of muscles or the structures of the body such as skin and bones, it works fairly well as an approximation of tissue and provides similar performance for most ballistics testing; however, its usefulness as a model for very low velocity projectiles can be limited. Ballistic gelatin is used rather than actual muscle tissue due to the ability to carefully control the properties of the gelatin, which allows consistent and reliable comparison of terminal ballistics. History The FBI introduced its own testing protocol in December 1988 as a response to the 1986 Miami shootout, and it quickly became popular among US law enforcement agencies. Preparation Gelatin formula The most commonly used formula is an FBI-style 10% ballistic gelatin, which is prepared by dissolving one part 250 bloom type A gelatin into nine parts of warm water (by mass), mixing the water while pouring in the powdered gelatin. It is chilled to . The older NATO formula specifies a 20% solution, chilled to , but that solution costs more to prepare, as it uses twice the amount of the gelatin. In either case, a 1988 research paper by Martin Fackler recommends that the water should not be heated above , as this can cause a significant change in the ballistic performance. However, this result does not seem to be reproduced in a later study. Calibration To ensure accurate results, immediately prior to use, the gelatin block is calibrated by firing a standard .177 caliber () steel BB from an air gun over a gun chronograph into the gelatin, and the depth of penetration measured. While the exact calibration methods vary slightly, the calibration method used by the United States Immigration and Naturalization Service's National Firearms Unit is fairly typical. It requires a velocity of , and a BB penetration between . In his book Bullet Penetration, ballistics expert Duncan MacPherson describes a method that can be used to compensate for ballistic gelatin that gives a BB penetration that is off by several centimeters (up to two inches) in either direction. MacPherson's Figure 5-2, Velocity Variation Correction to Measured BB Penetration Depth, can be used to make corrections to BB penetration depth when measured BB velocity is within ±10 m/s of 180 m/s. This method can also be used to compensate for error within the allowed tolerance, and normalize results of different tests, as it is standard practice to record the exact depth of the calibration BB's penetration. Synthetic alternative Ballistic gels made from natural gelatin are typically clear yellow-brown in color, and are generally not re-usable. The more expensive synthetic substitutes are engineered to simulate the ballistic properties of natural gelatin, whilst initially being colorless and clear. Some synthetic gels are also re-usable, since they can be melted and reformed without affecting the ballistic properties of the gels. 
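Because the FBI-style recipe above is a simple one-to-nine mass ratio, the amounts needed for a test block can be estimated directly. The Python sketch below assumes the finished gel has roughly the density of water (an approximation, not a figure from this article) and uses an invented block size.

GELATIN_FRACTION = 0.10   # FBI-style 10% ordnance gelatin: 1 part gelatin, 9 parts water, by mass

def gelatin_batch(block_litres, density_kg_per_l=1.0):
    """Approximate gelatin and water masses (kg) for a block of the given volume."""
    total = block_litres * density_kg_per_l      # assumes gel density ~ water
    return total * GELATIN_FRACTION, total * (1 - GELATIN_FRACTION)

# A roughly 15 cm x 15 cm x 40 cm block (about 9 litres) -- illustrative dimensions.
gelatin_kg, water_kg = gelatin_batch(9.0)
print(f"gelatin: {gelatin_kg:.2f} kg, warm water: {water_kg:.2f} kg")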
Synthetic formula Synthetic ballistic gels are typically made of an oil and a polymer instead of gelatin and water, most commonly used is white mineral oil and a styrene polymer blend, polymers used include: styrene-butadiene-styrene polymers; styrene isoprene-styrene polymers; styrene-ethylene-butylene styrene polymers; styrene-ethylenepropylene polymers; styrene-ethylene butylene polymers; styrene-butadiene polymers; and styrene-isoprene polymers. The gel usually includes about 12% to 22% by weight of the polymer, but it depends on what polymers is used. Heating temperatures vary depending on what polymer and oil is used, but should never go over . The polymer and oil solution is extremely sensitive to moisture - when moisture comes in contact with the solution, bubbles form when heat is applied. Polymer should not be added to heated up oil like gelatin is to water; the polymer and oil should be mixed when no heat is present. Dwell times are recommended after mixing the polymer and oil to prevent bubbles forming when heat is applied. These "dwell" times can go up to 12 hours if the air is above about 50 °F, or at the very least 24 hours if the air is at cooler temperatures. These discoveries were made by Darryl D. Amick. Uses Since ballistic gelatin mimics the properties of muscle tissue reasonably well, it is the preferred medium (over real porcine cadavers) for comparing the terminal performance of different expanding ammunition, such as hollow-point and soft-point bullets. These bullets use the hydraulic pressure of the tissue or gelatin to expand in diameter, limiting penetration and increasing the tissue damage along their path. While the Hague Convention restricts the use of such ammunition in warfare, it is commonly used by police and civilians in defensive weapons, as well as police sniper and hostage-rescue teams, where rapid disabling of the target and minimal risk of overpenetration are required to reduce collateral damage. Bullets intended for hunting are also commonly tested in ballistic gelatin. A bullet intended for use hunting small vermin, such as prairie dogs, for example, needs to expand very quickly to have an effect before it exits the target, and must perform at higher velocities due to the use of lighter bullets in the cartridges. The same fast-expanding bullet used for prairie dogs would be considered inhumane for use on medium game animals like whitetail deer, where deeper penetration is needed to reach vital organs and assure a quick kill. In television the MythBusters team sometimes used ballistics gel to aid in busting myths, but not necessarily involving bullets, including the exploding implants myth, the deadly card throw, and the ceiling fan decapitation. They sometimes placed real bones (from humans or pigs) or synthetic bones in the gel to simulate bone breaks as well. The US television program Forged in Fire is also known to use ballistics gelatin, often creating entire human torsos and heads complete with simulated bones, blood, organs and intestines that are cast inside the gel. Various bladed weapons are then tested on the gel torso in order to simulate and record the destructive effects the weapons would have on a real human body. See also Terminal ballistics References Further reading Putting Bullets to the Test, Officer.com Ballistics Gelatin
Ballistic gelatin
[ "Physics" ]
1,397
[ "Applied and interdisciplinary physics", "Ballistics" ]
586,091
https://en.wikipedia.org/wiki/Cannabinoid%20receptor
Cannabinoid receptors, located throughout the body, are part of the endocannabinoid system of vertebrates a class of cell membrane receptors in the G protein-coupled receptor superfamily. As is typical of G protein-coupled receptors, the cannabinoid receptors contain seven transmembrane spanning domains. Cannabinoid receptors are activated by three major groups of ligands: Endocannabinoids; Phytocannabinoids (plant-derived such as tetrahydrocannabinol (THC) produced by cannabis); Synthetic cannabinoids (such as HU-210). All endocannabinoids and phytocannabinoids are lipophilic. There are two known subtypes of cannabinoid receptors, termed CB1 and CB2. The CB1 receptor is expressed mainly in the brain (central nervous system or "CNS"), but also in the lungs, liver and kidneys. The CB2 receptor is expressed mainly in the immune system, in hematopoietic cells, and in parts of the brain. The protein sequences of CB1 and CB2 receptors are about 44% similar. When only the transmembrane regions of the receptors are considered, amino acid similarity between the two receptor subtypes is approximately 68%. In addition, minor variations in each receptor have been identified. Cannabinoids bind reversibly and stereo-selectively to the cannabinoid receptors. Subtype selective cannabinoids have been developed which theoretically may have advantages for treatment of certain diseases such as obesity. Enzymes involved in biosynthesis/inactivation of endocannabinoids and endocannabinoid signaling in general (involving targets other than CB1/2-type receptors) occur throughout the animal kingdom. Discovery The existence of cannabinoid receptors in the brain was discovered from in vitro studies in the 1980s, with the receptor designated as the cannabinoid receptor type 1 or CB1. The DNA sequence that encodes a G-protein-coupled cannabinoid receptor in the human brain was identified and cloned in 1990. These discoveries led to determination in 1993 of a second brain cannabinoid receptor named cannabinoid receptor type 2 or CB2. A neurotransmitter for a possible endocannabinoid system in the brain and peripheral nervous system, anandamide (from 'ananda', Sanskrit for 'bliss'), was first characterized in 1992, followed by discovery of other fatty acid neurotransmitters that behave as endogenous cannabinoids having a low-to-high range of efficacy for stimulating CB1 receptors in the brain and CB2 receptors in the periphery. Types CB1 Cannabinoid receptor type 1 (CB1) receptors are thought to be one of the most widely expressed Gαi protein-coupled receptors in the brain. One mechanism through which they function is endocannabinoid-mediated depolarization-induced suppression of inhibition, a very common form of retrograde signaling, in which the depolarization of a single neuron induces a reduction in GABA-mediated neurotransmission. Endocannabinoids released from the depolarized post-synaptic neuron bind to CB1 receptors in the pre-synaptic neuron and cause a reduction in GABA release due to limited presynaptic calcium ions entry. They are also found in other parts of the body. For instance, in the liver, activation of the CB1 receptor is known to increase de novo lipogenesis. CB2 CB2 receptors are expressed on T cells of the immune system, on macrophages and B cells, in hematopoietic cells, and in the brain and CNS (2019). They also have a function in keratinocytes. They are also expressed on peripheral nerve terminals. These receptors play a role in antinociception, or the relief of pain. 
In the brain, CB2 receptors are mainly expressed by microglial cells, where their role remains unclear. While the most likely cellular targets and executors of the CB2 receptor-mediated effects of endocannabinoids or synthetic agonists are the immune and immune-derived cells (e.g. leukocytes, various populations of T and B lymphocytes, monocytes/macrophages, dendritic cells, mast cells, microglia in the brain, Kupffer cells in the liver, astrocytes, etc.), the number of other potential cellular targets is expanding, now including endothelial and smooth muscle cells, fibroblasts of various origins, cardiomyocytes, and certain neuronal elements of the peripheral or central nervous systems.
Other
The existence of additional cannabinoid receptors has long been suspected, because compounds such as abnormal cannabidiol produce cannabinoid-like effects on blood pressure and inflammation yet activate neither CB1 nor CB2. Recent research strongly supports the hypothesis that the N-arachidonoyl glycine (NAGly) receptor GPR18 is the molecular identity of the abnormal cannabidiol receptor, and additionally suggests that NAGly, the endogenous lipid metabolite of anandamide (also known as arachidonoylethanolamide or AEA), initiates directed microglial migration in the CNS through activation of GPR18. Other molecular biology studies have suggested that the orphan receptor GPR55 should in fact be characterised as a cannabinoid receptor, on the basis of sequence homology at the binding site. Subsequent studies showed that GPR55 does indeed respond to cannabinoid ligands. This profile as a distinct non-CB1/CB2 receptor that responds to a variety of both endogenous and exogenous cannabinoid ligands has led some groups to suggest that GPR55 should be categorized as the CB3 receptor, and this re-classification may follow in time. However, this is complicated by the fact that another possible cannabinoid receptor has been discovered in the hippocampus, although its gene has not yet been cloned, suggesting that there may be at least two more cannabinoid receptors to be discovered, in addition to the two that are already known. GPR119 has been suggested as a fifth possible cannabinoid receptor, while the PPAR family of nuclear hormone receptors can also respond to certain types of cannabinoid.
Signaling
Cannabinoid receptors are activated by cannabinoids, generated naturally inside the body (endocannabinoids) or introduced into the body as cannabis or a related synthetic compound. Similar responses are produced when cannabinoids are introduced by other routes, only in a more concentrated form than occurs naturally. After the receptor is engaged, multiple intracellular signal transduction pathways are activated. At first, it was thought that cannabinoid receptors mainly inhibited the enzyme adenylate cyclase (and thereby the production of the second messenger molecule cyclic AMP) and positively influenced inwardly rectifying potassium channels (Kir or IRK). However, a much more complex picture has appeared in different cell types, implicating other potassium ion channels, calcium channels, protein kinase A and C, Raf-1, ERK, JNK, p38, c-fos, c-jun and many more. For example, in human primary leukocytes CB2 displays a complex signalling profile, activating adenylate cyclase via stimulatory Gαs alongside the classical Gαi signalling, and induces the ERK, p38 and pCREB pathways.
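The reversible, affinity-governed binding described above is commonly summarized with the Hill–Langmuir relation, in which the fraction of receptors occupied depends on the free ligand concentration relative to the dissociation constant Kd. The following minimal Python sketch is illustrative only; the Kd value is a hypothetical placeholder, not a measured cannabinoid affinity:

# Fractional receptor occupancy for a reversibly binding ligand
# (Hill-Langmuir relation with a Hill coefficient of 1).
def occupancy(ligand_nM: float, kd_nM: float) -> float:
    """Fraction of receptors bound at a given free ligand concentration."""
    return ligand_nM / (ligand_nM + kd_nM)

kd = 40.0  # hypothetical dissociation constant, in nM
for conc in (4.0, 40.0, 400.0):
    print(f"[L] = {conc:6.1f} nM -> occupancy = {occupancy(conc, kd):.2f}")

At a ligand concentration equal to Kd the receptor is half-occupied, which is why differences in binding affinity between the CB1 and CB2 subtypes translate directly into subtype selectivity at a given dose.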
However, a separation between the therapeutically undesirable psychotropic effects and the clinically desirable ones has not been reported for agonists that bind to cannabinoid receptors. THC, as well as the two major endogenous compounds identified so far that bind to the cannabinoid receptors, anandamide and 2-arachidonoylglycerol (2-AG), produce most of their effects by binding to both the CB1 and CB2 cannabinoid receptors. While the effects mediated by CB1, mostly in the central nervous system, have been thoroughly investigated, those mediated by CB2 are not equally well defined. Prenatal cannabis exposure (PCE) has been shown to perturb the fetal endogenous cannabinoid signaling system. This perturbation has not been shown to directly affect neurodevelopment or to cause lifelong cognitive, behavioral, or functional abnormalities, but it may predispose offspring to abnormalities in cognition and altered emotionality arising from post-natal factors. Additionally, PCE may alter the wiring of brain circuitry during fetal development and cause significant molecular modifications to neurodevelopmental programs that may lead to neurophysiological disorders and behavioral abnormalities.
Cannabinoid treatments
Synthetic tetrahydrocannabinol (THC) is prescribed under the INN dronabinol or the brand name Marinol to treat vomiting and to enhance appetite, mainly in people with AIDS, as well as for refractory nausea and vomiting in people undergoing chemotherapy. Use of synthetic THC has become more common as its clinical benefits have become better established. THC is also an active ingredient in nabiximols, a specific extract of Cannabis that was approved as a botanical drug in the United Kingdom in 2010 as a mouth spray for people with multiple sclerosis to alleviate neuropathic pain, spasticity, overactive bladder, and other symptoms.
Ligands
(A table of the binding affinity and selectivity of individual cannabinoid ligands appeared here in the source.)
See also
Cannabinoid receptor antagonist
Endocannabinoid enhancer
Endocannabinoid reuptake inhibitor
Cannabidiol
Effects of cannabis
References
External links
G protein-coupled receptors
Cannabinoid receptor
[ "Chemistry" ]
2,060
[ "G protein-coupled receptors", "Signal transduction" ]
586,191
https://en.wikipedia.org/wiki/ES%20EVM
The ES EVM ("Unified System of Electronic Computing Machines"), or YeS EVM, also known in English literature as the Unified System or Ryad ("Series"), is a series of mainframe computers generally compatible with IBM's System/360 and System/370 mainframes, built in the Comecon countries under the initiative of the Soviet Union between 1968 and 1998. More than 15,000 ES EVM mainframes were produced.
Development
In 1966, Soviet economists suggested creating a unified series of mutually compatible computers. Due to the success of the IBM System/360 in the United States, the economic planners decided to use the IBM design, although some prominent Soviet computer scientists had criticized the idea and suggested instead choosing one of the indigenous Soviet designs, such as BESM or Minsk. Work on the cloning began in 1968; production started in 1972. After 1968, other Comecon countries joined the project. With the exception of only a few hardware components, the ES EVM machines were recognized in Western countries as independently designed, based on legitimate Soviet patents. Unlike the hardware, which was largely original (mostly created by reverse engineering), much of the software was based on slightly modified and localized IBM code. In 1974–1976, IBM contacted the Soviet authorities and expressed interest in ES EVM development; however, after the Soviet Army entered Afghanistan in 1979, all contacts between IBM and the ES developers were broken off due to the U.S. embargo on technological cooperation with the USSR. Because of the CoCom restrictions, much of the software localization was done by disassembling the IBM software and applying minimal modifications. The most common operating system was OS ES, a modified version of OS/360; later versions of OS ES diverged considerably from the IBM operating systems, but they still included a great deal of original IBM code. There were even anecdotal rumors among Soviet programmers that this supposedly Soviet operating system contained a secret command that would output the American national anthem. Today some of the Russian institutions that worked on the ES EVM are cooperating with IBM to continue legacy support for both actual IBM mainframes and the ES EVM systems. ES EVM machines were developed in Moscow, at the Scientific Research Center for Electronic Computer Machinery (NITsEVT); in Yerevan, Armenia, at the Yerevan Computer Research and Development Institute; later in Minsk, at the Scientific Research Institute of Electronic Computer Machines (NIIEVM); and in Penza, at the Penza Scientific Research Institute of Computer Machinery. They were manufactured in Minsk, at the Minsk Production Group for Computing Machinery (MPOVT), and in Penza, at the Penza Electronic Computer Factory. Some models were also produced in other countries of the Eastern Bloc, such as Bulgaria, Hungary, Poland, Czechoslovakia, Romania, and East Germany; some peripheral devices were also produced in Cuba. The former German chancellor Angela Merkel used one of East Germany's ES EVM computers in 1986 for her PhD dissertation. ES EVM computers were assigned to four subseries or generations, known as Ryad 1, Ryad 2, Ryad 3 and Ryad 4; this nomenclature gave rise to the common name for the whole project.
Hardware models and technical details
The first subseries of the ES EVM, released in 1969–1978, included the models 1010, 1020, 1030, 1040, and 1050, which were analogous to the System/360 and operated at 10–450 kIPS, as well as the rarer and more advanced models 1022, 1032, 1033 and 1052, which were incompatible with the IBM versions. The electronics of the first models were based on TTL circuits; the later machines used an ECL design. The ES 1050 had up to 1 MB of RAM and 64-bit floating point registers. The fastest machine of the series, the ES 1052, developed in 1978, operated at 700 kIPS. The second subseries, released in 1977–1978, included the models 1015, 1025, 1035, 1045, 1055, and 1060, which were analogous to the System/370 and operated at 33 kIPS to 1.05 MIPS. The ES 1060 had up to 8 MB of RAM. The third subseries, released in 1984, was analogous to the System/370 with some original enhancements, and included the models 1016, 1026, 1036, 1046, and 1066. The ES 1066 had up to 16 MB of RAM and operated at 5.5 MIPS. The fourth subseries had no direct IBM analogs and included the models 1130, 1181, and 1220. The last machine in the series, the ES 1220, released in 1995, supported a number of 64-bit CPU commands and 256 MB of RAM and operated at 7 MIPS, but was not successful; only 20 such machines were ever produced, and in 1998 production of ES mainframes stopped entirely.
See also
History of computer hardware in Eastern Bloc countries
SM EVM
ES PEVM
References
External links
Historical Overview of the ES Computers
Operating Systems of ES EVM
Pioneers of Soviet Computing
IBM System/360 mainframe line
Science and technology in Belarus
Soviet computer systems
Comecon
ES EVM
[ "Technology" ]
1,081
[ "Computer systems", "Soviet computer systems" ]
586,253
https://en.wikipedia.org/wiki/Combination%20square
A combination square is a multi-purpose measuring and marking tool used in metalworking, woodworking, and stonemasonry. It is composed of a rule and one or more interchangeable heads that can be attached to the rule. Other names for the tool include adjustable square, combo square, and sliding square. The most common head is the standard head, which is used as a square for marking and testing 90° and 45° angles. The other common types of head are the protractor head and the centre finder head.
Description
Rule
Combination square rules are made of steel and can be purchased with graduations in metric, imperial, or both metric and imperial. Both faces of the rule have markings, providing four different sets of markings. This allows different sides to have different graduations (e.g. 1 mm or 0.5 mm markings) or units (i.e. metric and imperial). The rule typically comes in lengths between 150 mm and 600 mm, or between 4 inches and 24 inches.
Heads
The heads, occasionally called anvils, are attached to the rule by sliding the rule into a slot in the side of the head. The head is then tightened in place via a lock bolt or lock nut that engages with a channel running the full length of the rule, allowing the head to be fixed at any point along the rule. The standard or square head has three adjacent flat faces: two of them meet square to one another, and the third is angled away at 45°. When attached, one face is parallel to the rule, one face is perpendicular, and one face is at 45°. The standard head usually incorporates a small spirit level and a small removable scriber. The protractor head has a flat reference edge attached to an adjustable 180° protractor (sometimes called a turret) with a graduated scale running in both directions, for reading either the angle or its complement. The protractor head sometimes includes a small spirit level. The centre finder head has two faces meeting at 90°; when attached, one edge of the rule bisects the two faces at 45°. The heads are manufactured from forged steel, cast iron, die-cast aluminium, die-cast zinc, or plastic. Aluminium and zinc heads are cheaper than steel and iron, but less durable and more prone to inaccuracy. Cast iron and steel heads are also notably heavier. The heads are usually painted except for the flat machined reference faces.
Uses
As well as being used as a regular standalone rule or straightedge, the rule is used in combination with the different heads.
Standard head
The standard head can be used as a:
Square, for marking and referencing 90° angles and checking if surfaces are flat and square to one another.
Mitre square, for marking and referencing 45° angles, such as in woodworking for mitre joints.
Spirit level, to check if a surface is level or similarly if a surface or edge is plumb (vertical).
Depth gauge or height gauge.
Form of marking gauge for marking lines parallel to an edge, by setting the head to a certain distance from the end of the rule.
Reference for directly transferring dimensions without needing to take a measurement, minimising measurement errors and inaccuracies.
Protractor head
The protractor head can be used for:
Measuring and checking angles between surfaces, edges, and markings.
Marking angles from an edge.
Directly transferring angles, like a bevel gauge (sliding T gauge), to minimise measurement errors and inaccuracies.
Measuring and marking angles relative to the horizontal by using the spirit level.
Centre finder head
The centre finder head can be used for:
Marking lines through the centre of circular or square objects, such as dowels. Marks made at two or more different angles intersect at the centre of the circle.
Marking lines perpendicular to a curved edge (normal lines).
Bisecting square corners to mark a 45° angle.
History
Though some earlier 19th-century tools were called combination squares, the modern combination square was invented in the late 1870s by the American inventor Laroy S. Starrett and patented in 1879. In 1880 he founded the L. S. Starrett Company in Athol, Massachusetts, United States. The tool was originally designed for machinists, but over time became commonly used in other trades, such as woodworking.
Notes
References
Dimensional instruments
Woodworking measuring instruments
Metalworking measuring instruments
Woodworking hand tools
American inventions
Squares (tool)
Combination square
[ "Physics", "Mathematics" ]
916
[ "Quantity", "Dimensional instruments", "Physical quantities", "Size" ]
586,307
https://en.wikipedia.org/wiki/Remote%20surgery
Remote surgery (also known as cybersurgery or telesurgery) is the ability for a doctor to perform surgery on a patient even though they are not physically in the same location. It is a form of telepresence. A robotic surgical system generally consists of one or more arms (controlled by the surgeon), a master controller (console), and a sensory system giving feedback to the user. Remote surgery combines elements of robotics and telecommunications, such as high-speed data connections, with elements of management information systems. While the field of robotic surgery is fairly well established, most of these robots are controlled by surgeons at the location of the surgery. Remote surgery is remote work for surgeons, where the physical distance between the surgeon and the patient is less relevant. It promises to make the expertise of specialized surgeons available to patients worldwide, without the need for patients to travel beyond their local hospital.
Surgical systems
Surgical robot systems have developed from the first functional telesurgery system, ZEUS, to the da Vinci Surgical System, which is currently the only commercially available surgical robotic system. In Israel, a company was established by Professor Moshe Schoham of the Faculty of Mechanical Engineering at the Technion. Used mainly for on-site surgery, these robots assist the surgeon visually, with better precision and less invasiveness for patients. The da Vinci Surgical System has also been combined to form a dual da Vinci system, which allows two surgeons to work together on a patient at the same time. The system gives the surgeons the ability to control different arms, switch command of arms at any point, and communicate through headsets during the operation.
Costs
Marketed for $975,000, the ZEUS robotic surgical system was less expensive than the da Vinci Surgical System, which cost $1 million. The cost of an operation performed through telesurgery is difficult to state precisely, but it must cover the surgical system and the surgeon, and contribute toward a year's worth of ATM network technology, which runs between $100,000 and $200,000.
The Lindbergh Operation
The first true and complete remote surgery was conducted on 7 September 2001 across the Atlantic Ocean, with a French surgeon (Dr. Jacques Marescaux) in New York City performing a cholecystectomy on a 68-year-old female patient 6,230 km away in Strasbourg, France. It was named Operation Lindbergh, after Charles Lindbergh's pioneering transatlantic flight from New York to Paris. France Telecom provided the redundant fiber optic ATM lines to minimize latency and optimize connectivity, and Computer Motion provided a modified Zeus robotic system. After clinical evaluation of the complete solution in July 2001, the human operation was successfully completed on 7 September 2001. The success and exposure of the procedure led the robotic team to use the same technology within Canada, this time using Bell Canada's public internet between Hamilton, Ontario and North Bay, Ontario (a distance of about 400 kilometers). While Operation Lindbergh used expensive ATM fiber optic communication to ensure the reliability and success of the first telesurgery, the follow-on procedures in Canada used the standard public internet, provisioned with quality of service (QoS) guarantees using MPLS. A series of complex laparoscopic procedures was performed in which the expert clinician supported a less experienced surgeon operating on his own patient.
This resulted in the patient receiving the best care possible while remaining in their hometown, the less experienced surgeon gaining valuable experience, and the expert surgeon providing their expertise without travel. The robotic team's goal was to go from Lindbergh's proof of concept to a real-life solution. This was achieved with over 20 complex laparoscopic operations between Hamilton and North Bay.
Applications
Since Operation Lindbergh, remote surgery has been conducted many times in numerous locations. To date, Dr. Anvari, a laparoscopic surgeon in Hamilton, Canada, has conducted numerous remote surgeries on patients in North Bay, a city 400 kilometres from Hamilton. Even though he uses a VPN over a non-dedicated fiberoptic connection that shares bandwidth with regular telecommunications data, Dr. Anvari has not had any connection problems during his procedures. Rapid development of technology has allowed remote surgery rooms to become highly specialized. At the Advanced Surgical Technology Center at Mt. Sinai Hospital in Toronto, Canada, the surgical room responds to the surgeon's voice commands in order to control a variety of equipment at the surgical site, including the lighting in the operating room, the position of the operating table, and the surgical tools themselves. With continuing advances in communication technologies, the availability of greater bandwidth, and more powerful computers, the ease and cost-effectiveness of deploying remote surgery units is likely to increase rapidly. The possibility of being able to project the knowledge and the physical skill of a surgeon over long distances has many attractions, and there is considerable research underway on the subject. The armed forces have an obvious interest, since the combination of telepresence, teleoperation, and telerobotics can potentially save the lives of battle casualties by providing them with prompt attention in mobile operating theatres. Another potential advantage of having robots perform surgeries is accuracy. A study conducted at Guy's Hospital in London, England compared the success of kidney surgeries on 304 dummy patients conducted traditionally as well as remotely, and found that those conducted using robots were more successful in accurately targeting kidney stones. In 2015, another test examined the lag time involved in robotic surgery. A Florida hospital successfully tested the lag time created by the Internet for a simulated robotic surgery in Ft. Worth, Texas, more than 1,200 miles away from the surgeon at the virtual controls. The team found that the lag time in robotic surgeries was insignificant. Roger Smith, CTO at the Florida Hospital Nicholson Center, said the team had concluded that telesurgery is possible and generally safe across large areas of the United States.
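A rough sense of why such distances remain workable comes from the physics of the link itself: light in optical fiber travels at roughly two-thirds the speed of light in vacuum, about 200,000 km/s, or around 5 microseconds per kilometre. The Python sketch below estimates the pure propagation delay for the two distances mentioned in this article; it is illustrative only, since real end-to-end lag also includes routing, coding, and robot-control latency and is therefore higher than these figures:

# Approximate one-way propagation delay over optical fiber.
SPEED_IN_FIBER_KM_PER_S = 200_000  # ~2/3 of c; a rounded assumption

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation-only delay in milliseconds for a given fiber distance."""
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000.0

for route, km in [("New York to Strasbourg", 6230),
                  ("Hamilton to North Bay", 400)]:
    d = one_way_delay_ms(km)
    print(f"{route}: ~{d:.1f} ms one way, ~{2 * d:.1f} ms round trip")

Even in the transatlantic case the propagation component of the round trip is only a few tens of milliseconds, which helps explain why the studies above found the added lag tolerable.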
Unassisted robotic surgery
As the techniques of expert surgeons are studied and stored in special computer systems, robots might one day be able to perform surgeries with little or no human input. Carlo Pappone, an Italian surgeon, has developed a software program that uses data collected from several surgeons and thousands of operations to perform the surgery without human intervention. This could one day make expensive, complicated surgeries much more widely available, even to patients in regions which have traditionally lacked proper medical facilities.
Force-feedback and time delay
The ability to carry out delicate manipulations relies greatly upon feedback. For example, it is easy to learn how much pressure is required to handle an egg. In robotic surgery, surgeons need to be able to perceive the amount of force being applied without directly touching the surgical tools. Systems known as force feedback, or haptic technology, have been developed to simulate this. Haptics is the science of touch; haptic feedback provides a responsive force in opposition to the touch of the hand. Haptic technology in telesurgery, by creating a virtual image of the patient or incision, would allow a surgeon to feel what they are working on as well as see it. This technology is designed to give a surgeon the ability to feel tendons and muscles as if it were actually the patient's body. However, these systems are very sensitive to time delays, such as those present in the networks used in remote surgery.
Depth perception
Being able to gauge the depth of an incision is crucial. Humans' binocular vision makes this easy in a three-dimensional environment. However, it can be much more difficult when the view is presented on a flat computer screen.
Possible uses
One possible use of remote surgery is the Trauma-Pod project conceived by the US military under the Defense Advanced Research Projects Agency (DARPA). This system is intended to aid wounded soldiers in the battlefield by making use of the skills of remotely located medical personnel. Another future possibility could be the use of remote surgery during long space exploration missions.
Limitations
For now, remote surgery is not a widespread technology, in part because it lacks government sponsorship. Before its acceptance on a broader scale, many issues will need to be resolved, such as establishing secure, very fast connections between the two sites, establishing clinical protocols, training, and global compatibility of equipment. Another technological limitation is the risk of interference with the communications (hacking). Also, there is still the need for an anesthesiologist and a backup surgeon to be present in case of a disruption of communications or a malfunction in the robot. Nevertheless, Operation Lindbergh proved that the technology exists today to enable delivery of expert care to remote areas of the globe.
See also
Waldo (short story) by Robert A. Heinlein
References
External links
BBC News SCI/TECH -- First transatlantic surgery
Surgical procedures and techniques
French inventions
Computer-assisted surgery
Telemedicine
Videotelephony
Telepresence
Remote surgery
[ "Biology" ]
1,813
[ "Medical robotics", "Medical technology" ]
586,316
https://en.wikipedia.org/wiki/List%20of%20Soviet%20computer%20systems
This is the list of Soviet computer systems. The Russian abbreviation EVM (ЭВМ), present in some of the names below, means "electronic computing machine".
List of hardware
Ministry of Radio Technology
Computer systems from the Ministry of Radio Technology:
Agat (Агат) — Apple II clone
ES EVM (ЕС ЭВМ) — IBM mainframe clone
ES PEVM (ЕС ПЭВМ) — IBM PC compatible
M series — series of mainframes and mini-computers
Minsk (Минск)
Poisk (Поиск) — IBM PC-XT clone
Setun (Сетунь) — unique balanced ternary computer
Strela (Стрела)
Ural (Урал) — mainframe series
Vector-06C (Вектор-06Ц)
Ministry of Instrument Making
Computer systems from the Ministry of Instrument Making:
Aragats (Арагац)
Iskra (Искра) — common name for many computers with different architectures
Iskra-1030 — Intel 8086 XT clone
KVM-1 (КВМ-1)
SM EVM (СМ ЭВМ) — most models were PDP-11 clones, while some others were HP 2100, VAX or Intel compatible
Ministry of the Electronics Industry
Computer systems from the Ministry of the Electronics Industry:
Elektronika (Электроника) family
DVK family (ДВК) — PDP-11 clones
Elektronika BK-0010 (БК-0010, БК-0011) — LSI-11 clone home computer
UKNC (УКНЦ) — educational, PDP-11-like
Elektronika 60, Elektronika 100
Elektronika 85 — clone of the DEC Professional 350 (F11)
Elektronika 85.1 — clone of the DEC Professional 380 (J11)
Elektronika D3-28
Elektronika SS BIS (Электроника СС БИС) — Cray clone
Soviet Academy of Sciences
BESM (БЭСМ) — series of mainframes
Besta (Беста) — Unix box, Motorola 68020-based, Sun-3 clone
Elbrus (Эльбрус) — high-end mainframe series
Kronos (Кронос)
MESM (МЭСМ) — first Soviet computer (1950)
M-1 — one of the earliest stored program computers (1950–1951)
ZX Spectrum clones
ATM Turbo
Dubna 48K — running at half the speed of the original
Hobbit
Pentagon
Radon 'Z'
Scorpion
Other
5E** (5Э**) series — military computers
5E51 (5Э51)
5E53 (5Э53)
5E76 (5Э76) — IBM/360 clone, military version
5E92 (5Э92)
5E92b (5Э92б)
A series — ES EVM-compatible military computers
Argon — a series of military real-time computers
AS-6 (АС-6) — multiprocessor computing complex; the name is a Russian abbreviation for "Connection Equipment – 6"
Dnepr (Днепр)
GVS-100 (ГВС-100, Гибридная вычислительная система) — hybrid computer system
Irisha (Ириша)
Juku (Юку) — Estonian school computer
Kiev (Киев)
Korvet (Корвет)
Krista (Криста)
Micro-80 (Микро-80) — experimental PC, based on an 8080-compatible processor
Microsha (Микроша) — modification of the Radio-86RK
MIR (МИР-1, МИР-2)
Nairi (Наири)
Orion-128 (Орион-128)
Promin (Проминь)
PS-2000, PS-3000 — multiprocessor supercomputers in the 1980s
Razdan (Раздан)
Radon — real-time computer, designed for anti-aircraft defense
Radio-86RK — simplified and modified version of the Micro-80
Sneg (Снег)
Specialist (Специалист)
SVS
TsUM-1 (ЦУМ-1)
TIA-MC-1 — an arcade system
UM (УМ)
UT-88
Vesna and Sneg — early mainframes
List of operating systems
For Kronos
Kronos
For BESM
D-68 (Д-68, Диспетчер-68, Dispatcher-68)
DISPAK ("Диспетчер Пакетов", Dispatcher of the Packets)
DUBNA ("ДУБНА")
For ES EVM
DOS/ES ("Disk Operating System for ES EVM")
OS/ES ("Operating System for ES EVM")
For SM EVM
RAFOS (РАФОС), FOBOS (ФОБОС) and FODOS (ФОДОС) — RT-11 clones
OSRV (ОСРВ) — RSX-11M clone, one of the most popular Soviet multi-user systems
DEMOS — BSD-based Unix-like; later ported to x86 and some other architectures
INMOS (ИНМОС, Инструментальная мобильная операционная система, "instrumental mobile operating system")
For 8-bit microcomputers
MicroDOS (МикроДОС) — CP/M 2.2 clone
For ZX Spectrum clones
iS-DOS, TASiS
DNA-OS
For different platforms
MISS (Multipurpose Interactive timeSharing System) — for the ES EVM ES1010, ES EVM ES1045, D3-28M, PC-compatibles, etc.
MOS (operating system) — a Soviet clone of Unix in the 1980s
See also
History of computing in the Soviet Union
List of Soviet microprocessors
List of Russian IT developers
List of Russian microprocessors
Internet in Russia
References
External links
Russian Virtual Computer Museum
Museum of the USSR Computers history
Pioneers of Soviet Computing
Archive software and documentation for Soviet computers UK-NC, DVK and BK0010
Computing-related lists
List of Soviet computer systems
[ "Technology" ]
1,488
[ "Computing-related lists", "Computer systems", "Soviet computer systems" ]
586,357
https://en.wikipedia.org/wiki/Artificial%20general%20intelligence
Artificial general intelligence (AGI) is a type of artificial intelligence (AI) that matches or surpasses human cognitive capabilities across a wide range of cognitive tasks. This contrasts with narrow AI, which is limited to specific tasks. Artificial superintelligence (ASI), on the other hand, refers to AGI that greatly exceeds human cognitive capabilities. AGI is considered one of the definitions of strong AI. Creating AGI is a primary goal of AI research and of companies such as OpenAI and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries. The timeline for achieving AGI remains a subject of ongoing debate among researchers and experts. As of 2023, some argue that it may be possible in years or decades; others maintain it might take a century or longer; a minority believe it may never be achieved; and another minority claims that it is already here. Notable AI researcher Geoffrey Hinton has expressed concerns about the rapid progress towards AGI, suggesting it could be achieved sooner than many expect. There is debate on the exact definition of AGI and regarding whether modern large language models (LLMs) such as GPT-4 are early forms of AGI. AGI is a common topic in science fiction and futures studies. Contention exists over whether AGI represents an existential risk. Many experts on AI have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others find the development of AGI to be too remote to present such a risk.
Terminology
AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action. Some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. In contrast, weak AI (or narrow AI) is able to solve one specific problem but lacks general cognitive abilities. Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans. Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence (ASI) is a hypothetical type of AGI that is much more generally intelligent than humans, while the notion of transformative AI relates to AI having a large impact on society, for example, similar to the agricultural or industrial revolution. A framework for classifying AGI in levels was proposed in 2023 by Google DeepMind researchers. They define five levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI (i.e. an artificial superintelligence) is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI.
Characteristics
Various popular definitions of intelligence have been proposed. One of the leading proposals is the Turing test. However, there are other well-known definitions, and some researchers disagree with the more popular approaches.
Intelligence traits
Researchers generally hold that intelligence is required to do all of the following:
reason, use strategy, solve puzzles, and make judgments under uncertainty
represent knowledge, including common sense knowledge
plan
learn
communicate in natural language
if necessary, integrate these skills in the completion of any given goal
Many interdisciplinary approaches (e.g. cognitive science, computational intelligence, and decision making) consider additional traits such as imagination (the ability to form novel mental images and concepts) and autonomy. Computer-based systems that exhibit many of these capabilities exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent). There is debate about whether modern AI systems possess them to an adequate degree.
Physical traits
Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:
the ability to sense (e.g. see, hear, etc.), and
the ability to act (e.g. move and manipulate objects, change location to explore, etc.)
This includes the ability to detect and respond to hazards. Although the abilities to sense and to act can be desirable for some intelligent systems, these physical capabilities are not strictly required for an entity to qualify as AGI, particularly under the thesis that large language models (LLMs) may already be or become AGI. Even from a less optimistic perspective on LLMs, there is no firm requirement for an AGI to have a human-like form; being a silicon-based computational system is sufficient, provided it can process input (language) from the external world in place of human senses. This interpretation aligns with the understanding that AGI has never been defined as requiring a particular physical embodiment, and thus does not demand a capacity for locomotion or traditional "eyes and ears".
Tests for human-level AGI
Several tests meant to confirm human-level AGI have been considered, including:
The Turing Test (Turing)
Proposed by Alan Turing in his 1950 paper "Computing Machinery and Intelligence", this test involves a human judge engaging in natural language conversations with both a human and a machine designed to generate human-like responses. The machine passes the test if it can convince the judge it is human a significant fraction of the time. Turing proposed this as a practical measure of machine intelligence, focusing on the ability to produce human-like responses rather than on the internal workings of the machine. In 2014, a chatbot named Eugene Goostman, designed to imitate a 13-year-old Ukrainian boy, reportedly passed a Turing Test event by convincing 33% of judges that it was human. However, this claim was met with significant skepticism from the AI research community, which questioned the test's implementation and its relevance to AGI. More recently, a 2024 study suggested that GPT-4 was identified as human 54% of the time in a randomized, controlled version of the Turing Test, surpassing older chatbots like ELIZA while still falling behind actual humans (67%).
The Robot College Student Test (Goertzel)
A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree. LLMs can now pass university degree-level exams without even attending the classes.
The Employment Test (Nilsson)
A machine performs an economically important job at least as well as humans in the same job. AIs are now replacing humans in many roles as varied as fast food and marketing.
The Ikea test (Marcus)
Also known as the Flat Pack Furniture Test. An AI views the parts and instructions of an Ikea flat-pack product, then controls a robot to assemble the furniture correctly.
The Coffee Test (Wozniak)
A machine is required to enter an average American home and figure out how to make coffee: find the coffee machine, find the coffee, add water, find a mug, and brew the coffee by pushing the proper buttons. This has not yet been completed.
The Modern Turing Test (Suleyman)
An AI model is given $100,000 and has to obtain $1 million.
AI-complete problems
A problem is informally called "AI-complete" or "AI-hard" if it is believed that solving it would require implementing AGI, because the solution is beyond the capabilities of a purpose-specific algorithm. Many problems have been conjectured to require general intelligence to solve as well as humans can. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author's argument (reason), understand the context (knowledge), and faithfully reproduce the author's original intent (social intelligence). All of these problems need to be solved simultaneously in order to reach human-level machine performance. However, many of these tasks can now be performed by modern large language models. According to Stanford University's 2024 AI Index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.
History
Classical AI
Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved". Several classical AI projects, such as Doug Lenat's Cyc project (begun in 1984) and Allen Newell's Soar project, were directed at AGI. However, in the early 1970s it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". In the early 1980s, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI were shown to have been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".
Narrow AI research
In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as speech recognition and recommendation algorithms. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. Development in this field was for a time considered an emerging trend, with a mature stage expected to be reached in more than 10 years. At the turn of the century, many mainstream AI researchers hoped that strong AI could be developed by combining programs that solve various sub-problems. Hans Moravec wrote in 1988: I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts. However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the symbol grounding hypothesis by stating: The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).
Modern artificial general intelligence research
The term "artificial general intelligence" was used as early as 1997 by Mark Gubrud in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by Marcus Hutter in 2000. Named AIXI, the proposed AGI agent maximises "the ability to satisfy goals in a wide range of environments" (see the formal sketch at the end of this section). This type of AGI, characterized by the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behaviour, was also called universal artificial intelligence. The term AGI was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by Xiamen University's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course on AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. Currently, a small number of computer scientists are active in AGI research, and many contribute to a series of AGI conferences. However, increasingly more researchers are interested in open-ended learning, which is the idea of allowing AI to continuously learn and innovate like humans do.
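As a concrete illustration of Hutter's formalism, the AIXI agent's action choice can be written down compactly. The following is a minimal sketch in standard LaTeX notation, following Hutter's conventions: at cycle k the agent outputs action a_k and receives observation o_k and reward r_k, m is the planning horizon, U is a universal Turing machine, and \ell(q) is the length of program q:

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The inner sum weights every program q consistent with the interaction history by 2^{-\ell(q)}, a Solomonoff-style prior that favours simpler explanations; because this sum ranges over all programs, AIXI is incomputable and serves as a theoretical ideal rather than a practical algorithm.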
Feasibility
As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While the traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist. AI pioneer Herbert A. Simon speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight. A further challenge is the lack of clarity in defining what intelligence entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale, such that if model sizes increase sufficiently, intelligence will emerge? Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions? Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI. John McCarthy is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted. AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead. Further considerations of current AGI progress can be found above under Tests for human-level AGI. A report by Stuart Armstrong and Kaj Sotala of the Machine Intelligence Research Institute found that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about. In 2023, Microsoft researchers published a detailed evaluation of GPT-4. They concluded: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Another study in 2023 reported that GPT-4 outperforms 99% of humans on the Torrance tests of creative thinking. Blaise Agüera y Arcas and Peter Norvig wrote in 2023 that a significant level of general intelligence has already been achieved with frontier models. They wrote that reluctance to accept this view comes from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".
2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiple modalities such as text, audio, and images). In 2024, OpenAI released o1-preview, the first of a series of models that "spend more time thinking before they respond". According to Mira Murati, this ability to think before responding represents a new, additional paradigm. It improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data, and training compute power. An OpenAI employee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it's even more clear with O1." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks." He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI, traditionally understood as AI that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard. Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership with Microsoft, prompting speculation about the company's strategic intentions.
Timescales
Progress in artificial intelligence has historically gone through periods of rapid progress separated by periods when progress appeared to stop. Each hiatus ended with fundamental advances in hardware, software, or both that created space for further progress. For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of GPUs. In the introduction to his 2006 book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. At the time, the consensus in the AGI research community seemed to be that the timeline discussed by Ray Kurzweil in 2005 in The Singularity is Near (i.e. between 2015 and 2045) was plausible. Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years, for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed a neural network called AlexNet, which won the ImageNet competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers). AlexNet was regarded as the initial ground-breaker of the current deep learning wave. In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade. An average adult scores about 100.
Similar tests were carried out in 2014, with the IQ score reaching a maximum value of 27. In 2020, OpenAI developed GPT-3, a language model capable of performing many diverse tasks without specific training. According to Gary Grossman in a VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system. In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot, and provided a chatbot-developing platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API. In 2022, DeepMind developed Gato, a "general-purpose" system capable of performing more than 600 different tasks. In 2023, Microsoft Research published a study on an early version of OpenAI's GPT-4, contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems. In 2023, the AI researcher Geoffrey Hinton stated that he expected AGI to arrive sooner than he had previously believed. In May 2023, Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years. In March 2024, Nvidia's CEO, Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans. In June 2024, the AI researcher Leopold Aschenbrenner, a former OpenAI employee, estimated AGI by 2027 to be "strikingly plausible".
Whole brain emulation
While the development of transformer models like those behind ChatGPT is considered the most promising path to AGI, whole brain emulation can serve as an alternative approach. With whole brain simulation, a brain model is built by scanning and mapping a biological brain in detail, and then copying and simulating it on a computer system or another computational device. The simulation model must be sufficiently faithful to the original, so that it behaves in practically the same way as the original brain. Whole brain emulation is a type of brain simulation that is discussed in computational neuroscience and neuroinformatics, and for medical research purposes. It has been discussed in artificial intelligence research as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near predicts that a map of sufficient quality will become available on a similar timescale to the computing power required to emulate it.
Early estimates
For low-level brain simulation, a very powerful cluster of computers or GPUs would be required, given the enormous quantity of synapses within the human brain. Each of the 10¹¹ (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. The brain of a three-year-old child has about 10¹⁵ synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10¹⁴ to 5×10¹⁴ synapses (100 to 500 trillion).
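A quick back-of-the-envelope check of the figures just quoted (a minimal Python sketch; both inputs are the rounded estimates from the text above, not independent measurements):

# Rough consistency check of the synapse-count estimates above.
neurons = 1e11              # ~one hundred billion neurons
synapses_per_neuron = 7e3   # ~7,000 connections per neuron (rounded)

total_synapses = neurons * synapses_per_neuron
print(f"Estimated total synapses: {total_synapses:.1e}")  # 7.0e+14

The product, 7×10¹⁴, falls just above the quoted adult range of 10¹⁴ to 5×10¹⁴ and below the roughly 10¹⁵ figure for a young child, which is plausible given that the 7,000-connections-per-neuron figure is an average across ages and brain regions.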
An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10¹⁴ (100 trillion) synaptic updates per second (SUPS). In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10¹⁶ computations per second (cps). (For comparison, if a "computation" was equivalent to one "floating-point operation" – a measure used to rate current supercomputers – then 10¹⁶ "computations" would be equivalent to 10 petaFLOPS, achieved in 2011, while 10¹⁸ was achieved in 2022.) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
Current research
The Human Brain Project, an EU-funded initiative active from 2013 to 2023, developed a particularly detailed and publicly accessible atlas of the human brain. In 2023, researchers from Duke University performed a high-resolution scan of a mouse brain.
Criticisms of simulation-based approaches
The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in broad outline. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for glial cells, which are known to play a role in cognitive processes. A fundamental criticism of the simulated brain approach derives from embodied cognition theory, which asserts that human embodiment is an essential aspect of human intelligence and is necessary to ground meaning. If this theory is correct, any fully functional brain model will need to encompass more than just the neurons (e.g., a robotic body). Goertzel proposes virtual embodiment (like in metaverses such as Second Life) as an option, but it is unknown whether this would be sufficient.
Philosophical perspective
"Strong AI" as defined in philosophy
In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. He proposed a distinction between two hypotheses about artificial intelligence:
Strong AI hypothesis: An artificial intelligence system can have "a mind" and "consciousness".
Weak AI hypothesis: An artificial intelligence system can (only) act like it thinks and has a mind and consciousness.
The first one he called "strong" because it makes a stronger statement: it assumes something special has happened to the machine that goes beyond those abilities that we can test. The behaviour of a "weak AI" machine would be precisely identical to that of a "strong AI" machine, but the latter would also have subjective conscious experience. This usage is also common in academic AI research and textbooks. In contrast to Searle and mainstream AI, some futurists such as Ray Kurzweil use the term "strong AI" to mean "human level artificial general intelligence". This is not the same as Searle's strong AI, unless it is assumed that consciousness is necessary for human-level AGI. Academic philosophers such as Searle do not believe that is the case, and to most artificial intelligence researchers the question is out of scope. Mainstream AI is most interested in how a program behaves.
According to Russell and Norvig, "as long as the program works, they don't care if you call it real or a simulation." If the program can behave as if it has a mind, then there is no need to know if it actually has mind – indeed, there would be no way to tell. For AI research, Searle's "weak AI hypothesis" is equivalent to the statement "artificial general intelligence is possible". Thus, according to Russell and Norvig, "most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." Thus, for academic AI research, "Strong AI" and "AGI" are two different things. Consciousness Consciousness can have various meanings, and some aspects play significant roles in science fiction and the ethics of artificial intelligence: Sentience (or "phenomenal consciousness"): The ability to "feel" perceptions or emotions subjectively, as opposed to the ability to reason about perceptions. Some philosophers, such as David Chalmers, use the term "consciousness" to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Determining why and how subjective experience arises is known as the hard problem of consciousness. Thomas Nagel explained in 1974 that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e., has consciousness) but a toaster does not. In 2022, a Google engineer claimed that the company's AI chatbot, LaMDA, had achieved sentience, though this claim was widely disputed by other experts. Self-awareness: To have conscious awareness of oneself as a separate individual, especially to be consciously aware of one's own thoughts. This is opposed to simply being the "subject of one's thought"—an operating system or debugger is able to be "aware of itself" (that is, to represent itself in the same way it represents everything else)—but this is not what people typically mean when they use the term "self-awareness". These traits have a moral dimension. AI sentience would give rise to concerns of welfare and legal protection, similarly to animals. Other aspects of consciousness related to cognitive capabilities are also relevant to the concept of AI rights. Figuring out how to integrate advanced AI with existing legal and social frameworks is an emergent issue. Benefits AGI could have a wide variety of applications. If oriented towards such goals, AGI could help mitigate various problems in the world such as hunger, poverty and health problems. AGI could improve productivity and efficiency in most jobs. For example, in public health, AGI could accelerate medical research, notably against cancer. It could take care of the elderly, and democratize access to rapid, high-quality medical diagnostics. It could offer fun, cheap and personalized education. The need to work to subsist could become obsolete if the wealth produced is properly redistributed. This also raises the question of the place of humans in a radically automated society. AGI could also help to make rational decisions, and to anticipate and prevent disasters. It could also help to reap the benefits of potentially catastrophic technologies such as nanotechnology or climate engineering, while avoiding the associated risks. 
If an AGI's primary goal is to prevent existential catastrophes such as human extinction (which could be difficult if the Vulnerable World Hypothesis turns out to be true), it could take measures to drastically reduce the risks while minimizing the impact of these measures on our quality of life. Risks Existential risks AGI may represent multiple types of existential risk, which are risks that threaten "the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development". The risk of human extinction from AGI has been the topic of many debates, but there is also the possibility that the development of AGI would lead to a permanently flawed future. Notably, it could be used to spread and preserve the set of values of whoever develops it. If humanity still has moral blind spots similar to slavery in the past, AGI might irreversibly entrench it, preventing moral progress. Furthermore, AGI could facilitate mass surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime. There is also a risk for the machines themselves. If machines that are sentient or otherwise worthy of moral consideration are mass created in the future, engaging in a civilizational path that indefinitely neglects their welfare and interests could be an existential catastrophe. Considering how much AGI could improve humanity's future and help reduce other existential risks, Toby Ord calls these existential risks "an argument for proceeding with due caution", not for "abandoning AI". Risk of loss of control and human extinction The thesis that AI poses an existential risk for humans, and that this risk needs more attention, is controversial but has been endorsed in 2023 by many public figures, AI researchers and CEOs of AI companies such as Elon Musk, Bill Gates, Geoffrey Hinton, Yoshua Bengio, Demis Hassabis and Sam Altman. In 2014, Stephen Hawking criticized widespread indifference: The potential fate of humanity has sometimes been compared to the fate of gorillas threatened by human activities. The comparison states that greater intelligence allowed humanity to dominate gorillas, which are now vulnerable in ways that they could not have anticipated. As a result, the gorilla has become an endangered species, not out of malice, but simply as a collateral damage from human activities. The skeptic Yann LeCun considers that AGIs will have no desire to dominate humanity and that we should be careful not to anthropomorphize them and interpret their intents as we would for humans. He said that people won't be "smart enough to design super-intelligent machines, yet ridiculously stupid to the point of giving it moronic objectives with no safeguards". On the other side, the concept of instrumental convergence suggests that almost whatever their goals, intelligent agents will have reasons to try to survive and acquire more power as intermediary steps to achieving these goals. And that this does not require having emotions. Many scholars who are concerned about existential risk advocate for more research into solving the "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximise the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence? 
Solving the control problem is complicated by the AI arms race (which could lead to a race to the bottom of safety precautions in order to release products before competitors), and the use of AI in weapon systems. The thesis that AI can pose existential risk also has detractors. Skeptics usually say that AGI is unlikely in the short term, or that concerns about AGI distract from other issues related to current AI. Former Google fraud czar Shuman Ghosemajumder considers that for many people outside of the technology industry, existing chatbots and LLMs are already perceived as though they were AGI, leading to further misunderstanding and fear. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God. Some researchers believe that the communication campaigns on AI existential risk by certain AI groups (such as OpenAI, Anthropic, DeepMind, and Conjecture) may be an attempt at regulatory capture and at inflating interest in their products. In 2023, the CEOs of Google DeepMind, OpenAI and Anthropic, along with other industry leaders and researchers, issued a joint statement asserting that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." Mass unemployment Researchers from OpenAI estimated that "80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while around 19% of workers may see at least 50% of their tasks impacted". They consider office workers to be the most exposed, for example mathematicians, accountants or web designers. AGI could have greater autonomy and a better ability to make decisions, to interface with other computer tools, and to control robotized bodies. According to Stephen Hawking, the outcome of automation on the quality of life will depend on how the wealth will be redistributed: Elon Musk considers that the automation of society will require governments to adopt a universal basic income. See also AI effect Artificial intelligence (AI) Moravec's paradox Notes References Sources Further reading Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.) 
Halpern, Sue, "The Coming Tech Autocracy" (review of Verity Harding, AI Needs You: How We Can Change AI's Future and Save Our Own, Princeton University Press, 274 pp.; Gary Marcus, Taming Silicon Valley: How We Can Ensure That AI Works for Us, MIT Press, 235 pp.; Daniela Rus and Gregory Mone, The Mind's Mirror: Risk and Reward in the Age of AI, Norton, 280 pp.; Madhumita Murgia, Code Dependent: Living in the Shadow of AI, Henry Holt, 311 pp.), The New York Review of Books, vol. LXXI, no. 17 (7 November 2024), pp. 44–46. "'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on governments driven by campaign finance contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the tech companies. They include transparency on how AI systems work; compensation for individuals if their data [are] used to train LLMs (large language model)s and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating Section 230, imposing cash penalites, and passing stricter product liability laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the FDA, the FCC, or the FTC, might provide the most robust oversight.... [T]he Fordham law professor Chinmayi Sharma... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to medical licenses, malpractice suits, and the Hippocratic oath in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to do no harm?'" (p. 46.) Hughes-Castleberry, Kenna, "A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.) Immerwahr, Daniel, "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.) Leffer, Lauren, "The Risks of Trusting AI: We must avoid humanizing machine-learning models used in scientific research", Scientific American, vol. 330, no. 6 (June 2024), pp. 80-81. Lepore, Jill, "The Chit-Chatbot: Is talking with a machine a conversation?", The New Yorker, 7 October 2024, pp. 12–16. Marcus, Gary, "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymmied by the same old problems", Scientific American, vol. 327, no. 4 (October 2022), pp. 42–45. 
Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26. Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts." Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.) Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", London Review of Books, vol. 46, no. 19 (10 October 2024), pp. 29–32. "[AI chatbot] programs are made possible by new technologies but rely on the timelelss human tendency to anthropomorphise." (p. 29.) External links The AGI portal maintained by Pei Wang Hypothetical technology Artificial intelligence Computational neuroscience Unsolved problems in computer science Intelligence by type
Artificial general intelligence
[ "Mathematics" ]
9,159
[ "Unsolved problems in computer science", "Unsolved problems in mathematics", "Mathematical problems" ]
586,599
https://en.wikipedia.org/wiki/Penning%20trap
A Penning trap is a device for the storage of charged particles using a homogeneous magnetic field and a quadrupole electric field. It is mostly found in the physical sciences and related fields of study for precision measurements of properties of ions and stable subatomic particles, like for example mass, fission yields and isomeric yield ratios. One initial object of study was the so-called geonium atoms, which represent a way to measure the electron magnetic moment by storing a single electron. These traps have been used in the physical realization of quantum computation and quantum information processing by trapping qubits. Penning traps are in use in many laboratories worldwide, including CERN, to store and investigate anti-particles such as antiprotons. The main advantages of Penning traps are the potentially long storage times and the existence of a multitude of techniques to manipulate and non-destructively detect the stored particles. This makes Penning traps versatile for the investigation of stored particles, but also for their selection, preparation or mere storage. History The Penning trap was named after F. M. Penning (1894–1953) by Hans Georg Dehmelt (1922–2017) who built the first trap. Dehmelt got inspiration from the vacuum gauge built by F. M. Penning where a current through a discharge tube in a magnetic field is proportional to the pressure. Citing from H. Dehmelt's autobiography: "I began to focus on the magnetron/Penning discharge geometry, which, in the Penning ion gauge, had caught my interest already at Göttingen and at Duke. In their 1955 cyclotron resonance work on photoelectrons in vacuum Franken and Liebes had reported undesirable frequency shifts caused by accidental electron trapping. Their analysis made me realize that in a pure electric quadrupole field the shift would not depend on the location of the electron in the trap. This is an important advantage over many other traps that I decided to exploit. A magnetron trap of this type had been briefly discussed in J.R. Pierce's 1949 book, and I developed a simple description of the axial, magnetron, and cyclotron motions of an electron in it. With the help of the expert glassblower of the Department, Jake Jonson, I built my first high vacuum magnetron trap in 1959 and was soon able to trap electrons for about 10 sec and to detect axial, magnetron and cyclotron resonances." – H. Dehmelt H. Dehmelt shared the Nobel Prize in Physics in 1989 for the development of the ion trap technique. Operation Penning traps use a strong homogeneous axial magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially. The static electric potential can be generated using a set of three electrodes: a ring and two endcaps. In an ideal Penning trap the ring and endcaps are hyperboloids of revolution. For trapping of positive (negative) ions, the endcap electrodes are kept at a positive (negative) potential relative to the ring. This potential produces a saddle point in the centre of the trap, which traps ions along the axial direction. The electric field causes ions to oscillate (harmonically in the case of an ideal Penning trap) along the trap axis. The magnetic field in combination with the electric field causes charged particles to move in the radial plane with a motion which traces out an epitrochoid. The orbital motion of ions in the radial plane is composed of two modes at frequencies which are called the magnetron and the modified cyclotron frequencies. 
These motions are similar to the deferent and epicycle, respectively, of the Ptolemaic model of the solar system. The sum of these two frequencies is the cyclotron frequency, which depends only on the ratio of electric charge to mass and on the strength of the magnetic field. This frequency can be measured very accurately and can be used to measure the masses of charged particles. Many of the highest-precision mass measurements (masses of the electron, proton, 2H, 20Ne and 28Si) come from Penning traps. Buffer gas cooling, resistive cooling, and laser cooling are techniques to remove energy from ions in a Penning trap. Buffer gas cooling relies on collisions between the ions and neutral gas molecules that bring the ion energy closer to the energy of the gas molecules. In resistive cooling, moving image charges in the electrodes are made to do work through an external resistor, effectively removing energy from the ions. Laser cooling can be used to remove energy from some kinds of ions in Penning traps. This technique requires ions with an appropriate electronic structure. Radiative cooling is the process by which the ions lose energy by creating electromagnetic waves by virtue of their acceleration in the magnetic field. This process dominates the cooling of electrons in Penning traps, but is very small and usually negligible for heavier particles. Using the Penning trap can have advantages over the radio frequency trap (Paul trap). Firstly, in the Penning trap only static fields are applied and therefore there is no micro-motion and resultant heating of the ions due to the dynamic fields, even for extended 2- and 3-dimensional ion Coulomb crystals. Also, the Penning trap can be made larger whilst maintaining strong trapping. The trapped ion can then be held further away from the electrode surfaces. Interaction with patch potentials on the electrode surfaces can be responsible for heating and decoherence effects and these effects scale as a high power of the inverse distance between the ion and the electrode. Fourier-transform mass spectrometry Fourier-transform ion cyclotron resonance mass spectrometry (also known as Fourier-transform mass spectrometry) is a type of mass spectrometry used for determining the mass-to-charge ratio (m/z) of ions based on the cyclotron frequency of the ions in a fixed magnetic field. The ions are trapped in a Penning trap where they are excited to a larger cyclotron radius by an oscillating electric field perpendicular to the magnetic field. The excitation also results in the ions moving in phase (in a packet). The signal is detected as an image current on a pair of plates which the packet of ions passes close to as they cyclotron. The resulting signal is called a free induction decay (fid), transient or interferogram that consists of a superposition of sine waves. The useful signal is extracted from this data by performing a Fourier transform to give a mass spectrum. Single ions can be investigated in a Penning trap held at a temperature of 4 K. For this the ring electrode is segmented and opposite electrodes are connected to a superconducting coil and the source and the gate of a field-effect transistor. The coil and the parasitic capacitances of the circuit form a LC circuit with a Q of about 50 000. The LC circuit is excited by an external electric pulse. The segmented electrodes couple the motion of the single electron to the LC circuit. 
Thus the energy in the LC circuit in resonance with the ion slowly oscillates between the many electrons (10000) in the gate of the field effect transistor and the single electron. This can be detected in the signal at the drain of the field effect transistor. Geonium atom A geonium atom is a pseudo-atomic system that consists of a single electron or ion stored in a Penning trap which is 'bound' to the remaining Earth, hence the term 'geonium'. The name was coined by H.G. Dehmelt. In the typical case, the trapped system consists of only one particle or ion. Such a quantum system is determined by quantum states of one particle, as in the hydrogen atom. Hydrogen consists of two particles, the nucleus and electron, but the electron motion relative to the nucleus is equivalent to one particle in an external field, see center-of-mass frame. The properties of geonium are different from a typical atom. The charge undergoes cyclotron motion around the trap axis and oscillates along the axis. An inhomogeneous magnetic "bottle field" is applied to measure the quantum properties by the "continuous Stern-Gerlach" technique. Energy levels and g-factor of the particle can be measured with high precision. Van Dyck et al. explored the magnetic splitting of geonium spectra in 1978 and in 1987 published high-precision measurements of electron and positron g-factors, which constrained the electron radius. Single particle In November 2017, an international team of scientists isolated a single proton in a Penning trap in order to measure its magnetic moment to the highest precision to date. The measured value agrees with the CODATA 2018 value. References External links Nobel Prize in Physics 1989 The High-precision Penning Trap Mass Spectrometer SMILETRAP in Stockholm, Sweden High-precision mass determination of unstable nuclei with a Penning trap mass spectrometer at ISOLDE/CERN, Switzerland High-precision mass measurements of rare isotopes using the LEBIT and SIPT Penning traps at the National Superconducting Cyclotron Laboratory, USA High-precision mass measurements of short-lived isotopes using the TITAN Penning trap at TRIUMF in Vancouver, Canada Measuring instruments Atomic physics Mass spectrometry Particle traps
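To put numbers on the frequency relation described in the Operation section, the short Python sketch below evaluates the standard textbook formula for the true cyclotron frequency, f_c = qB/(2πm), which captures the dependence on charge-to-mass ratio and field strength stated above. The 7 T field is an arbitrary illustrative choice and the constants are rounded CODATA values; this is a minimal sketch, not trap-design code.

    import math

    E_CHARGE = 1.602176634e-19      # elementary charge, C (exact by definition)
    M_ELECTRON = 9.1093837015e-31   # electron mass, kg (rounded CODATA value)
    M_PROTON = 1.67262192369e-27    # proton mass, kg (rounded CODATA value)

    def cyclotron_frequency(charge_c, mass_kg, b_tesla):
        """True cyclotron frequency f_c = q*B / (2*pi*m), in hertz."""
        return charge_c * b_tesla / (2.0 * math.pi * mass_kg)

    B = 7.0  # tesla; an arbitrary illustrative field strength
    for name, mass in (("electron", M_ELECTRON), ("proton", M_PROTON)):
        print(f"{name}: f_c = {cyclotron_frequency(E_CHARGE, mass, B):.3e} Hz at {B} T")
    # electron: ~2.0e11 Hz, proton: ~1.1e8 Hz

In an ideal trap the measured modified-cyclotron and magnetron frequencies sum to this f_c, so measuring them in a calibrated field yields the ion's charge-to-mass ratio, which is the basis of the precision mass measurements described above.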
Penning trap
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,939
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Quantum mechanics", "Measuring instruments", "Particle traps", "Mass spectrometry", "Atomic physics", " molecular", "Atomic", "Matter", " and optical physics" ]
586,694
https://en.wikipedia.org/wiki/Signed%20number%20representations
In computing, signed number representations are required to encode negative numbers in binary number systems. In mathematics, negative numbers in any base are represented by prefixing them with a minus sign ("−"). However, in RAM or CPU registers, numbers are represented only as sequences of bits, without extra symbols. The four best-known methods of extending the binary numeral system to represent signed numbers are: sign–magnitude, ones' complement, two's complement, and offset binary. Some of the alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2. Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other elaborations on such themes. There is no definitive criterion by which any of the representations is universally superior. For integers, the representation used in most current computing devices is two's complement, although the Unisys ClearPath Dorado series mainframes use ones' complement. History The early days of digital computing were marked by competing ideas about both hardware technology and mathematics technology (numbering systems). One of the great debates was the format of negative numbers, with some of the era's top experts expressing very strong and differing opinions. One camp supported two's complement, the system that is dominant today. Another camp supported ones' complement, where a negative value is formed by inverting all of the bits in its positive equivalent. A third group supported sign–magnitude, where a value is changed from positive to negative simply by toggling the word's highest-order bit. There were arguments for and against each of the systems. Sign–magnitude allowed for easier tracing of memory dumps (a common process in the 1960s) as small numeric values use fewer 1 bits. These systems did ones' complement math internally, so numbers would have to be converted to ones' complement values when they were transmitted from a register to the math unit and then converted back to sign–magnitude when the result was transmitted back to the register. The electronics required more gates than the other systems – a key concern when the cost and packaging of discrete transistors were critical. IBM was one of the early supporters of sign–magnitude, with their 704, 709 and 709x series computers being perhaps the best-known systems to use it. Ones' complement allowed for somewhat simpler hardware designs, as there was no need to convert values when passed to and from the math unit. But it also shared an undesirable characteristic with sign–magnitude: the ability to represent negative zero (−0). Negative zero behaves exactly like positive zero: when used as an operand in any calculation, the result will be the same whether an operand is positive or negative zero. The disadvantage is that the existence of two forms of the same value necessitates two comparisons when checking for equality with zero. Ones' complement subtraction can also result in an end-around borrow (described below). It can be argued that this makes the addition and subtraction logic more complicated or that it makes it simpler, as a subtraction requires simply inverting the bits of the second operand as it is passed to the adder. The PDP-1, CDC 160 series, CDC 3000 series, CDC 6000 series, UNIVAC 1100 series, and LINC computer use ones' complement representation. Two's complement is the easiest to implement in hardware, which may be the ultimate reason for its widespread popularity. 
Processors on the early mainframes often consisted of thousands of transistors, so eliminating a significant number of transistors was a significant cost savings. Mainframes such as the IBM System/360, the GE-600 series, and the PDP-6 and PDP-10 use two's complement, as did minicomputers such as the PDP-5 and PDP-8 and the PDP-11 and VAX machines. The architects of the early integrated-circuit-based CPUs (Intel 8080, etc.) also chose to use two's complement math. As IC technology advanced, two's complement technology was adopted in virtually all processors, including x86, m68k, Power ISA, MIPS, SPARC, ARM, Itanium, PA-RISC, and DEC Alpha. Sign–magnitude In the sign–magnitude representation, also called sign-and-magnitude or signed magnitude, a signed number is represented by the bit pattern corresponding to the sign of the number for the sign bit (often the most significant bit, set to 0 for a positive number and to 1 for a negative number), and the magnitude of the number (or absolute value) for the remaining bits. For example, in an eight-bit byte, only seven bits represent the magnitude, which can range from 0000000 (0) to 1111111 (127). Thus numbers ranging from −127₁₀ to +127₁₀ can be represented once the sign bit (the eighth bit) is added. For example, −43₁₀ encoded in an eight-bit byte is 10101011 while 43₁₀ is 00101011. Using sign–magnitude representation has multiple consequences which make it more intricate to implement: There are two ways to represent zero, 00000000 (0) and 10000000 (−0). Addition and subtraction require different behavior depending on the sign bit, whereas ones' complement can ignore the sign bit and just do an end-around carry, and two's complement can ignore the sign bit and depend on the overflow behavior. Comparison also requires inspecting the sign bit, whereas in two's complement, one can simply subtract the two numbers, and check if the outcome is positive or negative. The minimum negative number is −127, instead of −128 as in the case of two's complement. This approach is directly comparable to the common way of showing a sign (placing a "+" or "−" next to the number's magnitude). Some early binary computers (e.g., IBM 7090) use this representation, perhaps because of its natural relation to common usage. Sign–magnitude is the most common way of representing the significand in floating-point values. Ones' complement In the ones' complement representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number. Like sign–magnitude representation, ones' complement has two representations of 0: 00000000 (+0) and 11111111 (−0). As an example, the ones' complement form of 00101011 (43₁₀) becomes 11010100 (−43₁₀). The range of signed numbers using ones' complement is represented by −(2^(N−1) − 1) to +(2^(N−1) − 1) and ±0. A conventional eight-bit byte is −127₁₀ to +127₁₀ with zero being either 00000000 (+0) or 11111111 (−0). To add two numbers represented in this system, one does a conventional binary addition, but it is then necessary to do an end-around carry: that is, add any resulting carry back into the resulting sum. To see why this is necessary, consider the following example showing the case of the addition of −1 (11111110) to +2 (00000010):

      binary      decimal
     11111110         −1
   + 00000010         +2
   ───────────        ──
   1 00000000          0   ← Incorrect answer
             1        +1   ← Add carry
   ───────────        ──
     00000001          1   ← Correct answer

In the previous example, the first binary addition gives 00000000, which is incorrect. 
The correct result (00000001) only appears when the carry is added back in. A remark on terminology: The system is referred to as "ones' complement" because the negation of a positive value x (represented as the bitwise NOT of x) can also be formed by subtracting x from the ones' complement representation of zero that is a long sequence of ones (−0). Two's complement arithmetic, on the other hand, forms the negation of x by subtracting x from a single large power of two that is congruent to +0. Therefore, ones' complement and two's complement representations of the same negative value will differ by one. Note that the ones' complement representation of a negative number can be obtained from the sign–magnitude representation merely by bitwise complementing the magnitude (inverting all the bits after the first). For example, the decimal number −125 with its sign–magnitude representation 11111101 can be represented in ones' complement form as 10000010. Two's complement In the two's complement representation, a negative number is represented by the bit pattern corresponding to the bitwise NOT (i.e. the "complement") of the positive number plus one, i.e. to the ones' complement plus one. It circumvents the problems of multiple representations of 0 and the need for the end-around carry of the ones' complement representation. This can also be thought of as the most significant bit representing the inverse of its value in an unsigned integer; in an 8-bit unsigned byte, the most significant bit represents the 128s place, where in two's complement that bit would represent −128. In two's-complement, there is only one zero, represented as 00000000. Negating a number (whether negative or positive) is done by inverting all the bits and then adding one to that result. This actually reflects the ring structure of the integers modulo 2^N, namely ℤ/2^Nℤ. Addition of a pair of two's-complement integers is the same as addition of a pair of unsigned numbers (except for detection of overflow, if that is done); the same is true for subtraction and even for the N least significant bits of a product (value of multiplication). For instance, a two's-complement addition of 127 and −128 gives the same binary bit pattern as an unsigned addition of 127 and 128, as can be seen from the 8-bit two's complement table. An easier method to get the negation of a number in two's complement is as follows: first invert all the bits through the number (this computes the same result as subtracting the number from negative one), then add one. Example: for +2, which is 00000010 in binary (the ~ character is the C bitwise NOT operator, so ~X means "invert all the bits in X"): ~00000010 → 11111101; 11111101 + 1 → 11111110 (−2 in two's complement) Offset binary In the offset binary representation, also called excess-K or biased, a signed number is represented by the bit pattern corresponding to the unsigned number plus K, with K being the biasing value or offset. Thus 0 is represented by K, and −K is represented by an all-zero bit pattern. This can be seen as a slight modification and generalization of the aforementioned two's-complement, which is virtually the representation with negated most significant bit. Biased representations are now primarily used for the exponent of floating-point numbers. The IEEE 754 floating-point standard defines the exponent field of a single-precision (32-bit) number as an 8-bit excess-127 field. The double-precision (64-bit) exponent field is an 11-bit excess-1023 field; see exponent bias. 
It was also used for binary-coded decimal numbers, as excess-3. Base −2 In the base −2 representation, a signed number is represented using a number system with base −2. In conventional binary number systems, the base, or radix, is 2; thus the rightmost bit represents 2^0, the next bit represents 2^1, the next bit 2^2, and so on. However, a binary number system with base −2 is also possible. The rightmost bit represents (−2)^0 = 1, the next bit represents (−2)^1 = −2, the next bit (−2)^2 = 4, and so on, with alternating sign. The numbers that can be represented with four bits are shown in the comparison table below. The range of numbers that can be represented is asymmetric. If the word has an even number of bits, the magnitude of the largest negative number that can be represented is twice as large as the largest positive number that can be represented, and vice versa if the word has an odd number of bits. Comparison table The following table shows the positive and negative integers that can be represented using four bits. Same table, as viewed from "given these binary bits, what is the number as interpreted by the representation system": Other systems Google's Protocol Buffers "zig-zag encoding" is a system similar to sign–magnitude, but uses the least significant bit to represent the sign and has a single representation of zero. This allows a variable-length quantity encoding intended for nonnegative (unsigned) integers to be used efficiently for signed integers. A similar method is used in the Advanced Video Coding/H.264 and High Efficiency Video Coding/H.265 video compression standards to extend exponential-Golomb coding to negative numbers. In that extension, the least significant bit is almost a sign bit; zero has the same least significant bit (0) as all the negative numbers. This choice results in the largest magnitude representable positive number being one higher than the largest magnitude negative number, unlike in two's complement or the Protocol Buffers zig-zag encoding. Another approach is to give each digit a sign, yielding the signed-digit representation. For instance, in 1726, John Colson advocated reducing expressions to "small numbers", numerals 1, 2, 3, 4, and 5. In 1840, Augustin Cauchy also expressed preference for such modified decimal numbers to reduce errors in computation. See also Balanced ternary Binary-coded decimal Computer number format Method of complements Signedness References Ivan Flores, The Logic of Computer Arithmetic, Prentice-Hall (1963) Israel Koren, Computer Arithmetic Algorithms, A.K. Peters (2002), Computer arithmetic ca:Representació de nombres amb signe cs:Dvojková soustava#Zobrazení záporných čísel fr:Système binaire#Représentation des entiers négatifs
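The following Python sketch illustrates several of the encodings discussed in this article (sign–magnitude, ones' complement, two's complement, and the Protocol Buffers-style zig-zag mapping), using the article's own 8-bit examples. It is a minimal illustration of the definitions above, not a reference implementation, and the helper names are ad hoc.

    def to_sign_magnitude(value, bits=8):
        """Sign bit in the most significant position, magnitude in the remaining bits."""
        sign = 1 << (bits - 1) if value < 0 else 0
        return sign | abs(value)

    def to_ones_complement(value, bits=8):
        """Negative values are the bitwise NOT of the corresponding positive value."""
        mask = (1 << bits) - 1
        return value & mask if value >= 0 else ~(-value) & mask

    def to_twos_complement(value, bits=8):
        """Negative values are the ones' complement plus one (reduction mod 2**bits)."""
        return value & ((1 << bits) - 1)

    def zigzag_encode(n):
        """Protocol Buffers zig-zag: 0, -1, 1, -2, 2, ... map to 0, 1, 2, 3, 4, ..."""
        return (n << 1) if n >= 0 else ((-n) << 1) - 1

    for v in (43, -43):
        print(f"{v:>4}: sign-magnitude {to_sign_magnitude(v):08b}, "
              f"ones' complement {to_ones_complement(v):08b}, "
              f"two's complement {to_twos_complement(v):08b}, "
              f"zig-zag {zigzag_encode(v)}")
    # For -43 this prints 10101011, 11010100 and 11010101; the first two
    # match the sign-magnitude and ones' complement examples given above.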
Signed number representations
[ "Mathematics" ]
3,000
[ "Computer arithmetic", "Arithmetic" ]
586,699
https://en.wikipedia.org/wiki/Darwin%20Medal
The Darwin Medal is one of the medals awarded by the Royal Society for "distinction in evolution, biological diversity and developmental, population and organismal biology". In 1885, the International Darwin Memorial Fund was transferred to the Royal Society. The fund was devoted to the promotion of biological research, and was used to establish the Darwin Medal. The medal was first awarded to Alfred Russel Wallace in 1890 for "his independent origination of the theory of the origin of species by natural selection." The medal commemorates the work of English biologist Charles Darwin (1809–1882). Darwin, most famous for his 1859 book On the Origin of Species, was a fellow of the Royal Society, and had received the Royal Medal in 1853 and the Copley Medal in 1864. The Darwin Medal is about 2¼ inches (5.7 cm) in diameter. It is made of silver. The obverse has Darwin's portrait, while the reverse has a wreath of plants with Darwin's name in Latin, "Carolus Darwin". It is surrounded by the years of his birth and death in Roman numerals (MDCCCIX and MDCCCLXXXII). The general design of the medal was by John Evans, the president of the Royal Numismatic Society. Since its creation the Darwin Medal has been awarded over 60 times. Among the recipients are Francis Darwin, Charles Darwin's son, and two married couples: Jack and Yolande Heslop-Harrison in 1982 and Peter and Rosemary Grant in 2002. Initially accompanied by a grant of £100, the medal is currently awarded with a grant of £2,000. All citizens who have been residents of the United Kingdom, Commonwealth of Nations, or the Republic of Ireland for more than three years are eligible for the medal. The medal was awarded biennially from 1890 until 2018; since then it has been awarded annually. List of recipients See also Awards, lectures and medals of the Royal Society References External links Awards established in 1890 Awards of the Royal Society Biennial events Biology awards Charles Darwin 1890 establishments in the United Kingdom 1890 in biology
Darwin Medal
[ "Technology" ]
416
[ "Science and technology awards", "Biology awards" ]
586,735
https://en.wikipedia.org/wiki/Archaeobatrachia
Archaeobatrachia (Neo-Latin archaeo- ("old") + batrachia ("frog")) is a suborder of the order Anura containing various primitive frogs and toads. As the name suggests, these are the most primitive frogs. Many of the species (28 in total) show certain physiological characteristics which are not present in other frogs and toads, thus giving rise to this group. They are largely found in Eurasia, New Zealand, the Philippines, and Borneo, and are characteristically small. In addition, the family Ascaphidae is found in the Pacific Northwest and northern Rocky Mountains of the United States, and is only represented by two species. The taxon is considered paraphyletic. References Amphibian suborders Paraphyletic groups
Archaeobatrachia
[ "Biology" ]
169
[ "Phylogenetics", "Paraphyletic groups" ]
586,817
https://en.wikipedia.org/wiki/Mass%20spectrum
A mass spectrum is a histogram plot of intensity vs. mass-to-charge ratio (m/z) in a chemical sample, usually acquired using an instrument called a mass spectrometer. Not all mass spectra of a given substance are the same; for example, some mass spectrometers break the analyte molecules into fragments; others observe the intact molecular masses with little fragmentation. A mass spectrum can represent many different types of information based on the type of mass spectrometer and the specific experiment applied. Common fragmentation processes for organic molecules are the McLafferty rearrangement and alpha cleavage. Straight chain alkanes and alkyl groups produce a typical series of peaks: 29 (CH3CH2+), 43 (CH3CH2CH2+), 57 (CH3CH2CH2CH2+), 71 (CH3CH2CH2CH2CH2+) etc. X-axis: m/z (mass-to-charge ratio) The x-axis of a mass spectrum represents a relationship between the mass of a given ion and the number of elementary charges that it carries. This is written as the IUPAC standard m/z to denote the quantity formed by dividing the mass of an ion (in daltons) by the dalton unit and by its charge number (positive absolute value). Thus, m/z is a dimensionless quantity with no associated units. Despite carrying neither units of mass nor charge, the m/z is referred to as the mass-to-charge ratio of an ion. However, this is distinct from the mass-to-charge ratio, m/Q (SI standard units kg/C), which is commonly used in physics. The m/z is used in applied mass spectrometry because convenient and intuitive numerical relationships naturally arise when interpreting spectra. A single m/z value alone does not contain sufficient information to determine the mass or charge of an ion. However, mass information may be extracted when considering the whole spectrum, such as the spacing of isotopes or the observation of multiple charge states of the same molecule. These relationships and the relationship to the mass of the ion in daltons tend toward approximately rational number values in m/z space. For example, ions with one charge exhibit spacing between isotopes of 1 and the mass of the ion in daltons is numerically equal to the m/z. The IUPAC Gold Book gives an example of appropriate use: "for the ion C7H7^2+, m/z equals 45.5". Alternative x-axis notations There are several alternatives to the standard m/z notation that appear in the literature; however, these are not currently accepted by standards organizations and most journals. m/e appears in older historical literature. A label more consistent with the IUPAC green book and ISO 31 conventions is m/Q or m/q where m is the symbol for mass and Q or q the symbol for charge with the units u/e or Da/e. This notation is not uncommon in the physics of mass spectrometry but is rarely used as the abscissa of a mass spectrum. It was also suggested to introduce a new unit thomson (Th) as a unit of m/z, where 1 Th = 1 u/e. According to this convention, mass spectra x axis could be labeled m/z (Th) and negative ions would have negative values. This notation is rare and not accepted by IUPAC or any other standards organisation. History of x-axis notation In 1897 the mass-to-charge ratio of the electron was first measured by J. J. Thomson. By doing this he showed that the electron, which was postulated before in order to explain electricity, was in fact a particle with a mass and a charge and that its mass-to-charge ratio was much smaller than the one for the hydrogen ion H+. 
In 1913 he measured the mass-to-charge ratio of ions with an instrument he called a parabola spectrograph. Although this data was not represented as a modern mass spectrum, it was similar in meaning. Eventually the notation changed, with m/e giving way to the current standard of m/z. Early in mass spectrometry research the resolution of mass spectrometers did not allow for accurate mass determination. Francis William Aston won the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the Whole Number Rule", in which he stated that all atoms (including isotopes) follow a whole-number rule. This implied that the masses of atoms were not on a continuous scale but could be expressed as integers (in fact multiple charged ions were rare, so for the most part the ratio was whole as well). There have been several suggestions (e.g. the unit thomson) to change the official mass spectrometry nomenclature to be more internally consistent. Y-axis: signal intensity The y-axis of a mass spectrum represents signal intensity of the ions. When using counting detectors the intensity is often measured in counts per second (cps). When using analog detection electronics the intensity is typically measured in volts. In FTICR and Orbitraps the frequency domain signal (the y-axis) is related to the power (~amplitude squared) of the signal sine wave (often reduced to an rms power); however, the axis is usually not labeled as such for many reasons. In most forms of mass spectrometry, the intensity of ion current measured by the spectrometer does not accurately represent relative abundance, but correlates loosely with it. Therefore, it is common to label the y-axis with "arbitrary units". Y-axis and relative abundance Signal intensity may be dependent on many factors, especially the nature of the molecules being analyzed and how they ionize. The efficiency of ionization varies from molecule to molecule and from ion source to ion source. For example, in electrospray sources in positive ion mode a quaternary amine will ionize exceptionally well whereas a large hydrophobic alcohol will most likely not be seen no matter how concentrated. In an EI source these molecules will behave very differently. Additionally there may be factors that affect ion transmission disproportionately between ionization and detection. On the detection side there are many factors that can also affect signal intensity in a non-proportional way. The size of the ion will affect the velocity of impact and with certain detectors the velocity is proportional to the signal output. In other detection systems, such as FTICR, the number of charges on the ion is more important to signal intensity. In Fourier transform ion cyclotron resonance and Orbitrap type mass spectrometers the signal intensity (Y-axis) is related to the amplitude of the free induction decay signal. This is fundamentally a power relationship (amplitude squared) but is often computed as an rms value. For decaying signals the rms is not equal to the average amplitude. Additionally the damping constant (decay rate of the signal in the fid) is not the same for all ions. In order to make conclusions about relative intensity a great deal of knowledge and care is required. A common way to get more quantitative information out of a mass spectrum is to create a standard curve to compare the sample to. 
This requires knowing what is to be quantitated ahead of time, having a standard available and designing the experiment specifically for this purpose. A more advanced variation on this is the use of an internal standard which behaves very similarly to the analyte. This is often an isotopically labeled version of the analyte. There are forms of mass spectrometry, such as accelerator mass spectrometry that are designed from the bottom up to be quantitative. Spectral skewing Spectral skewing is the change in relative intensity of mass spectral peaks due to the changes in concentration of the analyte in the ion source as the mass spectrum is scanned. This situation occurs routinely as chromatographic components elute into a continuous ion source. Spectral skewing is not observed in ion trap (quadrupole (this has been seen also in QMS) or magnetic) or time-of-flight (TOF) mass analyzers because potentially all ions formed in operational cycle (a snapshot in time) of the instrument are available for detection. See also Kendrick mass References External links Quantities, Units and Symbols in Physical Chemistry (IUPAC green book) An introductory video on Mass Spectrometry The Royal Society of Chemistry NIST Standard Reference Database 1A v17 Mass spectrometry
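As a small numerical illustration of the m/z conventions described in the x-axis section, the Python sketch below divides an ion's mass in daltons by its charge number and shows the ~1/z spacing of isotope peaks; the rounded atomic masses and the function name are ad hoc choices made for illustration only.

    def m_over_z(ion_mass_da, charge_number):
        """Dimensionless m/z: ion mass in daltons divided by the (absolute) charge number."""
        return ion_mass_da / abs(charge_number)

    # The IUPAC example quoted above: C7H7 carrying two charges gives m/z of about 45.5.
    mass_c7h7 = 7 * 12.000 + 7 * 1.008   # rounded atomic masses, in Da
    print(round(m_over_z(mass_c7h7, 2), 1))   # 45.5

    # Isotope peaks differ by roughly 1 Da, so their spacing on the m/z axis is ~1/z,
    # which is one way the charge state of an ion can be read off a spectrum.
    for z in (1, 2, 3):
        spacing = m_over_z(mass_c7h7 + 1.0, z) - m_over_z(mass_c7h7, z)
        print(z, round(spacing, 3))   # 1.0, 0.5, 0.333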
Mass spectrum
[ "Physics", "Chemistry" ]
1,789
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
586,918
https://en.wikipedia.org/wiki/Bruce%20Medal
The Catherine Wolfe Bruce Gold Medal is awarded every year by the Astronomical Society of the Pacific for outstanding lifetime contributions to astronomy. It is named after Catherine Wolfe Bruce, an American patroness of astronomy, and was first awarded in 1898. List of Bruce Medalists Source: Astronomical Society of the Pacific 1898 – Simon Newcomb 1899 – Arthur Auwers 1900 – David Gill 1902 – Giovanni V. Schiaparelli 1904 – William Huggins 1906 – Hermann Carl Vogel 1908 – Edward C. Pickering 1909 – George William Hill 1911 – Henri Poincaré 1913 – Jacobus C. Kapteyn 1914 – Oskar Backlund 1915 – William Wallace Campbell 1916 – George Ellery Hale 1917 – Edward Emerson Barnard 1920 – Ernest W. Brown 1921 – Henri A. Deslandres 1922 – Frank W. Dyson 1923 – Benjamin Baillaud 1924 – Arthur Stanley Eddington 1925 – Henry Norris Russell 1926 – Robert G. Aitken 1927 – Herbert Hall Turner 1928 – Walter S. Adams 1929 – Frank Schlesinger 1930 – Max Wolf 1931 – Willem de Sitter 1932 – John S. Plaskett 1933 – Carl V.L. Charlier 1934 – Alfred Fowler 1935 – Vesto M. Slipher 1936 – Armin O. Leuschner 1937 – Ejnar Hertzsprung 1938 – Edwin P. Hubble 1939 – Harlow Shapley 1940 – Frederick H. Seares 1941 – Joel Stebbins 1942 – Jan H. Oort 1945 – E. Arthur Milne 1946 – Paul Merrill 1947 – Bernard Lyot 1948 – Otto Struve 1949 – Harold Spencer Jones 1950 – Alfred H. Joy 1951 – Marcel Minnaert 1952 – Subrahmanyan Chandrasekhar 1953 – Harold D. Babcock 1954 – Bertil Lindblad 1955 – Walter Baade 1956 – Albrecht Unsöld 1957 – Ira S. Bowen 1958 – William Wilson Morgan 1959 – Bengt Strömgren 1960 – Viktor A. Ambartsumian 1961 – Rudolph Minkowski 1962 – Grote Reber 1963 – Seth Barnes Nicholson 1964 – Otto Heckmann 1965 – Martin Schwarzschild 1966 – Dirk Brouwer 1967 – Ludwig Biermann 1968 – Willem J. Luyten 1969 – Horace W. Babcock 1970 – Fred Hoyle 1971 – Jesse Greenstein 1972 – Iosif S. Shklovskii 1973 – Lyman Spitzer Jr. 1974 – Martin Ryle 1975 – Allan R. Sandage 1976 – Ernst J. Öpik 1977 – Bart J. Bok 1978 – Hendrik C. van de Hulst 1979 – William A. Fowler 1980 – George Herbig 1981 – Riccardo Giacconi 1982 – E. Margaret Burbidge 1983 – Yakov B. Zel'dovich 1984 – Olin C. Wilson 1985 – Thomas G. Cowling 1986 – Fred L. Whipple 1987 – Edwin E. Salpeter 1988 – John G. Bolton 1989 – Adriaan Blaauw 1990 – Charlotte E. Moore Sitterly 1991 – Donald E. Osterbrock 1992 – Maarten Schmidt 1993 – Martin Rees 1994 – Wallace Sargent 1995 – P. James E. Peebles 1996 – Albert E. Whitford 1997 – Eugene Parker 1998 – Donald Lynden-Bell 1999 – Geoffrey R. Burbidge 2000 – Rashid A. Sunyaev 2001 – Hans A. Bethe 2002 – Bohdan Paczyński 2003 – Vera C. Rubin 2004 – Chūshirō Hayashi 2005 – Robert Kraft 2006 – Frank J. Low 2007 – Martin Harwit 2008 – Sidney van den Bergh 2009 – Frank H. Shu 2010 – Gerry Neugebauer 2011 – Jeremiah P. Ostriker 2012 – Sandra M. Faber 2013 – James E. Gunn 2014 – Kenneth Kellermann 2015 – Douglas N. C. Lin 2016 – Andrew Fabian 2017 – Nick Scoville 2018 – Tim Heckman 2019 – Martha P. Haynes 2020 – Prize suspended due to COVID-19 pandemic 2021 - Bruce Elmegreen 2022 - Ellen Gould Zweibel 2023 - Marcia J. Rieke 2024 - Chryssa Kouveliotou See also List of astronomy awards Prizes named after people Sonoma State's Directory References External links 20th Century Astronomers Astronomy prizes Awards established in 1898 Astronomical Society of the Pacific
Bruce Medal
[ "Astronomy", "Technology" ]
875
[ "Astronomical Society of the Pacific", "Astronomy education", "Astronomy prizes", "Science and technology awards" ]
586,931
https://en.wikipedia.org/wiki/Catherine%20Wolfe%20Bruce
Catherine Wolfe Bruce (January 22, 1816, New York – March 13, 1900, New York) was a noted American philanthropist and patron of astronomy. Early life Bruce was born on January 22, 1816. She was the daughter of George Bruce (1781–1866), a famous type founder who was born in Edinburgh, and Catherine Wolfe (1785–1861), the daughter of David Wolfe (1748–1836) of New York City. She was one of five children; her brother David Wolfe Bruce (1824–1895), along with David Wolfe Bishop, inherited the fortune of their cousin, Catharine Lorillard Wolfe. Career She studied painting, learned Latin, German, French and Italian, and was familiar with the literature of those languages. In 1890, she wrote and published a translation of the "Dies Irae." Personal life Due to an ever-increasing illness, she was confined to her home and died on March 13, 1900, at 810 Fifth Avenue in New York City. Philanthropy In 1877, she donated $50,000 for the construction of a library building and the purchase of books in memory of her father. The library, known as "The George Bruce Library", was completed in 1888; it was located at 226 West 42nd Street and designed by G. E. Harney. The building was sold in 1913 and the proceeds were used to build the current George Bruce library located on 125th Street in Harlem and designed by Carrère & Hastings. As an amateur astronomer, she turned to philanthropy in this field at the age of 73, only after reading an article by Simon Newcomb claiming that all the major discoveries in astronomy had already occurred. Bruce turned to telescope maker Alvan Graham Clark to see how she could support research in astronomy. Bruce made over 54 gifts to astronomy, totaling over $275,000, between 1889 and 1899. She donated funds to the Harvard College Observatory (U.S.A.), Yerkes Observatory (U.S.A.) and Landessternwarte Heidelberg-Königstuhl (Germany), run by Max Wolf at the time, to buy new telescopes at each of those institutes. In 1887, she donated the George Bruce Free Library. Bruce established the Bruce Medal of the Astronomical Society of the Pacific in recognition of lifetime achievements and contributions to astrophysics; it is one of the most prestigious awards in the field. Honors Asteroid 323 Brucia, discovered by Max Wolf, is named after her, as well as the crater Bruce on the Moon. She was awarded a gold medal by the Grand Duke of Baden. Astronomer Johann Palisa gave her the honor of naming 313 Chaldaea as a token of the gratitude of astronomers. References 1816 births 1900 deaths People associated with astronomy American people of Scottish descent Harvard College Observatory people
Catherine Wolfe Bruce
[ "Astronomy" ]
564
[ "People associated with astronomy" ]
586,963
https://en.wikipedia.org/wiki/Smoothbore
A smoothbore weapon is one that has a barrel without rifling. Smoothbores range from handheld firearms to powerful tank guns and large artillery mortars. History Early firearms had smoothly bored barrels that fired projectiles without significant spin. To minimize inaccuracy-inducing tumbling during flight, their projectiles required an aerodynamically uniform shape, such as a sphere. However, surface imperfections on the projectile and/or the barrel will cause even a sphere to rotate randomly during flight, and the Magnus effect will curve it off the intended trajectory when spinning on any axis not parallel to the direction of travel. Rifling the bore surface with spiral grooves or polygonal valleys imparts a stabilizing gyroscopic spin to a projectile that prevents tumbling in flight. Not only does this more than counter Magnus-induced drift, but it allows a longer, more streamlined round with greater sectional density to be fired from the same caliber barrel, improving accuracy, effective range and hitting power. In the eighteenth century, the standard infantry arm was the smoothbore musket; although rifled muskets were introduced in the early 18th century and had more power and range, they did not become the norm until the middle of the 19th century, when the Minié ball increased their rate of fire to match that of smoothbores. Artillery weapons were smoothbore until the mid-19th century, and smoothbores continued in limited use until the late 19th century. Early rifled artillery pieces were patented by Joseph Whitworth and William Armstrong in the United Kingdom in 1855. In the United States, rifled small arms and artillery were gradually adopted during the American Civil War. However, heavy coast defense Rodman smoothbores persisted in the US until 1900 due to the tendency of the Civil War's heavy Parrott rifles to burst and lack of funding for replacement weapons. Current use Some smoothbore firearms are still used. Small arms A shotgun fires multiple round shot; firing out of a rifled barrel would impart centrifugal forces that result in a doughnut-shaped pattern of shot (with a high projectile density on the periphery, and a low projectile density in the interior). While this may be acceptable at close ranges (some spreader chokes are rifled to produce wide patterns at close range) this is not desirable at longer ranges, where a tight, consistent pattern is required to improve accuracy. Another smoothbore weapon in use today is the 37-mm riot gun, which fires less-lethal munitions like rubber bullets and teargas at short range at crowds, where a high degree of accuracy is not required. The Steyr IWS 2000 anti-tank rifle is smoothbore. This can help accelerate projectiles and increase ballistic effectiveness. The projectile is a 15.2 mm fin-stabilized discarding-sabot type with armor-piercing capability which the IWS 2000 was specifically designed to fire. It contains a dart-shaped penetrator of either tungsten carbide or depleted uranium, capable of piercing 40 mm of rolled homogeneous armor at a range of 1,000 m, and causing secondary fragmentation. Artillery and tanks The cannon made the transition from smoothbore firing cannonballs to rifled firing shells in the mid-19th century. However, to reliably penetrate the thick armor of modern armored vehicles, many modern tank guns have moved back to smoothbore. These fire a very long, thin kinetic-energy projectile, too long in relation to its diameter to develop the necessary spin rate through rifling. 
Instead, kinetic energy rounds are produced as fin-stabilized darts. Not only does this reduce the time and expense of producing rifled barrels, it also reduces the need for replacement due to barrel wear. The armour-piercing gun evolution has also shown up in small arms, particularly the now abandoned U.S. Advanced Combat Rifle (ACR) program. The ACR "rifles" used smoothbore barrels to fire single or multiple flechettes (tiny darts), rather than bullets, per pull of the trigger, to provide long range, flat trajectory, and armor-piercing abilities. Just like kinetic-energy tank rounds, flechettes are too long and thin to be stabilized by rifling and perform best from a smoothbore barrel. The ACR program was abandoned due to reliability problems and poor terminal ballistics. Mortar barrels are typically muzzle-loading smoothbores. Since mortars fire bombs that are dropped down the barrel and must not be a tight fit, a smooth barrel is essential. The bombs are fin-stabilized. Gallery See also Rifling Buck and ball Cap gun Caplock mechanism Internal ballistics Tubes and primers for ammunition Minié ball Gunpowder Cannon Muzzleloader Muzzle (firearms) Gun barrel Projectile References Artillery by type Firearm terminology Artillery components
Smoothbore
[ "Technology" ]
978
[ "Artillery components", "Components" ]
587,106
https://en.wikipedia.org/wiki/Lava%20flow%20%28programming%29
In computer programming jargon, lava flow is an anti-pattern that occurs when computer source code written under sub-optimal conditions is deployed into a production environment and subsequently expanded upon while still in a developmental state. The term derives from the natural occurrence of lava which, once cooled, solidifies into rock that is difficult to remove. Similarly, such code becomes difficult to refactor or replace due to dependencies that arise over time, necessitating the maintenance of backward compatibility with the original, incomplete design. Causes Lava flow can occur due to a variety of reasons within a software development process: Pressure to meet deadlines leading to temporary solutions becoming permanent Inadequate documentation which prevents understanding of the code’s purpose Lack of automated tests which makes refactoring risky Frequent changes in the development team leading to loss of knowledge Consequences Unrefined code that becomes part of the software’s infrastructure increases the complexity of the system and the codebase becomes increasingly difficult to understand and maintain. It leads to: The need for backward compatibility which can stifle innovation and prevent adoption of newer, more efficient solutions Increased technical debt that accumulates over time, resulting in higher costs of change and maintenance Obstacles to refactoring or improving the system due to fear of breaking dependent components Impact on Teams Development teams often experience the impact of lava flow when team members cycle in and out: Loss of knowledge about aspects of the system's code when original developers leave Reluctance among new developers to refactor unfamiliar code, leading to further complexity as they add rather than clean up Mitigation Strategies Several practices can mitigate the effects of the lava flow anti-pattern: Promoting good documentation practices for clear understanding of code Encouraging regular code reviews to catch suboptimal practices early Prioritizing refactoring as an integral part of the development lifecycle Maintaining a comprehensive suite of automated tests to reduce risk in changes References Anti-patterns
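The anti-pattern is easiest to see in code. Below is a minimal, entirely hypothetical Python sketch of "lava flow": a module still carrying dead branches, mystery constants and commented-out experiments from an early prototype, which later callers have come to depend on. All names (legacy_discount, PROMO_1999, and so on) are invented for illustration, and the test at the end sketches the mitigation of pinning current behaviour before refactoring.

```python
# Hypothetical example of "lava flow": prototype code hardened into place.
# None of these names come from a real project; they only illustrate the pattern.

PROMO_1999 = 0.15  # nobody remembers this campaign, but removing it "breaks something"


def legacy_discount(price, customer_type, use_new_rules=False):
    """Price discount kept from the original prototype.

    The `use_new_rules` flag was bolted on later instead of refactoring,
    so both code paths must now be maintained indefinitely.
    """
    if use_new_rules:
        # "New" rules layered on top of the old ones rather than replacing them.
        return price * 0.90
    if customer_type == "beta":       # beta programme ended years ago
        return price * (1 - PROMO_1999)
    # if customer_type == "internal":  # disabled long ago; "do not delete (?)"
    #     return 0.0
    return price


# Mitigation: before refactoring, pin the current behaviour with tests so the
# solidified code can finally be reshaped without fear of breaking callers.
def test_legacy_discount_behaviour():
    assert legacy_discount(100.0, "regular") == 100.0
    assert legacy_discount(100.0, "beta") == 85.0
    assert legacy_discount(100.0, "regular", use_new_rules=True) == 90.0


if __name__ == "__main__":
    test_legacy_discount_behaviour()
    print("behaviour pinned; safe to start refactoring")
```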
Lava flow (programming)
[ "Technology" ]
385
[ "Computer science", "Anti-patterns", "Computing stubs", "Computer science stubs" ]
587,163
https://en.wikipedia.org/wiki/Light-time%20correction
Light-time correction is a displacement in the apparent position of a celestial object from its true position (or geometric position) caused by the object's motion during the time it takes its light to reach an observer. Light-time correction occurs in principle during the observation of any moving object, because the speed of light is finite. The magnitude and direction of the displacement in position depends upon the distance of the object from the observer and the motion of the object, and is measured at the instant at which the object's light reaches the observer. It is independent of the motion of the observer. It should be contrasted with the aberration of light, which depends upon the instantaneous velocity of the observer at the time of observation, and is independent of the motion or distance of the object. Light-time correction can be applied to any object whose distance and motion are known. In particular, it is usually necessary to apply it to the motion of a planet or other Solar System object. For this reason, the combined displacement of the apparent position due to the effects of light-time correction and aberration is known as planetary aberration. By convention, light-time correction is not applied to the positions of stars, because their motion and distance may not be known accurately. Calculation A calculation of light-time correction usually involves an iterative process. An approximate light-time is calculated by dividing the object's geometric distance from Earth by the speed of light. Then the object's velocity is multiplied by this approximate light-time to determine its approximate displacement through space during that time. Its previous position is used to calculate a more precise light-time. This process is repeated as necessary. For planetary motions, a few (3–5) iterations are sufficient to match the accuracy of the underlying ephemerides. Discovery The effect of the finite speed of light on observations of celestial objects was first recognised by Ole Rømer in 1675, during a series of observations of eclipses of the moons of Jupiter. He found that the interval between eclipses was less when Earth and Jupiter are approaching each other, and more when they are moving away from each other. He correctly deduced that this difference was caused by the appreciable time it took for light to travel from Jupiter to the observer on Earth. References P. Kenneth Seidelmann (ed.), Explanatory Supplement to the Astronomical Almanac (Mill Valley, Calif., University Science Books, 1992), 23, 393. Arthur Berry, A Short History of Astronomy (John Murray, 1898 – republished by Dover, 1961), 258–265. Astrometry Time
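The iterative procedure described under Calculation can be condensed into a few lines of code. The following Python fragment is an illustrative toy model rather than an ephemeris routine: it assumes the target moves in a straight line at constant velocity over the light-time, and all numbers are invented for the example.

```python
# Toy illustration of iterative light-time correction.
# Assumes straight-line, constant-velocity motion of the target over the light-time.

C = 299_792_458.0  # speed of light, m/s


def light_time_corrected_position(observer, target_pos, target_vel, iterations=4):
    """Return the target's retarded (apparent) position and the light-time.

    observer, target_pos: 3-vectors in metres (positions at the observation instant)
    target_vel: 3-vector in m/s, assumed constant over the light-time
    """
    def sub(a, b):
        return [ai - bi for ai, bi in zip(a, b)]

    def norm(v):
        return sum(x * x for x in v) ** 0.5

    tau = norm(sub(target_pos, observer)) / C   # first guess: geometric distance / c
    for _ in range(iterations):                 # 3-5 passes usually suffice
        retarded = [p - v * tau for p, v in zip(target_pos, target_vel)]
        tau = norm(sub(retarded, observer)) / C
    return retarded, tau


# Example with made-up numbers roughly at Jupiter's distance (~7.8e11 m):
pos, tau = light_time_corrected_position(
    observer=[0.0, 0.0, 0.0],
    target_pos=[7.8e11, 0.0, 0.0],
    target_vel=[0.0, 1.3e4, 0.0],
)
print(f"light-time ≈ {tau/60:.1f} minutes")  # on the order of 43 minutes
```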
Light-time correction
[ "Physics", "Astronomy", "Mathematics" ]
540
[ "Physical quantities", "Time", "Time stubs", "Quantity", "Astrometry", "Spacetime", "Wikipedia categories named after physical quantities", "Astronomical sub-disciplines" ]
587,173
https://en.wikipedia.org/wiki/List%20of%20house%20styles
This list of house styles lists styles of vernacular architecture – i.e., outside any academic tradition – used in the design of houses. African Asian South American Mediterranean, Spanish, Italian Neoclassical Elizabethan and Tudor Colonial French and Canadian Victorian and Queen Anne American Indian Central and Eastern European Modern and Post-modern See also List of architectural styles References House styles, List of
List of house styles
[ "Engineering" ]
75
[ "Design-related lists", "Design" ]
587,247
https://en.wikipedia.org/wiki/Classical%20planet
A classical planet is an astronomical object that is visible to the naked eye and moves across the sky and its backdrop of fixed stars (the common stars which seem still in contrast to the planets). Visible to humans on Earth, there are seven classical planets (the seven luminaries). They are, from brightest to dimmest: the Sun, the Moon, Venus, Jupiter, Mars, Mercury and Saturn. Greek astronomers such as Geminus and Ptolemy recorded these classical planets during classical antiquity, introducing the term planet, which means 'wanderer' in Greek, expressing the fact that these objects move across the celestial sphere relative to the fixed stars. The Greeks were therefore the first to document the astrological connections attributed to the planets' visible behaviour. Through the use of telescopes, other celestial objects like the classical planets were found, starting with the Galilean moons in 1610. Today the term planet is used considerably differently, with a planet being defined as a body directly orbiting the Sun (or another star) that has cleared its own orbit. Therefore, only five of the seven classical planets remain recognized as planets, alongside Earth, Uranus, and Neptune. History Babylonian The Babylonians recognized seven planets. A bilingual list in the British Museum records the seven Babylonian planets in the following order: The Moon, Sin. The Sun, Shamash. Jupiter, Merodach. Venus, Ishtar. Saturn, Ninip. Mercury, Nebo. Mars, Nergal. Mandaean In Mandaeism, the names of the seven planets are derived from the seven Babylonian planets. Overall, the seven classical planets (collectively, the "Seven Planets") are generally not viewed favorably in Mandaeism, since they constitute part of the entourage of Ruha, the Queen of the World of Darkness who is also their mother. However, individually, some of the planets can be associated with positive qualities. The names of the seven planets in Mandaic are borrowed from Akkadian. Some of the names are ultimately derived from Sumerian, since Akkadian had borrowed many deity names from Sumerian. Each planet is said to be carried in a ship. Drawings of these ships are found in various Mandaean scriptures, such as the Scroll of Abatur. The planets are listed according to the traditional Mandaean order of the planets as mentioned in Masco (2012). Symbols The astrological symbols for the classical planets appear in the medieval Byzantine codices in which many ancient horoscopes were preserved. In the original papyri of these Greek horoscopes, there are found a circle with one ray for the Sun and a crescent for the Moon. The written symbols for Mercury, Venus, Jupiter, and Saturn have been traced to forms found in late Greek papyri. The symbols for Jupiter and Saturn are identified as monograms of the initial letters of the corresponding Greek names, and the symbol for Mercury is a stylized caduceus. A. S. D. Maunder finds antecedents of the planetary symbols in earlier sources, used to represent the gods associated with the classical planets. Bianchini's planisphere, produced in the 2nd century, shows Greek personifications of planetary gods charged with early versions of the planetary symbols: Mercury has a caduceus; Venus has, attached to her necklace, a cord connected to another necklace; Mars, a spear; Jupiter, a staff; Saturn, a scythe; the Sun, a circlet with rays radiating from it; and the Moon, a headdress with a crescent attached. 
A diagram in Johannes Kamateros' 12th century Compendium of Astrology shows the Sun represented by the circle with a ray, Jupiter by the letter zeta (the initial of Zeus, Jupiter's counterpart in Greek mythology), Mars by a shield crossed by a spear, and the remaining classical planets by symbols resembling the modern ones, without the cross-mark seen in modern versions of the symbols. The modern Sun symbol, pictured as a circle with a dot (☉), first appeared in the Renaissance. Planetary hours The Ptolemaic system used in ancient Greek astronomy placed the planets by order of proximity to Earth in the then-current geocentric model, closest to furthest, as the Moon, Mercury, Venus, Sun, Mars, Jupiter, and Saturn. In addition, the day was divided into seven-hour intervals, each ruled by one of the planets, although the order was staggered (see below). The first hour of each day was named after the ruling planet, giving rise to the names and order of the Roman seven-day week. Modern Latin-based cultures, in general, directly inherited the days of the week from the Romans, and they were named after the classical planets; for example, in Spanish miércoles is Mercury's day, and in French mardi is Mars's day. The modern English days of the week were mostly inherited from gods of the old Germanic Norse culture – Wednesday is Wōden's-day (Wōden or Wettin, equivalent to Mercury), Thursday is Thor's-day (Thor, equivalent to Jupiter), Friday is Frige-day (Frige, equivalent to Venus). The equivalence here is by the gods' roles; for instance, Venus and Frige were both goddesses of love. The Norse gods were matched to the Roman planets and their gods by role, probably as a result of Roman influence rather than by coincidence in the naming of the planets. A vestige of the Roman convention remains in the English name Saturday. Alchemy In alchemy, each classical planet (Moon, Mercury, Venus, Sun, Mars, Jupiter, and Saturn) was associated with one of the seven metals known to the classical world (silver, mercury/quicksilver, copper, gold, iron, tin and lead respectively). As a result, the alchemical glyphs for the metal and associated planet coincide. Alchemists believed the other elemental metals were variants of these seven (e.g. zinc was known as "Indian tin" or "mock silver"). Alchemy in the Western World and other locations where it was widely practiced was (and in many cases still is) allied and intertwined with traditional Babylonian-Greek style astrology; in numerous ways they were built to complement each other in the search for hidden knowledge (knowledge that is not common, i.e. the occult). Astrology has used the concept of classical elements from antiquity up to the present day. Most modern astrologers use the four classical elements extensively, and indeed they are still viewed as a critical part of interpreting the astrological chart. Traditionally, each of the seven planets in the Solar System as known to the ancients was associated with, held dominion over, and "ruled" a certain metal. The list of rulership is as follows: the Sun rules gold (☉); the Moon, silver (☽); Mercury, quicksilver/mercury (☿); Venus, copper (♀); Mars, iron (♂); Jupiter, tin (♃); and Saturn, lead (♄). Some alchemists (e.g. Paracelsus) adopted the Hermetic Qabalah assignment between the vital organs and the planets. Contemporary astrology Western astrology Indian astrology Indian astronomy and astrology (jyotiṣa) recognises seven visible planets (including the Sun and Moon) and two additional invisible planets (tamo'graha): Rahu and Ketu. 
Naked-eye planets Mercury and Venus are visible only in twilight hours because their orbits are interior to that of Earth. Venus is the third-brightest object in the sky and the most prominent planet. Mercury is more difficult to see due to its proximity to the Sun. Lengthy twilight and an extremely low angle at maximum elongations make optical filters necessary to see Mercury from extreme polar locations. Mars is at its brightest when it is in opposition, which occurs approximately every twenty-five months. Jupiter and Saturn are the largest of the five planets, but are farther from the Sun, and therefore receive less sunlight. Nonetheless, Jupiter is often the next brightest object in the sky after Venus. Saturn's luminosity is often enhanced by its rings, which reflect light to varying degrees, depending on their inclination to the ecliptic; however, the rings themselves are not visible to the naked eye from the Earth. See also Antikythera mechanism Behenian fixed star List of former planets Monas Hieroglyphica of John Dee Olympian spirits Worship of heavenly bodies Wufang Shangdi References Further reading External links Chronology of Solar System Discovery Ancient astronomy Planets of the Solar System Solar System
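The hour-counting scheme behind the Planetary hours section above can be reproduced with a few lines of arithmetic. The sketch below assumes the conventional reconstruction of that scheme: 24 hours per day assigned in rotation through the seven planets ordered from farthest to closest (Saturn, Jupiter, Mars, Sun, Venus, Mercury, Moon); since 24 mod 7 = 3, the ruler of the first hour advances three places each day, which yields the familiar weekday order. This is a standard reconstruction offered for illustration, not a quotation from the article.

```python
# Weekday order derived from planetary hours (standard reconstruction).
# Planets in descending (farthest-to-closest) geocentric order:
chaldean_order = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

def weekday_rulers(first_day_ruler="Saturn", days=7):
    """Return the planet ruling the first hour of each successive day."""
    start = chaldean_order.index(first_day_ruler)
    rulers = []
    for day in range(days):
        # 24 hours per day, so each new day advances 24 mod 7 = 3 planets.
        rulers.append(chaldean_order[(start + 24 * day) % 7])
    return rulers

print(weekday_rulers())
# ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']
# i.e. Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday, Friday
```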
Classical planet
[ "Astronomy" ]
1,776
[ "Ancient astronomy", "Outer space", "Solar System", "History of astronomy" ]
587,271
https://en.wikipedia.org/wiki/Torsion%20spring
A torsion spring is a spring that works by twisting its end along its axis; that is, a flexible elastic object that stores mechanical energy when it is twisted. When it is twisted, it exerts a torque in the opposite direction, proportional to the amount (angle) it is twisted. There are various types: A torsion bar is a straight bar of metal or rubber that is subjected to twisting (shear stress) about its axis by torque applied at its ends. A more delicate form used in sensitive instruments, called a torsion fiber, consists of a fiber of silk, glass, or quartz under tension that is twisted about its axis. A helical torsion spring is a metal rod or wire in the shape of a helix (coil) that is subjected to twisting about the axis of the coil by sideways forces (bending moments) applied to its ends, twisting the coil tighter. Clocks use a spiral-wound torsion spring (a form of helical torsion spring where the coils are around each other instead of piled up) sometimes called a "clock spring" or colloquially called a mainspring. Those types of torsion springs are also used for attic stairs, clutches, typewriters and other devices that need near-constant torque for large angles or even multiple revolutions. Torsion, bending Torsion bars and torsion fibers do work by torsion. However, the terminology can be confusing because in a helical torsion spring (including a clock spring), the forces acting on the wire are actually bending stresses, not torsional (shear) stresses. A helical torsion spring actually works by torsion when it is bent (not twisted). We will use the word "torsion" in the following for a torsion spring according to the definition given above, whether the material it is made of actually works by torsion or by bending. Torsion coefficient As long as they are not twisted beyond their elastic limit, torsion springs obey an angular form of Hooke's law, τ = −κθ, where τ is the torque exerted by the spring in newton-meters, θ is the angle of twist from its equilibrium position in radians, and κ is a constant with units of newton-meters/radian, variously called the spring's torsion coefficient, torsion elastic modulus, rate, or just spring constant, equal to the change in torque required to twist the spring through an angle of 1 radian. The torsion constant may be calculated from the geometry and various material properties. It is analogous to the spring constant of a linear spring. The negative sign indicates that the direction of the torque is opposite to the direction of twist. The energy U, in joules, stored in a torsion spring twisted through an angle θ is U = ½κθ². Uses Some familiar examples of uses are the strong, helical torsion springs that operate clothespins and traditional spring-loaded-bar type mousetraps. Other uses are in the large, coiled torsion springs used to counterbalance the weight of garage doors, and a similar system is used to assist in opening the trunk (boot) cover on some sedans. Small, coiled torsion springs are often used to operate pop-up doors found on small consumer goods like digital cameras and compact disc players. Other more specific uses: A torsion bar suspension is a thick, steel torsion-bar spring attached to the body of a vehicle at one end and to a lever arm which attaches to the axle of the wheel at the other. It absorbs road shocks as the wheel goes over bumps and rough road surfaces, cushioning the ride for the passengers. Torsion-bar suspensions are used in many modern cars and trucks, as well as military vehicles. 
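As a quick worked illustration of the angular Hooke's law and energy formula above, the following Python snippet evaluates the restoring torque and stored energy for an assumed torsion coefficient; the numbers are invented for the example.

```python
import math

# Illustrative numbers only: a stiff-ish coil spring twisted a quarter turn.
kappa = 2.5                      # torsion coefficient, N·m/rad (assumed)
theta = math.radians(90)         # twist angle: 90 degrees, in radians

torque = -kappa * theta          # angular Hooke's law: tau = -kappa * theta
energy = 0.5 * kappa * theta**2  # stored energy: U = 1/2 * kappa * theta^2

print(f"restoring torque = {torque:.2f} N·m")   # ≈ -3.93 N·m
print(f"stored energy    = {energy:.2f} J")     # ≈ 3.08 J
```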
The sway bar used in many vehicle suspension systems also uses the torsion spring principle. The torsion pendulum used in torsion pendulum clocks is a wheel-shaped weight suspended from its center by a wire torsion spring. The weight rotates about the axis of the spring, twisting it, instead of swinging like an ordinary pendulum. The force of the spring reverses the direction of rotation, so the wheel oscillates back and forth, driven at the top by the clock's gears. Torsion springs consisting of twisted ropes or sinew, were used to store potential energy to power several types of ancient weapons; including the Greek ballista and the Roman scorpio and catapults like the onager. The balance spring or hairspring in mechanical watches is a fine, spiral-shaped torsion spring that pushes the balance wheel back toward its center position as it rotates back and forth. The balance wheel and spring function similarly to the torsion pendulum above in keeping time for the watch. The D'Arsonval movement used in mechanical pointer-type meters to measure electric current is a type of torsion balance (see below). A coil of wire attached to the pointer twists in a magnetic field against the resistance of a torsion spring. Hooke's law ensures that the angle of the pointer is proportional to the current. A DMD or digital micromirror device chip is at the heart of many video projectors. It uses hundreds of thousands of tiny mirrors on tiny torsion springs fabricated on a silicon surface to reflect light onto the screen, forming the image. Badge tether Torsion balance The torsion balance, also called torsion pendulum, is a scientific apparatus for measuring very weak forces, usually credited to Charles-Augustin de Coulomb, who invented it in 1777, but independently invented by John Michell sometime before 1783. Its most well-known uses were by Coulomb to measure the electrostatic force between charges to establish Coulomb's Law, and by Henry Cavendish in 1798 in the Cavendish experiment to measure the gravitational force between two masses to calculate the density of the Earth, leading later to a value for the gravitational constant. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. If an unknown force is applied at right angles to the ends of the bar, the bar will rotate, twisting the fiber, until it reaches an equilibrium where the twisting force or torque of the fiber balances the applied force. Then the magnitude of the force is proportional to the angle of the bar. The sensitivity of the instrument comes from the weak spring constant of the fiber, so a very weak force causes a large rotation of the bar. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls. Determining the force for different charges and different separations between the balls, he showed that it followed an inverse-square proportionality law, now known as Coulomb's law. To measure the unknown force, the spring constant of the torsion fiber must first be known. 
This is difficult to measure directly because of the smallness of the force. Cavendish accomplished this by a method widely used since: measuring the resonant vibration period of the balance. If the free balance is twisted and released, it will oscillate slowly clockwise and counterclockwise as a harmonic oscillator, at a frequency that depends on the moment of inertia of the beam and the elasticity of the fiber. Since the inertia of the beam can be found from its mass, the spring constant can be calculated. Coulomb first developed the theory of torsion fibers and the torsion balance in his 1785 memoir, Recherches theoriques et experimentales sur la force de torsion et sur l'elasticite des fils de metal &c. This led to its use in other scientific instruments, such as galvanometers, and the Nichols radiometer which measured the radiation pressure of light. In the early 1900s gravitational torsion balances were used in petroleum prospecting. Today torsion balances are still used in physics experiments. In 1987, gravity researcher A. H. Cook wrote: "The most important advance in experiments on gravitation and other delicate measurements was the introduction of the torsion balance by Michell and its use by Cavendish. It has been the basis of all the most significant experiments on gravitation ever since." In the Eötvös experiment, a torsion balance was used to prove the equivalence principle, the idea that inertial mass and gravitational mass are one and the same. Torsional harmonic oscillators Torsion balances, torsion pendulums and balance wheels are examples of torsional harmonic oscillators that can oscillate with a rotational motion about the axis of the torsion spring, clockwise and counterclockwise, in harmonic motion. Their behavior is analogous to translational spring-mass oscillators (see Harmonic oscillator Equivalent systems). The general differential equation of motion is I(d²θ/dt²) + C(dθ/dt) + κθ = τ(t), where I is the moment of inertia about the axis, C is the damping coefficient, κ is the torsion coefficient, and τ(t) is any driving torque. If the damping is small, C² ≪ 4Iκ, as is the case with torsion pendulums and balance wheels, the frequency of vibration is very near the natural resonant frequency of the system, f_n = (1/2π)√(κ/I). Therefore, the period is represented by T_n = 1/f_n = 2π√(I/κ). The general solution in the case of no drive force (τ = 0), called the transient solution, is θ(t) = A e^(−αt) cos(ωt + φ), where α = C/(2I) is the damping rate, ω = √(ω_n² − α²) with ω_n = 2πf_n, and the amplitude A and phase φ are set by the initial conditions. Applications The balance wheel of a mechanical watch is a harmonic oscillator whose resonant frequency sets the rate of the watch. The resonant frequency is regulated, first coarsely by adjusting with weight screws set radially into the rim of the wheel, and then more finely by adjusting with a regulating lever that changes the length of the balance spring. In a torsion balance the drive torque is constant and equal to the unknown force to be measured, F, times the moment arm of the balance beam, L, so τ = FL. When the oscillatory motion of the balance dies out, the deflection will be proportional to the force: θ = FL/κ. To determine F it is necessary to find the torsion spring constant κ. If the damping is low, this can be obtained by measuring the natural resonant frequency of the balance, since the moment of inertia of the balance can usually be calculated from its geometry, so κ = I(2πf_n)². In measuring instruments, such as the D'Arsonval ammeter movement, it is often desired that the oscillatory motion die out quickly so the steady state result can be read off. This is accomplished by adding damping to the system, often by attaching a vane that rotates in a fluid such as air or water (this is why magnetic compasses are filled with fluid). 
The value of damping that causes the oscillatory motion to settle quickest is called the critical damping: C_c = 2√(κI). See also Beam (structure) Slinky, helical toy spring References Bibliography Detailed account of Coulomb's experiment. Shows pictures of the Coulomb torsion balance, and describes Coulomb's contributions to torsion technology. Describes the Nichols radiometer. Description of how torsion balances were used in petroleum prospecting, with pictures of a 1902 instrument. External links Torsion balance interactive java tutorial Torsion spring calculator Big G measurement, description of 1999 Cavendish experiment at Univ. of Washington, showing torsion balance [link broken] How torsion balances were used in petroleum prospecting (web archive link) Mechanics of torsion springs. Web archive link, accessed December 8, 2016. Solved mechanics problems involving springs (springs in series and in parallel) Milestones in the History of Springs Articles containing video clips Pendulums Springs (mechanical) Torque
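The torsion-balance procedure described above (find κ from the free oscillation period, then read the unknown force off the steady deflection) can be condensed into a short calculation. The Python sketch below uses invented numbers of very roughly Cavendish-experiment scale; it is illustrative only, not a reproduction of any historical data.

```python
import math

# Assumed, illustrative values for a small torsion balance.
I = 1.2e-4        # moment of inertia of the beam, kg·m^2 (from its geometry and mass)
T = 480.0         # measured period of free oscillation, s
L = 0.10          # moment arm of the beam, m
theta = 2.0e-4    # measured steady-state deflection, rad

# Spring constant from the resonance:  kappa = I * (2*pi/T)^2
kappa = I * (2 * math.pi / T) ** 2

# Unknown force from the equilibrium of torques:  F*L = kappa*theta
F = kappa * theta / L

C_c = 2 * math.sqrt(kappa * I)   # critical damping, for reference

print(f"kappa ≈ {kappa:.3e} N·m/rad")
print(f"F     ≈ {F:.3e} N")
print(f"C_c   ≈ {C_c:.3e} N·m·s/rad")
```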
Torsion spring
[ "Physics" ]
2,432
[ "Wikipedia categories named after physical quantities", "Force", "Physical quantities", "Torque" ]
587,339
https://en.wikipedia.org/wiki/Circuit%20diagram
A circuit diagram (or: wiring diagram, electrical diagram, elementary diagram, electronic schematic) is a graphical representation of an electrical circuit. A pictorial circuit diagram uses simple images of components, while a schematic diagram shows the components and interconnections of the circuit using standardized symbolic representations. The presentation of the interconnections between circuit components in the schematic diagram does not necessarily correspond to the physical arrangements in the finished device. Unlike a block diagram or layout diagram, a circuit diagram shows the actual electrical connections. A drawing meant to depict the physical arrangement of the wires and the components they connect is called artwork or layout, physical design, or wiring diagram. Circuit diagrams are used for the design (circuit design), construction (such as PCB layout), and maintenance of electrical and electronic equipment. In computer science, circuit diagrams are useful when visualizing expressions using Boolean algebra. Symbols Circuit diagrams are pictures with symbols that have differed from country to country and have changed over time, but are now to a large extent internationally standardized. Simple components often had symbols intended to represent some feature of the physical construction of the device. For example, the symbol for a resistor dates back to the time when that component was made from a long piece of wire wrapped in such a manner as to not produce inductance, which would have made it a coil. These wirewound resistors are now used only in high-power applications, smaller resistors being cast from carbon composition (a mixture of carbon and filler) or fabricated as an insulating tube or chip coated with a metal film. The internationally standardized symbol for a resistor is therefore now simplified to an oblong, sometimes with the value in ohms written inside, instead of the zig-zag symbol. A less common symbol is simply a series of peaks on one side of the line representing the conductor, rather than back-and-forth. The linkages between leads were once simple crossings of lines. With the arrival of computerized drafting, the connection of two intersecting wires was shown by a crossing of wires with a "dot" or "blob" to indicate a connection. At the same time, the crossover was simplified to be the same crossing, but without a "dot". However, there was a danger of confusing the wires that were connected and not connected in this manner, if the dot was drawn too small or accidentally omitted (e.g. the "dot" could disappear after several passes through a copy machine). As such, the modern practice for representing a 4-way wire connection is to draw a straight wire and then to draw the other wires staggered along it with "dots" as connections (see diagram), so as to form two separate T-junctions that brook no confusion and are clearly not a crossover. For crossing wires that are insulated from one another, a small semi-circle symbol is commonly used to show one wire "jumping over" the other wire (similar to how jumper wires are used). A common, hybrid style of drawing combines the T-junction crossovers with "dot" connections and the wire "jump" semi-circle symbols for insulated crossings. In this manner, a "dot" that is too small to see or that has accidentally disappeared can still be clearly differentiated from a "jump". On a circuit diagram, the symbols for components are labelled with a descriptor or reference designator matching that on the list of parts. 
For example, C1 is the first capacitor, L1 is the first inductor, Q1 is the first transistor, and R1 is the first resistor. Often the value or type designation of the component is given on the diagram beside the part, but detailed specifications would go on the parts list. Detailed rules for reference designations are provided in the International standard IEC 61346. Organization It is a usual (although not universal) convention that schematic drawings are organized on the page from left to right and top to bottom in the same sequence as the flow of the main signal or power path. For example, a schematic for a radio receiver might start with the antenna input at the left of the page and end with the loudspeaker at the right. Positive power supply connections for each stage would be shown towards the top of the page, with grounds, negative supplies, or other return paths towards the bottom. Schematic drawings intended for maintenance may have the principal signal paths highlighted to assist in understanding the signal flow through the circuit. More complex devices have multi-page schematics and must rely on cross-reference symbols to show the flow of signals between the different sheets of the drawing. Detailed rules for the preparation of circuit diagrams, and other document types used in electrotechnology, are provided in the international standard IEC 61082-1. Circuit diagrams are often drawn with the same standardized title block and frame as other engineering drawings. Relay logic line diagrams, also called ladder logic diagrams, use another common standardized convention for organizing schematic drawings, with a vertical power supply rail on the left and another on the right, and components strung between them like the rungs of a ladder. Artwork Once the schematic has been made, it is converted into a layout that can be fabricated onto a printed circuit board (PCB). Schematic-driven layout starts with the process of schematic capture. The result is what is known as a rat's nest. The rat's nest is a jumble of wires (lines) criss-crossing each other to their destination nodes. These wires are routed either manually or automatically by the use of electronics design automation (EDA) tools. The EDA tools arrange and rearrange the placement of components and find paths for tracks to connect various nodes. This results in the final layout artwork for the integrated circuit or printed circuit board. A generalized design flow may be as follows: Schematic → schematic capture → netlist → rat's nest → routing → artwork → PCB development and etching → component mounting → testing Education Teaching about the functioning of electrical circuits is often on primary and secondary school curricula. Students are expected to understand the rudiments of circuit diagrams and their functioning. The use of diagrammatic representations of circuits can aid understanding of the principles of electricity. Principles of the physics of circuit diagrams are often taught with the use of analogies, such as comparing the functioning of circuits to other closed systems such as water heating systems with pumps being the equivalent of batteries. See also Boxology Circuit design language Electronic symbol Logic gate One-line diagram Pinout Schematic capture Schematic editor References External links Electrical diagrams Electronic design Diagrams
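To make the netlist → rat's nest step of the design flow concrete, here is a small illustrative Python sketch. It models a netlist as a mapping from net names to component pins and emits naive point-to-point "rat's nest" connections for a router to clean up later. The component names and nets are invented, and real EDA tools of course use far richer data structures.

```python
# Toy netlist for a single-transistor stage; names are invented for illustration.
# Each net maps to the (reference designator, pin) pairs it connects.
netlist = {
    "VCC": [("R1", 1), ("Q1", "collector")],
    "IN": [("C1", 1)],
    "BASE": [("C1", 2), ("R2", 1), ("Q1", "base")],
    "GND": [("R2", 2), ("Q1", "emitter")],
}


def rats_nest(nets):
    """Yield naive point-to-point connections (the 'rat's nest') for each net."""
    for net, pins in nets.items():
        # Chain the pins together; a real autorouter would optimise this.
        for a, b in zip(pins, pins[1:]):
            yield net, a, b


for net, a, b in rats_nest(netlist):
    print(f"{net}: {a[0]}.{a[1]} -- {b[0]}.{b[1]}")
# e.g.  VCC: R1.1 -- Q1.collector
```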
Circuit diagram
[ "Engineering" ]
1,362
[ "Electronic design", "Electronic engineering", "Design", "Electrical diagrams" ]
587,654
https://en.wikipedia.org/wiki/Speculum%20metal
Speculum metal is a mixture of around two-thirds copper and one-third tin, making a white brittle alloy that can be polished to make a highly reflective surface. It was used historically to make different kinds of mirrors from personal grooming aids to optical devices until it was replaced by more modern materials such as metal-coated glass mirrors. Speculum metal mixtures usually contain two parts copper to one part tin along with a small amount of arsenic, although there are other mixtures containing silver, lead, or zinc. This is about twice the proportion of tin to copper typically used in bronze alloys. Archaeologists and others prefer to call it "high-tin bronze", although this broad term is also used for other alloys such as bell metal, which is typically around 20% tin. Large speculum metal mirrors are hard to manufacture, and the alloy is prone to tarnish, requiring frequent re-polishing. However, it was the only practical choice for large mirrors in high-precision optical equipment between the mid-17th and mid-19th centuries, before the invention of glass silvering. Speculum metal was noted for its use in the metal mirrors of reflecting telescopes, and famous examples of its use were Newton's telescope, the Leviathan of Parsonstown, and William Herschel's telescope used to discover the planet Uranus. A major difficulty with its use in telescopes is that the mirrors could not reflect as much light as modern mirrors and would tarnish rapidly. Early history The knowledge of making very hard white high luster metal out of bronze-type high-tin alloys may date back more than 2000 years in China, although it could also be an invention of western civilizations. Remarks in Pliny the Elder may refer to it. It was certainly in use by the European Middle Ages, giving better reflectivity than the usual bronze mirrors, and tarnishing more slowly. However, tin was expensive, and the composition of the alloy had to be controlled precisely. Confusingly, mirrors made of speculum metal were known at the time, and often later, as "steel mirrors", although they had no steel in them. It was not suitable for "cold-working" techniques such as repoussé and chasing, being much too hard, but worked well if cast into small objects, and was also used for "Dark Age belt fittings, buckles, brooches" and similar small items, giving an attractive silver-white colouring. Use in telescopes Speculum metal found an application in early modern Europe as the only known good reflecting surface for mirrors in reflecting telescopes. In contrast to household mirrors, where the reflecting metal layer is coated on the back of a glass pane and covered with a protective varnish, precision optical equipment like telescopes needs first surface mirrors that can be ground and polished into complex shapes such as parabolic reflectors. For nearly 200 years speculum metal was the only mirror substance that could perform this task. One of the earliest designs, James Gregory’s Gregorian telescope could not be built because Gregory could not find a craftsman capable of fabricating the complex speculum mirrors needed for the design. Isaac Newton was the first to successfully build a reflecting telescope in 1668. His first reflecting telescope (a design which came to be known as a Newtonian reflector) had a 33-mm (1.3-inch) diameter speculum metal primary mirror of his own formulation. 
Newton was likewise confronted with the problem of fabricating the complex parabolic shape needed to create the image, but simply settled on a spherical shape. The composition of speculum metal was further refined and went on to be used in the 1700s and 1800s in many designs of reflecting telescopes. The ideal composition was around 68.21% copper to 31.7% tin; more copper made the metal more yellow, more tin made the metal more blue in color. Ratios with up to 45% tin were used for resistance to tarnishing. Although speculum metal mirror reflecting telescopes could be built very large, such as William Herschel's 126-cm (49.5-inch) "40-foot telescope" of 1789 and Lord Rosse 183-cm (72-inch) mirror of his "Leviathan of Parsonstown" of 1845, impracticalities in using the metal made most astronomers prefer their smaller refracting telescope counterparts. Speculum metal was very hard to cast and shape. It only reflected 66% of the light that hit it. Speculum also had the unfortunate property of tarnishing in open air with a sensitivity to humidity, requiring constant re-polishing to maintain its usefulness. This meant the telescope mirrors had to be constantly removed, polished, and re-figured to the correct shape. This sometimes proved difficult, with some mirrors having to be abandoned. It also required that two or more mirrors had to be fabricated for each telescope so that one could be used while the other was being polished. Rapidly cooling night-time air would cause stresses in large speculum metal mirrors, distorting their shape and causing them to produce poor images. Lord Rosse had a system of adjustable levers on his 72-inch metal mirror so he could adjust the shape when it was unreliable at producing an acceptable image. In 1856–57 an improvement over speculum mirrors was invented when Karl August von Steinheil and Léon Foucault introduced the process of depositing an ultra-thin layer of silver on the front surface (first surface) of a ground block of glass. Silvered glass mirrors were a vast improvement, since silver reflects 90% of the light that hits it and is much slower to tarnish than speculum. Silver coatings can also be removed from the glass, so a tarnished mirror could be resilvered without changing the delicate precision-polished shape of the glass substrate. Glass is also more thermally stable than speculum metal, allowing it to hold its shape better through temperature changes. This marked the end of the speculum-mirror reflecting telescope, with the last large one, the Great Melbourne Telescope with its 122 cm (48-inch) mirror, being completed in 1867. The era of the large glass-mirror reflector had begun, with telescopes such as Andrew Ainslie Common's 1879 36-inch (91 cm) and 1887 60-inch (152 cm) reflectors built at Ealing, and the first of the "modern" large glass-mirror research reflectors, 60-inch (150 cm) Mount Wilson Observatory Hale Telescope of 1908, the 100-inch (2.5 m) Mount Wilson Hooker telescope in 1917 and the 200-inch (5 m) Mount Palomar Hale Telescope in 1948. See also Liquid-mirror telescope List of largest optical telescopes in the 19th century List of largest optical telescopes in the 18th century References Meeks, Nigel, "Patination phenomena on Roman and Chinese bronze mirrors and other artefacts", in Metal Plating and Patination: Cultural, Technical and Historical Developments, ed. 
Susan La-Niece, 2013, Elsevier, ISBN 9781483292069, google books External links National Pollutant Inventory — Copper and compounds fact sheet Bronze Copper alloys Optical materials
Speculum metal
[ "Physics", "Chemistry" ]
1,454
[ "Copper alloys", "Materials", "Optical materials", "Alloys", "Matter" ]
587,678
https://en.wikipedia.org/wiki/Nerve%20complex
In topology, the nerve complex of a set family is an abstract complex that records the pattern of intersections between the sets in the family. It was introduced by Pavel Alexandrov and now has many variants and generalisations, among them the Čech nerve of a cover, which in turn is generalised by hypercoverings. It captures many of the interesting topological properties in an algorithmic or combinatorial way. Basic definition Let I be a set of indices and C be a family of sets (Ui)i∈I. The nerve of C is a set of finite subsets of the index set I. It contains all finite subsets J ⊆ I such that the intersection of the Ui whose subindices are in J is non-empty: N(C) = { J ⊆ I finite : ⋂i∈J Ui ≠ ∅ }. In Alexandrov's original definition, the sets Ui are open subsets of some topological space X. The set N(C) may contain singletons (elements i such that Ui is non-empty), pairs (pairs of elements i, j such that Ui ∩ Uj ≠ ∅), triplets, and so on. If J ∈ N(C), then any subset of J is also in N(C), making N(C) an abstract simplicial complex. Hence N(C) is often called the nerve complex of C. Examples Let X be the circle and C = {U1, U2}, where U1 is an arc covering the upper half of X and U2 is an arc covering its lower half, with some overlap at both sides (they must overlap at both sides in order to cover all of X). Then N(C) = { {1}, {2}, {1,2} }, which is an abstract 1-simplex. Let X be the circle and C = {U1, U2, U3}, where each Ui is an arc covering one third of X, with some overlap with the adjacent arcs. Then N(C) = { {1}, {2}, {3}, {1,2}, {2,3}, {1,3} }. Note that {1,2,3} is not in N(C) since the common intersection of all three sets is empty; so N(C) is an unfilled triangle. The Čech nerve Given an open cover C = {Ui : i ∈ I} of a topological space X, or more generally a cover in a site, we can consider the pairwise fibre products Uij = Ui ×X Uj, which in the case of a topological space are precisely the intersections Ui ∩ Uj. The collection of all such intersections can be referred to as C ×X C and the triple intersections as C ×X C ×X C. By considering the natural maps Uij → Ui and Ui → Uii, we can construct a simplicial object S(C) defined by S(C)n = C ×X ⋯ ×X C, the n-fold fibre product. This is the Čech nerve. By taking connected components we get a simplicial set, which we can realise topologically: |S(π0(C))|. Nerve theorems The nerve complex N(C) is a simple combinatorial object. Often, it is much simpler than the underlying topological space (the union of the sets in C). Therefore, a natural question is whether the topology of N(C) is equivalent to the topology of ⋃C. In general, this need not be the case. For example, one can cover any n-sphere with two contractible sets U1 and U2 that have a non-empty intersection, as in example 1 above. In this case, N(C) is an abstract 1-simplex, which is similar to a line but not to a sphere. However, in some cases N(C) does reflect the topology of X. For example, if a circle is covered by three open arcs, intersecting in pairs as in Example 2 above, then N(C) is a 2-simplex (without its interior) and it is homotopy-equivalent to the original circle. A nerve theorem (or nerve lemma) is a theorem that gives sufficient conditions on C guaranteeing that N(C) reflects, in some sense, the topology of ⋃C. A functorial nerve theorem is a nerve theorem that is functorial in an appropriate sense, which is, for example, crucial in topological data analysis. Leray's nerve theorem The basic nerve theorem of Jean Leray says that, if any intersection of sets in C is contractible (equivalently: for each finite J ⊆ I the set ⋂i∈J Ui is either empty or contractible; equivalently: C is a good open cover), then N(C) is homotopy-equivalent to ⋃C. Borsuk's nerve theorem There is a discrete version, which is attributed to Borsuk. Let K1,...,Kn be abstract simplicial complexes, and denote their union by K. 
Let Ui = ||Ki|| = the geometric realization of Ki, and denote the nerve of {U1, ... , Un } by N. If, for each nonempty J ⊆ {1, ..., n}, the intersection ⋂i∈J Ui is either empty or contractible, then N is homotopy-equivalent to K. A stronger theorem was proved by Anders Björner: if, for each nonempty J ⊆ {1, ..., n}, the intersection ⋂i∈J Ui is either empty or (k−|J|+1)-connected, then for every j ≤ k, the j-th homotopy group of N is isomorphic to the j-th homotopy group of K. In particular, N is k-connected if-and-only-if K is k-connected. Čech nerve theorem Another nerve theorem relates to the Čech nerve above: if X is compact and all intersections of sets in C are contractible or empty, then the space |S(π0(C))| is homotopy-equivalent to X. Homological nerve theorem The following nerve theorem uses the homology groups of intersections of sets in the cover. For each finite J ⊆ I, denote by HJ,j the j-th reduced homology group of ⋂i∈J Ui. If HJ,j is the trivial group for all J in the k-skeleton of N(C) and for all j in {0, ..., k−dim(J)}, then N(C) is "homology-equivalent" to X in the following sense: the j-th reduced homology groups of N(C) and X are isomorphic for all j in {0, ..., k}; and if the (k+1)-th reduced homology group of N(C) is non-trivial, then so is that of X. See also Hypercovering References Topology Simplicial sets Families of sets
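The basic definition lends itself to a direct computation when the family is finite. The following Python sketch enumerates the nerve of a small family of sets; the cover here is a made-up finite stand-in for the three-arc cover of the circle in Example 2, chosen so that the sets intersect pairwise but have no common element.

```python
from itertools import combinations

# A finite stand-in for three arcs covering a circle: pairwise overlaps,
# but no element common to all three (invented for illustration).
cover = {
    1: {"a", "b", "c"},
    2: {"c", "d", "e"},
    3: {"e", "f", "a"},
}


def nerve(sets):
    """Return the nerve: all non-empty index subsets with non-empty intersection."""
    simplices = []
    indices = sorted(sets)
    for r in range(1, len(indices) + 1):
        for J in combinations(indices, r):
            common = set.intersection(*(sets[i] for i in J))
            if common:
                simplices.append(J)
    return simplices


print(nerve(cover))
# [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]  -- an unfilled triangle,
# matching N(C) for the three-arc cover described above.
```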
Nerve complex
[ "Physics", "Mathematics" ]
1,121
[ "Combinatorics", "Basic concepts in set theory", "Topology", "Space", "Families of sets", "Simplicial sets", "Geometry", "Spacetime" ]
587,698
https://en.wikipedia.org/wiki/Secure%20copy%20protocol
Secure copy protocol (SCP) is a means of securely transferring computer files between a local host and a remote host or between two remote hosts. It is based on the Secure Shell (SSH) protocol. "SCP" commonly refers to both the Secure Copy Protocol and the program itself. According to OpenSSH developers in April 2019, SCP is outdated, inflexible and not readily fixed; they recommend the use of more modern protocols like SFTP and rsync for file transfer. As of OpenSSH version 9.0, scp client therefore uses SFTP for file transfers by default instead of the legacy SCP/RCP protocol. Secure Copy Protocol The SCP is a network protocol, based on the BSD RCP protocol, which supports file transfers between hosts on a network. SCP uses Secure Shell (SSH) for data transfer and uses the same mechanisms for authentication, thereby ensuring the authenticity and confidentiality of the data in transit. A client can send (upload) files to a server, optionally including their basic attributes (permissions, timestamps). Clients can also request files or directories from a server (download). SCP runs over TCP port 22 by default. Like RCP, there is no RFC that defines the specifics of the protocol. Function Normally, a client initiates an SSH connection to the remote host, and requests an SCP process to be started on the remote server. The remote SCP process can operate in one of two modes: source mode, which reads files (usually from disk) and sends them back to the client, or sink mode, which accepts the files sent by the client and writes them (usually to disk) on the remote host. For most SCP clients, source mode is generally triggered with the -f flag (from), while sink mode is triggered with -t (to). These flags are used internally and are not documented outside the SCP source code. Remote to remote mode In the past, in remote-to-remote secure copy, the SCP client opens an SSH connection to the source host and requests that it, in turn, open an SCP connection to the destination. (Remote-to-remote mode did not support opening two SCP connections and using the originating client as an intermediary). SCP cannot be used to remotely copy from the source to the destination when operating in password or keyboard-interactive authentication mode, as this would reveal the destination server's authentication credentials to the source. It is, however, possible with key-based or GSSAPI methods that do not require user input. Recently, remote-to-remote mode supports routing traffic through the client which originated the transfer, even though it is a 3rd party to the transfer. This way, authorization credentials must reside only on the originating client, the 3rd party. Issues using talkative shell profiles SCP does not expect text communicating with the SSH login shell. Text transmitted due to the SSH profile (e.g. echo "Welcome" in the .bashrc file) is interpreted as an error message, and a null line (echo "") causes SCP client to deadlock waiting for the error message to complete. scp program The SCP program is a software tool implementing the SCP protocol as a service daemon or client. It is a program to perform secure copying. Perhaps the most widely used SCP program is the OpenSSH command line scp program, which is provided in most SSH implementations. The scp program is the secure analog of the rcp command. The scp program must be part of all SSH servers that want to provide SCP service, as scp functions as SCP server too. 
Since OpenSSH 9.0, the program has been updated to use the newer, more secure SFTP protocol; an -O option is added for using SCP with old SCP-only servers. Syntax Typically, the syntax of the scp program is like the syntax of cp (copy): Copying a local file to a remote host: scp LocalSourceFile user@remotehost:directory/TargetFile Copying a file from a remote host, and recursively copying a folder (with the -r switch) from a remote host: scp user@remotehost:directory/SourceFile LocalTargetFile scp -r user@host:directory/SourceFolder LocalTargetFolder Note that if the remote host uses a port other than the default of 22, it can be specified in the command. For example, copying a file from host: scp -P 2222 user@host:directory/SourceFile TargetFile Other clients As the Secure Copy Protocol implements file transfers only, GUI SCP clients are rare, as implementing one requires additional functionality (directory listing at least). For example, WinSCP defaults to the SFTP protocol. Even when operating in SCP mode, clients like WinSCP are typically not pure SCP clients, as they must use other means to implement the additional functionality (like the ls command). This in turn brings platform-dependency problems. More comprehensive tools for managing files over SSH are SFTP clients. Security In 2019, a vulnerability was announced related to the OpenSSH SCP tool and protocol, allowing a malicious server to overwrite arbitrary files in the SCP client's target directory. See also References Cryptographic software Cryptographic protocols Network file transfer protocols
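As a small illustration of the remote-to-remote mode routed through the originating client (described earlier), the sketch below shells out to the OpenSSH scp client with its -3 option from Python. The host names and paths are placeholders, and the snippet assumes an OpenSSH scp binary is on the PATH with non-interactive (key-based) authentication to both hosts already configured.

```python
import subprocess

# Placeholder hosts and paths; assumes OpenSSH's scp is installed and that
# key-based (non-interactive) authentication to both remote hosts is set up.
source = "alice@host1.example.org:reports/summary.csv"
destination = "bob@host2.example.org:incoming/"

# -3 routes the remote-to-remote copy through this machine, so only the
# local client needs credentials for both ends (see discussion above).
result = subprocess.run(
    ["scp", "-3", source, destination],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print("copy failed:", result.stderr.strip())
else:
    print("copy completed")
```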
Secure copy protocol
[ "Mathematics", "Technology" ]
1,111
[ "Windows commands", "Cryptographic software", "Computing commands", "Mathematical software" ]
587,884
https://en.wikipedia.org/wiki/UMAX%20Technologies
UMAX Technologies (), originally known as UMAX Computer Corporation, is a manufacturer of computer products, including scanners, mice, and flash drives, based in Taiwan. The company also uses the Yamada and Vaova brand names. History UMAX was formerly a maker of Apple Macintosh clones, using the SuperMac brand name outside of Europe. Their models included the SuperMac S900/S910, J700, C500 and C500e/i/LT, C600e/v/LT/x and Aegis 200. The C500 was marketed as the Apus 2000 in Europe. After Steve Jobs returned to Apple as the new CEO, he revoked all of the clone producers' licenses to produce Mac clones except for UMAX, due to their sub-US$1,000 low-end offerings, a market in which Apple was not strong, and UMAX's stated desire to expand the Macintosh platform's presence in East Asian markets. UMAX's license for Mac OS 8 expired in July 1998. UMAX could not remain profitable selling only these systems, however; it briefly made IBM PC compatible computers in the mid-1990s, but since then UMAX has mainly concentrated on manufacturing scanners. In 1995, UMAX was the leading Taiwanese scanner maker, with a market share of 13% second worldwide behind Hewlett-Packard (HP). This continued to be the case throughout 1996. According to PC Data figures, in 1997 UMAX briefly overtook HP in some monthly sales. According to the same source however, by 1999 UMAX was being "eclipsed" by HP whose scanner market share doubled that year from 13% to 26%. In some markets with high price-sensitivity like India for example, UMAX continued to have a slight lead on HP throughout 1999–2000 with the two companies claiming 44% and 40%, respectively, of the scanner sales in this country; 85% of which were for products costing less than 10,000 Rs.) By 2003, HP and Canon were dominating the world's flatbed scanner market, "accounting for a combined unit market share of 81 per cent." In 2002, UMAX started to charge its US customers for driver upgrades for its scanners—a practice that soon proved controversial. Until their exit from the desktop scanner market in 2002, Heidelberger Druckmaschinen used UMAX as its OEM for these products. UMAX also made a 1.3 megapixel digital camera called the AstraPix 490. It is capable of recording video clips, functioning as a webcam and can even be used to listen to music encoded in MP3 format. Scanners Astra 610S and 1200S; these were cloned and/or repackaged (OEM'd) for many other manufacturers Astra 1220P Astra 2000U Astra 2100U Astra 2400S, NCR 53C80 SCSI/Intel 8031 based 600x1200dpi Astra 3450 Astra 4900 Astra 4950 Astra 5600 Astra 6700 AstraSlim AstraSlim SE PowerLook 1000 PowerLook 1120 PowerLook 2100XL PowerLook 180 PowerLook 270 Scanner software UMAX offers some semi-free (in the sense that some versions/updates cost money and some do not) basic scanner software for Microsoft Windows (up to Windows XP) and Mac OS: VistaScan is their basic TWAIN scanner module, which also contains Windows Image Acquisition (WIA) drivers in its newer versions. It features a simpler interface compared to MagicScan. However, not all versions work with all products. In general, VistaScan versions after 3.55 no longer support SCSI scanners. The German site of UMAX has a (bilingual) webpage/wizard that helps the user select the proper version for their scanner. MagicScan is the higher-end version of VistaScan, with a user interface aimed at more experienced users; it did not ship with the cheaper scanners (Astra, etc.) but only with the higher-end (PowerLook) scanners. 
It does however work with many of the cheaper UMAX scanners. Versions after 4.71 no longer ship with SCSI drivers. Additionally, UMAX offers more sophisticated (typically non-free) third-party photo scanning/correction software: binuscan PhotoPerfect, which is a standalone application and has a plug-in for MagicScan; PhotoPerfect is also bundled with high-end scanners and sold separately for others SilverFast is compatible with many UMAX scanner (especially the SCSI ones); its entry-level (SE) version is shipped with some newer UMAX scanners. and offered separately for others For optical character recognition, some UMAX scanners came bundled with OmniPage and others with ABBYY FineReader. The Unix SANE software generally supports well the UMAX SCSI scanners, with varying degrees of support for the other ones (USB, FireWire, parallel). See also List of companies of Taiwan References External links UMAX Technologies SuperMac Insider The Unofficial SuperMac Support Site All Umax SuperMac Mac Clones (at EveryMac.com) Technology companies established in 1987 Electronics companies established in 1987 Manufacturing companies established in 1987 1987 establishments in Taiwan Companies based in Taipei Computer hardware companies Computer memory companies Computer peripheral companies Electronics companies of Taiwan Taiwanese brands Macintosh clones
UMAX Technologies
[ "Technology" ]
1,111
[ "Computer hardware companies", "Computers" ]
587,970
https://en.wikipedia.org/wiki/Clopidogrel
Clopidogrel, sold under the brand name Plavix among others, is an antiplatelet medication used to reduce the risk of heart disease and stroke in those at high risk. It is also used together with aspirin in heart attacks and following the placement of a coronary artery stent (dual antiplatelet therapy). It is taken by mouth. Its effect starts about two hours after intake and lasts for five days. Common side effects include headache, nausea, easy bruising, itching, and heartburn. More severe side effects include bleeding and thrombotic thrombocytopenic purpura. While there is no evidence of harm from use during pregnancy, such use has not been well studied. Clopidogrel is in the thienopyridine-class of antiplatelets. It works by irreversibly inhibiting a receptor called P2Y12 on platelets. Clopidogrel was patented in 1982, and approved for medical use in 1997. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 47th most commonly prescribed medication in the United States, with more than 13million prescriptions. It is available as a generic medication. Medical uses Clopidogrel is used to prevent heart attack and stroke in people who are at high risk of these events, including those with a history of myocardial infarction and other forms of acute coronary syndrome, stroke, and those with peripheral artery disease. Treatment with clopidogrel or a related drug is recommended by the American Heart Association and the American College of Cardiology for people who: Present for treatment with a myocardial infarction with ST-elevation including A loading dose given in advance of percutaneous coronary intervention (PCI), followed by a full year of treatment for those receiving a vascular stent A loading dose given in advance of fibrinolytic therapy, continued for at least 14 days Present for treatment of a non-ST elevation myocardial infarction or unstable angina Including a loading dose and maintenance therapy in those receiving PCI and unable to tolerate aspirin therapy Maintenance therapy for up to 12 months in those at medium to high risk for which a noninvasive treatment strategy is chosen In those with stable ischemic heart disease, treatment with clopidogrel is described as a "reasonable" option for monotherapy in those who cannot tolerate aspirin, as is treatment with clopidogrel in combination with aspirin in certain high risk patients. It is also used, along with acetylsalicylic acid (ASA, aspirin), for the prevention of thrombosis after placement of a coronary stent or as an alternative antiplatelet drug for people intolerant to aspirin. It is available as a fixed-dose combination with aspirin. A meta-analysis found clopidogrel's benefit as an antiplatelet drug in reducing cardiovascular death, myocardial infarction, and stroke to be 25% benefit in smokers, with little (8%) benefit in non-smokers. Consensus-based therapeutic guidelines also recommend the use of clopidogrel rather than aspirin (ASA) for antiplatelet therapy in people with a history of gastric ulceration, as inhibition of the synthesis of prostaglandins by ASA can exacerbate this condition. In people with healed ASA-induced ulcers, however, those receiving ASA plus the proton-pump inhibitor (PPI) esomeprazole had a lower incidence of recurrent ulcer bleeding than those receiving clopidogrel. 
However, prophylaxis with proton-pump inhibitors along with clopidogrel following acute coronary syndrome may increase adverse cardiac outcomes, possibly due to inhibition of CYP2C19, which is required for the conversion of clopidogrel to its active form. The European Medicines Agency has issued a public statement on a possible interaction between clopidogrel and proton-pump inhibitors. However, several cardiologists have voiced concern that the studies on which these warnings are based have many limitations and that it is not certain whether an interaction between clopidogrel and proton-pump inhibitors is real. Adverse effects Serious adverse drug reactions associated with clopidogrel therapy include: Thrombotic thrombocytopenic purpura (incidence: four per million patients treated) Hemorrhage – the annual incidence of hemorrhage may be increased by the coadministration of aspirin. In the CURE trial, people with acute coronary syndrome without ST elevation were treated with aspirin plus either clopidogrel or placebo and followed for up to one year. The following rates of major bleed were seen: Any major bleeding: clopidogrel 3.7%, placebo 2.7% Life-threatening bleeding: clopidogrel 2.2%, placebo 1.8% Hemorrhagic stroke: clopidogrel 0.1%, placebo 0.1% The CAPRIE trial compared clopidogrel monotherapy to aspirin monotherapy for 1.6 years in people who had recently experienced a stroke or heart attack. In this trial the following rates of bleeding were observed. Gastrointestinal hemorrhage: clopidogrel 2.0%, aspirin 2.7% Intracranial bleeding: clopidogrel 0.4%, aspirin 0.5% In CAPRIE, itching was the only adverse effect seen more frequently with clopidogrel than aspirin. In CURE, there was no difference in the rate of non-bleeding adverse events. Rashes and itching were uncommon in studies (between 0.1 and 1% of people); serious hypersensitivity reactions are rare. Interactions Clopidogrel generally has a low potential to interact with other pharmaceutical drugs. Combination with other drugs that affect blood clotting, such as aspirin, heparins and thrombolytics, showed no relevant interactions. Naproxen did increase the likelihood of occult gastrointestinal bleeding, as might be the case with other nonsteroidal anti-inflammatory drugs. As clopidogrel is metabolized by the liver enzyme CYP2C19, in cellular models it has been theorized that it might increase blood plasma levels of other drugs that are metabolized by this enzyme, such as phenytoin and tolbutamide. Clinical studies showed that this mechanism is irrelevant for practical purposes. In November 2009, the US Food and Drug Administration (FDA) announced that clopidogrel should be used with caution in people using the proton-pump inhibitors omeprazole or esomeprazole, but pantoprazole appears to be safe. The newer antiplatelet agent prasugrel has minimal interaction with , hence might be a better antiplatelet agent (if no other contraindications are present) in people who are on these proton-pump inhibitors. Pharmacology Clopidogrel is a prodrug which is metabolized by the liver into its active form. The active form specifically and irreversibly inhibits the P2Y12 subtype of ADP receptor, which is important in activation of platelets and eventual cross-linking by the protein fibrin. 
Pharmacokinetics and metabolism After repeated oral doses of 75 mg of clopidogrel (base), plasma concentrations of the parent compound, which has no platelet-inhibiting effect, are very low and, in general, are below the quantification limit (0.258 μg/L) beyond two hours after dosing. Clopidogrel is a prodrug, which is activated in two steps, first by the enzymes CYP2C19, CYP1A2, and CYP2B6, then by CYP2C19, CYP2C9, CYP2B6, and CYP3A. The thiophene ring is converted to a thiolactone, which undergoes ring-opening. The active metabolite has three sites that are stereochemically relevant, making a total of eight possible isomers. These are: a stereocentre at C4 (attached to the —SH thiol group), a double bond at C3—C16, and the original stereocentre at C7. Only one of the eight structures is an active antiplatelet drug. This has the following configuration: Z configuration at the C3—C16 double bond, the original S configuration at C7, and, although the stereocentre at C4 cannot be directly determined, as the thiol group is too reactive, work with the active metabolite of the related drug prasugrel suggests the R-configuration of the C4 group is critical for P2Y12 and platelet-inhibitory activity. The active metabolite has an elimination half-life of about 0.5 to 1.0 h, and acts by forming a disulfide bridge with the platelet ADP receptor. Patients with a variant allele of CYP2C19 are 1.5 to 3.5 times more likely to die or have complications than patients with the high-functioning allele. Following an oral dose of 14C-labeled clopidogrel in humans, about 50% was excreted in the urine and 46% in the feces in the five days after dosing. Effect of food: Administration of clopidogrel bisulfate with meals did not significantly modify the bioavailability of clopidogrel as assessed by the pharmacokinetics of the main circulating metabolite. Absorption and distribution: Clopidogrel is rapidly absorbed after oral administration of repeated doses of 75-milligram clopidogrel (base), with peak plasma levels (about 3 mg/L) of the main circulating metabolite occurring around one hour after dosing. The pharmacokinetics of the main circulating metabolite are linear (plasma concentrations increased in proportion to dose) in the dose range of 50 to 150 mg of clopidogrel. Absorption is at least 50% based on urinary excretion of clopidogrel-related metabolites. Clopidogrel and the main circulating metabolite bind reversibly in vitro to human plasma proteins (98% and 94%, respectively). The binding is not saturable in vitro up to a concentration of 110 μg/mL. Metabolism and elimination: In vitro and in vivo, clopidogrel undergoes rapid hydrolysis into its carboxylic acid derivative. In plasma and urine, the glucuronide of the carboxylic acid derivative is also observed. In 2010, the US Food and Drug Administration (FDA) added a boxed warning, later updated, to Plavix, alerting that the drug can be less effective in people unable to metabolize the drug to convert it to its active form. Pharmacogenetics CYP2C19 is an important drug-metabolizing enzyme that catalyzes the biotransformation of many clinically useful drugs, including antidepressants, barbiturates, proton-pump inhibitors, and antimalarial and antitumor drugs. Clopidogrel is one of the drugs metabolized by this enzyme. The US Food and Drug Administration (FDA) added a boxed warning on clopidogrel in 2010 about CYP2C19-poor metabolizers. 
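As a generic arithmetic illustration of the half-life figure quoted above, the C sketch below evaluates simple first-order elimination, C(t) = C0 · 0.5^(t / t_half). This is a textbook one-compartment decay formula used only to show how a 0.5–1.0 h half-life translates into remaining fractions over time; it is not a pharmacokinetic model specific to clopidogrel or its active metabolite, and all numbers other than the quoted half-life range are illustrative.

```c
#include <stdio.h>
#include <math.h>

/* Fraction of an initial concentration remaining after time t (hours),
 * assuming simple first-order elimination with half-life t_half (hours). */
static double fraction_remaining(double t, double t_half) {
    return pow(0.5, t / t_half);
}

int main(void) {
    const double half_lives[] = {0.5, 1.0};   /* the 0.5-1.0 h range quoted above */
    for (int i = 0; i < 2; i++) {
        double th = half_lives[i];
        printf("half-life %.1f h:", th);
        for (double t = 1.0; t <= 4.0; t += 1.0)
            printf("  %.0f h -> %4.1f%%", t, 100.0 * fraction_remaining(t, th));
        printf("\n");
    }
    return 0;
}
```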
People with variants in cytochrome P-450 2C19 (CYP2C19) have lower levels of the active metabolite of clopidogrel, less inhibition of platelets, and a 3.58-times greater risk for major adverse cardiovascular events such as death, heart attack, and stroke; the risk was greatest in CYP2C19 poor metabolizers. A published review showed that some mutations of CYP2C19, CYP3A4, CYP2C9, CYP2B6, and CYP1A2 genes could affect the clinical efficacy and safety of clopidogrel treatment. For instance, patients carrying the mutations CYP2C19*2, CYP2C19*3, CYP2C9*2, CYP2C9*3, and CYP2B6*5 alleles may not respond to clopidogrel due to poor platelet inhibition efficacy revealed among them. Mechanism of action The active metabolite of clopidogrel specifically and irreversibly inhibits the P2Y12 subtype of ADP receptor, which is important in activation of platelets and eventual cross-linking by the protein fibrin. Platelet inhibition can be demonstrated two hours after a single dose of oral clopidogrel, but the onset of action is slow, so a loading dose of either 600 or 300 mg is administered when a rapid effect is needed. Society and culture Economics Plavix is marketed worldwide in nearly 110 countries, with sales of in 2009. It was the second-top-selling drug in the world in 2007 and was still growing by over 20% in 2007. US sales were in 2008. Before the expiry of its patent, clopidogrel was the second best-selling drug in the world. In 2010, it grossed over in global sales. In 2006, generic clopidogrel was briefly marketed by Apotex, a Canadian generic pharmaceutical company before a court order halted further production until resolution of a patent infringement case brought by Bristol-Myers Squibb. The court ruled that Bristol-Myers Squibb's patent was valid and provided protection until November 2011. The FDA extended the patent protection of clopidogrel by six months, giving exclusivity that would expire in May 2012. The FDA approved generic versions of clopidogrel in May 2012. Names Generic clopidogrel is marketed by many companies worldwide under many brand names. 
, brands included Aclop, Actaclo, Agregex, Agrelan, Agrelax, Agreless, Agrelex, Agreplat, Anclog, Angiclod, Anplat, Antiagrex, Antiban, Antigrel, Antiplaq, Antiplar, Aplate, Apolets, Areplex, Artepid, Asogrel, Atelit, Atelit, Ateplax, Atervix, Atheros, Athorel, Atrombin, Attera, Bidogrel, Bigrel, Borgavix, Carder, Cardogrel, Carpigrel, Ceraenade, Ceruvin, Cidorix, Clatex, Clavix, Clentel, Clentel, Clidorel, Clodel, Clodelib, Clodian, Clodil, Cloflow, Clofre, Clogan, Clogin, Clognil, Clogrel, Clogrelhexal, Clolyse, Clont, Clood, Clopacin, Clopcare, Clopeno, Clopex Agrel, Clopez, Clopi, Clopid, Clopida, Clopidep, Clopidexcel, Clopidix, Clopidogrel, Clopidogrelum, Clopidomed, Clopidorex, Clopidosyn, Clopidoteg, Clopidowel, Clopidra, Clopidrax, Clopidrol, Clopigal, Clopigamma, Clopigrel, Clopilet, Clopimed, Clopimef, Clopimet, Clopinovo, Clopione, Clopiright, Clopirite, Clopirod, Clopisan, Clopistad, Clopistad, Clopitab, Clopithan, Clopitro, ClopiVale, Clopivas, Clopivaz, Clopivid, Clopivin, Clopix, Cloplat, Clopra, Cloprez, Cloprez, Clopval, Clorel, Cloriocard, Cloroden, Clotix, Clotiz, Clotrombix, Clova, Clovas, Clovax, Clovelen, Clovex, Clovexil, Clovix, Clovvix, Copalex, Copegrel, Copidrel, Copil, Cordiax, Cordix, Corplet, Cotol, CPG, Cugrel, Curovix, Dapixol, Darxa, Dasogrel-S, Dclot, Defrozyp, Degregan, Deplat, Deplatt, Diclop, Diloxol, Dilutix, Diporel, Doglix, Dogrel, Dogrel, Dopivix, Dorel, Dorell, Duopidogrel, DuoPlavin, Eago, Egitromb, Espelio, Eurogrel, Expansia, Farcet, Flucogrel, Fluxx, Freeclo, Globel, Glopenel, Grelet, Greligen, Grelix, Grepid, Grepid, Grindokline, Heart-Free, Hemaflow, Hyvix, Idiavix, Insigrel, Iscover, Iskimil, Kafidogran, Kaldera, Kardogrel, Karum, Kerberan, Keriten, Klepisal, Klogrel, Klopide, Klopidex, Klopidogrel, Klopik, Klopis, Kogrel, Krossiler, Larvin, Lodigrel, Lodovax, Lofradyk, Lopigalel, Lopirel, Lyvelsa, Maboclop, Medigrel, Miflexin, Mistro, Mogrel, Monel, Monogrel, Moytor, Myogrel, Nabratin, Nadenel, Nefazan, Niaclop, Nivenol, Noclog, Nofardom, Nogreg, Nogrel, Noklot, Norplat, Novigrel, Oddoral, Odrel, Olfovel, Opirel, Optigrel, Panagrel, Pedovex, Pegorel, Piax, Piclokare, Pidgrel, Pidogrel, Pidogul, Pidovix, Pigrel, Pingel, Placta, Pladel, Pladex, Pladogrel, Plagerine, Plagrel, Plagril, Plagrin, Plahasan, Plamed, Planor, PlaquEx, Plasiver, Plataca, Platarex, Platec, Platel, Platelex, Platexan, Platil, Platless, Platogrix, Platrel, Plavedamol, Plavicard, Plavictonal, Plavidosa, Plavigrel, Plavihex, Plavitor, Plavix, Plavocorin, Plavogrel, Plavos, Pleyar, Plogrel, Plvix, Pravidel, Pregrel, Provic, Psygrel, Q.O.L, Ravalgen, Replet, Respekt, Revlis, Ridlor, Roclas, Rozak, Sanvix, Sarix, Sarovex, Satoxi, Shinclop, Sigmagrel, Simclovix, Sintiplex, Stazex, Stroka, Stromix, Sudroc, Synetra, Talcom, Tansix, Tessyron, Thinrin, Throimper, Thrombifree, Thrombo, Timiflo, Tingreks, Torpido, Triosal, Trogran, Troken, Trombex, Trombix, Tuxedon, Unigrel, Unplaque, Vaclo, Vasocor, Vatoud, Venicil, Vidogrel, Vivelon, Vixam, Xydrel, Zakogrel, Zillt, Zopya, Zylagren, Zyllt, and Zystol. 
, it was marketed as a combination drug with acetylsalicylic acid (aspirin) under the brand names Anclog Plus, Antiban-ASP, Asclop, Asogrel-A, Aspin-Plus, Cargrel-A, Clas, Clasprin, Clavixin Duo, Clodrel Forte, Clodrel Plus, Clofre AS, Clognil Plus, Clontas, Clopid-AS, Clopid-AS, Clopida A, Clopil-A, Clopirad-A, Clopirin, Clopitab-A, Clorel-A, Clouds, Combiplat, Coplavix, Coplavix, Cugrel-A, Dorel Plus, DuoCover, DuoCover, DuoPlavin, DuoPlavin, Ecosprin Plus, Grelet-A, Lopirel Plus, Myogrel-AP, Noclog Plus, Noklot Plus, Norplat-S, Odrel Plus, Pidogul A, Pladex-A, Plagerine-A, Plagrin Plus, Plavix Plus, Replet Plus, Stromix-A, and Thrombosprin. Veterinary uses Clopidogrel has been shown to be effective at decreasing platelet aggregation in cats, so its use in prevention of feline aortic thromboembolism has been advocated. References Further reading External links US Patent US4847265A for "Dextro-rotatory enantiomer of methyl alpha-5 (4,5,6,7-tetrahydro (3,2-c) thieno pyridyl) (2-chlorophenyl)-acetate and the pharmaceutical compositions containing it" Adenosine diphosphate receptor inhibitors Drugs developed by Bristol Myers Squibb Carboxylate esters 2-Chlorophenyl compounds CYP2C8 inhibitors Hepatotoxins Methyl esters Prodrugs Sanofi Thienopyridines World Health Organization essential medicines Wikipedia medicine articles ready to translate
Clopidogrel
[ "Chemistry" ]
4,818
[ "Chemicals in medicine", "Prodrugs" ]
588,001
https://en.wikipedia.org/wiki/Waste%20treatment
Waste treatment refers to the activities required to ensure that waste has the least practicable impact on the environment. In many countries various forms of waste treatment are required by law. Solid waste treatment The treatment of solid wastes is a key component of waste management. Different forms of solid waste treatment are graded in the waste hierarchy. Waste water treatment Agricultural waste water treatment Agricultural wastewater treatment is treatment and disposal of liquid animal waste, pesticide residues etc. from agriculture. Industrial wastewater treatment Industrial wastewater treatment is the treatment of wet wastes from factories, mines, power plants and other commercial facilities. Sewage treatment Sewage treatment is the treatment and disposal of human waste. Sewage is produced by all human communities. Treatment in urbanized areas is typically handled by centralized treatment systems. Alternative systems may use composting processes or processes that separate solid materials by settlement and then convert soluble contaminants into biological sludge and into gases such as carbon dioxide or methane. Radioactive waste treatment Radioactive waste treatment is the treatment and containment of radioactive waste. References Waste management Waste treatment technology
Waste treatment
[ "Chemistry", "Engineering" ]
212
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
588,004
https://en.wikipedia.org/wiki/Wastewater%20treatment
Wastewater treatment is a process which removes contaminants from wastewater, converting it into an effluent that can be returned to the water cycle with acceptable impact on the environment, or reused. Reuse of treated wastewater is called water reclamation. The treatment process takes place in a wastewater treatment plant. There are several kinds of wastewater, each treated at the appropriate type of wastewater treatment plant. For domestic wastewater (also called municipal wastewater or sewage), the treatment plant is called a sewage treatment plant. Industrial wastewater is treated either in a separate industrial wastewater treatment plant or in a sewage treatment plant, in the latter case usually after pre-treatment. Further types of wastewater treatment plants include agricultural wastewater treatment plants and leachate treatment plants. Common processes in wastewater treatment include phase separation (such as sedimentation), biological and chemical processes (such as oxidation), and polishing. The main by-product from wastewater treatment plants is a type of sludge that is usually treated in the same or another wastewater treatment plant. Biogas can be another by-product if the process uses anaerobic treatment. Treated wastewater can be reused as reclaimed water. The main purpose of wastewater treatment is to allow the treated wastewater to be disposed of or reused safely. However, before it is treated, the options for disposal or reuse must be considered so the correct treatment process is applied to the wastewater. The term "wastewater treatment" is often used to mean "sewage treatment". Types of treatment plants Wastewater treatment plants may be distinguished by the type of wastewater to be treated. There are numerous processes that can be used to treat wastewater depending on the type and extent of contamination. The treatment steps include physical, chemical and biological treatment processes. Types of wastewater treatment plants include: Sewage treatment plants Industrial wastewater treatment plants Agricultural wastewater treatment plants Leachate treatment plants Leachate treatment plants are used to treat leachate from landfills. Treatment options include: biological treatment, mechanical treatment by ultrafiltration, treatment with active carbon filters, electrochemical treatment including electrocoagulation by various proprietary technologies, and reverse osmosis membrane filtration using disc tube module technology. Unit processes The unit processes involved in wastewater treatment include physical processes such as settlement or flotation and biological processes such as oxidation or anaerobic treatment. Some wastewaters require specialized treatment methods. At the simplest level, treatment of most wastewaters is carried out through separation of solids from liquids, usually by sedimentation. By progressively converting dissolved material into solids, usually a biological floc or biofilm, which is then settled out or separated, an effluent stream of increasing purity is produced. Phase separation Phase separation transfers impurities into a non-aqueous phase. Phase separation may occur at intermediate points in a treatment sequence to remove solids generated during oxidation or polishing. 
Grease and oil may be recovered for fuel or saponification. Solids often require dewatering of sludge in a wastewater treatment plant. Disposal options for dried solids vary with the type and concentration of impurities removed from water. Sedimentation Solids such as stones, grit, and sand may be removed from wastewater by gravity when density differences are sufficient to overcome dispersion by turbulence. This is typically achieved using a grit channel designed to produce an optimum flow rate that allows grit to settle and other less-dense solids to be carried forward to the next treatment stage. Gravity separation of solids is the primary treatment of sewage, where the unit process is called "primary settling tanks" or "primary sedimentation tanks". It is also widely used for the treatment of other types of wastewater. Solids that are denser than water will accumulate at the bottom of quiescent settling basins. More complex clarifiers also have skimmers to simultaneously remove floating grease such as soap scum and solids such as feathers, wood chips, or condoms. Containers like the API oil-water separator are specifically designed to separate non-polar liquids. Biological and chemical processes Oxidation Oxidation reduces the biochemical oxygen demand of wastewater, and may reduce the toxicity of some impurities. Secondary treatment converts organic compounds into carbon dioxide, water, and biosolids through oxidation and reduction reactions. Chemical oxidation is widely used for disinfection. Biochemical oxidation (secondary treatment) Chemical oxidation Advanced oxidation processes are used to remove some persistent organic pollutants and concentrations remaining after biochemical oxidation. Disinfection by chemical oxidation kills bacteria and microbial pathogens by adding oxidants such as ozone, chlorine or hypochlorite to wastewater. These oxidants, and the hydroxyl radicals they can generate, then break down complex compounds in the organic pollutants into simple compounds such as water, carbon dioxide, and salts. Anaerobic treatment Anaerobic wastewater treatment processes (for example UASB, EGSB) are also widely applied in the treatment of industrial wastewaters and biological sludge. Polishing Polishing refers to treatments made in further advanced treatment steps after the above methods (also called "fourth stage" treatment). These treatments may also be used independently for some industrial wastewater. Chemical reduction or pH adjustment minimizes the chemical reactivity of wastewater following chemical oxidation. Carbon filtering removes remaining contaminants and impurities by adsorption onto activated carbon. Filtration through sand or fabric filters is the most common method used in municipal wastewater treatment. See also List of largest wastewater treatment plants List of wastewater treatment technologies Water treatment References External links Sanitation Water pollution Environmental engineering
Wastewater treatment
[ "Chemistry", "Engineering", "Environmental_science" ]
1,148
[ "Chemical engineering", "Water pollution", "Civil engineering", "Environmental engineering" ]
588,168
https://en.wikipedia.org/wiki/SM%20EVM
SM EVM (СМ ЭВМ, abbreviation of Система Малых ЭВМ—literally System of Mini Computers) are several types of Soviet and Comecon minicomputers produced from 1975 through the 1980s. Most types of SM EVM are clones of DEC PDP-11 and VAX. SM-1 and SM-2 are clones of Hewlett-Packard minicomputers. The common operating systems for the PDP-11 clones are translated versions of RSX-11 (ОС РВ) for the higher spec models and RT-11 (РАФОС, ФОДОС) for lower spec models. Also available for the high-end PDP-11 clones is MOS, a clone of UNIX. See also SM-4 SM-1420 SM-1600 SM-1710 SM-1720 References Computer-related introductions in 1975 Minicomputers Soviet computer systems PDP-11
SM EVM
[ "Technology" ]
202
[ "Computer systems", "Soviet computer systems" ]
588,193
https://en.wikipedia.org/wiki/MOS%20%28operating%20system%29
Mobile Operating System (MOS; ) is an operating system, a Soviet clone of Unix from the 1980s. Overview This operating system is commonly found on SM EVM minicomputers; it was also ported to ES EVM and Elbrus. MOS is also used by high-end PDP-11 clones. Modifications of MOS include MNOS, DEMOS, , etc. See also List of Soviet computer systems References Unix variants Computing in the Soviet Union
MOS (operating system)
[ "Technology" ]
96
[ "Operating system stubs", "Computing stubs", "Computing in the Soviet Union", "History of computing" ]
588,196
https://en.wikipedia.org/wiki/Landscape%20design
Landscape design is an independent profession and a design and art tradition, practiced by landscape designers, combining nature and culture. In contemporary practice, landscape design bridges the space between landscape architecture and garden design. Design scope Landscape design focuses on both the integrated master landscape planning of a property and the specific garden design of landscape elements and plants within it. The practical, aesthetic, horticultural, and environmental sustainability are also components of landscape design, which is often divided into hardscape design and softscape design. Landscape designers often collaborate with related disciplines such as architecture, civil engineering, surveying, landscape contracting, and artisan specialties. Design projects may involve two different professional roles: landscape design and landscape architecture. Landscape design typically involves artistic composition and artisanship, horticultural finesse and expertise, and emphasis on detailed site involvement from conceptual stages through to final construction. Landscape architecture focuses more on urban planning, city and regional parks, civic and corporate landscapes, large scale interdisciplinary projects, and delegation to contractors after completing designs. There can be a significant overlap of talent and skill between the two roles, depending on the education, licensing, and experience of the professional. Both landscape designers and landscape architects practice landscape design. Design approach The landscape design phase consists of research, gathering ideas, and setting a plan. Design factors include objective qualities such as: climate and microclimates; topography and orientation, site drainage and groundwater recharge; municipal and resource building codes; soils and irrigation; human and vehicular access and circulation; recreational amenities (i.e., sports and water); furnishings and lighting; native plant habitat botany when present; property safety and security; construction detailing; and other measurable considerations. Design factors also include subjective qualities such as genius loci (the special site qualities to emphasize); client's needs and preferences; desirable plants and elements to retain on site, modify, or replace, and that may be available for borrowed scenery from beyond; artistic composition from perspectives of both looking upon and observing from within; spatial development and definition – using lines, sense of scale, and balance and symmetry; plant palettes; and artistic focal points for enjoyment. There are innumerable other design factors and considerations brought to the complex process of designing a garden that is beautiful, well-functioning, and that thrives over time. The up-and-coming practice of online landscape design allows professional landscapers to remotely design and plan sites through manipulation of two-dimensional images without ever physically visiting the location. Due to the frequent lack of non-visual, supplementary data such as soil assessments and pH tests, online landscaping necessarily must focus on incorporating only plants which are tolerant across many diverse soil conditions. Training Historically, landscape designers trained by apprenticing—such as André Le Nôtre, who apprenticed with his father before designing the Gardens of Versailles—to accomplished masters in the field, with the titular name varying and reputation paramount for a career. 
The professional section of garden designers in Europe and the Americas went by the name "Landscape Gardener". In the 1890s, the distinct classification of landscape architect was created, with educational and licensing test requirements for using the title legally. Beatrix Farrand, the sole woman in the founding group, refused the title preferring Landscape Gardener. Matching the client and technical needs of a project, and the appropriate practitioner with talent, legal qualifications, and experienced skills, surmounts title nomenclature. Institutional education in landscape design appeared in the early 20th century. Over time it became available at various levels. Ornamental horticulture programs with design components are offered at community college and universities within schools of agriculture or horticulture, with some beginning to offer garden or landscape design certificates and degrees. Departments of landscape architecture are located within university schools of architecture or environmental design, with undergraduate and graduate degrees offered. Specialties and minors are available in horticultural botany, horticulture, natural resources, landscape engineering, construction management, fine and applied arts, and landscape design history. Traditionally, hand-drawn drawings documented the design and position of features for construction, but Landscape design software is frequently used now. Other routes of training are through informal apprenticeships with practicing landscape designers, landscape architects, landscape contractors, gardeners, nurseries and garden centers, and docent programs at botanical and public gardens. Since the landscape designer title does not have a college degree or licensing requirements to be used, there is a very wide range of sophistication, aesthetic talent, technical expertise, and specialty strengths to be responsibly matched with specific client and project requirements. Gardening Many landscape designers have an interest and involvement with gardening, personally or professionally. Gardens are dynamic and not static after construction and planting are completed, and so in some ways are "never done". Involvement with landscape management and direction of the ongoing garden direction, evolution, and care depend on the professional's and client's needs and inclinations. As with the other interrelated landscape disciplines, there can be an overlap of services offered under the titles of landscape designer or professional gardener. See also Landscape design software Concrete landscape curbing Landscape assessment Landscape planning Space in landscape design References Environmental design Garden design
Landscape design
[ "Engineering" ]
1,037
[ "Environmental design", "Design", "Landscape architecture", "Architecture" ]
588,255
https://en.wikipedia.org/wiki/Green%20gross%20domestic%20product
The green gross domestic product (green GDP or GGDP) is an index of economic growth with the environmental consequences of that growth factored into a country's conventional GDP. Green GDP monetizes the loss of biodiversity, and accounts for costs caused by climate change. Some environmental experts prefer physical indicators (such as "waste per capita" or "carbon dioxide emissions per year"), which may be aggregated to indices such as the "Sustainable Development Index". Calculation Formula The environmental and related social costs of developing the economy are taken into consideration when calculating the green GDP, which can be expressed as: Green GDP = GDP − Environmental Costs − Social Costs where the environmental costs typically include: Depletion value of natural resources, e.g. oil, coal, natural gas, wood, and metals; Degradation cost of the ecological environment, e.g. underground water pollution, topsoil erosion, and extinction of wildlife; Restoration cost of natural resources, e.g. waste recycling, wetland restoration, and afforestation; and the social costs typically include: Poverty caused by degradation of the environment, e.g. shortage of natural resources after exploitation; Extra healthcare expenditure associated with the degradation of the ecological environment. The above calculations can also be applied to net domestic product (NDP), which deducts the depreciation of produced capital from GDP. Valuation methodology For environmental indicators to be expressed in the national accounts, the resource activity must be converted into a monetary value. A common valuation procedure, proposed by the United Nations in its System of Integrated Environmental and Economic Accounting handbook, applies the following steps: If current values of resources are non-existent or non-explicit, the next option is to value the resource based upon the present value of expected net returns from future commercial use. That is, the sum of present values for future expected income minus expected future expenditures (the cash flow CF), for each future time point (t), is termed the net present value (NPV). Rationale The motivation for creating a green GDP originates from the inherent limitations of GDP as an indicator of economic performance and social progress. GDP assesses gross output alone, without identifying the wealth and assets that underlie output. GDP does not account for significant or permanent depletion, or replenishment, of these assets. Ultimately, GDP has no capacity to identify whether the level of income generated in a country is sustainable. Richard Stone, one of the creators of the original GDP index, suggested that, while "the three pillars on which an analysis of society ought to rest are studies of economic, socio-demographic, and environmental phenomenon", he had done little work in the area of environmental issues. Natural capital is poorly represented in GDP. Resources are not adequately considered as economic assets. Relative to their costs, companies and policymakers also do not give sufficient weight to the future benefits generated by restorative or protective environmental projects. As well, the important positive externalities that arise from forests, wetlands, and agriculture are unaccounted for, or otherwise hidden, because of practical difficulties around measuring and pricing these assets. 
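Returning to the calculation section above, the following C program is a purely illustrative sketch of the green GDP formula and of the net-present-value step used when resources lack explicit market values, NPV = Σ_t CF_t / (1 + r)^t. All figures, the discount rate, and the function names are hypothetical and do not come from any national account.

```c
#include <stdio.h>
#include <math.h>

/* Green GDP = GDP - environmental costs - social costs (same currency units). */
static double green_gdp(double gdp, double environmental_costs, double social_costs) {
    return gdp - environmental_costs - social_costs;
}

/* Net present value of expected net returns (income minus expenditure) from
 * future commercial use of a resource: NPV = sum over t of CF_t / (1 + r)^t. */
static double net_present_value(const double *cash_flows, int years, double discount_rate) {
    double npv = 0.0;
    for (int t = 1; t <= years; t++) {
        npv += cash_flows[t - 1] / pow(1.0 + discount_rate, t);
    }
    return npv;
}

int main(void) {
    /* Hypothetical numbers, in billions of a generic currency. */
    double gdp = 1000.0, env_costs = 45.0, soc_costs = 15.0;
    double cf[5] = {12.0, 11.0, 10.0, 9.0, 8.0};  /* expected net returns over 5 years */

    printf("Green GDP: %.1f\n", green_gdp(gdp, env_costs, soc_costs));
    printf("Resource NPV at 5%% discount: %.2f\n", net_present_value(cf, 5, 0.05));
    return 0;
}
```

Because the NPV depends strongly on the chosen discount rate and horizon, such valuations are one reason green GDP estimates can differ substantially between studies.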
Similarly, the impact that the depletion of natural resources or increases in pollution can and do have on the future productive capacity of a nation are unaccounted for in traditional GDP estimates. The need for a more comprehensive macroeconomic indicator is consistent with the conception of sustainable development as a desirable phenomenon. GDP is mistakenly appropriated as a primary indicator of well-being, and as a result, it is used heavily in the analysis of political and economic policy. Green GDP would arguably be a more accurate indicator or measure of societal well-being. Therefore, the integration of environmental statistics into national accounts, and by extension, the generation of a green GDP figure, would improve countries' abilities to manage their economies and resources. History Many economists, scientists, and other scholars have theorized about adjusting macroeconomic indicators to account for environmental change. The idea was developed early on through the work of Nordhaus and Tobin (1972), Ahmad et al. (1989), Repetto et al. (1989), and Hartwick (1990). In 1972, William Nordhaus and James Tobin introduced the first model to measure the annual real consumption of households, called the Measure of Economic Welfare (MEW). MEW adjusts GDP to include the value of leisure time, unpaid work, and environmental damages. They also defined a sustainable MEW (MEW-S) value, and their work was the precursor to more sophisticated measures of sustainable development. Repetto further explored the impact that the failure of resource-based economies to account for the depreciation of their natural capital could have, especially by distorting evaluations of macroeconomic relationships and performance. He and his colleagues developed the concept of depreciation accounting, which factors environmental depreciation into "aggregate measures of economic performance". In their seminal report, "Economic Accounting for Sustainable Development", Yusuf Ahmad, Salah El Serafy, and Ernst Lutz compiled papers from several UNEP-World Bank sponsored workshops, convened after 1983, on how to develop environmental accounting as a public policy tool. The central theme of all of the authors' arguments is that the system of national accounts (SNA), as it traditionally calculates income, omits important aspects of economic development that ought to be included. One important disagreement on environmentally adjusted indicators is presented by Anne Harrison and Salah El Serafy, in their respective chapters. Harrison argues that appropriate adjustments ought to be made within the existing SNA framework, while El Serafy suggests a redefinition of what constitutes intermediate and final demand. In his view, the SNA should not consider the sale of natural capital as generating value added, while at least part of the income generated from this sale should be excluded from GDP and net product. This would effectively allow GDP to continue to be used extensively. In "Natural Resources, National Accounting and Economic Depreciation", John Hartwick presents an accounting methodology to find NNP, inclusive of the depletion of natural resource stock, by representing the use of natural resources as "economic depreciation magnitudes". This method of accounting, which makes adjustments to the existing national account indicators, found traction in the System of Integrated Environmental and Economic Accounting (SEEA), published by the United Nations as an appendix to the 1993 SNA. 
The report offered five approaches, or versions, to developing environmental accounts. Over the years, the SEEA has been expanded and revised in view of the increased sophistication of accounting methodologies and technology. This revision will be explored in greater detail in the "Global Initiatives" section. Ultimately, the importance of the SEEA with respect to the green GDP is that it is possible to create full-sequence accounts from which aggregates, such as green GDP, can be derived and compared internationally, and many countries have begun this process. Several reports and initiatives after the SEEA-1993 have explored the possibility of expanding or changing the scope of environmentally-adjusted macroeconomic indicators. As the popularity of green GDP and other environmentally adjusted macroeconomic indicators grows, their construction will increasingly draw on this continuously developing body of research, especially with respect to the methodology associated with valuing non-market capital (e.g., services from natural capital which exist outside of traditional market settings). In 1993, the Bureau of Economic Analysis, the official bookkeeper of the U.S. economy, began responding to concerns that the GDP needed retooling. The agency began working on a green accounting system called Integrated Environmental and Economic Accounts. These initial results, released in 1994, showed that GDP numbers were overstating the impact of mining companies to the nation's economic wealth. Mining companies did not like those results, and in 1995, Alan B. Mollohan, a Democratic House Representative from West Virginia's coal country, sponsored an amendment to the 1995 Appropriations Bill that stopped the Bureau of Economic Analysis from working on revising the GDP. Costanza et al. (1997) estimated the current economic value of 17 ecosystem services for 16 biomes. The value of the entire biosphere, most of which exists outside of the market, is estimated conservatively to be between $16–54 trillion per year. By comparison, global GNP is approximately $18 trillion per year. The size of this figure demonstrates the significance of ecosystem services on human welfare and income generation, and the importance of identifying and recognizing this value. The valuation techniques used by the authors were often based on estimations of individuals' "willingness-to-pay" for ecosystem services. Kunte et al. (1998) use their paper "Estimating National Wealth: Methodology and Results" to demonstrate that expanding the national accounts to include natural capital is a "practical [and necessary] exercise". They estimate the total wealth of nations by including different components of wealth in their calculations, including natural capital. They place values on natural capital by using the concept of economic rent. "Economic rent is the return on a commodity in excess of the minimum required to bring forth its services. Rental value is therefore the difference between the market price and cost of production / extraction." Following this, and by adjusting calculations for (un)sustainable use patterns, they are able to determine the stock of natural capital in a country that more accurately reflects its wealth. "Nature's Numbers: Expanding the National Economic Accounts to Include the Environment," written by William Nordhaus and Edward Kokkelenberg and published in 1999, examined whether or not to broaden the U.S. National Income and Product Accounts (NIPA) to include natural resources and the environment. 
The panel, which addressed this question, concluded that extending the NIPA and developing supplemental environmental accounts should be a high-priority goal for the U.S., because these would provide useful data on a variety of economic issues and government trends, which entailed both replenishing and extractive activities. One of the major findings of the report is that it is fundamentally necessary for green adjustments to account for instances when natural capital is discovered or replenished, along with general depletive activities. Green GDP in China As one of the fastest-growing countries in the world, China noticed the green GDP as early as 1997. City authorities had conducted a survey based on Beijing's GDP, and the result showed that around 75% of the total GDP was constituted by Green GDP, and the rest of the 25% flowed away as pollution. Other cities also started the same calculation. For example, green GDP in Yaan reported 80% of the total GDP, while Datong reported only 60%. In 2004, Wen Jiabao, the Chinese premier, announced that the green GDP index would replace the Chinese GDP index itself as a performance measure for government and party officials at the highest levels. China’s State Environmental Protection Agency (SEPA), together with the National Bureau of Statistics(NBS), the Chinese Academy for Environmental Planning(CAEP), and units from Renmin University, investigated the nationwide Green GDP. The major environmental impacts in China were from air, water, and solid waste pollution. The first green GDP accounting report, for 2004, was published in September 2006. It showed that the financial loss caused by pollution was 511.8 billion yuan ($66.3 billion), or 3.05 percent of the nation's economy. As an experiment in national accounting, the Green GDP effort collapsed in failure in 2007, when it became clear that the adjustment for environmental damage had reduced the growth rate to politically unacceptable levels, nearly zero in some provinces. In the face of mounting evidence that environmental damage and resource depletion was far more costly than anticipated, the government withdrew its support for the Green GDP methodology and suppressed the 2005 report, which had been due out in March, 2007. The failure of Green GDP in China is connected to the incongruity between central authorities and local government. Beijing was aware of the environmental costs of fast-growing GDP, and encouraged for cleaner or more efficient production. However, many local officials had direct connections with local businesses, and focused more on economic growth than damage by pollution. Another reason for the failure was due to the cost of data collection. It took both money and time to collect data and set them into databases. The Chinese government had a hard time collecting comprehensive environmental cost data. Only pollution and emission costs (air emissions, surface water pollution discards to land, and environmental accidents) were counted in, while social costs and natural resources depletion were missing. Lang and Li (2009) use their paper "China's 'Green GDP' Experiment and the Struggle for Ecological Modernisation" to conclude that the attempt to implement green GDP was a signal that the Chinese government paid attention to environmental impacts. However, the fast-growing economy was more prioritized than environmental accounting, and the failure of the experiment was inevitable. 
Independent estimates of the cost to China of environmental degradation and resource depletion have, for the last decade, ranged from 8 to 12 percentage points of GDP growth. These estimates support the idea that, by this measure at least, the growth of the Chinese economy is close to zero. The most promising national activity on the green GDP has been from India. The country's environmental minister, Jairam Ramesh, stated in 2009 that "It is possible for scientists to estimate green GDP. An exercise has started under the country's chief statistician Pronab Sen and by 2015, India's GDP numbers will be adjusted with economic costs of environmental degradation." Organizations The Global Reporting Initiative's (GRI) core goals include the mainstreaming of disclosure on environmental, social, and governance performance. Although the GRI is independent, it remains a collaborating centre of UNEP and works in cooperation with the United Nations Global Compact. It produces one of the world's most prevalent standards for sustainability reporting—also known as ecological footprint reporting, environmental social governance (ESG) reporting, triple bottom line (TBL) reporting, and corporate social responsibility (CSR) reporting. It is working on a green GDP to be implemented worldwide. Current debate Some critics of environmentally adjusted aggregates, including GDP, point out that it may be difficult to assign values to some of the outputs that are quantified. This is a particular difficulty in cases where the environmental asset does not exist in a traditional market and is therefore non-tradable. Ecosystem services are one example of this type of resource. In the case that valuation is undertaken indirectly, there is a possibility that calculations may rely on speculation or hypothetical assumptions. Supporters of adjusted aggregates may reply to this objection in one of two ways. First, that as our technological capabilities increase, more accurate methods of valuation have been and will continue to develop. Second, that while measurements may not be perfect in the cases of non-market natural assets, the adjustments they entail are still a preferable alternative to traditional GDP. A second objection may be found in the Report by the Commission on the Measurement of Economic Performance and Social Progress, when Stiglitz, Sen, and Fitoussi remark that: "there is a more fundamental problem with green GDP, which also applies to Nordhaus and Tobin's SMEW and to the ISEW/GNI indices. None of these measures characterize sustainability per se. Green GDP just charges GDP for the depletion of or damage to environmental resources. This is only one part of the answer to the question of sustainability." See also Environment of China Genuine progress indicator (GPI) Green national product Millennium Development Goals (MDGs) References Further reading Green GDP Accounting Study Report 2004 issued . A brief explanation of Green GDP. China issues first 'green GDP' report – article from China Dialogue Environmental pollution costs China 64 billion dollars in 2004 – article from Terra Daily NYTimes documentary on China's Green GDP effort Sustainability metrics and indices Sustainable development Environmental social science concepts
Green gross domestic product
[ "Environmental_science" ]
3,253
[ "Environmental social science concepts", "Environmental social science" ]
588,260
https://en.wikipedia.org/wiki/Kakeya%20set
In mathematics, a Kakeya set, or Besicovitch set, is a set of points in Euclidean space which contains a unit line segment in every direction. For instance, a disk of radius 1/2 in the Euclidean plane, or a ball of radius 1/2 in three-dimensional space, forms a Kakeya set. Much of the research in this area has studied the problem of how small such sets can be. Besicovitch showed that there are Besicovitch sets of measure zero. A Kakeya needle set (sometimes also known as a Kakeya set) is a (Besicovitch) set in the plane with a stronger property, that a unit line segment can be rotated continuously through 180 degrees within it, returning to its original position with reversed orientation. Again, the disk of radius 1/2 is an example of a Kakeya needle set. Kakeya needle problem The Kakeya needle problem asks whether there is a minimum area of a region in the plane, in which a needle of unit length can be turned through 360°. This question was first posed, for convex regions, by Sōichi Kakeya. The minimum area for convex sets is achieved by an equilateral triangle of height 1 and area 1/√3, as Pál showed. Kakeya seems to have suggested that the Kakeya set of minimum area, without the convexity restriction, would be a three-pointed deltoid shape. However, this is false; there are smaller non-convex Kakeya sets. Besicovitch needle sets Besicovitch was able to show that there is no lower bound > 0 for the area of such a region, in which a needle of unit length can be turned around. That is, for every ε > 0, there is a region of area less than ε within which the needle can move through a continuous motion that rotates it a full 360 degrees. This built on earlier work of his, on plane sets which contain a unit segment in each orientation. Such a set is now called a Besicovitch set. Besicovitch's work showing such a set could have arbitrarily small measure was from 1919. The problem may have been considered by analysts before that. One method of constructing a Besicovitch set (see figure for corresponding illustrations) is known as a "Perron tree" after Oskar Perron who was able to simplify Besicovitch's original construction. The precise construction and numerical bounds are given in Besicovitch's popularization. The first observation to make is that the needle can move in a straight line as far as it wants without sweeping any area. This is because the needle is a zero width line segment. The second trick of Pál, known as Pál joins, describes how to move the needle between any two parallel positions while sweeping negligible area. The needle will follow the shape of an "N". It moves from the first location some distance up the left of the "N", sweeps out the angle to the middle diagonal, moves down the diagonal, sweeps out the second angle, and then moves up the parallel right side of the "N" until it reaches the required second location. The only non-zero area regions swept are the two triangles of height one and the angle at the top of the "N". The swept area is proportional to this angle. The construction starts with any triangle with height 1 and some substantial angle at the top through which the needle can easily sweep. The goal is to do many operations on this triangle to make its area smaller while keeping the directions through which the needle can sweep the same. First consider dividing the triangle in two and translating the pieces over each other so that their bases overlap in a way that minimizes the total area. 
The needle is able to sweep out the same directions by sweeping out those given by the first triangle, jumping over to the second, and then sweeping out the directions given by the second. The needle can jump triangles using the "N" technique because the two lines at which the original triangle was cut are parallel. Now, suppose we divide our triangle into 2^n subtriangles. The figure shows eight. For each consecutive pair of triangles, perform the same overlapping operation we described before to get half as many new shapes, each consisting of two overlapping triangles. Next, overlap consecutive pairs of these new shapes by shifting them so that their bases overlap in a way that minimizes the total area. Repeat this n times until there is only one shape. Again, the needle is able to sweep out the same directions by sweeping those out in each of the 2^n subtriangles in order of their direction. The needle can jump consecutive triangles using the "N" technique because the two lines at which these triangles were cut are parallel. What remains is to compute the area of the final shape. The proof is too hard to present here. Instead, we will just argue how the numbers might go. Looking at the figure, one sees that the 2^n subtriangles overlap a lot. All of them overlap at the bottom, half of them at the bottom of the left branch, a quarter of them at the bottom of the left left branch, and so on. Suppose that the area of each shape created with i merging operations from 2^i subtriangles is bounded by A_i. Before merging two of these shapes, they have area bounded by 2A_i. Then we move the two shapes together in the way that overlaps them as much as possible. In the worst case, these two regions are two 1 by ε rectangles perpendicular to each other, so that they overlap in an area of only ε^2. But the two shapes that we have constructed, if long and skinny, point in much the same direction because they are made from consecutive groups of subtriangles. The hand-waving claim is that they overlap by at least 1% of their area. Then the merged area would be bounded by A_(i+1) = 1.99 A_i. The area of the original triangle is bounded by 1. Hence, the area of each subtriangle is bounded by A_0 = 2^(−n) and the final shape has area bounded by A_n = 1.99^n × 2^(−n). In actuality, a careful summing up of all areas that do not overlap shows that the area of the final region is much bigger, namely about 1/n. As n grows, this area shrinks to zero. A Besicovitch set can be created by combining six rotations of a Perron tree created from an equilateral triangle. A similar construction can be made with parallelograms. There are other methods for constructing Besicovitch sets of measure zero aside from the 'sprouting' method. For example, Kahane uses Cantor sets to construct a Besicovitch set of measure zero in the two-dimensional plane. In 1941, H. J. Van Alphen showed that there are arbitrarily small Kakeya needle sets inside a circle with radius 2 + ε (arbitrary ε > 0). Simply connected Kakeya needle sets with smaller area than the deltoid were found in 1965. Melvin Bloom and I. J. Schoenberg independently presented Kakeya needle sets with areas approaching the Bloom-Schoenberg number. Schoenberg conjectured that this number is the lower bound for the area of simply connected Kakeya needle sets. However, in 1971, F. Cunningham showed that, given ε > 0, there is a simply connected Kakeya needle set of area less than ε contained in a circle of radius 1. 
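As a numerical illustration of the hand-waving estimate above, the short C program below tabulates the heuristic bound A_n = 1.99^n × 2^(−n) = (0.995)^n next to the 1/n behaviour of the actual construction; it is a sketch of the arithmetic only, not of the geometric construction, and the range of n shown is an arbitrary choice.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Heuristic Perron-tree area bound A_n = 1.99^n * 2^(-n) = (0.995)^n,
     * compared with the ~1/n area of the actual construction.  Both tend to
     * zero, but for large n the true ~1/n area is far larger than the naive
     * product bound, as the text notes. */
    for (int n = 4; n <= 16384; n *= 4) {
        double heuristic = pow(1.99 / 2.0, n);
        double actual_order = 1.0 / (double)n;
        printf("n = %6d   heuristic bound = %.6e   ~1/n = %.6e\n",
               n, heuristic, actual_order);
    }
    return 0;
}
```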
Although there are Kakeya needle sets of arbitrarily small positive measure and Besicovitch sets of measure 0, there are no Kakeya needle sets of measure 0. Kakeya conjecture Statement The same question of how small these Besicovitch sets could be was then posed in higher dimensions, giving rise to a number of conjectures known collectively as the Kakeya conjectures, which have helped initiate the field of mathematics known as geometric measure theory. In particular, if there exist Besicovitch sets of measure zero, could they also have s-dimensional Hausdorff measure zero for some dimension s less than the dimension of the space in which they lie? This question gives rise to the following conjecture: Kakeya set conjecture: Define a Besicovitch set in R^n to be a set which contains a unit line segment in every direction. Is it true that such sets necessarily have Hausdorff dimension and Minkowski dimension equal to n? This is known to be true for n = 1, 2 but only partial results are known in higher dimensions. Kakeya maximal function A modern way of approaching this problem is to consider a particular type of maximal function, which we construct as follows: Denote S^(n−1) ⊂ R^n to be the unit sphere in n-dimensional space. Define T_e^δ(a) to be the cylinder of length 1, radius δ > 0, centered at the point a ∈ R^n, and whose long side is parallel to the direction of the unit vector e ∈ S^(n−1). Then for a locally integrable function f, we define the Kakeya maximal function of f to be f*_δ(e) = sup over a ∈ R^n of (1/m(T_e^δ(a))) ∫_{T_e^δ(a)} |f(y)| dy, where m denotes the n-dimensional Lebesgue measure. Notice that f*_δ is defined for vectors e in the sphere S^(n−1). Then there is a conjecture for these functions that, if true, will imply the Kakeya set conjecture for higher dimensions: Kakeya maximal function conjecture: For all ε > 0, there exists a constant C_ε > 0 such that for any function f and all δ > 0, ‖f*_δ‖_{L^n(S^(n−1))} ≤ C_ε δ^(−ε) ‖f‖_{L^n(R^n)} (see Lp space for notation). Results Some results toward proving the Kakeya conjecture are the following: The Kakeya conjecture is true for n = 1 (trivially) and n = 2 (Davies). In any n-dimensional space, Wolff showed that the dimension of a Kakeya set must be at least (n+2)/2. In 2002, Katz and Tao improved Wolff's bound to (2 − √2)(n − 4) + 3, which is better for n > 4. In 2000, Katz, Łaba, and Tao proved that the Minkowski dimension of Kakeya sets in 3 dimensions is strictly greater than 5/2. In 2000, Jean Bourgain connected the Kakeya problem to arithmetic combinatorics which involves harmonic analysis and additive number theory. In 2017, Katz and Zahl improved the lower bound on the Hausdorff dimension of Besicovitch sets in 3 dimensions to 5/2 + ε₀ for an absolute constant ε₀ > 0. Applications to analysis Somewhat surprisingly, these conjectures have been shown to be connected to a number of questions in other fields, notably in harmonic analysis. For instance, in 1971, Charles Fefferman was able to use the Besicovitch set construction to show that in dimensions greater than 1, truncated Fourier integrals taken over balls centered at the origin with radii tending to infinity need not converge in L^p norm when p ≠ 2 (this is in contrast to the one-dimensional case where such truncated integrals do converge). Analogues and generalizations of the Kakeya problem Sets containing circles and spheres Analogues of the Kakeya problem include considering sets containing more general shapes than lines, such as circles. 
In 1997 and 1999, Wolff proved that sets containing a sphere of every radius must have full dimension, that is, the dimension is equal to the dimension of the space in which they lie, and did so by proving bounds on a circular maximal function analogous to the Kakeya maximal function. It was conjectured that there existed sets of measure zero containing a sphere around every point. However, results of Elias Stein proved that all such sets must have positive measure when n ≥ 3, and Marstrand proved the same for the case n = 2. Sets containing k-dimensional disks A generalization of the Kakeya conjecture is to consider sets that contain, instead of segments of lines in every direction, portions of k-dimensional subspaces. Define an (n, k)-Besicovitch set K to be a compact set in R^n of Lebesgue measure zero containing a translate of every k-dimensional unit disk. That is, if B denotes the unit ball centered at zero, for every k-dimensional subspace P, there exists x ∈ R^n such that (P ∩ B) + x ⊆ K. Hence, an (n, 1)-Besicovitch set is the standard Besicovitch set described earlier. The (n, k)-Besicovitch conjecture: There are no (n, k)-Besicovitch sets for k > 1. In 1979, Marstrand proved that there were no (3, 2)-Besicovitch sets. At around the same time, however, Falconer proved that there were no (n, k)-Besicovitch sets for 2k > n. The best bound to date is due to Bourgain, who proved that no such sets exist when 2^(k−1) + k > n. Kakeya sets in vector spaces over finite fields In 1999, Wolff posed the finite field analogue to the Kakeya problem, in hopes that the techniques for solving this conjecture could be carried over to the Euclidean case. Finite Field Kakeya Conjecture: Let F be a finite field, let K ⊆ F^n be a Kakeya set, i.e. for each vector y ∈ F^n there exists x ∈ F^n such that K contains a line {x + ty : t ∈ F}. Then the set K has size at least c_n|F|^n where c_n > 0 is a constant that only depends on n. Zeev Dvir proved this conjecture in 2008, showing that the statement holds for c_n = 1/n!. In his proof, he observed that any polynomial in n variables of degree less than |F| vanishing on a Kakeya set must be identically zero. On the other hand, the polynomials in n variables of degree less than |F| form a vector space of dimension C(|F| − 1 + n, n), which is at least |F|^n/n!. Therefore, there is at least one non-trivial polynomial of degree less than |F| that vanishes on any given set with fewer than this number of points. Combining these two observations shows that Kakeya sets must have at least |F|^n/n! points. It is not clear whether the techniques will extend to proving the original Kakeya conjecture, but this proof does lend credence to the original conjecture by making essentially algebraic counterexamples unlikely. Dvir has written a survey article on progress on the finite field Kakeya problem and its relationship to randomness extractors. See also Nikodym set Notes References External links Kakeya at University of British Columbia Besicovitch at UCLA Kakeya needle problem at mathworld Dvir's proof of the finite field Kakeya conjecture at Terence Tao's blog An Introduction to Besicovitch-Kakeya Sets Harmonic analysis Real analysis Discrete geometry Eponyms in geometry
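The dimension count used in Dvir's proof above can be made concrete in a small case; the following worked instance (n = 2, |F| = 3) is only an added illustration of that counting step.

```latex
% Worked instance of the counting step in Dvir's argument, for n = 2, |F| = 3.
% Polynomials in x, y over F of total degree < 3 are spanned by
% 1, x, y, x^2, xy, y^2, so
\[
  \dim \{\, p \in F[x,y] : \deg p < 3 \,\}
      = \binom{|F| - 1 + n}{n} = \binom{4}{2} = 6 .
\]
% Any subset of F^2 with at most 5 points therefore admits a nonzero polynomial
% of degree < |F| vanishing on it; since no such polynomial vanishes on a
% Kakeya set, every Kakeya set K in F^2 satisfies
\[
  |K| \;\ge\; \binom{4}{2} = 6 \;\ge\; \frac{|F|^n}{n!} = \frac{9}{2}.
\]
```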
Kakeya set
[ "Mathematics" ]
3,073
[ "Discrete geometry", "Eponyms in geometry", "Discrete mathematics", "Geometry" ]
588,330
https://en.wikipedia.org/wiki/Hardening%20%28computing%29
In computer security, hardening is usually the process of securing a system by reducing its attack surface, which is larger when a system performs more functions; in principle a single-function system is more secure than a multipurpose one. Reducing available ways of attack typically includes changing default passwords, removing unnecessary software, unnecessary usernames or logins, and disabling or removing unnecessary services. Hardening measures can include setting up intrusion prevention systems, disabling accounts, reducing file system permissions and using encrypted network connections. Binary hardening Binary hardening is a security technique in which binary executables are analyzed and modified to protect against common exploits. Binary hardening is independent of compilers and involves the entire toolchain. For example, one binary hardening technique is to detect potential buffer overflows and to replace the existing code with safer code. The advantage of manipulating binaries is that vulnerabilities in legacy code can be fixed automatically without the need for source code, which may be unavailable or obfuscated. In addition, the same techniques can be applied to binaries from multiple compilers, some of which may be less secure than others. Binary hardening often involves the non-deterministic modification of control flow and instruction addresses so as to prevent attackers from successfully reusing program code to perform exploits. Common hardening techniques are: buffer overflow protection; stack overwriting protection; position independent executables and address space layout randomization; binary stirring (randomizing the address of basic blocks); pointer masking (protection against code injection); and control flow randomization (to protect against control flow diversion). See also Computer security Network security policy Security-focused operating system Security-Enhanced Linux References External links at globalsecurity.org Computer security procedures
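The compiler- and linker-level protections named above (stack protection, position independence and ASLR) are easiest to see in a small example. The C sketch below is illustrative only; the file name, function name and the exact flag selection are assumptions rather than anything taken from the article, and the option names shown are those of GCC/Clang toolchains.

```c
/*
 * Minimal sketch: the same overflow-prone pattern built without and with
 * common hardening options (illustrative only).
 *
 * Unhardened:  gcc -O2 -o demo demo.c
 * Hardened:    gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 \
 *                  -fPIE -pie -Wl,-z,relro,-z,now -o demo demo.c
 *
 * -fstack-protector-strong : stack canaries (buffer overflow / stack
 *                            overwriting protection)
 * -D_FORTIFY_SOURCE=2      : checked variants of common libc calls
 * -fPIE / -pie             : position independent executable, enabling ASLR
 * -Wl,-z,relro,-z,now      : read-only relocations, eager symbol binding
 */
#include <stdio.h>
#include <string.h>

/* A classic overflow-prone pattern that the options above help contain. */
static void copy_name(const char *input)
{
    char buf[16];

    /* Unsafe variant: strcpy(buf, input) could overflow buf.
     * A bounded copy plus explicit termination avoids the overflow itself;
     * the stack protector and fortified libc act as a second line of defence. */
    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    printf("hello, %s\n", buf);
}

int main(int argc, char **argv)
{
    copy_name(argc > 1 ? argv[1] : "world");
    return 0;
}
```

Binary hardening as described above goes further by rewriting already-compiled executables, but the same classes of protection (canaries, randomized addresses, bounds checking) are involved.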
Hardening (computing)
[ "Engineering" ]
373
[ "Cybersecurity engineering", "Computer security procedures" ]
588,441
https://en.wikipedia.org/wiki/Trolling%20motor
A trolling motor is a self-contained marine propulsion unit that includes an electric motor, propeller and control system, and is affixed to an angler's boat, either at the bow or stern. A gasoline-powered outboard used in trolling, if it is not the vessel's primary source of propulsion, may also be referred to as a trolling motor. The main function of trolling motors was once to keep the boat running at a consistent, low speed suitable for trolling, but that function has been augmented by GPS-tracking trolling motors that function as "virtual anchors" to automatically maintain a boat's position relative to a desired location, such as a favorite fishing spot. Trolling motors are often lifted from the water to reduce drag when the boat's primary engine is in operation. Uses Trolling for game fish; a motor used for this purpose is usually a secondary means of propulsion, and mounted on the transom alongside the primary outboard motor or on a bracket made for the purpose. Auxiliary power for precision maneuvering of the boat, to enable the angler to cast his bait to where the fish are located. Trolling motors designed for this application are typically mounted in the bow. History An 1895 article in Scientific American entitled "A Portable Electric Propeller for Boats" stated: "Briefly described, it consists of a movable tube which is hinged at the stern of the boat, much as an oar is used in sculling. The tube contains a flexible shaft formed of three coils of phosphor bronze. This tube extends down and out into the water, where it carries a propeller, and at the inboard end an electric Motor is attached, which is itself driven by batteries." It was invented and sold by the Electric Boat company. The electric trolling motor was invented by O.G. Schmidt in 1934 in Fargo, North Dakota, when he took a starter motor from a Ford Model A and added a flexible shaft and a propeller. Because his manufacturing company was near the Minnesota/North Dakota border, he decided to call the new company Minn Kota. The company still is a major manufacturer of trolling motors. Design Electric trolling motors Modern electric trolling motors are designed around a 12-volt, 24-volt or 36-volt brushed DC electric motor, to take advantage of the availability of 12-volt deep cycle batteries designed specifically for marine use. The motor itself is sealed inside a watertight compartment at the end of the shaft. It is submerged during operation, which prevents overheating. The propeller is fitted directly on to the propshaft. Hand-control: tiller for steering, with speed control either built into the tiller or a control knob on top of the unit. Hand-controlled trolling motors are attached to the boat with a clamp. Foot-control: on/off and speed controls are foot-operated, and built into a pedal that also controls the steering mechanism. Steering may be via electronically controlled servo motors, or in early-model (and late-model low-end units), a push-pull cable. Foot-controlled trolling motors require a specialized mounting bracket that bolts horizontally to the deck. The main advantage of foot controls is that the fisherman has both hands free for fishing and landing the hooked fish. On the other hand, it is sometimes hard to coordinate footwork with hand movements, especially in wavy and windy conditions. Wireless remote: available on high-end late-model trolling motors. 
Servo-controlled steering and speed control both respond to a wireless device, either in a foot pedal or a key-fob transmitter (similar to an automotive remote keyless system). Gasoline-powered trolling motors Small outboard motors are frequently used as trolling motors on boats with much larger engines that do not operate as efficiently or quietly at trolling speeds. These typically are designed with a manual pull start system, throttle and gearshift controls mounted on the body of the motor, and a tiller for steering, but in a trolling application, will be connected to the steering mechanism at the helm. See also Electric boat Electric outboard motor Outboard motor Trolling References External links Marine engines Marine propulsion
Trolling motor
[ "Technology", "Engineering" ]
860
[ "Marine engines", "Marine propulsion", "Engines", "Marine engineering" ]
588,622
https://en.wikipedia.org/wiki/Pregnancy%20category
The pregnancy category of a medication is an assessment of the risk of fetal injury due to the pharmaceutical, if it is used as directed by the mother during pregnancy. It does not include any risks conferred by pharmaceutical agents or their metabolites in breast milk. Every drug has specific information listed in its product literature. The British National Formulary used to provide a table of drugs to be avoided or used with caution in pregnancy, and did so using a limited number of key phrases, but now Appendix 4 (which was the Pregnancy table) has been removed. Appendix 4 is now titled "Intravenous Additives". However, information that was previously available in the former Appendix 4 (pregnancy) and Appendix 5 (breastfeeding) is now available in the individual drug monographs. United States American law requires that certain drugs and biological products must be labelled very specifically. Title 21, Part 201.57 (9)(i) of the Code of Federal Regulations lists specific requirements regarding the labeling of drugs with respect to their effects on pregnant populations, including a definition of a "pregnancy category". These rules are enforced by the Food and Drug Administration. To supplement this information, the FDA publishes additional rules regarding pregnancy and lactation labeling. The FDA does not regulate labeling for all hazardous and non-hazardous substances. Many substances, including alcohol, are widely known to cause serious hazards to pregnant women and their fetuses, including fetal alcohol syndrome. Many other pollutants and hazardous materials are similarly known to cause reproductive harm. However, some of these substances are not subject to drug labeling laws, and are therefore not assigned a "Pregnancy Category" per 21 CFR 201.57. One characteristic of the FDA definitions of the pregnancy categories is that the FDA requires a relatively large amount of high-quality data on a pharmaceutical for it to be defined as Pregnancy Category A. As a result of this, many drugs that would be labelled as safe in other countries are allocated to Category C by the FDA. Pregnancy and Lactation Labeling Rule of December 2014 On December 13, 2014, the FDA published the Pregnancy and Lactation Labeling Final Rule (PLLR), which changed the labeling requirements for the pregnancy and lactation sections for prescription drugs and biological agents. The final rule removed the pregnancy letter categories, and created descriptive subsections for pregnancy exposure and risk, lactation, and effects to reproductive potential for females and males. Labeling changes from this rule began on June 30, 2015, with all submissions for prescription drugs and biological agents using the labeling changes immediately. Previously approved drugs from June 30, 2001, will switch to the new labeling gradually. The rule does not affect the labeling of over-the-counter drugs or of drugs approved prior to June 30, 2001. Australia Australia has a slightly different pregnancy category system from the United States. The categorisation of medicines for use in pregnancy does not follow a hierarchical structure. Notably the subdivision of Category B. (For drugs in B1, B2 and B3 categories, human data are lacking or inadequate and subcategorisation is actually based on animal data instead) The allocation of a B category does not imply greater safety than C category Medicines in category D are not absolutely contraindicated during pregnancy (e.g. 
anticonvulsants) The system, as outlined below, was developed by medical and scientific experts based on available evidence of risks associated with taking particular medicines while pregnant. Being general in nature, it is not presented as medical advice to health professionals or the public. Some prescribing guides, such as the Australian Medicines Handbook, are shifting away from using pregnancy categories since, inherent in these categories, there is an implied assumption that the alphabetical code is one of safety when this is not always the case. Categorisation does not indicate which stages of fetal development might be affected and does not convey information about the balance between risks and benefits in a particular situation. Additionally, categories are not necessarily maintained or updated with availability of new data. Germany Categorization of selected agents The data presented is for comparative and illustrative purposes only, and may have been superseded by updated data. Withdrawn drugs Notes References – links provided for 1999 4th edition and subsequent updates Food and Drug Administration. Federal Register 1980; 44:37434–67 Health issues in pregnancy Pharmacological classification systems
Pregnancy category
[ "Chemistry" ]
876
[ "Pharmacological classification systems", "Pharmacology" ]
588,652
https://en.wikipedia.org/wiki/Chylomicron
Chylomicrons (from the Greek χυλός, chylos, meaning juice (of plants or animals), and micron, meaning small), also known as ultra low-density lipoproteins (ULDL), are lipoprotein particles that consist of triglycerides (85–92%), phospholipids (6–12%), cholesterol (1–3%), and proteins (1–2%). They transport dietary lipids, such as fats and cholesterol, from the intestines to other locations in the body, within the water-based solution of the bloodstream. ULDLs are one of the five major groups lipoproteins are divided into based on their density. A protein specific to chylomicrons is ApoB48. There is an inverse relationship in the density and size of lipoprotein particles: fats have a lower density than water or smaller protein molecules, and the larger particles have a higher ratio of internal fat molecules with respect to the outer emulsifying protein molecules in the shell. ULDLs, if in the region of 1,000 nm or more, are the only lipoprotein particles that can be seen using a light microscope, at maximum magnification. All the other classes are submicroscopic. Function Chylomicrons transport lipids absorbed from the intestine to adipose, cardiac, and skeletal muscle tissue, where their triglyceride components are hydrolyzed by the activity of the lipoprotein lipase, allowing the released free fatty acids to be absorbed by the tissues. When a large portion of the triglyceride core has been hydrolyzed, chylomicron remnants are formed and are taken up by the liver, thereby also transferring dietary fat to the liver. Stages Nascent chylomicrons In the small intestine, dietary triglycerides are emulsified by bile and digested by pancreatic lipases, resulting in the formation of monoglycerides and fatty acids. These lipids are absorbed into enterocytes via passive diffusion. Inside these cells, monoglycerides and fatty acids are transported to the smooth endoplasmic reticulum (smooth ER), where they are re-esterified to form triglycerides. These triglycerides, along with phospholipids and cholesterol, are added to apolipoprotein B48 to form nascent chylomicrons (also referred to as immature chylomicrons or pre-chylomicrons). After synthesis in the smooth ER, nascent chylomicrons are transported to the Golgi apparatus by SAR1B proteins. The transport of nascent chylomicrons within the secretory pathway is facilitated by protein transport vesicles (PCTVs). PCTVs are uniquely equipped with v-SNARE and VAMP-7 proteins, which aid in their fusion with the cis-Golgi compartment. This transport is facilitated by COPII proteins, including Sec23/24, which select cargo and facilitate vesicle budding from the ER membrane. During transit through the Golgi, nascent chylomicrons undergo enzymatic modification and lipidation processes, resulting in the formation of mature chylomicrons. Mature chylomicrons Mature chylomicrons are released through the basolateral membrane of enterocytes (via the secretory pathway) into lacteals, lymphatic capillaries in the villi of the small intestine. Lymph that contains chylomicrons (and other emulsified fats) is referred to as chyle. The lymphatic circulation carries chyle to the lymphatic ducts before it enters the venous return of the systemic circulation via subclavian veins. From here, chylomicrons can supply tissue throughout the body with fat absorbed from the diet. 
Because they enter the bloodstream in this way, digested lipids (in the form of chylomicrons) bypass the hepatic portal system and thus avoid first pass metabolism, unlike digested carbohydrates (in the form of monosaccharides) and proteins (in the form of amino acids). While circulating in blood, high-density lipoproteins (HDLs) donate essential components including apolipoprotein C-II (APOC2) and apolipoprotein E (APOE) to the mature chylomicron. APOC2 is a crucial coenzyme for the activity of lipoprotein lipase (LPL), which hydrolyzes triglycerides within chylomicrons. Chylomicron remnants Once triglyceride stores are distributed, chylomicrons return APOC2 to HDLs while retaining APOE, transforming into a chylomicron remnant. ApoB48 and APOE are important to identify the chylomicron remnant in the liver for endocytosis and breakdown. Pathology Hyperchylomicronemia Hyperchylomicronemia is characterized by an excessive presence of chylomicrons in the blood, leading to extreme hypertriglyceridemia. Clinical manifestations of this disorder include eruptive xanthomas, lipaemia retinalis, hepatosplenomegaly, recurrent abdominal pain, and acute pancreatitis. This condition can be caused by genetic mutations (see below) or secondary factors such as uncontrolled diabetes or alcohol use disorder. Hypochylomicronemia Hypochylomicronemia refers to abnormally low levels or complete absence of chylomicrons in the blood, particularly after a meal (postprandial). This condition can result from genetic mutations (see below), as well as certain malabsorption syndromes or deficiencies in dietary fat intake. Related disorders Chylomicron remnants and cardiovascular disease Chylomicron remnants are the lipoprotein particles left after chylomicrons have delivered triglycerides to tissues. Elevated levels of these remnants contribute to hyperlipidemia, which is considered an important risk factor for cardiovascular disease. Recent studies have demonstrated that chylomicron remnants can penetrate the tunica intima and become trapped in the subendothelial space. This process enhances the deposition of cholesterol in the arterial wall, which is a critical step in the formation of atherosclerotic plaques. The retention and modification of these remnants within the arterial wall trigger inflammatory responses, further accelerating the development of atherosclerosis. Related genetic disorders Abetalipoproteinemia (ABL) Abetalipoproteinemia (ABL; OMIM 200100) is a rare autosomal recessive disorder caused by mutations in both alleles of the MTP gene. This genetic defect leads to nearly undetectable levels of ApoB and very low plasma cholesterol levels. Patients with ABL exhibit fat malabsorption, steatorrhea, and fat accumulation in enterocytes and hepatocytes. The condition also results in multiple vitamin deficiencies (E, A, K, and D) due to impaired lipoprotein assembly and transport. If untreated, ABL can cause neurological disturbances such as spinal-cerebellar degeneration, peripheral neuropathies, and retinitis pigmentosa. Early supplementation of fat-soluble vitamins can prevent these complications. Homozygous hypobetalipoproteinemia (Ho-HBL) Homozygous hypobetalipoproteinemia (Ho-HBL; OMIM 107730) is an extremely rare inherited disorder characterized by improper packaging and secretion of apoB-containing lipoproteins due to mutations in both alleles of the APOB gene. 
These mutations lead to apoB truncations or amino acid substitutions, resulting in the formation of short, abnormal apoBs that are unable to bind lipids and form chylomicrons. Clinical manifestations vary, ranging from lack of symptoms to features overlapping with those of ABL, including fat malabsorption and vitamin deficiencies. Chylomicron retention disease (CMRD) Chylomicron retention disease (CMRD; OMIM #607689) is a rare autosomal recessive disorder caused by mutations in the SAR1B gene. Patients with CMRD present with chronic diarrhea, failure to thrive, hypocholesterolemia, and low levels of fat-soluble vitamins. The enterocytes of these patients fail to secrete chylomicrons into the lymph, leading to lipid accumulation and characteristic mucosal changes in the small intestine. Unlike ABL and Ho-HBL, CMRD does not cause acanthocytosis, retinitis pigmentosa, or severe neurological symptoms. Familial chylomicronemia syndrome (FCS) Familial chylomicronemia syndrome (FCS), also known as Type I hyperlipoproteinemia, is characterized by massive hypertriglyceridemia, abdominal pain, pancreatitis, eruptive xanthomas, and hepatosplenomegaly. This condition is caused by mutations in genes such as LPL, APOC-II, APOA-V, LMF1, and GPIHBP1, which are involved in the regulation of triglyceride-rich lipoprotein catabolism. Patients with FCS show significantly elevated fasting concentrations of chylomicrons and do not typically develop premature atherosclerosis due to the large size of chylomicrons preventing their traversal through the vascular endothelial barrier. Diagnosis is confirmed by DNA sequencing for pathogenic mutations in these genes. References Lipoproteins
Chylomicron
[ "Chemistry" ]
2,064
[ "Lipid biochemistry", "Lipoproteins" ]
588,689
https://en.wikipedia.org/wiki/List%20of%20rail%20accidents%20%282000%E2%80%932009%29
This is a list of rail accidents from 2000 to 2009. For a list of terrorist incidents involving trains, see List of terrorist incidents involving railway systems. 2000 January 4 – Norway – Åsta accident, Åsta in Åmot: Two diesel passenger trains collided in Rørosbanen killing 19. The fire after the collision lasted nearly six hours. February 6 – Germany – Brühl train derailment: The D 203 "Schweiz-Express" train travelling from Amsterdam to Basel negotiated a low-speed turnout at three times the correct speed and derailed near Brühl station, killing 9 people. March 2 – Denmark – Kølkær. Two regional trains heading to Herning and Vejle collided in Kolkær station. The train to Herning had entered the station and was slowing down when it was hit head-on by the oncoming Vejle train at 116 km/h. One passenger in the Herning train and both train drivers were killed, 10 passengers were seriously injured. The investigation concluded that the crash was caused by the driver of the Vejle train, who had completely ignored stop signals. March 8 – Japan – Naka-Meguro derailment: The last car of a TRTA Hibiya Line train derailed and was hit by a Tobu Railway train traveling in the opposite direction, killing five people and injuring 63. March 28 – United States – A school bus failed to stop at a crossbuck in Tennga, Georgia and was struck by a CSX train, killing three people. April 18 – Indonesia – In Kosambi, three trains (a container train, an animal transport train and Argo Bromo express train) collided, killing three stowaways on the container train. June 6 – Switzerland – A passenger train and a freight train collided at Hüswil, killing one. July 6 – France – In Paris, a Eurostar Train derailed due to a fault on the train track, injuring six. July 13 – Canada – A Canadian National train heading westbound pulling grain hoppers hit a semi-trailer that had become stuck at a level crossing west of Wainwright, Alberta. The two locomotives and 28 cars derailed. The train crew suffered only minor injuries after jumping from the train prior to the collision August 26 – United States – A Dakota, Minnesota, and Eastern train derailed in Brookings, South Dakota, killing the conductor and severely injuring the engineer. The crash was caused intentionally by a vandal who broke the lock off of a railroad switch using a hammer and covered the warning reflector with at least one trash bag as part of a 'prank.' October 17 – United Kingdom – Hatfield rail crash – In Hertfordshire, a faulty rail shattered into 300 pieces due to rolling contact fatigue while a GNER London to Leeds InterCity 225 express was passing at . Four people were killed and 102 injured. The crash forced the biggest and most expensive re-railing exercise in British history, with huge service disruption for many months. Infrastructure owner Railtrack was found guilty in one of the longest rail-related trials in UK legal history, but manslaughter charges against company managers were not sustained. November 11 – Austria – Kaprun disaster: The Gletscherbahn 2 funicular train caught fire in a tunnel after an unsafely installed heater overheated, killing 155 people and leaving only 12 survivors. December 2 – India – Sarai Banjara rail disaster, a crowded commuter train crashed into a derailed freight train in Punjab, killing more than 45 people. 2001 January 12 – Republic of the Congo – Nvoungouti: More than 30 people were killed after two trains collided because of a brake failure. 
February 7 – Canada – Toronto: Ontario Northland Railway's Northlander passenger train derailed in the Don Valley near the Bayview Extension and Pottery Road area, slightly injuring two passengers. February 28 – United Kingdom – Selby rail crash: A sleep-deprived driver on the M62 motorway fell asleep at the wheel, causing his Land Rover to swerve off the road and travel down an embankment onto the main line below. After failing to reverse off the track, the driver exited the vehicle and contacted emergency services; while he was doing so, the vehicle was hit by a GNER InterCity 225 passenger train, which derailed and collided at high speed with a coal train traveling in the opposite direction. Ten people were killed (all due to the second collision) and over 80 were injured. The Land Rover driver was later jailed for 5 years for causing death by dangerous driving. March 27 – Belgium – Pécrot rail crash: Two passenger trains collided on the same track, killing eight people and injuring 12. Lack of common language was seen as a factor. April 12 – Canada – Stewiacke Via derailment: a teenager tampered with a track switch, derailing Via Rail Canada's Ocean, injuring as many as 22 people. May 15 – United States – CSX 8888 incident, Toledo, Ohio: A CSX freight train of 47 cars, including some with hazardous molten phenol acid, ran away in the yard at Toledo with no engineer aboard. The engineer had stepped out to reset a switch but had improperly applied the dynamic brake, causing the train to accelerate. It ran to Kenton, Ohio, before being stopped by a railroad worker who jumped aboard and managed to stop it. CSX slowed the train down by coupling an engine onto the end. This incident was dubbed the "Crazy Eights" incident in reference to the lead locomotive's number (#8888). The incident inspired the 2010 Tony Scott film Unstoppable, starring Denzel Washington and Chris Pine. June 21 – India – Kadalundi train derailment: Six carriages of the Mangalore-Chennai Mail train derailed while crossing Bridge 924 after it shifted following heavy rain. Three carriages plunged into the swollen Kadalundi River, killing 59 people. July 18 – United States – Howard Street Tunnel fire, Baltimore, Maryland: A 60-car CSX train carrying chemicals and wood products derailed in a tunnel, causing water contamination and a fire that burned for six days. July 29 – United States – An Amtrak Texas Eagle en route from San Antonio to Chicago derailed in Sabula, Missouri, some distance from St. Louis, possibly due to severe weather and flash flooding. Four passengers and the engineer were injured. August 19 – Sri Lanka – Kurunegala train crash: 15 people died after the Udarata Menike train derailed due to speeding and overcrowding. September 2 – Indonesia – In Cirebon, a passenger train and a locomotive collided, killing 40 people and seriously injuring 37. September 13 – United States – In Wendover, Utah, the westbound California Zephyr derailed after striking a coal train, causing several injuries. November 15 – United States – Canadian National train E243 running from Flat Rock, Michigan to Flint, Michigan collided with another CN train, L533 (bound for Detroit), at the Andersonville Siding on CN's Holly Subdivision in Michigan. The engineer and conductor on E243 were killed. December 7 – United States – A BNSF freight train collided head-on with another BNSF freight train that was parked in a siding in Arminto, Wyoming, due to a misaligned switch, killing an engineer. 
December 18 – Greece – In Orestiada, a train got stuck in a snow drift and derailed near the Bulgarian border, killing one person. December 25 – Indonesia – In Brebes, 45 people were killed after the Empu Jaya train collided with the Gaya Baru Malam train at Ketanggungan station. 2002 January 18 – United States – Minot train derailment: A Canadian Pacific train derailed near a residential area west of Minot, North Dakota. Seven tank cars ruptured, releasing a large quantity of anhydrous ammonia which vaporized in the sub-zero air and formed a toxic cloud that drifted over much of Minot. One man died and numerous others were treated for chemical exposure. February 6 – South Africa – 2002 Charlotte's Dale train collision: Two commuter trains collided in Charlotte's Dale near Durban, killing 22 people, including 16 children. February 20 – Egypt – 2002 El Ayyat railway accident: A train packed to double capacity caught fire after a cooking gas cylinder exploded, killing 383 people. February 21 – Switzerland – A freight train and a locomotive collided at Chiasso, killing two people and injuring three. March 30 – Spain – In Catalonia, a Euromed Express Train collided with another Euromed Express Train, crushing many cars, killing two people and injuring 100. April 18 – United States – Crescent City, Florida: 21 cars of an Amtrak Auto-Train derailed, killing four people and injuring 142. April 23 – United States – 2002 Placentia train collision, in Placentia, California: A BNSF Railway freight train, which ran a stop signal, collided head-on with a Metrolink train near Atwood Junction, at the intersection of Orangethorpe Avenue and Van Buren Street. Two people died and 22 were seriously injured. May 10 – United Kingdom – 2002 Potters Bar rail accident: A northbound WAGN Class 365 train derailed at high speed, killing seven people and seriously injuring 11. May 13 – India – 2002 Jaunpur train crash: 12 people died after a passenger train derailed and crashed in Uttar Pradesh in a suspected act of sabotage. May 28 – United States – A BNSF coal train and intermodal train collided head-on near Clarendon, Texas, killing the engineer of the intermodal train and injuring the coal train's crew and the intermodal train's conductor. It was revealed that the coal train's crew was distracted by their cell phones, ultimately causing their train to collide with the other train. May 30 – United States – Hempfield Township, Westmoreland County, Pennsylvania: a freight train struck a vehicle at an ungated crossing, killing two teenagers and injuring two others. June 13 – Sri Lanka – A train derailed while coming into Alawwa station, killing 14 people. June 24 – Tanzania – Igandu train collision: Nearly 300 people died after a passenger train rolled backwards into a goods train. July 20 – Italy – Rometta Marea derailment: The Palermo–Venice train derailed in Rometta Marea, Messina, killing eight people. July 29 – United States – Kensington, Maryland: The eastbound Amtrak Capitol Limited, train 30, while traversing a CSX route, struck a sun kink while traveling at speed. Several cars fell off an embankment and four Superliners overturned against trees. Sixteen people were seriously injured and 79 people suffered minor injuries. The misalignment was determined to be caused by an improperly tamped ballast and excessive speed in the sunny weather. "Slow orders" were imposed on passenger trains in the area on very hot days following this accident. 
September 9 – Germany – Bad Münder: Two freight trains collided head-on after a brake failure on one of the trains. A tank car loaded with 1-chloro-2,3-epoxypropane subsequently exploded, contaminating the station and exposing 96 firemen to carcinogenic fumes. September 9 – India – Rafiganj train wreck: More than 130 people died after a passenger train derailed and plunged into the Dhave River in Bihar due to sabotaged tracks. September 15 – United States – 2002 Farragut derailment: A Norfolk Southern freight train derailed in Farragut, Tennessee, resulting in a hazardous materials release of fuming sulfuric acid and evacuation of more than 2,600 residents for nearly three days. September 27 – United States – Jamaica, New York: Three cars of a JFK Airtrain test train derailed near Federal Circle. The train's lone occupant, a train operator testing the automated equipment, was crushed to death by collapsing cement blocks inside the first car, which were used to evenly distribute the weight to simulate the weight of customers in passenger service. October 13 – Australia – Benalla level crossing collision, Victoria: A heritage train hauled by K class steam locomotive K 183 collided with a B-Double truck that failed to clear the level crossing. The impact caused the locomotive to derail, rolling onto its side, and the locomotive's tender to be forced into the locomotive's cab. Three of the four people in the cab were killed and one critically injured. November 6 – France – Nancy: A fire broke out in the front two carriages of an overnight sleeper train heading from Paris to Vienna, killing 12 passengers from smoke inhalation and injuring nine. November 7 – Denmark – Holte: An empty S-train turning around failed to brake in time and drove into the path of another S-train heading to Køge station, killing a woman and injuring three passengers, two more seriously. December 9 – Indonesia – The Argo Dwipangga train, on the Solo Balapan–Gambir route, derailed in Prembun, Kebumen Regency, killing five passengers and injuring dozens. The cause of the accident was a rail that shifted due to a box truck that passed through the tunnel under the train tracks just before the Argo Dwipangga passed. 2003 January 3 – India – Ghatnandur train crash: 18 people died in a collision of two trains at Ghatnandur in Maharashtra. January 31 – Australia – Waterfall rail accident: The driver of a southbound passenger train suffered a heart attack and died; the train then sped out of control and derailed on a curve, overturning several cars and killing six passengers. February 2 – Zimbabwe – Dete train crash. Two trains collided, derailed, and caught fire, killing over 40 people. February 3 – Australia – Broadmeadows train runaway and crash, Melbourne, Victoria: An unmanned electric suburban train rolled away from Broadmeadows station and ran for 16.848 kilometres at speeds in excess of 100 km/h through many pedestrian and level crossings before crashing into a stationary diesel passenger train at Southern Cross station, derailing both. No serious injuries were reported. February 18 – South Korea – Daegu subway fire: A mentally ill man started a fire which engulfed two subway trains, killing 192 people. March 20 – Netherlands – Roermond: The driver of an NS passenger train suffered a heart attack and ran through a red signal before colliding head-on with a freight train. The driver was killed, while six passengers were seriously injured. 
May 6 – United States – Amtrak's Silver Star struck a delivery truck at a private crossing near Hinesville, Georgia killing the engineer and the truck driver. May 15 – India – Ladhowal train fire: A passenger train caught fire near Ladhowal in Punjab, killing 38. June 3 – Spain – Chinchilla train collision: 19 people died after a TALGO train and a freight train collided head-on in Albacete. June 11 – Germany – Six people died and 25 were injured after two passenger trains collided head-on near Schrozberg. June 20 – United States – Commerce, California: A runaway cut of 31 cars from a Union Pacific freight train, without a locomotive, carrying lumber derailed at a speed of in a Los Angeles suburb, destroying several homes and rupturing natural gas lines. June 22 - India - In Kankavli, a Karwar-Mumbai holiday special train derailed and telescoped over the engine, 34 were killed and dozens were injured. June 23 – India – Vaibhavwadi train crash: 51 people died after a special holiday train derailed in Maharashtra. July 2 – India – Warangal train crash: 22 people died after a train fell off a bridge due to brake failure into a crowded fish market in Warangal. July 7 – United Kingdom – Between Evesham and Pershore, Worcestershire, a First Great Western train collided with a minibus on a level crossing, killing three people in the minibus. August 3 – United Kingdom – A Romney, Hythe and Dymchurch Railway steam train hit a car driven across the level crossing. The train driver died, while the car occupants and some train passengers were injured. August 7 – Switzerland – Two trains collided at Gsteigwiler, killing one person and injuring 63. October 4 – Indonesia – In Bogor Regency, a passenger train rear-ended another between Cilebut and Bogor Stations, injuring 39 people. October 12 – United States – Chicago, Illinois: A Metra train derailed after its engineer ignored warning signals telling him to slow down for a track change and continued travelling at over a switch. The front locomotive rolled onto its side and caught fire. Forty-five were injured. October 14 – Switzerland – Two express trains collided at Zürich Oerlikon railway station, killing one person and injuring 45. 2004 February 15 – United Kingdom – Tebay rail accident, Cumbria, England. A sleeper (railroad tie) transporter trolley with defective brakes carrying 16 tonnes of rails was detached from a maintenance train south of Penrith and rolled down the falling gradient until it struck and killed four workmen in a team repairing the line at Tebay, between Oxenholme and Penrith. February 18 – Iran – Nishapur train disaster: 51 train cars broke loose from their siding, rolled down the track, derailed and fell down an embankment into Khayyam, near Nishapur. During the cleanup operation, the cargo of the cars exploded (an equivalent of 180 tons of TNT), killing 295 people and leveling Khayyam and damaging three nearby towns. The blast was felt as far away as Mashhad. April 16 – Turkey – : An overnight İzmir-to-Ankara express hit a truck near Ankara while at a level crossing. Seven to 10 children died and two to five more were injured. April 22 – North Korea – Ryongchon disaster: 161 people were killed and more than 1,000 injured after an explosion. May 19 – United States – near Gunter, Texas: One person died and four were injured after a BNSF train failed to adhere to an after-arrival track warrant and collided with another train. 
June 1 – Denmark – Holstebro: Two regional trains entered the same track and collided head-on, injuring 24 people, two of them seriously. June 17 – India – Karanjadi train crash: 20 people died and 100 were injured after 10 carriages fell off a bridge during a monsoon-induced landslide. June 28 – United States – Macdona, Texas, near San Antonio: Four people died and 51 were injured after a Union Pacific train failed to stop at a signal and collided with another train, causing chlorine gas to leak out of a train car. Among the dead were the UP driver and two residents. Several other residents and many visitors to the SeaWorld theme park were seriously injured by the gas. July 22 – Turkey – Pamukova train derailment: An Istanbul–Ankara express derailed at Pamukova, Sakarya Province, killing 41 and injuring 80. September 10 – Sweden – Nosaby level crossing disaster: A heavy truck was caught between the barriers at a level crossing, and was hit by a passenger train. The train driver died, while 47 people were injured. The truck driver was found guilty of not attempting to move the vehicle away from the level crossing, and was sentenced to 14 months' imprisonment. November 4 – United States – In Washington, D.C., a Washington Metro Red Line train rear-ended another at Woodley Park station, injuring 20 people. November 6 – United Kingdom – Ufton Nervet rail crash: A First Great Western InterCity 125 hit a stationary car driven by an apparently suicidal driver on a level crossing and derailed. Five train passengers and the drivers of both the train and the car died; more than 100 passengers were injured. November 11 – United States – San Antonio, Texas: A Union Pacific train derailed in an industrial district, killing one man working in a warehouse office and injuring others. November 15 – Australia – Cairns Tilt Train derailment: The world's fastest narrow-gauge train derailed at 112 km/h. The accident was blamed on the train travelling too fast on a curved line. November 29 – United States – Zephyrhills, Florida: Two CSX freight trains collided in early morning fog at Vitis Junction, killing one and injuring three. December 3 – Italy – Two trains collided at Castellaneta after one of them passed a red signal. Twenty people were injured, two seriously. December 14 – India – Two passenger trains collided in Punjab, killing 27 people and injuring more than 50. December 26 – Sri Lanka – 2004 Sri Lanka tsunami train wreck: Approximately 1,700 people died in the world's worst rail disaster after a train was overwhelmed by a tsunami created by the 2004 Indian Ocean earthquake. 2005 January 6 – United States – Graniteville train crash: Nine people (including the engineer) died and more than 250 were injured after a Norfolk Southern freight train collided head-on with a parked freight train near the Avondale Mills plant in Graniteville, South Carolina. A derailed tank car ruptured, releasing 90 tons of chlorine gas into the air. January 7 – Italy – Crevalcore train crash. A passenger train running from Verona to Bologna failed to stop at a red light and collided head-on with a freight train, near Crevalcore in dense fog, killing 17. January 17 – Thailand – An empty MRT (Bangkok) train returning to the depot collided with another train filled with passengers at the Thailand Cultural Centre MRT station. One hundred and forty people were hurt, most of whom sustained only minor injuries, and the entire Metro network was shut down for two weeks. 
January 26 – United States – 2005 Glendale train crash: In a planned suicide attempt during which the suspect changed his mind, a southbound Metrolink double-deck commuter train collided in Glendale, California, with the man's vehicle that he had driven onto the tracks and then abandoned. The train derailed, then struck both a moving northbound Metrolink train on the adjacent track as well as a parked Union Pacific freight train on a siding. Eleven people died and about 100 were injured. February 3 – India – Nagpur level crossing disaster. A tractor-trailer carrying a wedding party was hit by a train, killing 55 wedding guests. February 9 – Latvia – A Lielvarde-Riga passenger train collided with empty stock heading for Riga train depot, killing four people and injuring 32. February 14 – United States – Oxnard, California. An Amtrak Pacific Surfliner passenger train traveling from Los Angeles Union Station collided with a semi-truck loaded with strawberries at the Rice Avenue grade crossing. The truck driver, who had stopped at the crossing, suddenly encountered a green traffic light that conflicted with the crossing gates coming down. This confused the driver, who then started crossing the tracks, but the traffic light suddenly turned red after a few seconds, making the truck driver stop without realising that the rest of her truck was still on the tracks. The train was traveling too fast to stop in time and completely destroyed the truck's trailer, severely damaging the locomotive and spilling diesel fuel at the crash site. A few passengers on the train had minor injuries. February 14 – Denmark – Lyngby: An S-train to Høje Tåstrup ran at 60–70 km/h into a stopped S-train which was heading for Køge. Two passengers and a driver were seriously injured. Snow which had settled on the signals made them difficult to read, meaning the train could not be stopped in time. April 21 – India – Vadodara train crash: 18 people died after a collision between a freight train and a passenger express train. April 25 – Japan – Amagasaki rail crash: A train derailed on a sharp curve and smashed into an apartment building, killing 107 people and injuring 549. An investigation revealed that the driver (who was among the dead) was speeding because of a slight delay. April 26 – Sri Lanka – Polgahawela level crossing collision: A bus tried to beat the train at a level crossing; at least 35 bus passengers died. May 19 – Indonesia – A passenger train from Palembang crashed into another passenger train at Bandar Lampung station and derailed. Seven children died and about 200 were injured. Many of the dead were passengers clinging on to the sides of the Palembang train. The Indonesian government began a crackdown on people clinging on to the exteriors of trains as a means of travel. June 16 – Russia – Between Zubtsov and Aristovo in Tver Oblast, 27 fuel oil tankers bound from Moscow to Riga derailed at speed, causing 300 tonnes of fuel to leak. A stretch of track was destroyed and the Volga River was contaminated briefly. June 21 – Israel – A Beersheba-bound passenger train collided with a coal delivery truck near Revadim, south of Tel Aviv. At least seven died and more than 200 were injured. July 10 – United States – Anding, Mississippi: Two Canadian National freight trains collided head-on after the northbound train failed to stop at a red light. Both crews died. July 10 – United Kingdom – A Romney, Hythe & Dymchurch Railway steam train hit a car driven across a level crossing, killing the train driver. 
July 13 – Pakistan – 2005 Ghotki rail crash: A chain reaction accident caused by one train missing a signal and colliding into another resulted in three trains crashing and over 150 people dead. July 26 – Austria – Gramatneusiedl: passenger train of the ÖBB crashed into a cargo train, injuring 13 people. July 31 – China – Liaoning, Shenyang: A passenger train from Xi'an to Changchun passed a sabotaged railway signal and collided with a freight train, killing 5 passengers. August 1 – Greece – Kilkis: A truck driver was struck and killed by a train on a crossing after ignoring crossing warnings. August 2 – United States – Raleigh, North Carolina. Two people were killed after their truck was hit by an Amtrak train. August 5 – Canada – Cheakamus River derailment: Nine cars of a Canadian National freight train derailed into the Cheakamus River near Whistler, British Columbia. Forty thousand litres of caustic soda entered the river, killing over 500,000 fish and greatly damaging the surrounding ecosystem. September 17 – United States – Chicago, Illinois: A Metra commuter train derailed, killing two and injuring 83. October 3 – India – Datia rail accident: 100 died after a train travelling at six times the speed limit derailed. October 15 – United States – Texarkana, Arkansas, Union Pacific train rear-ended another train, derailing and puncturing a tank car containing propylene. The leak ignited at a nearby house, causing a massive explosion and subsequent fire. A radius was evacuated, and one resident was killed. October 23 – Italy – Eurostar 9410 derailment: Eurostar Italia train 9410, running from Taranto to Milan, derailed between Acquaviva delle Fonti and Sannicandro di Bari after subsidence due to heavy rain caused a bridge to collapse under it, leaving two bare rails spanning the 12-metre-deep ravine. The train completed the crossing and came to rest with its rear end suspended over the void. October 29 – India – Veligonda train disaster: At least 114 people died and many more were injured after part of the track was swept away by a flood, causing a train to derail. November 23 – Turkey – A train hit a truck on a level crossing between Tarsus and Mersin, killing nine people and injuring 18. November 29 – Democratic Republic of Congo – Kindu rail accident: Over 60 people were swept off the roof of a train by the beams of a bridge in Maniema province. December 19 – Poland – Świnna rail crash: After losing braking power in EN57-840 EMU operating as passenger train from Sucha Beskidzka to Żywiec, train crews managed to stage a controlled collision with another train in Świnna, Silesian Voivodeship. Eight people were injured. December 25 – Japan – Shonai, Yamagata: All six cars of Akita–Niigata Inaho express train derailed and three passenger cars were crushed, killing five people and injuring 32. Strong winter winds were thought to be the cause. 2006 January 5 - United States - In Quantico, Virginia, a Virginia Railway Express train No. 304, derailed at Possum Point, 4 people were injured. January 23 – Montenegro (then within Serbia and Montenegro) – Bioče train disaster: A passenger train crashed into a ravine near Podgorica, killing 46 people and injuring 198. January 29 – Pakistan – A broken rail caused a derailment near Jhelum in Punjab, killing 2 people and injuring 29. Poor maintenance was officially being cited as cause; sabotage was suspected by some authorities. The government inquiry later blamed defective and aging rails. 
March 13 – United States – Austin, Texas: Tara Rose McAvoy, the reigning Miss Deaf Texas, was killed by the snowplow on a 65-car Union Pacific freight train while trespassing on the tracks and text-messaging her parents. April 15 – Indonesia – In Gubug, 13 people died and 26 were injured after two eastbound trains collided and wreckage fell into a paddy field. April 28 – Australia – Victoria: A V/Line VLocity high-speed train derailed after being struck by an 18-wheeler truck, killing two people and injuring 28 on the Ballarat-to-Ararat line. May 17 – Switzerland – An engineering train suffered a brake failure and crashed at Thun. Three people were killed. June 12 – Israel – Netanya: A passenger train from Tel Aviv to Haifa derailed after colliding with a lorry on a level crossing, killing 5 people and injuring more than 100. June 14 – United States – Kismet, California. Two BNSF freight trains collided head-on due to one of the trains running a red signal, injuring 5 people. The crew of the train that ran the red signal was suspected of being high on cocaine. July 1 – United States – Abington, Pennsylvania. Two SEPTA Regional Rail passenger trains collided on a single track on the Warminster Line, injuring 36. July 3 – Spain – Valencia Metro derailment: A Valencia Metro train derailed after leaving Jesús station, killing 41 people and injuring at least 47. The records of the train's black box showed that the train passed a bend at 80 km/h, above the speed limit of 40 km/h. August 21 – Egypt – Qalyoub train collision – Two trains collided in Qalyoub, north of Cairo, killing 57 people and injuring 128. August 21 – Spain – A speeding RENFE intercity train derailed in Villada, 40 km west of Palencia, killing six people and injuring 36. August 27 – Zimbabwe – Five people died in a head-on collision between a passenger train and a freight train 30 km south of Victoria Falls. September 4 – Egypt – A passenger train collided with a freight train north of Cairo, killing five people and injuring 30. September 5 – Netherlands – A diesel locomotive passed a red signal and collided with a passenger train at Amersfoort, injuring 17. September 22 – Germany – Lathen train collision: 21 passengers and two maintenance workers died and many more were injured after a Transrapid train collided with a maintenance-of-way vehicle on the system's test track near the Netherlands border. October 11 – France – Zoufftgen train collision: A passenger train and a freight train collided head-on at Zoufftgen, Moselle, close to the Luxembourg border. Five people died, including the drivers of both trains, and 20 were injured. The accident was ascribed to human error in the controlling signalling centre in Luxembourg. October 17 – Italy – 2006 Rome metro crash: Two metro trains collided at Rome's Vittorio Emanuele metro station, killing one person and injuring around 60. October 20 – United States – New Brighton, Pennsylvania: A Norfolk Southern unit train of DOT-111 tank cars containing ethanol derailed on a bridge over the Beaver River. The resulting fire burned for days and forced evacuations. November 9 – United States – Baxter, California: Six cars of a runaway maintenance train derailed, killing two of the crew. November 13 – South Africa – Faure level crossing accident: A Metrorail train smashed into a truck carrying farm workers at a level crossing, killing 27 people. 
November 20 – India – 2006 West Bengal train explosion: A train traveling between New Jalpaiguri and Haldiburi in West Bengal exploded, killing five people and injuring 25 to 66. Terrorism was suspected. November 24 – Croatia – Drniš: HŽ ICN tilting train number 521 collided with a lorry loaded with Knauf cement boards at a railroad crossing with no ramp or warning lights. The train engineer died instantly, and the lorry driver sustained severe injuries. November 30 – United States – North Baltimore, Ohio: 15 cars carrying steel derailed after the train inadvertently switched to a side track. These cars then struck a coal train on a parallel set of tracks, causing four of its cars to derail. Three people who were in vehicles waiting for the train to pass were injured. December 1 – India – Bihar, Bhágalpur: In the Ganges, a portion of the 150-year-old 'Ulta Pul' bridge being dismantled collapsed over a passing train of India's Eastern Railways, killing 35 people and injuring 17. December 13 – Italy – Avio: A freight train operated by Trenitalia passed a red signal and crashed into a freight train of the private company Rail Traction Company. Two Trenitalia engineers died. 2007 January 4 – Turkey – A freight train crashed into a truck carrying farm workers at a railroad crossing in Hatay Province, killing 7 people and injuring 19. January 16 – United States – Brooks derailment: A CSX freight train derailed in Brooks, Kentucky. January 16 – Indonesia – The Senja Bengawan train derailed on the Pager Bridge, Brebes, Central Java. One of the cars fell into the Pager River, killing five people. February 2 – United Kingdom – Grayrigg derailment: A Virgin Trains West Coast Pendolino service from London Euston to Glasgow Central derailed at Grayrigg Cottage near Oxenholme, Cumbria, United Kingdom, killing a woman. February 28 – China – Strong winds blew 10 passenger rail cars off the track near Turpan, killing 3 passengers and seriously injuring two. June 5 – Australia – Kerang train accident: A B-Double truck collided with a Melbourne-bound passenger train north of Kerang at the Murray Valley Highway level crossing, killing 11 train passengers and injuring 23. June 14 – Croatia – A Croatian HŽ commuter train bound from Zagreb to Karlovac collided with a lorry on the railroad crossing in Demerje, killing the lorry driver and the train engineer. The crash was caused by the lorry driver, who ignored light and bell warnings about the train. The train conductor sustained minor injuries. June 15 – Italy – Two trains collided on Sardinia, killing three. July 16–17 – United States – Two Amtrak Silver Star trains on the Tampa to Miami route crashed into automobiles and derailed in two separate instances, one in Lakeland and one in Plant City. Four people in the automobile died in the first wreck; one in the automobile died in the second accident. July 16 – Ukraine – Fifteen carriages from a train carrying yellow phosphorus derailed and caught fire, releasing toxic fumes that affected 14 villages in an area near Lviv. August 1 – Democratic Republic of the Congo – Benaleka train accident: A passenger train derailed in Kasai-Occidental province due to brake failure, killing about 100 people and injuring more than 200 others, many of whom were riding on the roof. August 24 – Serbia – Two people died and five were injured after a locomotive and a freight train collided near Čortanovci. 
August 30 – Brazil – 2007 Rio de Janeiro train collision: Eight passengers died and 80 were injured after a commuter train collided with an empty train at Nova Iguaçu near Rio de Janeiro. October 2 – United States – Washington, D.C.: Two United States Capitol subway system trains collided after one of them failed to slow down when it reached the end of the line; no deaths were reported, but one person was injured. October 10 – United States – CSX train Q380-09, carrying ethanol and butane, derailed in Painesville and Painesville Township, Ohio, causing an evacuation and a fire that burned for several days. October 22 – United States – A Vermont Railway train carrying gasoline derailed in Middlebury, Vermont, causing an evacuation. At least one car caught fire and several others leaked gasoline into Otter Creek. October 25 – Sweden – A 15-year-old boy died after being struck by a high-speed train in Solna while illegally crossing the railway. October 29 – United States – Two BNSF trains derailed in Clara City, Minnesota, causing a hydrochloric acid spill that prompted the evacuation of about 350 people. November 9 – United States – An improperly secured, free-rolling cut of hoppers from a CSX train in Benning Yard in the District of Columbia rolled onto an out-of-service bridge, which collapsed and dumped ten rail cars of coal into the Anacostia River. November 30 – United States – Amtrak train No. 371, the Pere Marquette, struck the last car of a COFC freight train on the Norfolk Southern (ex-PRR) line near 65th Street in Chicago. Two people in the cab of P42DC No. 8 were injured, and many passengers on the Amtrak train were injured, including three critically. The engineer had been running above the speed authorized for the zone owing to confusion about the meaning of a signal. December 10 – China – Beijing, Daxing: Between Huangtupo and Huangcun on the Beijing–Shanghai railway, a workman clearing snow was struck and killed by the K215 passenger train from Beijing to Tumen. December 19 – Pakistan – Mehrabpur derailment: A crowded passenger express train derailed down an embankment north-east of Karachi, near Mehrabpur, killing 35 people and injuring about 269, ten of them critically. 2008 February 5 – United States – Two people died and one was injured in a chain-reaction accident involving six vehicles and a 50-car train at a fog-obscured rail crossing in Boswell, west of Lafayette, Indiana. February 28 – Bulgaria – Sofia–Kardam train fire: Nine people died in a fire on board Bulgarian State Railways train No. 2637, travelling from Sofia to Kardam, which started in a couchette carriage as the train entered Cherven Bryag and spread to a sleeping coach carrying 27 people. The fire took more than three hours to extinguish. Among the victims was Rasho Rashev, the director of Bulgaria's National Archaeological Institute. March 8 – Greece – An Alexandroupolis-bound InterCity train derailed outside Larissa, injuring 28 passengers. Initial reports suggested that the station master had failed to change the points after a previous train passed through the station, causing five carriages of the passenger train to derail. March 9 – Argentina – A Ferrobaires passenger train going from Buenos Aires to Mar del Plata struck an El Rápido Argentino bus going from Mar de Ajó to San Miguel (Greater Buenos Aires) at a crossing on the outskirts of Dolores, Buenos Aires Province. The bus had disregarded the active warning devices at the Provincial Highway 63 railroad crossing. Seventeen people died and 40 were injured.
March 25 – United States – An MBTA train crashed into a runaway box car at Canton Junction station in Canton, Massachusetts, injuring 150 people on board. April 9 – Malaysia – A Sabah State Railway train plunged 10 metres into the Padas River after a derailment caused by a landslide near Tenom, killing two passengers. April 14 – Czech Republic – In Ostrava, two trams collided head-on, killing three people and injuring 40. April 26 – Germany – An InterCityExpress train ran into a herd of sheep that had wandered onto the tracks at the mouth of the Landrückentunnel, Germany's longest rail tunnel, on the Hanover–Würzburg high-speed rail line near Fulda. The derailed train came to a stop inside the tunnel. Twenty-two people, including the engineer, were seriously injured, 17 were slightly injured, and 77 sheep died. April 28 – China – Zibo train collision: Train No. T195, en route from Beijing to Qingdao, derailed on a section of temporary detour track between Zhoucun and Wangcun, at Hejiacun on the outskirts of Zibo, Shandong. It was then struck by the No. 5034 (Yantai to Xuzhou) passenger train. Fourteen passenger cars were crushed, 72 people died and 416 were injured. May 5 – Sudan – A freight train carrying illegal passengers and Kordofan University students derailed on the outskirts of Al-Foula, South Kordofan, killing 14 and injuring 28. May 10 – Romania – The locomotive and three cars of Romanian National Railway Company (CFR) passenger train No. 1661, going from Bucharest to Iași, derailed at a defective switch near Valea Calugareasca station in Prahova County. A 17-year-old girl was killed and four others were injured. May 28 – United States – 2008 Massachusetts train collision: A Boston MBTA Green Line D train rear-ended another train in Newton, Massachusetts, between the Woodland and Waban "T" stops. The driver of the rear train was killed, and 12 others were injured. July 3 – Belgium – A passenger train and a freight train collided, injuring 42 people, two seriously. July 16 – Egypt – At least 44 people were killed and 33 injured after a truck failed to stop at a level crossing and pushed two vehicles into a Matruh–Alexandria passenger train at El Dabaa, Marsa Matruh. August 8 – Czech Republic – 2008 Studénka train wreck: Express train EC 108 Comenius, from Kraków, Poland to Prague, travelling at 140 km/h, crashed into a section of a bridge undergoing construction that had fallen onto the track, killing eight people and injuring 91. September 11 – United Kingdom–France – 2008 Channel Tunnel fire: A Eurotunnel Shuttle train carrying two vans and 25 lorries was severely damaged after a fire started on one of the lorries. Six people were slightly injured; the part of the Channel Tunnel where the train came to rest was closed for repairs until February 2009. September 12 – United States – 2008 Chatsworth train collision: A northbound Metrolink (California) double-deck commuter train ran a red light, collided head-on with a Union Pacific Railroad freight train pulled by three engines, and derailed; the derailed Metrolink engine was knocked backwards into a passenger car, crushing it in half. Twenty-five people were killed and about 135 were injured. The crash intensified the congressional debate over mandating a safety system called positive train control; the mandate passed Congress and was signed into law just over 30 days after the incident. October 6 – Hungary – Monorierdő train collision: Four people died and 26 were injured in a collision between two passenger trains near Budapest.
October 14 – United States – A CSX train derailed in Decatur, Alabama, killing its conductor. November 18 – Australia – A Connex Melbourne express train collided with a car in Dandenong South, killing the car's occupant. November 27 – Australia – Two people died and several others were injured after a QR Tilt Train collided with a truck on the Bruce Highway level crossing about 20 km south of Cardwell, Queensland. December 23 – Latvia – A fuel cargo train crashed into a stationary train in Ventspils. Ten fuel tankers caught fire, killing two people. December 31 – Canada – Thirty-three cars of a Canadian National Railway freight train running from Toronto to Moncton derailed in Villeroy, Quebec. One car leaked propane, causing the evacuation of about 70 residents. 2009 January 1 – Australia – One person was killed and six others injured after a QR Sunlander train collided with a garbage truck at a level crossing with no boom gates or warning lights near Innisfail, Queensland. February 7 – Cuba – Three people were killed and more than 90 injured in a collision between two passenger trains in Camagüey, east of Havana. February 13 – India – In Orissa, a Coromandel Express train caught fire after leaving Jajpur Road station, killing 15 people and injuring dozens. February 13 – India – Jajpur derailment: 12 carriages of the Howrah–Madras Express derailed after the train left Jajpur Road station near Bhubaneswar in Odisha, killing nine people and injuring 250. February 21 – Slovakia – Brezno train accident: Eleven people died in a collision between a bus and a train on a level crossing near Brezno. March 29 – Tanzania – Equipment breakdown and negligence caused a rear-end collision on the Gulwe–Igandu section (Mpwapwa District), killing dozens of passengers and injuring many others. April 17 – Germany – A passenger train collided with a freight train in Berlin, injuring 12 people. April 28 – United States – In Boston, a "T" trolley derailed because its driver was looking at text messages, injuring 68 people. April 29 – India – A MEMU train that had been hijacked collided with an empty goods tanker at Vyasarpadi Jeeva station; four people were killed and several others injured. June 1 – United States – In Louisville, Kentucky, a train at the Louisville Zoo derailed on a curve, injuring 22 people. June 5 – Canada – In Oshawa, a freight train derailed and exploded, sending toxic chemicals into the air; no casualties were reported, but hundreds of residents in nearby areas were evacuated. June 19 – United States – A major downpour in Rockford, Illinois, caused 14 ethanol tankers of a Canadian National freight train to derail and explode into flames. One person at a rail crossing died, and several others were burned. June 22 – United States – June 2009 Washington Metro train collision: On the Washington Metro, an electronic track-circuit module failed, causing a train to go undetected by the automatic train control system. A second train crashed into it, killing nine people in the deadliest incident in the subway system's 33-year history. June 29 – Italy – Viareggio train derailment: A freight train derailed at Viareggio. Two wagons carrying liquefied petroleum gas then exploded, killing 32 people, including five who died when a house collapsed. June 29 – China – Chenzhou train collision: Two passenger trains collided at Chenzhou railway station, Hunan Province, killing three people and injuring 63.
July 5 – United States – 2009 Walt Disney World monorail accident: Two monorails at Walt Disney World collided, killing one person; the cause was found to be human error. July 9 – United States – The Amtrak Wolverine hit the side of a car in Canton Township, Michigan, near Detroit. All five occupants of the car died. July 18 – United States – In San Francisco, a Muni Metro train rear-ended another train stopped at West Portal station, injuring 47 people. July 24 – Croatia – Rudine derailment: HŽ ICN tilting train number 521, bound from Zagreb to Split, derailed between Labin Dalmatinski and Kaštel Stari near the village of Rudine. Six passengers died and 55 were injured, 13 of them seriously. The crash was caused by retardant that had been sprayed on the railroad approximately 10 minutes before the accident, leaving the track surface slippery and preventing braking. Thirty minutes later, a rescue train experienced the same problem and derailed at the very same spot, but this produced no further casualties despite the train colliding with the derailed wreckage. July 29 – China – A train derailed due to a landslide in Liucheng County, Guangxi, killing four people and injuring 71. August 28 – Cameroon – A freight train carrying fuel derailed south of Yaoundé and caught fire, killing one person. August 30 – Cameroon – A passenger train derailed near Yaoundé, killing five people and injuring 275. September 12 – Germany – Friedewald train collision: Two steam trains collided head-on on the Lößnitzgrundbahn. September 16 – Ireland – A tram and a bus collided in Dublin, injuring 21 people. September 24 – Netherlands – Barendrecht train accident: Two freight trains collided head-on below a viaduct on the A15 motorway near Barendrecht, killing a train driver. Derailed sections landed on a parallel track as a passenger train approached; a few passengers on that train were lightly injured. October 5 – Thailand – A passenger train derailed during heavy rain in Hua Hin District, killing at least seven people and injuring dozens. October 20 – India – Mathura train collision: A passenger train collision near Mathura. October 23 – India – A huge concrete slab fell onto a train in north-east Mumbai, killing the driver and a passenger. Twelve others were injured. October 24 – Egypt – 2009 El Ayyat railway accident: A passenger train stopped after striking water buffalo in the El Ayyat area of Giza. Another passenger train then rear-ended the stationary train, killing at least 18 people. November 4 – Pakistan – At least 18 people died after the Allama Iqbal Express collided head-on with a goods train near Juma Goth station in the suburbs of Karachi. December 21 – Croatia – HŽ commuter train number 5100, bound from Sisak Caprag to Zagreb, failed to stop and crashed into the platform bumper at Zagreb Glavni kolodvor at 15–20 km/h. The cause was blamed on a lack of antifreeze fluid in the locomotive's braking system, which froze in the low temperatures. Sixty people on the train, including its engineer, were injured, seven of them seriously.
See also Classification of railway accidents List of accidents and disasters by death toll Lists of traffic collisions – includes level crossing accidents List of traffic collisions (2000–present) List of level crossing crashes List of rail accidents in the United Kingdom List of railway accidents and incidents in India List of Russian rail accidents Lists of rail accidents by country List of years in rail transport 2006 in rail transport Tram accident Federal Employers Liability Act References External links NTSB Reports – official reports of transportation accidents from the United States National Transportation Safety Board. RSSB Publications – UK Rail Safety and Standards Board. BBC: Europe's history of rail disasters BBC: World's worst rail disasters United States National Transportation Safety Board Publications Rail accidents 2000-2009 21st-century railway accidents 2000s in rail transport Rail accidents
List of rail accidents (2000–2009)
[ "Technology" ]
11,021
[ "Railway accidents and incidents", "Lists of railway accidents and incidents" ]
588,886
https://en.wikipedia.org/wiki/Chlorate
Chlorate is the common name of the ClO3− anion, whose chlorine atom is in the +5 oxidation state. The term can also refer to chemical compounds containing this anion, with chlorates being the salts of chloric acid. Other oxyanions of chlorine can be named "chlorate" followed by a Roman numeral in parentheses denoting the oxidation state of chlorine: e.g., the ion commonly called perchlorate can also be called chlorate(VII). As predicted by valence shell electron pair repulsion theory, chlorate anions have trigonal pyramidal structures. Chlorates are powerful oxidizers and should be kept away from organics or easily oxidized materials. Mixtures of chlorate salts with virtually any combustible material (sugar, sawdust, charcoal, organic solvents, metals, etc.) will readily deflagrate. Chlorates were once widely used in pyrotechnics for this reason, though their use has declined due to their instability. Most pyrotechnic applications that formerly used chlorates now use the more stable perchlorates instead. Structure and bonding The chlorate ion cannot be satisfactorily represented by just one Lewis structure, since all the Cl–O bonds are the same length (1.49 Å in potassium chlorate) and the chlorine atom is hypervalent. Instead, it is often thought of as a hybrid of multiple resonance structures. Preparation Laboratory Metal chlorates can be prepared by adding chlorine to hot metal hydroxides such as KOH: 3 Cl2 + 6 KOH → 5 KCl + KClO3 + 3 H2O In this reaction, chlorine undergoes disproportionation, both reduction and oxidation. Chlorine, oxidation number 0, forms chloride Cl− (oxidation number −1) and chlorate(V) (oxidation number +5). The reaction of cold aqueous metal hydroxides with chlorine produces the chloride and hypochlorite (oxidation number +1) instead. Industrial The industrial-scale synthesis of sodium chlorate starts from an aqueous sodium chloride solution (brine) rather than chlorine gas. If the electrolysis equipment allows for the mixing of the chlorine and the sodium hydroxide, then the disproportionation reaction described above occurs. The heating of the reactants to 50–70 °C is provided by the electrical power used for electrolysis. Natural occurrence A 2010 study discovered natural chlorate deposits around the world, with relatively high concentrations in arid and hyper-arid regions. Chlorate was also measured in rainfall samples, in amounts similar to perchlorate. It is suspected that chlorate and perchlorate may share a common natural formation mechanism and could be part of the chlorine biogeochemical cycle. From a microbial standpoint, the presence of natural chlorate could also explain why there is a variety of microorganisms capable of reducing chlorate to chloride. Further, the evolution of chlorate reduction may be an ancient phenomenon, as all perchlorate-reducing bacteria described to date also utilize chlorate as a terminal electron acceptor. It should be clearly stated that no chlorate-dominant minerals are currently known: the chlorate anion exists only as a substitution in known mineral species or, possibly, in pore-filling solutions. In 2011, a study by the Georgia Institute of Technology reported the presence of magnesium chlorate on the planet Mars.
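The stoichiometry of the hot-hydroxide route given under Preparation can be checked mechanically. The following is a minimal sketch in Python; the element tallies are written out by hand, as an illustration rather than as part of any standard chemistry library:

```python
# Minimal sketch: verify that the chlorate disproportionation equation from the
# Preparation section (3 Cl2 + 6 KOH -> 5 KCl + KClO3 + 3 H2O) is balanced.
from collections import Counter

# Element counts for each species, written out by hand to avoid formula parsing.
SPECIES = {
    "Cl2":   {"Cl": 2},
    "KOH":   {"K": 1, "O": 1, "H": 1},
    "KCl":   {"K": 1, "Cl": 1},
    "KClO3": {"K": 1, "Cl": 1, "O": 3},
    "H2O":   {"H": 2, "O": 1},
}

def side_totals(side):
    """Sum element counts for one side, given as (coefficient, species) pairs."""
    totals = Counter()
    for coeff, species in side:
        for element, n in SPECIES[species].items():
            totals[element] += coeff * n
    return totals

reactants = [(3, "Cl2"), (6, "KOH")]
products = [(5, "KCl"), (1, "KClO3"), (3, "H2O")]

assert side_totals(reactants) == side_totals(products)
print("balanced:", dict(side_totals(reactants)))
# Oxidation states: Cl goes from 0 in Cl2 to -1 in KCl and +5 in KClO3,
# i.e. chlorine is simultaneously reduced and oxidized (disproportionation).
```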
Compounds (salts) Examples of chlorates include potassium chlorate (KClO3), sodium chlorate (NaClO3) and magnesium chlorate (Mg(ClO3)2). Other oxyanions If a Roman numeral in brackets follows the word "chlorate", this indicates the oxyanion contains chlorine in the indicated oxidation state, namely: chlorate(I), the hypochlorite ion, ClO− (+1); chlorate(III), the chlorite ion, ClO2− (+3); chlorate(V), the chlorate ion proper, ClO3− (+5); and chlorate(VII), the perchlorate ion, ClO4− (+7). Using this convention, "chlorate" can mean any chlorine oxyanion; usually, however, "chlorate" refers only to chlorine in the +5 oxidation state. Toxicity Chlorates are relatively toxic, though they form generally harmless chlorides on reduction. References External links Chlorine oxides
Chlorate
[ "Chemistry" ]
909
[ "Chlorates", "Salts" ]
589,225
https://en.wikipedia.org/wiki/Bimetallic%20strip
A bimetallic strip or bimetal strip is a strip that consists of two strips of different metals which expand at different rates as they are heated. They are used to convert a temperature change into mechanical displacement. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled. The invention of the bimetallic strip is generally credited to John Harrison, an eighteenth-century clockmaker who made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England. Characteristics The strip consists of two strips of different metals which expand at different rates as they are heated, usually steel and copper, or in some cases steel and brass. The strips are joined together throughout their length by riveting, brazing or welding. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled. The sideways displacement of the strip is much larger than the small lengthways expansion in either of the two metals. In some applications, the bimetal strip is used in the flat form. In others, it is wrapped into a coil for compactness. The greater length of the coiled version gives improved sensitivity. The radius of curvature R of a bimetallic strip depends on temperature according to the formula derived by French physicist Yvon Villarceau in 1863 in his research for improving the precision of clocks: 1/R = K ε / h, where h = h1 + h2 is the total thickness of the bimetal and K = 6(1 + m)^2 / [3(1 + m)^2 + (1 + mn)(m^2 + 1/(mn))] is a dimensionless coefficient depending on the thickness ratio m = h1/h2 and the modulus ratio n = E1/E2. For each metallic strip i: Ei is the Young modulus, αi is the coefficient of thermal expansion and hi is the thickness. The formula is written here as a function of the thermal misfit strain ε = (α1 − α2)(T − T0). If the moduli and thicknesses are similar (m ≈ 1, n ≈ 1), then K ≈ 3/2 and we simply have 1/R ≈ 3ε/(2h). An equivalent formula can be derived from the beam theory. History The earliest surviving bimetallic strip was made by the eighteenth-century clockmaker John Harrison, who is generally credited with its invention. He made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. It should not be confused with the bimetallic mechanism for correcting for thermal expansion in his gridiron pendulum. His earliest examples had two individual metal strips joined by rivets, but he also invented the later technique of directly fusing molten brass onto a steel substrate. A strip of this type was fitted to his last timekeeper, H5. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England. Composition The metals involved in a bimetallic strip can vary in composition so long as their thermal expansion coefficients differ. The metal of lower thermal expansion coefficient is sometimes called the passive metal, while the other is called the active metal. Copper, steel, brass, iron, and nickel are commonly used metals in bimetallic strips. Metal alloys, such as invar and constantan, have been used in bimetallic strips as well.
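Returning to the curvature formula in the Characteristics section, a minimal numerical sketch in Python shows how the expression is evaluated in practice. The material constants below are rough illustrative values for a copper-on-steel strip, assumed for the example rather than taken from authoritative data:

```python
# Minimal sketch of the bimetallic-strip curvature formula given above.
# Material constants are rough illustrative values, not authoritative data.

def curvature(E1, a1, h1, E2, a2, h2, dT):
    """Return the curvature 1/R of a two-layer strip for a temperature rise dT.

    E: Young's modulus (Pa), a: thermal expansion coefficient (1/K),
    h: layer thickness (m). Layer 1 is the high-expansion layer.
    """
    m = h1 / h2           # thickness ratio
    n = E1 / E2           # modulus ratio
    h = h1 + h2           # total thickness
    eps = (a1 - a2) * dT  # thermal misfit strain
    K = 6 * (1 + m) ** 2 / (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1 / (m * n)))
    return K * eps / h

# Example: copper on steel, 0.5 mm each layer, heated by 50 K.
kappa = curvature(E1=117e9, a1=17e-6, h1=0.5e-3,   # copper (illustrative)
                  E2=200e9, a2=12e-6, h2=0.5e-3,   # steel  (illustrative)
                  dT=50.0)
print(f"radius of curvature ~ {1 / kappa:.2f} m")
# For a thermometer coil of unwound length L, the pointer rotates by roughly
# L * kappa radians as the strip curls, which is why coiling improves sensitivity.
```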
Material selection has a significant impact on the working temperature range of a bimetallic strip: some designs have a temperature limit of up to 500 °C, while others fail beyond 150 °C. Applications This effect is used in a range of mechanical and electrical devices. Clocks Mechanical clock mechanisms are sensitive to temperature changes, as each part has tiny tolerances, and this leads to errors in timekeeping. A bimetallic strip is used to compensate for this phenomenon in the mechanisms of some timepieces. The most common method is to use a bimetallic construction for the circular rim of the balance wheel. As the strip bends, it moves weights radially within the plane of the balance wheel, thereby varying the wheel's moment of inertia. As the spring controlling the balance becomes weaker with increasing temperature, the balance becomes smaller in diameter to decrease the moment of inertia and keep the period of oscillation (and hence timekeeping) constant. Nowadays this system is no longer used, since the appearance of low-temperature-coefficient alloys like Nivarox and Parachrom and many others, depending on each brand. Thermostats In the regulation of heating and cooling, thermostats that operate over a wide range of temperatures are used. In these, one end of the bimetallic strip is mechanically fixed and attached to an electrical power source, while the other (moving) end carries an electrical contact. In adjustable thermostats another contact is positioned with a regulating knob or lever. The position so set controls the regulated temperature, called the set point. Some thermostats use a mercury switch connected to both electrical leads. The angle of the entire mechanism is adjustable to control the set point of the thermostat. Depending upon the application, a higher temperature may open a contact (as in a heater control) or it may close a contact (as in a refrigerator or air conditioner). The electrical contacts may control the power directly (as in a household iron) or indirectly, switching electrical power through a relay or the supply of natural gas or fuel oil through an electrically operated valve. In some natural gas heaters the power may be provided with a thermocouple that is heated by a pilot light (a small, continuously burning flame). In devices without pilot lights for ignition (as in most modern gas clothes dryers and some natural gas heaters and decorative fireplaces) the power for the contacts is provided by reduced household electrical power that operates a relay controlling an electronic ignitor, either a resistance heater or an electrically powered spark-generating device. Thermometers A direct-indicating dial thermometer, common in household devices (such as a patio thermometer or a meat thermometer), most commonly uses a bimetallic strip wrapped into a coil. The coil converts the linear expansion of the metal into a circular movement thanks to its helical shape. One end of the coil is fixed to the housing of the device as a fixed point, and the other drives an indicating needle around a circular dial. A bimetallic strip is also used in a recording thermometer. Breguet's thermometer consists of a tri-metallic helix, in order to obtain a more accurate result. Heat engine Heat engines are generally not very efficient, and a heat engine based on a bimetallic strip is even less efficient, as there is no chamber to contain the heat.
Moreover, bimetallic strips cannot deliver much force: in order to achieve a reasonable bending movement, both metallic strips have to be thin so that the difference in expansion is noticeable. The uses of bimetallic strips in heat engines are therefore mostly in simple toys that have been built to demonstrate how the principle can be used to drive a heat engine. Electrical devices Bimetal strips are used in miniature circuit breakers to protect circuits from excess current. A coil of wire is used to heat a bimetal strip, which bends and operates a linkage that unlatches a spring-operated contact. This interrupts the circuit and can be reset when the bimetal strip has cooled down. Bimetal strips are also used in time-delay relays, gas oven safety valves, thermal flashers for older turn signal lamps, and fluorescent lamp starters. In some devices, the current running directly through the bimetal strip is sufficient to heat it and operate contacts directly. It has also been used in mechanical PWM voltage regulators for automotive use. See also Thermotime switch References External links Video of a circular bimetallic wire powering a small motor with iced water. Accessed February 2011. Video of a bimetallic coil powering an engine (among others like Curie, Stirling and Hero) English inventions Engineering thermodynamics Mechanical engineering Heating, ventilation, and air conditioning Energy conversion Thermometers Bimetal
Bimetallic strip
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,743
[ "Applied and interdisciplinary physics", "Metallurgy", "Engineering thermodynamics", "Measuring instruments", "Bimetal", "Thermodynamics", "Mechanical engineering", "Thermometers" ]
589,277
https://en.wikipedia.org/wiki/William%20Huggins
Sir William Huggins (7 February 1824 – 12 May 1910) was a British astronomer best known for his pioneering work in astronomical spectroscopy, carried out together with his wife, Margaret. Biography William Huggins was born at Cornhill, Middlesex, in 1824. In 1875, he married Margaret Lindsay, daughter of John Murray of Dublin, who also had an interest in astronomy and scientific research. She encouraged her husband's photography and helped to put their research on a systematic footing. Huggins built a private observatory at 90 Upper Tulse Hill, London, from where he and his wife carried out extensive observations of the spectral emission lines and absorption lines of various celestial objects. On 29 August 1864, Huggins was the first to take the spectrum of a planetary nebula when he analysed NGC 6543. He was also the first to distinguish between nebulae and galaxies by showing that some (like the Orion Nebula) had pure emission spectra characteristic of gas, while others like the Andromeda Galaxy had the spectral characteristics of stars. Huggins was assisted in the analysis of spectra by his neighbor, the chemist William Allen Miller. Huggins was also the first to adopt dry-plate photography in imaging astronomical objects. After his observations of Sirius in 1868 showed a redshift, Huggins hypothesized that the radial velocity of a star could be computed from the displacement of its spectral lines. Huggins won the Gold Medal of the Royal Astronomical Society in 1867, jointly with William Allen Miller. He later served as President of the Royal Astronomical Society from 1876 to 1878, and received the Gold Medal again (this time alone) in 1885. He served as an officer of the Royal Astronomical Society for a total of 37 years, more than any other person. Huggins was elected a Fellow of the Royal Society in June 1865, was awarded their Royal Medal (1866), Rumford Medal (1880) and Copley Medal (1898), and delivered their Bakerian Lecture in 1885. He then served as President of the Royal Society from 1900 to 1905; his Presidential Address in 1904, for example, commemorated the Fellows who had died that year and presented the year's prizes. He died at his home in Tulse Hill, London, after an operation for a hernia in 1910, and was buried at Golders Green Crematorium. Telescopes In 1856 Huggins acquired a 5-inch aperture telescope by Dollond. In 1858 an 8-inch telescope by Clark was added. Both were refracting telescopes with glass objectives. In 1871 Huggins acquired a speculum reflecting telescope from the Grubb Telescope Company. Honours and awards Honours Elected an International Honorary Member of the American Academy of Arts and Sciences in 1892. Elected an International Member of the American Philosophical Society in 1895. Knight Commander of the Order of the Bath (KCB) in the 1897 Diamond Jubilee Honours list on 22 June 1897. Huggins was among the original recipients of the Order of Merit (OM) in the 1902 Coronation Honours list published on 26 June 1902, and received the order from King Edward VII at Buckingham Palace on 8 August 1902. Elected an International Member of the United States National Academy of Sciences in 1904.
Awards Royal Medal (1866) Lalande Prize (1870) Gold Medal of the Royal Astronomical Society (jointly with William Allen Miller in 1867, solo in 1885) Rumford Medal (1880) Valz Prize (1882) Member of the Royal Swedish Academy of Sciences (1883) Janssen Medal (1888) Copley Medal (1898) Henry Draper Medal from the National Academy of Sciences (1901) Bruce Medal (1904) Named after him Huggins (lunar crater) Huggins (Martian crater) Asteroid 2635 Huggins Publications 1870: Spectrum analysis in its application to the heavenly bodies. Manchester, (Science lectures for the work people; series 2, no. 3) 1872: (editor) Spectrum analysis in its application to terrestrial substances and the physical constitution of heavenly bodies by H. Schellen, translated by Jane and Caroline Lassell, link from HathiTrust. 1899: (with Lady Huggins): An Atlas of Representative Stellar Spectra from 4870 to 3300, together with a discussion of the evolution order of the stars, and the interpretation of their spectra; preceded by a short history of the observatory. London, (Publications of Sir William Huggins's Observatory; v. 1) 1906: The Royal Society, or, Science in the state and in the schools. London. 1909: The Scientific Papers of Sir William Huggins; edited by Sir William and Lady Huggins. London, (Publications of Sir William Huggins's Observatory; v. 2) See also Planetary nebula#Observations Timeline of knowledge about the interstellar and intergalactic medium List of presidents of the Royal Society References External links Huggins, Sir William (1824–1910) Barbara J. Becker, Oxford Dictionary of National Biography, 2004 (subscription required) Audio description of Huggins' work Eclecticism, Opportunism, and the Evolution of a New Research Agenda: William and Margaret Huggins and the Origins of Astrophysics Barbara J. Becker William Wallace Campbell Sir William Huggins, K.C.B.,O.M., Astronomical Society of the Pacific link from Internet Archive. 1824 births 1910 deaths Place of birth missing British astrophysicists 19th-century British astronomers Fellows of the Royal Society Foreign associates of the National Academy of Sciences Members of the Order of Merit Members of the Royal Swedish Academy of Sciences People educated at the City of London School Presidents of the Royal Society Recipients of the Bruce Medal Recipients of the Copley Medal Recipients of the Gold Medal of the Royal Astronomical Society Royal Medal winners Knights Commander of the Order of the Bath Spectroscopists Presidents of the Royal Astronomical Society Photographers from London Academics from London Recipients of the Lalande Prize Members of the Royal Society of Sciences in Uppsala Members of the American Philosophical Society
William Huggins
[ "Physics", "Chemistry" ]
1,180
[ "Physical chemists", "Spectrum (physical sciences)", "Analytical chemists", "Spectroscopists", "Spectroscopy" ]
589,286
https://en.wikipedia.org/wiki/Pi%20bond
In chemistry, pi bonds (π bonds) are covalent chemical bonds, in each of which two lobes of an orbital on one atom overlap with two lobes of an orbital on another atom, and in which this overlap occurs laterally. Each of these atomic orbitals has an electron density of zero at a shared nodal plane that passes through the two bonded nuclei. This plane also is a nodal plane for the molecular orbital of the pi bond. Pi bonds can form in double and triple bonds but, in most cases, do not form in single bonds. The Greek letter π in their name refers to p orbitals, since the orbital symmetry of the pi bond is the same as that of the p orbital when seen down the bond axis. One common form of this sort of bonding involves p orbitals themselves, though d orbitals also engage in pi bonding. This latter mode forms part of the basis for metal–metal multiple bonding. Properties Pi bonds are usually weaker than sigma bonds. The C–C double bond, composed of one sigma and one pi bond, has a bond energy less than twice that of a C–C single bond, indicating that the stability added by the pi bond is less than the stability of a sigma bond; with typical textbook values of roughly 611 kJ/mol for C=C and 347 kJ/mol for C–C, for example, the pi component contributes only about 264 kJ/mol. From the perspective of quantum mechanics, this bond's weakness is explained by significantly less overlap between the component p-orbitals due to their parallel orientation. This is contrasted by sigma bonds, which form bonding orbitals directly between the nuclei of the bonding atoms, resulting in greater overlap and a strong sigma bond. Pi bonds result from overlap of atomic orbitals that are in contact through two areas of overlap. Most orbital overlaps that do not include the s-orbital, or that involve different internuclear axes (for example px + py overlap, which does not apply to an s-orbital), are generally pi bonds. Pi bonds are more diffuse bonds than sigma bonds. Electrons in pi bonds are sometimes referred to as pi electrons. Molecular fragments joined by a pi bond cannot rotate about that bond without breaking the pi bond, because rotation involves destroying the parallel orientation of the constituent p orbitals. For homonuclear diatomic molecules, bonding π molecular orbitals have only the one nodal plane passing through the bonded atoms, and no nodal planes between the bonded atoms. The corresponding antibonding, or π* ("pi-star"), molecular orbital is defined by the presence of an additional nodal plane between these two bonded atoms. Multiple bonds A typical double bond consists of one sigma bond and one pi bond; for example, the C=C double bond in ethylene (H2C=CH2). A typical triple bond, for example in acetylene (HC≡CH), consists of one sigma bond and two pi bonds in two mutually perpendicular planes containing the bond axis. Two pi bonds are the maximum that can exist between a given pair of atoms. Quadruple bonds are extremely rare and can be formed only between transition metal atoms; they consist of one sigma bond, two pi bonds and one delta bond. A pi bond is weaker than a sigma bond, but the combination of pi and sigma bond is stronger than either bond by itself. The enhanced strength of a multiple bond versus a single (sigma) bond is indicated in many ways, but most obviously by a contraction in bond lengths. For example, in organic chemistry, carbon–carbon bond lengths are about 154 pm in ethane, 134 pm in ethylene and 120 pm in acetylene. More bonds make the total bond shorter and stronger. Special cases A pi bond can exist between two atoms that do not have a net sigma-bonding effect between them.
In certain metal complexes, pi interactions between a metal atom and alkyne and alkene pi antibonding orbitals form pi-bonds. In some cases of multiple bonds between two atoms, there is no net sigma-bonding at all, only pi bonds. Examples include diiron hexacarbonyl (Fe2(CO)6), dicarbon (C2), and diborane(2) (B2H2). In these compounds the central bond consists only of pi bonding because of a sigma antibond accompanying the sigma bond itself. These compounds have been used as computational models for analysis of pi bonding itself, revealing that in order to achieve maximum orbital overlap the bond distances are much shorter than expected. See also Aromatic interaction Delta bond Molecular geometry Pi backbonding Pi interaction References Chemical bonding
Pi bond
[ "Physics", "Chemistry", "Materials_science" ]
920
[ "Chemical bonding", "Condensed matter physics", "nan" ]
589,303
https://en.wikipedia.org/wiki/Molecular%20orbital%20theory
In chemistry, molecular orbital theory (MO theory or MOT) is a method for describing the electronic structure of molecules using quantum mechanics. It was proposed early in the 20th century. MO theory explains the paramagnetic nature of O2, which valence bond theory cannot explain. In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule. Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms. Molecular orbital theory revolutionized the study of chemical bonding by approximating the states of bonded electrons – the molecular orbitals – as linear combinations of atomic orbitals (LCAO). These approximations are made by applying the density functional theory (DFT) or Hartree–Fock (HF) models to the Schrödinger equation. Molecular orbital theory and valence bond theory are the foundational theories of quantum chemistry. Linear combination of atomic orbitals (LCAO) method In the LCAO method, each molecule has a set of molecular orbitals. It is assumed that the molecular orbital wave function ψj can be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation: ψj = Σi cij χi, with the sum running over the atomic orbitals i = 1, …, n. One may determine the cij coefficients numerically by substituting this equation into the Schrödinger equation and applying the variational principle. The variational principle is a mathematical technique used in quantum mechanics to build up the coefficients of each atomic orbital basis. A larger coefficient means that the orbital basis is composed more of that particular contributing atomic orbital – hence, the molecular orbital is best characterized by that type. This method of quantifying orbital contribution as a linear combination of atomic orbitals is used in computational chemistry. An additional unitary transformation can be applied on the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent. Molecular orbital theory is used to interpret ultraviolet–visible spectroscopy (UV–VIS). Changes to the electronic structure of molecules can be seen by the absorbance of light at specific wavelengths. Assignments can be made to these signals indicated by the transition of electrons moving from one orbital at a lower energy to a higher energy orbital. The molecular orbital diagram for the final state describes the electronic nature of the molecule in an excited state. There are three main requirements for atomic orbital combinations to be suitable as approximate molecular orbitals. The atomic orbital combination must have the correct symmetry, which means that it must belong to the correct irreducible representation of the molecular symmetry group. Using symmetry-adapted linear combinations, or SALCs, molecular orbitals of the correct symmetry can be formed. Atomic orbitals must also overlap within space; they cannot combine to form molecular orbitals if they are too far away from one another. Finally, atomic orbitals must be at similar energy levels to combine as molecular orbitals, because if the energy difference is great, the change in energy when the molecular orbitals form is small.
Consequently, there is not enough reduction in energy of electrons to make significant bonding. History Molecular orbital theory was developed in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones. MO theory was originally called the Hund-Mulliken theory. According to physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones. This paper predicted a triplet ground state for the dioxygen molecule, which explained its paramagnetism before valence bond theory, which came up with its own explanation in 1931. The word orbital was introduced by Mulliken in 1932. By 1933, the molecular orbital theory had been accepted as a valid and useful theory. Erich Hückel applied molecular orbital theory to unsaturated hydrocarbon molecules starting in 1931 with his Hückel molecular orbital (HMO) method for the determination of MO energies for pi electrons, which he applied to conjugated and aromatic hydrocarbons. This method provided an explanation of the stability of molecules with six pi-electrons such as benzene. The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule. By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent. This rigorous approach is known as the Hartree–Fock method for molecules, although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations. This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods. The success of molecular orbital theory also spawned ligand field theory, which was developed during the 1930s and 1940s as an alternative to crystal field theory. Types of orbitals Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals resulting from bonds between atoms. These are often divided into three types: bonding, antibonding, and non-bonding. A bonding orbital concentrates electron density in the region between a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together. An anti-bonding orbital concentrates electron density "behind" each nucleus (i.e. on the side of each atom which is farthest from the other atom), and so tends to pull each of the two nuclei away from the other and actually weaken the bond between the two nuclei. Electrons in non-bonding orbitals tend to be associated with atomic orbitals that do not interact positively or negatively with one another, and electrons in these orbitals neither contribute to nor detract from bond strength. Molecular orbitals are further divided according to the types of atomic orbitals they are formed from. Chemical substances will form bonding interactions if their orbitals become lower in energy when they interact with each other.
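To make this concrete, here is a minimal LCAO sketch for two identical atomic orbitals (an H2-like two-level model; all parameter values are illustrative assumptions, not fitted data). Solving the generalized eigenvalue problem that results from the variational principle yields the familiar bonding and antibonding energies (α + β)/(1 + S) and (α − β)/(1 − S):

```python
# Minimal LCAO sketch: two identical atomic orbitals (an H2-like model).
# alpha, beta and S are illustrative parameters, not fitted values.
import numpy as np
from scipy.linalg import eigh

alpha = -13.6   # Coulomb integral (eV), illustrative
beta = -5.0     # resonance integral (eV), illustrative
S = 0.25        # overlap integral, illustrative

H = np.array([[alpha, beta], [beta, alpha]])  # Hamiltonian matrix
O = np.array([[1.0, S], [S, 1.0]])            # overlap matrix

# Generalized eigenproblem H c = E O c from the variational principle.
energies, coeffs = eigh(H, O)
print("bonding MO energy    :", energies[0])   # (alpha + beta) / (1 + S)
print("antibonding MO energy:", energies[1])   # (alpha - beta) / (1 - S)
print("bonding coefficients :", coeffs[:, 0])  # equal weights on both atoms
```

Note that with non-zero overlap S the antibonding level is destabilized more than the bonding level is stabilized, which is one way to see why a molecule with both levels filled, such as He2, is not bound.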
Different bonding orbitals are distinguished that differ by electron configuration (electron cloud shape) and by energy levels. The molecular orbitals of a molecule can be illustrated in molecular orbital diagrams. Common bonding orbitals are sigma (σ) orbitals, which are symmetric about the bond axis, and pi (π) orbitals, with a nodal plane along the bond axis. Less common are delta (δ) orbitals and phi (φ) orbitals, with two and three nodal planes respectively along the bond axis. Antibonding orbitals are signified by the addition of an asterisk. For example, an antibonding pi orbital may be shown as π*. Bond order Bond order is the number of chemical bonds between a pair of atoms. The bond order of a molecule can be calculated by subtracting the number of electrons in anti-bonding orbitals from the number in bonding orbitals and dividing the result by two. A molecule is expected to be stable if its bond order is larger than zero. It is adequate to consider only the valence electrons when determining the bond order: for molecules whose valence shell has principal quantum number n > 1, the MOs derived from the core 1s AOs are completely filled in both their bonding and anti-bonding combinations, so the difference in the numbers of electrons is zero and core electrons have no net effect on the bond order. From bond order, one can predict whether a bond between two atoms will form or not. Consider, for example, the hypothetical He2 molecule: from the molecular orbital diagram, with two electrons in the bonding orbital and two in the antibonding orbital, the bond order is (2 − 2)/2 = 0. That means no bond formation will occur between two He atoms, which is what is seen experimentally; He2 can be detected only under very low temperature and pressure in a molecular beam, and it has a binding energy of approximately 0.001 J/mol (the helium dimer is a van der Waals molecule). Besides, the strength of a bond can also be gauged from the bond order (BO). For example: For H2: bond order is (2 − 0)/2 = 1; bond energy is 436 kJ/mol. For H2+: bond order is (1 − 0)/2 = 1/2; bond energy is 171 kJ/mol. As the bond order of H2+ is smaller than that of H2, it should be less stable, which is observed experimentally and can be seen from the bond energies. Overview MOT provides a global, delocalized perspective on chemical bonding. In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as they are in eigenstates permitted by certain quantum rules. Thus, when excited with the requisite amount of energy through high-frequency light or other means, electrons can transition to higher-energy molecular orbitals. For instance, in the simple case of a hydrogen diatomic molecule, promotion of a single electron from a bonding orbital to an antibonding orbital can occur under UV radiation. This promotion weakens the bond between the two hydrogen atoms and can lead to photodissociation, the breaking of a chemical bond due to the absorption of light.
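As a concrete illustration of the bond-order bookkeeping above, a minimal sketch using the electron counts quoted in this section:

```python
# Minimal sketch of the bond-order formula from the Bond order section:
# (bonding electrons - antibonding electrons) / 2.

def bond_order(n_bonding, n_antibonding):
    return (n_bonding - n_antibonding) / 2

# Valence-electron configurations from the MO diagrams discussed above:
print("H2 :", bond_order(2, 0))  # 1.0 -> stable molecule
print("H2+:", bond_order(1, 0))  # 0.5 -> weaker bond than H2
print("He2:", bond_order(2, 2))  # 0.0 -> no conventional bond
```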
Although in MO theory some molecular orbitals may hold electrons that are more localized between specific pairs of atoms in the molecule, other orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding is far more delocalized in MO theory, which makes it more applicable than valence bond theory to resonant molecules that have equivalent non-integer bond orders. This makes MO theory more useful for the description of extended systems. Robert S. Mulliken, who actively participated in the advent of molecular orbital theory, considers each molecule to be a self-sufficient unit. He asserts in his article: ...Attempts to regard a molecule as consisting of specific atomic or ionic units held together by discrete numbers of bonding electrons or electron-pairs are considered as more or less meaningless, except as an approximation in special cases, or as a method of calculation […]. A molecule is here regarded as a set of nuclei, around each of which is grouped an electron configuration closely similar to that of a free atom in an external field, except that the outer parts of the electron configurations surrounding each nucleus usually belong, in part, jointly to two or more nuclei.... An example is the MO description of benzene, C6H6, which is an aromatic hexagonal ring of six carbon atoms and three double bonds. In this molecule, 24 of the 30 total valence electrons – 24 coming from carbon atoms and 6 coming from hydrogen atoms – are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C–C or C–H), similarly to the electrons in the valence bond description. However, in benzene the remaining six bonding electrons are located in three π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO that has equal orbital contributions from all six atoms. The other four electrons are in orbitals with vertical nodes at right angles to each other. As in the VB theory, all of these six delocalized π electrons reside in a larger space that exists above and below the ring plane. All carbon–carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the three molecular π orbitals combine and evenly spread the extra six electrons over six carbon atoms. In molecules such as methane, CH4, the eight valence electrons are found in four MOs that are spread out over all five atoms. It is possible to transform the MOs into four localized sp3 orbitals. Linus Pauling, in 1931, hybridized the carbon 2s and 2p orbitals so that they pointed directly at the hydrogen 1s basis functions and featured maximal overlap. However, the delocalized MO description is more appropriate for predicting ionization energies and the positions of spectral absorption bands. When methane is ionized, a single electron is taken from the valence MOs, which can come from the s bonding or the triply degenerate p bonding levels, yielding two ionization energies. In comparison, the explanation in valence bond theory is more complicated. When one electron is removed from an sp3 orbital, resonance is invoked between four valence bond structures, each of which has a single one-electron bond and three two-electron bonds. Triply degenerate T2 and nondegenerate A1 ionized states (CH4+) are produced from different linear combinations of these four structures. The difference in energy between the ionized state and the ground state gives the two ionization energies.
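The delocalized benzene π system described above can be made quantitative with the Hückel method mentioned in the History section. A minimal sketch follows; for illustration the Coulomb integral α is set to 0 and the resonance integral β to −1, so that energies are reported relative to α in units of |β|:

```python
# Minimal Hueckel sketch for the benzene pi system discussed above.
# Energies are in units of |beta| relative to alpha (alpha = 0, beta = -1).
import numpy as np

n = 6  # six carbon 2p orbitals in the ring
H = np.zeros((n, n))
for i in range(n):
    H[i, (i + 1) % n] = H[(i + 1) % n, i] = -1.0  # beta between ring neighbours

energies = np.sort(np.linalg.eigvalsh(H))
print(energies)  # [-2. -1. -1.  1.  1.  2.], i.e. alpha+2b, alpha+b (x2), ...
# The six pi electrons doubly occupy the three lowest MOs: the nondegenerate
# alpha+2*beta level (equal contributions from all six atoms) and the
# degenerate alpha+beta pair (orbitals with perpendicular vertical nodes),
# matching the delocalized picture described in the text.
```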
As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π orbitals are spread out in molecular orbitals over long distances in a molecule, resulting in light absorption at lower energies (the visible spectrum), which accounts for the characteristic colours of these substances. This and other spectroscopic data for molecules are well explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching. The same MO principles also naturally explain some electrical phenomena, such as high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite. This results from continuous band overlap of half-filled p orbitals and explains electrical conduction. MO theory recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances, and reside in very large molecular orbitals that cover an entire graphite sheet; some electrons are thus as free to move, and therefore conduct electricity in the sheet plane, as if they resided in a metal. See also Cis effect Configuration interaction Coupled cluster Frontier molecular orbital theory Ligand field theory (MO theory for transition metal complexes) Møller–Plesset perturbation theory Quantum chemistry computer programs Semi-empirical quantum chemistry methods Valence bond theory References External links Molecular Orbital Theory - Purdue University Molecular Orbital Theory - Sparknotes Molecular Orbital Theory - Mark Bishop's Chemistry Site Introduction to MO Theory - Queen Mary, London University Molecular Orbital Theory - a related terms table An introduction to Molecular Group Theory - Oxford University Chemical bonding Chemistry theories General chemistry Quantum chemistry
Molecular orbital theory
[ "Physics", "Chemistry", "Materials_science" ]
3,107
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Condensed matter physics", " molecular", "nan", "Atomic", "Chemical bonding", " and optical physics" ]
589,307
https://en.wikipedia.org/wiki/Jean-Christophe%20Yoccoz
Jean-Christophe Yoccoz (29 May 1957 – 3 September 2016) was a French mathematician. He was awarded a Fields Medal in 1994 for his work on dynamical systems. Yoccoz died on 3 September 2016 at the age of 59. Biography Yoccoz attended the Lycée Louis-le-Grand, during which time he was a silver medalist at the 1973 International Mathematical Olympiad and a gold medalist in 1974. He entered the École Normale Supérieure in 1975, and completed an agrégation in mathematics in 1977. After completing military service in Brazil, he completed his PhD under Michael Herman in 1985 at the Centre de mathématiques Laurent-Schwartz, a research unit jointly operated by the French National Center for Scientific Research (CNRS) and the École Polytechnique. He took up a position at the University of Paris-Sud in 1987, and became a professor at the Collège de France in 1997, where he remained until his death. He was a member of Bourbaki. Yoccoz won the Salem Prize in 1988. He was an invited speaker at the International Congress of Mathematicians in 1990 in Kyoto, and was awarded the Fields Medal at the International Congress of Mathematicians in 1994 in Zürich. He joined the French Academy of Sciences and the Brazilian Academy of Sciences in 1994, became a chevalier in the French Legion of Honor in 1995, and was awarded the Grand Cross of the Brazilian National Order of Scientific Merit in 1998. Mathematical work Yoccoz worked on the theory of dynamical systems. His contributions include advances in KAM theory and the introduction of the method of Yoccoz puzzles, a combinatorial technique which proved useful in the study of Julia sets. Notable publications Yoccoz, J.-C. Conjugaison différentiable des difféomorphismes du cercle dont le nombre de rotation vérifie une condition diophantienne. Ann. Sci. École Norm. Sup. (4) 17 (1984), no. 3, 333–359. doi:10.24033/asens.1475 Yoccoz, Jean-Christophe. Théorème de Siegel, nombres de Bruno et polynômes quadratiques. Petits diviseurs en dimension 1. Astérisque No. 231 (1995), 3–88. References 1957 births 2016 deaths 20th-century French mathematicians 21st-century French mathematicians Fields Medalists Scientists from Paris Academic staff of Paris-Sud University École Normale Supérieure alumni Members of the Brazilian Academy of Sciences Recipients of the Great Cross of the National Order of Scientific Merit (Brazil) Lycée Louis-le-Grand alumni Members of the French Academy of Sciences International Mathematical Olympiad participants Dynamical systems theorists Nicolas Bourbaki Academic staff of the Collège de France Knights of the Legion of Honour
Jean-Christophe Yoccoz
[ "Mathematics" ]
581
[ "Dynamical systems theorists", "Dynamical systems" ]
589,488
https://en.wikipedia.org/wiki/Perxenate
In chemistry, perxenates are salts of the yellow xenon-containing perxenate anion, XeO64−. This anion has octahedral molecular geometry, as determined by Raman spectroscopy, having O–Xe–O bond angles varying between 87° and 93°. The Xe–O bond length was determined by X-ray crystallography to be 1.875 Å. Synthesis Perxenates are synthesized by the disproportionation of xenon trioxide when dissolved in strong alkali: 2 XeO3 (aq) + 4 OH− (aq) → Xe (g) + XeO64− (aq) + O2 (g) + 2 H2O (l) When Ba(OH)2 is used as the alkali, barium perxenate can be crystallized from the resulting solution. Perxenic acid Perxenic acid is the unstable conjugate acid of the perxenate anion, formed by the solution of xenon tetroxide in water. It has not been isolated as a free acid, because under acidic conditions it rapidly decomposes into xenon trioxide and oxygen gas: 2 H4XeO6 → 2 XeO3 + O2 + 4 H2O Its extrapolated formula, H4XeO6, is inferred from the octahedral geometry of the perxenate ion (XeO64−) in its alkali metal salts. The pKa of aqueous perxenic acid has been indirectly calculated to be below 0, making it an extremely strong acid. Its first ionization yields the anion H3XeO6−, which has a pKa value of 4.29, still relatively acidic. The twice deprotonated species H2XeO62− has a pKa value of 10.81. Due to its rapid decomposition under acidic conditions as described above, however, it is most commonly encountered as perxenate salts, bearing the anion XeO64−. Properties Perxenic acid and the perxenate anion are both strong oxidizing agents, capable of oxidising silver(I), copper(II) and manganese(II) to (respectively) silver(III), copper(III), and permanganate. The perxenate anion is unstable in acidic solutions, being almost instantaneously reduced to HXeO4−. The sodium, potassium, and barium salts are soluble. Barium perxenate solution is used as the starting material for the synthesis of xenon tetroxide (XeO4) by mixing it with concentrated sulfuric acid: Ba2XeO6 (s) + 2 H2SO4 (l) → XeO4 (g) + 2 BaSO4 (s) + 2 H2O (l) Most metal perxenates are stable, except silver perxenate, which decomposes violently. Applications Sodium perxenate, Na4XeO6, can be used for the analytic separation of trace amounts of americium from curium. The separation involves the oxidation of Am3+ to Am4+ by sodium perxenate in acidic solution in the presence of La3+, followed by treatment with calcium fluoride, which forms insoluble fluorides with Cm3+ and La3+, but retains Am4+ and Pu4+ in solution as soluble fluorides. References Oxyanions Salts Xenon(VIII) compounds Octahedral compounds
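The stepwise acidity constants quoted above (first ionization essentially complete, pKa2 = 4.29, pKa3 = 10.81) determine which deprotonated form dominates at a given pH via the Henderson–Hasselbalch relation. The following Python sketch is illustrative only: the species labels, and the neglect of the fully protonated acid (pKa1 < 0) and of any further deprotonation, are simplifying assumptions based on the values in the text, not part of the source article.

def perxenate_speciation(pH, pKa2=4.29, pKa3=10.81):
    # Relative fractions of H3XeO6−, H2XeO6 2− and HXeO6 3− at a given pH.
    r2 = 10.0 ** (pH - pKa2)   # [H2XeO6 2−] / [H3XeO6−]
    r3 = 10.0 ** (pH - pKa3)   # [HXeO6 3−] / [H2XeO6 2−]
    h3, h2, h1 = 1.0, r2, r2 * r3
    total = h3 + h2 + h1
    return {"H3XeO6−": h3 / total, "H2XeO6 2−": h2 / total, "HXeO6 3−": h1 / total}

# Near neutral pH the doubly deprotonated species dominates; in strong alkali
# the more fully deprotonated form takes over.
print(perxenate_speciation(7.0))
print(perxenate_speciation(13.0))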
Perxenate
[ "Chemistry" ]
688
[ "Salts" ]
589,503
https://en.wikipedia.org/wiki/Xenon%20tetroxide
Xenon tetroxide is a chemical compound of xenon and oxygen with molecular formula XeO4, remarkable for being a relatively stable compound of a noble gas. It is a yellow crystalline solid that is stable below −35.9 °C; above that temperature it is very prone to exploding and decomposing into elemental xenon and oxygen (O2). All eight valence electrons of xenon are involved in the bonds with the oxygen, and the oxidation state of the xenon atom is +8. Oxygen is the only element that can bring xenon up to its highest oxidation state; even fluorine can only give XeF6 (+6). Two other short-lived xenon compounds with an oxidation state of +8, XeO3F2 and XeO2F4, are accessible by the reaction of xenon tetroxide with xenon hexafluoride. XeO3F2 and XeO2F4 can be detected with mass spectrometry. The perxenates are also compounds where xenon has the +8 oxidation state. Reactions At temperatures above −35.9 °C, xenon tetroxide is very prone to explosion, decomposing into xenon and oxygen gases with ΔH = −643 kJ/mol: XeO4 → Xe + 2 O2 Xenon tetroxide dissolves in water to form perxenic acid and in alkalis to form perxenate salts: XeO4 + 2 H2O → H4XeO6 XeO4 + 4 NaOH → Na4XeO6 + 2 H2O Xenon tetroxide can also react with xenon hexafluoride to give xenon oxyfluorides: XeO4 + XeF6 → XeOF4 + XeO3F2 XeO4 + 2 XeF6 → XeO2F4 + 2 XeOF4 Synthesis All syntheses start from the perxenates, which are accessible from the xenates through two methods. One is the disproportionation of xenates to perxenates and xenon: 2 HXeO4− + 2 OH− → XeO64− + Xe + O2 + 2 H2O The other is oxidation of the xenates with ozone in basic solution: HXeO4− + O3 + 3 OH− → XeO64− + O2 + 2 H2O Barium perxenate is reacted with sulfuric acid, and the unstable perxenic acid is dehydrated to give xenon tetroxide: Ba2XeO6 + 2 H2SO4 → 2 BaSO4 + H4XeO6 H4XeO6 → 2 H2O + XeO4 Any excess perxenic acid slowly undergoes a decomposition reaction to xenic acid and oxygen: 2 H4XeO6 → O2 + 2 H2XeO4 + 2 H2O References Xenon(VIII) compounds Inorganic compounds Oxides
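As a small worked illustration of the decomposition enthalpy quoted above (ΔH = −643 kJ/mol for XeO4 → Xe + 2 O2), the energy released per gram of solid can be estimated from the molar mass. A back-of-the-envelope Python sketch; the atomic masses are standard values, not taken from the article.

M_Xe, M_O = 131.29, 16.00            # g/mol, standard atomic masses
molar_mass_XeO4 = M_Xe + 4 * M_O     # about 195.3 g/mol
dH = -643.0                          # kJ/mol released on decomposition
energy_per_gram = abs(dH) / molar_mass_XeO4
print(f"molar mass of XeO4: {molar_mass_XeO4:.2f} g/mol")
print(f"energy released: {energy_per_gram:.2f} kJ/g")   # roughly 3.3 kJ/g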
Xenon tetroxide
[ "Chemistry" ]
594
[ "Inorganic compounds", "Oxides", "Salts" ]
589,516
https://en.wikipedia.org/wiki/Sexual%20maturity
Sexual maturity is the capability of an organism to reproduce. In humans, it is related to both puberty and adulthood. Puberty is the biological process of sexual maturation, while adulthood, the condition of being socially recognized as an independent person capable of giving consent and taking responsibility, generally implies sexual maturity (certain disorders of sexual development notwithstanding), but depends on other criteria, defined by specific cultural expectations. Most multicellular organisms are unable to sexually reproduce at birth (animals) or germination (e.g. plants): depending on the species, it may be days, weeks, or years until they have developed enough to be able to do so; in addition, certain cues may trigger an organism to become sexually mature. These may be external, such as drought, or fire, that triggers sexual maturation of certain plants, or internal, such as percentage of body fat (certain animals). Internal cues are not to be confused with hormones, the chemical signals that initiate cellular processes leading to sexual maturity, but the production and secretion of hormones is triggered by such cues. In some species, immature males may delay sexual maturation in the presence of another sexually mature male, as in the male chicken (rooster), due to the intense, often lethal, combat engaged in by mature males. The female honeybee only becomes sexually mature if it is fed a special substance ("royal jelly") during the larval stage. Role of reproductive organs Sexual maturity is brought about by a maturing of the reproductive organs and the production of gametes. It may also be accompanied by a growth spurt or other physical changes which distinguish the immature organism from its adult form. In animals these are termed secondary sex characteristics, and often represent an increase in sexual dimorphism. After sexual maturity is achieved, some organisms become infertile, or even change their sex. Some organisms are hermaphrodites and may or may not be able to "completely" mature and/or to produce viable offspring. Also, while in many organisms sexual maturity is strongly linked to age, many other factors are involved, and it is possible for some to display most or all of the characteristics of the adult form without being sexually mature. Conversely it is also possible for the "immature" form of an organism to reproduce. This is called progenesis, in which sexual development occurs faster than other physiological development (in contrast, the term neoteny refers to when non-sexual development is slowed – but the result is the same - the retention of juvenile characteristics into adulthood). Puberty vs. sexual maturity In some species, there is a difference between puberty and sexual maturity. For example, in bulls, puberty is characterized by the accelerated growth of the genital system, an increase in luteinizing hormone (LH) secretion, and the onset of spermatogenesis. Sexual maturity, however, signifies the attainment of full reproductive capacity, which may take up to 6–9 months after puberty. See also Gonadosomatic index Generation time References Gynaecological endocrinology Sexual reproduction Reproduction Sexuality Adulthood
Sexual maturity
[ "Biology" ]
631
[ "Behavior", "Sex", "Reproduction", "Biological interactions", "Sexual reproduction", "Sexuality" ]
589,548
https://en.wikipedia.org/wiki/Prothrombin%20time
The prothrombin time (PT) – along with its derived measures of prothrombin ratio (PR) and international normalized ratio (INR) – is an assay for evaluating the extrinsic pathway and common pathway of coagulation. This blood test is also called protime INR and PT/INR. They are used to determine the clotting tendency of blood, in conditions such as the measure of warfarin dosage, liver damage (cirrhosis), and vitamin K status. PT measures the following coagulation factors: I (fibrinogen), II (prothrombin), V (proaccelerin), VII (proconvertin), and X (Stuart–Prower factor). PT is often used in conjunction with the activated partial thromboplastin time (aPTT) which measures the intrinsic pathway and common pathway of coagulation. Laboratory measurement The reference range for prothrombin time depends on the analytical method used, but is usually around 12–13 seconds (results should always be interpreted using the reference range from the laboratory that performed the test), and the INR in absence of anticoagulation therapy is 0.8–1.2. The target range for INR in anticoagulant use (e.g. warfarin) is 2 to 3. In some cases, if more intense anticoagulation is thought to be required, the target range may be as high as 2.5–3.5 depending on the indication for anticoagulation. Methodology Prothrombin time is typically analyzed by a laboratory technologist on an automated instrument at 37 °C (as a nominal approximation of normal human body temperature). Blood is drawn into a test tube containing liquid sodium citrate, which acts as an anticoagulant by binding the calcium in a sample. The blood is mixed, then centrifuged to separate blood cells from plasma (as prothrombin time is most commonly measured using blood plasma). In newborns, a capillary whole blood specimen is used. A sample of the plasma is extracted from the test tube and placed into a measuring test tube (Note: for an accurate measurement, the ratio of blood to citrate needs to be fixed and should be labeled on the side of the measuring test tube by the manufacturing company; many laboratories will not perform the assay if the tube is underfilled and contains a relatively high concentration of citrate—the standardized dilution of 1 part anticoagulant to 9 parts whole blood is no longer valid). Next an excess of calcium (in a phospholipid suspension) is added to the test tube, thereby reversing the effects of citrate and enabling the blood to clot again. Finally, in order to activate the extrinsic / tissue factor clotting cascade pathway, tissue factor (also known as factor III) is added and the time the sample takes to clot is measured optically. Some laboratories use a mechanical measurement, which eliminates interferences from lipemic and icteric samples. Prothrombin time ratio The prothrombin time ratio is the ratio of a subject's measured prothrombin time (in seconds) to the normal laboratory reference PT. The PT ratio varies depending on the specific reagents used, and has been replaced by the INR. Elevated INR may be useful as a rapid and inexpensive diagnostic of infection in people with COVID-19. International normalized ratio The result (in seconds) for a prothrombin time performed on a normal individual will vary according to the type of analytical system employed. This is due to the variations between different types and batches of manufacturer's tissue factor used in the reagent to perform the test. The INR was devised to standardize the results. 
Each manufacturer assigns an ISI value (International Sensitivity Index) for any tissue factor they manufacture. The ISI value indicates how a particular batch of tissue factor compares to an international reference tissue factor. The ISI is usually between 0.94 and 1.4 for more sensitive and 2.0–3.0 for less sensitive thromboplastins. The INR is the ratio of a patient's prothrombin time to a normal (control) sample, raised to the power of the ISI value for the analytical system being used. PTnormal is established as the geometric mean of the prothrombin times (PT) of a reference sample group. Interpretation The prothrombin time is the time it takes plasma to clot after addition of tissue factor (obtained from animals such as rabbits, or recombinant tissue factor, or from brains of autopsy patients). This measures the quality of the extrinsic pathway (as well as the common pathway) of coagulation. The speed of the extrinsic pathway is greatly affected by levels of functional factor VII in the body. Factor VII has a short half-life and the carboxylation of its glutamate residues requires vitamin K. The prothrombin time can be prolonged as a result of deficiencies in vitamin K, warfarin therapy, malabsorption, or lack of intestinal colonization by bacteria (such as in newborns). In addition, poor factor VII synthesis (due to liver disease) or increased consumption (in disseminated intravascular coagulation) may prolong the PT. The INR is typically used to monitor patients on warfarin or related oral anticoagulant therapy. The normal range for a healthy person not using warfarin is 0.8–1.2, and for people on warfarin therapy an INR of 2.0–3.0 is usually targeted, although the target INR may be higher in particular situations, such as for those with a mechanical heart valve. If the INR is outside the target range, a high INR indicates a higher risk of bleeding, while a low INR suggests a higher risk of developing a clot. In patients on a vitamin K antagonist such as warfarin with supratherapeutic INR but INR less than 10 and no bleeding, it is enough to lower the dose or omit a dose, monitor the INR and resume the vitamin K antagonist at an adjusted lower dose when the target INR is reached. For people who need rapid reversal of the vitamin K antagonist – such as due to serious bleeding – or who need emergency surgery, the effects of warfarin can be reversed with vitamin K, prothrombin complex concentrate (PCC), or fresh frozen plasma (FFP). Factors determining accuracy Lupus anticoagulant, a circulating inhibitor predisposing for thrombosis, may skew PT results, depending on the assay used. Variations between various thromboplastin preparations have in the past led to decreased accuracy of INR readings, and a 2005 study suggested that despite international calibration efforts (by INR) there were still statistically significant differences between various kits, casting doubt on the long-term tenability of PT/INR as a measure for anticoagulant therapy. Indeed, a new prothrombin time variant, the Fiix prothrombin time, intended solely for monitoring warfarin and other vitamin K antagonists has been invented and recently become available as a manufactured test. The Fiix prothrombin time is only affected by reductions in factor II and/or factor X and this stabilizes the anticoagulant effect and appears to improve clinical outcome according to an investigator initiated randomized blinded clinical trial, The Fiix-trial. 
In this trial thromboembolism was reduced by 50% during long-term treatment and despite that bleeding was not increased. Statistics An estimated 800 million PT/INR assays are performed annually worldwide. Near-patient testing In addition to the laboratory method outlined above, near-patient testing (NPT) or home INR monitoring is becoming increasingly common in some countries. In the United Kingdom, for example, near-patient testing is used both by patients at home and by some anticoagulation clinics (often hospital-based) as a fast and convenient alternative to the lab method. After a period of doubt about the accuracy of NPT results, a new generation of machines and reagents seems to be gaining acceptance for its ability to deliver results close in accuracy to those of the lab. In a typical NPT set up, a small table-top device is used. A drop of capillary blood is obtained with an automated finger-prick, which is almost painless. This drop is placed on a disposable test strip with which the machine has been prepared. The resulting INR comes up on the display a few seconds later. A similar form of testing is used by people with diabetes for monitoring blood sugar levels, which is easily taught and routinely practiced. Local policy determines whether the patient or a coagulation specialist (pharmacist, nurse, general practitioner or hospital doctor) interprets the result and determines the dose of medication. In Germany and Austria, patients may adjust the medication dose themselves, while in the UK and the US this remains in the hands of a health care professional. A significant advantage of home testing is the evidence that patient self-testing with medical support and patient self-management (where patients adjust their own anticoagulant dose) improves anticoagulant control. A meta analysis which reviewed 14 trials showed that home testing led to a reduced incidence of complications (bleeding and thrombosis) and improved the time in the therapeutic range, which is an indirect measure of anticoagulant control. In 2022, a smartphone system was introduced by researchers to perform PT/INR testing in an inexpensive and accessible manner. It uses the vibration motor and camera ubiquitous on smartphones to track micro-mechanical movements of a copper particle and compute PT/INR values. Other advantages of the NPT approach are that it is fast and convenient, usually less painful, and offers, in home use, the ability for patients to measure their own INRs when required. Among its problems are that quite a steady hand is needed to deliver the blood to the exact spot, that some patients find the finger-pricking difficult, and that the cost of the test strips must also be taken into account. In the UK these are available on prescription so that elderly and unwaged people will not pay for them and others will pay only a standard prescription charge, which at the moment represents only about 20% of the retail price of the strips. In the US, NPT in the home is currently reimbursed by Medicare for patients with mechanical heart valves, while private insurers may cover for other indications. Medicare is now covering home testing for patients with chronic atrial fibrillation. Home testing requires a doctor's prescription and that the meter and supplies are obtained from a Medicare-approved Independent Diagnostic Testing Facility (IDTF). There is some evidence to suggest that NPT may be less accurate for certain patients, for example those who have the lupus anticoagulant. 
Guidelines International guidelines were published in 2005 to govern home monitoring of oral anticoagulation by the International Self-Monitoring Association for Oral Anticoagulation. The international guidelines study stated, "The consensus agrees that patient self-testing and patient self-management are effective methods of monitoring oral anticoagulation therapy, providing outcomes at least as good as, and possibly better than, those achieved with an anticoagulation clinic. All patients must be appropriately selected and trained. Currently, available self-testing/self-management devices give INR results which are comparable with those obtained in laboratory testing." Medicare coverage for home testing of INR has been expanded in order to allow more people access to home testing of INR in the US. The release on 19 March 2008 said, "[t]he Centers for Medicare & Medicaid Services (CMS) expanded Medicare coverage for home blood testing of prothrombin time (PT) International Normalized Ratio (INR) to include beneficiaries who are using the drug warfarin, an anticoagulant (blood thinner) medication, for chronic atrial fibrillation or venous thromboembolism." In addition, "those Medicare beneficiaries and their physicians managing conditions related to chronic atrial fibrillation or venous thromboembolism will benefit greatly through the use of the home test." History The prothrombin time was developed by Armand J. Quick and colleagues in 1935, and a second method was published by , also called the "p and p" or "prothrombin and proconvertin" method. It aided in the identification of the anticoagulants dicumarol and warfarin, and was used subsequently as a measure of activity for warfarin when used therapeutically. The INR was invented in the early 1980s by Tom Kirkwood working at the UK National Institute for Biological Standards and Control (and subsequently at the UK National Institute for Medical Research) to provide a consistent way of expressing the prothrombin time ratio, which had previously suffered from a large degree of variation between centres using different reagents. The INR was coupled to Dr Kirkwood's simultaneous invention of the International Sensitivity Index (ISI), which provided the means to calibrate different batches of thromboplastins to an international standard. The INR became widely accepted worldwide, especially after endorsement by the World Health Organization. See also D-dimer Partial thromboplastin time (PTT), or activated partial thromboplastin time (aPTT or APTT) Thrombin time (TT) Thrombodynamics test Thromboelastography Thrombus References External links PT and INR – Lab Tests Online Blood tests
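The INR described above is a simple calculation: the patient's prothrombin time divided by the laboratory's normal prothrombin time (the geometric mean of a reference sample group), raised to the power of the reagent's ISI. A minimal Python sketch; the function names and the example numbers are illustrative assumptions, not values from the article.

from math import prod

def geometric_mean(values):
    # Used to establish PT_normal from a reference sample group.
    return prod(values) ** (1.0 / len(values))

def inr(pt_patient, pt_normal, isi):
    # INR = (PT_patient / PT_normal) ** ISI
    return (pt_patient / pt_normal) ** isi

# Hypothetical example: reference PTs around 12-13 s, ISI of 1.1, patient PT of 30 s.
pt_normal = geometric_mean([12.1, 12.6, 13.0, 12.4, 12.9])
print(round(pt_normal, 2))                   # about 12.6 s
print(round(inr(30.0, pt_normal, 1.1), 2))   # about 2.6, within the usual 2.0-3.0 warfarin target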
Prothrombin time
[ "Chemistry" ]
2,901
[ "Blood tests", "Chemical pathology" ]
1,539,826
https://en.wikipedia.org/wiki/Suzhou%20numerals
The Suzhou numerals, also known as huāmǎ (花碼), are a numeral system used in China before the introduction of Hindu numerals. The system is also known by several other Chinese names. History The Suzhou numeral system is the only surviving variation of the rod numeral system. The rod numeral system is a positional numeral system used by the Chinese in mathematics. Suzhou numerals are a variation of the Southern Song rod numerals. Suzhou numerals were used as shorthand in number-intensive areas of commerce such as accounting and bookkeeping. At the same time, standard Chinese numerals were used in formal writing, akin to spelling out the numbers in English. Suzhou numerals were once popular in Chinese marketplaces, such as those in Hong Kong and Chinese restaurants in Malaysia before the 1990s, but they have gradually been supplanted by Hindu numerals. This is similar to what happened in Europe, where Roman numerals were used in ancient and medieval times for mathematics and commerce. Nowadays, the Suzhou numeral system is only used for displaying prices in Chinese markets or on traditional handwritten invoices. Symbols In the Suzhou numeral system, special symbols are used for digits instead of the Chinese characters. The digits of the Suzhou numerals are defined between U+3021 and U+3029 in Unicode. An additional three code points starting from U+3038 were added later. The symbols for 5 to 9 are derived from those for 0 to 4 by adding a vertical bar on top, which is similar to adding an upper bead which represents a value of 5 in an abacus. The resemblance makes the Suzhou numerals intuitive to use together with the abacus as the traditional calculation tool. The numbers one, two, and three are all represented by vertical bars. This can cause confusion when they appear next to each other. Standard Chinese ideographs are often used in this situation to avoid ambiguity. For example, "21" is written as "〢一" instead of "〢〡", which can be confused with "3" (〣). The first character of such sequences is usually represented by the Suzhou numeral, while the second character is represented by the Chinese ideograph. Notations The digits are positional. The full numerical notations are written in two lines to indicate numerical value, order of magnitude, and unit of measurement. Following the rod numeral system, the digits of the Suzhou numerals are always written horizontally from left to right, just as numbers are represented on an abacus, even when used within vertically written documents. For example: The first line contains the numerical values; in this example, "〤〇〢二" stands for "4022". The second line consists of Chinese characters that represent the order of magnitude and unit of measurement of the first digit in the numerical representation. In this case "十元", which stands for "ten yuan". When put together, it is then read as "40.22 yuan". Possible characters denoting order of magnitude include: wàn (万) for myriads (as a variant of the traditional character 萬, it is used for speed of writing in Suzhou numerals even before simplification of Chinese characters.) qiān (千) for thousands bǎi (百) for hundreds shí (十) for tens blank for ones Other possible characters denoting unit of measurement include: yuán (元) for dollar máo (毛 or 毫) for 10 cents lǐ (里) for the Chinese mile any other Chinese measurement unit Notice that the decimal point is implicit when the first digit is set at the ten position. Zero is represented by the character for zero (〇). Leading and trailing zeros are unnecessary in this system. 
This is very similar to the modern scientific notation for floating point numbers where the significant digits are represented in the mantissa and the order of magnitude is specified in the exponent. Also, the unit of measurement, with the first digit indicator, is usually aligned to the middle of the "numbers" row. Hangzhou misnomer In the Unicode standard version 3.0, these characters are incorrectly named Hangzhou style numerals. In the Unicode standard 4.0, an erratum was added which stated: All references to "Hangzhou" in the Unicode standard have been corrected to "Suzhou" except for the character names themselves, which cannot be changed once assigned, in accordance with the Unicode Stability Policy. (This policy allows software to use the names as unique identifiers.) See also Unicode numerals References Numerals Chinese mathematics Numeral systems Culture in Suzhou
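Because the Suzhou digits occupy a contiguous Unicode range (U+3021–U+3029 for 1–9, with 〇 for zero), converting a string of decimal digits is straightforward; the only subtlety is the disambiguation convention described above, under which the ideographs 一, 二, 三 alternate with the vertical-bar Suzhou forms when the digits 1–3 are adjacent. A minimal, illustrative Python sketch (it converts only the digit line, not the full two-line price notation):

SUZHOU = {"0": "\u3007"}                 # 〇, ideographic zero
for d in range(1, 10):                   # U+3021 .. U+3029 are the Suzhou digits 1-9
    SUZHOU[str(d)] = chr(0x3021 + d - 1)

IDEOGRAPH = {"1": "一", "2": "二", "3": "三"}

def to_suzhou(digits):
    out = []
    prev_was_bar = False                 # previous output was a vertical-bar Suzhou digit (1-3)
    for d in digits:
        if d in "123":
            if prev_was_bar:
                out.append(IDEOGRAPH[d]) # switch to the ideograph to avoid ambiguity
                prev_was_bar = False
            else:
                out.append(SUZHOU[d])
                prev_was_bar = True
        else:
            out.append(SUZHOU[d])
            prev_was_bar = False
    return "".join(out)

print(to_suzhou("21"))      # 〢一
print(to_suzhou("4022"))    # 〤〇〢二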
Suzhou numerals
[ "Mathematics" ]
953
[ "Numeral systems", "Numerals", "Mathematical objects", "Numbers" ]
1,539,909
https://en.wikipedia.org/wiki/Conrotatory%20and%20disrotatory
In organic chemistry, an electrocyclic reaction can either be classified as conrotatory or disrotatory based on the rotation at each end of the molecule. In conrotatory mode, both atomic orbitals of the end groups turn in the same direction (such as both atomic orbitals rotating clockwise or counter-clockwise). In disrotatory mode, the atomic orbitals of the end groups turn in opposite directions (one atomic orbital turns clockwise and the other counter-clockwise). The cis/trans geometry of the final product is directly decided by the difference between conrotation and disrotation. Determining whether a particular reaction is conrotatory or disrotatory can be accomplished by examining the molecular orbitals of each molecule and through a set of rules. Only two pieces of information are required to determine conrotation or disrotation using the set of rules: how many electrons are in the pi-system and whether the reaction is induced by heat or by light. This set of rules can also be derived from an analysis of the molecular orbitals for predicting the stereochemistry of electrocyclic reactions. Example of a photochemical reaction Analysis of a photochemical electrocyclic reaction involves the HOMO, the LUMO, and correlation diagrams. An electron is promoted into the LUMO, changing the frontier molecular orbital involved in the reaction. Example of a thermal reaction Suppose that trans-cis-trans-2,4,6-octatriene is converted to cis-dimethylcyclohexadiene under thermal conditions. Since the substrate octatriene is a "4n + 2" molecule, the Woodward–Hoffmann rules predict that the reaction happens in a disrotatory mechanism. Since thermal electrocyclic reactions occur in the HOMO, it is first necessary to draw the appropriate molecular orbitals. Next, the new carbon-carbon bond is formed by taking two of the p-orbitals and rotating them 90 degrees (see diagram). Since the new bond requires constructive overlap, the orbitals must be rotated in a certain way. Performing a disrotation will cause the two black lobes to overlap, forming a new bond. Therefore, the reaction with octatriene happens through a disrotatory mechanism. In contrast, if a conrotation had been performed then one white lobe would overlap with one black lobe. This would have caused destructive interference and no new carbon-carbon bond would have been formed. In addition, the cis/trans geometry of the product can also be determined. When the p-orbitals were rotated inwards it also caused the two methyl groups to rotate upwards. Since both methyls are pointing "up", the product is the cis isomer. References Carey, Francis A.; Sundberg, Richard J. (1984). Advanced Organic Chemistry, Part A: Structure and Mechanisms (2nd ed.). New York: Plenum Press. March, Jerry (1985). Advanced Organic Chemistry: Reactions, Mechanisms and Structure (3rd ed.). New York: John Wiley & Sons. Physical organic chemistry
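The selection rule used in the examples above (count the π electrons, then check whether the reaction is thermal or photochemical) can be stated compactly in code. A minimal Python sketch of the Woodward–Hoffmann rule for electrocyclic reactions; the function name and interface are illustrative, not taken from the references cited.

def electrocyclic_mode(pi_electrons, photochemical=False):
    # 4n pi electrons:   thermal -> conrotatory,  photochemical -> disrotatory
    # 4n+2 pi electrons: thermal -> disrotatory,  photochemical -> conrotatory
    if pi_electrons <= 0 or pi_electrons % 2:
        raise ValueError("expected a positive, even number of pi electrons")
    if pi_electrons % 4 == 0:
        return "disrotatory" if photochemical else "conrotatory"
    return "conrotatory" if photochemical else "disrotatory"

print(electrocyclic_mode(6))                       # disrotatory (the thermal octatriene example above)
print(electrocyclic_mode(4))                       # conrotatory (thermal, 4n case)
print(electrocyclic_mode(6, photochemical=True))   # conrotatory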
Conrotatory and disrotatory
[ "Chemistry" ]
627
[ "Physical organic chemistry" ]
1,539,973
https://en.wikipedia.org/wiki/Overlapping%20generations%20model
The overlapping generations (OLG) model is one of the dominating frameworks of analysis in the study of macroeconomic dynamics and economic growth. In contrast to the Ramsey–Cass–Koopmans neoclassical growth model, in which individuals are infinitely-lived, in the OLG model individuals live a finite length of time, long enough to overlap with at least one period of another agent's life. The OLG model is the natural framework for the study of: (a) life-cycle behavior (investment in human capital, work and saving for retirement), (b) the implications of the allocation of resources across generations, such as through Social Security, for income per capita in the long run, (c) the determinants of economic growth in the course of human history, and (d) the factors that triggered the fertility transition. History The construction of the OLG model was inspired by Irving Fisher's monograph The Theory of Interest. It was first formulated in 1947, in the context of a pure-exchange economy, by Maurice Allais, and more rigorously by Paul Samuelson in 1958. In 1965, Peter Diamond incorporated an aggregate neoclassical production into the model. This OLG model with production was further augmented with the development of the two-sector OLG model by Oded Galor, and the introduction of OLG models with endogenous fertility. Books devoted to the use of the OLG model include Azariadis' Intertemporal Macroeconomics and de la Croix and Michel's Theory of Economic Growth. Pure-exchange OLG model The most basic OLG model has the following characteristics: Individuals live for two periods; in the first period of life, they are referred to as the Young. In the second period of life, they are referred to as the Old. A number of individuals are born in every period; Nt denotes the number of individuals born in period t. Since the economy begins in period 1, in period 1 there is a group of people who are already old. They are referred to as the initial old, and the size of this initial old generation is normalized to 1. People do not die early, so the number of old people in period t equals Nt−1, the number born in the previous period. Population grows at a constant rate n: Nt = (1 + n)Nt−1. In the "pure exchange economy" version of the model, there is only one physical good and it cannot endure for more than one period. Each individual receives a fixed endowment of this good at birth. This endowment is denoted as y. In the "production economy" version of the model (see Diamond OLG model below), the physical good can be either consumed or invested to build physical capital. Output is produced from labor and physical capital. Each household is endowed with one unit of time, which is inelastically supplied on the labor market. Preferences over consumption streams are given by u(c1,t) + u(c2,t+1)/(1 + ρ), where ρ is the rate of time preference. OLG model with production Basic one-sector OLG model The pure-exchange OLG model was augmented with the introduction of an aggregate neoclassical production by Peter Diamond. In contrast to the Ramsey–Cass–Koopmans neoclassical growth model, in which individuals are infinitely-lived and the economy is characterized by a unique steady-state equilibrium, the OLG economy may be characterized by multiple steady-state equilibria, as was established by Oded Galor and Harl Ryder, and initial conditions may therefore affect the long-run evolution of income per capita. Since initial conditions in the OLG model may affect economic growth in the long run, the model was useful for the exploration of the convergence hypothesis. 
The economy has the following characteristics: Two generations are alive at any point in time, the young (age 1) and old (age 2). The size of the young generation in period t is given by Nt = N0(1 + n)^t. Households work only in the first period of their life and earn Y1,t income. They earn no income in the second period of their life (Y2,t+1 = 0). They consume part of their first period income and save the rest to finance their consumption when old. At the end of period t, the assets of the young are the source of the capital used for aggregate production in period t+1, so Kt+1 = Nt a1,t, where a1,t is the assets per young household after their consumption in period 1. In addition, there is no depreciation. The old in period t own the entire capital stock and consume it entirely, so dissaving by the old in period t is given by Nt−1 a1,t−1 = Kt. Labor and capital markets are perfectly competitive and the aggregate production technology is CRS, Y = F(K,L). Two-sector OLG model The one-sector OLG model was further augmented with the introduction of a two-sector OLG model by Oded Galor. The two-sector model provides a framework of analysis for the study of the sectoral adjustments to aggregate shocks and implications of international trade for the dynamics of comparative advantage. In contrast to the Uzawa two-sector neoclassical growth model, the two-sector OLG model may be characterized by multiple steady-state equilibria, and initial conditions may therefore affect the long-run position of an economy. OLG model with endogenous fertility Oded Galor and his co-authors develop OLG models where population growth is endogenously determined to explore: (a) the importance of the narrowing of the gender wage gap for the fertility decline, (b) the contribution of the rise in the return to human capital and the decline in fertility to the transition from stagnation to growth, and (c) the importance of population adjustment to technological progress for the emergence of the Malthusian trap. Dynamic inefficiency One important aspect of the OLG model is that the steady state equilibrium need not be efficient, in contrast to general equilibrium models where the first welfare theorem guarantees Pareto efficiency. Because there are an infinite number of agents in the economy (summing over future time), the total value of resources is infinite, so Pareto improvements can be made by transferring resources from each young generation to the current old generation, similar to the logic described in the Hilbert Hotel. Not every equilibrium is inefficient; the efficiency of an equilibrium is strongly linked to the interest rate, and the Cass criterion gives necessary and sufficient conditions for when an OLG competitive equilibrium allocation is inefficient. Another attribute of OLG-type models is that it is possible that 'over saving' can occur when capital accumulation is added to the model – a situation which could be improved upon by a social planner by forcing households to draw down their capital stocks. However, certain restrictions on the underlying technology of production and consumer tastes can ensure that the steady state level of saving corresponds to the Golden Rule savings rate of the Solow growth model and thus guarantee intertemporal efficiency. Along the same lines, most empirical research on the subject has noted that oversaving does not seem to be a major problem in the real world. 
In Diamond's version of the model, individuals tend to save more than is socially optimal, leading to dynamic inefficiency. Subsequent work has investigated whether dynamic inefficiency is a characteristic of some economies and whether government programs that transfer wealth from the young to the old do reduce dynamic inefficiency. Another fundamental contribution of OLG models is that they justify the existence of money as a medium of exchange. A system of expectations exists as an equilibrium in which each new young generation accepts money from the previous old generation in exchange for consumption. They do this because they expect to be able to use that money to purchase consumption when they are the old generation. See also Peter A. Diamond Karl Shell Macroeconomic model First welfare theorem Walrasian equilibrium References Further reading Azariadis, Costas (1993). Intertemporal Macroeconomics. Wiley-Blackwell. de la Croix, David; Michel, Philippe (2002). A Theory of Economic Growth: Dynamics and Policy in Overlapping Generations. Cambridge University Press. Economics models Economics and time
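To make the Diamond model's capital-accumulation dynamics concrete, here is a minimal numerical sketch in Python. It assumes standard textbook functional forms that the article does not specify – log utility, so the young save the fixed fraction β/(1 + β) of wage income, and Cobb–Douglas production Y = K^α L^(1−α) – so it illustrates the mechanism rather than reproducing any particular source.

# Capital per young worker evolves as k_{t+1} = s * w(k_t) / (1 + n),
# where the competitive wage under Cobb-Douglas is w(k) = (1 - alpha) * k**alpha.
alpha = 0.3                 # capital share (illustrative)
beta = 0.95                 # discount factor, beta = 1 / (1 + rho)
n = 0.02                    # population growth rate
s = beta / (1.0 + beta)     # saving rate of the young out of wage income (log utility)

def next_k(k):
    wage = (1.0 - alpha) * k ** alpha
    return s * wage / (1.0 + n)

k = 0.1                     # arbitrary initial capital per young worker
for t in range(60):
    k = next_k(k)

# Closed-form steady state for these functional forms:
k_star = (s * (1.0 - alpha) / (1.0 + n)) ** (1.0 / (1.0 - alpha))
print(f"simulated k after 60 periods: {k:.4f}")
print(f"analytical steady state:      {k_star:.4f}")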
Overlapping generations model
[ "Physics" ]
1,677
[ "Spacetime", "Economics and time", "Physical quantities", "Time" ]
1,540,070
https://en.wikipedia.org/wiki/Jacob%20Palis
Jacob Palis Jr. (born 15 March 1940) is a Brazilian mathematician and professor. Palis' research interests are mainly dynamical systems and differential equations. Some themes are global stability and hyperbolicity, bifurcations, attractors and chaotic systems. Biography Jacob Palis was born in Uberaba, Minas Gerais. His father was a Lebanese immigrant, and his mother was a Syrian immigrant. The couple had eight children (five sons and three daughters), and Jacob was the youngest. His father was a merchant, owner of a large store, and supported and funded the studies of his children. Palis said that he already enjoyed mathematics in his childhood. At 16, Palis moved to Rio de Janeiro to study engineering at the University of Brazil – now UFRJ. He was approved in first place in the entrance exam, but was not old enough to be accepted; he then had to take the university's entry exam again a year later, at which again he obtained first place. He completed the course in 1962 with honours, receiving the award for best student. In 1964, he moved to the United States. In 1966 he obtained his master's degree in mathematics under the guidance of Stephen Smale at the University of California, Berkeley, and in 1968 his PhD, with the thesis On Morse-Smale Diffeomorphisms, again with Smale as advisor. In 1968, he returned to Brazil and became a researcher at the Instituto Nacional de Matemática Pura e Aplicada (IMPA) in Rio de Janeiro. Since 1973 he has held a permanent position as professor at IMPA, where he was director from 1993 until 2003. He was Secretary-General of the Third World Academy of Sciences from 2004 to 2006, was elected its president in 2006, and remained in that position until December 2012. He was also president of the International Mathematical Union from 1999 to 2002. He was president of the Brazilian Academy of Sciences from 2007 to 2016. Palis has so far advised more than forty PhD students from more than ten countries, including Artur Oscar Lopes, Ricardo Mañé, Welington de Melo, Carlos Gustavo Moreira, Enrique Pujals and Marcelo Viana. Awards and honors Palis has received numerous medals and decorations. He is a foreign member of several academies of sciences, including the United States National Academy of Sciences, the French Academy of Sciences and the German Academy of Sciences Leopoldina. In 2005 Palis received the Legion of Honor. He is a member of the Norwegian Academy of Science and Letters. In 2010 he was awarded the Balzan Prize for his fundamental contributions to the mathematical theory of dynamical systems, which have been the basis for many applications in various scientific disciplines, such as in the study of oscillations. He is also a recipient of the 1988 TWAS Prize. Selected publications On Morse-Smale Dynamical Systems, Topology 19, 1969 (385–405). Structural Stability Theorems, with S. Smale, Proceedings of the Institute on Global Analysis, American Math. Society, Vol. XIV, 1970 (223–232). Cycles and Bifurcations Theory, with S. Newhouse, Asterisque 31, Société Mathématique de France, 1976 (44–140). The Topology of Holomorphic Flows near a Singularity, with C. Camacho and N. Kuiper, Publications Math. Institut Hautes Études Scientifiques 48, 1978 (5–38). Moduli of Stability and Bifurcation Theory, Proceedings of the International Congress of Mathematicians, Helsinki, 1978 (835–839). Stability of Parameterized Families of Gradient Vector Fields, with F. Takens, Annals of Mathematics 118, 1983 (383–421). 
Cycles and Measure of Bifurcation Sets for Two-Dimensional Diffeomorphisms, with F. Takens, Inventiones Mathematicae 82, 1985 (397–422). Homoclinic Orbits, Hyperbolic Dynamic and Fractional Dimensions of Cantor Sets (Lefschetz Centennial Conference) Contemporary Mathematics - American Mathematical Society, 58, 1987 (203–216). Hyperbolicity and Creation of Homoclinic Orbits, with F.Takens, Annals of Mathematics 125, 1987 (337–374). On the C1 Omega-Stability Conjecture, Publications Math. Institut Hautes Études Scientifiques, 66, 1988 (210–215). Bifurcations and Global Stability of Two-Parameter Families of Gradient Vector Fields with M. J. Carneiro, Publications Math. Institut Hautes Études Scientifiques 70, 1990 (103–168). "Homoclinic Tangencies for Hyperbolic Sets of Large Hausdorff Dimension", with J. C. Yoccoz, Acta Mathematica 172, 1994, pp. 91–136. High Dimension Diffeomorphisms Displaying Infinitely Many Sinks, with M. Viana, Annals of Mathematics 140, 1994 (207–250). A Global View of Dynamics and a Conjecture on the Denseness of Finitude of Attractors. Astérisque. France:, v. 261, pp. 339–351, 2000. Homoclinic tangencies and fractal invariants in arbitrary dimension, with C. Moreira and M. Viana, C R Ac Sc Paris., 2001. Nonuniformily hyperbolic horseshoes unleashed by homoclinic bifurcations and zero density of attractors, with J.-C. Yoccoz, C R Ac Sc Paris., 2001. Books published Geometric Theory of Dynamical Systems, with W. de Melo. Springer-Verlag, 1982; also published in Portuguese, Russian and Chinese. Hyperbolicity and Sensitive-Chaotic Dynamics at Homoclinic Bifurcations, Fractal Dimensions and Infinitely Many Attractors, with F. Takens. Cambridge Univ. Press, 1993; Second Edition, 1994. References External links Jacob Palis' homepage Jacob Palis International Balzan Prize Foundation Interview (in Portuguese) 1940 births Living people 21st-century Brazilian mathematicians Brazilian people of Syrian descent People from Uberaba University of California, Berkeley alumni Foreign associates of the National Academy of Sciences Members of the French Academy of Sciences Members of the Norwegian Academy of Science and Letters Members of the Brazilian Academy of Sciences Foreign members of the Russian Academy of Sciences Foreign members of the Chinese Academy of Sciences Foreign fellows of the Indian National Science Academy Dynamical systems theorists Instituto Nacional de Matemática Pura e Aplicada researchers 20th-century Brazilian mathematicians TWAS laureates Members of the German National Academy of Sciences Leopoldina Presidents of the International Mathematical Union
Jacob Palis
[ "Mathematics" ]
1,369
[ "Dynamical systems theorists", "Dynamical systems" ]
1,540,093
https://en.wikipedia.org/wiki/International%20Federation%20of%20Chemical%2C%20Energy%2C%20Mine%20and%20General%20Workers%27%20Unions
The International Federation of Chemical, Energy, Mine and General Workers' Unions (ICEM) was a global union federation of trade unions. As of November 2007, ICEM represented 467 industrial trade unions in 132 countries, claiming a membership of over 20 million workers. History The federation was founded in 1995 in Washington, DC, when the Miners' International Federation merged with the International Federation of Chemical and General Workers' Unions. In 2000, the small Universal Alliance of Diamond Workers merged into the federation, while in 2007, the World Federation of Industry Workers joined. In June 2012, affiliates of ICEM merged into the new global federation IndustriALL Global Union. The organization represented workers employed in a wide range of industries, including energy, mining, chemicals and bioscience, pulp and paper, rubber, gems and jewellery, glass, ceramics, cement, environmental services and others. Organization and activities The international headquarters of ICEM was variously based in Brussels, Belgium, and Geneva, Switzerland, where meetings of the Presidium and the executive committee were held. These governing bodies organized activities on a higher level while the regional offices organized regional conferences, workshops and solidarity actions. The Presidium oversaw the grand line of ICEM whilst the executive committee was more involved in the day-to-day routine of the organization. Every four years, starting in 1995, a worldwide congress was organized in which new committee members were elected and policies were changed. The congresses were held in the following order: Washington, DC, 1995. Durban, South Africa, in November 1999. Stavanger, Norway, in August 2003. Bangkok, Thailand, in November 2007. The regional offices dealt with specific geographical areas such as Africa, Asia Pacific, Europe, Latin America and the Caribbean and North America. The regional office of the Asia Pacific area was housed in Seoul, South Korea. This regional office was one of the most active offices of ICEM. ICEM supported many strikes in various regions including the strike of 7 October 1998 in Russia by communists and the Federation of Independent Trade Unions of Russia during the 1998 Russian Financial Crisis. Affiliates of ICEM have also organized protests in South Africa. ICEM worked together with human rights and environmental activists who were in conflict with multinationals such as Rio Tinto by raising awareness and funding research. ICEM published two quarterly bulletins called ICEM Info and ICEM Global which merged in 2002 to become ICEM Global Info. Research Richard Croucher and Elizabeth Cotton's book Global Unions, Global Business contains a case study of the ICEM's dealings with the Anglo-American mining company. This is in Chapter Eight. The book is published by Middlesex University Press (2009). . The archive of ICEM is housed in the International Institute of Social History in Amsterdam and is open to the public. Leadership General Secretaries 1995: Vic Thorpe 1999: Fred Higgs 2007: Manfred Warda Presidents 1995: Hans Berger Germany 1999/2003: John Maitland Australia 2005: Senzeni ZokwanaSouth Africa References External links Chemical industry trade unions Energy industry trade unions Mining trade unions Organisations based in Brussels Organisations based in Geneva Trade unions established in 1995 Trade unions disestablished in 2012
International Federation of Chemical, Energy, Mine and General Workers' Unions
[ "Chemistry" ]
640
[ "Chemical industry trade unions" ]
1,540,172
https://en.wikipedia.org/wiki/Research%20vessel
A research vessel (RV or R/V) is a ship or boat designed, modified, or equipped to carry out research at sea. Research vessels carry out a number of roles. Some of these roles can be combined into a single vessel but others require a dedicated vessel. Due to the demanding nature of the work, research vessels may be constructed around an icebreaker hull, allowing them to operate in polar waters. History The research ship had origins in the early voyages of exploration. By the time of James Cook's Endeavour, the essentials of what today we would call a research ship are clearly apparent. In 1766, the Royal Society hired Cook to travel to the Pacific Ocean to observe and record the transit of Venus across the Sun. The Endeavour was a sturdy vessel, well designed and equipped for the ordeals she would face, and fitted out with facilities for her "research personnel", Joseph Banks. As is common with contemporary research vessels, Endeavour also carried out more than one kind of research, including comprehensive hydrographic survey work. Some other notable early research vessels were HMS Beagle, RV Calypso, HMS Challenger, USFC Albatross, and the Endurance and Terra Nova. The names of early research vessels have been used to name later research vessels, as well as Space Shuttles. Modern types Hydrographic survey A hydrographic survey ship is a vessel designed to conduct hydrographic research and survey. Nautical charts are produced from this information to ensure safe navigation by military and civilian shipping. Hydrographic survey vessels also conduct seismic surveys of the seabed and the underlying geology. Apart from producing the charts, this information is useful for detecting geological features likely to bear oil or gas. These vessels usually mount equipment on a towed structure, for example, air cannons used to generate shock waves that sound strata beneath the seabed, or mounted on the keel, for example, a depth sounder. In practice, hydrographic survey vessels are often equipped to perform multiple roles. Some function also as oceanographic research ships. Naval hydrographic survey vessels often do naval research, for example, on submarine detection. An example of a hydrographic survey vessel is CCGS Frederick G. Creed. For an example of the employment of a survey ship see . Oceanographic research Oceanographic research vessels carry out research on the physical, chemical, and biological characteristics of water, the atmosphere, and climate, and to these ends carry equipment for collecting water samples from a range of depths, including the deep seas, as well as equipment for the hydrographic sounding of the seabed, along with numerous other environmental sensors. These vessels often also carry scientific divers and unmanned underwater vehicles. Since the requirements of both oceanographic and hydrographic research are very different from those of fisheries research, these boats often fulfill dual roles. Recent oceanographic research campaigns include GEOTRACES and NAAMES. Examples of an oceanographic research vessel include the NOAAS Ronald H. Brown and the Chilean Navy Cabo de Hornos. Fisheries research A fisheries research vessel requires platforms capable of towing different types of fishing nets, collecting plankton or water samples from a range of depths, and carrying acoustic fish-finding equipment. 
Fisheries research vessels are often designed and built along the same lines as a large fishing vessel, but with space given over to laboratories and equipment storage, as opposed to storage of the catch. An example of a fisheries research vessel is FRV Scotia. Naval research Naval research vessels investigate naval concerns, such as submarine and mine detection or sonar and weapons trials. An example of a naval research vessel is the Planet of the German Navy. Polar research Polar research vessels are constructed around an icebreaker hull, allowing them to engage in ice navigation and operate in polar waters. These vessels usually have dual roles, particularly in the Antarctic, where they function also as polar replenishment and supply vessels to the Antarctic research bases. Examples of polar research vessels include USCGC Polar Star, RSV Aurora Australis and RSV Nuyina. Oil exploration Oil exploration is performed in a number of ways, one of the most common being mobile drilling platforms or ships that are moved from area to area as needed to drill into the seabed to find out what deposits lie beneath it. See also European and American voyages of scientific exploration List of research vessels by country Marine research vessels Technical research ship Weather ship References Further reading OCEANIC International Research Vessels Database Unofficial (English Language) Homepage of the research icebreaker "ARA Almirante Irizar Australian research vessel facilities Canadian research fleet Alfred Wegener Institute for Polar and Marine Research – home of the "Polarstern" Ifremer Fleet National Institute of Oceanography and Experimental Geophysics – OGS Trieste ITALY NOAA Marine Operations Scripps Institution of Oceanography Woods Hole Oceanographic Institution (WHOI) WHOI web page University-National Oceanographic Laboratory System (UNOLS) research vessels (US academic fleet) Fisheries science Hydrography Oceanographic instrumentation Ship types
Research vessel
[ "Technology", "Engineering", "Environmental_science" ]
996
[ "Hydrography", "Hydrology", "Oceanographic instrumentation", "Measuring instruments" ]
1,540,206
https://en.wikipedia.org/wiki/3C%2048
3C48 is a quasar discovered in 1960; it was the second source conclusively identified as such. 3C48 was the first source in the Third Cambridge Catalogue of Radio Sources for which an optical identification was found by Allan Sandage and Thomas A. Matthews in 1960 through interferometry. In 1963 Jesse L. Greenstein and Thomas Matthews found that it had a redshift of 0.367, making it one of the highest redshift sources then known. It was not until 1982 that the surrounding faint galactic "nebulosity" was confirmed to have the same redshift as 3C48, cementing its identification as an object in a distant galaxy. This was also the first solid identification of a quasar with a surrounding galaxy at the same redshift. 3C 48 is one of four primary calibrators used by the Very Large Array (along with 3C 138 and 3C 147, and 3C 286). Visibilities of all other sources are calibrated using observed visibilities of one of these four calibrators. Nomenclature The name of the object “3C 48” consists of two significant parts. The first part, “3C,” means that the object belongs to the Third Cambridge Catalog of Radio Sources. The second part - “48” - is the serial number in the catalog ordered by right ascension. History 3C 48 was the first source in the Third Cambridge Catalog of Radio Sources to be optically identified by Allan Sandage and Thomas Matthews in 1960 using interferometry. Jesse Greenstein and Thomas Matthews found that it had a redshift of 0.367, one of the highest redshifts of any source known at the time. It was not until 1982 that a surrounding faint galactic "nebula" was measured to have the same redshift as 3C 48, confirming its identification as an object in a distant galaxy. This was also the first reliable identification of a quasar with a surrounding galaxy of the same redshift. References Quasars 048 Astronomical objects discovered in 1960 Triangulum 073991
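As a rough illustration of what a redshift of z = 0.367 corresponds to, treating it as a relativistic Doppler shift gives an equivalent recession velocity of about 30% of the speed of light. This is a simplification – cosmological redshifts are properly interpreted through an expanding-universe model, and the implied distance depends on the assumed cosmology – but the arithmetic is a quick Python sketch:

# Equivalent recession velocity for z = 0.367, using the relativistic Doppler relation
# v/c = ((1 + z)**2 - 1) / ((1 + z)**2 + 1).
z = 0.367
ratio = (1 + z) ** 2
beta = (ratio - 1) / (ratio + 1)     # v / c
print(f"v/c = {beta:.3f}")           # about 0.30, i.e. roughly 30% of the speed of light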
3C 48
[ "Astronomy" ]
441
[ "Triangulum", "Constellations" ]
1,540,218
https://en.wikipedia.org/wiki/Nitrene
In chemistry, a nitrene or imene () is the nitrogen analogue of a carbene. The nitrogen atom is uncharged and monovalent, so it has only 6 electrons in its valence level—two covalent bonded and four non-bonded electrons. It is therefore considered an electrophile due to the unsatisfied octet. A nitrene is a reactive intermediate and is involved in many chemical reactions. The simplest nitrene, HN, is called imidogen, and that term is sometimes used as a synonym for the nitrene class. Electron configuration In the simplest case, the linear N–H molecule (imidogen) has its nitrogen atom sp hybridized, with two of its four non-bonded electrons as a lone pair in an sp orbital and the other two occupying a degenerate pair of p orbitals. The electron configuration is consistent with Hund's rule: the low energy form is a triplet with one electron in each of the p orbitals and the high energy form is the singlet with an electron pair filling one p orbital and the other p orbital vacant. As with carbenes, a strong correlation exists between the spin density on the nitrogen atom which can be calculated in silico and the zero-field splitting parameter D which can be derived experimentally from electron spin resonance. Small nitrenes such as NH or CF3N have D values around 1.8 cm−1 with spin densities close to a maximum value of 2. At the lower end of the scale are molecules with low D (< 0.4) values and spin density of 1.2 to 1.4 such as 9-anthrylnitrene and 9-phenanthrylnitrene. Formation Because nitrenes are so reactive, they are rarely isolated. Instead, they are formed as reactive intermediates during a reaction. There are two common ways to generate nitrenes: From azides by thermolysis or photolysis, with expulsion of nitrogen gas. This method is analogous to the formation of carbenes from diazo compounds. From isocyanates, with expulsion of carbon monoxide. This method is analogous to the formation of carbenes from ketenes. Since formation of the nitrene typically starts from a diamagnetic precursor, the direct chemical product is a singlet nitrene, which then relaxes to its ground state triplet state. As has been shown for phenylazide as a model system, the direct photoproduct of photochemical-induced N2 loss can either be the singlet or triplet nitrene. By using a triplet sensitizer, the triplet nitrene can also be formed without initial formation of the singlet nitrene. Isolated Nitrenes Although highly reactive, some nitrenes could be isolated and characterized recently. In 2019, a triplet nitrene was isolated by Betley and Lancaster, stabilized by coordination to a copper center in a bulky ligand. Later on, Schneider and coworkers characterized Pd and Pt triplet metallonitrenes, where the organic residue is replaced by a metal. In 2024, the groups of Beckmann, Ye and Tan reported the isolation and characterization of organic triplet nitrenes, which are protected from chemical reactivity by an extremely bulky ligand. Reactions Nitrene reactions include: Nitrene C–H insertion. A nitrene can easily insert into a carbon to hydrogen covalent bond yielding an amine or amide. A singlet nitrene reacts with retention of configuration. 
In one study a nitrene, formed by oxidation of a carbamate with potassium persulfate, inserts into the palladium–nitrogen bond of the reaction product of palladium(II) acetate with 2-phenylpyridine, giving methyl N-(2-pyridylphenyl)carbamate in a cascade reaction:
A nitrene intermediate is suspected in this C–H insertion involving an oxime and acetic anhydride, leading to an isoindole:
Nitrene cycloaddition. With alkenes, nitrenes react to form aziridines, very often with nitrenoid precursors such as nosyl- or tosyl-substituted [N-(phenylsulfonyl)imino]phenyliodinane (PhI=NNs or PhI=NTs, respectively), but the reaction is known to work directly with the sulfonamide in the presence of a transition-metal-based catalyst such as copper, palladium, or gold:
In most cases, however, [N-(p-nitrophenylsulfonyl)imino]phenyliodinane (PhI=NNs) is prepared separately as follows:
Nitrene transfer takes place next:
In this particular reaction both the cis-stilbene illustrated and the trans form (not depicted) result in the same trans-aziridine product, suggesting a two-step reaction mechanism. The energy difference between triplet and singlet nitrenes can be very small in some cases, allowing interconversion at room temperature. Triplet nitrenes are thermodynamically more stable but react stepwise, allowing free rotation and thus producing a mixture of stereoisomers.
Arylnitrene ring-expansion and ring-contraction: Aryl nitrenes show ring expansion to 7-membered ring cumulenes, ring-opening reactions and nitrile formation, often via complex reaction paths. For instance, on photolysis the azide 2 in the scheme below, trapped in an argon matrix at 20 K, expels nitrogen to give the triplet nitrene 4 (observed experimentally with ESR and ultraviolet-visible spectroscopy), which is in equilibrium with the ring-expansion product 6. The nitrene ultimately converts to the ring-opened nitrile 5 through the diradical intermediate 7. In a high-temperature reaction, flash vacuum thermolysis (FVT) at 500–600 °C also yields the nitrile 5 in 65% yield.
Nitreno radicals
For several compounds containing both a nitrene group and a free radical group, an ESR high-spin quartet has been recorded (matrix, cryogenic temperatures). One of these has an amine oxide radical group incorporated; another system has a carbon radical group. In this system one of the nitrogen unpaired electrons is delocalized in the aromatic ring, making the compound a σ–σ–π triradical. A carbene nitrogen radical (imidyl radical) resonance structure makes a contribution to the total electronic picture.
References
Reactive intermediates Free radicals Functional groups Nitrogen compounds
Nitrene
[ "Chemistry", "Biology" ]
1,392
[ "Free radicals", "Functional groups", "Octet-deficient functional groups", "Organic compounds", "Senescence", "Biomolecules", "Physical organic chemistry", "Reactive intermediates" ]
1,540,333
https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius%20theorem
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude and that eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components, and the theorem also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.
Statement
Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers Ak as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to Perron (1907) and concerned positive matrices. Later, Frobenius (1912) found their extension to certain classes of non-negative matrices.
Positive matrices
Let A = (aij) be an n × n positive matrix: aij > 0 for 1 ≤ i, j ≤ n. Then the following statements hold.
There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue, principal eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, the spectral radius is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number.
The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for AT, the transpose of A.)
There exists an eigenvector v = (v1,...,vn)T of A with eigenvalue r such that all components of v are positive: A v = r v, vi > 0 for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w : wT A = wT r, wi > 0.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron–Frobenius eigenvector, leading eigenvector, principal eigenvector or dominant eigenvector.
There are no other positive (moreover non-negative) eigenvectors except positive multiples of v (respectively, no other non-negative left eigenvectors except positive multiples of w), i.e., all other eigenvectors must have at least one negative or non-real component.
limk→∞ Ak/rk = v wT, where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix v wT is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue.
A "Min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of [Ax]i / xi taken over i. Then g is a real valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue. Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then, Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then,Friedland, S., 1981. Convex spectral functions. Linear and multilinear algebra, 9(4), pp.299-316. Fiedler formula: The Perron–Frobenius eigenvalue satisfies the inequalities All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer chapter 8 claims 8.2.11–15 page 667 and exercises 8.2.5,7,9 pages 668–669. The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while . Non-negative matrices There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues. However, for the example , the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for , the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive. However, Frobenius found a special subclass of non-negative matrices — irreducible matrices — for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form , where is a real strictly positive eigenvalue, and ranges over the complex h th roots of 1 for some positive integer h called the period of the matrix. The eigenvector corresponding to has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below. Classification of matrices Let A be a n × n square matrix over field F. The matrix A is irreducible if any of the following equivalent properties holds.Definition 1 : A does not have non-trivial invariant coordinate subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of Fn. More explicitly, for any linear subspace spanned by standard basis vectors ei1 , ..., eik, 0 < k < n its image under the action of A is not contained in the same subspace.Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P: where E and G are non-trivial (i.e. of size greater than zero) square matrices.Definition 3: One can associate with a matrix A a certain directed graph GA. It has n vertices labeled 1,...,n, and there is an edge from vertex i to vertex j precisely when aij ≠ 0. Then the matrix A is irreducible if and only if its associated graph GA is strongly connected. 
If F is the field of real or complex numbers, then we also have the following condition.
Definition 4: The group representation of (R, +) on Rn or of (C, +) on Cn given by t ↦ exp(tA) has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.)
A matrix is reducible if it is not irreducible.
A real matrix A is primitive if it is non-negative and its mth power is positive for some natural number m (i.e. all entries of Am are positive).
Let A be real and non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (Am)ii > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in GA (see Kitchens page 16). The period is also called the index of imprimitivity (Meyer page 674) or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices.
All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period. Results for non-negative matrices were first obtained by Frobenius in 1912.
Perron–Frobenius theorem for irreducible non-negative matrices
Let A be an irreducible non-negative n × n matrix with period h and spectral radius ρ(A) = r. Then the following statements hold.
The number r is a positive real number and it is an eigenvalue of the matrix A. It is called the Perron–Frobenius eigenvalue.
The Perron–Frobenius eigenvalue r is simple. Both right and left eigenspaces associated with r are one-dimensional.
A has a right eigenvector v and a left eigenvector w, both with eigenvalue r and with all components positive. Moreover, the only eigenvectors whose components are all positive are those associated with the eigenvalue r.
The matrix A has exactly h (where h is the period) complex eigenvalues with absolute value r. Each of them is a simple root of the characteristic polynomial and is the product of r with an h-th root of unity.
Let ω = 2π/h. Then the matrix A is similar to eiωA; consequently the spectrum of A is invariant under multiplication by eiω (i.e. under rotations of the complex plane by the angle ω).
If h > 1 then there exists a permutation matrix P such that PAP−1 has a block cyclic form, with square zero matrices along the main diagonal, blocks A1, ..., Ah−1 immediately above them, and a block Ah in the lower-left corner, where 0 denotes a zero matrix.
Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real valued function whose maximum is the Perron–Frobenius eigenvalue.
The Perron–Frobenius eigenvalue satisfies the inequalities mini Σj aij ≤ r ≤ maxi Σj aij.
The example shows that the (square) zero-matrices along the diagonal may be of different sizes, the blocks Aj need not be square, and h need not divide n.
Further properties
Let A be an irreducible non-negative matrix, then:
(I+A)n−1 is a positive matrix (Meyer claim 8.3.5 p. 672). For a non-negative A, this is also a sufficient condition for irreducibility.
Wielandt's theorem. If |B|<A, then ρ(B)≤ρ(A). If equality holds (i.e.
if μ = ρ(A)eiφ is an eigenvalue of B), then B = eiφ DAD−1 for some diagonal unitary matrix D (i.e. the diagonal elements of D equal eiΘl and the off-diagonal elements are zero).
If some power Aq is reducible, then it is completely reducible, i.e. for some permutation matrix P, the matrix PAqP−1 is block diagonal, where the diagonal blocks Ai are irreducible matrices having the same maximal eigenvalue. The number of these matrices d is the greatest common divisor of q and h, where h is the period of A.
If c(x) = xn + ck1 xn-k1 + ck2 xn-k2 + ... + cks xn-ks is the characteristic polynomial of A in which only the non-zero terms are listed, then the period of A equals the greatest common divisor of k1, k2, ... , ks.
Cesàro averages: limk→∞ (1/k) Σi=0..k−1 Ai/ri = v wT, where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix v wT is the spectral projection corresponding to r, the Perron projection.
Let r be the Perron–Frobenius eigenvalue; then the adjoint matrix for (r−A) is positive.
If A has at least one non-zero diagonal element, then A is primitive.
If 0 ≤ A < B, then rA ≤ rB. Moreover, if B is irreducible, then the inequality is strict: rA < rB.
A matrix A is primitive provided it is non-negative and Am is positive for some m, and hence Ak is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A: If A is a non-negative primitive matrix of size n, then An2 − 2n + 2 is positive. Moreover, this is the best possible result, since for the matrix M below, the power Mk is not positive for every k < n2 − 2n + 2, since (Mn2 − 2n + 1)1,1 = 0.
Applications
Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The examples given below only scratch the surface of its vast application domain.
Non-negative matrices
The Perron–Frobenius theorem does not apply directly to general non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix) PAP−1, where P is a permutation matrix and each diagonal block Bi is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of PAP−1; moreover the spectrum of A is just the union of the spectra of the Bi.
The invertibility of A can also be studied. The inverse of PAP−1 (if it exists) must have diagonal blocks of the form Bi−1, so if any Bi isn't invertible then neither is PAP−1 or A. Conversely let D be the block-diagonal matrix corresponding to PAP−1, in other words PAP−1 with the off-diagonal blocks zeroised. If each Bi is invertible then so is D, and D−1(PAP−1) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if Nk = 0 the inverse of 1 − N is 1 + N + N2 + ... + Nk−1), so PAP−1 and A are both invertible. Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible Bi. For example, the Perron root is the maximum of the ρ(Bi). While there will still be eigenvectors with non-negative components, it is quite possible that none of these will be positive.
Stochastic matrices
A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible. If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above.
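To make the preceding remark concrete, here is a small numerical sketch (an illustration added here, not part of the original text; the transition matrix is arbitrary). It checks that a row-stochastic matrix has the all-ones vector as a right eigenvector with eigenvalue 1 = ρ(A), and recovers the stationary distribution as the normalized positive left Perron eigenvector.

```python
import numpy as np

# Hypothetical 3-state row-stochastic (Markov transition) matrix; each row sums to 1.
A = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

ones = np.ones(3)
print(np.allclose(A @ ones, ones))                            # eigenvalue 1 with eigenvector (1, 1, 1)
print(np.isclose(np.max(np.abs(np.linalg.eigvals(A))), 1.0))  # the spectral radius is 1

# Stationary distribution = positive left Perron eigenvector, normalized to sum to 1.
vals, vecs = np.linalg.eig(A.T)
pi = np.real(vecs[:, np.argmax(vals.real)])
pi = pi / pi.sum()
print(pi, np.allclose(pi @ A, pi))                            # pi is invariant: pi A = pi

# This particular example is irreducible and aperiodic (primitive), so the powers of A
# converge to a matrix whose rows all equal pi.
print(np.linalg.matrix_power(A, 50))
```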
It might not be the only eigenvalue on the unit circle, and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.
Algebraic graph theory
The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative n-square matrix is the graph with vertices numbered 1, ..., n and an arc from i to j if and only if Aij ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible.
Finite Markov chains
The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).
Compact operators
More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.
Proof methods
A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950). He used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. Another proof is based on the spectral theory, from which part of the arguments are borrowed.
Perron root is strictly maximal eigenvalue for positive (and primitive) matrices
If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (the Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues; hence r is the spectral radius of A. This statement does not hold for general non-negative irreducible matrices, which have h eigenvalues with the same absolute value as r, where h is the period of A.
Proof for positive matrices
Let A be a positive matrix, and assume that its spectral radius ρ(A) = 1 (otherwise consider A/ρ(A)). Hence, there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less than or equal to 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer m such that Am is a positive matrix and the real part of λm is negative. Let ε be half the smallest diagonal entry of Am and set T = Am − εI, which is yet another positive matrix. Moreover, if Ax = λx then Amx = λmx, thus λm − ε is an eigenvalue of T. Because of the choice of m this point lies outside the unit disk, consequently ρ(T) > 1. On the other hand, all the entries in T are positive and less than or equal to those in Am, so by Gelfand's formula ρ(T) ≤ ρ(Am) ≤ ρ(A)m = 1. This contradiction means that λ = 1 and there can be no other eigenvalues on the unit circle.
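The claims of the theorem for positive matrices, and the power iteration used in the next subsection, can be checked numerically. The following sketch (added for illustration; the 5 × 5 matrix is an arbitrary random positive matrix, not one from the text) verifies that the spectral radius is attained by a unique real, simple eigenvalue with a strictly positive eigenvector, that power iteration converges to this Perron eigenpair, and that the Collatz–Wielandt function f never exceeds r.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(5, 5))       # an arbitrary strictly positive matrix

# The spectrum: exactly one eigenvalue attains the spectral radius, and it is real and positive.
eigvals = np.linalg.eigvals(A)
r = max(eigvals, key=abs)
assert abs(r.imag) < 1e-12 and r.real > 0
assert np.sum(np.isclose(np.abs(eigvals), abs(r))) == 1   # strict dominance: a simple peripheral eigenvalue

# Power iteration b_{k+1} = A b_k / |A b_k| converges to the strictly positive Perron eigenvector.
b = np.ones(5)
for _ in range(200):
    b = A @ b
    b /= np.linalg.norm(b)
assert np.all(b > 0)
assert np.allclose(A @ b, r.real * b)

# Collatz–Wielandt: f(x) = min_i (Ax)_i / x_i is at most r for every non-negative non-zero x,
# with equality at the Perron eigenvector.
def f(x):
    mask = x > 0
    return np.min((A @ x)[mask] / x[mask])

x = rng.uniform(0.5, 2.0, size=5)
print(f(x) <= r.real + 1e-12, np.isclose(f(b), r.real))
```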
Exactly the same arguments can be applied to the case of primitive matrices; we just need to mention the following simple lemma, which clarifies the properties of primitive matrices.
Lemma: Given a non-negative A, assume there exists m such that Am is positive; then Am+1, Am+2, Am+3, ... are all positive. Indeed, Am+1 = AAm, so it can have a zero element only if some row of A is entirely zero, but in that case the same row of Am would be zero. Applying the same arguments as above to primitive matrices proves the main claim.
Power method and the positive eigenpair
For a positive (or more generally irreducible non-negative) matrix A the dominant eigenvector is real and strictly positive (for a general non-negative A it is, respectively, non-negative). This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix A the sequence of vectors bk+1 = Abk / |Abk| converges to the eigenvector with the maximum eigenvalue. (The initial vector b0 can be chosen arbitrarily, apart from a set of measure zero.) Starting with a non-negative vector b0 produces the sequence of non-negative vectors bk. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for A, proving the assertion. The corresponding eigenvalue is non-negative.
The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this. Second, strict positivity of all of the components of the eigenvector must be ensured in the case of irreducible matrices. This follows from the following fact, which is of independent interest:
Lemma: given a positive (or more generally an irreducible non-negative) matrix A and any non-negative eigenvector v for A, v is necessarily strictly positive and the corresponding eigenvalue is also strictly positive.
Proof. One of the definitions of irreducibility for non-negative matrices is that for all indexes i, j there exists m such that (Am)ij is strictly positive. Given a non-negative eigenvector v with at least one strictly positive component, say the i-th, the corresponding eigenvalue is strictly positive: indeed, for n such that (An)ii > 0 one has rnvi = (Anv)i ≥ (An)iivi > 0, hence r is strictly positive. The eigenvector is also strictly positive: for any index j, taking m such that (Am)ji > 0 gives rmvj = (Amv)j ≥ (Am)jivi > 0, hence vj is strictly positive.
Multiplicity one
This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to the Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer. Consider a strictly positive eigenvector v corresponding to r and another eigenvector w with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − rI has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero, otherwise α would not be maximal. The vector u is an eigenvector. It is non-negative, and by the lemma described in the previous section non-negativity implies strict positivity for any non-negative eigenvector. On the other hand, as above, at least one component of u is zero.
The contradiction implies that w does not exist.
Case: There are no Jordan blocks corresponding to the Perron–Frobenius eigenvalue r or to the other eigenvalues which have the same absolute value.
If there is a Jordan block, then the infinity norm ‖(A/r)k‖∞ tends to infinity for k → ∞, but that contradicts the existence of the positive eigenvector. Assume r = 1 (otherwise consider A/r). Let v be a Perron–Frobenius strictly positive eigenvector, so Av = v; then mini(vi) ‖Ak‖∞ ≤ ‖Akv‖∞ = ‖v‖∞, hence ‖Ak‖∞ ≤ maxi(vi)/mini(vi). So ‖Ak‖∞ is bounded for all k. This gives another proof that there are no eigenvalues which have greater absolute value than the Perron–Frobenius one. It also contradicts the existence of a Jordan block for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of the Jordan block implies that ‖Ak‖∞ is unbounded. For a two by two matrix: hence ‖Jk‖∞ = |k + λ| (for |λ| = 1), so it tends to infinity when k does so. Since Jk = C−1AkC, then ‖Ak‖ ≥ ‖Jk‖ / (‖C−1‖ ‖C‖), so it also tends to infinity. The resulting contradiction implies that there are no Jordan blocks for the corresponding eigenvalues.
Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is a simple root of the characteristic polynomial. In the case of nonprimitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work.
No other non-negative eigenvectors
Given a positive (or more generally irreducible non-negative) matrix A, the Perron–Frobenius eigenvector is the only (up to multiplication by a constant) non-negative eigenvector for A. Other eigenvectors must contain negative or complex components, since eigenvectors for different eigenvalues are orthogonal in some sense; two positive eigenvectors cannot be orthogonal, so they would have to correspond to the same eigenvalue, but the eigenspace for the Perron–Frobenius eigenvalue is one-dimensional. Assume there exists an eigenpair (λ, y) for A such that the vector y is positive, and let (r, x) be given, where x is the left Perron–Frobenius eigenvector for A (i.e. an eigenvector for AT). Then rxTy = (xTA)y = xT(Ay) = λxTy; also xTy > 0, so one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one.
Collatz–Wielandt formula
Given a positive (or more generally irreducible non-negative) matrix A, one defines the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real-valued function whose maximum is the Perron–Frobenius eigenvalue r. For the proof we denote the maximum of f by the value R. The proof requires showing that R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, we consider an arbitrary nonnegative vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now, we use the positive left eigenvector w for A for the Perron–Frobenius eigenvalue r (i.e. the right eigenvector for AT, so that wTA = rwT); then ξwTx = wT(ξx) ≤ wT(Ax) = (wTA)x = rwTx. Hence f(x) = ξ ≤ r, which implies R ≤ r.
Perron projection as a limit: Ak/rk
Let A be a positive (or more generally, primitive) matrix, and let r be its Perron–Frobenius eigenvalue. The limit of Ak/rk for k → ∞ exists; denote it by P. P is a projection operator: P2 = P, which commutes with A: AP = PA.
The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for PT—by the Perron–Frobenius eigenvector w for AT). P = vwT, where v, w are normalized such that wTv = 1. Hence P is a positive operator. Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices.
Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is a simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above.)
Given that M is diagonalizable, M is conjugate to a diagonal matrix with eigenvalues r1, ..., rn on the diagonal (denote r1 = r). The matrix Mk/rk will be conjugate to the diagonal matrix with entries (1, (r2/r)k, ..., (rn/r)k), which tends to diag(1, 0, 0, ..., 0) as k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable).
The projection and commutativity properties are elementary corollaries of the definition: MMk/rk = Mk/rk M; P2 = lim M2k/r2k = P. The third fact is also elementary: M(Pu) = M lim Mk/rk u = lim rMk+1/rk+1u, so taking the limit yields M(Pu) = r(Pu), so the image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions.
Denote by v the r-eigenvector for M (and by w the one for MT). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of wT. So P takes the form avwT for some a. Hence its trace equals awTv. The trace of a projector equals the dimension of its image; it was proved before that the image is not more than one-dimensional, and from the definition one sees that P acts identically on the r-eigenvector for M, so it is one-dimensional. Thus choosing the normalization wTv = 1 implies P = vwT.
Inequalities for Perron–Frobenius eigenvalue
For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality r ≤ maxi Σj aij. This is not specific to non-negative matrices: for any matrix A with an eigenvalue λ it is true that |λ| ≤ maxi Σj |aij|. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct: any induced matrix norm satisfies the inequality ‖A‖ ≥ |λ| for any eigenvalue λ, because, if x is a corresponding eigenvector, ‖A‖ ≥ ‖Ax‖/‖x‖ = |λ|. The infinity norm of a matrix is the maximum of its row sums, ‖A‖∞ = maxi Σj |aij|. Hence the desired inequality is exactly ‖A‖∞ ≥ |λ| applied to the non-negative matrix A.
Another inequality is mini Σj aij ≤ r. This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say wi) is 1. Then r = (Aw)i ≥ the sum of the numbers in row i of A. Thus the minimum row sum gives a lower bound for r, and this observation can be extended to all non-negative matrices by continuity. Another way to argue it is via the Collatz–Wielandt formula: one takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality.
Further proofs
Perron projection
The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property: the Perron projection of an irreducible non-negative square matrix is a positive matrix. Perron's findings and also (1)–(5) of the theorem are corollaries of this result.
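A quick numerical check of the limit Ak/rk described above (an illustrative sketch, not from the original article; the matrix is an arbitrary positive one): the powers converge to the rank-one Perron projection P = vwT with the normalization wTv = 1, and P satisfies P2 = P and AP = rP.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(4, 4))        # an arbitrary positive matrix

def perron_vector(M):
    """Positive eigenvector of M for its eigenvalue of largest real part."""
    vals, vecs = np.linalg.eig(M)
    u = np.real(vecs[:, np.argmax(vals.real)])
    return u * np.sign(u[0])                  # fix the overall sign so all components are positive

vals = np.linalg.eigvals(A)
r = vals[np.argmax(vals.real)].real           # Perron–Frobenius eigenvalue
v = perron_vector(A)                          # right Perron eigenvector
w = perron_vector(A.T)                        # left Perron eigenvector (right eigenvector of A^T)
w = w / (w @ v)                               # normalize so that w^T v = 1

P = np.outer(v, w)                            # Perron projection P = v w^T
Ak = np.linalg.matrix_power(A, 60) / r**60    # A^k / r^k for large k
print(np.allclose(Ak, P))                     # the powers converge to the projection
print(np.allclose(P @ P, P), np.allclose(A @ P, r * P))   # P^2 = P and AP = PA = rP
```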
The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if P is its Perron projection then AP = PA = ρ(A)P, so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px, which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1 then it can be decomposed as P ⊕ (1 − P)A so that An = P + (1 − P)An. As n increases the second of these terms decays to zero, leaving P as the limit of An as n → ∞.
The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality, which is what facilitates the decomposition.
Peripheral projection
The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive, but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.
Cyclicity
Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, Rh = P, and the cyclic group P, R, R2, ..., Rh−1 represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula (1/h) Σk=1..h λ−kRk. All of these projections (including the Perron projection) have the same positive diagonal; moreover, choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A, so the difference between An and Rn is An − Rn = (1 − P)An, representing the transients of An which eventually decay to zero. P may be computed as the limit of Anh as n → ∞.
Counterexamples
The matrices L = , P = , T = , M = provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P; thus when the original matrix is reducible the projections may lose non-negativity and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive, but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth.
If ω = eiπ/3 then ω6 = 1 and the eigenvalues of M are {1, ω2, ω3 = −1, ω4} with a dimension 2 eigenspace for +1, so ω and ω5 are both absent. More precisely, since M is block-diagonal cyclic, the eigenvalues are {1, −1} for the first block, and {1, ω2, ω4} for the lower one.
Terminology
A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For avoidance of doubt a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous.
The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector. Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents. The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.
See also
Metzler matrix (Quasipositive matrix)
Notes
References
(1959 edition had different title: "Applications of the theory of matrices". Also the numeration of chapters is different in the two editions.)
Further reading
Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM.
Chris Godsil and Gordon Royle, Algebraic Graph Theory, Springer, 2001.
A. Graham, Nonnegative Matrices and Applicable Topics in Linear Algebra, John Wiley & Sons, New York, 1987.
R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1990.
Bas Lemmens and Roger Nussbaum, Nonlinear Perron-Frobenius Theory, Cambridge Tracts in Mathematics 189, Cambridge Univ. Press, 2012.
S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability, London: Springer-Verlag, 1993. (2nd edition, Cambridge University Press, 2009)
Seneta, E., Non-negative Matrices and Markov Chains, 2nd rev. ed., 1981, XVI, 288 p., Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973) (The claim that Aj has order n/h at the end of the statement of the theorem is incorrect.)
Matrix theory Theorems in linear algebra Markov processes
Perron–Frobenius theorem
[ "Mathematics" ]
8,520
[ "Theorems in algebra", "Theorems in linear algebra" ]
1,540,704
https://en.wikipedia.org/wiki/Equation%20of%20state%20%28cosmology%29
In cosmology, the equation of state of a perfect fluid is characterized by a dimensionless number w, equal to the ratio of its pressure p to its energy density ρ: w = p/ρ. It is closely related to the thermodynamic equation of state and ideal gas law.
The equation
The perfect gas equation of state may be written as p = ρmRT = ρmC2, where ρm is the mass density, R is the particular gas constant, T is the temperature and C = √(RT) is a characteristic thermal speed of the molecules. Thus w = p/ρ = ρmC2/(ρmc2) = C2/c2, where c is the speed of light, and w ≈ 0 for a "cold" gas.
FLRW equations and the equation of state
The equation of state may be used in Friedmann–Lemaître–Robertson–Walker (FLRW) equations to describe the evolution of an isotropic universe filled with a perfect fluid. If a is the scale factor then ρ ∝ a−3(1+w). If the fluid is the dominant form of matter in a flat universe, then a ∝ t2/(3(1+w)), where t is the proper time. In general the Friedmann acceleration equation is 3ä/a = Λ − 4πG(ρ + 3p), where Λ is the cosmological constant and G is Newton's constant, and ä is the second proper time derivative of the scale factor. If we define (what might be called "effective") energy density and pressure as ρ′ ≡ ρ + Λ/(8πG) and p′ ≡ p − Λ/(8πG), the acceleration equation may be written as 3ä/a = −4πG(ρ′ + 3p′).
Non-relativistic particles
The equation of state for ordinary non-relativistic 'matter' (e.g. cold dust) is w = 0, which means that its energy density decreases as a−3 ∝ V−1, where V is a volume. In an expanding universe, the total energy of non-relativistic matter remains constant, with its density decreasing as the volume increases.
Ultra-relativistic particles
The equation of state for ultra-relativistic 'radiation' (including neutrinos, and in the very early universe other particles that later became non-relativistic) is w = 1/3, which means that its energy density decreases as a−4. In an expanding universe, the energy density of radiation decreases more quickly than the volume expansion, because its wavelength is red-shifted.
Acceleration of cosmic inflation
Cosmic inflation and the accelerated expansion of the universe can be characterized by the equation of state of dark energy. In the simplest case, the equation of state of the cosmological constant is w = −1. In this case, the above expression for the scale factor is not valid and a ∝ eHt, where the constant H is the Hubble parameter. More generally, the expansion of the universe is accelerating for any equation of state w < −1/3. The accelerated expansion of the Universe was indeed observed. According to observations, the value of the equation of state of the cosmological constant is near −1. Hypothetical phantom energy would have an equation of state w < −1, and would cause a Big Rip. Using the existing data, it is still impossible to distinguish between phantom (w < −1) and non-phantom (w ≥ −1).
Fluids
In an expanding universe, fluids with larger equations of state disappear more quickly than those with smaller equations of state. This is the origin of the flatness and monopole problems of the Big Bang: curvature has w = −1/3 and monopoles have w = 0, so if they were around at the time of the early Big Bang, they should still be visible today. These problems are solved by cosmic inflation, which has w ≈ −1. Measuring the equation of state of dark energy is one of the largest efforts of observational cosmology. By accurately measuring w, it is hoped that the cosmological constant could be distinguished from quintessence, which has w ≠ −1.
Scalar modeling
A scalar field φ can be viewed as a sort of perfect fluid with equation of state w = (½(dφ/dt)2 − V(φ)) / (½(dφ/dt)2 + V(φ)), where dφ/dt is the time-derivative of φ and V(φ) is the potential energy. A free (V = 0) scalar field has w = 1, and one with vanishing kinetic energy is equivalent to a cosmological constant: w = −1.
Any equation of state in between, but not crossing the w = −1 barrier known as the Phantom Divide Line (PDL), is achievable, which makes scalar fields useful models for many phenomena in cosmology.
Table
Different kinds of energy have different scaling properties.
Notes
Physical cosmology Equations of state
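The scaling properties referred to above can be tabulated numerically. The short sketch below (added for illustration, not part of the original article) evaluates the standard relations ρ ∝ a−3(1+w) and, for w > −1 in a flat universe, a ∝ t2/(3(1+w)), for the equation-of-state values quoted in this article.

```python
# Illustrative sketch (not from the article): density scaling rho ∝ a^(-3(1+w)) and,
# for w > -1 in a flat universe, the growth law a ∝ t^(2/(3(1+w))).

components = {
    "matter (dust)":          0.0,        # w = 0    -> rho ∝ a^-3
    "radiation":              1.0 / 3.0,  # w = 1/3  -> rho ∝ a^-4
    "curvature":             -1.0 / 3.0,  # w = -1/3 -> rho ∝ a^-2
    "cosmological constant": -1.0,        # w = -1   -> rho constant, a ∝ exp(Ht)
}

for name, w in components.items():
    density_exponent = -3.0 * (1.0 + w)
    if w > -1.0:
        growth = f"a ∝ t^{2.0 / (3.0 * (1.0 + w)):.3g}"
    else:
        growth = "a ∝ exp(Ht)"            # exponential (de Sitter) expansion for w = -1
    print(f"{name:24s} w = {w:+.2f}   rho ∝ a^{density_exponent:+.2f}   {growth}")
```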
Equation of state (cosmology)
[ "Physics", "Astronomy" ]
791
[ "Astronomical sub-disciplines", "Equations of physics", "Theoretical physics", "Astrophysics", "Statistical mechanics", "Equations of state", "Physical cosmology" ]