Columns: id (int64), url (string), text (string), source (string), categories (string; 160 classes), token_count (int64)
5,199,213
https://en.wikipedia.org/wiki/Phasor%20measurement%20unit
A phasor measurement unit (PMU) is a device used to estimate the magnitude and phase angle of an electrical phasor quantity (such as voltage or current) in the electricity grid, using a common time source for synchronization. Time synchronization is usually provided by GPS or the IEEE 1588 Precision Time Protocol, which allows synchronized real-time measurements of multiple remote points on the grid. PMUs are capable of capturing samples from a waveform in quick succession and reconstructing the phasor quantity, made up of an angle measurement and a magnitude measurement. The resulting measurement is known as a synchrophasor. These time-synchronized measurements are important because if the grid's supply and demand are not perfectly matched, frequency imbalances can cause stress on the grid, a potential cause of power outages. PMUs can also be used to measure the frequency in the power grid. A typical commercial PMU can report measurements with very high temporal resolution, up to 120 measurements per second. This helps engineers analyze dynamic events in the grid, which is not possible with traditional SCADA measurements that generate one measurement every 2 or 4 seconds. PMUs therefore equip utilities with enhanced monitoring and control capabilities and are considered to be among the most important measuring devices in the future of power systems. A PMU can be a dedicated device, or the PMU function can be incorporated into a protective relay or other device. History In 1893, Charles Proteus Steinmetz presented a paper on a simplified mathematical description of the waveforms of alternating current electricity. Steinmetz called his representation a phasor. With the invention of phasor measurement units (PMU) in 1988 by Dr. Arun G. Phadke and Dr. James S. Thorp at Virginia Tech, Steinmetz's technique of phasor calculation evolved into the calculation of real-time phasor measurements that are synchronized to an absolute time reference provided by the Global Positioning System. We therefore refer to synchronized phasor measurements as synchrophasors. Early prototypes of the PMU were built at Virginia Tech, and Macrodyne built the first PMU (model 1690) in 1992. Today they are available commercially. With the increasing growth of distributed energy resources on the power grid, more observability and control systems will be needed to accurately monitor power flow. Historically, power has been delivered in a unidirectional fashion through passive components to customers, but now that customers can generate their own power with technologies such as solar PV, distribution systems are changing into bidirectional systems. With this change it is imperative that transmission and distribution networks are continuously observed through advanced sensor technology, such as PMUs and uPMUs. In simple terms, the public electric grid that a power company operates was originally designed to take power from a single source (the operating company's generators and power plants) and feed it into the grid, where the customers consume the power. Now, some customers are operating power-generating devices (solar panels, wind turbines, etc.) and, to save costs (or to generate income), are also feeding power back into the grid. Depending on the region, feeding power back into the grid may be done through net metering.
Because of this process, voltage and current must be measured and regulated in order to ensure that the power going into the grid is of the quality and standard that customer equipment expects (as seen through metrics such as frequency, phase synchronicity, and voltage). If this is not done, as Rob Landley puts it, "people's light bulbs start exploding." This measurement function is what these devices do. Operation A PMU can measure 50/60 Hz AC waveforms (voltages and currents), typically at a rate of 48 samples per cycle, making them effective at detecting fluctuations in voltage or current at less than one cycle. However, when the frequency does not oscillate around or near 50/60 Hz, PMUs are not able to accurately reconstruct these waveforms. Phasor measurements from PMUs are constructed from cosine waves of the form A·cos(ωt + θ). The A in this function is a scalar value, most often the voltage or current magnitude (for PMU measurements). The θ is the phase angle offset from some defined starting position, and the ω is the angular frequency of the waveform (usually 2π50 radians/second or 2π60 radians/second). In most cases PMUs only measure the voltage magnitude and the phase angle, and assume that the angular frequency is constant. Because this frequency is assumed constant, it is disregarded in the phasor measurement. PMU measurement is a mathematical fitting problem, in which the measurements are fitted to a sinusoidal curve. Thus, when the waveform is non-sinusoidal, the PMU is unable to fit it exactly. The less sinusoidal the waveform, such as grid behavior during a voltage sag or fault, the worse the phasor representation becomes. The analog AC waveforms detected by the PMU are digitized by an analog-to-digital converter for each phase. A phase-locked oscillator along with a Global Positioning System (GPS) reference source provides the needed high-speed synchronized sampling with 1 microsecond accuracy. However, PMUs can take in multiple time sources, including non-GPS references, as long as they are all calibrated and working synchronously. The resultant time-stamped phasors can be transmitted to a local or remote receiver at rates up to 120 samples per second. Being able to see time-synchronized measurements over a large area is helpful in examining how the grid operates at large and in determining which parts of the grid are affected by different disturbances. Historically, only small numbers of PMUs have been used to monitor transmission lines, with acceptable errors of around 1%. These were simply coarser devices installed to prevent catastrophic blackouts. Now, with the invention of micro-synchronous phasor technology, it is desirable to install many more of them on distribution networks, where power can be monitored at a very high degree of precision. This high degree of precision creates the ability to drastically improve system visibility and implement smart and preventative control strategies. No longer are PMUs required only at substations; they are required at several places in the network, including tap-changing transformers, complex loads, and PV generation buses. While PMUs are generally used on transmission systems, new research is being done on the effectiveness of micro-PMUs for distribution systems. Transmission systems generally have voltage that is at least an order of magnitude higher than distribution systems (between 12 kV and 500 kV, while distribution runs at 12 kV and lower).
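As a rough illustration of the fitting described above, the sketch below (Python; the waveform values are hypothetical and the function name is mine) estimates the magnitude and phase angle of a 60 Hz waveform from one cycle of samples using a single-bin DFT, one common way this kind of phasor extraction is implemented. Real PMUs add filtering, calibration, and GPS-aligned timestamps.

```python
import numpy as np

def estimate_phasor(samples, samples_per_cycle):
    """Fit one cycle of samples to A*cos(w*t + theta) and return (A, theta in degrees).

    Uses a single-bin DFT at the fundamental frequency; a sketch only, not a
    full PMU estimator (no windowing, filtering, or timestamping).
    """
    n = np.arange(samples_per_cycle)
    phasor = (2.0 / samples_per_cycle) * np.sum(
        samples * np.exp(-2j * np.pi * n / samples_per_cycle))
    return abs(phasor), np.degrees(np.angle(phasor))

# Hypothetical example: 48 samples per cycle of a 60 Hz wave,
# amplitude 120 (e.g. volts), phase offset +30 degrees.
samples_per_cycle = 48
t = np.arange(samples_per_cycle) / samples_per_cycle
signal = 120.0 * np.cos(2 * np.pi * t + np.radians(30.0))

mag, angle = estimate_phasor(signal, samples_per_cycle)
print(f"magnitude ~ {mag:.1f}, angle ~ {angle:.1f} degrees")  # ~120.0, ~30.0
```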
This means that transmission systems can have less precise measurements without compromising the accuracy of the measurement. However, distribution systems need more precision in order to improve accuracy, which is the benefit of uPMUs. uPMUs decrease the error of the phase angle measurements on the line from ±1° to ±0.05°, giving a better representation of the true angle value. The "micro" prefix simply means that the PMU makes a more precise measurement. Technical overview A phasor is a complex number that represents both the magnitude and phase angle of the sine waves found in electricity. Phasor measurements that occur at the same time over any distance are called "synchrophasors". While it is commonplace for the terms "PMU" and "synchrophasor" to be used interchangeably, they actually have two separate technical meanings: a synchrophasor is the metered value, whereas the PMU is the metering device. In typical applications, phasor measurement units are sampled from widely dispersed locations in the power system network and synchronized from the common time source of a Global Positioning System (GPS) radio clock. Synchrophasor technology provides a tool for system operators and planners to measure the state of the electrical system (over many points) and manage power quality. PMUs measure voltages and currents at principal intersecting locations (critical substations) on a power grid and can output accurately time-stamped voltage and current phasors. Because these phasors are truly synchronized, synchronized comparison of two quantities is possible in real time. These comparisons can be used to assess system conditions such as frequency changes, MW, MVAR, and kV values. The monitored points are preselected through various studies to make extremely accurate phase angle measurements that indicate shifts in system (grid) stability. The phasor data is collected either on-site or at centralized locations using Phasor Data Concentrator technologies. The data is then transmitted to a regional monitoring system which is maintained by the local Independent System Operator (ISO). These ISOs monitor phasor data from individual PMUs or from as many as 150 PMUs; this monitoring provides an accurate means of establishing controls for power flow from multiple energy generation sources (nuclear, coal, wind, etc.). The technology has the potential to change the economics of power delivery by allowing increased power flow over existing lines. Synchrophasor data could be used to allow power flow up to a line's dynamic limit instead of to its worst-case limit. Synchrophasor technology will usher in a new process for establishing centralized and selective controls for the flow of electrical energy over the grid. These controls will affect both large-scale (multiple-state) and individual transmission line sections at intersecting substations. Transmission line congestion (over-loading), protection, and control will therefore be improved on a multiple-region scale (US, Canada, Mexico) through interconnecting ISOs. Phasor networks A phasor network consists of phasor measurement units (PMUs) dispersed throughout the electricity system, Phasor Data Concentrators (PDC) to collect the information, and a Supervisory Control And Data Acquisition (SCADA) system at the central control facility. Such a network is used in Wide Area Measurement Systems (WAMS), the first of which was begun in 2000 by the Bonneville Power Administration.
The complete network requires rapid data transfer within the frequency of sampling of the phasor data. GPS time stamping can provide a theoretical accuracy of synchronization better than 1 microsecond. "Clocks need to be accurate to ± 500 nanoseconds to provide the one microsecond time standard needed by each device performing synchrophasor measurement." For 60 Hz systems, PMUs must deliver between 10 and 30 synchronous reports per second depending on the application. The PDC correlates the data, and controls and monitors the PMUs (from a dozen up to 60). At the central control facility, the SCADA system presents system-wide data on all generators and substations in the system every 2 to 10 seconds. PMUs often use phone lines to connect to PDCs, which then send data to the SCADA or Wide Area Measurement System (WAMS) server. Additionally, PMUs can use ubiquitous mobile (cellular) networks for data transfer (GPRS, UMTS), which allows potential savings in infrastructure and deployment costs, at the expense of a larger data reporting latency. However, the introduced data latency makes such systems more suitable for R&D measurement campaigns and near real-time monitoring, and limits their use in real-time protective systems. PMUs from multiple vendors can yield inaccurate readings. In one test, readings differed by 47 microseconds, or a difference of 1 degree at 60 Hz, an unacceptable variance. China's solution to the problem was to build all its own PMUs adhering to its own specifications and standards, so there would be no multi-vendor source of conflicts, standards, protocols, or performance characteristics. Installation Installation of a typical 10-phasor PMU is a simple process. A phasor will be either a 3-phase voltage or a 3-phase current. Each phasor will, therefore, require 3 separate electrical connections (one for each phase). Typically an electrical engineer designs the installation and interconnection of a PMU at a substation or at a generation plant. Substation personnel bolt an equipment rack to the floor of the substation following established seismic mounting requirements. Then the PMU, along with a modem and other support equipment, is mounted on the equipment rack. They will also install the Global Positioning System (GPS) antenna on the roof of the substation per manufacturer instructions. Substation personnel will also install "shunts" in all current transformer (CT) secondary circuits that are to be measured. The PMU will also require a communication circuit connection (a modem if using a 4-wire connection, or Ethernet for a network connection). Implementations The Bonneville Power Administration (BPA) was the first utility to implement comprehensive adoption of synchrophasors in its wide-area monitoring system. This was in 2000, and today there are several implementations underway. The FNET project operated by Virginia Tech and the University of Tennessee utilizes a network of approximately 80 low-cost, high-precision Frequency Disturbance Recorders to collect synchrophasor data from the U.S. power grid. The New York Independent System Operator has installed 48 PMUs throughout New York State, partly in response to a devastating 2003 blackout that originated in Ohio and affected regions in both the United States and Canada. In 2006, China's Wide Area Monitoring Systems (WAMS) for its 6 grids had 300 PMUs installed, mainly at 500 kV and 330 kV substations and power plants. By 2012, China planned to have PMUs at all 500 kV substations and all power plants of 300 MW and above.
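The relationship between clock error and phase-angle error quoted above follows directly from the line frequency: a timing error of Δt seconds corresponds to an angle error of Δθ = 360 · f · Δt degrees. The short sketch below (Python; the numbers mirror those in the text) shows why roughly microsecond-accurate clocks are needed and why a 47 microsecond discrepancy amounts to about one degree at 60 Hz.

```python
def angle_error_degrees(time_error_s, frequency_hz=60.0):
    """Phase-angle error caused by a timing error at a given line frequency."""
    return 360.0 * frequency_hz * time_error_s

# 1 microsecond of timing error at 60 Hz: about 0.02 degrees.
print(angle_error_degrees(1e-6))    # ~0.0216
# 47 microseconds, as in the multi-vendor test above: about 1 degree.
print(angle_error_degrees(47e-6))   # ~1.015
```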
Since 2002, China has built its own PMUs to its own national standard. One type has higher sampling rates than typical and is used in power plants to measure the rotor angle of the generator, reporting excitation voltage, excitation current, valve position, and the output of the power system stabilizer (PSS). All PMUs are connected via a private network, and samples are received within 40 ms on average. The North American Synchrophasor Initiative (NASPI), previously known as the Eastern Interconnect Phasor Project (EIPP), has over 120 connected phasor measurement units collecting data into a "Super Phasor Data Concentrator" system centered at the Tennessee Valley Authority (TVA). This data concentration system is now an open source project known as the openPDC. The DOE has sponsored several related research projects, including GridStat at Washington State University. ARPA-E has sponsored a related research project on Micro-Synchrophasors for Distribution Systems at the University of California, Berkeley. The largest Wide Area Monitoring System in the world is in India. The Unified Real Time Dynamic State Measurement system (URTDSM) is composed of 1,950 PMUs installed in 351 substations, feeding synchrophasor data to 29 State Control Centres, 5 Regional Control Centres and 2 National Control Centres. Applications Power system automation, as in smart grids. Load shedding and other load control techniques, such as demand response mechanisms, to manage a power system (i.e. directing power where it is needed in real time). Increase the reliability of the power grid by detecting faults early, allowing for isolation of the operative system and the prevention of power outages. Increase power quality by precise analysis and automated correction of sources of system degradation. Wide area measurement and control through state estimation, in very wide area super grids, regional transmission networks, and local distribution grids. Phasor measurement technology and synchronized time stamping can be used for security improvement through synchronized encryptions such as the trusted sensing base, and for cyber attack recognition by verifying data between the SCADA system and the PMU data. Distribution state estimation and model verification: the ability to calculate impedances of loads and distribution lines, and to verify voltage magnitudes and delta angles based on mathematical state models. Event detection and classification: events such as various types of faults, tap changes, switching events, and circuit protection device operations; machine learning and signal classification methods can be used to develop algorithms to identify these significant events. Microgrid applications: islanding or deciding where to detach from the grid, load and generation matching, and resynchronization with the main grid. Standards The IEEE 1344 standard for synchrophasors was completed in 1995 and reaffirmed in 2001. In 2005, it was replaced by IEEE C37.118-2005, which was a complete revision and dealt with issues concerning the use of PMUs in electric power systems. The specification describes standards for measurement, the method of quantifying the measurements, testing and certification requirements for verifying accuracy, and the data transmission format and protocol for real-time data communication. This standard was not comprehensive; it did not attempt to address all factors that PMUs can detect in power system dynamic activity.
A new version of the standard was released in December 2011, which split the IEEE C37.118-2005 standard into two parts: C37.118.1, dealing with phasor estimation, and C37.118.2, the communications protocol. It also introduced two classifications of PMU, M (measurement) and P (protection). M class is close in performance requirements to the original 2005 standard and is intended primarily for steady-state measurement. P class relaxes some performance requirements and is intended to capture dynamic system behavior. An amendment to C37.118.1 was released in 2014. IEEE C37.118.1a-2014 modified PMU performance requirements that were not considered achievable. Other standards used with PMU interfacing: OPC-DA / OPC-HDA, a Microsoft Windows-based interface protocol that is currently being generalized to use XML and run on non-Windows computers; IEC 61850, a standard for electrical substation automation; BPA PDCStream, a variant of IEEE 1344 used by the Bonneville Power Administration (BPA) PDCs and user interface software. See also Utility frequency Power system automation Electric power transmission Smart grid References External links A simple and cheap Wide Area Frequency Measurement System. Free and open source Phasor Data Concentrator (iPDC) and PMU Simulator for Linux. New York Independent System Operator A GPRS-oriented ad-hoc WAM system Electric power systems components Electrical meters
Phasor measurement unit
Technology,Engineering
3,809
66,174,650
https://en.wikipedia.org/wiki/Heat%20pen
A heat pen (also known as a thermal stick) is a device used to mitigate the effects of an insect sting (e.g. a wasp sting) or insect bite (e.g. a mosquito bite) by briefly heating the skin. Shape The heat pen is available either as a pen-like device or as a USB attachment for a smartphone. Effect A heat pen has a ceramic or metal plate at the tip, which heats to 50 to 60 °C. The heated plate is brought into contact with the area of skin affected by the insect bite for 3 to 10 seconds, causing the skin to briefly heat up to 53 °C (local hyperthermia). The heat activates various physiological processes: for example, it is assumed that the insect proteins are destroyed (denatured) and the body's histamine release is reduced. This relieves symptoms such as itching. Because of the short application time, the skin is not damaged. The positive effect of the heat pen was supported by one study; however, the lead authors are employees of the manufacturer and may be biased. The exact mechanism is not known; various mechanisms have been proposed. The same mode of action is also used to treat cold sores. References Medical devices
Heat pen
Biology
258
2,504,151
https://en.wikipedia.org/wiki/Phosphorus%20oxoacid
In chemistry, phosphorus oxoacid (or phosphorus acid) is a generic name for any acid whose molecule consists of atoms of phosphorus, oxygen, and hydrogen. There is a potentially infinite number of such compounds. Some of them are unstable and have not been isolated, but the derived anions and organic groups are present in stable salts and esters. The most important ones—in biology, geology, industry, and chemical research—are the phosphoric acids, whose esters and salts are the phosphates. In general, any hydrogen atom bonded to an oxygen atom is acidic, meaning that the –OH group can lose a proton, leaving a negatively charged –O⁻ group and thus turning the acid into a phosphorus oxoanion. Each additional proton lost has an associated acid dissociation constant Ka1, Ka2, Ka3, ..., often expressed by its cologarithm (pKa1, pKa2, pKa3, ...). Hydrogen atoms bonded directly to phosphorus are generally not acidic. Classification The phosphorus oxoacids can be classified by the oxidation state(s) of the phosphorus atom(s), which may vary from +1 to +5. The oxygen atoms are usually in oxidation state -2, but may be in state -1 if the molecule includes peroxide groups. Oxidation state +1 Hypophosphorous acid (or phosphinic acid), H3PO2, a monoprotic acid (meaning that only one of the hydrogen atoms is acidic). Its salts and esters are called hypophosphites or phosphinates. Oxidation state +3 Phosphorous acid (or phosphonic acid), H3PO3, a diprotic acid (with only two acidic hydrogens). Its salts and esters are called phosphites or phosphonates. Oxidation state +4 Hypophosphoric acid, H4P2O6. All four hydrogens are acidic. Its salts and esters are hypophosphates. Oxidation state +5 The most important members of this group are the phosphoric acids, in which each phosphorus atom is bonded to four oxygen atoms, one of them through a double bond, arranged as the corners of a tetrahedron. Two or more of these tetrahedra may be connected by shared single-bonded oxygens, forming linear or branched chains, cycles, or more complex structures. The single-bonded oxygen atoms that are not shared are completed with acidic hydrogen atoms. Their generic formula is Hn−2x+2PnO3n−x+1, where n is the number of phosphorus atoms and x is the number of fundamental cycles in the molecule's structure. These acids, and their esters and salts ("phosphates"), include some of the best-known and most important compounds of phosphorus. The simplest member of this class is: Phosphoric acid proper (also called orthophosphoric acid or monophosphoric acid), H3PO4, a triprotic acid. It forms orthophosphate salts and esters, commonly called phosphates. The smallest compounds of this class with two or more phosphorus atoms are called "oligophosphoric acids", and the larger ones, with linear P–O–P backbones, are "polyphosphoric acids"; there is no definite separation between the two. Some of the most important members are: Pyrophosphoric acid, H4P2O7 (or (HO)2P(O)–O–P(O)(OH)2), with four acidic hydrogens. Forms pyrophosphates. Triphosphoric acid (or tripolyphosphoric acid), H5P3O10, with five acidic hydrogens. Forms triphosphates or tripolyphosphates. Tetraphosphoric acid, H6P4O13, with six acidic hydrogens. Forms tetraphosphates. The backbone may be branched, as in: Triphosphono phosphoric acid, or P(O)(–O–P(O)(OH)2)3, a branched isomer of tetrapolyphosphoric acid. The tetrahedra may be connected to form closed –P–O– chains, as in: Trimetaphosphoric acid (or cyclotriphosphoric acid), H3P3O9 (or (–P(O)(OH)–O–)3), a cyclic molecule with three acidic hydrogens.
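The generic formula for the condensed phosphoric acids can be checked mechanically against the members listed above. The sketch below (Python; the helper function is mine, for illustration only) generates the formula from n phosphorus atoms and x fundamental cycles, reproducing orthophosphoric, pyrophosphoric, and trimetaphosphoric acid.

```python
def phosphoric_acid_formula(n, x=0):
    """Generic formula H(n-2x+2) P(n) O(3n-x+1) for condensed phosphoric acids.

    n: number of phosphorus atoms; x: number of fundamental cycles.
    Each ring closure removes one H2O relative to the open-chain acid.
    Returns a plain-text formula string such as 'H4P2O7'.
    """
    h = n - 2 * x + 2
    o = 3 * n - x + 1
    return f"H{h}P{n}O{o}"

print(phosphoric_acid_formula(1))        # H3P1O4, i.e. orthophosphoric acid H3PO4
print(phosphoric_acid_formula(2))        # H4P2O7, pyrophosphoric acid
print(phosphoric_acid_formula(3, x=1))   # H3P3O9, trimetaphosphoric acid (cyclic)
```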
Forms the trimetaphosphate salts and esters. Metaphosphoric acid is a general term for phosphoric acids with a single cycle, (–P(O)(OH)–O–)n, whose empirical formula is HPO3. Another compound that may be included in this class is peroxomonophosphoric acid, H3PO5 (or OP(OH)2(OOH)), which can be seen as monophosphoric acid with a peroxide group replacing the oxygen atom in one of the hydroxyl groups. Mixed oxidation states Some phosphorus oxoacids have two or more P atoms in different oxidation states. One example is isohypophosphoric acid, H4P2O6 (or H(OH)(O)P−O−P(O)(OH)2), a tetraprotic acid and isomer of hypophosphoric acid, containing P in oxidation states +3 and +5. See also Phosphoramidate References Further reading External links Determination of Polyphosphates Using Ion Chromatography with Suppressed Conductivity Detection, Application Note 71 by Dionex Dietary minerals Inorganic compounds Phosphates Phosphorus(V) compounds Pyrophosphates Reagents for organic chemistry
Phosphorus oxoacid
Chemistry
1,197
12,911,648
https://en.wikipedia.org/wiki/Quercus%20%C3%97%20macdonaldii
Quercus × macdonaldii, formerly Quercus macdonaldii, with the common names MacDonald's oak and Macdonald oak, is a rare hybrid species of oak in the family Fagaceae. Description The tree is between 5 and 15 meters tall, with scaly bark on the trunk. The twigs are gray and tomentose. The leaves are between 4 and 7 centimeters in length; the blades are oblong to obovate, and adaxially glabrous to sparsely hairy. The petioles are between 3 and 10 millimeters. The fruit's cup is between 10 and 20 millimeters long and 6 to 10 millimeters deep. The nuts are between 20 and 35 millimeters long and conic-oblong or ovoid. The flowering time is between the months of March and May. Distribution The tree is endemic to the California Channel Islands, on Santa Cruz Island, Santa Rosa Island, and Santa Catalina Island, in Southern California. It is found in chaparral and woodland habitats in canyons and on slopes. Taxonomy The plant was reclassified as Quercus × macdonaldii, a naturally occurring hybrid of Quercus lobata and Quercus pacifica, or possibly other oak species. Both parents are placed in section Quercus. Greene considered it a species, but it is derived from hybrids involving Quercus pacifica, Quercus lobata, and possibly others. See also California chaparral and woodlands ecoregion California coastal sage and chaparral sub-ecoregion References External links Jepson eFlora treatment of Quercus × macdonaldii USDA Plants Profile for Quercus × macdonaldii [berberidifolia × lobata] (MacDonald oak) Calflora Database: Quercus × macdonaldii (MacDonald's oak, MacDonald oak) macdonaldii Endemic flora of California Natural history of the Channel Islands of California Natural history of the California chaparral and woodlands Natural history of Los Angeles County, California Trees of Northern America Hybrid plants Taxonomy articles created by Polbot
Quercus × macdonaldii
Biology
404
65,545,997
https://en.wikipedia.org/wiki/List%20of%20ferns%20of%20Georgia%20%28U.S.%20state%29
This is a list of ferns and other pteridophytes native to the U.S. state of Georgia. References Lists of plants Flora of Georgia (U.S. state)
List of ferns of Georgia (U.S. state)
Biology
39
21,972,463
https://en.wikipedia.org/wiki/33%20Vulpeculae
33 Vulpeculae is a single star located around 500 light-years away from the Sun in the northern constellation of Vulpecula. It is visible to the naked eye as a dim, orange-hued star with an apparent visual magnitude of 5.31. The object is drifting closer to the Earth with a heliocentric radial velocity of −25 km/s. This is an evolved giant star with a stellar classification of K3.5 III, having exhausted the supply of hydrogen at its core and expanded to 35 times the Sun's radius. It serves as a spectral standard for stars of its particular class. This star is radiating 334 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 4,070 K. References External links K-type giants Vulpecula Durchmusterung objects Vulpeculae, 33 199697 103511 8032
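The quoted radius, temperature, and luminosity are roughly consistent with the Stefan–Boltzmann relation L = 4πR²σT⁴. A quick check (Python; the solar reference values are standard constants, not taken from the article) with R = 35 solar radii and Teff = 4,070 K gives about 300 solar luminosities, in line with the 334 figure given above.

```python
T_SUN = 5772.0  # K, nominal solar effective temperature

def luminosity_solar(radius_rsun, teff_k):
    """Luminosity in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4."""
    return radius_rsun ** 2 * (teff_k / T_SUN) ** 4

# 33 Vulpeculae: 35 solar radii at 4,070 K -> roughly 3.0e2 Lsun
print(f"{luminosity_solar(35.0, 4070.0):.0f} Lsun")
```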
33 Vulpeculae
Astronomy
193
30,257,441
https://en.wikipedia.org/wiki/Anja%20Feldmann
Anja Feldmann (born 8 March 1966 in Bielefeld) is a German computer scientist. Education and career Feldmann studied computer science at Universität Paderborn and received her degree in 1990. She continued her studies at Carnegie Mellon University, where she earned her M.Sc. in 1991 and her Ph.D. in 1995. Following four years of postdoctoral work at AT&T Labs Research, she held research positions at Saarland University and the Technical University of Munich. In 2006 she was appointed professor of Internet Network Architectures for the Telekom Innovation Laboratories at the Technische Universität Berlin. As a professor, she focused her research on Internet measurement, teletraffic engineering, traffic characterization, and debugging network performance issues. She has also conducted research into intrusion detection and network architecture. She has served on more than 50 committees and was the co-chair of SIGCOMM. Alex Snoeren said that she "was instrumental in the establishment of a rigorous science of Internet measurement. Among her many contributions, she is perhaps best known for her work in traffic characterization and engineering." Between 2009 and 2013 Feldmann was Dean of the Computer Science and Electrical Engineering department at the Technische Universität Berlin, Germany. From 2012 until early 2018 Feldmann sat on the employer side of the supervisory board of SAP. In October 2017 she was appointed as a director of the Max Planck Institute for Informatics, where her focus is on research into Internet architecture. Other activities Karlsruhe Institute of Technology (KIT), Member of the Supervisory Board Honors and awards 2009: Member of Leopoldina 2011: Gottfried Wilhelm Leibniz Prize 2011: Berlin Science Award 2023: ACM Fellow References External links Website of the Intelligent Networks Research Group at TU Berlin Website T-Labs Gottfried Wilhelm Leibniz Prize winners 1966 births Living people Scientists from Bielefeld Max Planck Society people Technical University of Munich alumni Feldmann Anja German women academics German women computer scientists German computer scientists Network topology Members of the German National Academy of Sciences Leopoldina Max Planck Institute directors 2023 fellows of the Association for Computing Machinery
Anja Feldmann
Mathematics
428
4,665,377
https://en.wikipedia.org/wiki/Rampart%20crater
Rampart craters are a specific type of impact crater, accompanied by distinctive fluidized ejecta features and found mainly on Mars. Only one example is known on Earth, the Nördlinger Ries impact structure in Germany. A rampart crater displays an ejecta blanket with a low ridge along its edge. Usually, rampart craters show a lobate outer margin, as if material moved along the surface rather than flying up and down in a ballistic trajectory. The flows sometimes are diverted around small obstacles instead of falling on them. The ejecta look as if they moved as a mudflow. Some of the shapes of rampart craters can be duplicated by shooting projectiles into mud. Although rampart craters can be found all over Mars, the smaller ones are only found in the high latitudes where ice is predicted to be close to the surface. It seems that the impact has to be powerful enough to penetrate to the level of the subsurface ice. Since ice is thought to be close to the surface in latitudes far from the equator, it does not take a large impact there to reach the ice level. Based on images from the Viking program in the 1970s, it is generally accepted that rampart craters are evidence of ice or liquid water beneath the surface of Mars. The impact melts or boils the water in the subsurface, producing a distinctive pattern of material surrounding the crater. Ryan Schwegman described double-layered ejecta (DLE) craters as showing two distinct layers of ejecta that appear to have been put in place as a mobile, ground-hugging flow. His measurements suggest that ejecta mobility (the distance ejecta travels from the crater rim) typically goes up with increasing latitude and may reflect ice concentration; that is, the higher the latitude, the greater the ice content. The lobateness (the curved shape of the perimeter of the ejecta) usually goes down with increasing latitude. Furthermore, DLEs on sedimentary ground seem to display higher ejecta mobility than those on volcanic surfaces. A detailed discussion of various kinds of Martian craters, including double-layer ejecta craters (rampart craters), can be found in a 2014 paper by David Weiss and James Head. Single-layered ejecta craters Single-layered ejecta craters are one type of rampart crater. They have one ejecta lobe that extends 1 to 1.5 crater radii from the rim of the crater. They have an average diameter of 10 km. Although present at all latitudes, they are most common near the equator. Their average size increases the more distant they are from the equator. It has been suggested that these types of craters are produced by impact into icy ground; specifically, an impact that does not go entirely through the icy layer. The increase in size away from the equator is explained by a possibly greater thickness of the icy layer away from the equator. Double and multiple layered ejecta craters Another type of rampart crater is called a double-layered ejecta (DLE) crater. It displays two lobes of ejecta. Related to these are multiple-layered ejecta (MLE) craters, which have more than two layers of ejecta. They are larger than single-layered ejecta craters, having an average diameter of 22 km. Their ejecta extend about 2.2 radii from the crater rim. They are more concentrated near the equator (mostly within 40 degrees of the equator). Evidence leads researchers to believe that they result from an impact that goes through an icy layer and into a rocky layer.
There may be more of them closer to the equator because the icy layer is not as thick there; hence more impacts will penetrate all the way through the icy layer and into the rocky layer. They are larger at all latitudes than single-layered ejecta craters. The icy layer has been called by different names: cryosphere, permafrost, and ice-cemented cryosphere. Researchers have analyzed the distribution of both of these crater types to determine the thickness of an icy layer that may surround the total surface of Mars. The depth of a crater has been found to be about one tenth of its diameter, so by measuring the diameter, the depth can be easily found. They mapped the position and size of all of these craters and then determined the maximum size of single-layered craters and the smallest size of multiple-layered craters for each latitude. Recall that the single-layered ejecta crater does not penetrate the icy layer, but the multiple-layered one does; an average of those two depths should give the thickness of the icy layer. From such an analysis, they determined that the icy layer or cryosphere varies from about 1.3 km (equator) to 3.3 km (poles). This represents a great deal of frozen water: it would be equal to 200 meters of water spread over the entire planet, if one assumes 20% pore space. The Phoenix lander confirmed the existence of large amounts of water ice in the northern regions of Mars. This finding was predicted by theory and was measured from orbit by the Mars Odyssey instruments, so the idea that rampart crater size shows the depth to ice was confirmed by other space probes. An image from the Phoenix lander showed ice that was exposed by the descent engines. Pancake craters In the Mariner and Viking missions a type of crater was found that was called a "pancake crater." It is similar to a rampart crater, but does not have a rampart. The ejecta is flat along its whole area, like a pancake. Under higher resolutions it resembles a double-layer crater that has degraded. These craters are found in the same latitudes as double-layer craters (40–65 degrees). It has been suggested that they are just the inner layer of a double-layer crater in which the outer, thin layer has eroded. Craters classified as pancakes in Viking images turned out to be double-layer craters when seen at higher resolutions by later spacecraft. See also Geology of Mars Impact event LARLE crater Martian Craters Pedestal crater Peak ring (crater) References External links The Role of Subsurface Ice in Rampart Crater Formation Viking 1 orbiter image, 1977 Ages and Onset Diameters of Rampart Craters In Equatorial Regions on Mars. Craters as seen by Viking Planetary science Geology of Mars
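The depth-from-diameter rule and the ice-thickness argument described above can be written out explicitly. The sketch below (Python; the diameter values are made-up placeholders, not measurements from the cited studies) estimates crater depth as one tenth of diameter and takes the icy-layer thickness at a given latitude as the average of the depth of the largest single-layered ejecta crater and the depth of the smallest multiple-layered one.

```python
def crater_depth_km(diameter_km):
    """Martian crater depth is roughly one tenth of its diameter."""
    return diameter_km / 10.0

def icy_layer_thickness_km(max_sle_diameter_km, min_mle_diameter_km):
    """Estimate icy-layer (cryosphere) thickness for one latitude band.

    Single-layered ejecta (SLE) craters do not punch through the ice, while
    multiple-layered ejecta (MLE) craters do, so the ice thickness should lie
    between the depth of the largest SLE crater and the smallest MLE crater.
    """
    return (crater_depth_km(max_sle_diameter_km) +
            crater_depth_km(min_mle_diameter_km)) / 2.0

# Hypothetical diameters for one latitude band (placeholders, not real data):
print(icy_layer_thickness_km(max_sle_diameter_km=12.0,
                             min_mle_diameter_km=18.0))  # -> 1.5 km
```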
Rampart crater
Astronomy
1,303
32,019,370
https://en.wikipedia.org/wiki/ACT%20domain
In molecular biology, the ACT domain is a protein domain found in a variety of proteins involved in metabolism. ACT domains are linked to a wide range of metabolic enzymes that are regulated by amino acid concentration. The ACT domain is named after three of the proteins that contain it: aspartate kinase, chorismate mutase and TyrA. The archetypical ACT domain is the C-terminal regulatory domain of 3-phosphoglycerate dehydrogenase (3PGDH), which folds with a ferredoxin-like topology. A pair of ACT domains forms an eight-stranded antiparallel sheet with two molecules of the allosteric inhibitor serine bound at the interface. Biochemical exploration of a few other proteins containing ACT domains supports the suggestion that these domains adopt the archetypical ACT structure. The ACT domain was discovered by Aravind and Koonin using iterative sequence searches. References Protein domains
ACT domain
Biology
196
31,930,599
https://en.wikipedia.org/wiki/Escherichia%20coli%20O104%3AH4
Escherichia coli O104:H4 is an enteroaggregative strain of the bacterium Escherichia coli, and the cause of the 2011 Escherichia coli O104:H4 outbreak. The "O" in the serological classification identifies the cell wall lipopolysaccharide antigen, and the "H" identifies the flagellar antigen. Analysis of genomic sequences obtained by BGI Shenzhen shows that the O104:H4 outbreak strain is an enteroaggregative E. coli (EAEC or EAggEC) type that has acquired Shiga toxin genes, presumably by horizontal gene transfer. Genome assembly and copy-number analysis both confirmed that two copies of the Shiga toxin stx2 prophage gene cluster are a distinctive characteristic of the genome of the O104:H4 outbreak strain. The O104:H4 strain is characterized by these genetic markers: Shiga toxin stx2 positive; tellurite resistance gene cluster positive; intimin adherence gene negative; β-lactamases ampC, ampD, ampE, ampG, ampH present. The European Commission (EC) integrated approach to food safety defines a case of Shiga-like toxin-producing E. coli (STEC) diarrhea caused by O104:H4 as an acute onset of diarrhea or bloody diarrhea together with the detection of Shiga toxin 2 (Stx2) or the Shiga gene stx2. Prior to the 2011 outbreak, only one case identified as O104:H4 had been observed, in a woman in South Korea in 2005. Pathophysiology E. coli O104 is a Shiga toxin–producing E. coli (STEC). The toxins cause illness and the associated symptoms by sticking to the intestinal cells and aggravating the cells along the intestinal wall. This, in turn, can cause bloody stools. Another effect of this bacterial infection is hemolytic uremic syndrome (HUS), a condition characterized by destruction of red blood cells that can, over time, cause kidney failure. Some common symptoms of HUS are vomiting, bloody diarrhea, and blood in the urine. Infection A common mode of E. coli O104:H4 infection involves ingestion of fecally contaminated food; the disease can thus be considered a foodborne illness. Most recently, in 2011, an outbreak of the O104:H4 strain in Germany caused the deaths of several people, and hundreds were hospitalised. German authorities traced the infection back to fenugreek sprouts grown from contaminated seeds imported from Egypt, but these results are debated. Diagnosis To diagnose infection with STEC, a patient's stool (feces) can be tested in a laboratory for the presence of Shiga toxin. Testing methods used include direct detection of the toxin by immunoassay, or detection of the stx2 gene or other virulence-factor genes by PCR. If infection with STEC is confirmed, the E. coli strain may be serotyped to determine whether O104:H4 is present. Treatment E. coli O104:H4 is difficult to treat as it is resistant to many antibiotics, although it is susceptible to carbapenems. Prevention Spread of E. coli is prevented simply by thorough hand-washing with soap, washing and hygienically preparing food, and properly heating/cooking food so the bacteria are destroyed. References Escherichia coli
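As a rough illustration of how the genetic markers listed above can flag an isolate as consistent with the O104:H4 outbreak strain, the sketch below (Python) checks a set of detected genes against that marker profile. The gene names follow the article (with "eae" used for the intimin adherence gene), but the function and data structures are hypothetical; it is not a diagnostic tool, and real typing relies on PCR assays and serotyping.

```python
# Marker profile of the O104:H4 outbreak strain as described in the article:
# stx2 and the tellurite-resistance cluster present, intimin (eae) absent,
# and the listed beta-lactamase genes present.
REQUIRED_PRESENT = {"stx2", "tellurite_resistance",
                    "ampC", "ampD", "ampE", "ampG", "ampH"}
REQUIRED_ABSENT = {"eae"}  # intimin adherence gene

def consistent_with_o104h4(detected_genes):
    """Return True if the detected markers match the O104:H4 profile above.

    detected_genes: iterable of gene/marker names found in an isolate
    (hypothetical input, e.g. from a PCR panel or genome annotation).
    """
    detected = set(detected_genes)
    return REQUIRED_PRESENT <= detected and not (REQUIRED_ABSENT & detected)

# Hypothetical isolate:
isolate = {"stx2", "tellurite_resistance", "ampC", "ampD", "ampE", "ampG", "ampH"}
print(consistent_with_o104h4(isolate))  # True
```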
Escherichia coli O104:H4
Biology
752
493,744
https://en.wikipedia.org/wiki/DEC%20PRISM
PRISM (Parallel Reduced Instruction Set Machine) was a 32-bit RISC instruction set architecture (ISA) developed by Digital Equipment Corporation (DEC). It was the outcome of a number of DEC research projects from the 1982–1985 time-frame, and the project was subject to continually changing requirements and planned uses that delayed its introduction. DEC eventually decided to use the design for a new line of Unix workstations. The arithmetic logic unit (ALU) of the microPRISM version had completed design in April 1988 and samples were fabricated, but the design of other components like the floating point unit (FPU) and memory management unit (MMU) was still not complete in the summer when DEC management decided to cancel the project in favor of MIPS-based systems. An operating system codenamed MICA was developed for the PRISM architecture, which would have served as a replacement for both VAX/VMS and ULTRIX on PRISM. PRISM's cancellation had significant effects within DEC. Many of the team members left the company over the next year, notably Dave Cutler, who moved to Microsoft and led the development of Windows NT. The MIPS-based workstations were moderately successful among DEC's existing Ultrix users but had little success competing against companies like Sun Microsystems. Meanwhile, DEC's cash-cow VAX line grew increasingly less competitive as new RISC designs outperformed even the top-of-the-line VAX 9000. As the company explored the future of the VAX, they concluded that a PRISM-like processor with a few additional changes could address all of these markets. Starting where PRISM left off, the DEC Alpha program started in 1989. History Background Introduced in 1977, the VAX was a runaway success for DEC, cementing its place as the world's #2 computer vendor behind IBM. The VAX was noted for its rich instruction set architecture (ISA), which was implemented in complex microcode. The VMS operating system was layered on top of this ISA, which drove it to have certain requirements for interrupt handling and the memory model used for memory paging. By the early 1980s, VAX systems had become "the computing hub of many technology-driven companies, sending spokes of RS-232 cables out to a rim of VT-100 terminals that kept the science and engineering departments rolling." This happy situation was upset by the relentless improvement of semiconductor manufacturing as encoded by Moore's Law; by the early 1980s there were a number of capable 32-bit single-chip microprocessors with performance similar to early VAX machines yet able to fit into a desktop pizza-box form factor. Companies like Sun Microsystems introduced Motorola 68000 series-based Unix workstations that could replace a huge multi-user VAX machine with one that provided even more performance but was inexpensive enough to be purchased for every user that required one. While DEC's own microprocessor teams were introducing a series of VAX implementations at lower price points, the price-performance ratio of their systems continued to be eroded. By the latter half of the 1980s, DEC found itself being locked out of the technical market. RISC During the 1970s, IBM had been carrying out studies of the performance of their computer systems and found, to their surprise, that 80% of the computer's time was spent performing only five operations. The hundreds of other instructions in their ISAs, implemented using microcode, went almost entirely unused.
The presence of the microcode introduced a delay when the instructions were decoded, so even when one called one of those five instructions directly, it ran slower than it could have with no microcode. This led to the IBM 801 design, the first modern RISC processor. Around the same time, in 1979, Dave Patterson was sent on a sabbatical from the University of California, Berkeley to help DEC's west-coast team improve the VAX microcode. Patterson was struck by the complexity of the coding process and concluded it was untenable. He first wrote a paper on ways to improve microcoding, but later changed his mind and decided microcode itself was the problem. He soon started the Berkeley RISC project. The emergence of RISC sparked off a long-running debate within the computer industry about its merits; when Patterson first outlined his arguments for the concept in 1980, a dismissive dissenting opinion was published by DEC. By the mid-1980s practically every company with a processor design arm began exploring the RISC approach. In spite of any official indifference, DEC was no exception. In the period from 1982 to 1985, no fewer than four attempts were made to create a RISC chip at different DEC divisions. Titan, from DEC's Western Research Laboratory (WRL) in Palo Alto, California, was a high-performance ECL-based design that started in 1982, intended to run Unix. SAFE (Streamlined Architecture for Fast Execution) was a 64-bit design that started the same year, designed by Alan Kotok (of Spacewar! fame) and Dave Orbits and intended to run VMS. HR-32 (Hudson, RISC, 32-bit) was started in 1984 by Rich Witek and Dan Dobberpuhl at the Hudson, MA fab, intended to be used as a co-processor in a VAX machine. The same year Dave Cutler started the CASCADE project at DECwest in Bellevue, Washington. PRISM Eventually, Cutler was asked in 1985 to define a single RISC project, and he selected Rich Witek as the chief architect. In August 1985 the first draft of a high-level design was delivered, and work began on the detailed design. The PRISM specification was developed over a period of many months by a five-person team: Dave Cutler, Dave Orbits, Rich Witek, Dileep Bhandarkar, and Wayne Cardoza. Through this early period, there were constant changes in the design as debates within the company argued over whether it should be 32- or 64-bit, aimed at a commercial or technical workload, and so forth. These constant changes meant the final ISA specification was not complete until September 1986. At the time, the decision was made to produce two versions of the basic concept: DECwest worked on a "high-end" ECL implementation known as Crystal, while the Semiconductor Advanced Development team worked on microPRISM, a CMOS version. This work was 98% done in 1985–86 and was heavily supported by simulations by Pete Benoit on a large VAXcluster. Through this era there was still considerable scepticism on the part of DEC engineering as a whole about whether RISC was really faster, or simply faster on the trivial five-line programs being used to demonstrate its performance. Based on the Crystal design, in 1986 it was compared to the then-fastest machine in development, the VAX 8800. The conclusion was clear: for any given amount of investment, the RISC designs would outperform a VAX by 2-to-1. In the middle of 1987, the decision was made that both designs be 64-bit, although this lasted only a few weeks. In October 1987, Sun introduced the Sun-4.
Powered by a 16 MHz SPARC, a commercial version of Patterson's RISC design, it ran four times as fast as their previous top-end Sun-3, which used a 20 MHz Motorola 68020. With this release, DEC once again changed the target for PRISM, aiming it solely at the workstation space. This resulted in the microPRISM being respecified as a 32-bit system, while the Crystal project was canceled. This introduced more delays, putting the project far behind schedule. By early 1988 the system was still not complete; the CPU design was nearly complete, but the FPU and MMU, both based on the contemporary Rigel chipset for the VAX, were still being designed. The team decided to stop work on those parts of the design and focus entirely on the CPU. Design was completed in March 1988 and taped out by April. Cancellation Throughout the PRISM period, DEC was involved in a major debate over the future direction of the company. As newer RISC-based workstations were introduced, the performance benefit of the VAX was constantly eroded, and the price/performance ratio completely undermined. Different groups within the company debated how best to respond. Some advocated moving the VAX into the high end, abandoning the low end to the workstation vendors like Sun. This led to the VAX 9000 program, which was referred to internally as the "IBM killer". Others suggested moving into the workstation market using PRISM or a commodity processor. Still others suggested re-implementing the VAX on a RISC processor. Frustrated with the growing number of losses to cheaper, faster competing machines, a small skunkworks group in Palo Alto, outside of Central Engineering and focused on workstations and UNIX/Ultrix, independently entertained the idea of using an off-the-shelf RISC processor to build a new family of workstations. The group carried out due diligence, eventually choosing the MIPS R2000. This group acquired a development machine and prototyped a port of Ultrix to the system. From the initial meetings with MIPS to a prototype machine took only 90 days. Full production of a DEC version could begin as early as January 1989, whereas it would be at least another year before a PRISM-based machine would be ready. When the matter was raised at DEC headquarters, the company was split on which approach was better. Bob Supnik was asked to consider the issue for an upcoming project review. He concluded that while the PRISM system appeared to be faster, the MIPS approach would be less expensive and much earlier to market. At the acrimonious review meeting of the company's Executive Committee in July 1988, the company decided to cancel PRISM and continue with the MIPS workstations and high-end VAX products. The workstation emerged as the DECstation 3100. By this time samples of the microPRISM had been returned and were found to be mostly working. They also proved capable of running at speeds of 50 to 80 MHz, compared to the R2000's 16 to 20. Performance predictions based on these observations suggested a significant performance improvement over existing and announced RISC products from other vendors. However, without the accompanying floating-point unit, whose design had been halted, or the cache interface chip required for operating at such frequencies, which had been part of a cancelled project, floating-point performance predictions remained hypothetical. Legacy By the time of the July 1988 meeting, the company had swung almost entirely into the position that the RISC approach was a workstation play.
But PRISM's performance was similar to that of the latest VAX machines, and the RISC concept had considerable room for growth. As the meeting broke up, Ken Olsen asked Supnik to investigate ways that Digital could keep the performance of VMS systems competitive with RISC-based Unix systems. A group of engineers formed a team, variously referred to as the "RISCy VAX" or "Extended VAX" (EVAX) task force, to explore this issue. By late summer, the group had explored three concepts: a subset of the VAX ISA with a RISC-like core; a translated VAX that ran native VAX code, translating it on the fly to RISC code stored in a cache; and the ultrapipelined VAX, a much higher-performance CISC implementation. All of these approaches had issues that meant they would not be competitive with a simple RISC machine. The group next considered systems that combined both an existing VAX single-chip solution and a RISC chip for performance needs. These studies suggested that the system would inevitably be hamstrung by the lower-performance part and would offer no compelling advantage. It was at this point that Nancy Kronenberg pointed out that people ran VMS, not VAX, and that VMS only had a few hardware dependencies based on its modelling of interrupts and memory paging. There appeared to be no compelling reason why VMS could not be ported to a RISC chip as long as these small parts of the model were preserved. Further work on this concept suggested it was a workable approach. Supnik took the resulting report to the Strategy Task Force in February 1989. Two questions were raised: could the resulting RISC design also be a performance leader in the Unix market, and should the machine be an open standard? And with that, the decision was made to adopt the PRISM architecture with the appropriate modifications, eventually becoming the Alpha, and the port of VMS to the new architecture began. When PRISM and MICA were cancelled, Dave Cutler left Digital for Microsoft, where he was put in charge of the development of what became known as Windows NT. Cutler's architecture for NT was heavily inspired by many aspects of MICA. Design In terms of integer operations, the PRISM architecture was similar to the MIPS designs. Of the 32 bits in each instruction, the 6 highest and 5 lowest bits were the instruction, leaving the other 21 bits of the word for encoding either a constant or register locations. Sixty-four 32-bit registers were included, as opposed to thirty-two in the MIPS, but usage was otherwise similar. PRISM and MIPS both lack the register windows that were a hallmark of the other major RISC design, Berkeley RISC. The PRISM design was notable for several aspects of its instruction set. Notably, PRISM included Epicode (extended processor instruction code), which defined a number of "special" instructions intended to offer the operating system a stable ABI across multiple implementations. Epicode was given its own set of 22 32-bit registers to use. A set of vector processing instructions was later added as well, supported by an additional sixteen 64-bit vector registers that could be used in a variety of ways. References Bibliography Prism documents at bitsavers.org Further reading Bhandarkar, Dileep P. (1995). Alpha Architecture and Implementations. Digital Press. Bhandarkar, D. et al. (1990). "High performance issue oriented architecture". Proceedings of Compcon Spring '90, pp. 153–160. Conrad, R. et al. (1989). "A 50 MIPS (peak) 32/64 b microprocessor". ISSCC Digest of Technical Papers, pp. 76–77.
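To make the instruction layout mentioned in the Design section concrete, the sketch below (Python) splits a 32-bit instruction word into the 6 high bits, a 21-bit middle field, and the 5 low bits, as the text describes. The field names and the example word are hypothetical; the actual PRISM encoding tables are in the architecture documents cited above.

```python
def decode_prism_word(word):
    """Split a 32-bit instruction word into the fields described in the text.

    The 6 highest and 5 lowest bits together select the operation; the
    remaining 21 bits hold a constant or register specifiers. Field names
    here are illustrative only, not taken from the PRISM specification.
    """
    assert 0 <= word < 2 ** 32
    high_op = (word >> 26) & 0x3F     # top 6 bits
    middle = (word >> 5) & 0x1FFFFF   # 21-bit constant / register field
    low_op = word & 0x1F              # bottom 5 bits
    return {"high_op": high_op, "middle": middle, "low_op": low_op}

# Hypothetical instruction word:
print(decode_prism_word(0x1234ABCD))
```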
Instruction set architectures Information technology projects
DEC PRISM
Technology,Engineering
2,975
67,784,602
https://en.wikipedia.org/wiki/French%20Foundation%20for%20the%20Study%20of%20Human%20Problems
The French Foundation for the Study of Human Problems (Fondation française pour l'étude des problèmes humains), often referred to as the Alexis Carrel Foundation or the Carrel Foundation, was a eugenics organization created by the Nobel laureate in Medicine Alexis Carrel under the Vichy regime in World War II France. Alexis Carrel spent most of his career at the Rockefeller Institute in New York and returned to France just before the outbreak of World War II. Carrel, who had worked previously with Philippe Pétain during the First World War, accepted an offer to establish and lead a foundation for the study of human problems. Its ambitious mission was to give an account of the "human element associating the soul and the body". Charged with "the comprehensive study of the most appropriate measures needed to safeguard, improve, and advance the French people in all their activities," the Foundation was created by decree of the Vichy regime in 1941, and Carrel was appointed as "regent". The Foundation initiated studies on demographics (Robert Gessain, Paul Vincent, Jean Bourgeois), nutrition (Jean Sutter), and housing (Jean Merlet), as well as the first opinion polls (Jean Stoetzel). The foundation employed 300 researchers from the summer of 1942 to the end of the autumn of 1944. The Foundation made several notable accomplishments during its time. It promoted the Act of 16 December 1942 which established the prenuptial certificate, which was required before marriage and which sought to ensure the good health of the spouses, in particular in regard to sexually transmitted diseases (STD) and "life hygiene". The institute also established the school record booklet (livret scolaire), which could be used to record students' grades in French secondary schools, and thus classify and select them according to scholastic performance. Carrel was suspended after the liberation of Paris in August 1944 and died soon thereafter, thus avoiding the inevitable purge. The Foundation itself was "purged", but resurfaced soon after the war as the French Institute for Demographic Studies (INED). Most members of Carrel's team moved to INED, led by the demographer Alfred Sauvy, who coined the expression "Third World". Others joined Robert Debré's Institut national d'hygiène, which later became INSERM. See also Collaboration with the Axis Powers during World War II Human enhancement International Eugenics Conference Nazi eugenics Philippe Pétain Révolution nationale References Notes Citations Works cited 1941 in France 1942 in France 1943 in France 1944 documents Bioethics Eugenics organizations France in World War II German occupation of France during World War II Government of France Pseudo-scholarship Technological utopianism Vichy France
French Foundation for the Study of Human Problems
Technology
521
46,278,012
https://en.wikipedia.org/wiki/Initial%20acquisition%20of%20microbiota
The initial acquisition of microbiota is the formation of an organism's microbiota immediately before and after birth. The microbiota (also called flora) are all the microorganisms including bacteria, archaea and fungi that colonize the organism. The microbiome is another term for microbiota or can refer to the collected genomes. Many of these microorganisms interact with the host in ways that are beneficial and often play an integral role in processes like digestion and immunity. The microbiome is dynamic: it varies between individuals, over time, and can be influenced by both endogenous and exogenous forces. Abundant research in invertebrates has shown that endosymbionts may be transmitted vertically to oocytes or externally transmitted during oviposition. Research on the acquisition of microbial communities in vertebrates is relatively sparse, but also suggests that vertical transmission may occur. In humans Early hypotheses assumed that human babies are born sterile and that any bacterial presence in the uterus would be harmful to the fetus. Some believed that both the womb and maternal milk were sterile, and that bacteria did not enter an infant's intestinal tract until supplementary food was provided. In 1900, the French pediatrician Henry Tissier isolated Bifidobacterium from the stool of healthy, breast-fed infants. He concluded that breast milk was not sterile and suggested that diarrhea caused by an imbalance of intestinal flora could be treated by supplementing food with Bifidobacterium. However, Tissier still claimed that the womb was sterile and that infants did not come into contact with bacteria until entering the birth canal. Over the last few decades, research on the perinatal acquisition of microbiota in humans has expanded as a result of developments in DNA sequencing technology. Bacteria have been detected in umbilical cord blood, amniotic fluid, and fetal membranes of healthy, term babies. The meconium, an infant's first bowel movement, consisting of digested amniotic fluid, has also been shown to contain a diverse community of microbes. These microbial communities consist of genera commonly found in the mouth and intestines, which may be transmitted to the uterus via the blood stream, and in the vagina, which may ascend through the cervix. In non-human vertebrates In one experiment, pregnant mice were given food containing genetically labeled Enterococcus faecium. The meconium of term offspring delivered by these mice via sterile C-section was found to contain labeled E. faecium, while pups from control mice given non-inoculated food did not contain E. faecium. This evidence supports the possibility of vertical microbial transmission in mammals. Most research on vertical transmission in non-mammalian vertebrates focuses on pathogens in agricultural animals (e.g. chicken, fish). It is not known whether these species also incorporate commensal flora into eggs. In invertebrates Marine sponges are host to many sponge-specific microbe species that are found across several sponge lineages. These microbes are detected in divergent populations without overlapping ranges but are not found in the sponges' immediate environment. As a result, it is thought that the symbionts were established by a colonization event before sponges diversified and are maintained through vertical (and, to a lesser extent, horizontal) transmission. The presence of microorganisms in both the oocytes and the embryos of sponges has been confirmed. 
Many insects depend on microbial symbionts to obtain amino acids and other nutrients that are not available from their primary food source. Microbiota may be passed on to offspring via bacteriocytes associated with the ovaries or developing embryo, by feeding larvae with microbe-fortified food, or by smearing eggs with a medium containing microbes during oviposition. In other instances, microbiota composition is instead determined by the environment, as is the case for mosquito larvae living in water. See also Human microbiome project Human microbiota Gut flora Vaginal flora References Microorganisms and humans
Initial acquisition of microbiota
Biology
854
5,557,857
https://en.wikipedia.org/wiki/Pre-main-sequence%20star
A pre-main-sequence star (also known as a PMS star and PMS object) is a star in the stage when it has not yet reached the main sequence. Earlier in its life, the object is a protostar that grows by acquiring mass from its surrounding envelope of interstellar dust and gas. After the protostar blows away this envelope, it is optically visible, and appears on the stellar birthline in the Hertzsprung-Russell diagram. At this point, the star has acquired nearly all of its mass but has not yet started hydrogen burning (i.e. nuclear fusion of hydrogen). The star continues to contract, its internal temperature rising until it begins hydrogen burning on the zero age main sequence. This period of contraction is the pre-main sequence stage. An observed PMS object can either be a T Tauri star, if it has fewer than 2 solar masses (), or else a Herbig Ae/Be star, if it has 2 to 8 . Yet more massive stars have no pre-main-sequence stage because they contract too quickly as protostars. By the time they become visible, the hydrogen in their centers is already fusing and they are main-sequence objects. The energy source of PMS objects is gravitational contraction, as opposed to hydrogen burning in main-sequence stars. In the Hertzsprung–Russell diagram, pre-main-sequence stars with more than 0.5 first move vertically downward along Hayashi tracks, then leftward and horizontally along Henyey tracks, until they finally halt at the main sequence. Pre-main-sequence stars with less than 0.5 contract vertically along the Hayashi track for their entire evolution. PMS stars can be differentiated empirically from main-sequence stars by using stellar spectra to measure their surface gravity. A PMS object has a larger radius than a main-sequence star with the same stellar mass and thus has a lower surface gravity. Although they are optically visible, PMS objects are rare relative to those on the main sequence, because their contraction lasts for only 1 percent of the time required for hydrogen fusion. During the early portion of the PMS stage, most stars have circumstellar disks, which are the sites of planet formation. See also Protoplanetary disk Protostar Stellar evolution Young stellar object References Star types Star formation
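The spectroscopic discrimination described above rests on the standard surface-gravity relation; the factor-of-two radius used below is an illustrative assumption rather than a value taken from this article:

```latex
% Surface gravity at fixed mass scales as 1/R^2
g = \frac{G M}{R^{2}}, \qquad
\frac{g_{\mathrm{PMS}}}{g_{\mathrm{MS}}} = \left(\frac{R_{\mathrm{MS}}}{R_{\mathrm{PMS}}}\right)^{2}
% Example: a pre-main-sequence star with twice the radius of a main-sequence star of equal mass
\frac{g_{\mathrm{PMS}}}{g_{\mathrm{MS}}} = \left(\tfrac{1}{2}\right)^{2} = \tfrac{1}{4}
\quad\Rightarrow\quad \Delta\log g \approx -0.6\ \mathrm{dex}
```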
Pre-main-sequence star
Astronomy
481
4,355,120
https://en.wikipedia.org/wiki/Fredholm%20theory
In mathematics, Fredholm theory is a theory of integral equations. In the narrowest sense, Fredholm theory concerns itself with the solution of the Fredholm integral equation. In a broader sense, the abstract structure of Fredholm's theory is given in terms of the spectral theory of Fredholm operators and Fredholm kernels on Hilbert space. The theory is named in honour of Erik Ivar Fredholm. Overview The following sections provide a casual sketch of the place of Fredholm theory in the broader context of operator theory and functional analysis. The outline presented here is broad, whereas the difficulty of formalizing this sketch is, of course, in the details. Fredholm equation of the first kind Much of Fredholm theory concerns itself with the following integral equation for f when g and K are given: This equation arises naturally in many problems in physics and mathematics, as the inverse of a differential equation. That is, one is asked to solve the differential equation where the function is given and is unknown. Here, stands for a linear differential operator. For example, one might take to be an elliptic operator, such as in which case the equation to be solved becomes the Poisson equation. A general method of solving such equations is by means of Green's functions, namely, rather than a direct attack, one first finds the function such that for a given pair , where is the Dirac delta function. The desired solution to the above differential equation is then written as an integral in the form of a Fredholm integral equation, The function is variously known as a Green's function, or the kernel of an integral. It is sometimes called the nucleus of the integral, whence the term nuclear operator arises. In the general theory, and may be points on any manifold; the real number line or -dimensional Euclidean space in the simplest cases. The general theory also often requires that the functions belong to some given function space: often, the space of square-integrable functions is studied, and Sobolev spaces appear often. The actual function space used is often determined by the solutions of the eigenvalue problem of the differential operator; that is, by the solutions to where the are the eigenvalues, and the are the eigenvectors. The set of eigenvectors span a Banach space, and, when there is a natural inner product, then the eigenvectors span a Hilbert space, at which point the Riesz representation theorem is applied. Examples of such spaces are the orthogonal polynomials that occur as the solutions to a class of second-order ordinary differential equations. Given a Hilbert space as above, the kernel may be written in the form In this form, the object is often called the Fredholm operator or the Fredholm kernel. That this is the same kernel as before follows from the completeness of the basis of the Hilbert space, namely, that one has Since the are generally increasing, the resulting eigenvalues of the operator are thus seen to be decreasing towards zero. Inhomogeneous equations The inhomogeneous Fredholm integral equation may be written formally as which has the formal solution A solution of this form is referred to as the resolvent formalism, where the resolvent is defined as the operator Given the collection of eigenvectors and eigenvalues of K, the resolvent may be given a concrete form as with the solution being A necessary and sufficient condition for such a solution to exist is one of Fredholm's theorems. 
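In standard notation (the particular symbols f, g, K, G, and λ below are conventional choices assumed for illustration, not fixed by the text above), the equation of the first kind, the Green's-function relation, and the resolvent form of the inhomogeneous equation read:

```latex
% Fredholm integral equation of the first kind: solve for f given g and the kernel K
g(x) = \int_a^b K(x,y)\, f(y)\, \mathrm{d}y

% Green's function G of a linear differential operator L, with \delta the Dirac delta;
% the solution of L f = g is then an integral of Fredholm type
L\, G(x,y) = \delta(x-y), \qquad f(x) = \int_a^b G(x,y)\, g(y)\, \mathrm{d}y

% Inhomogeneous (second-kind) equation and its formal resolvent solution
f(x) = g(x) + \lambda \int_a^b K(x,y)\, f(y)\, \mathrm{d}y, \qquad f = (1 - \lambda K)^{-1} g
```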
The resolvent is commonly expanded in powers of , in which case it is known as the Liouville–Neumann series. In this case, the integral equation is written as and the resolvent is written in the alternate form as Fredholm determinant The Fredholm determinant is commonly defined as where and and so on. The corresponding zeta function is The zeta function can be thought of as the determinant of the resolvent. The zeta function plays an important role in studying dynamical systems. Note that this is the same general type of zeta function as the Riemann zeta function; however, in this case, the corresponding kernel is not known. The existence of such a kernel is known as the Hilbert–Pólya conjecture. Main results The classical results of the theory are Fredholm's theorems, one of which is the Fredholm alternative. One of the important results from the general theory is that the kernel is a compact operator when the space of functions is equicontinuous. A related celebrated result is the Atiyah–Singer index theorem, pertaining to the index (dim ker – dim coker) of elliptic operators on compact manifolds. History Fredholm's 1903 paper in Acta Mathematica is considered to be one of the major landmarks in the establishment of operator theory. David Hilbert developed the abstraction of Hilbert space in association with research on integral equations prompted by Fredholm's work (amongst other things). See also Green's functions Spectral theory Fredholm alternative References Mathematical physics Spectral theory
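The series expansion and the determinant mentioned above are conventionally written as follows; this is a sketch in standard notation, with the symbols assumed rather than taken from the article's own formulas:

```latex
% Liouville-Neumann series: expansion of the resolvent in powers of lambda
(1 - \lambda K)^{-1} = \sum_{n=0}^{\infty} \lambda^{n} K^{n} = 1 + \lambda K + \lambda^{2} K^{2} + \cdots

% Fredholm determinant expressed through traces of powers of the kernel
\det(1 - \lambda K) = \exp\!\left(-\sum_{n=1}^{\infty} \frac{\lambda^{n}}{n}\,\operatorname{Tr} K^{n}\right)
```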
Fredholm theory
Physics,Mathematics
1,019
16,633,452
https://en.wikipedia.org/wiki/Enzyme%20mimic
Enzyme mimicry (or the design of artificial enzymes) is a branch of biomimetic chemistry, which aims at imitating the function of natural enzymes. An enzyme mimic is a small-molecule complex that models the molecular structure, spectroscopic properties, or reactivity of an enzyme, sometimes called a bioinspired complex. Overview Enzymes are biological catalysts: biopolymers that catalyze a reaction. Although a small number of natural enzymes are built from RNA (termed ribozymes), most enzymes are proteins. Like any other protein, an enzyme is an amino acid polymer with added cofactors and other post-translational modifications. Often, most of the amino acid polymer is indirectly involved with the enzyme's function, perhaps providing ancillary structure or connectivity, indirect activity regulation, or molecular identification of the enzyme. As a result, most enzymes are large molecules weighing many kilodaltons. This bulk can hinder various investigative techniques such as NMR, EPR, electrochemistry, and crystallography, among others. It is standard practice to compare spectroscopic data from enzymes to similar spectroscopic data derived from better-characterized small molecules. In this way, the understanding of metalloenzymes and other metalloproteins has developed. In many cases, the small-molecule analogs were created for other reasons; however, it has become increasingly common for groups to intentionally make small-molecule analogs, also known as enzyme mimics. These enzyme mimics are prime examples of bioinorganic chemistry. Motivation Most enzyme mimic studies are motivated by a combination of factors, including factors that are unrelated to the enzyme. Several of the factors that are related to the enzyme are listed below. Defining the active site structure. A number of important active sites are still poorly defined. These include the oxygen-evolving complex and nitrogenase. In an effort to understand these enzymes, small-molecule analogs are created and compared to the data which exist for the proteins. Understanding the active site function. The structures of some enzymes are very well characterized; however, the function of some components of the active site may be poorly understood. This is often investigated through site-directed mutagenesis. In addition, the synthesis of a model complex can suggest the function of various components. Reproducing the enzyme's function. A number of enzymes are of interest because they catalyze reactions that chemists find challenging. These reactions include the partial oxidation of a hydrocarbon by methane monooxygenase (MMO) or the oxidation and production of hydrogen by hydrogenase. "Functional" enzyme mimics or bioinspired catalysts are designed with characteristics of the enzyme in hopes of reproducing the enzyme's functionality. Significant examples This list is extremely abbreviated in terms of the enzymes mimicked and the primary investigators working on each enzyme mimic. Richard Holm's work on mimics of nitrogenase and creation of iron–sulfur clusters. Stephen Lippard's work on MMO. Thomas Rauchfuss's, Marcetta Darensbourg's and Christopher Pickett's work on hydrogenase mimics. Harry Gray's work with porphyrin complexes. Ronald Breslow and Larry E. Overman coined the term "artificial enzyme". References Biochemistry Inorganic chemistry
Enzyme mimic
Chemistry,Biology
659
24,154,456
https://en.wikipedia.org/wiki/C16H22N2O2
{{DISPLAYTITLE:C16H22N2O2}} The molecular formula C16H22N2O2 (molar mass: 274.36 g/mol) may refer to: 4-Acetoxy-DET 4-Acetoxy-MiPT Isamoltane (CGP-361A) Molecular formulas
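The quoted molar mass follows from standard atomic weights; the arithmetic below is a routine check rather than a figure from a specific reference:

```latex
% Molar mass of C16H22N2O2 from standard atomic weights (g/mol)
M = 16(12.011) + 22(1.008) + 2(14.007) + 2(15.999)
  = 192.176 + 22.176 + 28.014 + 31.998 \approx 274.36\ \mathrm{g\,mol^{-1}}
```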
C16H22N2O2
Physics,Chemistry
75
36,924,547
https://en.wikipedia.org/wiki/Avraham%20Avigdorov
Avraham Avigdorov (; July 2, 1929 – September 4, 2012) was an Israeli soldier and recipient of the Hero of Israel award (today the Medal of Valor), the highest Israeli military decoration. Avigdorov received the award for destroying two Bren machine gun positions on March 17, 1948, during the civil war phase of the 1947–1949 Palestine war. Biography Early life Avigdorov was born in 1929 in Mitzpa, a moshava near Tiberias in Mandatory Palestine. His father Gad, a member of HaShomer, was killed in the 1936 Arab Revolt. Avigdorov studied agriculture at Mikve Israel. Military service and aftermath After finishing his studies, Avigdorov joined the Palmach in July 1947 and was assigned to the Yiftach Brigade. On March 18, 1948, during the civil war fought in the dying days of the British administration and shortly before the establishment of Israel and the outbreak of the 1948 Arab-Israeli War, he was part of an ambush of an Arab weapons convoy in the Kiryat Motzkin area. Avigdorov killed two Bren machine gunners defending the convoy and damaged their vehicle, thus turning the tide of the battle in the Palmach's favor. The vehicle he damaged exploded, seriously injuring Avigdorov. According to Avigdorov, he was placed in the morgue in the Rothschild Hospital in Haifa after being proclaimed dead by a local doctor. He was taken out after showing signs of life and stayed in a hospital with burns and a broken jaw until 1949. In that year he was operated on by South American plastic surgeons and released. In July 1949 he was awarded the Hero of Israel citation, and in April 1973 he received the Medal of Valor automatically. Following the Yom Kippur War of 1973, Avigdorov visited bereaved families, as well as wounded veterans, to show them that one could live with an injury. Later life In civilian life, he worked for the Ministry of Agriculture in testing pesticides. Personal life Avigdorov married Aliza and they had three children. References 1929 births 2012 deaths Palmach members Israeli soldiers Israeli military personnel of the 1948 Arab–Israeli War Recipients of the Medal of Valor (Israel) People from Tiberias Explosion survivors
Avraham Avigdorov
Chemistry
473
78,653,446
https://en.wikipedia.org/wiki/HIP%2067522
HIP 67522 is a G-class star which, by comparison with the Sun, is slightly larger (1.38 ) and cooler (5675 K versus 5772 K for the Sun). It lies about 127 parsecs away in the constellation Centaurus. Its visual magnitude of 9.8 makes it much too faint to be seen by the unaided eye. Two exoplanets, HIP 67522 b and HIP 67522 c, are known to orbit the star and transit its face as seen from Earth. Their orbital periods are much less than Mercury's 88 days around the Sun, being 6.96 days for b and 14.33 days for c. References Centaurus G-type main-sequence stars Hypothetical planetary systems 067522 120411 Durchmusterung objects
HIP 67522
Astronomy
168
75,274,295
https://en.wikipedia.org/wiki/SCQ1
(S)-SCQ1 is a drug which acts as a potent and selective antagonist for the 5-HT2B and 5-HT2C serotonin receptors, but with only modest affinity for the closely related 5-HT2A receptor and other targets such as 5-HT7. Since most currently available 5-HT2 class ligands have relatively poor selectivity and bind to all three subtypes, the selectivity of (S)-SCQ1 is expected to be useful for studying 5-HT2A receptor mediated responses in the absence of 5-HT2B and 5-HT2C activation. See also SB-206553 SB-242,084 Z3517967757 References 5-HT2C antagonists Benzochromenes Ketones Spiro compounds Quinuclidines
SCQ1
Chemistry
183
55,341,223
https://en.wikipedia.org/wiki/Butylate%20%28herbicide%29
Butylate or butilate is a widely used thiocarbamate herbicide. As a herbicide, it was introduced in 1962, and it quickly became the fourth most used herbicide in the US, with used in 1974. Its use has declined significantly, to in 1991 to by 1998. It is used on corn (field, sweet, and popcorn), to control grassy and broadleaf weeds and nutsedge. Application Butylate is applied as an emulsifiable concentrate of 85% active ingredient and is incorporated into the soil, being applied preplant, at plant, postplant, or after harvest. Its maximum application rate is 6.3 lb/acre (7.1 kg/ha), which is much higher than that of many other herbicides. Soil incorporation is necessary due to its high volatility. Granular and encapsulated forms are also used. Butylate is often used in combination with atrazine and cyanazine. There is no residential application of butylate; its use is only on feed crops. Butylate can interact with other thiocarbamate herbicides in plants, such as with carbofuran, which synergistically reduces the root and shoot growth of barley, as carbofuran slows the plant's metabolism of butylate, thus letting it accumulate. The effect is not seen on corn, where both carbamates compete for the same mode of uptake. Health Butylate is not toxic, not carcinogenic, not teratogenic and not mutagenic. Rat trials show acute neurotoxic effects at the lowest observed adverse effects level (LOAEL) of 2,000 mg/kg/day, leading to a no observed adverse effects level (NOAEL) of 600 mg/kg/day being established for the general population. At 400 mg/kg/day, rat offspring development was affected, showing decreased fetal weights and increased incidences of misaligned sternebrae, although no developmental effect at any dose is noted in rabbits. The EPA estimates human food exposure was under 0.3% of the permissible dose for all population subgroups, and so dietary risk of butylate is not a concern. Butylate is metabolised rapidly, so there are no residues expected in meat, milk, poultry or eggs, and no intact residues are found in harvested corn. The EPA requires that pesticide applicators wear, among other items, pants. Environmental Butylate has high volatility, with a vapor pressure of 170 mPa, so it may drift outside of target areas. Butylate is mobile to slightly mobile in soil. Ground water contamination is not expected. Technical butylate is highly toxic to freshwater fish, and non-toxic to birds and mammals. Surface water runoff may occur after rain. References Links Herbicides Thiocarbamates Isobutyl compounds
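The metric equivalent of the maximum application rate follows from the standard conversion factor of roughly 1.121 (kg/ha) per (lb/acre); the arithmetic below is a routine check, not a value from a cited source:

```latex
% Converting the maximum application rate to metric units
6.3\ \tfrac{\mathrm{lb}}{\mathrm{acre}} \times \frac{0.4536\ \mathrm{kg/lb}}{0.4047\ \mathrm{ha/acre}}
\approx 6.3 \times 1.121\ \tfrac{\mathrm{kg}}{\mathrm{ha}} \approx 7.1\ \tfrac{\mathrm{kg}}{\mathrm{ha}}
```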
Butylate (herbicide)
Biology
608
15,717,778
https://en.wikipedia.org/wiki/Nokia%20N96
The Nokia N96 is a discontinued high-end mobile phone announced by Nokia on 11 February 2008 at the Mobile World Congress in Barcelona as part of the Nseries line. The N96 runs Symbian OS v9.3 (S60 3rd Edition, FP2). It is compatible with the N-Gage 2.0 gaming platform and has a DVB-H TV tuner and AV output. Compared to the popular Nokia N95 8GB, the N96 has a doubled flash storage capacity (16 GB), dual LED flashes and a slimmer design. However, critics had negative views on the N96's battery life and user-unfriendliness and its downgraded CPU clock speed raised questions. It was one of 2008's most anticipated mobile phones, but its launch was delayed and it was only widely available from October 2008. It is thus considered a commercial failure. Critics stated that the Nokia N85 provided more new features at a significantly lower price. Release Shipments for the N96 began in September 2008. Europe, Middle East and Asia-Pacific were the first locations to provide the phone to consumers. The American and Chinese versions were expected shortly thereafter. In the US, the phone was sold for $900, which was seen as being too expensive. The general UK release date for the N96 was 1 October, although London had a separate date of 24 September, where the phone went on sale exclusively at Nokia's flagship stores on Regent Street and at Terminal 5 of Heathrow Airport. Differences from N95 8 GB Additions: Dual-LED camera flash (single LED in N95 8 GB) New audio DSP Longer music playback time (14 hrs) and video playback time (6 hrs) Windows Media WMV9 video codec Hardware acceleration for H.264 and WMV video codecs DVB-H 1.0 receiver built in – only usable with paid subscription Flip-out stand for more comfortable viewing of content when placed on a flat surface (surrounds the lens assembly) S60 3rd Edition upgraded from Feature Pack 1 to Feature Pack 2 linu v88.0.12.0 Java ME engine upgraded from MIDP 2.0 to MIDP 2.1 User data preserved when upgrading firmware - this feature is also present on the N95-2 as v21 installs UDP base files Open C/C++ support New QuickOffice application - opens all Microsoft Office files New version of Nokia Video Centre (show & edit videos) New release of Nokia Experience software 2.0 Hi-Speed microUSB (write 3 Mbit/s, read 4.1 Mbit/s) – N95 8 GB uses full-speed USB microSD memory card slot - as in original N95, while N95 8 GB has no card slot RSS 2.1 reader FM radio with RDS Dual-band HSDPA (900 and 2100 MHz) - N95 supports 2100 MHz only No need to open slider for optimal GPS reception VGA front camera - N95 8 GB has CIF front camera Video Flash lightguns Upgraded Bluetooth stereo audio FOTA (Firmware Over the Air) OMA E-mail Notification v1.0 OMA Device Management v1.2 OpenGL ES 1.1 plugin Dual Transfer Mode (MSC 11) SPP Bluetooth profiles Removals: Free sat-nav service – Nokia advised that this was in the pipeline and that they fully expected it to be made available, but did not say when it would be available Support for Nokia Music Headset HS-45, AD-54 CPU: N96 has dual ARM9 264 MHz with no floating point instructions, N95 has dual ARM11 332 MHz with vector floating point N96 has 8x image digital zoom and 4x video digital zoom while N95 has 20x image digital zoom and 8x video digital zoom, although the benefits of this are debatable Same battery as original N95 (950 mAh), but reportedly has a much better battery life due to software improvements under Feature Pack 2 - Nokia N95 8 GB has 1200 mAh battery No hardware 3D graphics accelerator No infrared port N95 has a 
lens cover and a high-quality shutter (neither the N95 8 GB nor the N96 has this feature) No manually selected MMS messaging mode. If the user writes a long text message, the N96 will automatically select the MMS mode, which could stop recipients from receiving the message if they do not have MMS set up on their phones. A Nokia USA employee stated that there was an update in the works to fix this very soon. It is assumed that this automatic selection of MMS mode is due to Nokia's Smart Connectivity. The built-in VoIP client from the N95, which allowed the user to make Internet calls without installing any additional software, was removed from the N96. Nevertheless, the VoIP 2.1 API still exists and can be used by software developers in their applications. The pencil key used to mark and unmark items and highlight text is not included, but this action can still be done by holding down the # key. In popular culture In 2008, a video commercial advertising the Nokia N96 Limited Edition Bruce Lee went viral on the internet. The video, produced by the Beijing office of the J. Walter Thompson (JWT) agency and targeting the Chinese market, shows what looks like archival footage of Bruce Lee doing various tricks with nunchaku (playing table tennis, lighting a cigarette in another person's mouth and extinguishing thrown lighted matches mid-air). The video was deliberately made to look like never-before-seen footage of Bruce Lee (in particular, the ball in the video was digitally added in post-production), as was later admitted by the JWT chief creative officer Polly Chu. The associated website shown in the commercial, nokia-lee.com.cn, has since been taken over by pornographic content. The N96 also appears in Katy Perry's "Hot n Cold" music video. See also Nokia N85 Sony Ericsson W995 Sony Ericsson C905 Samsung i8510 Innov8 References External links Original Press Release, Americas Press Release N96 specifications from Nokia Europe N96 Drivers and Support from Nokia Europe N-Gage (service) compatible devices Mobile phones introduced in 2008 Nokia Nseries Slider phones Discontinued flagship smartphones
Nokia N96
Technology
1,336
856,954
https://en.wikipedia.org/wiki/Otter%20%28software%29
Otter is an infrastructure automation tool that runs under Microsoft Windows, designed by the software company Inedo. Otter utilizes Infrastructure as Code to model infrastructure and configuration. Otter provisions and configures servers automatically, without logging in to a command prompt. Key areas Otter focuses on two key areas: Configuration automation - Otter allows users to model the configuration of servers, roles, and environments; monitor for drift, schedule changes, and ensure consistency across servers Orchestration automation - Otter can spin up cloud servers, build containers, deploy packages, patch servers, or any other multi-server/service automation Otter can continuously monitor for server configuration drift, can automatically remediate drift, and can send notification when drift occurs. Key features Otter has a visual, web-based user interface that is designed to "create complex configurations and orchestrations using the intuitive, drag-and-drop editor, and then switch to-and-from code/text mode as needed." Otter aims to enable DevOps practices through its UI, and shows the configuration state of an organization's servers infrastructure (local, virtual, cloud-built). Otter supports Microsoft Windows, and supports Linux-based operating systems through SSH-based agents. Otter monitors servers for configuration changes, and reports when the configuration has drifted. It supports both agent and agentless Windows servers. From version 1.5, Otter integrates with Atlassian Jira and Git via extensions. PowerShell Otter allows the use of Windows PowerShell scripts. See also Ansible Chef Puppet Configuration Management Continuous configuration automation DevOps toolchain References Orchestration software Configuration management
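The drift-monitoring workflow described above can be pictured with a small, generic sketch. The Python below is purely illustrative: it is not OtterScript and does not use Otter's actual API; the server list, the desired-state dictionary, and the check/remediate functions are hypothetical stand-ins for what the tool models declaratively.

```python
# Illustrative sketch of desired-state configuration checking and drift remediation.
# This is NOT Otter's own DSL or API; names and data here are hypothetical.

# Desired state: what each server role should look like.
DESIRED_STATE = {
    "web": {"packages": {"nginx"}, "services_running": {"nginx"}},
    "db": {"packages": {"postgresql"}, "services_running": {"postgresql"}},
}

# Observed state would normally be collected from each server by an agent or over SSH.
OBSERVED_STATE = {
    "web-01": {"role": "web", "packages": {"nginx"}, "services_running": set()},
    "db-01": {"role": "db", "packages": {"postgresql"}, "services_running": {"postgresql"}},
}


def detect_drift(observed: dict) -> dict:
    """Compare a server's observed state against the desired state for its role."""
    desired = DESIRED_STATE[observed["role"]]
    return {
        "missing_packages": desired["packages"] - observed["packages"],
        "stopped_services": desired["services_running"] - observed["services_running"],
    }


def remediate(server: str, drift: dict) -> None:
    """Stand-in for remediation: a real tool would install packages or start services."""
    for pkg in drift["missing_packages"]:
        print(f"{server}: would install package {pkg}")
    for svc in drift["stopped_services"]:
        print(f"{server}: would start service {svc}")


if __name__ == "__main__":
    for server, observed in OBSERVED_STATE.items():
        drift = detect_drift(observed)
        if any(drift.values()):
            print(f"{server}: drift detected {drift}")
            remediate(server, drift)
        else:
            print(f"{server}: in desired state")
```

The real product expresses desired state declaratively and applies changes through its own execution engine; the sketch only illustrates the compare-and-remediate loop that drift monitoring implies.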
Otter (software)
Engineering
324
15,426,227
https://en.wikipedia.org/wiki/Radisson%20Substation
Radisson Substation is a large electrical substation located near Radisson on the Route de la Baie James highway. The switching station, owned by Hydro-Québec, is the largest substation in its power grid, covering an area equivalent to roughly 100 football fields. Electrical power heading into the switchyard is collected via several relatively short alternating current (AC) 735 kilovolt (kV) and AC 315 kV power lines coming from hydroelectric plants like the Robert-Bourassa power station, part of the James Bay Project, located away. The power from these lines is either stepped up from AC 315 kV to AC 735 kV, converted by static converters to high-voltage direct current (HVDC) at ±450 kV, or passed through unchanged (735 kV to 735 kV). The amount of electrical power passing through the substation is 6,600 megawatts (MW). Two AC 735 kV lines and one HVDC ±450 kV line, part of the James Bay transmission system, transfer this power over a distance of to Montreal and the Northeastern United States. See also Hydro-Québec Hydro-Québec's electricity transmission system Quebec - New England Transmission References Converter stations James Bay Project
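The relation between power, voltage, and current ties the stated figures together; the 2,000 MW loading assumed below is purely illustrative and is not a rating given in this article:

```latex
% Illustrative only: a +/-450 kV bipole carrying an assumed 2,000 MW
V_{\mathrm{pole\text{-}to\text{-}pole}} = 2 \times 450\ \mathrm{kV} = 900\ \mathrm{kV}, \qquad
I = \frac{P}{V} = \frac{2000\ \mathrm{MW}}{900\ \mathrm{kV}} \approx 2.2\ \mathrm{kA}
```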
Radisson Substation
Engineering
251
42,116,074
https://en.wikipedia.org/wiki/NGC%201746
NGC 1746 is an asterism in the constellation Taurus that was described in 1863 by Heinrich Louis d'Arrest and as a result was recorded in the New General Catalogue (NGC). Previously, the object was classified as an open cluster; however, more recent observations have shown that it is a random formation of stars in Earth's sky, an asterism. NGC 1746 has an apparent magnitude of 6.1. It is also known as the Cluster of Clusters or the Taurus Triplet, as the same region of the sky also includes NGC 1750 and NGC 1758. NGC 1746 is a sparse, large star grouping that spans around 40 arcminutes. Although originally classified as an open cluster, it is now considered likely to be an asterism. NGC 1750 is a true open star cluster, spanning 20 arcminutes, with an estimated age of 150–200 million years. It contains several dozen stars, many of which are still on the main sequence. NGC 1758 is a smaller and denser open cluster, about 10 arcminutes in size. Its stars are older, with an estimated age of 800 million years, and redder due to the evolution of its massive stars into giant phases. Although these clusters are visually close, detailed analysis of stellar motion and distance indicates that NGC 1750 and NGC 1758 are unrelated clusters, and NGC 1746 is likely a random grouping of stars rather than a true cluster. Sources Galadí-Enríquez, D.; Jordi, C.; Trullols, E.: "Astrometry and Photometry of Open Clusters: NGC 1746, NGC 1750 and NGC 1758"; in: Astrophysics and Space Science, Vol. 263, No. 1/4, pp. 307ff. (1998) External links 1746 Taurus (constellation) Asterisms (astronomy)
NGC 1746
Astronomy
374
70,289,315
https://en.wikipedia.org/wiki/7%20Trianguli
7 Trianguli is a solitary star located in the northern constellation Triangulum. It has an apparent magnitude of 5.25, making it faintly visible to the naked eye under ideal conditions. The star is situated at a distance of 360 light years but is approaching with a heliocentric radial velocity of , which is poorly constrained. 7 Trianguli has a stellar classification of A0 V or B9.5 V, depending on the study. At present it has 2.77 times the mass of the Sun and 3.24 times the radius of the Sun. It shines at 89.1 times the luminosity of the Sun from its photosphere at an effective temperature of 10,685 K, giving it a bluish-white glow. 7 Trianguli is a young star, with an age of 283 million years, and it spins rapidly with a projected rotational velocity of . It has been classified as having a peculiar spectrum, but it is considered doubtful that it is actually a chemically peculiar star. Together with δ Trianguli and γ Trianguli, it forms an optical (line-of-sight) triple. References A-type main-sequence stars B-type main-sequence stars Triangulum Trianguli, 7 013869 010559 0655 Durchmusterung objects
7 Trianguli
Astronomy
270
21,242,503
https://en.wikipedia.org/wiki/International%20Conference%20of%20Physics%20Students
The International Conference of Physics Students (ICPS) is an annual conference of the International Association of Physics Students (IAPS). Usually, up to 500 students from all over the world attend the event, which takes place in another country every year in August. The event includes the opportunity for students at bachelor, master and doctoral level to present their research, whilst listening and interacting with invited speakers of international reputation. During the event, usually lasting between 5 and 7 days, the IAPS holds its Annual General Meeting (AGM) and elects a new Executive Committee. The choice of the host country of ICPS is made two years in advance. Program The main component of the conference consists of lectures given by the students themselves for other students. Guest lectures held by invited speakers and lab tours complete the scientific program. Further activities include city tours, excursions and social events. The participation fee is usually close to €200 per person, including accommodation, food and any extra activity organised by the local committee. Conference venues The following list contains the venues of the ICPS conferences 2022 Puebla, Mexico 2021 Copenhagen, Denmark 2020 Puebla, Mexico (cancelled due to COVID-19 pandemic) 2019 Cologne, Germany 2018 Helsinki, Finland 2017 Turin, Italy 2016 Malta 2015 Zagreb, Croatia 2014 Heidelberg, Germany 2013 Edinburgh, United Kingdom 2012 Utrecht, The Netherlands 2011 Budapest, Hungary 2010 Graz, Austria 2009 Split, Croatia 2008 Kraków, Poland 2007 London, United Kingdom 2006 Bucharest, Romania 2005 Coimbra, Portugal 2004 Novi Sad, Serbia and Montenegro 2003 Odense, Denmark 2002 Budapest, Hungary 2001 Dublin, Ireland 2000 Zadar, Croatia 1999 Helsinki, Finland 1998 Coimbra, Portugal 1997 Vienna, Austria 1996 Szeged, Hungary 1995 Copenhagen, Denmark 1994 St. Petersburg, Russia 1993 Bodrum, Turkey 1992 Lisbon, Portugal 1991 Vienna, Austria 1990 Amsterdam, Netherlands 1989 Freiburg, Germany 1988 Prague, Czechoslovakia 1987 Debrecen, Hungary 1986 Budapest, Hungary History In 1985 a group of Hungarian students decided to host a gathering of Physics students from all over the world. This resulted in the first International Conference of Physics Students in 1986. Due to the large success of this conference a second meeting in 1987 was organized in Debrecen, Hungary. At this occasion the International Association of Physics Students was founded. Furthermore, it was decided to have an International meeting annually. Since then, 30 conferences have taken place. ICPS 2017 The ICPS 2017 took place in Turin, between August 7 and August 14, hosted by the Italian Association of Physics Students (Associazione Italiana Studenti di Fisica) (AISF). The Italian Organizing Committee prepared the formal bid to host the conference only one year after its foundation, when it was not yet formally recognized as a National Committee of IAPS. The event was mostly held at the Campus Einaudi, with an opening ceremony hosted at the Cavallerizza Reale and the Rector's Palace of the University of Turin. Approximately 450 students attended the event and almost 50 volunteers from AISF were involved in the conference activities. Notable guests were Elena Aprile (Columbia University), Steve Cowley (University of Oxford), Roberto Vittori (European Space Agency) and James Kakalios (University of Minnesota, author of the popular book The Physics of Superheroes). 
Young invited speakers included Francesco Tombesi (NASA), Agnese Bissi (Harvard University) and Francesco Prino (University of Turin). The program of ICPS 2017 comprised visits to the National Centre for Oncological Hadrontherapy in Pavia, traditional wine cellars, the Italian Institute of Technology in Genoa, the Venaria Reale palace, the Sacra di San Michele Abbey and a number of innovation hubs in Northern Italy. ICPS 2016 The ICPS 2016 took place from August 11th to August 17th in Malta and was hosted by physics students from the University of Malta. Around 350 physics students attended the conference. Notable guest speakers were Jocelyn Bell Burnell from the University of Oxford and Mark McCaughrean from ESA. ICPS 2015 The ICPS 2015 took place from August 12th to August 19th in Zagreb and was hosted by physics students from the Croatian Physical Society. Around 400 physics students attended the conference. A notable guest speaker was Prof. Philip W. Phillips. ICPS 2014 The ICPS 2014 took place from August 10th to August 17th in Heidelberg and was hosted by physics students from the jDPG. Around 450 physics students attended the conference. Notable guest speakers were Metin Tolan, Karlheinz Meier and John Dudley, president of the European Physical Society. ICPS 2013 The ICPS 2013 took place from August 15th to August 21st in Edinburgh and was hosted by physics students from Heriot-Watt University. Around 400 physics students attended the conference. ICPS 2012 The ICPS 2012 took place from August 3rd to August 10th in Utrecht and was hosted by physics students from SPIN. Around 400 physics students attended the conference. ICPS 2011 The ICPS 2011 took place from August 11th to August 18th in Budapest and was hosted by physics students from the Hungarian Association of Physics Students (Mafihe). Around 400 physics students attended the conference. Notable guest speakers were Ferenc Krausz from the Max Planck Institute for Quantum Optics, Carlo Rubbia from CERN, and Laszlo Kiss from the Konkoly Observatory. ICPS 2010 The ICPS 2010 took place from August 17th to August 23rd in Graz and was hosted by physics students from both Graz University of Technology and the University of Graz. A total of 446 students attended the conference; this number includes 64 volunteers who helped organize the event. Notable guest speakers were Peter Zoller and Sabine Schindler, both of the University of Innsbruck, and John Ellis from CERN. See also University of Malta Graz University of Technology University of Heidelberg References External links IAPS ICPS2008 ICPS2009 ICPS2010 ICPS 2011 ICPS 2012 ICPS 2014 ICPS 2015 ICPS 2016 ICPS 2017 ICPS 2021 International conferences International student organizations Physics education Physics conferences
International Conference of Physics Students
Physics
1,238
45,672,964
https://en.wikipedia.org/wiki/NGC%201592
NGC 1592 is an irregular galaxy in the constellation Eridanus. It is about 20,000 light-years across. It has not been studied in detail, as it lies at 27 degrees south, making it invisible from latitudes above 63 degrees north in flat terrain, and above about 50 degrees north in hilly terrain. It was discovered in 1835 by John Herschel. 2014 observations Until 2014, not much was known about the galaxy, other than the fact that it was irregular. In early 2014, the galaxy was observed with a 2-foot telescope at the SARA remote observatory in Chile, revealing it in higher resolution. It appears the galaxy is in the process of forming stars at a high rate, primarily in the red areas of the image. Additionally, the galaxy has several small clumps of stars, implying an ongoing merger. Companions NGC 1592 appears to have a companion, 2MFGC (2MASS Flat Galaxy Catalog) 3572, about 40 million light years away, assuming a velocity similar to that of NGC 1592. They are separated by about 750,000 ± 200,000 light years. References External links 18351114 Irregular galaxies Barred spiral galaxies 1592 015292 Eridanus (constellation)
NGC 1592
Astronomy
247
2,859,017
https://en.wikipedia.org/wiki/Nu%20Aquarii
Nu Aquarii (ν Aqr, ν Aquarii) is the Bayer designation for a star in the equatorial constellation of Aquarius. With an apparent visual magnitude of 4.52, Nu Aquarii is visible to the naked eye. Its distance from Earth, as determined from parallax measurements, is around . At an estimated age of 708 million years, it has evolved into a giant star with a spectrum that matches a stellar classification of G8 III. It has more than double the mass of the Sun and has expanded to eight times the Sun's radius. Nu Aquarii is radiating 37 times the luminosity of the Sun from its outer atmosphere at an effective temperature of . At this temperature, the star glows with the yellowish hue of a G-type star. Together with μ Aquarii, it is Albulaan, a name derived from the Arabic term al-bulaʽān (ألبولعان), meaning "the two swallowers". This star, along with ε Aqr (Albali) and μ Aqr (Albulaan), formed al Bulaʽ (البلع), the Swallower. In Chinese, (), meaning Celestial Ramparts, refers to an asterism consisting of ν Aquarii, ξ Aquarii, 46 Capricorni, 47 Capricorni, λ Capricorni, 50 Capricorni, 18 Aquarii, 29 Capricorni, 9 Aquarii, 8 Aquarii, 14 Aquarii, 17 Aquarii and 19 Aquarii. Consequently, the Chinese name for ν Aquarii itself is (, ). References External links Image Nu Aquarii The Constellations and Named Stars Albulaan 201381 Aquarii, Nu Aquarius (constellation) G-type giants 104459 Aquarii, 013 8093 Durchmusterung objects
Nu Aquarii
Astronomy
402
4,542,797
https://en.wikipedia.org/wiki/Water%20well%20pump
A water well pump is a pump that is used in extracting water from a water well. Deep well pumps extract groundwater from subterranean aquifers, offering a reliable source of water independent of municipal networks. These pumps, often submersible and powered by electricity, can access water reserves located much deeper than shallow wells, ensuring a consistent supply even during periods of drought. They include different kinds of pumps, most of them submersible pumps: Hand pump, manually operated Injector, a jet-driven pump Mechanical or rotary lobe pump requiring mechanical parts to pump water Solar-powered water pump Pump driven by air as used by the Amish Pump driven by air as used in the Australian outback Manual pumpless or hand pump wells requiring a human operator The pump replaces the use of a bucket and pulley system to extract water. External links Water well pump article Pumps Water wells
Water well pump
Physics,Chemistry,Engineering,Environmental_science
178
1,151,761
https://en.wikipedia.org/wiki/Linksys%20WRT54G%20series
The Linksys WRT54G Wi-Fi series is a series of Wi-Fi–capable residential gateways marketed by Linksys, a subsidiary of Cisco, from 2003 until acquired by Belkin in 2013. A residential gateway connects a local area network (such as a home network) to a wide area network (such as the Internet). Models in this series use one of various 32-bit MIPS processors. All WRT54G models support Fast Ethernet for wired data links, and 802.11b/g for wireless data links. Hardware and revisions WRT54G The original WRT54G was first released in December 2002. It has a 4+1 port network switch (the Internet/WAN port is part of the same internal network switch, but on a different VLAN). The devices have two removable antennas connected through Reverse Polarity TNC connectors. The WRT54GC router is an exception and has an internal antenna with optional external antenna. As a cost-cutting measure, as well as to satisfy FCC rules that prohibit fitting external antennas with higher gain, the design of the latest version of the WRT54G no longer has detachable antennas or TNC connectors. Instead, version 8 routers simply route thin wires into antenna 'shells' eliminating the connector. As a result, Linksys HGA7T and similar external antennas are no longer compatible with this model. Until version 5, WRT54G shipped with Linux-based firmware. WRT54GS The WRT54GS is nearly identical to the WRT54G except for additional RAM, flash memory, and SpeedBooster software. Versions 1 to 3 of this router have 8 MB of flash memory. Since most third parties' firmware only use up to 4 MB flash, a JFFS2-based read/write filesystem can be created and used on the remaining 4 MB free flash. This allows for greater flexibility of configurations and scripting, enabling this small router to both load-balance multiple ADSL lines (multi-homed) or to be run as a hardware layer-2 load balancer (with appropriate third party firmware). WRT54GL Linksys released the WRT54GL (the best-selling router of all time) in 2005 to support third-party firmware based on Linux, after the original WRT54G line was switched from Linux to VxWorks, starting with version 5. The WRT54GL is technically a reissue of the version 4 WRT54G. Cisco was sued by the FSF for copyright infringement, but the case was settled. WRTSL54GS WRTSL54GS is similar to the WRT54GS while adding additional firmware features and a USB 2.0 port (referred to as StorageLink) which can be used for a USB hard disk or flash drive. Unlike other models, the WRTSL54GS has only a single 1.5 dBi antenna, and it is not removable. WRT54GX WRT54GX comes with SRX (Speed and Range eXpansion), which uses "True MIMO" technology. It has three antennas and was once marketed as a "Pre-N" router, with eight times the speed and three times the range over standard 802.11g routers. WRT54GP2 and WRTP54G WRT54GP2 has 1 or 2 antennas, and a built-in analog telephony adapter (ATA) with 2 phone lines, but only 3 network ports. "Vonage" WRTP54G has 1 antenna, 2 phone lines, 4 network ports — Same S/N Prefix WRT54GX2 WRT54GX2 has 2 antennas, and was advertised to have up to 6 times the speed and 2 times the range over standard 802.11g routers. Chipset Realtek. It is not compatible with DD-WRT. WRT54GX4 WRT54GX4 has 3 moveable antennas, and is advertised to have 10 times the speed and 3 times the range of standard 802.11g routers. WRT54GX4-EU: chipset Realtek RTL8651B, radio chipset Airgo AGN303BB, flash S29GL064M90TFIR4. It does not appear to be compatible with DD-WRT. WRT51AB WRT series with 802.11a support. 
(First Generation) WRT55AG WRT54G series with 802.11a support. WTR54GS The Linksys WTR54GS is a confusingly named derivative of the WRT54G. It is a compact wireless travel router with SpeedBooster support that has only one LAN and one WAN Fast Ethernet interface, but has two wireless interfaces. The WTR54GS has the ability to make an unencrypted wireless connection on one interface, and make open shared connections on the other wireless interface, or the LAN port. WRT54G2 The WRT54G2 is an iteration of the WRT54G in a smaller, curved black case with internal antenna(s). This unit includes a four-port 10/100 switch and one WAN port. * Note: version 1.5 of the WRT54G2 is NOT supported by dd-wrt. This is because it uses Atheros components (i.e. the Atheros SoC) which require more than the built-in 2 MB of flash memory for a dd-wrt solution. WRT54GS2 The WRT54GS2 is the WRT54G2 hardware with the VxWorks 5.5 firmware including SpeedBooster. It has a sleek black design with 2 internal antennas. It includes a 4-port 10/100 switch and one 10/100 WAN port on the rear. WRT54GC WRT54GC series with 802.11b/g support. This unit has a four-port 10/100 switch and one WAN port. The "C" in the router number stands for compact, as the unit measures 4" by 4" by 1" with an internal antenna. The unit can be expanded with the addition of an HGA7S external antenna to boost range. Hardware version 1.0 has been the only option available in the United States since its introduction in 2005. Version 2.0 is shipping in, amongst other countries, the United Kingdom. This unit has 1 MB flash, 4 MB RAM and a non-detachable external antenna. The internal hardware is based on a Marvell ARM914 ("Libertas") reference design which is probably identical to the SerComm IP806SM, Xterasys XR-2407G, Abocom ARM914, Hawking HWGR54 Revision M, and the Airlink 101 AR315W. By appropriately changing the value of the firmware byte 0x26, the WRT54GC can be cross-flashed with firmware based on the same reference platform. There were reports in 2006 that a sister platform of the WRT54GC (the AR315W) was hacked to run Linux. WRT54G3G/WRT54G3GV2 Mobile Broadband router The WRT54G3G/WRT54G3GV2 Mobile Broadband routers are variants that have four Fast Ethernet ports, one wired Internet port (for DSL/cable connections), plus a PCMCIA slot for use with a cellular PC Card "aircard". The V2 model has two additional USB ports for 3G modem use and one other USB port, which has yet to be put to use. Other cellular providers To use this router with other cellular providers, one must use alternative firmware. The stock firmware does not support other cellular providers, even if one has the exact supported aircard. For example, Telus Mobility (Canada) uses the Sierra Wireless Aircard 595, which is supported by this router, but because it is from Telus Mobility and not from Sprint (USA), the router will never load the card to make it operational. This is only true for the Sprint and AT&T-branded models. WRT54G-TM, WRTU54G-TM, and WRTU54GV2-TM The WRT54G-TM (TM stands for T-Mobile) was provided for the T-Mobile "Hotspot@Home" service. The service allows calls to be made via T-Mobile's GSM network or via Wi-Fi Unlicensed Mobile Access (UMA), using the same telephone and phone number (a special dual-mode phone designed for the service is required, e.g. the BlackBerry Pearl 8120). Additionally, once a call is in progress, one may transition from Wi-Fi to GSM (and vice versa) seamlessly, as the Wi-Fi signal comes and goes, such as when entering or exiting a home or business. 
A special router is not needed to use the service, but the T-Mobile branded routers are supposed to enhance the telephone's battery life. This is the only known tweak to the TM version of the firmware. The hardware appears similar to that of the WRT54GL, except it has 32 MB RAM and 8 MB flash memory. The WRT54G-TM having a serial number that starts with C061 has these specifications: Broadcom BCM5352EKPBG CPU 32 MB RAM (Hynix HY5DU561622ETP-D43) 8 MB Flash (JS28f640) Uses the same BINs that the WRT54GS v3.0 does WRT54G-RG The WRT54G-RG (RG stands for Rogers) is also called the Rogers TalkSpot Voice-Optimized Router. It works with Rogers' Talkspot UMA service, which allows calls to be made via Rogers' cellular network or via Wi-Fi Unlicensed Mobile Access (UMA), using the same telephone and phone number. A UMA-compatible phone is required. The WRT54G-RG and the WRT54G-TM are identical in terms of hardware. WRT54GH The WRT54GH comes with an internal antenna, a four-port network switch, and support for Wi-Fi 802.11b/g. Third-party firmware projects After Linksys was obliged to release source code of the WRT54G's firmware under terms of the GNU General Public License, there have been many third party projects enhancing that code as well as some entirely new projects using the hardware in these devices. Three of the most widely used are DD-WRT, Tomato and OpenWrt. Hardware versions and firmware compatibility As of January 2006, most third-party firmware are no longer compatible with version 5 of both the WRT54G and the WRT54GS. The amount of flash memory in the version 5 devices has been reduced to 2 MB, too small for current Linux-based third-party firmware. (See table above for information on identifying the version based on the serial number printed on the bottom of the unit, and on the outside of the shrink-wrapped retail box.) Some users have succeeded in flashing and running a stripped down but fully functional version of DD-WRT called 'micro' on a version 5 WRT54G. An easier method not requiring any disassembly of the device has since been devised for flashing v5-v8 to DD-WRT. To support third-party firmware, Linksys has re-released the WRT54G v4, under the new model name WRT54GL (the 'L' in this name allegedly stands for 'Linux'). It is also possible to replace the 2 MB flash chip in the WRT54G with a 4 MB flash chip. The Macronix International 29LV320BTC-90 is a suitable part although others may work as well. The user must first install a JTAG header and use a JTAG cable to back up the firmware, then replace the chip and restore the firmware with the JTAG cable. After testing for proper functionality of the modified unit, third-party firmware can be flashed using the JTAG cable and a suitable image file. With the Attitude Adjustment (12.09) release of OpenWrt, all WRT54G hardware versions with 16 MB of RAM are no longer supported, and older Backfire (10.03) is recommended instead. Issues came from dropping support for the legacy Broadcom target brcm-2.4, making lower end devices run out of memory easily. Support for Attitude Adjustment is limited to WRT54G hardware versions with 32 MB of RAM, which includes WRT54GS and (apart from performing RAM upgrades through hardware modifications) some of the WRT54G and WRT54GL versions having the capability for unlocking their additional 16 MB of RAM. See also Linksys routers References Further reading External links Linksys website GPL Code Center at Linksys Wireless networking hardware Hardware routers Linux-based devices Linksys
Linksys WRT54G series
Technology
2,755
14,407,606
https://en.wikipedia.org/wiki/Erwin%20Madelung
Erwin Madelung (18 May 1881 – 1 August 1972) was a German physicist. He was born in 1881 in Bonn. His father was the surgeon Otto Wilhelm Madelung. He earned a doctorate in 1905 from the University of Göttingen, specializing in crystal structure, and eventually became a professor. It was during this time he developed the Madelung constant, which characterizes the net electrostatic effects of all ions in a crystal lattice, and is used to determine the energy of one ion. In 1921 he succeeded Max Born as the Chair of Theoretical Physics at the Goethe University Frankfurt, which he held until his retirement in 1949. He specialized in atomic physics and quantum mechanics, and it was during this time he developed the Madelung equations, an alternative form of the Schrödinger equation. He is also known for the Madelung rule, which states that atomic orbitals are filled in order of increasing values of the sum of the principal and azimuthal quantum numbers, n + l, with orbitals of equal n + l filled in order of increasing n. Publications Magnetisierung durch schnell verlaufende Stromvorgänge mit Rücksicht auf Marconis Wellendetektor. Göttingen, Univ., Phil. Fak., Diss., 1905. Die mathematischen Hilfsmittel des Physikers, Springer Verlag, Berlin 1922. Subsequent editions: 1925, 1936, 1950, 1953, 1957, 1964. References External links Portrait drawing at Frankfurt University 1881 births 1972 deaths 20th-century German physicists University of Göttingen alumni Burials at Frankfurt Main Cemetery People involved with the periodic table
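As an illustration of the rule (standard textbook ordering, not a listing taken from this article), the orbitals sorted by increasing n + l, and by increasing n within equal n + l, begin:

```latex
% Madelung (n + l) ordering of atomic orbital filling
% n+l = 1: 1s;  n+l = 2: 2s;  n+l = 3: 2p, 3s;  n+l = 4: 3p, 4s;  n+l = 5: 3d, 4p, 5s
1s \prec 2s \prec 2p \prec 3s \prec 3p \prec 4s \prec 3d \prec 4p \prec 5s \prec 4d \prec 5p \prec 6s
```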
Erwin Madelung
Chemistry
311
19,367,907
https://en.wikipedia.org/wiki/Biomarkers%20%28journal%29
Biomarkers is a peer-reviewed academic journal covering research on biomarkers. It is published by Taylor & Francis and the editor-in-chief is Martin Möckel (Charité). According to the Journal Citation Reports, the journal has a 2015 impact factor of 2.016. References External links Academic journals established in 1996 Taylor & Francis academic journals English-language journals Biochemistry journals
Biomarkers (journal)
Chemistry
81
50,029,487
https://en.wikipedia.org/wiki/Viu%20%28streaming%20service%29
Viu (pronounced as view) is a Hong Kong-based over-the-top video on demand streaming service from PCCW Media Group's Viu International Ltd. Operated in a dual-revenue model comprising subscriptions and advertising, Viu delivers content in different genres from Asia's top content providers with local language subtitles, as well as original production series under the Viu Original initiative (similar to original programming from other services like Disney+, Amazon Prime Video and Netflix). Viu is now available in 25 markets across Asia, Africa and the Middle East including Bahrain, Egypt, Hong Kong, Indonesia, Jordan, Kuwait, Malaysia, Myanmar, Oman, Philippines, Qatar, Saudi Arabia, Singapore, South Africa, Thailand, and United Arab Emirates. As of December 2022 annual results, Viu had an estimated 66.5 million monthly active users. History Viu International Ltd launched the Viu OTT video service in Hong Kong on 26 October 2015. In January 2016, Viu announced its official launch in Singapore. In March 2016, Viu announced its official launch in India and Malaysia. In May 2016, Viu launches in Indonesia. In November 2016, Viu announced its official launch in the Philippines and has achieved four million users and over 218 million video views. In February 2017, Viu was available in major Middle East countries including Bahrain, Egypt, Jordan, Kuwait, Oman, Qatar, Saudi Arabia and the United Arab Emirates. In May 2017, Viu announced its official launch in Thailand and has expanded to 15 markets across Asia and the Middle East countries. In September 2018, Viu announced its official launch in Myanmar. In March 2019, Viu announced its official launch in South Africa. In December 2019, Viu announced its suspension of operations in India without providing a date. On 5 November 2021, Viu stopped its services in India. In June 2023, PCCW and the Canal+ Group announced a strategic partnership between the two companies, under which Canal+ will invest $300 million, including an initial investment of $200 million, in Viu and gain a 26.1% minority stake in the streaming service, with the option to invest further and gain a 51% controlling stake. The partnership serves as a catalyst for international expansions for both PCCW and Canal+ as well as collaborations between Viu and Canal+ on original productions. In February 2024, the Canal+ Group announced it had increased its stake in Viu to 30%. This move underscored Canal+'s confidence in Viu, and highlighted the group's strategic focus on Asia as a pivotal growth area. By deepening its investment in Viu, which operates as a leading streaming service in Asia, the Middle East, and South Africa, Canal+ aims to accelerate its expansion in these regions. The total investment by Canal+ in Viu reached approximately $300 million, with the group maintaining the option to further increase its stake to 50%. This progression is indicative of the evolving relationship between Canal+ and PCCW, and underscores the importance of Viu in Canal+'s global strategy. Content Viu offers a wide-range catalogue of movies and TV shows from across Asia, including popular and new titles. First-run TV episodes are available on the platform with advertisements in between, at least 72 hours after broadcast; Paid users can access episodes without advertisements, at least eight hours after broadcast. Several titles are also available separately in localized language audio dub. 
Movie titles available on the platform, mostly from its sister network (now Baogu Movies) or partner networks such as tvN Movies, require an active Premium subscription. Original programming Viu has begun producing its own original titles. Most of these originals are based in Southeast Asia and have received multiple award recognitions, such as the Asian Academy Creative Awards and the Asian Television Awards. Viu also produces originals for the Middle East and Africa, and previously produced originals for the Indian market. Drama Comedy Co-production Exclusive international distribution In addition to producing and distributing local titles, several Viu shows have been acquired and co-produced, partly or wholly, by Viu for exclusive first-run release or distribution (under the Viu Original label) in Southeast Asia, the Middle East and South Africa in deals with partners in other regions such as Rakuten Viki, Far EasTone Friday Video, and KC Global Media (ONE), among others. Unlike major streaming platforms, which prevent third-party linear channels from securing broadcast rights to most co-produced shows, most Viu Original-labeled titles may be screened by local television channels in their respective countries, provided that the networks' local rights for such titles are for linear television broadcast only in their respective country jurisdiction. Availability Viu is accessible via its website, and through its dedicated application on Android, iOS and Huawei OS services, as well as through select smart TV devices. Users can also save and download select episodes of shows, depending on their desired plan, which determines the maximum number of downloads allowed per user at a given time. As mentioned above, Viu operates a dual-revenue system, allowing users to access its content library without restrictions. While both free and premium users can watch its content without additional charges, subscribers can watch shows and movie titles in 720p and 1080p high definition, as well as without advertisements. See also Viaplay Showmax PCCW MOOV ViuTV Vuclip List of streaming media services Notes References External links Viu (streaming media) Subscription video on demand services Streaming media systems Streaming television Internet television streaming services Internet properties established in 2015 2015 establishments in Hong Kong
Viu (streaming service)
Technology
1,147
18,786,805
https://en.wikipedia.org/wiki/Lyons%20Groups%20of%20Galaxies
Lyons Groups of Galaxies (or LGG) is an astronomical catalogue of nearby groups of galaxies, complete to a limiting apparent magnitude B0 = 14.0 and restricted to galaxies with a recession velocity smaller than 5,500 km/s. The catalogue was compiled from the Lyon-Meudon Extragalactic Database. Two methods were used in group construction: a percolation ("friends-of-friends") method derived from Huchra and Geller, and a hierarchical method initiated by R. Brent Tully. The catalogue is a synthesis of the two sets of results. The LGG includes 485 groups and 3,933 member galaxies. See also Abell catalogue New General Catalogue Messier Catalogue References External links LGG description at CDS Full LGG list of records at CDS ADS: General study of group membership. II - Determination of nearby groups Astronomical catalogues of galaxy clusters
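The percolation (friends-of-friends) approach mentioned above can be illustrated with a short, generic sketch. This is a toy implementation only, not the LGG pipeline: the mock positions, linking length and units are invented for illustration.

```python
import numpy as np

# Toy friends-of-friends (percolation) grouping in the spirit of Huchra & Geller.
# All numbers below (positions, linking length) are invented for illustration.
rng = np.random.default_rng(42)
pos = rng.uniform(0, 100, size=(200, 3))   # mock galaxy positions, in Mpc
link = 5.0                                 # linking length, in Mpc

n = len(pos)
group = -np.ones(n, dtype=int)             # -1 means "not yet assigned"
gid = 0
for i in range(n):
    if group[i] >= 0:
        continue
    group[i] = gid
    stack = [i]
    while stack:                           # grow the group by percolation
        j = stack.pop()
        d = np.linalg.norm(pos - pos[j], axis=1)
        friends = np.where((d < link) & (group < 0))[0]
        group[friends] = gid
        stack.extend(friends.tolist())
    gid += 1

print(f"{gid} groups found among {n} mock galaxies")
```

Any pair of galaxies closer than the linking length ends up in the same group, so chains of neighbours "percolate" into a single group, which is the defining behaviour of the method.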
Lyons Groups of Galaxies
Astronomy
166
3,019,112
https://en.wikipedia.org/wiki/Trifluoroacetic%20acid
Trifluoroacetic acid (TFA) is a synthetic organofluorine compound with the chemical formula CF3CO2H. It is a haloacetic acid, with all three of the acetyl group's hydrogen atoms replaced by fluorine atoms. It is a colorless liquid with a vinegar-like odor. TFA is a stronger acid than acetic acid, having an acid ionisation constant, Ka, that is approximately 34,000 times higher, as the highly electronegative fluorine atoms and consequent electron-withdrawing nature of the trifluoromethyl group weaken the oxygen-hydrogen bond (allowing for greater acidity) and stabilise the anionic conjugate base. TFA is commonly used in organic chemistry for various purposes. Synthesis TFA is prepared industrially by the electrofluorination of acetyl chloride or acetic anhydride, followed by hydrolysis of the resulting trifluoroacetyl fluoride: CH3COCl + 4 HF → CF3COF + 3 H2 + HCl, then CF3COF + H2O → CF3CO2H + HF. Where desired, the acid may be dried by addition of trifluoroacetic anhydride. An older route to TFA proceeds via the oxidation of 1,1,1-trifluoro-2,3,3-trichloropropene with potassium permanganate. The trifluorotrichloropropene can be prepared by Swarts fluorination of hexachloropropene. Uses TFA is the precursor to many other fluorinated compounds such as trifluoroacetic anhydride, trifluoroperacetic acid, and 2,2,2-trifluoroethanol. It is a reagent used in organic synthesis because of a combination of convenient properties: volatility, solubility in organic solvents, and its strength as an acid. TFA is also less oxidizing than sulfuric acid but more readily available in anhydrous form than many other acids. One complication to its use is that TFA forms an azeotrope with water (b.p. 105 °C). TFA is used as a strong acid to remove protecting groups such as Boc in organic chemistry and peptide synthesis. At low concentration, TFA is used as an ion-pairing agent in liquid chromatography (HPLC) of organic compounds, particularly peptides and small proteins. TFA is a versatile solvent for NMR spectroscopy (for materials stable in acid). It is also used as a calibrant in mass spectrometry. TFA is used to produce trifluoroacetate salts. Safety Trifluoroacetic acid is a strong acid. TFA is harmful when inhaled, causes severe skin burns and is toxic to aquatic organisms even at low concentrations. Skin burns are severe, heal poorly and can be necrotic. The vapour has an LC50 of 10.01 mg/L, tested on rats over 4 hours. Inhalation symptoms include mucous membrane irritation, coughing, shortness of breath and possible formation of oedemas in the respiratory tract. Exposure damages the kidneys. Environment Although trifluoroacetic acid is not produced biologically or abiotically, it is a metabolic breakdown product of the volatile anesthetic agent halothane. It is also thought to be responsible for halothane-induced hepatitis. It may also be formed by photooxidation of the commonly used refrigerant 1,1,1,2-tetrafluoroethane (R-134a). Moreover, it is formed as an atmospheric degradation product of almost all fourth-generation synthetic refrigerants, also called hydrofluoroolefins (HFO), such as 2,3,3,3-tetrafluoropropene. Trifluoroacetic acid degrades very slowly in the environment and has been found in increasing amounts as a contaminant in water, soil, food, and the human body. Median concentrations of a few micrograms per liter have been found in beer and tea. Seawater can contain about 200 ng of TFA per liter. Biotransformation by decarboxylation to fluoroform has been discussed. Trifluoroacetic acid is mildly phytotoxic.
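As a rough check on the acidity comparison made in the introduction, the stated Ka ratio can be converted into a pKa estimate; the acetic acid pKa used here is a standard literature value assumed for illustration, not a figure from this article.

```python
import math

pKa_acetic = 4.76            # literature pKa of acetic acid (assumed here)
ratio = 34_000               # Ka(TFA) / Ka(acetic acid), as stated above
pKa_tfa = pKa_acetic - math.log10(ratio)
print(f"estimated pKa of TFA ~ {pKa_tfa:.2f}")   # about 0.2, i.e. a much stronger acid
```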
See also Fluoroacetic acidhighly toxic but naturally occurring rodenticide CH2FCOOH Difluoroacetic acid Trichloroacetic acid, the chlorinated analog Trifluoroacetone – also abbreviated TFA References Perfluorocarboxylic acids Reagents for organic chemistry Organic compounds with 2 carbon atoms
Trifluoroacetic acid
Chemistry
991
46,999
https://en.wikipedia.org/wiki/Buffer%20solution
A buffer solution is a solution where the pH does not change significantly on dilution or if an acid or base is added at constant temperature. Its pH changes very little when a small amount of strong acid or base is added to it. Buffer solutions are used as a means of keeping pH at a nearly constant value in a wide variety of chemical applications. In nature, there are many living systems that use buffering for pH regulation. For example, the bicarbonate buffering system is used to regulate the pH of blood, and bicarbonate also acts as a buffer in the ocean. Principles of buffering Buffer solutions resist pH change because of a chemical equilibrium between the weak acid HA and its conjugate base A−: HA ⇌ H+ + A−. When some strong acid is added to an equilibrium mixture of the weak acid and its conjugate base, hydrogen ions (H+) are added, and the equilibrium is shifted to the left, in accordance with Le Chatelier's principle. Because of this, the hydrogen ion concentration increases by less than the amount expected for the quantity of strong acid added. Similarly, if strong alkali is added to the mixture, the hydrogen ion concentration decreases by less than the amount expected for the quantity of alkali added. In Figure 1, the effect is illustrated by the simulated titration of a weak acid with pKa = 4.7. The relative concentration of undissociated acid is shown in blue, and of its conjugate base in red. The pH changes relatively slowly in the buffer region, pH = pKa ± 1, centered at pH = 4.7, where [HA] = [A−]. The hydrogen ion concentration decreases by less than the amount expected because most of the added hydroxide ion is consumed in the reaction HA + OH− → A− + H2O, and only a little is consumed in the neutralization reaction H+ + OH− → H2O (which is the reaction that results in an increase in pH). Once the acid is more than 95% deprotonated, the pH rises rapidly because most of the added alkali is consumed in the neutralization reaction. Buffer capacity Buffer capacity is a quantitative measure of the resistance to change of pH of a solution containing a buffering agent with respect to a change of acid or alkali concentration. It can be defined as β = dn/d(pH), where dn is an infinitesimal amount of added base, or β = −dn/d(pH), where dn is an infinitesimal amount of added acid. pH is defined as −log10[H+], and d(pH) is an infinitesimal change in pH. With either definition the buffer capacity for a weak acid HA with dissociation constant Ka can be expressed as β = 2.303 ([H+] + CA Ka [H+]/(Ka + [H+])² + Kw/[H+]), where [H+] is the concentration of hydrogen ions, and CA is the total concentration of added acid. Kw is the equilibrium constant for self-ionization of water, equal to 1.0 × 10−14 at 25 °C. Note that in solution H+ exists as the hydronium ion H3O+, and further aquation of the hydronium ion has negligible effect on the dissociation equilibrium, except at very high acid concentration. This equation shows that there are three regions of raised buffer capacity (see figure 2). In the central region of the curve (coloured green on the plot), the second term is dominant, and β ≈ 2.303 CA Ka [H+]/(Ka + [H+])². Buffer capacity rises to a local maximum at pH = pKa. The height of this peak depends on the value of pKa. Buffer capacity is negligible when the concentration [HA] of buffering agent is very small and increases with increasing concentration of the buffering agent. Some authors show only this region in graphs of buffer capacity. Buffer capacity falls to 33% of the maximum value at pH = pKa ± 1, to 10% at pH = pKa ± 1.5 and to 1% at pH = pKa ± 2. For this reason the most useful range is approximately pKa ± 1.
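As a numerical illustration of the buffer-capacity expression quoted above, the following minimal sketch evaluates it over the pH scale for a weak acid HA; the concentration and pKa are illustrative choices, not values taken from the article.

```python
import numpy as np

# Buffer capacity beta = 2.303*([H+] + C_A*Ka*[H+]/(Ka+[H+])**2 + Kw/[H+]).
Kw = 1.0e-14            # ion product of water at 25 degrees C
Ka = 10**-4.76          # e.g. an acetic-acid-like buffering agent (assumed)
C_A = 0.10              # total concentration of the buffering agent, mol/L (assumed)

pH = np.linspace(0, 14, 1401)
H = 10.0**(-pH)
beta = np.log(10) * (H + C_A * Ka * H / (Ka + H)**2 + Kw / H)

# The contribution from the buffering agent alone peaks at pH = pKa:
agent_term = np.log(10) * C_A * Ka * H / (Ka + H)**2
print(f"agent term peaks at pH ~ {pH[np.argmax(agent_term)]:.2f}")   # ~ 4.76
```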
When choosing a buffer for use at a specific pH, it should have a pKa value as close as possible to that pH. With strongly acidic solutions, pH less than about 2 (coloured red on the plot), the first term in the equation dominates, and buffer capacity rises exponentially with decreasing pH: β ≈ 2.303 [H+]. This results from the fact that the second and third terms become negligible at very low pH. This term is independent of the presence or absence of a buffering agent. With strongly alkaline solutions, pH more than about 12 (coloured blue on the plot), the third term in the equation dominates, and buffer capacity rises exponentially with increasing pH: β ≈ 2.303 Kw/[H+]. This results from the fact that the first and second terms become negligible at very high pH. This term is also independent of the presence or absence of a buffering agent. Applications of buffers The pH of a solution containing a buffering agent can only vary within a narrow range, regardless of what else may be present in the solution. In biological systems this is an essential condition for enzymes to function correctly. For example, in human blood a mixture of carbonic acid (H2CO3) and bicarbonate (HCO3−) is present in the plasma fraction; this constitutes the major mechanism for maintaining the pH of blood between 7.35 and 7.45. Outside this narrow range (7.40 ± 0.05 pH unit), metabolic conditions of acidosis and alkalosis rapidly develop, ultimately leading to death if the correct buffering capacity is not rapidly restored. If the pH value of a solution rises or falls too much, the effectiveness of an enzyme decreases in a process known as denaturation, which is usually irreversible. The majority of biological samples that are used in research are kept in a buffer solution, often phosphate buffered saline (PBS) at pH 7.4. In industry, buffering agents are used in fermentation processes and in setting the correct conditions for dyes used in colouring fabrics. They are also used in chemical analysis and calibration of pH meters. Simple buffering agents {| class="wikitable" ! Buffering agent !! pKa !! Useful pH range |- | Citric acid || 3.13, 4.76, 6.40 || 2.1–7.4 |- | Acetic acid || 4.8 || 3.8–5.8 |- | KH2PO4 || 7.2 || 6.2–8.2 |- | CHES || 9.3 || 8.3–10.3 |- | Borate || 9.24 || 8.25–10.25 |} For buffers in acid regions, the pH may be adjusted to a desired value by adding a strong acid such as hydrochloric acid to the particular buffering agent. For alkaline buffers, a strong base such as sodium hydroxide may be added. Alternatively, a buffer mixture can be made from a mixture of an acid and its conjugate base. For example, an acetate buffer can be made from a mixture of acetic acid and sodium acetate. Similarly, an alkaline buffer can be made from a mixture of the base and its conjugate acid. "Universal" buffer mixtures By combining substances with pKa values differing by only two or less and adjusting the pH, a wide range of buffers can be obtained. Citric acid is a useful component of a buffer mixture because it has three pKa values, separated by less than two. The buffer range can be extended by adding other buffering agents. The following mixtures (McIlvaine's buffer solutions) have a buffer range of pH 3 to 8. {| class="wikitable" ! 0.2 M Na2HPO4 (mL) ! 0.1 M citric acid (mL) !
pH |- | 20.55 | 79.45 | style="background:#ff0000; color:white" | 3.0 |- | 38.55 | 61.45 | style="background:#ff7777; color:white" |4.0 |- | 51.50 | 48.50 | style="background:#ff7700;" | 5.0 |- | 63.15 | 36.85 | style="background:#ffff00;" |6.0 |- | 82.35 | 17.65 | style="background:#007777; color:white" | 7.0 |- | 97.25 | 2.75 |style="background:#0077ff; color:white" | 8.0 |} A mixture containing citric acid, monopotassium phosphate, boric acid, and diethyl barbituric acid can be made to cover the pH range 2.6 to 12. Other universal buffers are the Carmody buffer and the Britton–Robinson buffer, developed in 1931. Common buffer compounds used in biology For effective range see Buffer capacity, above. Also see Good's buffers for the historic design principles and favourable properties of these buffer substances in biochemical applications. Calculating buffer pH Monoprotic acids First write down the equilibrium expression This shows that when the acid dissociates, equal amounts of hydrogen ion and anion are produced. The equilibrium concentrations of these three components can be calculated in an ICE table (ICE standing for "initial, change, equilibrium"). {| class="wikitable" |+ ICE table for a monoprotic acid |- ! ! [HA] !! [A−] !! [H+] |- ! I | C0 || 0 || y |- ! C | −x || x || x |- ! E | C0 − x || x || x + y |} The first row, labelled I, lists the initial conditions: the concentration of acid is C0, initially undissociated, so the concentrations of A− and H+ would be zero; y is the initial concentration of added strong acid, such as hydrochloric acid. If strong alkali, such as sodium hydroxide, is added, then y will have a negative sign because alkali removes hydrogen ions from the solution. The second row, labelled C for "change", specifies the changes that occur when the acid dissociates. The acid concentration decreases by an amount −x, and the concentrations of A− and H+ both increase by an amount +x. This follows from the equilibrium expression. The third row, labelled E for "equilibrium", adds together the first two rows and shows the concentrations at equilibrium. To find x, use the formula for the equilibrium constant in terms of concentrations: Substitute the concentrations with the values found in the last row of the ICE table: Simplify to With specific values for C0, Ka and y, this equation can be solved for x. Assuming that pH = −log10[H+], the pH can be calculated as pH = −log10(x + y). Polyprotic acids Polyprotic acids are acids that can lose more than one proton. The constant for dissociation of the first proton may be denoted as Ka1, and the constants for dissociation of successive protons as Ka2, etc. Citric acid is an example of a polyprotic acid H3A, as it can lose three protons. {| class="wikitable" style="width: 230px; |+ Stepwise dissociation constants |- ! |Equilibrium!!Citric acid |- | H3A H2A− + H+||pKa1 = 3.13 |- | H2A− HA2− + H+|| pKa2 = 4.76 |- | HA2− A3− + H+|| pKa3 = 6.40 |} When the difference between successive pKa values is less than about 3, there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. In the case of citric acid, the overlap is extensive and solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5. Calculation of the pH with a polyprotic acid requires a speciation calculation to be performed. 
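Before turning to the citric acid case below, the simpler monoprotic calculation derived from the ICE table above can be checked numerically: the equilibrium expression Ka = x(x + y)/(C0 − x) rearranges to a quadratic in x. The numbers used here are illustrative, not taken from the article.

```python
import math

# Monoprotic ICE-table calculation sketched above: solve
#   x**2 + (Ka + y)*x - Ka*C0 = 0   for the positive root x.
Ka = 10**-4.76      # dissociation constant of the weak acid (illustrative)
C0 = 0.10           # initial concentration of undissociated acid, mol/L
y = -0.05           # added strong base, so y is negative as in the ICE table

b, c = Ka + y, -Ka * C0
x = (-b + math.sqrt(b**2 - 4 * c)) / 2    # positive root of the quadratic
pH = -math.log10(x + y)
print(f"pH ~ {pH:.2f}")    # close to pKa, since the acid is half-neutralised
```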
In the case of citric acid, this entails the solution of the two equations of mass balance: CA is the analytical concentration of the acid, CH is the analytical concentration of added hydrogen ions, βq are the cumulative association constants. Kw is the constant for self-ionization of water. There are two non-linear simultaneous equations in two unknown quantities [A3−] and [H+]. Many computer programs are available to do this calculation. The speciation diagram for citric acid was produced with the program HySS. N.B. The numbering of cumulative, overall constants is the reverse of the numbering of the stepwise, dissociation constants. {| class="wikitable" |+ Relationship between cumulative association constant (β) values and stepwise dissociation constant (K) values for a tribasic acid. ! Equilibrium!! Relationship |- | A3− + H+ AH2+||Log β1= pka3 |- | A3− + 2H+ AH2+||Log β2 =pka2 + pka3 |- | A3− + 3H+ AH3||Log β3 = pka1 + pka2 + pka3 |} Cumulative association constants are used in general-purpose computer programs such as the one used to obtain the speciation diagram above. See also Henderson–Hasselbalch equation Good's buffers Common-ion effect Metal ion buffer Mineral redox buffer References External links Acid–base chemistry Acid–base physiology Equilibrium chemistry
Buffer solution
Chemistry
2,859
66,568,074
https://en.wikipedia.org/wiki/2nd%20Engineer%20Brigade%20%28United%20States%29
The 2nd Engineer Brigade was a military engineering brigade of the United States Army, that was subordinate to United States Army Alaska and had its headquarters at Fort Richardson, Alaska, prior to deactivation in 2015. History World War II The 2nd Engineer Amphibian Brigade was activated at Camp Edwards on 20 June 1942, with the 532nd Engineer Shore Regiment and 592nd Engineer Boat Regiment assigned. Colonel William F. Heavey, who was appointed its commander on 6 August 1942, and was promoted to brigadier general on 10 September, led the brigade for the rest of the war. It quickly expanded to 6,000 men, but lost 1,500 in September to the 540th Shore Regiment. On 1 October, the brigade was reorganized; the 532nd and 592nd became engineer amphibian regiments and the 542nd Engineer Amphibian Regiment was formed. The brigade, less the 542nd Engineer Amphibian Regiment, moved by rail to Camp Carrabelle on 15 October. On 7 November, the brigade moved to Fort Ord, California, where it was joined by the 542nd Engineer Amphibian Regiment the following day. In January and February 1943, the brigade embarked from the San Francisco Port of Embarkation on a series of vessels bound for Australia. In Australia, the brigade was based at Cairns, although its headquarters was co-located with that of I Corps in Rockhampton, away. The brigade helped the 411th Base Shop Battalion establish a landing craft construction facility, which produced its first finished LCVP on 7 April. In May, elements of the brigade began moving to New Guinea. A detachment of ten LCMs of the 592nd Engineer Amphibian Regiment went to Port Moresby, where it moved supplies to the Lakekamu River. They were followed by detachments of the 532nd and 542nd, which moved to Milne Bay, Oro Bay and Samarai. On 30 June, the brigade participated in its first amphibious operation, the landing at Nassau Bay. On 4 July, the brigade was renamed the 2nd Engineer Special Brigade, and its three regiments became engineer boat and shore regiments. The 2nd Engineer Special Brigade trained at Cairns with the Australian 9th Division in June and July 1943. The 532nd Engineer Boat and Shore Regiment then moved to New Guinea, and landed part of the 9th Division at Red Beach near Lae on 4 September. On 22 September, it landed elements of the 9th Division at Scarlet Beach near Finschhafen. On 11 October, four Japanese barges attempted to land on Scarlet Beach. They were defeated by men of the 532nd Engineer Boat and Shore Regiment, including Private Junior Van Noy, who was posthumously awarded the Medal of Honor. Over the next few months, units of the 2nd Engineer Special Brigade participated in the landings at Arawe, Long Island, Saidor, Sio, Los Negros, Talasea, Hollandia, Wakde and Biak. On 20 October 1944 it participated in the amphibious assault on Leyte in the Philippines. Over the following months it participated in a series of amphibious operations to liberate the Philippines. Nine of the 2nd Engineer Special Brigade's units were awarded Presidential Unit Citations. 
Organization Brigade Headquarters 532nd Engineer Boat and Shore Regiment 542nd Engineer Boat and Shore Regiment 592nd Engineer Boat and Shore Regiment 562nd Engineer Boat Maintenance Battalion 1458th Engineer Maintenance Company 1459th Engineer Maintenance Company 1460th Engineer Maintenance Company 1570th Engineer Heavy Equipment Shop Company 1762nd Engineer Parts Supply Platoon 262nd Medical Battalion 162nd Ordnance Maintenance Company 189th Quartermaster Gas Supply Company 287th Signal Company 695th Truck Company 3498th Ordnance Medium Maintenance Company 5204th Transportation Corps Amphibious Truck Company Medical Detachment, 2nd Engineer Special Brigade Support Battery (Provisional) 2nd Engineer Special Brigade 416th Army Service Forces Band Korean War The 2nd Engineer Special Brigade arrived back in San Francisco on 16 December 1945, and returned to Fort Ord. It later moved to Fort Worden, Washington, where it was stationed when the Korean War broke out in June 1950. The brigade moved to Yokohama, Japan, and participated in the landing at Inchon in September 1950. Afterwards it operated the ports of Suyong and Ulsan. The brigade was redesignated as the 2nd Amphibious Support Brigade on 26 June 1952. In December 1953 it moved to Camp McGill in Japan, where it was inactivated on 24 June 1955. The brigade was reactivated at Fort Belvoir, Virginia as the 2nd Amphibious Support Command, on 13 November 1956, and inactivated at Fort Story, Virginia, on 25 August 1965. Afghanistan The brigade was reactivated as the 2nd Engineer Brigade from the 3rd Maneuver Enhancement Brigade, on 16 September 2011. Although no longer an amphibian brigade, it wore the World War II-era seahorse emblem until inactivated there on 15 May 2015. Structure in 2011 2nd Engineer Brigade 17th Combat Sustainment Support Battalion 6th Engineer Battalion 793rd Military Police Battalion 9th Army Band United States Army Alaska NCO Academy References Bibliography Engineer Brigades of the United States Army United States Army Corps of Engineers Military units and formations established in 1942
2nd Engineer Brigade (United States)
Engineering
1,030
1,277,926
https://en.wikipedia.org/wiki/Pacific%20decadal%20oscillation
The Pacific decadal oscillation (PDO) is a robust, recurring pattern of ocean-atmosphere climate variability centered over the mid-latitude Pacific basin. The PDO is detected as warm or cool surface waters in the Pacific Ocean, north of 20°N. Over the past century, the amplitude of this climate pattern has varied irregularly at interannual-to-interdecadal time scales (meaning time periods of a few years to as much as time periods of multiple decades). There is evidence of reversals in the prevailing polarity (meaning changes in cool surface waters versus warm surface waters within the region) of the oscillation occurring around 1925, 1947, and 1977; the last two reversals corresponded with dramatic shifts in salmon production regimes in the North Pacific Ocean. This climate pattern also affects coastal sea and continental surface air temperatures from Alaska to California. During a "warm", or "positive", phase, the west Pacific becomes cooler and part of the eastern ocean warms; during a "cool", or "negative", phase, the opposite pattern occurs. The Pacific decadal oscillation was named by Steven R. Hare, who noticed it while studying salmon production pattern results in 1997. The Pacific decadal oscillation index is the leading empirical orthogonal function (EOF) of monthly sea surface temperature anomalies (SST-A) over the North Pacific (poleward of 20°N) after the global average sea surface temperature has been removed. This PDO index is the standardized principal component time series. A PDO 'signal' has been reconstructed as far back as 1661 through tree-ring chronologies in the Baja California area. Mechanisms Several studies have indicated that the PDO index can be reconstructed as the superimposition of tropical forcing and extra-tropical processes. Thus, unlike El Niño–Southern Oscillation (ENSO), the PDO is not a single physical mode of ocean variability, but rather the sum of several processes with different dynamic origins. At inter-annual time scales the PDO index is reconstructed as the sum of random and ENSO induced variability in the Aleutian Low, whereas on decadal timescales ENSO teleconnections, stochastic atmospheric forcing and changes in the North Pacific oceanic gyre circulation contribute approximately equally. Additionally sea surface temperature anomalies have some winter to winter persistence due to the reemergence mechanism. ENSO teleconnections, the atmospheric bridge ENSO can influence the global circulation pattern thousands of kilometers away from the equatorial Pacific through the "atmospheric bridge". During El Niño events, deep convection and heat transfer to the troposphere is enhanced over the anomalously warm sea surface temperature, this ENSO-related tropical forcing generates Rossby waves that propagate poleward and eastward and are subsequently refracted back from the pole to the tropics. The planetary waves form at preferred locations both in the North and South Pacific Ocean, and the teleconnection pattern is established within 2–6 weeks. ENSO driven patterns modify surface temperature, humidity, wind, and the distribution of clouds over the North Pacific that alter surface heat, momentum, and freshwater fluxes and thus induce sea surface temperature, salinity, and mixed layer depth (MLD) anomalies. 
The atmospheric bridge is more effective during boreal winter when the deepened Aleutian Low results in stronger and cold northwesterly winds over the central Pacific and warm/humid southerly winds along the North American west coast, the associated changes in the surface heat fluxes and to a lesser extent Ekman transport creates negative sea surface temperature anomalies and a deepened MLD in the central Pacific and warm the ocean from the Hawaii to the Bering Sea. SST reemergence Midlatitude SST anomaly patterns tend to recur from one winter to the next but not during the intervening summer, this process occurs because of the strong mixed layer seasonal cycle. The mixed layer depth over the North Pacific is deeper, typically 100-200m, in winter than it is in summer and thus SST anomalies that form during winter and extend to the base of the mixed layer are sequestered beneath the shallow summer mixed layer when it reforms in late spring and are effectively insulated from the air-sea heat flux. When the mixed layer deepens again in the following autumn/early winter the anomalies may again influence the surface. This process has been named "reemergence mechanism" by Alexander and Deser and is observed over much of the North Pacific Ocean although it is more effective in the west where the winter mixed layer is deeper and the seasonal cycle greater. Stochastic atmospheric forcing Long term sea surface temperature variation may be induced by random atmospheric forcings that are integrated and reddened into the ocean mixed layer. The stochastic climate model paradigm was proposed by Frankignoul and Hasselmann, in this model a stochastic forcing represented by the passage of storms alter the ocean mixed layer temperature via surface energy fluxes and Ekman currents and the system is damped due to the enhanced (reduced) heat loss to the atmosphere over the anomalously warm (cold) SST via turbulent energy and longwave radiative fluxes, in the simple case of a linear negative feedback the model can be written as the separable ordinary differential equation: where v is the random atmospheric forcing, λ is the damping rate (positive and constant) and y is the response. The variance spectrum of y is: where F is the variance of the white noise forcing and w is the frequency, an implication of this equation is that at short time scales (w>>λ) the variance of the ocean temperature increase with the square of the period while at longer timescales(w<<λ, ~150 months) the damping process dominates and limits sea surface temperature anomalies so that the spectra became white. Thus an atmospheric white noise generates SST anomalies at much longer timescales but without spectral peaks. Modeling studies suggest that this process contribute to as much as 1/3 of the PDO variability at decadal timescales. Ocean dynamics Several dynamic oceanic mechanisms and SST-air feedback may contribute to the observed decadal variability in the North Pacific Ocean. SST variability is stronger in the Kuroshio Oyashio extension (KOE) region and is associated with changes in the KOE axis and strength, that generates decadal and longer time scales SST variance but without the observed magnitude of the spectral peak at ~10 years, and SST-air feedback. Remote reemergence occurs in regions of strong current such as the Kuroshio extension and the anomalies created near the Japan may reemerge the next winter in the central pacific. 
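The stochastic (Frankignoul and Hasselmann) model described in the "Stochastic atmospheric forcing" subsection above can be illustrated with a minimal discrete simulation: white-noise atmospheric forcing integrated by a damped ocean mixed layer produces a red spectrum. The damping rate, time step and record length below are illustrative assumptions, not values from the article.

```python
import numpy as np

# Discretised form of dy/dt = v - lambda*y with white-noise forcing v.
rng = np.random.default_rng(0)
n_steps = 12 * 500            # 500 years of monthly steps (assumed)
lam = 1.0 / 60.0              # damping rate, roughly (5 yr)^-1 per month (assumed)

v = rng.normal(size=n_steps)  # white-noise "weather" forcing
y = np.zeros(n_steps)         # mixed-layer temperature anomaly response
for t in range(1, n_steps):
    y[t] = y[t - 1] + v[t - 1] - lam * y[t - 1]

# The response carries far more variance at low frequencies than the forcing,
# i.e. the flat (white) forcing spectrum is reddened by the ocean.
spec_v = np.abs(np.fft.rfft(v))**2
spec_y = np.abs(np.fft.rfft(y))**2
print("low/high frequency variance ratio, forcing :", round(spec_v[1:20].mean() / spec_v[-20:].mean(), 1))
print("low/high frequency variance ratio, response:", round(spec_y[1:20].mean() / spec_y[-20:].mean(), 1))
```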
Advective resonance Saravanan and McWilliams have demonstrated that the interaction between spatially coherent atmospheric forcing patterns and an advective ocean shows periodicities at preferred time scales when non-local advective effects dominate over the local sea surface temperature damping. This "advective resonance" mechanism may generate decadal SST variability in the Eastern North Pacific associated with the anomalous Ekman advection and surface heat flux. North Pacific oceanic gyre circulation Dynamic gyre adjustments are essential to generate decadal SST peaks in the North Pacific, the process occurs via westward propagating oceanic Rossby waves that are forced by wind anomalies in the central and eastern Pacific Ocean. The quasi-geostrophic equation for long non-dispersive Rossby Waves forced by large scale wind stress can be written as the linear partial differential equation: where h is the upper-layer thickness anomaly, τ is the wind stress, c is the Rossby wave speed that depends on latitude, ρ0 is the density of sea water and f0 is the Coriolis parameter at a reference latitude. The response time scale is set by the Rossby waves speed, the location of the wind forcing and the basin width, at the latitude of the Kuroshio Extension c is 2.5 cm s−1 and the dynamic gyre adjustment timescale is ~(5)10 years if the Rossby wave was initiated in the (central)eastern Pacific Ocean. If the wind white forcing is zonally uniform it should generate a red spectrum in which h variance increases with the period and reaches a constant amplitude at lower frequencies without decadal and interdecadal peaks, however low frequencies atmospheric circulation tends to be dominated by fixed spatial patterns so that wind forcing is not zonally uniform, if the wind forcing is zonally sinusoidal then decadal peaks occurs due to resonance of the forced basin-scale Rossby waves. The propagation of h anomalies in the western pacific changes the KOE axis and strength and impact SST due to the anomalous geostrophic heat transport. Recent studies suggest that Rossby waves excited by the Aleutian low propagate the PDO signal from the North Pacific to the KOE through changes in the KOE axis while Rossby waves associated with the NPO propagate the North Pacific Gyre oscillation signal through changes in the KOE strength. Impacts Temperature and precipitation The PDO spatial pattern and impacts are similar to those associated with ENSO events. During the positive phase the wintertime Aleutian Low is deepened and shifted southward, warm/humid air is advected along the North American west coast and temperatures are higher than usual from the Pacific Northwest to Alaska but below normal in Mexico and the Southeastern United States. Winter precipitation is higher than usual in the Alaska Coast Range, Mexico and the Southwestern United States but reduced over Canada, Eastern Siberia and Australia McCabe et al. showed that the PDO along with the AMO strongly influence multidecadal droughts pattern in the United States, drought frequency is enhanced over much of the Northern United States during the positive PDO phase and over the Southwest United States during the negative PDO phase in both cases if the PDO is associated with a positive AMO. The Asian Monsoon is also affected, increased rainfall and decreased summer temperature is observed over the Indian subcontinent during the negative phase. 
Reconstructions and regime shifts The PDO index has been reconstructed using tree rings and other hydrologically sensitive proxies from west North America and Asia. MacDonald and Case reconstructed the PDO back to 993 using tree rings from California and Alberta. The index shows a 50–70 year periodicity but is a strong mode of variability only after 1800, a persistent negative phase occurring during medieval times (993–1300) which is consistent with La Niña conditions reconstructed in the tropical Pacific and multi-century droughts in the South-West United States. Several regime shifts are apparent both in the reconstructions and instrumental data, during the 20th century regime shifts associated with concurrent changes in SST, SLP, land precipitation and ocean cloud cover occurred in 1924/1925, 1945/1946, and 1976/1977: 1750: PDO displays an unusually strong oscillation. 1924/1925: PDO changed to a "warm" phase. 1945/1946: The PDO changed to a "cool" phase, the pattern of this regime shift is similar to the 1970s episode with maximum amplitude in the subarctic and subtropical front but with a greater signature near the Japan while the 1970s shift was stronger near the American west coast. 1976/1977: PDO changed to a "warm" phase. 1988/1989: A weakening of the Aleutian low with associated SST changes was observed, in contrast to others regime shifts this change appears to be related to concurrent extratropical oscillation in the North Pacific and North Atlantic rather than tropical processes. 1997/1998: Several changes in sea surface temperature and marine ecosystem occurred in the North Pacific after 1997/1998, in contrast to prevailing anomalies observed after the 1970s shift. The SST declined along the United States west coast and substantial changes in the populations of salmon, anchovy and sardine were observed as the PDO changed back to a cool "anchovy" phase. However the spatial pattern of the SST change was different with a meridional SST seesaw in the central and western Pacific that resembled a strong shift in the North Pacific Gyre Oscillation rather than the PDO structure. This pattern dominated much of the North Pacific SST variability after 1989. The 2014 flip from the cool PDO phase to the warm phase, which vaguely resembles a long and drawn-out El Niño event, contributed to record-breaking surface temperatures across the planet in 2014. Predictability The NOAA Earth System Research Laboratory produces official ENSO forecasts, and Experimental statistical forecasts using a linear inverse modeling (LIM) method to predict the PDO, LIM assumes that the PDO can be separated into a linear deterministic component and a non-linear component represented by random fluctuations. Much of the LIM PDO predictability arises from ENSO and the global trend rather than extra-tropical processes and is thus limited to ~4 seasons. The prediction is consistent with the seasonal footprinting mechanism in which an optimal SST structure evolves into the ENSO mature phase 6–10 months later that subsequently impacts the North Pacific Ocean SST via the atmospheric bridge. Skills in predicting decadal PDO variability could arise from taking into account the impact of the externally forced and internally generated Pacific variability. Related patterns The interdecadal Pacific oscillation (IPO) is a similar but less localised phenomenon; it covers the Southern hemisphere as well (50°S to 50°N). ENSO tends to lead PDO cycling. Shifts in the IPO change the location and strength of ENSO activity. 
The South Pacific convergence zone moves northeast during El Niño and southwest during La Niña events. The same movement takes place during positive IPO and negative IPO phases respectively. (Folland et al., 2002) Interdecadal temperature variations in China are closely related to those of the NAO and the NPO. The amplitudes of the NAO and NPO increased in the 1960s and interannual variation patterns changed from 3–4 years to 8–15 years. Sea level rise is affected when large areas of water warm and expand, or cool and contract. See also California Current Hadley cell Ocean heat content Pacific–North American teleconnection pattern North Pacific Oscillation North Atlantic oscillation Atlantic multidecadal oscillation (AMO) El Niño Southern Oscillation (ENSO) Pacific Meridional Mode (PMM) References Further reading External links Regional climate effects Physical oceanography Pacific Ocean Climate oscillations
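The PDO index introduced at the start of this article is defined as the leading empirical orthogonal function (EOF) of North Pacific SST anomalies. The following sketch shows the generic EOF/principal-component computation on synthetic data; a real analysis would use observed, area-weighted North Pacific SST anomalies with the global-mean signal removed, which is not attempted here.

```python
import numpy as np

# Generic leading-EOF computation on a synthetic (time x grid) anomaly field.
rng = np.random.default_rng(1)
n_months, n_grid = 600, 400
sst_anom = rng.normal(size=(n_months, n_grid))     # stand-in for SST anomalies

sst_anom -= sst_anom.mean(axis=0)                  # remove the time mean at each grid point
u, s, vt = np.linalg.svd(sst_anom, full_matrices=False)
eof1 = vt[0]                                       # leading spatial pattern (EOF 1)
pc1 = sst_anom @ eof1                              # its time series (principal component)
index = (pc1 - pc1.mean()) / pc1.std()             # standardised, PDO-index-style series
print(index[:5].round(2))
```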
Pacific decadal oscillation
Physics
3,013
57,662,947
https://en.wikipedia.org/wiki/Rocketry%20SA
Rocketry SA is the official voice and controlling body for all aspects of non-commercial and non-governmental rocketry in South Africa. The organization is registered as a non-profit organization in South Africa. Rocketry SA promotes model rocketry, high-power rocketry, amateur rocketry, and aerospace modelling. History The organization was established in 2003, under the name South African Amateur Space Association (SAASA). During the 2013 Annual General Meeting of the Association, members voted in favour of a name change. For practical reasons, however, and in line with the legal requirements of registering a non-profit organization in South Africa, the name was only officially changed in 2017 to "Rocketry SA". Organizational structure Rocketry SA functions as a member-based, non-profit organization. In line with South African law, the organization appoints a CEO (chief executive officer), supported by an EXCO (executive committee). Rocketry SA co-exists under the government-mandated South African National Space Agency (SANSA), which resides under the Department of Science and Technology (DST), which in turn reports to the Minister of Science and Technology. Membership Rocketry SA membership is open to anyone with a keen interest in promoting rocketry. Rocketry SA maintains a database of all registered rocketeers. This database includes information such as certification level, which is used to determine which motors may be purchased by the member. External links Official website References Rocketry Space program of South Africa
Rocketry SA
Engineering
301
51,693,229
https://en.wikipedia.org/wiki/Thermomyces%20lanuginosus
Thermomyces lanuginosus is a species of thermophilic fungus that belongs to Thermomyces, a genus of hemicellulose degraders. It is classified as a deuteromycete and no sexual form has ever been observed. It is the dominant fungus of compost heaps, due to its ability to withstand high temperatures and use complex carbon sources for energy. As the temperature of compost heaps rises and the availability of simple carbon sources decreases, it is able to outcompete the pioneer microflora. It plays an important role in breaking down the hemicelluloses found in plant biomass due to the many hydrolytic enzymes that it produces, such as lipase, amylase, xylanase, phytase, and chitinase. These enzymes have chemical, environmental, and industrial applications due to their hydrolytic properties. They are used in the food, petroleum, pulp and paper, and animal feed industries, among others. A few rare cases of endocarditis due to T. lanuginosus have been reported in humans. History The fungus was first described in 1899 by Tsiklinskaya, after a chance discovery of it growing on a potato which had been inoculated with garden soil. It was later isolated in 1907 from leaves on warm compost piles by Hugo Miehe. Miehe was the first person to work with thermophilic microorganisms in his study of the spontaneous combustion of damp haystacks. T. lanuginosus was one of four species of thermophilic fungi isolated from self-heating hay by Miehe, along with Mucor pusillus, Thermoidium sulfureum, and Thermoascus aurantiacus. The fungus was also isolated by a number of different researchers. Griffon and Maublanc isolated it from fungus growing on moist oats in 1911, but placed it in the genus Sepedonium, as did Velich in 1914. Kurt Noack isolated several thermophilic fungi from natural habitats, including T. lanuginosus, and studied their physiology further. Cooney and Emerson provided taxonomic descriptions of the 13 known thermophilic fungal species during World War II, while studying alternative sources of rubber. Taxonomy This species has a number of synonyms, due to different names and categories being applied when it was first being described. Tsiklinskaya originally isolated and described the species, but failed to indicate the size of the aleuriospores and did not include drawings. Photographs of mycelium and spores were inconclusive because they did not give a true picture of the size or structure, owing to the failure to indicate the magnification. Due to this uncertainty, both Griffon and Maublanc (1911) and Velich (1914) placed it in the genus Sepedonium when they isolated and described it. It has also been placed in the genus Acremoniella by Rege (1927) and Curzi (1929). In 1933 the name Monotospora lanuginosa was proposed by Mason, but this was followed by a trend towards accepting the genus Humicola, as proposed by Bunce (1961), because of the questionable status of Monotospora. It was Pugh et al. who reintroduced the genus Thermomyces. Although the literature occasionally refers to this species by the earlier name Humicola lanuginosa, it is now uniformly referred to by its current name Thermomyces lanuginosus. Genome The fungus has a number of important industrial applications because it produces the largest amounts of hydrolyzing enzymes of any thermophilic fungus. This has led to an interest in studying its genetics, and subsequently resulted in the sequencing of its genome. The genome of T. lanuginosus contains about 5100 predicted genes, including 83 tRNA genes.
One of the features that has been discovered through sequencing of the genome is that the fungus has a ubiquitin degradation pathway, which helps it respond to various environmental stressors, such as nutrient limitation, heat shock, and heavy metal exposure, and may be essential for adaptation during rising temperatures. It is also capable of histone acetylation/deacetylation and contains high numbers of methylases, which play important roles in packing and condensation of DNA. Growth and morphology Thermomyces lanuginosus is classified as a thermophile, and experiences rapid growth at high temperatures. In the lab, colonies can be cultured in a glucose-salt liquid medium fortified with peptone. Colonies are white and velvety at first, generally less than 1 mm high, but soon turn grey or green-ish grey, starting from the center. Mature colonies are dull dark brown to black, often with pink or vinaceous diffusing pigment secreted from the colony. Masses of developing aleuriophores can be seen on the fine, colourless hyphae of young colonies when viewed under a microscope. These aleuriophores are short, measure 10-15μ in length, and arise at right angles to the hyphae. They are generally unbranched but occasionally branch once or twice near the base, appearing as a cluster. Septations may occur but are often difficult to observe. Aleuriospores are borne singly at tips of the aleuriophores. No teleomorph is known for this species. The asexual conida are borne singly on short stalks and are one celled, dark brown, with a roughened surface. Spores are colorless and smooth at first, but turn dark brown during maturation, and the thick exospore becomes wrinkled. Mature spores are spherical, irregularly shaped, and range from 6-10 μ in diameter. Both immature and mature spores can be easily separated from the aleuriophore, which usually ruptures slightly below the point of attachment, so free spores may be found with the top portion still attached. Thermophilic moulds grown at high temperatures (above 50 °C) contain dense body vesicles in their hyphae that function as storage organelles, mainly for phospholipids. In T. lanuginosus, nine times more lipid storage vesicles are grown at 52 °C than 37 °C. Heavy pigmentation of spores allows them to withstand temperature and heat stress, and the pigments have been found to be similar to hydroxylated pigments in aphids. Physiology Temperature effects Thermophilic fungi are the only eukaryotic organisms that can grow above 45 °C. In general, the minimum temperature required for growth is at least 20 °C, while the maximum is 60 °C or 62 °C. The optimal growth temperature for T. lanuginosus is 45-50 °C. While the maximum yield of spores occurs at 25 °C, their growth is faster at 50 °C. No growth is observed at temperatures below 30 °C or above 60 °C. Enzyme sensitivity and activity of transporters in the fungus also temperature influenced. Nutrition Thermophilic fungi are unable to grow under anaerobic conditions and require oxygen to grow. While carbon dioxide is not a nutritional requirement for fungi, T. lanuginosis growth is severely affected by lack of it. This is most likely because carbon dioxide is required for the assimilation of pyruvate carboxylase, needed for development. In compost heaps where the fungus is commonly found, the availability of soluble carbon decreases as temperature increases, and main carbon sources tend to be polysaccharides like cellulose and hemicellulose. T. 
lanuginosis is unable to utilize cellulose because it does not produce a cellulase, but it is well adapted to using other complex carbon sources such as hemicellulose. It is capable of growing commensally by using sugars released when cellulose is hydrolyzed by a cellulolytic partner. The hydrolytic products of cellulose and hemicellulose - glucose, xylose and mannose, are transported using the same proton-driven symport. This transport is constitutive, specific, and carrier-mediated, and its sensitivity is temperature dependent. Thermomyces lanuginosis concurrently utilizes glucose and sucrose at 50 °C, with sucrose being utilized faster than glucose. Both sugars also used concurrently at 30 °C at nearly identical rates. The use of sucrose occurs at the same time as glucose for two major reasons: because the invertase is insensitive to catabolite repression by glucose, and because the activity of the glucose uptake system is repressed by glucose itself as well as by sucrose. The two sugars reciprocally influence their utilization in the mixture. Enzymes There are a number of enzymes secreted by T. lanuginosis, and many of them have applications in various industries. Lipase The lipase of T. lanuginosis has catalytic centre that contains three amino acids (serine-histidine-aspartic acid) and is covered by a short alpha-helical loop or "lid" that moves to allow substrate access to the active site. A single mutation in the serine alters the lid's motion, which affects enzyme binding affinity. The lipase also contains disulfide linkages but no free —SH group. The productivity and thermostability of the lipase differs with different strains, but it has been found to be stable at a pH of 4 to 11 and optimally active at 8.0. Temperature wise, some activity has been observed at 65 °C, but it is completely inactivated at 80 °C. The thin mycelial suspensions formed by the fungus make it desirable for use in the production of stable lipase for manufacturing detergents for hot water machine washing. The immobilization of Thermomyces lanuginosus lipase (TLL) or other lipases on diverse supports utilizing a variety of immobilization techniques has been investigated in scientific literature with the aim of enhancing enzyme stability, reusability, and performance in various biocatalytic applications, such as biodiesel production and ester synthesis. These outcomes highlight the potential of immobilized lipases for industrial-scale applications in the food, beverage, and biofuel sectors, as they offer environmentally friendly and sustainable alternatives for diverse chemical transformations. Amylase α-amylase is a dimeric enzyme that can hydrolyse starch, amylose, and amylopectin, but not maltose The α-amylase of T. lanuginosis is most active at slightly acidic conditions and a temperature of 65 °C. At 100 °C it is inactivated by self-association of subunits, converting it to an inactive trimeric species. The enzyme is able to withstand otherwise lethal temperatures and then return to native state with full enzymatic activity, which allows the fungus to survive fluctuating high and low temperatures. Xylanase Xylan is the most abundant structural polysaccharide in nature other than cellulose, and is broken down through many enzymes, including xylanase. The xylanase of T. 
lanuginosus is a polypeptide of 225 amino acids and is highly homologous with other xylanases, but it differs due to the presence of a disulfide bridge that most mesophilic xylanases do not have, and its increased density of charged residues throughout the protein. This makes starch degrading enzymes of T. lanuginosus the most thermostable enzymes among fungal sources. While the temperature optima of most xylanases range from 55 to 65 °C, xylanases of some strains of T. lanuginosus are optimally active at 70 to 80 °C.The xylanase is even stable to denaturants such as urea, and has the ability to refold after denaturing. Xylanases have found applications in the food, animal feed, and pulp and paper industries as they can be used to breakdown xylan in industrial enzymatic reactions. Phytase The phytase of T. lanuginosus has optimum activity at 65 °C and a pH of 6.0. Differential scanning calorimetry has shown that high temperature (69 °C) is required to unfold it. Phytic acid is the primary storage form of phosphate in cereal, legumes, and oilseeds. These are the main components of animal feed, but monogastric animals and humans are unable to digest phytate completely and do not benefit from the phosphate. Extra phosphorus needs to be added into feed to desphosphorylate the phytic acid because it forms insoluble complexes with some metal ions, making them unavailable for nutrition. There is therefore an interest in the use of phytases to break down the phytic acid and avoid this extra step. Chitinase Four putative chitinase encoding genes have been identified in T. lanuginosus. Chitinases are glycosyl hydrolases that break down the β-1,4 linkages of chitin. They are active over broad pH (3.0–11.0) and temperature (30–60◦C) range. Chitinases are biologically useful because they break down the biopolymer chitin. Chitin poses a severe environmental problem in the form of chitinous waste, which is produced at up to 100 billion metric tonnes annually. They can be used in the degradation of chitin in crude shrimp shells without pre-treatment with harsh chemicals, and also have applications in medicine as chitinase has been found to have antifungal properties. Glucoamylase Thermomyces lanuginosus also produced glucoamylase. It has suggested usefulness in the commercial production of glucose syrups because it is insensitive to end product inhibition. Trehalase Trehalase is a monomeric glycoprotein with 20% carbohydrate content. It is optimally active at 50 °C. It is produced constitutively in T. lanuginosus, but is strongly bound to the hyphal wall. Invertase Thermomyces lanuginosus has a very unstable invertase that can be stabilized by thiols in the lab and inactivated by thiol-modifying compound, suggesting it is a thiol protein. Adaptations for survival Thermomyces lanuginosus has a number of adaptations for survival, including homeoviscous adaptation. The concentration of linoleic acid is twofold higher at 30 °C than at 50 °C, meaning it can adjust the fatty acid composition of its membrane in response to temperature to vary the fluidity and keep its enzymes functioning optimally. It can also gain thermotolerance - conidia that are heat shocked show enhanced survival at higher temperatures. Habitat and ecology Thermophilic fungi are primarily compost fungi, though T. lanuginosus has also been found to thrive in spoil tips, senescent grass leaves, sewage, and peat and bog soils, and is the dominant species of thermophilic fungi in hot springs. 
Though it is sometimes found in soil, this is not a natural habitat for T. lanuginosus, and the concentration of spores of thermophilic fungi per gram material is approximately 106 higher in composts than soils. It is proposed that their wide presence in soil is due to dispersal of spores elsewhere and fallout from air. Thermomyces lanuginosus has two of the most important qualities required for being a compost colonizer - it is able to withstand high temperatures and use complex carbon sources for energy. It produces thermostable hemicellulases that degrade hemicellulose of plant biomass into simpler sugars. As the temperature in compost systems rises, the pioneer flora disappears and thermophilic fungi become dominant. Exothermic reactions of saprophytic and mesophilc microflora raise the temperature to 40 °C, which causes thermophilic spores to germinate and eventually outgrow pioneers, raising the temperature even higher to 60 °C. Around the end of decomposition, thermophilic fungi compose 50-70% of compost microbial biomass. T. lanuginosus is a secondary sugar fungus and can participate in mutualistic relationships with some true cellulose decomposers of composts. Uses of Thermomyces lanuginosus Lipase (TLL) Thermomyces lanuginosus Lipase (TLL) has a number of different chemical, environmental, and industrial applications, where hydrolytic processes are involved. Its regiospecificity allows the oleochemical industry to produce products such as cocoa butter equivalents, human milk fat substitutes, and other specific-structured lipids. It can be used for hydrolysis of oils and fats, alcoholysis or transesterifications of oils and fats, esterification of fatty acids, and acidolysis and interesterification of oils. The first commercialized lipase to be used in detergents was Lipolase, a fungal lipase initially derived from T. lanuginosus. Lipases help the capability of detergent by removing stains. Lipolases have also been used in ionic liquids - environmentally attractive alternatives to typical organic solvents. TLL has many uses in organic chemistry, such in resolution of racemic mixtures, and creation of supercritical fluids. Large scale environmental applications include use in the degradation of polymers, treatment of wastewater from the meat industry, pretreatment of wool, and a sensor of fat quality in large scale processing. The lipase has been found to be useful in the production of drugs and drug intermediates, such as anti-tumour agents, because it can help bring about kinetic resolutions of synthetically important chiral building blocks. It also has applications in the petroleum industry as it can be used in the production of biodiesel. Pathology Endocarditis caused by the fungus has been reported in humans, but is very rare. The first report of T. lanuginosus endocarditis was made postmortem over 25 years ago in a patient who had prior valvular surgery for Staphylococcus aureus infective endocarditis, where it remained asymptomatic for more than 6 months. Another case was reported in an otherwise immunocompetent patient who had a prosthetic heart valve inserted following bacterial endocarditis. T. lanuginosus endocarditis was most likely the result of contamination during that surgery. In the latter case, the illness lasted 9 years and was treated with aggressive surgery and voriconaozle therapy. 
Risk factors for mould endocarditis include pre-existing lesions, valvular heart disease, prior cardiac surgery (such as valvular surgery, coronary artery bypass grafting, pacemaker or defibrillator insertion and surgery of the aorta), immunosuppression including pregnancy and prematurity, intravenous drug abuse, and having intravenous lines. Symptoms at presentation may include fever, chills, cardiac failure, neurological symptoms including weakness, confusion and visual impairment, respiratory symptoms, skin lesions, chest pain, leg pain, back pain and constitutional symptoms such as anorexia, malaise and weight loss. Fungal endocarditis is fatal without treatment. It has a high morbidity and mortality, as well as a potential for relapse, so patients with uncommon non-Aspergillus mould endocarditis may require lifelong suppressive antifungal therapy. References Trichocomaceae Fungi described in 1899 Fungus species
Thermomyces lanuginosus
Biology
4,118
4,982,675
https://en.wikipedia.org/wiki/Get%20a%20Mac
The "Get a Mac" campaign was a television advertising campaign created for Apple Inc. (Apple Computer, Inc. at the start of the campaign) by TBWA\Media Arts Lab, the company's advertising agency, that ran from 2006 to 2009. The advertising campaign ran in the United States, Canada, Australia, New Zealand, the United Kingdom, Japan, and Germany. Synopsis The Get a Mac advertisements follow a standard template. They open to a plain white background, and a man dressed in casual clothes introduces himself as an Apple Mac computer ("Hello, I'm a Mac."), while a man in a more formal suit-and-tie combination introduces himself as a Microsoft Windows personal computer ("And I'm a PC."). The two then act out a brief vignette, in which the capabilities and attributes of Mac and PC are compared, with PC—characterized as formal and somewhat polite, though uninteresting and overly concerned with work—often being frustrated by the more laid-back Mac's abilities. The commercials end with a still shot of a MacBook laptop computer displaying the Mac logo on its screen. The earlier commercials in the campaign involved a general comparison of the two computers, whereas the later ones mainly concerned Windows Vista and Windows 7. The original American advertisements star actor Justin Long as the Mac, and author and humorist John Hodgman as the PC, and were directed by Phil Morrison. The American advertisements also aired on Canadian, Australian, and New Zealand television, and at least 24 of them were dubbed into Spanish, French, German, and Italian. The British campaign stars comedic duo Robert Webb as Mac and David Mitchell as PC, while the Japanese campaign features the comedic duo Rahmens. Several of the British and Japanese advertisements, although based on the originals, were slightly altered to better target the new audiences. Both the British and Japanese campaigns also feature several original ads not seen in the American campaign. The Get a Mac campaign is the successor to the Switch ads which were first broadcast in 2002. Both campaigns were filmed against a plain white background. Apple's former CEO, Steve Jobs, introduced the campaign during a shareholder's meeting the week before the campaign started. The campaign also coincided with a change of signage and employee apparel at Apple retail stores detailing reasons to switch to Macs. The Get a Mac campaign received the Grand Effie Award in 2007. Advertisements The advertisements play on perceived weaknesses of non-Mac personal computers, especially those running Microsoft Windows, of which PC is clearly intended to be a parody, and corresponding strengths possessed by the Mac OS (such as immunity to circulating viruses and spyware targeted at Microsoft Windows). The target audience of these ads is not devoted PC users, but rather those who are more likely to "swing" towards Apple. Apple realized that many consumers who chose PCs did so because of their lack of knowledge of the Apple brand. With this campaign, Apple was targeting those users who may not consider Macs when purchasing but may be persuaded to when they view these ads. Each of the ads is about 30 seconds in length and is accompanied by a song called "Having Trouble Sneezing", which was composed by Mark Mothersbaugh. North American campaign The following is an alphabetical list of the ads that appeared in the campaign shown in the United States, Canada, Australia and New Zealand. 
Accident—PC, who is sitting in a wheelchair and wearing casts on his arms, explains that he fell off his desk when someone tripped over his power cord, thus prompting Mac to point out that the MacBook's and MacBook Pro's magnetic power cord prevents such an occurrence. The MacBook featured at the end demonstrates the feature. Angel/Devil—Mac gives PC an iPhoto book to view. Suddenly, angel and devil versions of PC appear behind him. The angel encourages PC to compliment Mac, while the devil prods PC to destroy the book. In the end, PC says the book is good and then turns around, feeling the air where the angel and devil versions of himself were. Bake Sale—When Mac questions PC regarding a bake sale he has set up, PC replies that he is trying to raise money by himself in order to fix Vista's problems. Mac decides to contribute by buying a cupcake, but as soon as he takes a bite, PC asks him to pay ten million dollars for it. Bean Counter—PC is trying to balance his budget, admitting that Vista's problems are frustrating PC users and it's time to take drastic action: spending almost all of the money on advertising. When Mac asks PC if he thinks the small amount of money left will fix Vista, PC reallocates all of it to advertising. This ad coincided with the introduction of Microsoft's "I'm a PC" campaign. Better—Mac praises PC's ability with spreadsheets but explains that he is better with life-related activities such as music, pictures, and movies. PC defensively asks what Mac means by "better," only to sheepishly claim a different definition when Mac tells him. Better Results—PC and Mac discuss making home movies and show each other their efforts. Supermodel Gisele Bündchen enters, representing Mac's movie, while PC's movie is represented by a man with a hairy chest wearing a blonde wig and a dress similar to Bündchen's. PC states that his movie is a "work-in-progress." Biohazard Suit—PC first appears wearing a biohazard suit to protect himself from PC viruses and malware, of which PC says there are 20,000 discovered every day. Mac asks PC if he is going to live in the suit for the rest of his life, but PC cannot hear him because he is too protected by his virus-proof mask, and takes it off. PC then shrieks and struggles to place it on again. Boxer—PC is introduced by a ring announcer as if he were in a boxing match, stating that he's not going down without a fight. Mac explains that the issue is not a competition but, rather, people switching to a computer that's simpler and more intuitive. The announcer admits his brother-in-law recently purchased a Mac and loves it. This is also the first ad to show Mac OS X Leopard. Breakthrough—Mac and PC's therapist (played by Corinne Bohrer, see "Counselor" below) suggests that PC's problems are simply a result of software and hardware coming from various sources, whereas Mac gets all his hardware and software from one place. PC keeps repeating "It's not my fault!" with the support of Mac and the therapist before concluding, "It's Mac's fault! It's Mac's fault!" Mac and the therapist are disappointed in PC's conclusion, but PC nevertheless ends with the comment "What a Breakthrough!" Broken Promises—PC tells Mac how excited he is about the launch of Windows 7 and assures him it won't have the same problems as Vista. However, Mac feels like he has heard this before and has a series of flashbacks with past versions of PC assuring him about Windows Vista, XP, ME, 98, 95, and 2.0. In the last flashback, PC says, "Trust me." 
Back in the present, he explains this time it's going to be different and says, "Trust me," in an almost identical way to his flashback counterparts. Calming Teas—PC announces calming teas and bath salts to make Vista's annoyances easier to live with, such as "Crashy-time Chamomile", "Missing Driver Mint", "Pomegranate Patience", and "Raspberry Restart". He runs out of time before he can talk about his bath salts. Choose a Vista—Confused about which of the six versions of Windows Vista to get, PC spins a large game show wheel. PC lands on Lose a Turn, and Mac questions why PC put that space on the wheel. Computer Cart—PC and three other men in suits are on a computer cart. When Mac asks why, PC says that he gets an error with a Windows Media Player Dynamic-link library file (WMP.DLL), and that the others suffer from similar errors. The man in the beige suit represents error 692, the man in the grey suit represents a Syntax error, and the man in the bottom of the cart represents a fatal system error (PC whispers, "He's a goner," at the commercial's end). Mac explains that Macs don't get cryptic error messages. Counselor—PC and Mac visit a psychotherapist (played by Corinne Bohrer) to resolve their differences. The therapist suggests they each compliment each other. While Mac finds it easy to compliment PC, PC's resentment of Mac's abilities is too deep for him to reciprocate. The counselor suggests that they come twice a week. Customer Care—Mac is seen with a Mac Genius from an Apple Retail Store's Genius Bar, who can fix Mac problems. PC then has a short montage of endless automated customer-support messages, never reaching a real person, much to his disappointment. PC then says that his source of help is "the same" as a Mac Genius. Elimination—PC attempts to find the perfect PC for Megan, a new laptop hunter. He starts by eliminating from a lineup of fellow PCs all those who have too-small screens and too-slow processors. However, none of the PCs is "immune" to viruses, which is Megan's #1 concern, so PC leaves her with Mac. Flashback—Mac asks PC if he would like to see the website and home movie that he made. This prompts PC to remember a time when both he and Mac were children: when the younger Mac asks the younger PC if he would like to see some artwork he did, the younger PC takes out a calculator and calculates the time they have just wasted (This may be a reference to the time when PCs were text-based, while Macs were slower but had GUIs). Returning from the flashback, PC does the same thing. Genius—Mac introduces PC to one of the Mac Geniuses from the Apple Retail Store's Genius Bar. PC tests the Genius, starting with math questions, which culminates in asking her, on a scale of one to ten, how much he loathes Mac, to which she answers "Eleven." Surprised, PC says "She's good. Very good." Gift Exchange—Mac and PC exchange gifts for Christmas. PC, who is hoping for a C++ GUI programming guide, is disappointed to receive a photo album of previous Get a Mac ads made on iPhoto. In contrast, he gives Mac a C++ GUI programming guide. Goodwill—Mac and PC agree to put aside their differences because of the Christmas season. Although PC momentarily slips and states that Mac wastes his time with frivolous pursuits like home movies and blogs, the two agree, as Mac says, to "Pull it into hug harbor," and they wish each other a good holiday. Group—PC is at a support group for PCs living with Vista. 
The other PCs there tell him to take it one day at a time and that he is facing the biggest fact of all—that Vista isn't working as it should. They all wish the Vista problems will go away sooner and a lot easier. One of them says pleasingly that he has been error-free for a week, but he starts to repeat himself uncontrollably, discouraging the others. iLife—PC listens to an iPod and praises iTunes. Mac replies that the rest of iLife works just as well and comes on every Mac. PC defensively responds by listing the cool apps that he comes with, but he can only identify a calculator and a clock. I Can Do Anything—In this animated commercial designed for the holiday season, PC asks Mac why the former loves the holidays so much. Mac asks if it's the season for peace on earth, but PC replies that they get to be animated and can do anything. PC demonstrates by floating in the air, building a snowman in fast motion, and asking a hopping bunny where he is going. The bunny, who can speak, says he's going to the Apple Store for some last-minute gifts. PC then purposely tips off the snowman's head, making it fall on the bunny, and sarcastically apologizes to him, calling himself clumsy. The animation style for this ad mimics the Rankin/Bass animation style seen in a number of classic Christmas specials. Legal Copy—Every time PC says something positive about himself, the legal copy that appears on the screen bottom increases. He finally states that PCs are now "100% trouble-free", and the legal copy covers the whole screen. Meant for Work—PC, looking haggard and covered in stickers, complains about the children who use him and their activities, such as making movies and blogging, which are wearing him out. He also says he cries himself to sleep mode every night, complaining that, unlike Mac, he is meant more for office work. PC is then alerted because his user wants to listen to some emo music and, with a loud groan, trudges off, showing an Anarchy sticker on his back. Misprint—PC is on the phone with PC World, attempting to report a misprint. He explains how the print said, "The fastest Windows Vista notebook we tested this year is a Mac." PC argues how impossible it is for a Mac to run Vista faster than a PC, while Mac tries to explain that it is true. While arguing with PCWorld over the phone, PC says that he'll put Mac on the line to set things straight. However, he instead impersonates Mac, saying that PCs are faster. Network—Mac and PC are holding hands to demonstrate their ability to network with each other. A Japanese woman representing a new digital camera enters and takes Mac's free hand. While Mac and the camera are perfectly compatible and speak to each other fluently, PC—who cannot speak Japanese—is utterly confused and unable to communicate, representing that Windows PCs need a driver installation with virtually all new hardware. Now What—PC begins by showing off his new, long book, I Want to Buy a Computer — Now What? to help customers deal with all the difficult computer-buying decisions if they have no one to help. Mac then explains that, at Apple Stores, personal shoppers help customers find the perfect Mac, even offering workshops to teach people about using the computers. Upon hearing this, PC brings out his book's companion volume, I Just Bought a Computer — Now What? Office Stress—Mac's new Microsoft Office 2008 has just been released. In the box that PC gives Mac is a stress toy for him to use when he gets overwhelmed from doing lots more work. 
However, PC begins using the toy, complaining that Microsoft Office is also compatible with Mac, that he wants to switch his files over, and that he is getting less work than Mac, eventually breaking the toy. Off the Air—Mac and PC appear with a Mac Genius, who announces it is now easier than ever to switch to a Mac and that a Mac Genius can switch over PC files to a new Mac for free. PC then protests that fear is what keeps people from switching, and people don't need to hear about the Mac Genius. In protest, he pulls a cover over the camera, which has a test card drawn on it, and declares that they are off the air. Out of the Box—Mac (in a white box) and PC (in a brown box doing some exercises) are discussing what they will do when they are unpacked. Mac says that he can get started right away, but PC is held up by the numerous activities that he must complete before being useful. Mac eventually leaves to get right to work, but PC is forced to wait for parts that are still in other boxes. Party is Over—PC unhappily throws a party celebrating the release of Windows Vista. He complains to Mac that he had to upgrade his hardware and now can't use some of his old software and peripherals. He then talks with one of the party members about throwing another party in five years, which turns into five years and a day, and so on. PC Choice Chat—PC has his own radio talk show called PC Choice Chat, and people begin to call in asking for advice on which computer to get. All the callers ask for advice on a computer that would qualify as a Mac but not as a PC. One caller asks for a computer for people who hate getting viruses, another caller asks for PC help like Mac Geniuses, and a third caller wants to switch to Mac altogether. PC ignores these calls. PC Innovations Lab—PC introduces himself and then starts talking about the PC Innovations Lab he has set up. When Mac questions him about it, he tells Mac that in response to the Mac's magnetic power cord, he wrapped another PC in bubble wrap, and in response to Mac's all-day battery life, he made an extremely long power cord. Mac tells PC that innovations should make people's lives easier, to which PC shows Mac another PC with cupholders on its shoulders. PC then takes the cup and says "Cheers to innovation!" PC News—PC is sitting at a news desk and turns it over to a correspondent at what seems to be a launch party for Windows 7. A person being interviewed reveals that he is switching to a Mac. PC is surprised by this and asks why, but more people speak of how Mac is #1 with customer satisfaction until PC finally says to cut the feed. He then suggests going to commercial, but Mac acknowledges that they are in a commercial, so PC instead suggests going to another commercial. Pep Rally—PC is introduced by a cheerleading squad. When asked, PC explains Mac's number-one status on college campuses with a built-in iSight camera, a stable operating system, and an ability to run Microsoft Office so well, so he wants to win students back with a pep rally. The cheerleaders cheer, "Mac's Number One!" and upon PC's complaint, they cheer, "PC's Number Two!" Pizza Box—PC tries to attract college students by posing as a free box of pizza. This ad was aired during Apple's 2008 back-to-school promotion. Podium—PC, in the style of a political candidate, is standing at a podium making declarations about Windows Vista, urging those who are having compatibility problems with existing hardware to simply replace them and to ignore the new features of Mac OS X Leopard. 
However, he privately admits to Mac that he himself downgraded to Windows XP three weeks earlier. His key slogan is: "Ask not about what Vista can do for you; ask what you can buy for Vista." PR Lady—Mac and PC are joined by a public relations representative (played by Mary Chris Wall), who has been hired by PC to place a positive spin on the reaction to Windows Vista and claims that many people are even downgrading back to Windows XP. Her response to claims that more people are switching to Mac instead is a sheepish "No comment." Referee—A referee is present, according to PC, to make sure that Mac doesn't go on saying that Leopard is better and faster than Vista. When Mac defends himself, saying it was The Wall Street Journal that compared the two, PC complains, and the referee sides with Mac. Upon insulting the referee, PC is ejected, but PC rebuts, saying that he has nowhere to go (in the ad's area). Restarting—Mac and PC explain how they both have a lot in common, but their discussion is hampered by PC's unfortunate habit of freezing and restarting. Sabotage—PC is present, but a different actor (Robert Webb in UK version) appears in Mac's place, obviously reciting poorly memorized lines to flatter PC. The real Mac arrives soon after, and, while PC denies anything is happening, the impostor Mac tells the real Mac that he is a big fan of his. Sad Song—PC sings a short country-and-Western-style song to express his grievances about people leaving PCs for Macs and Vista's technical issues. A hound-dog then howls, which Mac says is a "nice touch." A longer version ends with Mac asking PC if the dog is his, which it isn't. Sales Pitch—Although Mac introduces himself, as usual, PC says, "And buy a PC." He explains that Mac's increasing popularity is forcing him to be more forward in his self-promotion, so he is reduced to holding up red signs depicting various pitches. Santa Claus—Another animated Get a Mac commercial featuring Santa Claus and Christmas caroling by both PC and Mac. PC spoils the group's singing of "Santa Claus is Coming to Town" by inserting "Buy a PC and not a Mac this holiday season or any other time for goodness sake," and claims, "That's how I learned it." The animation style is similar to the Rankin/Bass television specials Rudolph the Red-Nosed Reindeer and Santa Claus Is Comin' to Town. Security—In a reference to criticisms of Windows Vista's security features, PC is joined by a tall United States Secret Service-style bodyguard who represents Vista's new security feature. The guard intrusively demands PC's decisions to cancel or allow every incoming or outgoing interaction he has with Mac. Self Pity—Mac, for once, is wearing a suit. He explains that he "does work stuff, too," and has been running Microsoft Office for years. Upon hearing this, PC becomes despondent and collapses on the floor, begging to be left alone to depreciate. Stuffed—PC enters slowly with a ballooned torso, explaining that all the trial software is slowing him down. Mac replies that Macs only come with the specific software for which customers ask (namely, the iLife package). As PC finally gets on his mark, Mac begins his intro again, but PC realizes that he has forgotten something and begins to slowly leave. Stacks—PC is searching through all of his pictures, trying to find a photograph of his friend. 
He searches one picture at a time, but Mac states that iPhoto has a feature called Faces, in which iPhoto can tag the face of a person and find other pictures of the same person, putting them all into the same folder and saving search time. PC responds to the facial-recognition technology as expensive and tells Mac to sort the pictures instead because he has the technology to make it easier. Surgery—PC appears in the garb of a patient awaiting surgery and explains that he is upgrading to Windows Vista but requires surgery to upgrade (specifically, upgrading such items as graphics cards, processors, memory, etc.). In reference to perceived difficulties in upgrading, PC admits that he is worried about going through it and bequeaths his peripherals to Mac should he not survive. Mac asks PC if, like him, his upgrade could be straightforward. Surprise—Mac appears alongside a customer (Andrée Vermeulen) with PC notably absent. Mac tries to convince the customer, who wants to buy an effective computer, that she should get a PC, telling her that they're much better and more stable. The customer seems skeptical, tells Mac she will "think about it", and leaves. A frustrated Mac pulls off a mask and his clothes, revealing himself to be PC in disguise. The real Mac then appears, sees PC's discarded mask and clothes, and says, "I don't even want to ask." Tech Support—A technician (Brian Huskey) is present to install a webcam on PC (using masking tape to attach it to his head). PC is extremely pleased by his new upgrade, but upon hearing from the technician that Mac has a built-in webcam, he storms off without waiting for the camera to be fully installed. Teeter Tottering—A woman who had a PC has a box of things that were in her PC and says she's switching to Mac. PC tries to convince her to stay while she goes over to Mac every time. Throne—PC appears in a king's robe and on a throne saying, even though switching computers can be difficult, his subjects won't leave him and that he's still the "king" of computers. Mac then begins talking about how PC's subjects can bring their PC into an Apple Store wherein all PC files can be transferred over to a new Mac, at which point PC declares Mac banished. Time Machine—Mac appears with nine clones of himself behind him, who all introduce themselves at once. PC is shocked, so the various Macs explain that it is simply Time Machine, a feature in Leopard that makes regular backups of a user's hard drive. PC admits that he likes the feature, and the Mac clones thank him one at a time. Time Traveler—PC uses a time machine to travel to the year 2150 to see if any major issues such as freezing and crashing have been removed from the PC and to see if PCs will eventually be as hassle-free as Macs are. Promptly after PC arrives in 2150, his future self freezes, which answers the question. Top of the Line—PC and Mac appear with a customer who is looking for a new computer. PC introduces her to the "top-of-the-line" PC (Patrick Warburton), a handsome and overly slick PC in a suit. She asks him about screen size and speed, to which the top-of-the-line PC says he's the best. However, he balks when she says she doesn't want to deal with any viruses or hassle. She decides to go with Mac, so the top-of-the-line PC hands her his business card and tells her, "When you're ready to compromise...you call me." Touché—Right after PC introduces himself, Mac replies, "And I'm a PC, too." 
Mac explains to the confused PC that he can run both Mac OS X and Microsoft Windows, calling himself "the only computer you'll ever need." PC mutters, "Oh...touché." Mac explains, referring to the rules of fencing, that one only says touché after he or she makes a point and someone else makes a counterpoint, but PC continues to misuse the word. A similar conversation occurred in Dodgeball: A True Underdog Story, a film in which Justin Long (Mac) appeared. Trainer—The commercial starts off traditionally, but PC is doing sit-ups with a trainer in a striped shirt (Robert Loggia), whose fierce coaching style discourages PC. PC suggests the trainer try some "positive reinforcement," but the trainer compliments Mac instead, and PC is offended. This is the first commercial to show Mac OS X Snow Leopard. Tree Trimming—In another animated Get a Mac commercial for the holiday season, Mac and PC set aside their disagreements and decide to trim a Christmas tree by hanging ornaments and stringing lights. Mac tells PC that they are good friends, while PC gets nervous. When they are finished, PC does not want to light the lights on the tree, but Mac persuades him to do so. PC plugs in the tree's lights, but, when illuminated, the lights spell: "PC RULES." He apologizes to Mac and says that it "just sort of happened." Trust Mac—PC, in an attempt to hide from spyware, is wearing a trench coat, a fedora, dark glasses, and a false mustache. PC offers Mac a disguise, but Mac declines, saying he does not have to worry about the normal PC spyware and viruses with Mac OS X Leopard. V Word—PC declares that people should stop referring to his operating system (Vista) by name. He says using the word "doesn't sit well with frustrated PC users. From now on, we're going to use a word with a lot less baggage: 'Windows.'" During the scene, he holds a black box with a large red button that sounds a buzzer when pressed. PC presses the button whenever Mac says Vista. After pointing out that not using the word isn't the same as fixing the operating system's problems, Mac ends the ad by saying Vista several times in rapid succession, thwarting PC's attempts to sound the buzzer. Viruses—PC has caught a new virus (represented as a cold) and warns Mac to stay away from him, citing the 114,000 known viruses for PCs. Mac states the viruses that affect PCs do not affect him, and PC announces that he will crash before collapsing onto the floor in a faint. Work vs. Home—Mac describes how he enjoys doing fun activities such as podcasts and movies, which leads PC to claim that he also does fun activities such as timesheets, spreadsheets, and pie charts. After Mac states that it's difficult to capture a family vacation using a pie chart, PC rebuts by showing a pie chart representing "hanging-out time" and "just kicking it" with different shades of gray. Mac replies, "I feel like I was there." Wall Street Journal—Mac is reading a favorable review of himself by Walt Mossberg in The Wall Street Journal. Jealous, PC claims he also received a great review but is caught off-guard when Mac asks for specific details. This ad is currently not available on the Apple website but can be found on YouTube. Yoga—Mac is watching PC have a yoga session in which the yoga instructor (Judy Greer) is coaching PC in expelling bad Vista energy and forgetting Vista's problems. When the yoga instructor goes on to complain that Vista caused errors in her yoga billing and then storms off, PC considers switching to pilates. 
Web-exclusive campaign Several advertisements have been shown exclusively in Flash ad campaigns running on numerous websites. Unlike the ads shown on television, these advertisements have not been posted as high-quality QuickTime videos on Apple's website. These ads run for approximately 20 seconds each and reference specific online advertising features (such as banner ads), making it unlikely they will ever appear on television. The titles are taken from the Flash-video file names. Banging—PC expresses his regret for upgrading to Windows Vista because it is causing him various problems. Mac tries to comfort him, but PC continues to bang his head on the side of the banner advertisement. Booby Trap—PC and Mac are at PCMag. PC is angry that they put up a banner ad saying that iLife '09 is the best office suite. PC hooks some cables up to the banner, claiming that whoever clicks it will get shocked. PC proves it himself by clicking it. Claw—In a skyscraper ad, PC is using a grabber claw to try to grab a boxed copy of Microsoft Office 2008 for Mac that is sitting in the top banner ad. He claims that if people see that Office 08 is on the Mac, they will ask questions regarding what a PC can do that the Mac can't. Mac points out that Office has been on the Mac for years, and that this is simply the latest version. PC knocks over the Office box, which causes an alarm to go off. PC hands the grabber claw to Mac, saying "He did it!" Cramped—In the only known UK web-exclusive ad, PC and Mac (portrayed by Mitchell and Webb) are lying head-to-head in a banner ad, complaining about the size and format of the banner ad, and encouraging the user to click the ad quicker. Customer Experience—A banner ad shows that Mac is rated #1 in customer experience. PC is frustrated and seeks more opinions from a before-and-after hair ad. Both say that the Mac is better. Customer Satisfaction—A "Mac Customer Satisfaction Score" meter appears in a banner ad above Mac and PC. The meter's needle is hovering at about 85 out of 100. PC excuses himself and climbs up to the upper banner ad, and pulls on the needle. He accidentally breaks off the tip of the meter, and then waves it at the 20 mark, saying "Customer satisfaction is dropping..." Easy as 1–23—In a Web banner, PC shows Mac his new slogan. Mac assumes it means "PC. Easy as 1-2-3," but PC corrects him by stating it means "Easy as 1 through 23". He then pulls out 23 steps for using a PC. Editorial—PC drags his own op-ed column into the banner ad (since these ads appeared on news sites, such as cnn.com, it "blends" in with the rest of the site). The op-ed headline says "Stop Switching to Mac!" PC explains that people are switching to Macs more than ever, and that they need to know how much it is hurting PC. He makes a couple of anguished poses in the photo box to illustrate how frustrated he is. Hiding—PC peeks in from the left side of the screen. When Mac asks what PC is doing, PC explains that he is hiding from viruses and spyware. PC then leaves, saying that he has to run a scan. There are two versions of this ad: a 300x250 square ad and a 160x600 vertical banner ad. PC is identical in both versions, but Mac's performance features a different take in each. Knocking—PC panics about needing to search for new drivers for his hardware now that he's upgraded to Windows Vista. He tries to force his way off the left side of the screen so he can leave to find the new drivers but repeatedly runs into a wall. 
When he finally succeeds in breaking through the left side of the screen, he finds himself jumping back in from the right side of the screen. Newswire—PC, jealous of Mac's good press, gets his own newswire ticker above the ad. Unfortunately, the newswire displays unflattering headlines such as "Vista Users Upset Over Glitches" and "Users Downgrade to XP." PC says he hates his stupid newswire and then the next headline on the newswire is "PC Hates His Stupid Newswire." Not—A banner ad on the top of the page reads, "Leopard is better and faster than Vista." —Wall Street Journal. On the side, Mac introduces himself while PC climbs a ladder. Mac asks what PC is doing and he says that he is fixing an embarrassing typo. He then climbs all the way to the top and staples a piece of paper that says NOT at the end of the quotation. He then tells Mac that they have the whole Internet to correct and asks Mac to grab the ladder. PC Turf (PCMag and PCWorld exclusive)—PC welcomes Web surfers to his turf, PCWorld.com, and remarks that Mac must feel out of place there. Mac points out that they said some great things about Macs, so PC asks security to remove Mac because he's going to be a problem. The PCMag version is identical, except PC's voice is re-dubbed to say "PCMag.com." Refresh—A banner ad on the top of the page reads, "Vista...one of the biggest blunders in technology?" —CNET.com. Off to the side, PC sees the banner and realizes it's another bad review of Vista and decides to do an emergency refresh. He walks over and opens a compartment door that says, "Emergency Banner Refresh." PC flips the switch, and the banner is replaced by another banner that reads, "It's time for a Vista do-over" —PC Magazine. PC, frustrated about this review, flips the switch again. The banner is replaced by another that reads, "Mac OS X Leopard: A Perfect 10" —InfoWorld. PC sees this positive review and is relieved until he realizes it's about Leopard. PC angrily flips the switch again to end the ad. Sign—In a skyscraper ad, Mac asks PC about an unlit sign in a separate banner ad that reads, "DON'T GIVE UP ON VISTA." PC replies that it will stop the problem of frustrated Windows Vista users downgrading to XP or switching to Macs. He presses a button, lighting up only the GIVE UP part of the sign. He presses it again, lighting up ON VISTA. Frustrated, PC presses the button repeatedly, causing GIVE UP and ON VISTA to light up alternately. Switcher Cams—A banner ad at the top of the page displays a bank of 5 security camera screens which show users walking into Apple Stores; as users walk past each camera "PC SWITCHER" lights up in red beneath each screen. On the side, PC sees the switchers and is disappointed they are upgrading to Mac instead of to Windows 7. Mac says he thought Windows 7 was "supposed to be an improvement", to which PC responds that Macs are still #1 in customer satisfaction and that people will have to move their files over anyway. Still observing the switchers, PC leaves the side and appears on one of the video screens, managing to stop one switcher from going into the Apple Store but says there are still "thousands and thousands to go". UK campaign For the British market, the ads were recast with the popular British comedy double act Mitchell and Webb in the lead roles; David Mitchell as PC and Robert Webb as Mac. As well as original ads, several ads from the American campaign were reshot with new dialogue and slightly altered scenes. 
These ads are about 40 seconds long, which is slightly longer than the US advertisements. The following ads are exclusive to the UK: Art Language—In an effort to relate to the creative artistic types whom he assumes own Macs, PC, dressed in a stereotypically bohemian fashion, begins speaking to Mac using unnecessarily pretentious language. Despite Mac's insistence that he enables anyone to be creative, PC continues using big words, eventually confusing even himself. Court—PC, dressed in a barrister's outfit, questions Mac on how long it takes to make an iPhoto photo book that Mac claims to have made in a few minutes. Doubting Mac's claim, PC eventually resorts to cutting off Mac whenever he tries to speak. Magic—Exchanging an average 50k Word document in a file to Mac, PC makes out that the process is much harder than it actually is through the use of a drum roll and a magician's assistant, and shouting "Amazing!" at the end of the transfer. Bemused, Mac points out that he is compatible with PC and effortlessly passes him back a photo, at the end of which PC shouts "Amazing!". Naughty Step—PC unveils his naughty step: the ultimate deterrent to an unruly errant child (similar to the technique used by Jo Frost in the UK and US series Supernanny). He goes on to explain that children should not be making pictures, movies, and websites on a "proper, grown-up PC". Mac points out that these are activities children like to do, resulting in his own banishment to the naughty step. Office at Home—PC is proud of his role in both the office and the home, but Mac retaliates by stating that homes are not run like offices, and thus shouldn't have office computers. PC eagerly begins to describe the ways in which homes can be run like offices, with his increasing authoritarianism prompting Mac to sarcastically comment that PC's home "sounds like a fun place". Office Posse—PC wonders why Microsoft Office (Excel, PowerPoint, Word and Entourage) are standing with Mac and is surprised when Mac says that he runs Office also. PC attempts to order and then entice the Office members to join him, but they refuse, resulting in what Mac calls an awkward moment. Tentacle—PC praises Britain's work ethic, chastising Mac's insistence on the need for fun in life. In attempting to persuade Mac of his point of view, PC employs the use of several animal metaphors, but becomes sidetracked through his increasingly eager musing about the practical applications of octopus tentacles in an office. Several American ads were modified for the UK market. In some of these ads, the events that occur in the narrative differ significantly from the original American campaign. Others follow the original ads more closely, with only minor differences (many based on the differences in characterization from the actors involved or language differences between American English and British English). These ads are also performed by Mitchell and Webb. The adapted ads are Accident—The ad follows the same narrative, with a different ending: PC, on heavy pain medication, requests to be pushed over to the window so he can look at the pigeons, only for Mac to point out that there are no pigeons nor a window. PC responds with a dreamy "You're funny...". Network—The ad follows the same narrative, but in the British version Mac connects with a Japanese printer instead of a digital camera. 
PC is also more involved in the dialogue, attempting to communicate in Japanese with the printer, only to mangle his words, first declaring that he is a rice cake before asking, "Where is the train station?". This larger involvement of PC, when compared to PC in the American ad, is also shown by the appearance of subtitles whenever PC, Mac, or the printer speak in Japanese; in the American ad, there are no subtitles translating Mac and the camera's dialogue, further evidencing that PC is lost in the conversation. Out of the Box—The ad is almost exactly the same as the American version. However, Mac doesn't mention his built-in camera. Also, at the end, PC pulls out an extremely thick user manual and starts reading it. Pie Chart—The ad is based on the American Work vs. Home. The light-grey area of PC's family holiday pie chart now represents "shenanigans and tomfoolery" and the dark-grey area represents "hijinks". Also, PC further divides hijinks into "capers", "monkey business", and "just larking about". Restarting—The ad follows much the same narrative as the American ad, with the only major difference being that, after Mac has left to get someone from IT, PC awakens and wonders where everyone has gone. Stuffed—This ad contains no significant changes from the American version. Trust Mac—The ad follows the same narrative as the American version, but at the end, PC yells out that there is nobody present but two Macs having fun. Virus—Based on the American ad Viruses, it contains the dialogue "This one's a humdinger" instead of "a doozy" but otherwise contains no significant changes. Japanese campaign On December 12, 2006, Apple began to release ads in Japan that were similar in style to the US Get a Mac ads. The Mac and PC are played by the Rahmens, a Japanese comedy duo. The ads used to be viewable at Apple's Japan website. The following ads are exclusive to Japan: Nengajo—Mac shows PC the New Year's Card he made using iPhoto. PC then looks at it, remarking about the picture of the wild boar on the card. Nicknames—PC is confused as to why Mac is not called a PC. Mac then explains that more people use him at home, and PC counters that he is more business-oriented. PC then asks for a nickname for himself; Mac then names him Wāku (work). Practice Drawing—PC says he can create pictures, but they are all graphs. For example, what Mac thinks is Manhattan is a bar graph and what Mac thinks is a mountain view is a line graph. Mac catches on, correctly identifying a pie chart, but PC responds that it is a pizza, chiding Mac for having no artistic sense. This is similar to Art Language, in that PC is trying to connect with artsy people like Mac. Steps—Mac tells PC that he has made his own webpage using iWeb. PC then asks for the steps to make his own. Mac gives them, finishing after step three. PC then pesters Mac for step four, which Mac finally explains is to have a cup of coffee. Several American ads were modified for the Japanese market. In some of these ads, the events that occur in the narrative differ significantly from the original American campaign. Others follow the original ads more closely, with only minor differences (many based on the differences in characterization from the actors involved). The adapted ads are Bloated—This ad is similar to Stuffed, but in this ad, PC makes no reference to bloatware (limited or useless versions of programs loaded onto new PCs), instead complaining about how much space installing a new operating system takes. 
Mac expresses his hopes that PC didn't have to delete any important data. iLife—This ad is almost exactly the same as the American version, except that PC is listening to Eurobeat on his iPod rather than slow jams, and Mac gives a pregnant pause instead of complimenting PC on his pre-loaded calculator and clock. iMovie—This ad with Miki Nakatani, is nearly identical to the American ad Better Results, except that PC actually thinks that his home movie is comparable to the Mac home movie. Microsoft Office—Based on the UK ad Office Posse, the ad contains only minor differences. At the end of the ad, PC tries to entice Office by chanting, "Overtime! Overtime! All together now!" Pie Chart—This ad is based on the American ad Work vs. Home. The narrative is largely the same, with the only significant differences being that Mac is blogging rather than working with movies, music, and podcasts, and the names of the divisions of the pie chart each represent Sightseeing and Relaxing at a Café. Restart—This ad is identical to the American ad Restarting, except that PC doesn't restart again after Mac goes off to get IT. Security—This ad is based on the American ad Trust Mac, but contains some significant changes. Rather than disguising himself to hide from viruses, PC dons protective gear to fight viruses. PC demands that any virus out there come and fight him. After Mac points out a virus, PC slowly moves behind Mac to protect himself. Virus—The ad contains no significant changes from the American ad Viruses. Keynote videos While not strictly a part of the ad campaign, Hodgman and Long appeared in videos during Steve Jobs's keynote addresses at the 2006, 2007, and 2009 Worldwide Developers Conference and the 2008 MacWorld Expo. Hodgman alone appeared in the November 2020 Apple Event. WWDC 2006—In an attempt to stall Mac development, PC claims to have a message from Steve Jobs that says that the developers should take the rest of the year off, and that Microsoft could use some help with Vista. He starts to go off-topic about his vacation with Jobs, but when Mac arrives he says he's just preparing for their next commercial and starts to sing the Meow Mix theme song off-key. WWDC 2007—PC dresses up as Steve Jobs, and announces that he is quitting and shutting down Apple. He claims that Vista did so well, selling tens of dozens of copies, that there's no need for Leopard, and that he got his iPod-killer, a brown Zune. He tells the developers to just go home because they're no longer needed. Mac arrives and chides PC for trying to mislead the developers again like last year. He asks if PC really thinks the audience will believe he is Jobs. PC then claims he is Phil Schiller. MacWorld Expo 2008—PC and Mac stand under a Happy New Year sign, and PC talks about what a terrible year 2007 has been for him, referring to Windows Vista as a failure while Apple Inc. experienced success with Mac OS X Leopard, iPod Touch, and iPhone. Despite this, PC says he is optimistic for the future, claiming it to be the Year of the PC. When asked what his plans are for 2008, PC states he is "just going to copy everything [Mac] did in 2007." WWDC 2009—PC comes out and greets the crowd and says that he wants them to have a great conference with "incredible innovations that will keep Apple at the forefront..." He stops, then says, "I think I can do that better." Now it's take 2. He wishes them a "week with some innovation, but not a lot, please. Yeah, I like that." Then he says some stuff about the 1 Billion App Countdown. 
He asks for apps and ideas. He says, "I hope you're thinking of some great ideas because I'm thinking of some great ideas too!...What are your ideas?" Eventually at Take 16, PC gives up and Mac tells everyone to have a great conference. Apple Event November 2020—PC criticizes the upgrades made to the MacBook Air earlier in the event. Effectiveness Before the campaign's launch, Apple had seen lower sales in 2005–06. One month after the start of the "Get a Mac" campaign, Apple saw an increase of 200,000 Macs sold, and at the end of July 2006, Apple announced that it had sold 1.3 million Macs. Apple had an overall increase in sales of 39% for the fiscal year ending September 2006. Criticism In an article for Slate magazine, Seth Stevenson criticized the campaign as being too "mean spirited", suggesting, "isn't smug superiority (no matter how affable and casually dressed) a bit off-putting as a brand strategy?". Writing in The Guardian, Charlie Brooker criticized the casting of comedians Mitchell and Webb in the UK campaign, noting that in the sitcom they were then starring in together, Peep Show, "Mitchell plays a repressed, neurotic underdog, and Webb plays a selfish, self-regarding poseur... So when you see the ads, you think, 'PCs are a bit rubbish yet ultimately lovable, whereas Macs are just smug, preening tossers.'" PC Magazine Editor in Chief Lance Ulanoff criticized the campaign's use of the term "PC" to refer specifically to IBM PC-compatible, or Wintel, computers, noting that this usage, though common, is incorrect, as the Macintosh is also a personal computer. In a 2008 column, he recommended that the characters instead introduce themselves as "a Mac PC" and "a Windows PC", adding, "Of course, the ads would then be far less effective, because consumers might realize that the differences Apple is trying to tout aren't quite as huge as Apple would like you to believe." I'm a PC Microsoft responded to the Get a Mac advertising campaign in late 2008 by releasing the I'm a PC campaign, featuring Microsoft employee Sean Siler as a John Hodgman look-alike. While Apple's ads show personifications of both Mac and PC systems, the Microsoft ads show PC users instead proudly defining themselves as PCs. Justin Gets Real In the wake of the Mac transition to Apple silicon, in March 2021, Intel made a similar advertising campaign, known as Justin Gets Real, featuring Justin Long as himself promoting Intel PCs over Macs. These commercials typically start with Long stating, "Hello, I'm a..." against the familiar plain white background before he suddenly says, "Justin, just a real person doing a real comparison between Mac and PC." He is then seen interacting with the computers in a realistic setting and/or with others using them. In popular culture/parodies Videos parodying the Get a Mac campaign have been published online by Novell, to promote Linux, represented by a young and fashionable woman. A different set of videos parodying the campaign has been produced, but with Linux portrayed as a typical male nerd. Cingular Wireless has its own commercial with the same structure as the Mac ads, but comparing Cingular Wireless to Verizon Wireless. To promote Steam on Mac, Valve made a parody with Portal and Team Fortress 2 sentry guns. After the 2007–2008 Writers Guild of America strike, the cast and crew of the American television show Numb3rs decided to parody the "Get a Mac" commercials to promote the return of the show on Friday, April 4, 2008. In the ad, brothers Don Eppes (Rob Morrow) and Dr. 
Charlie Eppes (David Krumholtz) debate the merits of being an FBI agent versus being a mathematician. The cast and crew used two hours of production time to film the 34-second ad. The Get a Mac campaign became the basis for the long-running YouTube series Hi, I'm a Marvel...and I'm a DC by ItsJustSomeRandomGuy. The series took the classic superhero characters from Marvel Comics and DC Comics and compared their film adaptations. In this case, Marvel was constantly touted as superior for having more successful film adaptations of its characters than DC, which has notoriously had fewer adaptations, many of them critically or commercially panned. Late Show with David Letterman made parodies of the Get a Mac campaign, from Mac's wig being taken off by PC to reveal baldness, to Mac as David Hasselhoff eating a cheeseburger drunk. An episode of Air Farce Live, aired around the time of the Canadian federal election, had a sketch where one of the comedians was introduced as a Liberal, and the other as a PC (Progressive Conservative Party of Canada). The sketch was split into separate parts during the episode. City of Heroes offered a series of online video parodies with a commercial featuring dialog centered around two machinima characters. They all start the same: one proclaiming "I'm a hero" and the other proclaiming "I'm a villain." The videos were made to promote the new Mac edition of the game for OS X computers, released in February 2009. Instant Star and Degrassi: The Next Generation were in a parody where they would describe their own shows. Alexz Johnson portrayed Instant Star (Johnson portrays Jude Harrison in the show) and Miriam McDonald portrayed Degrassi (McDonald portrays Emma Nelson in the show). SuperNews! made two shorts based on the "Get a Mac" ads, which feature Bill Gates and Steve Jobs fighting each other. Before all videos were removed from Current's YouTube Channel by Al Jazeera Media Network, the first one was the most viewed video on that channel, with over 3,000,000 views. A Funny or Die promo video for the release of John Hodgman's book That Is All includes a segment in which Hodgman walks through a 'void' room in his deranged millionaire mansion. Justin Long sits alone in the white open space from the Get a Mac ads, happy to see Hodgman again and eager to make another commercial. T-Mobile also created its own version in 2011, featuring Carly Foulkes. The advertisements followed the same structure as the Mac ads, but compared T-Mobile to AT&T. In one episode of Element Animation's The Crack!, two eggs make a parody of the ad, with one of them ruining it every time. See also Apple Switch ad campaign Apple evangelist Cola Wars Comparative advertising Apple Inc. advertising References Apple Inc. advertising American television commercials Advertising campaigns 2006 introductions American advertising slogans 2006 quotations 2000s television commercials Computing comparisons
Get a Mac
Technology
11,575
3,173,103
https://en.wikipedia.org/wiki/Wrongful%20life
Wrongful life is the name given to a cause of action in which someone is sued by a severely disabled child (through the child's legal guardian) for failing to prevent the child's birth. Typically, a child and the child's parents will sue a doctor or a hospital for failing to provide information about the disability during the pregnancy, or a genetic disposition before the pregnancy. Had the mother been aware of this information, it is argued, she would have had an abortion, or chosen not to conceive at all. The term "wrongful life" is also sometimes applied to what are more accurately described as wrongful living claims alleging that doctors or hospitals failed to follow a patient's end-of-life directive (for example, a MOLST or POLST) and kept the patient alive longer than preferred, thereby causing unnecessary and unwanted suffering. However, the confusion between the two is understandable and readily explained. Although wrongful life and wrongful living claims arise at opposite ends of the human lifespan, they are related in the sense that both types of claims seek the same relief: a judgment awarding monetary damages for "unwanted life." History Historically, only parents could sue for their own damages incurred as a result of the birth of a disabled child (e.g., the mother's own pregnancy medical bills and cost of psychiatric treatment for both parents' emotional distress resulting from the realization that their child was disabled). This cause of action is known as wrongful birth. But the child could not sue for his or her own damages, which were often much more substantial, in terms of the cost of round-the-clock personal care and special education. In four U.S. states—California, Maine, New Jersey, and Washington—the child is allowed to bring a wrongful life cause of action for such damages. In a 1982 case involving hereditary deafness, the Supreme Court of California was the first state supreme court to endorse the child's right to sue for wrongful life, but in the same decision, limited the child's recovery to special damages. This rule implies that the child can recover objectively provable economic damages, but cannot recover general damages like subjective "pain and suffering"—that is, monetary compensation for the entire experience of having a disabled life versus having a healthy mind and/or body. The Supreme Court of California's 1982 decision, in turn, was based on the landmark California Court of Appeal decision in Curlender v. Bio-Science Laboratories (1980). The Curlender decision involved a child who was allegedly born with Tay–Sachs disease after the parents relied upon the defendants' representations about the reliability of their genetic tests in refraining from proceeding with amniocentesis. The most famous passage from the Curlender opinion is as follows: Curlender was not the first appellate decision to authorize a cause of action for wrongful life—it noted that a 1977 decision of the intermediate appellate court of New York had taken the same position, and was promptly overruled by the highest court of that state a year later. However, Curlender stands as the first such appellate decision which was not later overruled. Most other jurisdictions, including all U.S. states except California, Maine, New Jersey, and Washington, England and Wales, Ontario, and Australia, have refused to allow the wrongful life cause of action. In Germany, the Federal Constitutional Court declared wrongful life claims unconstitutional. 
The court reasoned that such a claim implies that the life of a disabled person is less valuable than that of a non-disabled one; claiming damages for one's life as such therefore violates the human dignity principle codified in the first article of the German Basic Law. Nevertheless, the German Federal Court of Justice kept to its previous practice of awarding affected families compensation in the form of the child's living expenses. It emphasized that the damages claimed referred not to the child's existence itself but to the parents' economic obligation to pay maintenance. The Constitutional Court finally upheld this practice in 1998, stating that the distinction between the child's existence and the parents' obligation to pay maintenance as the relevant damage did not offend the Basic Law, because recognition of the child as a person under Art. 1 I GG does not rest on whether the parents undertake that obligation. In 2005, the Dutch Supreme Court fully upheld a wrongful life claim in the Netherlands' first wrongful life case. Ethics Since wrongful life suits are a relatively new application of human rights, doctors and scholars have not come to a consensus regarding their place in medical ethics. Others have objected to wrongful life claims on conceptual grounds, including the question of whether there exist rights and duties with regard to non-existent persons. See also Wrongful abortion Bioethics References Further reading Belsky, Alan J., Injury as a Matter of Law: Is This the Answer to the Wrongful Life Dilemma?, 22 U. Balt. L. Rev. 185 (1993). External links Sydney Morning Herald article from May 2004 Article on wrongful life in the Cornell Law Review. Abortion Abortion law Bioethics Tort law Medical malpractice Health law
Wrongful life
Technology
1,072
22,585,408
https://en.wikipedia.org/wiki/Griggsia
Griggsia is a genus of fungi in the class Dothideomycetes. The relationship of this taxon to other taxa within the class is unknown (incertae sedis); that is, its placement within the Dothideomycetes is uncertain. The genus name honours the American botanist Robert Fiske Griggs (1881–1962). The genus was circumscribed by Frank Lincoln Stevens and Nora Elizabeth Dalbey in Bot. Gaz. vol. 68 on page 224 in 1919. A monotypic genus, it contains the single species Griggsia cyathea. See also List of Dothideomycetes genera incertae sedis References Enigmatic Dothideomycetes taxa Monotypic Dothideomycetes genera Fungus species
Griggsia
Biology
169
33,441,387
https://en.wikipedia.org/wiki/Mobile%20application%20management
Mobile application management (MAM) describes the software and services responsible for provisioning and controlling access to internally developed and commercially available mobile apps used in business settings, on both company-provided and 'bring your own' mobile operating systems as used on smartphones and tablet computers. Mobile application management provides granular controls at the application level which enable system administrators to manage and secure application or 'app' data. MAM differs from mobile device management (MDM), which focuses on controlling the entire device, and requires that users enroll or register their device, and install a service agent. While some enterprise mobility management (EMM) suites include a MAM function, their capabilities may be limited in comparison to stand-alone MAM solutions, because EMM suites require a device management profile in order to enable app management capabilities. History Enterprise mobile application management has been driven by the widespread adoption and use of mobile applications in business settings. In 2010, the International Data Corporation (IDC) reported that smartphone use in the workplace will double between 2009 and 2014. The 'bring your own device' (BYOD) phenomenon is a factor behind mobile application management, with personal PC, smartphone, and tablet use in business settings, vs. business-owned devices, rising from 31 per cent in 2010 to 41 per cent in 2011. When an employee brings a personal device into an enterprise setting, mobile application management enables the corporate information technology (IT) staff to download required applications, control access to business data, and remove locally cached business data from the device if it is lost or stolen, or when its owner no longer works with the company. Use of mobile devices in the workplace is also being driven from above. According to Forrester Research, businesses now see mobile as an opportunity to drive innovation across a wide range of business processes. Forrester issued a forecast in August 2011 predicting that the "mobile management services market" would reach $6.6 billion by 2015 – a 69 per cent increase over a previous forecast issued six months earlier. Citing the plethora of mobile devices in the enterprise – and a growing demand for mobile apps from employees, line-of-business decision-makers, and customers – the report states that organizations are broadening their "mobility strategy" beyond mobile device management to "managing a growing number of mobile applications". The advent of Internet of Things (IoT) has been changing lives for the better. It is not limited to homes but, has a pivotal role connecting teams, devices and decisions. With more connected devices, there is a boost in data generation, data analysis and reporting. App wrapping App wrapping was initially a favoured method of applying policy to applications as part of mobile application management solutions. App wrapping sets up a dynamic library, and adds to an existing binary that controls certain aspects of an application. For instance, at start-up, you can change an app so that it requires authentication using a local passkey. Or you could intercept a communication so that it would be forced to use your company's virtual private network (VPN), or prevent that communication from reaching a particular application that holds sensitive data. 
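The policy-interposition idea behind app wrapping can be sketched in a few lines of code. The snippet below is purely illustrative: the policy keys, function names and decorator are hypothetical and do not correspond to any vendor's MAM SDK; they only show the general pattern of injecting a check between a user action and the original application code.

# Illustrative sketch only: the policy names, values, and functions below are
# hypothetical and do not belong to any real MAM SDK. They show the pattern that
# app wrapping relies on: a policy check interposed before the original app code.

POLICY = {
    "require_passcode": True,   # force local authentication at start-up
    "allow_clipboard": False,   # block copy/paste of corporate data
    "force_vpn": True,          # route traffic through the corporate VPN
}

def authenticate_user():
    print("prompting for local passkey...")   # stand-in for a real passcode prompt

def wrapped(action):
    """Decorator standing in for the code a wrapping SDK injects around an app call."""
    def decorator(func):
        def guard(*args, **kwargs):
            if action == "launch" and POLICY["require_passcode"]:
                authenticate_user()
            if action == "copy" and not POLICY["allow_clipboard"]:
                raise PermissionError("clipboard blocked by corporate policy")
            if action == "network" and POLICY["force_vpn"]:
                kwargs["route"] = "corporate_vpn"   # redirect the connection
            return func(*args, **kwargs)
        return guard
    return decorator

@wrapped("copy")
def copy_to_clipboard(text):
    print(f"copied: {text}")

@wrapped("network")
def send_report(payload, route="direct"):
    print(f"sending {payload!r} via {route}")

In a real deployment the equivalent checks are injected into the app binary by the wrapping SDK rather than expressed as Python decorators.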
Typically, application wrapping is performed with an SDK from an application or EMM vendor that allows a developer or administrator to deploy an API through which management policies can be set up. For instance, an app wrapping API would allow an administrator to control who can download a mobile application and whether corporate data accessed by that application can be copied and pasted. App wrapping can be applied during in-house software development, or after the fact to off-the-shelf software purchases, simply by adding executable code through the SDK. One of the most widely used mobile application platforms, Microsoft's Office 365, also presents its own particular set of management issues. In the past, Office 365 did not allow app management through third-party EMM tools; that functionality was available only through Microsoft's own Intune cloud-based management service. Increasingly, the likes of Apple and Samsung are overcoming the issue of app wrapping. Aside from the fact that app wrapping is a legal grey area, and may not meet its actual aims, it is not possible to adapt the entire operating system to deal with numerous wrapped apps. In general, wrapped apps available in the app stores have also not proven to be successful due to their inability to perform without MDM. System features An end-to-end mobile application management solution provides the ability to: control the provisioning, updating, and removal of mobile applications via an enterprise app store, monitor application performance and usage, and remotely wipe data from managed applications. Core features of mobile application management systems include: App configuration App delivery (Enterprise App Store) App performance monitoring App updating App version management App wrapping Crash log reporting Event management Push services Reporting and tracking Usage analytics User & group access control User authentication See also Over-the-air programming Mobile security List of Mobile Device Management software References Mobile telecommunication services Application Management
Mobile application management
Technology
1,006
32,848,139
https://en.wikipedia.org/wiki/Design%20Triangle
Design Triangle was a transport design firm, based in Cambridge, UK. Founded in 1986, Design Triangle specialized in the design of the interiors and exteriors of vehicles (and vessels) for public transport, particularly rail passenger cars. The company was dissolved in 2017. Rail Vehicles Design Triangle was best known for the concept design of the interior and exterior of the Heathrow Express Class 332 trains and the exterior design of the MTR Hong Kong Airport Express. Other published work included the design of the Iarnród Éireann Inter-City Mark 4 trains and the interior concept design for the Class 378 rail cars for London Overground. Design Triangle also redesigned the Swiss Railways' range of vehicle interiors. According to Jane's World Railways, other projects included interior and exterior concept designs for Rotterdam Metro, STIB Brussels Tramway 2000, Connex Melbourne X'Trapolis 100 trains, Spoornet 9E loco driver's cab refurbishment, design of the exterior and driver's cab for Hong Kong's MTR Airport Express train, passenger flow studies for DLR Docklands Light Railway, interior design for BAE Systems on Kawasaki's MARC III Bi-Level coaches, interior design concepts for Madrid Metro and rail seat prototypes for KAB Seating. Marine Design Triangle created industrial design concepts for the exteriors and interiors of fast ferries and water taxis for Damen Shipyards, including the Dubai Water Taxi and the Damen DFF 3207 low wash catamaran for Dubai. Services Design Triangle designed the form, aesthetics and layout of vehicles, including interiors, exteriors and driver's cabs, from industrial design concepts to 3D CAD models and manufacturing drawings. Services were provided to railway companies, rail vehicle manufacturers and component manufacturers. Award At the Design Business Association's Design Effectiveness Awards in 2000, Design Triangle won the Grand Prix and Design Management awards for the design of the Class 332 trains, as a collaborative team with BAA Heathrow Express, Wolff Olins and Glazer. References External links Design Triangle - rail vehicles Design companies of the United Kingdom Industrial design firms Vehicle design Companies established in 1986 Companies disestablished in 2017 1986 establishments in the United Kingdom 2017 disestablishments in the United Kingdom
Design Triangle
Engineering
447
41,139,312
https://en.wikipedia.org/wiki/GIT%20quotient
In algebraic geometry, an affine GIT quotient, or affine geometric invariant theory quotient, of an affine scheme with an action by a group scheme G is the affine scheme , the prime spectrum of the ring of invariants of A, and is denoted by . A GIT quotient is a categorical quotient: any invariant morphism uniquely factors through it. Taking Proj (of a graded ring) instead of , one obtains a projective GIT quotient (which is a quotient of the set of semistable points.) A GIT quotient is a categorical quotient of the locus of semistable points; i.e., "the" quotient of the semistable locus. Since the categorical quotient is unique, if there is a geometric quotient, then the two notions coincide: for example, one has for an algebraic group G over a field k and closed subgroup H. If X is a complex smooth projective variety and if G is a reductive complex Lie group, then the GIT quotient of X by G is homeomorphic to the symplectic quotient of X by a maximal compact subgroup of G (Kempf–Ness theorem). Construction of a GIT quotient Let G be a reductive group acting on a quasi-projective scheme X over a field and L a linearized ample line bundle on X. Let be the section ring. By definition, the semistable locus is the complement of the zero set in X; in other words, it is the union of all open subsets for global sections s of , n large. By ampleness, each is affine; say and so we can form the affine GIT quotient Note that is of finite type by Hilbert's theorem on the ring of invariants. By universal property of categorical quotients, these affine quotients glue and result in which is the GIT quotient of X with respect to L. Note that if X is projective; i.e., it is the Proj of R, then the quotient is given simply as the Proj of the ring of invariants . The most interesting case is when the stable locus is nonempty; is the open set of semistable points that have finite stabilizers and orbits that are closed in . In such a case, the GIT quotient restricts to which has the property: every fiber is an orbit. That is to say, is a genuine quotient (i.e., geometric quotient) and one writes . Because of this, when is nonempty, the GIT quotient is often referred to as a "compactification" of a geometric quotient of an open subset of X. A difficult and seemingly open question is: which geometric quotient arises in the above GIT fashion? The question is of a great interest since the GIT approach produces an explicit quotient, as opposed to an abstract quotient, which is hard to compute. One known partial answer to this question is the following: let be a locally factorial algebraic variety (for example, a smooth variety) with an action of . Suppose there are an open subset as well as a geometric quotient such that (1) is an affine morphism and (2) is quasi-projective. Then for some linearlized line bundle L on X. (An analogous question is to determine which subring is the ring of invariants in some manner.) Examples Finite group action by A simple example of a GIT quotient is given by the -action on sending Notice that the monomials generate the ring . Hence we can write the ring of invariants as Scheme theoretically, we get the morphism which is a singular subvariety of with isolated singularity at . This can be checked using the differentials, which are hence the only point where the differential and the polynomial both vanish is at the origin. The quotient obtained is a conical surface with an ordinary double point at the origin. Torus action on plane Consider the torus action of on by . 
Note this action, $t \cdot (x, y) = (tx, t^{-1}y)$ for $t \in \mathbb{C}^*$, has a few orbits: the origin $(0, 0)$, the punctured axes $\{(x, 0) : x \neq 0\}$ and $\{(0, y) : y \neq 0\}$, and the affine conics given by $xy = c$ for some $c \in \mathbb{C}^*$. Then, the GIT quotient has structure sheaf with global sections $\mathbb{C}[x, y]^{\mathbb{C}^*}$, which is the subring of polynomials $\mathbb{C}[xy]$, hence it is isomorphic to the polynomial ring $\mathbb{C}[z]$. This gives the GIT quotient $\mathbb{A}^2 /\!\!/ \mathbb{C}^* = \operatorname{Spec}(\mathbb{C}[xy]) \cong \mathbb{A}^1$. Notice the inverse image of the point $0 \in \mathbb{A}^1$ is given by the orbits $\{(0, 0)\}$, $\{(x, 0) : x \neq 0\}$ and $\{(0, y) : y \neq 0\}$, showing the GIT quotient isn't necessarily an orbit space. If it were, there would be three origins, a non-separated space. See also quotient stack character variety Chow quotient Notes References Pedagogical References Algebraic geometry
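Several displayed formulas in this article did not survive in the text above. As a reference, the standard definitions can be restated in LaTeX; the notation below follows the usual conventions of geometric invariant theory and is a reconstruction, not necessarily the exact notation of the original article.

% Affine GIT quotient of X = Spec A by a reductive group G acting on A:
\[
  X /\!\!/ G \;=\; \operatorname{Spec}\!\left(A^{G}\right).
\]
% For a linearized ample line bundle L on X, with section ring
\[
  R \;=\; \bigoplus_{n \ge 0} \Gamma\!\left(X, L^{\otimes n}\right),
\]
% the projective GIT quotient, a categorical quotient of the semistable locus X^{ss}, is
\[
  X /\!\!/_{L}\, G \;=\; \operatorname{Proj}\!\left(R^{G}\right).
\]
% Worked instance (the torus action discussed above):
\[
  \mathbb{A}^{2} /\!\!/ \mathbb{C}^{*} \;=\; \operatorname{Spec}\!\left(\mathbb{C}[x, y]^{\mathbb{C}^{*}}\right) \;=\; \operatorname{Spec}\!\left(\mathbb{C}[xy]\right) \;\cong\; \mathbb{A}^{1}.
\]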
GIT quotient
Mathematics
1,002
2,530,126
https://en.wikipedia.org/wiki/Turbary
Turbary is the ancient right to cut turf, or peat, for fuel on a particular area of bog. The word may also be used to describe the associated piece of bog or peatland and, by extension, the material extracted from the turbary. Turbary rights, which are more fully expressed legally as common of turbary, are often associated with commonage, or, in some cases, rights over another person's land. Turbary was not always an unpaid right (easement), but, at least in Ireland, regulations governed the price that could be charged. Turf was widely used as fuel for cooking and domestic heating but also for commercial purposes such as evaporating brine to produce salt. The right to take peat was particularly important in areas where firewood was scarce. The right to collect firewood was protected by estovers. In the New Forest of southern England, a particular right of turbary belongs not to an individual person, dwelling or plot of land, but to a particular hearth and chimney. Ecology In more recent times, as the ecological significance of the bog lands has been better understood, and as the amount of remaining peat has been decreasing, partly due to fuel usage and partly due to usage of peat as fertiliser, as well as agricultural incursions into drained bog lands, some of the remaining bogs have come under environmental protection. This has created controversy over the rights of turbary, and in some cases extinguished the right. Geography Geographic regions of turbary works in Europe include the Netherlands, Ireland, Scotland and Wales, and The Broads in Norfolk and Suffolk, England, and the Audomarois marshlands near Saint-Omer, France The term is also used in colloquial language by older generations in Ireland, in places such as County Clare, to refer to the area where turf is cut, or to the material extracted. Etymology The word is derived from Anglo-French and Low German, . Compare Sanskrit , meaning "tuft of grass". Places Turbary Park in Bournemouth, Dorset has a name derived from the term. References Common law English forest law Legal terminology Peat mining Property law Wetlands
Turbary
Environmental_science
448
34,605,177
https://en.wikipedia.org/wiki/Astronomy%20North
Astronomy North is a Canadian astronomical society focused on the aurora. It collaborates with the Aurora Max project and the Canadian Space Agency. Website The website of Astronomy North provides information for amateur astronomers, who can use it to monitor the aurora, sunspots and observing weather through simple scientific data. See also List of astronomical societies References External links https://web.archive.org/web/20101227070816/http://astronomynorth.com/about-us/ Astronomy organizations Amateur astronomy organizations Learned societies of Canada Astronomy in Canada
Astronomy North
Astronomy
115
45,421,211
https://en.wikipedia.org/wiki/Backpacking%20with%20animals
Pack animals, such as the horse, llama, goat, dog, and donkey, are sometimes used to help carry the weight of a backpacker's gear during an excursion. These animals need special considerations when accompanying backpackers on a trip. Some areas restrict the use of horses and other pack animals. For example, Great Basin National Park does not allow domestic animals at all in backcountry areas. Like their human counterparts, pack animals require special backpacking gear, such as a variety of leads, harnesses, and panniers or packs. Dog packs are widely available in outdoor sporting goods stores. Predators can be attracted to pack animals, so caution is necessary when bringing domesticated animals into backcountry areas. Some trails have permanent corrals that specifically cater to large pack animals. History Horse Packhorses have been used since the earliest period of horse domestication. They were invaluable throughout antiquity, through the Middle Ages, and into modern times, used wherever roads were nonexistent or poorly maintained. They were heavily used in the transport of goods in England in the period up until the coming of the first turnpike roads and canals in the 18th century. Away from main routes, their use persisted into the 19th century. This usage has left a legacy of old paths across wilderness areas called packhorse roads, and distinctive narrow, low-sided stone arched packhorse bridges at various locations. The packhorse, mule or donkey was a critical tool in the development of the Americas. In colonial America, Spanish, French, Dutch and English traders made use of pack horses to carry goods to remote Native Americans and to carry hides back to colonial market centers. They had little choice: the Americas had virtually no improved waterways before the 1820s, and in the era before the railroad and automobile, roads were improved only locally around a municipality, and only rarely in between. Mules Mules are still used extensively to transport cargo in rugged roadless regions, such as the large wilderness areas of California's Sierra Nevada mountains. Commercial pack mules are used recreationally, such as to supply mountaineering base camps, and also to supply trail building and maintenance crews, and backcountry footbridge building crews. As of July 2014, there were at least sixteen commercial mule pack stations in business in the Sierra Nevada. The Angeles chapter of the Sierra Club has a Mule Pack Section that organizes hiking trips with supplies carried by mules. Dogs Dogs tend to show admirable hill-climbing ability and can carry a few kilos (several pounds) of gear (their own dry food and other items) when among a backpacking party. However, few dogs will be able to traverse the roughest off-trail terrain that their human backpacking companions will cross with little trouble. For example, cross-country travel through fields of 1-meter (3-foot) boulders or dense 3/4-meter-tall (2-foot) brush may cause a dog to balk or halt entirely. Such balking may be especially pronounced when one or more of these factors is present: small body size, puppyhood or age greater than a few years, obesity, and a dog pack weight of greater than a few kilos or pounds. A steep descent will cause a dog much more hesitation than it will a backpacking human. Restricting travel to well-maintained trails, therefore, may be needed. Attention to a dog's paw condition is important. For example, hidden adhesions of pine pitch between the toes may cause balking or limping even when nothing else is wrong.
Otherwise, dogs will need few other special arrangements while backpacking. As experienced owners of large dogs of the working and sporting breeds can attest, a dog in a backpacking party needs comparatively little in terms of insulation, shelter, and bedding. Their food need only consist of some combination of human food scraps, fish scraps, and their own carried dry dog food. See also Camping Mountain guide Outfitter Pack goat Pack saddle Pack station Trail riding Working animal References External links American Hiking Society Preserves and protects hiking trails and the hiking experience Leave No Trace - The Leave No Trace Center for Outdoor Ethics is an educational, nonprofit organization dedicated to the responsible enjoyment and active stewardship of the outdoors by all people, worldwide. Animal equipment Hiking equipment Saddles
Backpacking with animals
Biology
870
43,377,296
https://en.wikipedia.org/wiki/%CE%A9-bounded%20space
In mathematics, an ω-bounded space is a topological space in which the closure of every countable subset is compact. More generally, if P is some property of subspaces, then a P-bounded space is one in which every subspace with property P has compact closure. Every compact space is ω-bounded, and every ω-bounded space is countably compact. The long line is ω-bounded but not compact. The bagpipe theorem describes the ω-bounded surfaces. References Properties of topological spaces
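Restating the definition and the implications above in symbols (standard notation, supplied only as a compact summary of what the text already says):

% X is omega-bounded when every countable subset has compact closure:
\[
  X \text{ is } \omega\text{-bounded} \;\iff\; \overline{A} \text{ is compact for every countable } A \subseteq X.
\]
% The implications mentioned in the text:
\[
  \text{compact} \;\Longrightarrow\; \omega\text{-bounded} \;\Longrightarrow\; \text{countably compact},
\]
% with the long line showing that the first implication cannot be reversed.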
Ω-bounded space
Mathematics
105
22,656,459
https://en.wikipedia.org/wiki/Trifluoromethanesulfonyl%20azide
Trifluoromethanesulfonyl azide or triflyl azide is an organic azide used as a reagent in organic synthesis. Preparation Trifluoromethanesulfonyl azide is prepared by treating trifluoromethanesulfonic anhydride with sodium azide, traditionally in dichloromethane. However, the use of dichloromethane is avoided since it can generate highly explosive azido-chloromethane and diazidomethane. The reaction may also instead be conducted in toluene, acetonitrile, or pyridine. Tf2O + NaN3 → TfN3 + TfONa (Tf = CF3SO2) An alternative route starts from imidazole-1-sulfonyl azide. Reactions Trifluoromethanesulfonyl azide generally converts amines to azides. See also Tosyl azide Diphenylphosphoryl azide Sulfuryl diazide References Azido compounds Triflyl compounds Explosive chemicals
Trifluoromethanesulfonyl azide
Chemistry
212
29,493,295
https://en.wikipedia.org/wiki/Wave%E2%80%93current%20interaction
In fluid dynamics, wave–current interaction is the interaction between surface gravity waves and a mean flow. The interaction implies an exchange of energy, so after the start of the interaction both the waves and the mean flow are affected. For depth-integrated and phase-averaged flows, the quantity of primary importance for the dynamics of the interaction is the wave radiation stress tensor. Wave–current interaction is also one of the possible mechanisms for the occurrence of rogue waves, such as in the Agulhas Current. When a wave group encounters an opposing current, the waves in the group may pile up on top of each other which will propagate into a rogue wave. Classification identifies five major sub-classes within wave–current interaction: interaction of waves with a large-scale current field, with slow – as compared to the wavelength – two-dimensional horizontal variations of the current fields; interaction of waves with small-scale current changes (in contrast with the case above), where the horizontal current varies suddenly, over a length scale comparable with the wavelength; the combined wave–current motion for currents varying (strongly) with depth below the free surface; interaction of waves with turbulence; and interaction of ship waves and currents, such as in the ship's wake. See also generalized Lagrangian mean rip current Footnotes References Physical oceanography Water waves
Wave–current interaction
Physics,Chemistry
271
34,793,304
https://en.wikipedia.org/wiki/Abyss%20Box
The Abyss Box is a vessel containing water at the very high pressure of 18 megapascals, simulating the natural underwater environment of bathyal fauna living at roughly 1,800 m below the surface. It is on display at the Oceanopolis aquarium in Brest, France. It was designed by French researcher Bruce Shillito from Pierre and Marie Curie University in Paris. All the equipment maintaining the extreme pressure inside the Abyss Box weighs . The device keeps deep-dwelling creatures alive so they can be studied, especially regarding their adaptability to warmer ocean temperatures. Currently the Abyss Box houses only common species of deep sea creatures, including a deep sea crab, Bythograea thermydron, and a deep sea prawn, Pandalus borealis, which are among the hardier species with a higher survival rate in depressurized environments. The fauna on display were collected by Victor 6000, a specialised remotely operated vehicle (ROV). See also Abyssal plain Deep sea Deep sea creature Deep ocean water Submarine landslide The Blue Planet Shutdown of thermohaline circulation References External links Deep Sea Foraminifera – Deep Sea Foraminifera from 4400 metres depth, Antarctica - an image gallery and description of hundreds of specimens Deep Ocean Exploration on the Smithsonian Ocean Portal Deep Sea Creatures Facts and images from the deepest parts of the ocean How Deep Is The Ocean Facts and infographic on ocean depth Brest, France Climate change and the environment Effects of climate change Oceanography Deep sea fish Marine biology
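As a rough check on the pressure figure above, the hydrostatic relation links pressure and depth; the numbers below assume a mean seawater density of about 1,025 kg/m³ and are a back-of-the-envelope estimate rather than values taken from the source.

% Hydrostatic estimate of the depth simulated by 18 MPa:
\[
  P \approx \rho g h
  \quad\Longrightarrow\quad
  h \approx \frac{P}{\rho g} = \frac{18 \times 10^{6}\ \text{Pa}}{1025\ \text{kg/m}^{3} \times 9.81\ \text{m/s}^{2}} \approx 1.8 \times 10^{3}\ \text{m},
\]
% i.e. 18 MPa corresponds to a depth on the order of 1,800 m, consistent with the bathyal zone.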
Abyss Box
Physics,Biology,Environmental_science
301
9,768,775
https://en.wikipedia.org/wiki/FlexPro
FlexPro is a proprietary software package for analysis and presentation of scientific and technical data, produced by Weisang GmbH. It runs on Microsoft Windows and is available in English, German, Japanese, Chinese and French. FlexPro has its roots in the test and measurement domain and supports different binary file formats of data acquisition instruments and software. In particular, FlexPro can analyze large amounts of data with high sampling rates. Features FlexPro is a software application for analyzing and presenting data. All data, analyses and presentations are stored in an object database. The structure of the database is similar to a file system on a hard drive. It is possible to build up a hierarchy of folders in FlexPro to organize the analysis. An entire FlexPro database can be stored in one file with size limited only by the hard drive space - not limited by the computer's RAM. The FlexPro user interface is based on Microsoft Office-like Ribbon technology. FlexPro provides wizards to create different 2D and 3D graphs as well as tables for data presentation. Typical graphs featured are line, symbol, scatter, bar, contour, waterfall, surface and polar plots. FlexPro also supports the media object (video format). FlexPro lets you create multi-page reports directly in the FlexPro project database. FlexPro has a built-in programming language, FPScript, which is optimized for data analysis and supports direct operations on non-scalar objects such as vectors and matrices as well as composed data structures like signals, signal series or surfaces. All operations can be executed either graphically (through menus or dialog boxes) or programmatically. Programmatic access is provided through an Automation Object Model and the built-in Microsoft Visual Basic for Applications Development Environment (VBA). Data can be analyzed either graphically using cursors in 2D or 3D graphs or mathematically using analysis objects or FPScript formulas. The underlying algorithm of an analysis object can be parameterized through a property sheet. Raw data, analysis objects and presentation objects like graphs, tables and documents form a dynamic network which can be updated after new data has been imported. FlexPro supports time and frequency domain signal analysis, spectral analysis, order tracking, linear and non-linear curve fitting, descriptive and inductive statistics, event isolation, acoustics, FIR and IIR filtering as well as counting procedures (e.g., Rainflow-counting). FlexPro carries out all calculations not only with numbers, but also with physical quantities composed of a value and unit. In addition to SI units, FlexPro also handles popular non-SI units such as Gaussian units and US units. FlexPro can export publication-quality graphs and reports to a number of file formats, including HTML, JPEG, PNG, and WMF. FlexPro’s Data Explorer indexes data archives on the hard drive or server. Configurable queries can be used to search for characteristic values or other data attributes and to find the data records to be evaluated. It is also possible to set up user-defined calculations during the indexing process which can be used for further data analysis. FlexPro supports various data files for import (standard file formats and binary file formats of data acquisition instruments and software): e.g. 
text and ASCII data (.csv and .txt), Excel workbooks, media files, wave files, ODBC data source, Matlab (.mat), National Instruments (.tdm, .tdms), ASAM ODS and ASAM COMMON MDF4, IMC Famos, NASA-CDF, Dewetron, DEWESoft, Graphtec, Hioki, IMC, Nicolet/Gould, OROS, SEFRAM, Viper, TEAC, Sony, Tektronik, Powermeter, Catman, Caesar, Imtec, Stemmer, Yokogawa, SPSS, LabView, Diadem, TurboLab, Systat, TableCurve. See also List of information graphics software List of numerical analysis software Comparison of numerical analysis software References External links Official website Plotting software Science software for Windows Data and information visualization software Numerical software Regression and curve fitting software
FlexPro
Mathematics
865
562,574
https://en.wikipedia.org/wiki/Cryptococcus
Cryptococcus is a genus of fungi in the family Cryptococcaceae that includes both yeasts and filamentous species. The filamentous, sexual forms or teleomorphs were formerly classified in the genus Filobasidiella, while Cryptococcus was reserved for the yeasts. Most yeast species formerly referred to Cryptococcus have now been placed in different genera. The name Cryptococcus comes from the Greek for "hidden sphere" (literally "hidden berry"). Some Cryptococcus species cause a disease called cryptococcosis. Taxonomy The genus was described by French mycologist Jean Paul Vuillemin in 1901, when he failed to find ascospores characteristic of the genus Saccharomyces in the yeast previously known as Saccharomyces neoformans. Over 300 additional names were subsequently added to the genus, almost all of which were later removed following molecular research based on cladistic analysis of DNA sequences. As a result, some ten species are currently recognized in Cryptococcus. The teleomorph was first described in 1975 by K.J. Kwon-Chung, who obtained cultures of the type species, Filobasidiella neoformans, by crossing strains of the yeast Cryptococcus neoformans. She was able to observe basidia similar to those of the genus Filobasidium, hence the name Filobasidiella for the new genus. Following changes to the International Code of Nomenclature for algae, fungi, and plants, the practice of giving different names to teleomorph and anamorph forms of the same fungus was discontinued, meaning that Filobasidiella became a synonym of the earlier name Cryptococcus. General characteristics The cells of species that produce yeasts are covered in a thin layer of glycoprotein capsular material that has a gelatin-like consistency, and that among other functions, serves to help extract nutrients from the soil. The C. neoformans capsule consists of several polysaccharides, of which the major one is the immunomodulatory polysaccharide called glucuronoxylomannan (GXM). GXM is made up of the monosaccharides glucuronic acid, xylose and mannose and can also contain O-acetyl groups. The capsule functions as the major virulence factor in cryptococcal infection and disease. Some Cryptococcus species have a huge diversity at the infraspecific level with different molecular types based on their genetic differences, mainly due to their geographical distribution, molecular characteristics, and ecological niches. Cryptococcus species are not known to produce distinct, visible fruitbodies. All teleomorph forms appear to be parasites of other fungi. In teleomorphs the hyphae are colourless, are clamped or unclamped, and bear haustorial cells with filaments that attach to the hyphae of host fungi. The basidia are club-shaped and highly elongated. Spores arise in succession from four loci at the apex (which is sometimes partly septate). These spores are passively released and may remain on the basidium in chains, unless disturbed. In the type species, the spores germinate to form yeast cells, but yeast states are not known for all species. Habitat, distribution and species Cryptococcus neoformans is cosmopolitan and is the most prominent medically important species. It is best known for causing a severe form of meningitis and meningoencephalitis in people with HIV/AIDS. It may also infect organ-transplant recipients and people receiving certain cancer treatments. In its yeast state C. 
neoformans is found in the droppings of wild birds, often pigeons; when dust of the droppings is stirred up, it can infect humans or pets that inhale the dust. Infected humans and animals do not transmit their infection to others. The taxonomy of C. neoformans has been reviewed: it has now been divided into two species: Cryptococcus neoformans sensu stricto and Cryptococcus deneoformans. Cryptococcus gattii (formerly C. neoformans var. gattii) is endemic to tropical parts of the continent of Africa and Australia. It is capable of causing disease in non-immunocompromised people. In its yeast state it has been isolated from eucalyptus trees in Australia. The taxonomy of C. gattii has been reviewed; it has now been divided into five species: C. gattii sensu stricto, C. bacillisporus, 'C. deuterogattii, C. tetragattii, and C. decagattii. Cryptococcus depauperatus is parasitic on Lecanicillium lecanii, an entomopathogenic fungus, and is known from Sri Lanka, England, the Netherlands, the Czech Republic, and Canada. It is not known to produce a yeast state. This species grows as long, branching filaments and is self-fertile, i.e. it is homothallic. It can reproduce sexually with itself throughout its life cycle. Cryptococcus luteus is parasitic on Granulobasidium vellereum, a corticioid fungus, and is known from England and Italy. It too is not known to produce a yeast state. Cryptococcus amylolentus was originally isolated as a yeast from beetle tunnels in South African trees. It forms a basidia-bearing teleomorph in culture. References Tremellomycetes Basidiomycota genera Yeasts
Cryptococcus
Biology
1,180
48,464,465
https://en.wikipedia.org/wiki/Specialty%20pharmacy
Specialty pharmacy refers to distribution channels designed to handle specialty drugs — pharmaceutical therapies that are either high cost, high complexity and/or high touch. High touch refers to higher degree of complexity in terms of distribution, administration, or patient management which drives up the cost of the drugs. In the early years specialty pharmacy providers attached "high-touch services to their overall price tags" arguing that patients who receive specialty pharmaceuticals "need high levels of ancillary and follow-up care to ensure that the drug spend is not wasted on them." An example of a specialty drug that would only be available through specialty pharmacy is interferon beta-1a (Avonex), a treatment for MS that requires a refrigerated chain of distribution and costs $17,000 a year. Some specialty pharmacies deal in pharmaceuticals that treat complex or rare chronic conditions such as cancer, rheumatoid arthritis, hemophilia, H.I.V. psoriasis, inflammatory bowel disease (IBD) or Hepatitis C. "Specialty pharmacies are seen as a reliable distribution channel for expensive drugs, offering patients convenience and lower costs while maximizing insurance reimbursements from those companies that cover the drug. Patients typically pay the same co-payments whether or not their insurers cover the drug." As the market demanded specialization in drug distribution and clinical management of complex therapies, specialized pharma (SP) evolved.„ Specialty pharmacies may handle therapies that are biologics, and are injectable or infused (although some are oral medications). By 2008 the pharmacy benefit management dominated the specialty pharmacies market having acquired smaller specialty pharmacies. PBMs administer specialty pharmacies in their network and can "negotiate better prices and frequently offer a complete menu of specialty pharmaceuticals and related services to serve as an attractive 'one-stop shop' for health plans and employers." In the mid 1990s, there were fewer than 30 specialty drugs on the market, but by 2008 that number had increased to 200. History Specialty pharmacies initially served a limited number of people with a small number of high-cost, low-volume, and high-maintenance conditions, such as hemophilia and Gaucher's disease. In 1991 the FDA approved the first version of Genzyme's orphan drug alglucerase, the only treatment for Gaucher's disease. At that time, according to Genzyme CEO Henri Termeer — a pioneer in the biotechnology industry business model — one treatment of Ceredase for one patient took 22,000 placentas annually to manufacture, a difficult and expensive procedure. A new version of Ceredase, called Cerezyme, Imiglucerase which Genzyme produced in 1994 using genetically modified cells in vitro, was cheaper and easier to produce, was approved in several countries. In 2005 there were only about 4,500 patients on Cerezyme. In marketing imiglucerase, Termeer introduced the innovative and successful business strategy that became a model for the biotechnology or life sciences industry in general and specialty pharmacy in particular. Genzyme's added revenue from profits on the highly priced orphan or specialty drugs like imiglucerase, which had no competition, was used to undertake research and development for other drugs and to allow them to fund programs that distribute a small portion of production for free. 
In 2005 he then created what would eventually become a feature service of specialty pharmacy, by hiring 34 people to help patients acquire insurance plans that would cover the cost of their drugs. By 2005 although Cerezyme cost the average patient (including babies) $200,000 a year, it could cost a single adult patient as much as $520,000 a year even though it cost Genzyme less than $52,000 to manufacture, because the alternative is severe debilitation and early death. In 2005 there were only about 4,500 patients on Cerezyme. In 1992 Stadtlanders Pharmacy — a subsidiary of Bergen Brunswig Corporation — was a grassroots company in Pittsburgh that occupied one floor of a seven-story office building and had only a handful of employees and sold drugs by mail-order to patients with chronic conditions with "higher-than-average prescription prices". In 1992 this included "ancillary to primary therapy to manage side effects, as well as HIV, transplant, and a new growth area, multiple sclerosis (MS)." By 1995 Stadtlanders added others, including growth hormones. Compared to retail drugstores that dealt in high volumes of lower margin drugs, Stadtlanders successful business model focused on lower volumes of higher priced drugs resulting in "healthier revenues." By about 1995 there were fewer than 30 specialty drugs on the market. By 2000 Stadtlanders "generated annual revenues of $500 million by selling drugs by mail-order to patients with chronic conditions." In the 1990s specialized pharmacies were mainly mom-and-pop organizations and the specialty pharmacy industry was highly fragmented. In the 1990s more expensive lifesaving therapies became available. Pharmacies like Stadtlanders began to do more than fill prescriptions. They would fill out the cumbersome insurance paperwork for patients to secure reimbursement — often from Medicare — coordinate "benefits to eliminate the potentially enormous out-of-pocket costs." They were able to keep these specialty drugs in stock when most retail pharmacies could not. In this way they could intervene for patients who "needed immediate access to therapies to prevent organ rejection" but who "did not have the money for such payments, nor did they have the expertise they needed to complete the forms." These "pharmacies also coordinated referrals from hospital discharge planners and delivered the medication to the patients’ homes to allow therapy to begin immediately upon hospital discharge." In 1999 CVS launched ProCare, a "chain of specialty pharmacies, about 1,500 square feet in size, serving patients with chronic diseases and conditions that require complex and expensive drug regimens." CVS was an early pioneer in online mail-delivery prescription filling service which it operated through CVS.com, a rebranded website it acquired with the 1999 purchase of Soma.com, the first major online pharmacy. By September 2000 when CVS acquired Stadtlander for $124 million Stadtlanders had become "one of the largest employers" in Allegheny County, Pennsylvania. By 1999, the market for specialty pharmaceuticals was estimated at about $16 billion and it was "a particularly fast-growing segment of the drug industry." CVS became a consolidator in the special pharmacy market. By the end of 2000, CVS's specialty pharmacy business consisted of mail-order operations and 46 CVS ProCare pharmacies located in 17 states and the District of Columbia. 
Overall, CVS saw its revenues surpass the $20 billion mark for the first time in 2000, while net income reached a record $746 million. By 2001 CVS' specialty pharmacy ProCare was the "largest integrated retail/mail provider of specialty pharmacy services" in the United States. It was consolidated with the company's pharmacy benefit management business, PharmaCare, in 2002. In its 2001 annual report CVS anticipated that the "$16 billion specialty pharmacy market" would grow at "an even faster rate than traditional pharmacy due in large part to the robust pipeline of biotechnology drugs." Niche independents like Stadtlanders and smaller specialty pharmacies were acquired by larger corporations as the specialty pharmacy industry's "profitability grew exponentially." By 2008 the specialty pharmacy marketplace was "dominated primarily by traditional" pharmacy benefit managers that "merged with previously existing specialty pharmacies, or those that are retail-based or insurer-owned. These organizations typically have the muscle to negotiate better prices and frequently offer a complete menu of specialty pharmaceuticals and related services to serve as an attractive 'one-stop shop' for health plans and employers." By 2007 specialty costs had begun to drive the overall pharmacy trend. According to Express Scripts' 2007 Drug Trend Report, there was a 14% increase in specialty drugs in 2007. There was a 60.4% increase in utilization, a 37.4% increase in costs and a 4.9% increase in new medications. By 2014 CVS Caremark, Express Scripts and Walgreens represented more than 50% of the specialty drug market in the United States. Other specialty providers included Commcare Pharmacy. By 2015 the trend among U.S. pharmacies was to "incorporate specialty pharmacy in their operations." Specialty pharmacies offer services such as adherence management, benefits investigation and patient education, while contending with challenges such as space limitations for item stocking. Pharmacy benefit management (PBM) Pharmacy benefit managers — working with employers, health plan companies and government programs — have dominated the specialty pharmacy market in the United States since at least 2008. According to the American Pharmacists Association (APhA), "Historically, a pharmacy benefit manager (PBM) is a third-party administrator of prescription drug programs. PBMs are primarily responsible for developing and maintaining the formulary, contracting with pharmacies, negotiating discounts and rebates with drug manufacturers, and processing and paying prescription drug claims. For the most part, they work with self-insured companies and government programs striving to maintain or reduce the pharmacy expenditures of the plan while concurrently trying to improve health care outcomes." Further reading References Specialty drugs
Specialty pharmacy
Biology
1,972
9,739,652
https://en.wikipedia.org/wiki/USSD%20Gateway
Unstructured Supplementary Service Data, or USSD is a communication protocol used by GSM cellular telephones to communicate with the service provider's computers. A gateway is the collection of hardware and software required to interconnect two or more disparate networks, including performing protocol conversion. Functionality A USSD gateway routes USSD messages from the signalling network to a service application and back. A 'USSD gateway' service is also called a 'USSD center'. USSD gateway is based upon the ability of the delivery agent or the source to send and receive USSD messages. A USSD is a session-based protocol. USSD messages travel over GSM signalling channels, and are used to query information and trigger services. Unlike similar services (SMS and MMS), which are store and forward based, USSD establishes a real time session between mobile handset and application handling the service. Difference between USSD and other gateways The difference between USSD gateways and other messaging gateways is that USSD gateways maintain a single interactive session once the connection is established. SMS and MMS store and forward messages independently of the user session, similar to the way email is sent over the internet. Modular operation Session module: as per directions from the Signalling System No. 7 (SS7) protocol stack's Mobile Application Part (MAP), it receives and sends out session IDs from the session ID pool, and maintains and destroys the sessions. MAP layer: Mobile Application Part is present both on the server and on the MS. Gateway: a gateway will wait for messages from the MAP layer, and work to route these messages into Short Message Peer-to-Peer (SMPP) protocol, which is then delivered to the server applications. This is the most important operation, and this is the reason why USSDs are primarily used, as it helps to directly connect users to applications like bill checking and others. Locator: this tries to find out the current cell site, and relays it to the gateway. Then the messages are routed using Routing Numbers. Home Location Register: this is the home zone where the given cell phone's number is registered in the database. This is different from the Visitor Location Register which is where the user is roaming. The reason why USSD is commonly used is because it enhances the WCDMA signalling and multiplexes the coherent signals. Types of applications Balance Check: the user can send a Process Supplementary Service request (PSSR) to the home zone, which will forward this, under guidance from the gateway, to the correct application. Then, the application sends an acknowledgement via USSD gateway, HLR etc., known as PSSR response back to the user. Balance Notification at the end of charged call can also be given using Unstructured Supplementary Service Notify (USSN) message. Voice Chat: using the same process as above, one can use voice chat. This is highly useful when VoIP enabled phones are not available. Advertising: the application can advertise their product using USSD, which is less invasive than telemarketing. Roaming: this has huge advantages while roaming. This is because USSD services are well available in roaming networks, and all the USSD messages are directed towards the subscriber's Home Network itself, thus, same set of services that are available in home network can be given in visited network too, giving subscribers a Virtual Home Environment (VHE). 
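Because USSD is session-based rather than store-and-forward, a gateway has to keep per-session state between the subscriber's replies. The sketch below is a toy illustration of that idea in Python; the class, message prefixes, and menu are invented for illustration and are not part of any SS7/MAP or SMPP library.

# Toy illustration of a session-based USSD menu handler. All names here are
# hypothetical; a real gateway would receive PSSR/USSR operations over MAP and
# relay them to the application (for example via SMPP), not plain method calls.

class UssdSession:
    """Holds the dialogue state for one subscriber for the life of the session."""
    def __init__(self, msisdn):
        self.msisdn = msisdn
        self.stage = "menu"

    def handle(self, text):
        # First message (e.g. the subscriber dialled *123#): show the menu.
        if self.stage == "menu":
            self.stage = "await_choice"
            return "CON 1. Balance\n2. Voice chat\n3. Quit"   # CON = keep session open
        # Second message: the subscriber's choice arrives within the same session.
        if self.stage == "await_choice":
            if text == "1":
                return "END Your balance is 12.50"             # END = close session
            if text == "2":
                return "CON Enter the number to chat with:"
            return "END Goodbye"
        return "END Session error"

# Example dialogue: the same object keeps state across both replies, unlike SMS,
# where each message would be handled independently.
session = UssdSession("+15551234567")
print(session.handle("*123#"))   # menu is returned, session stays open
print(session.handle("1"))       # balance is returned, session ends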
Apart from PSSR and USSN, there is another method called Unstructured Supplementary Service Request (USSR) message that initiates a session by USSD Gateway to a Mobile User. This message can be used in conjunction with USSR initiated session to provide session based services like Menu services through USSD. Also, in the earlier phases of MAP (Mobile Application Part), PSSR message was called PSSD (PSS Data). References Further reading Mobile telecommunications standards 3GPP standards GSM standard
USSD Gateway
Technology
805
7,795,789
https://en.wikipedia.org/wiki/A%20Pail%20of%20Air
"A Pail of Air" is a science fiction short story by American writer Fritz Leiber. It originally appeared in the December 1951 issue of Galaxy Magazine and was dramatized on the radio show X Minus One in March 1956. Plot The story is narrated by a ten-year-old boy living on Earth after it has become a rogue planet, having been torn away from the Sun by a passing "dark star". The loss of solar heating has caused the Earth's atmosphere to freeze into thick layers of "snow". The boy's father had worked with a group of other scientists to construct a large shelter, but the earthquakes accompanying the disaster had destroyed it and killed the others. He managed to construct a smaller, makeshift shelter called the "Nest" for his family, where they maintain a breathable atmosphere by periodically retrieving pails of frozen oxygen to thaw over a fire. They have survived in this way for a number of years. At the end, they are found by a search party from a large group of survivors at Los Alamos, where they are using nuclear power to provide heat and have begun using rockets to search for other survivors (radio being ineffective at long range without an ionosphere). They reveal that other groups of humans have survived at Argonne, Brookhaven, and Harwell nuclear research facilities as well as in Tannu Tuva, and that plans are being made to establish uranium-mining colonies at Great Slave Lake or in the Congo region. Collections The story is collected in The Best of Fritz Leiber, Constellations, and Fritz Leiber: Selected Stories (2010). See also Rogue planet External links Listen to A Pail of Air on X Minus One, NBC, 1956 A Pail of Air full text at Project Gutenberg 1951 short stories Fiction about black holes Post-apocalyptic short stories Fiction about rogue planets Science fiction short stories Short stories by Fritz Leiber Works originally published in Galaxy Science Fiction
A Pail of Air
Physics
397
156,859
https://en.wikipedia.org/wiki/Comparison%20of%20analog%20and%20digital%20recording
Sound can be recorded and stored and played using either digital or analog techniques. Both techniques introduce errors and distortions in the sound, and these methods can be systematically compared. Musicians and listeners have argued over the superiority of digital versus analog sound recordings. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing and associated anti-aliasing filter implementation, jitter and quantization noise. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion. Two prominent differences in performance between the two methods are the bandwidth and the signal-to-noise ratio (S/N ratio). The bandwidth of the digital system is determined, according to the Nyquist frequency, by the sample rate used. The bandwidth of an analog system is dependent on the physical and electronic capabilities of the analog circuits. The S/N ratio of a digital system may be limited by the bit depth of the digitization process, but the electronic implementation of conversion circuits introduces additional noise. In an analog system, other natural analog noise sources exist, such as flicker noise and imperfections in the recording medium. Other performance differences are specific to the systems under comparison, such as the ability for more transparent filtering algorithms in digital systems and the harmonic saturation and speed variations of analog systems. Dynamic range The dynamic range of an audio system is a measure of the difference between the smallest and largest amplitude values that can be represented in a medium. Digital and analog differ in both the methods of transfer and storage, as well as the behavior exhibited by the systems due to these methods. The dynamic range capability of digital audio systems far exceeds that of analog audio systems. Consumer analog cassette tapes have a dynamic range of between 50 and 75 dB. Analog FM broadcasts rarely have a dynamic range exceeding 50 dB. Analog studio master tapes can have a dynamic range of up to 77 dB. An LP made out of perfect vinyl would have a theoretical dynamic range of 70 dB, though measurements indicate actual performance in the 60 to 70 dB range. Compare this to digital recording. Typically, a 16-bit digital recording has a dynamic range of between 90 and 95 dB. The benefits of using digital recorders with greater than 16-bit accuracy can be applied to the 16 bits of audio CD. Meridian Audio founder John Robert Stuart stresses that with the correct dither, the resolution of a digital system is theoretically infinite, and that it is possible, for example, to resolve sounds at −110 dB (below digital full-scale) in a well-designed 16-bit channel. Overload conditions There are some differences in the behaviour of analog and digital systems when high level signals are present, where there is the possibility that such signals could push the system into overload. With high level signals, analog magnetic tape approaches saturation, and high frequency response drops in proportion to low frequency response. While undesirable, the audible effect of this can be reasonably unobjectionable. 
In contrast, digital PCM recorders show non-benign behaviour in overload; samples that exceed the peak quantization level are simply truncated, clipping the waveform squarely, which introduces distortion in the form of large quantities of higher-frequency harmonics. In principle, PCM digital systems have the lowest level of nonlinear distortion at full signal amplitude. The opposite is usually true of analog systems, where distortion tends to increase at high signal levels. A study by Manson (1980) considered the requirements of a digital audio system for high quality broadcasting. It concluded that a 16-bit system would be sufficient, but noted the small reserve the system provided in ordinary operating conditions. For this reason, it was suggested that a fast-acting signal limiter or 'soft clipper' be used to prevent the system from becoming overloaded. With many recordings, high level distortions at signal peaks may be audibly masked by the original signal, thus large amounts of distortion may be acceptable at peak signal levels. The difference between analog and digital systems is the form of high-level signal error. Some early analog-to-digital converters displayed non-benign behaviour when in overload, where the overloading signals were 'wrapped' from positive to negative full-scale. Modern converter designs based on sigma-delta modulation may become unstable in overload conditions. It is usually a design goal of digital systems to limit high-level signals to prevent overload. To prevent overload, a modern digital system may compress input signals so that digital full-scale cannot be reached Physical degradation Unlike analog duplication, digital copies are exact replicas that can be duplicated indefinitely and without generation loss, in principle. Error correction allows digital formats to tolerate significant media deterioration though digital media is not immune to data loss. Consumer CD-R compact discs have a limited and variable lifespan due to both inherent and manufacturing quality issues. With vinyl records, there will be some loss in fidelity on each playing of the disc. This is due to the wear of the stylus in contact with the record surface. Magnetic tapes, both analog and digital, wear from friction between the tape and the heads, guides, and other parts of the tape transport as the tape slides over them. The brown residue deposited on swabs during cleaning of a tape machine's tape path is actually particles of magnetic coating shed from tapes. Sticky-shed syndrome is a prevalent problem with older tapes. Tapes can also suffer creasing, stretching, and frilling of the edges of the plastic tape base, particularly from low-quality or out-of-alignment tape decks. When a CD is played, there is no physical contact involved as the data is read optically using a laser beam. Therefore, no such media deterioration takes place, and the CD will, with proper care, sound exactly the same every time it is played (discounting aging of the player and CD itself); however, this is a benefit of the optical system, not of digital recording, and the Laserdisc format enjoys the same non-contact benefit with analog optical signals. CDs suffer from disc rot and slowly degrade with time, even if they are stored properly and not played. M-DISC, a recordable optical technology which markets itself as remaining readable for 1,000 years, is available in certain markets, but as of late 2020 has never been sold in the CD-R format. 
(Sound could, however, be stored on an M-DISC DVD-R using the DVD-Audio format.) Noise For electronic audio signals, sources of noise include mechanical, electrical and thermal noise in the recording and playback cycle. The amount of noise that a piece of audio equipment adds to the original signal can be quantified. Mathematically, this can be expressed by means of the signal-to-noise ratio (SNR or S/N ratio). Sometimes the maximum possible dynamic range of the system is quoted instead. With digital systems, the quality of reproduction depends on the analog-to-digital and digital-to-analog conversion steps, and does not depend on the quality of the recording medium, provided it is adequate to retain the digital values without error. Digital media capable of bit-perfect storage and retrieval have been commonplace for some time, since they were generally developed for software storage which has no tolerance for error. The process of analog-to-digital conversion will, according to theory, always introduce quantization distortion. This distortion can be rendered as uncorrelated quantization noise through the use of dither. The magnitude of this noise or distortion is determined by the number of quantization levels. In binary systems this is determined by and typically stated in terms of the number of bits. Each additional bit adds approximately 6 dB in possible SNR (e.g. 24 x 6 = 144 dB for 24-bit and 120 dB for 20-bit quantization). The 16-bit digital system of Red Book audio CD has 216 = 65,536 possible signal amplitudes, theoretically allowing for an SNR of 98 dB. Rumble Rumble is a form of noise characteristic caused by imperfections in the bearings of turntables. The platter tends to have a slight amount of motion besides the desired rotation and the turntable surface also moves up, down and side-to-side slightly. This additional motion is added to the desired signal as noise, usually of very low frequencies, creating a rumbling sound during quiet passages. Very inexpensive turntables sometimes used ball bearings, which are very likely to generate audible amounts of rumble. More expensive turntables tend to use massive sleeve bearings, which are much less likely to generate offensive amounts of rumble. Increased turntable mass also tends to lead to reduced rumble. A good turntable should have rumble at least 60 dB below the specified output level from the pick-up. Because they have no moving parts in the signal path, digital systems are not subject to rumble. Wow and flutter Wow and flutter are a change in frequency of an analog device and are the result of mechanical imperfections. Wow is a form of flutter that occurs at a slower rate. Wow and flutter are most noticeable on signals which contain pure tones. For LP records, the quality of the turntable will have a large effect on the level of wow and flutter. A good turntable will have wow and flutter values of less than 0.05%, which is the speed variation from the mean value. Wow and flutter can also be present in the recording, as a result of the imperfect operation of the recorder. Owing to their use of precision crystal oscillators for their timebase, digital systems are not subject to wow and flutter. Frequency response For digital systems, the upper limit of the frequency response is determined by the sampling frequency. The choice of sample sampling frequency in a digital system is based on the Nyquist–Shannon sampling theorem. 
This states that a sampled signal can be reproduced exactly as long as it is sampled at a frequency greater than twice the bandwidth of the signal, the Nyquist frequency. Therefore, a sampling frequency of 40 kHz is mathematically sufficient to capture all the information contained in a signal having frequency components less than or equal to 20 kHz. The sampling theorem also requires that frequency content above the Nyquist frequency be removed from the signal before sampling it. This is accomplished using anti-aliasing filters which require a transition band to sufficiently reduce aliasing. The bandwidth provided by the 44,100 Hz sampling frequency used by the standard for audio CDs is sufficiently wide to cover the entire human hearing range, which roughly extends from 20 Hz to 20 kHz. Professional digital recorders may record higher frequencies, while some consumer and telecommunications systems record a more restricted frequency range. Some analog tape manufacturers specify frequency responses up to 20 kHz, but these measurements may have been made at lower signal levels. Compact Cassettes may have a response extending up to 15 kHz at full (0 dB) recording level. At lower levels (−10 dB), cassettes are typically limited to 20 kHz due to self-erasure of the tape media. The frequency response for a conventional LP player might be 20 Hz to 20 kHz, ±3 dB. The low-frequency response of vinyl records is restricted by rumble noise (described above), as well as the physical and electrical characteristics of the entire pickup arm and transducer assembly. The high-frequency response of vinyl depends on the cartridge. CD4 records contained frequencies up to 50 kHz. Frequencies of up to 122 kHz have been experimentally cut on LP records. Aliasing Digital systems require that all high-frequency signal content above the Nyquist frequency must be removed prior to sampling, which, if not done, will result in these ultrasonic frequencies "folding over" into frequencies in the audible range, producing a kind of distortion called aliasing. Aliasing is prevented in digital systems by an anti-aliasing filter. However, designing an analog filter that precisely removes all frequency content exactly above or below a certain cutoff frequency, is impractical. Instead, a sample rate is usually chosen which is above the Nyquist requirement. This solution is called oversampling, and allows a less aggressive and lower-cost anti-aliasing filter to be used. Early digital systems may have suffered from a number of signal degradations related to the use of analog anti-aliasing filters, e.g., time dispersion, nonlinear distortion, ripple, temperature dependence of filters etc. Using an oversampling design and delta-sigma modulation, a less aggressive analog anti-aliasing filter can be supplemented by a digital filter. This approach has several advantages since the digital filter can be made to have a near-ideal frequency domain transfer function, with low in-band ripple, and no aging or thermal drift. However, the digital anti-aliasing filter may introduce degradations due to time domain response particularly at lower sample rates. Analog systems are not subject to a Nyquist limit or aliasing and thus do not require anti-aliasing filters or any of the design considerations associated with them. Instead, the limits of analog storage formats are determined by the physical properties of their construction. Sampling rates CD quality audio is sampled at 44,100 Hz (Nyquist frequency = 22.05 kHz) and at 16 bits. 
Sampling the waveform at higher frequencies and allowing for a greater number of bits per sample allows noise and distortion to be reduced further. DAT can sample audio at up to 48 kHz, while DVD-Audio can be 96 or 192 kHz and up to 24 bits resolution. With any of these sampling rates, signal information is captured above what is generally considered to be the human hearing frequency range. The higher sample rates impose less restrictions on anti-aliasing filter implementation which can result in both lower complexity and less signal distortion. Work done in 1981 by Muraoka et al. showed that music signals with frequency components above 20 kHz were only distinguished from those without by a few of the 176 test subjects. A perceptual study by Nishiguchi et al. (2004) concluded that "no significant difference was found between sounds with and without very high frequency components among the sound stimuli and the subjects... however, [Nishiguchi et al] can still neither confirm nor deny the possibility that some subjects could discriminate between musical sounds with and without very high frequency components." In blind listening tests conducted by Bob Katz in 1996, recounted in his book Mastering Audio: The Art and the Science, subjects using the same high-sample-rate reproduction equipment could not discern any audible difference between program material identically filtered to remove frequencies above 20 kHz versus 40 kHz. This demonstrates that presence or absence of ultrasonic content does not explain aural variation between sample rates. He posits that variation is due largely to performance of the band-limiting filters in converters. These results suggest that the main benefit to using higher sample rates is that it pushes consequential phase distortion from the band-limiting filters out of the audible range and that, under ideal conditions, higher sample rates may not be necessary. Dunn (1998) examined the performance of digital converters to see if these differences in performance could be explained by the band-limiting filters used in converters and looking for the artifacts they introduce. Quantization A signal is recorded digitally by an analog-to-digital converter, which measures the amplitude of an analog signal at regular intervals specified by the sampling rate, and then stores these sampled numbers in computer hardware. Numbers on computers represent a finite set of discrete values, which means that if an analog signal is digitally sampled using native methods (without dither), the amplitude of the audio signal will simply be rounded to the nearest representation. This process is called quantization, and these small errors in the measurements are manifested aurally as low level noise or distortion. This form of distortion, sometimes called granular or quantization distortion, has been pointed to as a fault of some digital systems and recordings particularly some early digital recordings, where the digital release was said to be inferior to the analog version. However, "if the quantisation is performed using the right dither, then the only consequence of the digitisation is effectively the addition of a white, uncorrelated, benign, random noise floor. The level of the noise depends on the number of the bits in the channel." The range of possible values that can be represented numerically by a sample is determined by the number of binary digits used. This is called the resolution, and is usually referred to as the bit depth in the context of PCM audio. 
The quantization noise level is directly determined by this number, decreasing exponentially (linearly in dB units) as the resolution increases. With an adequate bit depth, random noise from other sources will dominate and completely mask the quantization noise. The Redbook CD standard uses 16 bits, which keeps the quantization noise 96 dB below maximum amplitude, far below a discernible level with almost any source material. The addition of effective dither means that, "in practical terms, the resolution is limited by our ability to resolve sounds in noise. ... We have no problem measuring (and hearing) signals of –110dB in a well-designed 16- bit channel." DVD-Audio and most modern professional recording equipment allows for samples of 24 bits. Analog systems do not necessarily have discrete digital levels in which the signal is encoded. Consequently, the accuracy to which the original signal can be preserved is instead limited by the intrinsic noise-floor and maximum signal level of the media and the playback equipment. Quantization in analog media Since analog media is composed of molecules, the smallest microscopic structure represents the smallest quantization unit of the recorded signal. Natural dithering processes, like random thermal movements of molecules, the nonzero size of the reading instrument, and other averaging effects, make the practical limit larger than that of the smallest molecular structural feature. A theoretical LP composed of perfect diamond, with a groove size of 8 micron and a feature size of 0.5 nanometer, has a quantization that is similar to a 16-bit digital sample. Dither as a solution It is possible to make quantization noise audibly benign by applying dither. To do this, noise is added to the original signal before quantization. Optimal use of dither has the effect of making quantization error independent of the signal, and allows signal information to be retained below the least significant bit of the digital system. Dither algorithms also commonly have an option to employ some kind of noise shaping, which pushes the frequency of much of the dither noise to areas that are less audible to human ears, lowering the level of the noise floor apparent to the listener. Dither is commonly applied during mastering before final bit depth reduction, and also at various stages of DSP. Timing jitter One aspect that may degrade the performance of a digital system is jitter. This is the phenomenon of variations in time from what should be the correct spacing of discrete samples according to the sample rate. This can be due to timing inaccuracies of the digital clock. Ideally, a digital clock should produce a timing pulse at exactly regular intervals. Other sources of jitter within digital electronic circuits are data-induced jitter, where one part of the digital stream affects a subsequent part as it flows through the system, and power supply induced jitter, where noise from the power supply causes irregularities in the timing of signals in the circuits it powers. The accuracy of a digital system is dependent on the sampled amplitude values, but it is also dependent on the temporal regularity of these values. The analog versions of this temporal dependence are known as pitch error and wow-and-flutter. Periodic jitter produces modulation noise and can be thought of as being the equivalent of analog flutter. Random jitter alters the noise floor of the digital system. The sensitivity of the converter to jitter depends on the design of the converter. 
It has been shown that a random jitter of 5 ns may be significant for 16 bit digital systems. In 1998, Benjamin and Gannon researched the audibility of jitter using listening tests. They found that the lowest level of jitter to be audible was around 10 ns (rms). This was on a 17 kHz sine wave test signal. With music, no listeners found jitter audible at levels lower than 20 ns. A paper by Ashihara et al. (2005) attempted to determine the detection thresholds for random jitter in music signals. Their method involved ABX listening tests. When discussing their results, the authors commented that: So far, actual jitter in consumer products seems to be too small to be detected at least for reproduction of music signals. It is not clear, however, if detection thresholds obtained in the present study would really represent the limit of auditory resolution or it would be limited by resolution of equipment. Distortions due to very small jitter may be smaller than distortions due to non-linear characteristics of loudspeakers. Ashihara and Kiryu [8] evaluated linearity of loudspeaker and headphones. According to their observation, headphones seem to be more preferable to produce sufficient sound pressure at the ear drums with smaller distortions than loudspeakers. Signal processing After initial recording, it is common for the audio signal to be altered in some way, such as with the use of compression, equalization, delays and reverb. With analog, this comes in the form of outboard hardware components, and with digital, the same is typically accomplished with plug-ins in a digital audio workstation (DAW). A comparison of analog and digital filtering shows technical advantages to both methods. Digital filters are more precise and flexible. Analog filters are simpler, can be more efficient and do not introduce latency. Analog hardware When altering a signal with a filter, the output signal may differ in time from the signal at the input, which is measured as its phase response. All analog equalizers exhibit this behavior, with the amount of phase shift differing in some pattern, and centered around the band that is being adjusted. Although this effect alters the signal in a way other than a strict change in frequency response, it is usually not objectionable to listeners. Digital filters Because the variables involved can be precisely specified in the calculations, digital filters can be made to objectively perform better than analog components. Other processing such as delay and mixing can be done exactly. Digital filters are also more versatile. For example, the linear phase equalizer does not introduce frequency-dependent phase shift. This filter may be implemented digitally using a finite impulse response filter but has no practical implementation using analog components. A practical advantage of digital processing is the more convenient recall of settings. Plug-in parameters can be stored on the computer, whereas parameter details on an analog unit must be written down or otherwise recorded if the unit needs to be reused. This can be cumbersome when entire mixes must be recalled manually using an analog console and outboard gear. When working digitally, all parameters can simply be stored in a DAW project file and recalled instantly. Most modern professional DAWs also process plug-ins in real time, which means that processing can be largely non-destructive until final mix-down. Analog modeling Many plug-ins exist now that incorporate analog modeling. 
There are audio engineers that endorse them and feel that they compare equally in sound to the analog processes that they imitate. Analog modeling carries some benefits over their analog counterparts, such as the ability to remove noise from the algorithms and modifications to make the parameters more flexible. On the other hand, other engineers also feel that the modeling is still inferior to the genuine outboard components and still prefer to mix "outside the box". Sound quality Subjective evaluation Subjective evaluation attempts to measure how well an audio component performs according to the human ear. The most common form of subjective test is a listening test, where the audio component is simply used in the context for which it was designed. This test is popular with hi-fi reviewers, where the component is used for a length of time by the reviewer who then will describe the performance in subjective terms. Common descriptions include whether the component has a bright or warm sound, or how well the component manages to present a spatial image. Another type of subjective test is done under more controlled conditions and attempts to remove possible bias from listening tests. These sorts of tests are done with the component hidden from the listener, and are called blind tests. To prevent possible bias from the person running the test, the blind test may be done so that this person is also unaware of the component under test. This type of test is called a double-blind test. This sort of test is often used to evaluate the performance of lossy audio compression. Critics of double-blind tests see them as not allowing the listener to feel fully relaxed when evaluating the system component, and can therefore not judge differences between different components as well as in sighted (non-blind) tests. Those who employ the double-blind testing method may try to reduce listener stress by allowing a certain amount of time for listener training. Early digital recordings Early digital audio machines had disappointing results, with digital converters introducing errors that the ear could detect. Record companies released their first LPs based on digital audio masters in the late 1970s. CDs became available in the early 1980s. At this time analog sound reproduction was a mature technology. There was a mixed critical response to early digital recordings released on CD. Compared to vinyl record, it was noticed that CD was far more revealing of the acoustics and ambient background noise of the recording environment. For this reason, recording techniques developed for analog disc, e.g., microphone placement, needed to be adapted to suit the new digital format. Some analog recordings were remastered for digital formats. Analog recordings made in natural concert hall acoustics tended to benefit from remastering. The remastering process was occasionally criticised for being poorly handled. When the original analog recording was fairly bright, remastering sometimes resulted in an unnatural treble emphasis. Super Audio CD and DVD-Audio The Super Audio CD (SACD) format was created by Sony and Philips, who were also the developers of the earlier standard audio CD format. SACD uses Direct Stream Digital (DSD) based on delta-sigma modulation. Using this technique, the audio data is stored as a sequence of fixed amplitude (i.e. 1-bit) values at a sample rate of 2.884 MHz, which is 64 times the 44.1 kHz sample rate used by CD. 
At any point in time, the amplitude of the original analog signal is represented by the density of 1's or 0's in the data stream. This digital data stream can therefore be converted to analog by passing it through an analog low-pass filter. The DVD-Audio format uses standard, linear PCM at variable sampling rates and bit depths, which at the very least match and usually greatly surpass those of standard CD audio (16 bits, 44.1 kHz). In the popular Hi-Fi press, it had been suggested that linear PCM "creates [a] stress reaction in people", and that DSD "is the only digital recording system that does not [...] have these effects". This claim appears to originate from a 1980 article by Dr John Diamond. The core of the claim that PCM recordings (the only digital recording technique available at the time) created a stress reaction rested on using the pseudoscientific technique of applied kinesiology, for example by Dr Diamond at an AES 66th Convention (1980) presentation with the same title. Diamond had previously used a similar technique to demonstrate that rock music (as opposed to classical) was bad for your health due to the presence of the "stopped anapestic beat". Diamond's claims regarding digital audio were taken up by Mark Levinson, who asserted that while PCM recordings resulted in a stress reaction, DSD recordings did not. However, a double-blind subjective test between high resolution linear PCM (DVD-Audio) and DSD did not reveal a statistically significant difference. Listeners involved in this test noted their great difficulty in hearing any difference between the two formats. Analog preference The vinyl revival is in part because of analog audio's imperfection, which adds "warmth". Some listeners prefer such audio over that of a CD. Founder and editor Harry Pearson of The Absolute Sound magazine says that "LPs are decisively more musical. CDs drain the soul from music. The emotional involvement disappears". Dub producer Adrian Sherwood has similar feelings about the analog cassette tape, which he prefers because of its "warmer" sound. Those who favor the digital format point to the results of blind tests, which demonstrate the high performance possible with digital recorders. The assertion is that the "analog sound" is more a product of analog format inaccuracies than anything else. One of the first and largest supporters of digital audio was the classical conductor Herbert von Karajan, who said that digital recording was "definitely superior to any other form of recording we know". He also pioneered the unsuccessful Digital Compact Cassette and conducted the first recording ever to be commercially released on CD: Richard Strauss's Eine Alpensinfonie. The perception of analog audio being demonstrably superior was also called into question by music analysts following revelations that audiophile label Mobile Fidelity Sound Lab had been covertly using Direct Stream Digital files to produce vinyl releases marketed as coming from analog master tapes, with lawyer and audiophile Randy Braun stating that "These people who claim they have golden ears and can hear the difference between analog and digital, well, it turns out you couldn't." Hybrid systems While the words analog audio usually imply that the sound is described using a continuous signal approach, and the words digital audio imply a discrete approach, there are methods of encoding audio that fall somewhere between the two. Indeed, all analog systems show discrete (quantized) behaviour at the microscopic scale. 
While vinyl records and common compact cassettes are analog media and use quasi-linear physical encoding methods (e.g. spiral groove depth, tape magnetic field strength) without noticeable quantization or aliasing, there are analog non-linear systems that exhibit effects similar to those encountered on digital ones, such as aliasing and "hard" dynamic floors (e.g. frequency-modulated hi-fi audio on videotapes, PWM encoded signals). See also Audiophile Audio system measurements History of sound recording References Bibliography Pohlmann, K. (2005). Principles of Digital Audio 5th edn, McGraw-Hill Comp. External links Digital audio recording Sound recording Analog and digital recording
Comparison of analog and digital recording
Technology
6,270
802,149
https://en.wikipedia.org/wiki/Indian%20rhinoceros
The Indian rhinoceros (Rhinoceros unicornis) is also known as the greater one-horned rhinoceros, great Indian rhinoceros and Indian rhino. It is the second largest living rhinoceros species, with adult males weighing and adult females . Its thick skin is grey-brown with pinkish skin folds. It has a single horn on its snout that grows up to long. Its upper legs and shoulders are covered in wart-like bumps, and it is nearly hairless aside from the eyelashes, ear fringes and tail brush. The Indian rhinoceros is native to the Indo-Gangetic Plain and occurs in 12 protected areas in northern India and southern Nepal. It is a grazer, eating mainly grass, but also twigs, leaves, branches, shrubs, flowers, fruits and aquatic plants. It is a largely solitary animal, only associating in the breeding season and when rearing calves. Females give birth to a single calf after a gestation of 15.7 months. The birth interval is 34–51 months. Captive individuals can live up to 47 years. It is susceptible to diseases such as anthrax, and those caused by parasites such as leeches, ticks and nematodes. The Indian rhinoceros is listed as Vulnerable on the IUCN Red List, as the population is fragmented and restricted to less than . Excessive hunting and agricultural development reduced its range drastically. In the early 1990s, the global population was estimated at between 1,870 and 1,895 individuals. Since then, the population increased due to conservation measures taken by the governments. As of August 2018, it was estimated to comprise 3,588 individuals. However, poaching remains a continuous threat. Taxonomy Rhinoceros unicornis was the scientific name used by Carl Linnaeus in 1758 who described a rhinoceros with one horn. As type locality, he indicated Africa and India. He described two species in India, the other being Rhinoceros bicornis, and stated that the Indian species had two horns, while the African species had only one. The Indian rhinoceros is a single species. Several specimens were described since the end of the 18th century under different scientific names, which are all considered synonyms of Rhinoceros unicornis today: R. indicus by Cuvier, 1817 R. asiaticus by Blumenbach, 1830 R. stenocephalus by Gray, 1867 R. jamrachi by Sclater, 1876 R. bengalensis by Kourist, 1970 Etymology The generic name rhinoceros is derived through Latin from the , which is composed of (rhino-, "of the nose") and (keras, "horn") with a horn on the nose. The name has been in use since the 14th century. The Latin word ūnicornis means "one-horned". Evolution Ancestral rhinoceroses first diverged from other perissodactyls in the Early Eocene. Mitochondrial DNA comparison suggests the ancestors of modern rhinos split from the ancestors of Equidae around 50 million years ago. The extant family, the Rhinocerotidae, first appeared in the Late Eocene in Eurasia, and the ancestors of the extant rhino species dispersed from Asia beginning in the Miocene. The last common ancestor of living rhinoceroses belonging to the subfamily Rhinocerotinae is suggested to have lived around 16 million years ago, with the ancestors of the genus Rhinoceros diverging from the ancestors of other living rhinoceroses around 15 million years ago. 
The genus Rhinoceros has been found to be overall slightly more closely related to the Sumatran rhinoceros (as well as to the extinct woolly rhinoceros and the extinct Eurasian genus Stephanorhinus) than to living African rhinoceroses, though there appears to have been gene flow between the ancestors of living African rhinoceroses and the genus Rhinoceros, as well as between the ancestors of the genus Rhinoceros and the ancestors of the woolly rhinoceros and Stephanorhinus. A cladogram showing the relationships of recent and Late Pleistocene rhinoceros species (minus Stephanorhinus hemitoechus) based on whole nuclear genomes, after Liu et al., 2021: The earliest fossils of the genus Rhinoceros date to the Late Miocene, around 8–9 million years ago. The divergence between the Indian and Javan rhinoceros is estimated to have occurred around 4.3 million years ago. The earliest representatives of the modern Indian rhinoceros appeared during the Early Pleistocene (2.6-0.8 million years ago). Fossils indicate that the Indian rhinoceros during the Pleistocene also inhabited areas considerably further east of its current distribution, including mainland Southeast Asia, South China and the island of Java, Indonesia. Characteristics Indian rhinos have a thick grey-brown skin with pinkish skin folds and one horn on their snout. Their upper legs and shoulders are covered in wart-like bumps. They have very little body hair, aside from eyelashes, ear fringes and tail brush. Bulls have huge neck folds. The skull is heavy with a basal length above and an occiput above . The nasal horn is slightly back-curved with a base of about by that rapidly narrows until a smooth, even stem part begins about above base. In captive animals, the horn is frequently worn down to a thick knob. The Indian rhino's single horn is present in both bulls and cows, but not on newborn calves. The horn is pure keratin, like human fingernails, and starts to show after about six years. In most adults, the horn reaches a length of about , but has been recorded up to in length and in weight. Among terrestrial land mammals native to Asia, Indian rhinos are second in size only to the Asian elephant. They are also the second-largest living rhinoceros, behind only the white rhinoceros. Bulls have a head and body length of with a shoulder height of , while cows have a head and body length of and a shoulder height of . The bull, averaging about is heavier than the cow, at an average of about . The largest individuals reportedly weigh up to . The rich presence of blood vessels underneath the tissues in folds gives them the pinkish colour. The folds in the skin increase the surface area and help in regulating the body temperature. The thick skin does not protect against bloodsucking Tabanus flies, leeches and ticks. Distribution and habitat Indian rhinos once ranged across the entire northern part of the Indian subcontinent, along the Indus, Ganges and Brahmaputra River basins, from Pakistan to the Indian-Myanmar border, including Bangladesh and the southern parts of Nepal and Bhutan. They may have also occurred in Myanmar, southern China and Indochina. They inhabit the alluvial grasslands of the Terai and the Brahmaputra basin. As a result of habitat destruction and climatic changes its range has gradually been reduced so that by the 19th century, it only survived in the Terai grasslands of southern Nepal, northern Uttar Pradesh, northern Bihar, northern West Bengal, and in the Brahmaputra Valley of Assam. 
The species was present in northern Bihar and Oudh at least until 1770 as indicated in maps produced by Colonel Gentil. On the former abundance of the species, Thomas C. Jerdon wrote in 1867: Today, its range has further shrunk to a few pockets in southern Nepal, northern West Bengal, and the Brahmaputra Valley. Its habitat is surrounded by human-dominated landscapes, so that in many areas, it occurs in cultivated areas, pastures, and secondary forests. In the 1980s, Indian rhinos were frequently seen in the narrow plain area of Manas River and Royal Manas National Park in Bhutan. Populations In 2022, the total Indian rhinoceros population was estimated to be 4,014 individuals, up from 2,577 in 2006. Among them, 3,262 are in India and the remaining 752 are in Nepal and Bhutan. There is no permanent rhino population in Bhutan, but small rhino populations are occasionally known to cross from the Manas National Park or Buxa Tiger Reserve in India. In India, there are around 2,885 individuals in Assam, including 2,613 in Kaziranga National Park, 125 in Orang National Park, 107 in Pobitora Wildlife Sanctuary and 40 in Manas National Park. West Bengal has a population of 339 individuals, including 287 in Jaldapara National Park and 52 in Gorumara National Park. Only 38 individuals are found in Dudhwa National Park, in Uttar Pradesh. By 2014, the population in Assam increased to 2,544 Indian rhinos, an increase of 27% since 2006, although more than 150 individuals were killed by poachers during these years. The population in Kaziranga National Park was estimated at 2,048 individuals in 2009. By 2009, the population in Pobitora Wildlife Sanctuary had increased to 84 individuals in an area of . In 2015, Nepal had 645 Indian rhinos living in Parsa National Park, Chitwan National Park, Bardia National Park, Shuklaphanta Wildlife Reserve and respective buffer zones in the Terai Arc Landscape as recorded in a survey conducted from 11 April to 2 May 2015. The survey showed that the population of rhinos in Nepal from 2011 to 2015 increased 21% or 111 individuals. The Indian rhino population, which once numbered as low as 100 individuals in the early 1900s, has increased to more than 3,700 in the year 2021 as per The International Rhino Foundation. Ecology and behaviour Bulls are usually solitary. Groups consist of cows with calves, or of up to six subadults. Such groups congregate at wallows and grazing areas. They are foremost active in early mornings, late afternoons and at night, but rest during hot days. They bathe regularly. The folds in their skin trap water and hold it even when they exit wallows. They are excellent swimmers and can run at speeds of up to for short periods. They have excellent senses of hearing and smell, but relatively poor eyesight. Over 10 distinct vocalisations have been recorded. Males have home ranges of around that overlap each other. Dominant males tolerate other males passing through their territories except when they are in mating season, when dangerous fights break out. Indian rhinos have few natural enemies, except for tigers, which sometimes kill unguarded calves, but adult rhinos are less vulnerable due to their size. Mynahs and egrets both eat invertebrates from the rhino's skin and around its feet. Tabanus flies, a type of horse-fly, are known to bite rhinos. The rhinos are also vulnerable to diseases spread by parasites such as leeches, ticks, and nematodes like Bivitellobilharzia nairi. Anthrax and the blood-disease sepsis are known to occur. 
In March 2017, a group of four tigers consisting of an adult male, tigress and two cubs killed a 20-year-old male Indian rhinoceros in Dudhwa Tiger Reserve. Such cases are rare, as Indian rhinoceroseslike most megaherbivoresare mostly invulnerable to predation. Diet Indian rhinos are grazers. Their diet consists almost entirely of grasses (such as Arundo donax, Bambusa tulda, Cynodon dactylon, and Oryza sativa), but they also eat leaves, twigs and branches of shrubs and trees (such as Lagerstroemia indica), flowers, fruits (such as Ficus religiosa), and submerged and floating aquatic plants. They feed in the mornings and evenings. They use their semi-prehensile lips to grasp grass stems, bend the stem down, bite off the top, and then eat the grass. They tackle very tall grasses or saplings by walking over the plant, with legs on both sides and using the weight of their bodies to push the end of the plant down to the level of the mouth. Mothers also use this technique to make food edible for their calves. They drink for a minute or two at a time, often imbibing water filled with rhinoceros urine. Social life Indian rhinos form a variety of social groupings. Bulls are generally solitary, except for mating and fighting. Cows are largely solitary when they are without calves. Mothers will stay close to their calves for up to four years after their birth, sometimes allowing an older calf to continue to accompany her once a newborn calf arrives. Subadult bulls and cows form consistent groupings, as well. Groups of two or three young bulls often form on the edge of the home ranges of dominant bulls, presumably for protection in numbers. Young cows are slightly less social than the bulls. Indian rhinos also form short-term groupings, particularly at forest wallows during the monsoon season and in grasslands during March and April. Groups of up to 10 rhinos, typically a dominant male with females and calves, gather in wallows. Indian rhinos make a wide variety of vocalisations. At least 10 distinct vocalisations have been identified: snorting, honking, bleating, roaring, squeak-panting, moo-grunting, shrieking, groaning, rumbling and humphing. In addition to noises, the Indian rhino uses olfactory communication. Adult bulls urinate backwards, as far as behind them, often in response to being disturbed by observers. Like all rhinos, the Indian rhinoceros often defecates near other large dung piles. The Indian rhino has pedal scent glands which are used to mark their presence at these rhino latrines. Bulls have been observed walking with their heads to the ground as if sniffing, presumably following the scent of cows. In aggregations, Indian rhinos are often friendly. They will often greet each other by waving or bobbing their heads, mounting flanks, nuzzling noses, or licking. Indian rhinos will playfully spar, run around, and play with twigs in their mouths. Adult bulls are the primary instigators in fights. Fights between dominant bulls are the most common cause of rhino mortality, and bulls are also very aggressive toward cows during courtship. Bulls chase cows over long distances and even attack them face-to-face. Indian rhinos use their horns for fighting, albeit less frequently than African rhinos, largely using the incisors of the lower jaw to inflict wounds. Reproduction Captive bulls breed at five years of age, but wild bulls attain dominance much later when they are larger. In one five-year field study, only one Indian rhino estimated to be younger than 15 years mated successfully. 
Captive cows breed as young as four years of age, but in the wild, they usually start breeding only when six years old, which likely indicates they need to be large enough to avoid being killed by aggressive bulls. The ovarian cycle lasts 5.5 to 9 weeks on average. Their gestation period is around 15.7 months, and birth interval ranges from 34 to 51 months. An estimated 10% of calves will die before maturity. This is mainly attributed to predatory attacks from tigers (Panthera tigris). In captivity, four Indian rhinos lived over 40 years, the oldest living to be 47. Threats Habitat degradation caused by human activities and climate change as well as the resulting increase in the floods has caused many Indian rhino deaths and has limited their ranging areas which is shrinking. Serious declines in quality of habitat have occurred in some areas, due to severe invasion by alien plants into grasslands affecting some populations, and demonstrated reductions in the extent of grasslands and wetland habitats due to woodland encroachment and silting up of beels (swampy wetlands). Grazing by domestic livestock is another cause. The Indian rhino species is inherently at risk because over 70% of its population occurs at a single site, Kaziranga National Park. Any catastrophic event such as disease, civil disorder, poaching, or habitat loss would have a devastating impact on the Indian rhino's status. Additionally, a small population of rhinos may be prone to inbreeding depression. Poaching Sport hunting became common in the late 19th and early 20th centuries and was the main cause for the decline of Indian rhinoceros populations. Indian rhinos were hunted relentlessly and persistently. Reports from the mid-19th century claim that some British military officers shot more than 200 rhinos in Assam alone. By 1908, the population in Kaziranga National Park had decreased to around 12 individuals. In the early 1900s, the Indian rhinoceros was almost extinct. At present, poaching for the use of horn in traditional Chinese Medicine is one of the main threats that has led to decreases in several important populations. Poaching for the Indian rhino's horn became the single most important reason for the decline of the Indian rhinoceros after conservation measures were put in place from the beginning of the 20th century, when legal hunting ended. From 1980 to 1993, 692 rhinos were poached in India, including 41 rhinos in India's Laokhowa Wildlife Sanctuary in 1983, almost the entire population of the sanctuary. By the mid-1990s, the Indian rhinoceros had been extirpated in this sanctuary. Between 2000 and 2006, more than 150 rhinos were poached in Assam. Almost 100 rhinos were poached in India between 2013 and 2018. In 1950, in Nepal the Chitwan's forest and grasslands extended over more than and were home to about 800 rhinos. When poor farmers from the mid-hills moved to the Chitwan Valley in search of arable land, the area was subsequently opened for settlement, and poaching of wildlife became rampant. The Chitwan population has repeatedly been jeopardised by poaching; in 2002 alone, poachers killed 37 animals to saw off and sell their valuable horns. Conservation The Indian rhinoceros is listed as vulnerable by the IUCN Red list, as of 2018. Globally, R. unicornis has been listed in CITES Appendix I since 1975. 
The Indian and Nepalese governments have taken major steps towards Indian rhinoceros conservation, especially with the help of the World Wide Fund for Nature (WWF) and other non-governmental organisations. In 1910, all rhino hunting in India became prohibited. In 1957, the country's first conservation law ensured the protection of rhinos and their habitat. In 1959, Edward Pritchard Gee undertook a survey of the Chitwan Valley, and recommended the creation of a protected area north of the Rapti River and of a wildlife sanctuary south of the river for a trial period of 10 years. After his subsequent survey of Chitwan in 1963, he recommended extension of the sanctuary to the south. By the end of the 1960s, only 95 rhinos remained in the Chitwan Valley. The dramatic decline of the rhino population and the extent of poaching prompted the government to institute the Gaida Gasti – a rhino reconnaissance patrol of 130 armed men and a network of guard posts all over Chitwan. To prevent the extinction of rhinos, the Chitwan National Park was gazetted in December 1970, with borders delineated the following year and established in 1973, initially encompassing an area of . To ensure the survival of rhinos in case of epidemics, animals were translocated annually from Chitwan to Bardia National Park and Shuklaphanta National Park since 1986. The Indian rhinoceros population living in Chitwan and Parsa National Parks was estimated at 608 mature individuals in 2015. Reintroduction to new areas Indian rhinos have been reintroduced to areas where they had previously inhabited but became extinct. These efforts have produced mixed results, mainly due to lack of proper planning and management, sustained effort, and adequate security for the introduced animals. In 1984, five Indian rhinos were relocated to Dudhwa National Park—four from the fields outside the Pobitora Wildlife Sanctuary and one from Goalpara. This has born results and the population has increased to 21 rhinos by 2006. In early 1980s, Laokhowa Wildlife Sanctuary in Assam had more than 70 Indian rhinos which were all killed by poachers. In 2016, two Indian rhinos, a mother and her daughter, were reintroduced to the sanctuary from Kaziranga National Park as part of the Indian Rhino Vision 2020 (IRV 2020) program, but both animals died within months due to natural causes. Indian rhinos were once found as far west as the Peshawar Valley during the reign of Mughal Emperor Babur, but are now extinct in Pakistan. After rhinos became "regionally extinct" in Pakistan, two rhinos from Nepal were introduced in 1983 to Lal Suhanra National Park, which have not bred so far. In captivity Indian rhinoceroses were initially difficult to breed in captivity. In the second half of the 20th century, zoos became adept at breeding Indian rhinoceros. By 1983, nearly 40 babies had been born in captivity. As of 2012, 33 Indian rhinos were born at Switzerland's Zoo Basel alone, meaning that most captive animals are related to the Basel population. Due to the success of Zoo Basel's breeding program, the International Studbook for the species has been kept there since 1972. Since 1990, the Indian rhino European Endangered Species Programme is also being coordinated there, with the goal of maintaining genetic diversity in the global captive Indian rhinoceros population. The first recorded captive birth of an Indian rhinoceros was in Kathmandu in 1826, but another successful birth did not occur for nearly 100 years. In 1925, a rhino was born in Kolkata. 
No rhinoceros was successfully bred in Europe until 1956 when first European breeding took place when baby rhino Rudra was born in Zoo Basel on 14 September 1956. In June 2009, an Indian rhino was artificially inseminated using sperm collected four years previously and cryopreserved at the Cincinnati Zoo's CryoBioBank before being thawed and used. She gave birth to a male calf in October 2010. In June 2014, the first "successful" live-birth from an artificially inseminated rhino took place at the Buffalo Zoo in New York. As in Cincinnati, cryopreserved sperm was used to produce the female calf, Monica. Cultural significance The Indian rhinoceros is one of the motifs on the Pashupati seal and many terracotta figurines that were excavated at archaeological sites of the Indus Valley civilisation. The Rhinoceros Sutra is an early text in the Buddhist tradition and is part of the Gandhāran Buddhist texts and the Pali Canon; a version was also incorporated into the Sanskrit Mahavastu. It praises the solitary lifestyle and stoicism of the Indian rhinoceros and is associated with the eremitic lifestyle symbolized by the Pratyekabuddha. Europe In the 3rd century, Philip the Arab exhibited an Indian rhinoceros in Rome. In 1515, Manuel I of Portugal obtained an Indian rhinoceros as a gift, which he passed on to Pope Leo X, but which died in a shipwreck off the coast of Italy in early 1516, on the way from Lisbon to Rome. Three artistic representations were prepared of this rhinoceros: A woodcut by Hans Burgkmair, a drawing and a woodcut called Dürer's Rhinoceros by Albrecht Dürer, all dated 1515. In 1577–1588, Abada was a female Indian rhinoceros kept by the Portuguese kings Sebastian I and Henry I from 1577 to 1580 and by Philip II of Spain from about 1580 to 1588. She was the first rhinoceros seen in Europe after Dürer's Rhinoceros. In about 1684, the first presumably Indian rhinoceros arrived in England. George Jeffreys, 1st Baron Jeffreys spread the rumour that his chief rival Francis North, 1st Baron Guilford had been seen riding on it. In 1741–1758, Clara the rhinoceros (c. 1738 – 14 April 1758) was a female Indian rhinoceros who became famous during 17 years of touring Europe in the mid-18th century. She arrived in Europe in Rotterdam in 1741, becoming the fifth living rhinoceros to be seen in Europe in modern times since Dürer's rhinoceros in 1515. After tours through towns in the Dutch Republic, the Holy Roman Empire, Switzerland, the Polish–Lithuanian Commonwealth, France, the Kingdom of the Two Sicilies, the Papal States, Bohemia and Denmark, she died in Lambeth, England. In 1739, she was drawn and engraved by two English artists. She was then brought to Amsterdam, where Jan Wandelaar made two engravings that were published in 1747. In the subsequent years, the rhinoceros was exhibited in several European cities. In 1748, Johann Elias Ridinger made an etching of her in Augsburg, and Petrus Camper modelled her in clay in Leiden. In 1749, Georges-Louis Leclerc, Comte de Buffon drew it in Paris. In 1751, Pietro Longhi painted her in Venice. See also Unicorn, mythological character The Soul of the Rhino References External links TheBigZoo.com: Greater Indian Rhinoceros Indian Rhino page at AnimalInfo.org Indian Rhinoceros page at UltimateUngulate.com EDGE species Fauna of Assam Fauna of South Asia Mammals described in 1758 Mammals of India Mammals of Nepal Rhinoceros (genus) Symbols of Assam Taxa named by Carl Linnaeus Vulnerable animals
Indian rhinoceros
Biology
5,219
1,447,583
https://en.wikipedia.org/wiki/Titer
Titer (American English) or titre (British English) is a way of expressing concentration. Titer testing employs serial dilution to obtain approximate quantitative information from an analytical procedure that inherently only evaluates as positive or negative. The titer corresponds to the highest dilution factor that still yields a positive reading. For example, positive readings in the first 8 serial, twofold dilutions translate into a titer of 1:256 (i.e., 2−8). Titres are sometimes expressed by the denominator only, for example 1:256 is written 256. The term also has two other, conflicting meanings. In titration, the titer is the ratio of actual to nominal concentration of a titrant, e.g. a titer of 0.5 would require 1/0.5 = 2 times more titrant than nominal. This is to compensate for possible degradation of the titrant solution. Second, in textile engineering, titre is also a synonym for linear density. Etymology Titer has the same origin as the word "title", from the French word titre, meaning "title" but referring to the documented purity of a substance, often gold or silver. This comes from the Latin word titulus, also meaning "title". Examples Antibody titer An antibody titer is a measurement of how much antibody an organism has produced that recognizes a particular epitope. It is conventionally expressed as the inverse of the greatest dilution level that still gives a positive result on some test. ELISA is a common means of determining antibody titers. For example, the indirect Coombs test detects the presence of anti-Rh antibodies in a pregnant woman's blood serum. A patient might be reported to have an "indirect Coombs titer" of 16. This means that the patient's serum gives a positive indirect Coombs test at any dilution down to 1/16 (1 part serum to 15 parts diluent). At greater dilutions the indirect Coombs test is negative. If a few weeks later the same patient had an indirect Coombs titer of 32 (1/32 dilution which is 1 part serum to 31 parts diluent), this would mean that she was making more anti-Rh antibody, since it took a greater dilution to abolish the positive test. Many traditional serological tests such as hemagglutination or complement fixation employ this principle. Such tests can typically be read visually, which makes them fast, cost-effective, and able to be deployed in a wide variety of laboratory environments. The interpretation of any serological titer result is guided by reference values that are specific to the antigen or antibody in question, so a titer of 1:32 may be below the cut-off for one test but above for another. Other examples A viral titer is the lowest concentration of a virus that still infects cells. To determine the titer, several dilutions are prepared, such as 10−1, 10−2, 10−3, ... 10−8. The titer of a fat is the temperature, in degrees Celsius, at which it solidifies. The higher the titer, the harder the fat. This titer is used in determining whether an animal fat is considered tallow (titer higher than 40 °C) or a grease (titer below 40 °C). See also Serology Titration W/v Mg% Virus quantification Viral titer References Chemical pathology Titration Immunology Immunologic tests
Titer
Chemistry,Biology
745
28,364,332
https://en.wikipedia.org/wiki/Miller%20Act
The Miller Act (ch. 642, Sec. 1-3, 49 stat. 793,794, codified as amended in Title 40 of the United States Code) requires prime contractors on some government construction contracts to post bonds guaranteeing both the performance of their contractual duties and the payment of their subcontractors and material suppliers. The Act was originally enacted as the Heard Act in 1894. That act established a single performance and payment bond that "did afford some protection to... unpaid subcontractors and materialmen, but it was fraught with substantive and procedural limitations", and it was superseded by the Miller Act of 1935. Background and purpose The Miller Act addresses two concerns that would otherwise exist in the performance of federal government construction projects: Performance Bonds: The contractor's abandonment or other nonperformance of a government job may cause critical delays and added expense in the government procurement process. The bonding process helps weed out irresponsible contractors who may be unable to obtain bonds, and the bond itself will defray the government's cost of substitute performance in the event of default. The subrogration right of the bond surety against the contractor, i.e., the right of the surety to sue the contractor and any principals who may have guaranteed the bond, is a deterrent to nonperformance. Payment Bonds: Subcontractors and material suppliers would otherwise be reluctant to work on such projects (knowing that sovereign immunity prevents the establishment of a mechanic's lien), diminishing competition and driving up construction costs. Summary Application The Miller Act applies to contracts awarded for the construction, alteration, or repair of any public building or public work of the United States Federal government. While the Act provides that the bonds must be posted on contracts exceeding $100,000, Federal Acquisition Regulation (FAR) Part 28 requires the bonds only on contracts that exceed $150,000. The Act requires the Federal Acquisition Regulations to establish alternative payment protections for contracts in excess of $30,000 but not exceeding $150,000, with the contract-specific protection to be determined by the contracting officer. While the Miller Act applies only to federal contracts, state legislatures throughout the United States have enacted "Little Miller Acts" that establish similar requirements for state contracts. Posting of performance bonds Once contract is awarded, the contractor must furnish the government a performance bond issued by a surety satisfactory to the officer awarding the contract, in an amount the contracting officer considers adequate, for the protection of the Government. Posting of payment bonds The contractor must also furnish a payment bond with a surety satisfactory to the contracting officer for the protection of all persons supplying labor and material in carrying out the work provided for in the contract for the use of each person. The amount of the payment bond generally must equal the total amount payable by the terms of the contract. Enforcement on payment bonds A subcontractor or material supplier that has not been paid, within 90 days of the day on which he last furnished labor or materials for which the claim is made, may bring a civil action on the payment bond for the amount unpaid at the time the suit is brought. The suit must be brought no later than one year after the day on which the last of the labor was performed or material was supplied by the person bringing the action. 
The agency issuing the contract is required to provide a copy of the payment bond, which identifies the surety, which would be the defendant in an enforcement action, upon the presentation of an affidavit indicating the person requesting the copy has not been paid for labor or materials furnished under the contract. A person having a direct contractual relationship with a subcontractor, but no contractual relationship, express or implied, with the contractor furnishing the payment bond, may bring a civil action on the payment bond on giving written notice to the contractor within 90 days from the date on which the person did or performed the last of the labor or furnished or supplied the last of the material for which the claim is made. The action must state with substantial accuracy the amount claimed and the name of the party to whom the material was furnished or supplied or for whom the labor was done or performed. Waiver of payment bond rights A waiver of the right to pursue a payment bond action under the Act by a person supplying labor or materials is void unless it was executed in writing, signed by the person whose right is to be waived, and executed after the labor or materials have been supplied. References United States federal commerce legislation Sureties Construction law 1893 in American law 1935 in American law
Miller Act
Engineering
932
64,572,373
https://en.wikipedia.org/wiki/Phase-space%20wavefunctions
Phase-space representation of quantum state vectors is a formulation of quantum mechanics elaborating the phase-space formulation with a Hilbert space. It "is obtained within the framework of the relative-state formulation. For this purpose, the Hilbert space of a quantum system is enlarged by introducing an auxiliary quantum system. Relative-position state and relative-momentum state are defined in the extended Hilbert space of the composite quantum system and expressions of basic operators such as canonical position and momentum operators, acting on these states, are obtained." Thus, it is possible to assign a meaning to the wave function in phase space as a quasiamplitude associated with a quasiprobability distribution. The first wave-function approach to quantum mechanics in phase space was introduced by Torres-Vega and Frederick in 1990. It is based on a generalised Husimi distribution. In 2004, Oliveira et al. developed a new wave-function formalism in phase space in which the wave function is associated with the Wigner quasiprobability distribution by means of the Moyal product. An advantage might be that off-diagonal Wigner functions used in superpositions are treated in an intuitive way; gauge theories are also treated in an operator form. Phase space operators Instead of thinking in terms of multiplication of functions using the star product, we can shift to thinking in terms of operators acting on functions in phase space. For the Torres-Vega and Frederick approach the phase space operators are with and for Oliveira's approach the phase space operators are with In the general case and with , where , , and are constants. These operators satisfy the uncertainty principle: Symplectic Hilbert space To associate the Hilbert space, , with the phase space , we consider the set of complex functions of integrable square, in , such that Then we can write , with where is the dual vector of . This symplectic Hilbert space is denoted by . An association with the Schrödinger wavefunction can be made by ; letting , we have . Then . Torres-Vega–Frederick representation With the operators of position and momentum, a Schrödinger picture is developed in phase space The Torres-Vega–Frederick distribution is Oliveira representation Thus, with the aid of the star product, it is now possible to construct a Schrödinger picture in phase space Differentiating both sides by , we have therefore, the above equation plays the same role as the Schrödinger equation in usual quantum mechanics. To show that , we take the 'Schrödinger equation' in phase space and 'star-multiply' on the right by , where is the classical Hamiltonian of the system. Taking the complex conjugate and subtracting the two equations, we get which is the time evolution of the Wigner function; for this reason it is sometimes called the quasiamplitude of probability. The -genvalue is given by the time-independent equation . Star-multiplying on the right by , we obtain Therefore, the static Wigner distribution function is a -genfunction of the -genvalue equation, a result well known in the usual phase-space formulation of quantum mechanics. In the case where , considered at the beginning of the section, the Oliveira approach and the phase-space formulation are indistinguishable, at least for pure states. 
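For concreteness, one commonly cited choice of phase-space position and momentum operators in Moyal-product-based (star-product) formalisms of the kind described above is the Bopp-shift form sketched below; the specific factors of one half and the signs are assumptions of this sketch and may not match the exact conventions used in the Torres-Vega–Frederick or Oliveira papers.

```latex
% Hedged sketch: Bopp-shift operators acting on functions f(q,p), as commonly
% used in star-product phase-space formalisms. Conventions are assumed here,
% not taken from the source article.
\begin{align}
  \hat{Q} &= q + \frac{i\hbar}{2}\,\frac{\partial}{\partial p}, &
  \hat{P} &= p - \frac{i\hbar}{2}\,\frac{\partial}{\partial q}, &
  [\hat{Q},\hat{P}] &= i\hbar .
\end{align}
```

With this choice the canonical commutation relation is preserved, which is what allows the uncertainty principle mentioned above to carry over to the phase-space picture.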
Equivalence of representations As was stated before, the first wave-function formulation of quantum mechanics was developed by Torres-Vega and Frederick; its phase-space operators are given by and . These operators are obtained by transforming the operators and (developed in the same article) as and , where . This representation is sometimes associated with the Husimi distribution, and it was shown to coincide with the totality of coherent-state representations for the Heisenberg–Weyl group. The Wigner quasiamplitude, , and the Torres-Vega–Frederick wave-function, , are related by where and . See also Wigner quasiprobability distribution Husimi Q representation Quasiprobability distribution Phase-space formulation References Quantum mechanics
Phase-space wavefunctions
Physics
821
26,548,992
https://en.wikipedia.org/wiki/C21H22O5
The molecular formula C21H22O5 (molar mass: 354.39 g/mol, exact mass: 354.146724 u) may refer to: Xanthohumol, a prenylated chalconoid Isoxanthohumol, a prenylated flavanone Molecular formulas
C21H22O5
Physics,Chemistry
83
17,919,517
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Stanislas%20Cloez
François Stanislas Cloez (24 June 1817 – 12 October 1883) was a French chemist, who authored both as "F. S. Cloez" and "S. Cloez", and is known for his pioneering role in analytical chemistry during the 19th century. He was a founder and later president of the Chemistry Society of France. In 1851, Cloez and Italian chemist Stanislao Cannizzaro, working on collaborative research, prepared cyanamide by the action of ammonia on cyanogen chloride in ethereal solution. In the 1870s, he commenced the identification of the constituents of individual essential oils and their classification into groups according to their suitability for medicinal, industrial and perfumery purposes. He identified the major constituent of eucalyptus oil, which he called "eucalyptol" (now generally known as cineole). In honour of his work on eucalyptus oil Eucalyptus cloeziana (Gympie messmate) is named after him. Cloez also played a role in developing a theory on the origin of life elsewhere in the Solar System. In 1864, Cloez was the first scientist to examine a carbonaceous chondrite, the Orgueil meteorite, after it had fallen in France. Cloez said that its content "would seem to indicate the existence of organized substances in celestial bodies." The Orgueil meteorite was subject to a hoax, when a sample of the meteorite was contaminated with a rush seed. The hoax was discovered in the 1960s when the meteorite was being examined for evidence of extraterrestrial biological material. There is no suggestion in literature that Cloez was party to this hoax. References 1817 births 1883 deaths 19th-century French chemists French organic chemists
François Stanislas Cloez
Chemistry
361
8,666,685
https://en.wikipedia.org/wiki/Molecular%20Koch%27s%20postulates
Molecular Koch's postulates are a set of experimental criteria that must be satisfied to show that a gene found in a pathogenic microorganism encodes a product that contributes to the disease caused by the pathogen. Genes that satisfy molecular Koch's postulates are often referred to as virulence factors. The postulates were formulated by the microbiologist Stanley Falkow in 1988 and are based on Koch's postulates. Postulates As per Falkow's original descriptions, the three postulates are: "The phenotype or property under investigation should be associated with pathogenic members of a genus or pathogenic strains of a species. Specific inactivation of the gene(s) associated with the suspected virulence trait should lead to a measurable loss in pathogenicity or virulence. Reversion or allelic replacement of the mutated gene should lead to restoration of pathogenicity." To apply the molecular Koch's postulates to human diseases, researchers must identify which microbial genes are potentially responsible for symptoms of pathogenicity, often by sequencing the full genome to compare which nucleotides are homologous to the protein-coding genes of other species. Alternatively, scientists can identify which mRNA transcripts are at elevated levels in the diseased organs of infected hosts. Additionally, the tester must identify and verify methods for inactivating and reactivating the gene being studied. In 1996, Fredricks and Relman proposed seven molecular guidelines for establishing microbial disease causation: "A nucleic acid sequence belonging to a putative pathogen should be present in most cases of an infectious disease. Microbial nucleic acids should be found preferentially in those organs or gross anatomic sites known to be diseased (i.e., with anatomic, histologic, chemical, or clinical evidence of pathology) and not in those organs that lack pathology. Fewer, or no, copy numbers of pathogen-associated nucleic acid sequences should occur in hosts or tissues without disease. With resolution of disease (for example, with clinically effective treatment), the copy number of pathogen-associated nucleic acid sequences should decrease or become undetectable. With clinical relapse, the opposite should occur. When sequence detection predates disease, or sequence copy number correlates with severity of disease or pathology, the sequence-disease association is more likely to be a causal relationship. The nature of the microorganism inferred from the available sequence should be consistent with the known biological characteristics of that group of organisms. When phenotypes (e.g., pathology, microbial morphology, and clinical features) are predicted by sequence-based phylogenetic relationships, the meaningfulness of the sequence is enhanced. Tissue-sequence correlates should be sought at the cellular level: efforts should be made to demonstrate specific in-situ hybridization of microbial sequence to areas of tissue pathology and to visible microorganisms or to areas where microorganisms are presumed to be located. These sequence-based forms of evidence for microbial causation should be reproducible." References Epidemiology Microbiology Diseases and disorders Cause (medicine)
Molecular Koch's postulates
Chemistry,Biology,Environmental_science
658
8,849,394
https://en.wikipedia.org/wiki/Launch%20commit%20criteria
Launch commit criteria are the criteria which must be met in order for the countdown and launch of a Space Shuttle or other launch vehicle to continue. These criteria relate to safety issues and the general success of the launch, as opposed to supplemental data. Atlas V Launch commit criteria for Atlas V launches are similar to those used for the Atlas V launch of the Mars Science Laboratory wind at the launch pad exceeds ceiling less than or visibility less than upper-level conditions containing wind shear that could lead to control problems for the launch vehicle. cloud layer greater than thick that extends into freezing temperatures cumulus clouds with tops that extend into freezing temperatures within of the edge of a thunderstorm that is producing lightning for 30 minutes after the last lightning is observed. field mill instrument readings within of the launch pad or the flight path exceed +/- 1,500 volts per meter for 15 minutes after they occur thunderstorm anvil is within of the flight path thunderstorm debris cloud is within or fly through a debris cloud for three hours Do not launch through disturbed weather that has clouds that extend into freezing temperatures and contain moderate or greater precipitation, or launch within of disturbed weather adjacent to the flight path Do not launch through cumulus clouds formed as the result of or directly attached to a smoke plume Falcon 9 NASA has identified the Falcon 9 vehicle cannot be launched under the following conditions. sustained wind at the level of the launch pad in excess of , upper-level conditions containing wind shear that could lead to control problems for the launch vehicle, launch through a cloud layer greater than thick that extends into freezing temperatures, launch within of cumulus clouds with tops that extend into freezing temperatures, within of the edge of a thunderstorm that is producing lightning within 30 minutes after the last lightning is observed, within of an attached thunderstorm anvil cloud, within of disturbed weather clouds that extend into freezing temperatures and contain moderate or greater precipitation, within of a thunderstorm debris cloud, through cumulus clouds formed as the result of or directly attached to a smoke plume. The following should delay launch: delay launch for 15 minutes if field mill instrument readings within of the launch pad exceed +/- 1,500 volts per meter, or +/- 1,000 volts per meter, delay launch for 30 minutes after lightning is observed within of the launch pad or the flight path. Unique for Crew Dragon launches of the Falcon 9: weather downrange has a high chance or is violating splashdown limits (wind, wave, lightning, and precipitation limits) in case of a launch escape Space Shuttle Weather The weather conditions NASA required during countdown and launch were specified for "prior to loading external tank propellant" and "after loading propellant has begun". Weather forecasts were provided by the 45th Weather Squadron at nearby Patrick Air Force Base with concerns such as thunderstorms, winds, low cloud ceilings, or anvil clouds noted in the report. Prior to loading propellant Tanking was not to begin if the 24-hour average temperature had been below , the wind was observed or forecast to exceed for the next three-hour period, or there was a forecast to be greater than a 20% chance of lightning within five nautical miles of the launch pad during the first hour of tanking. 
After propellant loading was underway After tanking began, the countdown must not be continued, nor the Shuttle launched, if any of the following weather criteria were exceeded: Temperature Once propellant loading had begun, the countdown was to be stopped if the temperature remained above for more than 30 consecutive minutes. The minimum temperature the countdown may proceed at was determined by a table of temperatures determined by wind speed and relative humidity ranging from (high humidity, high winds) to (low humidity, low winds). In no case was the space shuttle to be launched if the temperature was degrees or colder. Wind For launch the wind constraints at the launch pad varied slightly for each mission. The peak wind speed allowable was . However, when the wind direction was between 100 degrees and 260 degrees, the peak speed varies and may be as low as . Precipitation None was allowed to exist at the launch pad or within the flight path. References External links Launch Weather Forecast for Cape Canaveral from Patrick Space Force Base Example pre-launch weather report from MAVEN pre-launch press conference Spaceflight
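Launch commit criteria such as those listed above are essentially a checklist of boolean weather rules evaluated at a decision point. As a minimal, hypothetical sketch of how such a checklist could be encoded, the snippet below evaluates a few made-up rules against a single observation record; the field names and threshold values are placeholders for illustration only and are not the actual vehicle limits.

```python
from dataclasses import dataclass

@dataclass
class WeatherObs:
    pad_wind_kts: float            # sustained wind at launch-pad level, knots
    min_since_lightning: float     # minutes since last observed lightning nearby
    dist_to_anvil_nm: float        # distance to nearest thunderstorm anvil, nautical miles
    cloud_layer_in_freezing: bool  # thick cloud layer extending into freezing temperatures

# Hypothetical limits for illustration only; real launch commit criteria are
# vehicle-specific and include many more rules than shown here.
RULES = [
    ("pad wind too high",       lambda o: o.pad_wind_kts > 30.0),
    ("recent lightning nearby", lambda o: o.min_since_lightning < 30.0),
    ("anvil cloud too close",   lambda o: o.dist_to_anvil_nm < 10.0),
    ("cloud layer extends into freezing temperatures",
                                lambda o: o.cloud_layer_in_freezing),
]

def evaluate(obs: WeatherObs) -> tuple[bool, list[str]]:
    """Return (go, violated_rule_names) for a single observation."""
    violations = [name for name, is_violated in RULES if is_violated(obs)]
    return (not violations, violations)

if __name__ == "__main__":
    obs = WeatherObs(pad_wind_kts=22.0, min_since_lightning=45.0,
                     dist_to_anvil_nm=8.0, cloud_layer_in_freezing=False)
    go, violated = evaluate(obs)
    print("GO" if go else f"NO-GO: {', '.join(violated)}")
```

In practice each rule would also carry a hold duration (for example, the 30-minute lightning standoff described above) rather than a simple pass/fail flag.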
Launch commit criteria
Astronomy
864
3,059,692
https://en.wikipedia.org/wiki/Stokes%20radius
The Stokes radius or Stokes–Einstein radius of a solute is the radius of a hard sphere that diffuses at the same rate as that solute. Named after George Gabriel Stokes, it is closely related to solute mobility, factoring in not only size but also solvent effects. A smaller ion with stronger hydration, for example, may have a greater Stokes radius than a larger ion with weaker hydration. This is because the smaller ion drags a greater number of water molecules with it as it moves through the solution. Stokes radius is sometimes used synonymously with effective hydrated radius in solution. Hydrodynamic radius, RH, can refer to the Stokes radius of a polymer or other macromolecule. Spherical case According to Stokes’ law, a perfect sphere traveling through a viscous liquid feels a drag force proportional to the frictional coefficient : where is the liquid's viscosity, is the sphere's drift speed, and is its radius. Because ionic mobility is directly proportional to drift speed, it is inversely proportional to the frictional coefficient: where represents ionic charge in integer multiples of electron charges. In 1905, Albert Einstein found the diffusion coefficient of an ion to be proportional to its mobility constant: where is the Boltzmann constant and is electrical charge. This is known as the Einstein relation. Substituting in the frictional coefficient of a perfect sphere from Stokes’ law yields which can be rearranged to solve for , the radius: In non-spherical systems, the frictional coefficient is determined by the size and shape of the species under consideration. Research applications Stokes radii are often determined experimentally by gel-permeation or gel-filtration chromatography. They are useful in characterizing biological species due to the size-dependence of processes like enzyme-substrate interaction and membrane diffusion. The Stokes radii of sediment, soil, and aerosol particles are considered in ecological measurements and models. They likewise play a role in the study of polymer and other macromolecular systems. See also Born equation Capillary electrophoresis Dynamic light scattering Equivalent spherical diameter Einstein relation (kinetic theory) Ionic radius Ion transport number Molar conductivity References Fluid dynamics Radii
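The derivation sketched in the article above can be summarized by the standard textbook chain of relations below; the symbol names are assumptions of this sketch (drag force, frictional coefficient, viscosity, drift speed, sphere radius, mobility, ionic charge, diffusion coefficient), and conventions may differ slightly from the source.

```latex
% Hedged reconstruction of the standard relations described in the text.
% Assumed symbols: F_d drag force, f frictional coefficient, \eta viscosity,
% s drift speed, a sphere radius, \mu mobility, z e ionic charge,
% D diffusion coefficient, k_B Boltzmann constant, T temperature.
\begin{align}
  F_d &= f\, s, \qquad f = 6\pi\eta a      && \text{(Stokes' law for a sphere)} \\
  \mu &= \frac{z e}{f}                      && \text{(ionic mobility)} \\
  D   &= \frac{\mu\, k_B T}{z e} = \frac{k_B T}{6\pi\eta a}
                                            && \text{(Einstein relation)} \\
  a   &= \frac{k_B T}{6\pi\eta D}           && \text{(Stokes radius)}
\end{align}
```

Measuring the diffusion coefficient (for example by dynamic light scattering or gel filtration) therefore yields the Stokes radius directly from the last line.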
Stokes radius
Chemistry,Engineering
455
17,633,498
https://en.wikipedia.org/wiki/Software%20quality%20assurance%20analyst
A software quality assurance (QA) analyst, also referred to as a software quality analyst or simply a quality assurance (QA) analyst, is an individual who is responsible for applying the principles and practices of software quality assurance throughout the software development life cycle. Software testing is one of many parts of the larger process of QA. Testing is used to detect errors in a product, while QA also fixes the processes that resulted in those errors. Software QA analysts may have professional certification from a software testing certification board, like the International Software Testing Qualifications Board (ISTQB). References Software quality Computer occupations Systems analysis
Software quality assurance analyst
Technology,Engineering
128
71,388,879
https://en.wikipedia.org/wiki/Nanidovirineae
Nanidovirineae is a suborder of viruses in the order Nidovirales, comprising two families. Hosts Ghost sharks and the halfbeak Hyporhamphus sajori serve as natural hosts for species in the suborder. Nanidovirineae were found infecting these fishes only in China. Nanidovirineae is distinguished from its sibling suborders mainly by this reliance on fish hosts, although Tobaniviridae also have some fish as natural hosts. Taxonomy Families Nanghoshaviridae Nanhypoviridae Sibling suborders Arnidovirineae Coronaviridae Mesnidovirineae Monidovirineae Ronidovirineae Tornidovirineae References Nidovirales Virus suborders
Nanidovirineae
Biology
162
36,069,397
https://en.wikipedia.org/wiki/Mozilla%20Open%20Badges
Open Badges are image files that contain verifiable information about learning achievements, based on a group of specifications and open technical standards originally developed by the Mozilla Foundation with funding from the MacArthur Foundation. The Open Badges standard describes a method for packaging information about accomplishments, embedding it into portable image files as a digital badge, and establishing an infrastructure for badge validation. The standard was originally maintained by the Badge Alliance Standard Working Group, but stewardship transitioned officially to the IMS Global Learning Consortium (which rebranded to 1EdTech Consortium in 2022). History In 2011, the Mozilla Foundation announced its plan to develop an open technical standard called Open Badges to create and build a common system for the issuance, collection, and display of digital badges on multiple instructional sites. To launch the Open Badges project, Mozilla and MacArthur engaged with over 300 nonprofit organizations, government agencies and others about informal learning, breaking down education monopolies and fuelling individual motivation. Much of this work was guided by "Open Badges for Lifelong Learning", an early working paper created by Mozilla and the MacArthur Foundation. In 2012, Mozilla launched Open Badges 1.0 and partnered with the City of Chicago to launch The Chicago Summer of Learning (CSOL), a badges initiative to keep local youth ages four to 24 active and engaged during the summer. Institutions and organizations like Purdue University, MOUSE and the UK-based DigitalME adopted badges, and Mozilla saw international interest in badging programs from Australia and Italy to China and Scotland. By 2013, over 1,450 organizations were issuing Open Badges, and Mozilla's partnership with Chicago had grown into the Cities of Learning Initiative, an opportunity to apply CSOL's success across the country. In 2014, Mozilla launched the Badge Alliance, a network of organizations and individuals committed to building the open badging ecosystem and advancing the Open Badges specification. Founding members included Mozilla, the MacArthur Foundation, DigitalME, Sprout Fund, and Blackboard. More than 650 organizations from six continents signed up through the Badge Alliance to contribute to the Open Badges ecosystem. In 2015, the Badge Alliance spun out of Mozilla and became part of the MacArthur Foundation spin-off Collective Shift, a nonprofit devoted to redesigning social systems for a connected world. Later that year, Collective Shift partnered with Concentric Sky to develop Open Badges 2.0. That same year, Concentric Sky launched the open source project Badgr to serve as a reference implementation for Open Badges. The Badgr Server is written in Python using the Django framework; source code is available under version 3 of the GNU Affero General Public License. In early 2016, IMS Global announced its commitment to Open Badges as an interoperable standard for digital credentials, and in late 2016, Mozilla announced that stewardship of the Open Badges standard would transition officially to IMS Global. In late 2018, Mozilla announced that it would retire the Mozilla Backpack program that enabled users to collect and showcase their Open Badge credentials and migrate all users to Concentric Sky's open source Badgr platform. Technical details Open Badges are designed to serve a broad range of digital badge use cases, including both academic and non-academic uses. 
The core Open Badge specification is made up of three types of Badge Objects: Assertion Represents an awarded badge. It contains information about a single badge that belongs to an individual earner. BadgeClass Contains information about the accomplishment(s) a specific badge recognizes. As the same badge may be awarded to many earners, there may be many Assertions that correspond to a single BadgeClass. IssuerOrganization Contains a collection of information about the entity (e.g., person, organization) which issued a badge. Beginning with version 1.1, valid JSON-LD must be used for Open Badges. Version 1.1 also adds Extensions, a structure that follows a standard format for collaboratively extending Badge Objects so that any issuer, earner, or consumer can understand the information added to badges. Any issuer may define and publish Extensions to include new types of metadata in badges. Any other issuer may use the same extensions to publish similar information in a mutually recognizable way. An exploratory prototype draft xAPI vocabulary has been defined so that Open Badges may be referenceable from Experience API activity streams. References External links Open Badges Technical Specification (Version 2.0 Public Draft) MozillaWiki on Badges (Historical) E-learning Cloud standards Open content projects Online content distribution Computer icons Internet culture
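As a concrete illustration of the Badge Objects described in the technical details above, here is a small, hypothetical sketch (written as a Python dictionary) of roughly what a hosted Assertion pointing at a BadgeClass might look like; the field names follow the general shape of the 2.0-era specification, but the exact keys, context URL, and values are illustrative assumptions rather than a normative example.

```python
# Hypothetical sketch of an Open Badges 2.0-style Assertion, expressed as a
# Python dict. Field names approximate the spec; URLs and values are made up.
example_assertion = {
    "@context": "https://w3id.org/openbadges/v2",   # JSON-LD context (assumed)
    "type": "Assertion",
    "id": "https://example.org/assertions/123",     # placeholder URL
    "recipient": {
        "type": "email",
        "hashed": True,
        "identity": "sha256$<hash-of-recipient-email>",  # placeholder value
    },
    "badge": "https://example.org/badges/intro-to-python",  # BadgeClass URL
    "verification": {"type": "hosted"},
    "issuedOn": "2020-01-01T00:00:00Z",
}

# The BadgeClass and Issuer would be separate JSON-LD documents hosted at the
# URLs referenced above; many Assertions can point at a single BadgeClass.
```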
Mozilla Open Badges
Technology
929
32,008,824
https://en.wikipedia.org/wiki/MIZ%20zinc%20finger
In molecular biology, the MIZ-type zinc finger domain is a zinc finger-containing protein domain with homology to the yeast protein Nfi-1. Miz1 is a sequence-specific DNA-binding protein that can function as a positive-acting transcription factor. Miz1 binds to the homeobox protein Msx2, enhancing the specific DNA-binding ability of Msx2. Other proteins containing this domain include the human PIAS family (protein inhibitor of activated STAT). The name MIZ is derived from Msx-interacting zinc finger. The crystal structure of the S. cerevisiae SUMO E3 ligase Siz1, which contains this domain, has been solved. References Protein domains
MIZ zinc finger
Biology
143
863,739
https://en.wikipedia.org/wiki/Protvino
Protvino (Russian: Протвино) is a town in Moscow Oblast, Russia, located about south of Moscow and west of Serpukhov, on the left bank of the Protva River. Population: History Construction of an urban-type settlement intended to house a large high energy physics research laboratory started in 1958, and the Rosatom Institute for High Energy Physics was opened here in 1965. The institute is known for the 70 GeV proton accelerator which was the largest in the world at the time it was launched in 1967, and other physics research. Town status was granted in 1989. The UNK Collider was the last big planned particle accelerator. Among the discoveries made at IHEP are that of antihelium and the Serpukhov cross-section effect. Administrative and municipal status Within the framework of administrative divisions, it is incorporated as Protvino Town Under Oblast Jurisdiction—an administrative unit with the status equal to that of the districts. As a municipal division, Protvino Town Under Oblast Jurisdiction is incorporated as Protvino Urban Okrug. Transport In the city the Protvino railroad station is located, although it is only used for cargo transport. Public transport is provided by buses. Twin towns – sister cities Protvino is twinned with: Antony, France Bowling Green, United States Gomel, Belarus Lahoysk, Belarus Milan, United States Somero, Finland Notable people Nixelpixel (born 1993), feminist and cyber activist Vitali Yelsukov (born 1973), football player Anatoli Bugorski (born 1942), survivor of a particle accelerator accident Inga Kuznetsova (born 1974), surrealist poet and writer References Notes Sources External links Unofficial website of Protvino Institute for High Energy Physics Cities and towns in Moscow Oblast Nuclear research institutes Cities and towns built in the Soviet Union Populated places established in 1958 Naukograds
Protvino
Engineering
395
60,852,153
https://en.wikipedia.org/wiki/De%20novo%20gene%20birth
De novo gene birth is the process by which new genes evolve from non-coding DNA. De novo genes represent a subset of novel genes, and may be protein-coding or instead act as RNA genes. The processes that govern de novo gene birth are not well understood, although several models exist that describe possible mechanisms by which de novo gene birth may occur. Although de novo gene birth may have occurred at any point in an organism's evolutionary history, ancient de novo gene birth events are difficult to detect. Most studies of de novo genes to date have thus focused on young genes, typically taxonomically restricted genes (TRGs) that are present in a single species or lineage, including so-called orphan genes, defined as genes that lack any identifiable homolog. It is important to note, however, that not all orphan genes arise de novo, and instead may emerge through fairly well characterized mechanisms such as gene duplication (including retroposition) or horizontal gene transfer followed by sequence divergence, or by gene fission/fusion. Although de novo gene birth was once viewed as a highly unlikely occurrence, several unequivocal examples have now been described, and some researchers speculate that de novo gene birth could play a major role in evolutionary innovation, morphological specification, and adaptation, probably promoted by their low level of pleiotropy. History As early as the 1930s, J. B. S. Haldane and others suggested that copies of existing genes may lead to new genes with novel functions. In 1970, Susumu Ohno published the seminal text Evolution by Gene Duplication. For some time subsequently, the consensus view was that virtually all genes were derived from ancestral genes, with François Jacob famously remarking in a 1977 essay that "the probability that a functional protein would appear de novo by random association of amino acids is practically zero." In the same year, however, Pierre-Paul Grassé coined the term "overprinting" to describe the emergence of genes through the expression of alternative open reading frames (ORFs) that overlap preexisting genes. These new ORFs may be out of frame with or antisense to the preexisting gene. They may also be in frame with the existing ORF, creating a truncated version of the original gene, or represent 3’ extensions of an existing ORF into a nearby ORF. The first two types of overprinting may be thought of as a particular subtype of de novo gene birth; although overlapping with a previously coding region of the genome, the primary amino-acid sequence of the new protein is entirely novel and derived from a frame that did not previously contain a gene. The first examples of this phenomenon in bacteriophages were reported in a series of studies from 1976 to 1978, and since then numerous other examples have been identified in viruses, bacteria, and several eukaryotic species. The phenomenon of exonization also represents a special case of de novo gene birth, in which, for example, often-repetitive intronic sequences acquire splice sites through mutation, leading to de novo exons. This was first described in 1994 in the context of Alu sequences found in the coding regions of primate mRNAs. Interestingly, such de novo exons are frequently found in minor splice variants, which may allow the evolutionary “testing” of novel sequences while retaining the functionality of the major splice variant(s). Still, it was thought by some that most or all eukaryotic proteins were constructed from a constrained pool of “starter type” exons. 
Using the sequence data available at the time, a 1991 review estimated the number of unique, ancestral eukaryotic exons to be < 60,000, while in 1992 a piece was published estimating that the vast majority of proteins belonged to no more than 1,000 families. Around the same time, however, the sequence of chromosome III of the budding yeast Saccharomyces cerevisiae was released, representing the first time an entire chromosome from any eukaryotic organism had been sequenced. Sequencing of the entire yeast nuclear genome was then completed by early 1996 through a massive, collaborative international effort. In his review of the yeast genome project, Bernard Dujon noted that the unexpected abundance of genes lacking any known homologs was perhaps the most striking finding of the entire project. In 2006 and 2007, a series of studies provided arguably the first documented examples of de novo gene birth that did not involve overprinting. These studies were conducted using the accessory gland transcriptomes of Drosophila yakuba and Drosophila erecta and they identified 20 putative lineage-restricted genes that appeared unlikely to have resulted from gene duplication. Levine and colleagues identified and confirmed five de novo candidate genes specific to Drosophila melanogaster and/or the closely related Drosophila simulans through a rigorous approach that combined bioinformatic and experimental techniques. Since these initial studies, many groups have identified specific cases of de novo gene birth events in diverse organisms. The first de novo gene identified in yeast, BSC4 gene was identified in S. cerevisiae in 2008. This gene shows evidence of purifying selection, is expressed at both the mRNA and protein levels, and when deleted is synthetically lethal with two other yeast genes, all of which indicate a functional role for the BSC4 gene product. Historically, one argument against the notion of widespread de novo gene birth is the evolved complexity of protein folding. Interestingly, Bsc4 was later shown to adopt a partially folded state that combines properties of native and non-native protein folding. In plants, the first de novo gene to be functionally characterized was QQS, an Arabidopsis thaliana gene identified in 2009 that regulates carbon and nitrogen metabolism. The first functionally characterized de novo gene identified in mice, a noncoding RNA gene, was also described in 2009. In primates, a 2008 informatic analysis estimated that 15/270 primate orphan genes had been formed de novo. A 2009 report identified the first three de novo human genes, one of which is a therapeutic target in chronic lymphocytic leukemia. Since this time, a plethora of genome-level studies have identified large numbers of orphan genes in many organisms, although the extent to which they arose de novo, and the degree to which they can be deemed functional, remain debated. Identification Identification of de novo emerging sequences There are two major approaches to the systematic identification of novel genes: genomic phylostratigraphy and synteny-based methods. Both approaches are widely used, individually or in a complementary fashion. Genomic phylostratigraphy Genomic phylostratigraphy involves examining each gene in a focal, or reference, species and inferring the presence or absence of ancestral homologs through the use of the BLAST sequence alignment algorithms or related tools. 
Each gene in the focal species can be assigned an age (aka “conservation level” or “genomic phylostratum”) that is based on a predetermined phylogeny, with the age corresponding to the most distantly related species in which a homolog is detected. When a gene lacks any detectable homolog outside of its own genome, or close relatives, it is said to be a novel, taxonomically restricted or orphan gene. Phylostratigraphy is limited by the set of closely related genomes that are available, and results are dependent on BLAST search criteria. In addition, it is often difficult to determine based on lack of observed sequence similarity whether a novel gene has emerged de novo or has diverged from an ancestral gene beyond recognition, for instance following a duplication event. This was pointed out by a study that simulated the evolution of genes of equal age and found that distant orthologs can be undetectable for rapidly evolving genes. On the other hand, when accounting for changes in the rate of evolution in young regions of genes, a phylostratigraphic approach was more accurate at assigning gene ages in simulated data. Subsequent studies using simulated evolution found that phylostratigraphy failed to detect an ortholog in the most distantly related species for 13.9% of D. melanogaster genes and 11.4% of S. cerevisiae genes. However, a reanalysis of studies that used phylostratigraphy in yeast, fruit flies and humans found that even when accounting for such error rates and excluding difficult-to-stratify genes from the analyses, the qualitative conclusions were unaffected. The impact of phylostratigraphic bias on studies examining various features of de novo genes remains debated. Synteny-based approaches Synteny-based approaches use order and relative positioning of genes (or other features) to identify the potential ancestors of candidate de novo genes. Syntenic alignments are anchored by conserved “markers.” Genes are the most common marker in defining syntenic blocks, although k-mers and exons are also used. Confirmation that the syntenic region lacks coding potential in outgroup species allows a de novo origin to be asserted with higher confidence. The strongest possible evidence for de novo emergence is the inference of the specific "enabling" mutation(s) that created coding potential, typically through the analysis of smaller sequence regions, termed microsyntenic regions, of closely related species. One challenge in applying synteny-based methods is that synteny can be difficult to detect across longer timescales. To address this, various optimization techniques have been created, such as using exons clustered irrespective of their specific order to define syntenic blocks or algorithms that use well-conserved genomic regions to expand microsyntenic blocks. There are also difficulties associated with applying synteny-based approaches to genome assemblies that are fragmented or in lineages with high rates of chromosomal rearrangements, as is common in insects. Synteny-based approaches can be applied to genome-wide surveys of de novo genes and represent a promising area of algorithmic development for gene birth dating. Some have used synteny-based approaches in combination with similarity searches in an attempt to develop standardized, stringent pipelines that can be applied to any group of genomes in an attempt to address discrepancies in the various lists of de novo genes that have been generated. 
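As a minimal illustration of the phylostratigraphic age assignment described above, the sketch below assigns each focal-species gene the oldest phylostratum in which any BLAST hit is detected; the phylostratum ordering, input format, and helper names are hypothetical, and real pipelines add e-value thresholds, self-hit filtering, and the synteny checks discussed above.

```python
# Hypothetical sketch of phylostratigraphic gene dating: a gene's age is the
# most distant phylostratum (node on a fixed phylogeny) containing a homolog.
# Input format and names are assumptions for illustration only.

# Phylostrata ordered from youngest (species-specific) to oldest.
PHYLOSTRATA = ["species", "genus", "family", "phylum", "cellular_life"]

# Map each subject species in the BLAST database to its phylostratum.
SPECIES_TO_STRATUM = {
    "focal_sp": "species",
    "sister_sp": "genus",
    "distant_sp": "phylum",
}

def assign_age(blast_hits: dict[str, list[str]]) -> dict[str, str]:
    """blast_hits maps gene id -> subject species with significant hits.
    Returns gene id -> oldest phylostratum in which a homolog was detected."""
    rank = {stratum: i for i, stratum in enumerate(PHYLOSTRATA)}
    ages = {}
    for gene, hit_species in blast_hits.items():
        strata = [SPECIES_TO_STRATUM[sp] for sp in hit_species
                  if sp in SPECIES_TO_STRATUM]
        # No hits outside the focal species => candidate orphan / TRG.
        ages[gene] = max(strata, key=rank.get, default="species")
    return ages

if __name__ == "__main__":
    hits = {"geneA": ["focal_sp", "sister_sp"], "geneB": ["focal_sp"]}
    print(assign_age(hits))   # geneA -> 'genus', geneB -> 'species'
```

Because homology detection failure makes rapidly evolving genes look younger than they are, such age calls are usually treated as upper bounds and cross-checked against synteny, as noted above.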
Determination of status Even when the evolutionary origin of a particular coding sequence has been established, there is still a lack of consensus about what constitutes a genuine de novo gene birth event. One reason for this is a lack of agreement on whether or not the entirety of the sequence must be non-genic in origin. For protein-coding de novo genes, it has been proposed that de novo genes be divided into subtypes based on the proportion of the ORF in question that was derived from a previously noncoding sequence. Furthermore, for de novo gene birth to occur, the sequence in question must be a gene, which has led to a questioning of what constitutes a gene, with some models establishing a strict dichotomy between genic and non-genic sequences, and others proposing a more fluid continuum. All definitions of genes are linked to the notion of function, as it is generally agreed that a genuine gene should encode a functional product, be it RNA or protein. There are, however, different views of what constitutes function, depending on whether a given sequence is assessed using genetic, biochemical, or evolutionary approaches. The ambiguity of the concept of ‘function’ is especially problematic for the de novo gene birth field, where the objects of study are often rapidly evolving. To address these challenges, the Pittsburgh Model of Function deconstructs ‘function’ into five meanings to describe the different properties that are acquired by a locus undergoing de novo gene birth: Expression, Capacities, Interactions, Physiological Implications, and Evolutionary Implications. It is generally accepted that a genuine de novo gene is expressed in at least some context, allowing selection to operate, and many studies use evidence of expression as an inclusion criterion in defining de novo genes. The expression of sequences at the mRNA level may be confirmed individually through techniques such as quantitative PCR, or globally through RNA sequencing (RNA-seq). Similarly, expression at the protein level can be determined with high confidence for individual proteins using techniques such as mass spectrometry or western blotting, while ribosome profiling (Ribo-seq) provides a global survey of translation in a given sample. Ideally, to confirm that a gene arose de novo, a lack of expression of the syntenic region in outgroup species would also be demonstrated. Genetic approaches that detect a specific phenotype or change in fitness upon disruption of a particular sequence are useful for inferring function. Other experimental approaches, including screens for protein-protein and/or genetic interactions, may also be employed to confirm a biological effect for a particular de novo ORF. Evolutionary approaches may be employed to infer the existence of a molecular function from computationally derived signatures of selection. In the case of TRGs, one common signature of selection is the ratio of nonsynonymous to synonymous substitutions (dN/dS ratio), calculated from different species of the same taxon. Similarly, in the case of species-specific genes, polymorphism data may be used to calculate a pN/pS ratio from different strains or populations of the focal species. Given that young, species-specific de novo genes lack deep conservation by definition, detecting statistically significant deviations from 1 can be difficult without an unrealistically large number of sequenced strains/populations. 
An example of this can be seen in Mus musculus, where three very young de novo genes lack signatures of selection despite well-demonstrated physiological roles. For this reason, pN/pS approaches are often applied to groups of candidate genes, allowing researchers to infer that at least some of them are evolutionarily conserved, without being able to specify which. Other signatures of selection, such as the degree of nucleotide divergence within syntenic regions, conservation of ORF boundaries, or for protein-coding genes, a coding score based on nucleotide hexamer frequencies, have instead been employed. Prevalence Estimates of numbers Frequency and number estimates of de novo genes in various lineages vary widely and are highly dependent on methodology. Studies may identify de novo genes by phylostratigraphy/BLAST-based methods alone, or may employ a combination of computational techniques, and may or may not assess experimental evidence for expression and/or biological role. Furthermore, genome-scale analyses may consider all or most ORFs in the genome, or may instead limit their analysis to previously annotated genes. The D. melanogaster lineage is illustrative of these differing approaches. An early survey using a combination of BLAST searches performed on cDNA sequences along with manual searches and synteny information identified 72 new genes specific to D. melanogaster and 59 new genes specific to three of the four species in the D. melanogaster species complex. This report found that only 2/72 (~2.8%) of D. melanogaster-specific new genes and 7/59 (~11.9%) of new genes specific to the species complex were derived de novo, with the remainder arising via duplication/retroposition. Similarly, an analysis of 195 young (<35 million years old) D. melanogaster genes identified from syntenic alignments found that only 16 had arisen de novo. In contrast, an analysis focused on transcriptomic data from the testes of six D. melanogaster strains identified 106 fixed and 142 segregating de novo genes. For many of these, ancestral ORFs were identified but were not expressed. A newer study found that up to 39 % of orphan genes in the Drosophila clade may have emerged de novo, as they overlap with non-coding regions of the genome. Highlighting the differences between inter- and intra-species comparisons, a study in natural Saccharomyces paradoxus populations found that the number of de novo polypeptides identified more than doubled when considering intra-species diversity. In primates, one early study identified 270 orphan genes (unique to humans, chimpanzees, and macaques), of which 15 were thought to have originated de novo. Later reports identified many more de novo genes in humans alone that are supported by transcriptional and proteomic evidence. Studies in other lineages/organisms have also reached different conclusions with respect to the number of de novo genes present in each organism, as well as the specific sets of genes identified. A sample of these large-scale studies is described in the table below. Generally speaking, it remains debated whether duplication and divergence or de novo gene birth represent the dominant mechanism for the emergence of new genes, in part because de novo genes are likely to both emerge and be lost more frequently than other young genes. In a study on the origin of orphan genes in 3 different eukaryotic lineages, authors found that on average only around 30% of orphan genes can be explained by sequence divergence. 
Dynamics It is important to distinguish between the frequency of de novo gene birth and the number of de novo genes in a given lineage. If de novo gene birth is frequent, it might be expected that genomes would tend to grow in their gene content over time; however, the gene content of genomes is usually relatively stable. This implies that a frequent gene death process must balance de novo gene birth, and indeed, de novo genes are distinguished by their rapid turnover relative to established genes. In support of this notion, recently emerged Drosophila genes are much more likely to be lost, primarily through pseudogenization, with the youngest orphans being lost at the highest rate; this is despite the fact that some Drosophila orphan genes have been shown to rapidly become essential. A similar trend of frequent loss among young gene families was observed in the nematode genus Pristionchus. Similarly, an analysis of five mammalian transcriptomes found that most ORFs in mice were either very old or species specific, implying frequent birth and death of de novo transcripts. A comparable trend could be shown by further analyses of six primate transcriptomes. In wild S. paradoxus populations, de novo ORFs emerge and are lost at similar rates. Nevertheless, there remains a positive correlation between the number of species-specific genes in a genome and the evolutionary distance from its most recent ancestor. A rapid gain and loss of de novo genes was also found on a population level by analyzing nine natural three-spined stickleback populations. In addition to the birth and death of de novo genes at the level of the ORF, mutational and other processes also subject genomes to constant “transcriptional turnover”. One study in murines found that while all regions of the ancestral genome were transcribed at some point in at least one descendant, the portion of the genome under active transcription in a given strain or subspecies is subject to rapid change. The transcriptional turnover of noncoding RNA genes is particularly fast compared to coding genes. Examples de novo genes Features General Features Recently emerged de novo genes differ from established genes in a number of ways. Across a broad range of species, young and/or taxonomically restricted genes have been reported to be shorter in length than established genes, more positively charged, faster evolving, and to be less expressed. Although these trends could be a result of homology detection bias, a reanalysis of several studies that accounted for this bias found that the qualitative conclusions reached were unaffected. Another feature includes the tendency for young genes to have their hydrophobic amino acids more clustered near one another along the primary sequence. The expression of young genes has also been found to be more tissue- or condition-specific than that of established genes. In particular, relatively high expression of de novo genes was observed in male reproductive tissues in Drosophila, stickleback, mice, and humans, and, in the human brain. In animals with adaptive immune systems, higher expression in the brain and testes may be a function of the immune-privileged nature of these tissues. An analysis in mice found specific expression of intergenic transcripts in the thymus and spleen (in addition to the brain and testes). It has been proposed that in vertebrates de novo transcripts must first be expressed in tissues lacking immune cells before they can be expressed in tissues that have immune surveillance. 
Evolutionary rate For sequence evolution, dN/dS analysis studies often indicate that de novo genes evolve at a higher rate compared to other genes. For expression evolution and structural evolution, quantitative studies across different evolutionary ages or phylostratigraphic branches are very few. Features that promote de novo gene birth It is also of interest to compare features of recently emerged de novo genes to the pool of non-genic ORFs from which they emerge. Theoretical modeling has shown that such differences are the product both of selection for features that increase the likelihood of functionalization, and of neutral evolutionary forces that influence allelic turnover. Experiments in S. cerevisiae showed that predicted transmembrane domains were strongly associated with beneficial fitness effects when young ORFs were overexpressed, but not when established (older) ORFs were overexpressed. Experiments in E. coli showed that random peptides tended to have more benign effects when they were enriched for amino acids that were small and that promoted intrinsic structural disorder. Lineage-dependent features Features of de novo genes can depend on the species or lineage being examined. This appears to be partly a result of varying GC content among genomes and of the fact that young genes bear more similarity to non-genic sequences from the genome in which they arose than do established genes. Features in the resulting protein, such as the percentage of transmembrane residues and the relative frequency of various predicted secondary structural features, show a strong GC dependency in orphan genes, whereas in more ancient genes these features are only weakly influenced by GC content. The relationship between gene age and the amount of predicted intrinsic structural disorder (ISD) in the encoded proteins has been subject to considerable debate. It has been claimed that ISD is also a lineage-dependent feature, exemplified by the fact that in organisms with relatively high GC content, ranging from D. melanogaster to the parasite Leishmania major, young genes have high ISD, while in a low GC genome such as budding yeast, several studies have shown that young genes have low ISD. However, a study that excluded young genes with dubious evidence for functionality, defined in binary terms as being under selection for gene retention, found that the remaining young yeast genes have high ISD, suggesting that the yeast result may be due to contamination of the set of young genes with ORFs that do not meet this definition, and hence are more likely to have properties that reflect GC content and other non-genic features of the genome. Beyond the very youngest orphans, this study found that ISD tends to decrease with increasing gene age, and that this is primarily due to amino acid composition rather than GC content. Within shorter time scales, using de novo genes that have the most validation suggests that younger genes are more disordered in Lachancea, but less disordered in Saccharomyces. Intrinsic structural disorder and aggregation propensity did not show significant differences with age in some studies of mammals and primates, but did in other studies of mammals. One large study of the entire Pfam protein domain database showed enrichment of younger protein domains for disorder-promoting amino acids across animals, but enrichment on the basis of amino acid availability in plants. Role of epigenetic modifications An examination of de novo genes in A. 
thaliana found that they are both hypermethylated and generally depleted of histone modifications. In agreement with either the proto-gene model or contamination with non-genes, methylation levels of de novo genes were intermediate between established genes and intergenic regions. The methylation patterns of these de novo genes are stably inherited, and methylation levels were highest, and most similar to established genes, in de novo genes with verified protein-coding ability. In the pathogenic fungus Magnaporthe oryzae, less conserved genes tend to have methylation patterns associated with low levels of transcription. A study in yeasts also found that de novo genes are enriched at recombination hotspots, which tend to be nucleosome-free regions. In Pristionchus pacificus, orphan genes with confirmed expression display chromatin states that differ from those of similarly expressed established genes. Orphan gene start sites have epigenetic signatures that are characteristic of enhancers, in contrast to conserved genes that exhibit classical promoters. Many unexpressed orphan genes are decorated with repressive histone modifications, while a lack of such modifications facilitates transcription of an expressed subset of orphans, supporting the notion that open chromatin promotes the formation of novel genes. Structural evolution De novo proteins typically exhibit less well-defined secondary and three-dimensional structures, often lacking rigid folding but having extensive disordered regions. Quantitative analyses are still lacking on the evolution of secondary structural elements and tertiary structures over time. As structure is usually more conserved than sequence, comparing structures between orthologs could provide deeper insights into de novo gene emergence and evolution and help to confirm these genes as true de novo genes. Nevertheless, so far only very few de novo proteins have been structurally and functionally characterized, especially due to problems with protein purification and subsequent stability. Progress has been made using different purification tags, cell types and chaperones. The ‘antifreeze glycoprotein’ (AFGP) in Arctic codfishes prevents their blood from freezing in Arctic waters. Bsc4, a short non-essential de novo protein in yeast, has been shown to be built mainly of β-sheets and to have a hydrophobic core. It is associated with DNA repair under nutrient-deficient conditions. The Drosophila de novo protein Goddard was characterized for the first time in 2017. Knockdown Drosophila melanogaster male flies were not able to produce sperm. Recently, it was shown that this defect was due to the failure of individualization of elongated spermatids. By using computational phylogenomic and structure predictions, experimental structural analyses, and cell biological assays, it was proposed that half of Goddard's structure is disordered and the other half is composed of alpha-helical amino acids. These analyses also indicated that Goddard's orthologs show similar results. Goddard's structure therefore appears to have been mainly conserved since its emergence. Mechanisms Pervasive expression With the development of technologies such as RNA-seq and Ribo-seq, eukaryotic genomes are now known to be pervasively transcribed and translated. Many ORFs that are either unannotated, or annotated as long non-coding RNAs (lncRNAs), are translated at some level, either in a condition- or tissue-specific manner. 
Though infrequent, these translation events expose non-genic sequence to selection. This pervasive expression forms the basis for several models describing de novo gene birth. It has been speculated that the epigenetic landscape of de novo genes in the early stages of formation may be particularly variable between and among populations, resulting in variable gene expression thereby allowing young genes to explore the “expression landscape.” The QQS gene in A. thaliana is one example of this phenomenon; its expression is negatively regulated by DNA methylation that, while heritable for several generations, varies widely in its levels both among natural accessions and within wild populations. Epigenetics are also largely responsible for the permissive transcriptional environment in the testes, particularly through the incorporation into nucleosomes of non-canonical histone variants that are replaced by histone-like protamines during spermatogenesis. Intergenic ORFs as elementary structural modules Analysis of the fold potential diversity shows that the majority of the amino acid sequences encoded by the intergenic ORFs of S. cerevisiae are predicted to be foldable. More importantly, these amino acid sequences with folding potential can serve as elementary building blocks for de novo genes or integrate into pre-existing genes. Order of events For birth of a de novo protein-coding gene to occur, a non-genic sequence must both be transcribed and acquire an ORF before becoming translated. These events could occur in either order, and there is evidence supporting both an “ORF first” and a “transcription first” model. An analysis of de novo genes that are segregating in D. melanogaster found that sequences that are transcribed had similar coding potential to the orthologous sequences from lines lacking evidence of transcription. This finding supports the notion that many ORFs can exist prior to being transcribed. The antifreeze glycoprotein gene AFGP, which emerged de novo in Arctic codfishes, provides a more definitive example in which the de novo emergence of the ORF was shown to precede the promoter region. Furthermore, putatively non-genic ORFs long enough to encode functional peptides are numerous in eukaryotic genomes, and expected to occur at high frequency by chance. Through tracing the evolution history of ORF sequences and transcription activation of human de novo genes, a study showed that some ORFs were ready to confer biological significance upon their birth. At the same time, transcription of eukaryotic genomes is far more extensive than previously thought, and there are documented examples of genomic regions that were transcribed prior to the appearance of an ORF that became a de novo gene. The proportion of de novo genes that are protein-coding is unknown, but the appearance of “transcription first” has led some to posit that protein-coding de novo genes may first exist as RNA gene intermediates. The case of bifunctional RNAs, which are both translated and function as RNA genes, shows that such a mechanism is plausible. The two events may occur simultaneously when chromosomal rearrangement is the event that precipitates gene birth. Models Several theoretical models and possible mechanisms of de novo gene birth have been described. The models are generally not mutually exclusive, and it is possible that multiple mechanisms may give rise to de novo genes. 
An example is the type III antifreeze protein gene, which originates from an old sialic acid synthase (SAS) gene, in an Antarctic zoarcid fish. “Out of Testis” hypothesis An early case study of de novo gene birth, which identified five de novo genes in D. melanogaster, noted preferential expression of these genes in the testes, and several additional de novo genes were identified using transcriptomic data derived from the testes and male accessory glands of D. yakuba and D. erecta. This is in agreement with other studies that showed there is rapid evolution of genes related to reproduction across a range of lineages, suggesting that sexual selection may play a key role in adaptive evolution and de novo gene birth. A subsequent large-scale analysis of six D. melanogaster strains identified 248 testis-expressed de novo genes, of which ~57% were not fixed. A recent study on twelve Drosophila species additionally identified a higher proportion of de novo genes with testis-biased expression compared to annotated proteome. It has been suggested that the large number of de novo genes with male-specific expression identified in Drosophila is likely due to the fact that such genes are preferentially retained relative to other de novo genes, for reasons that are not entirely clear. Interestingly, two putative de novo genes in Drosophila (Goddard and Saturn) were shown to be required for normal male fertility. A genetic screen of over 40 putative de novo genes with testis-enriched expression in Drosophila melanogaster revealed that one of the de novo genes, atlas, was required for proper chromatin condensation during the final stages of spermatogenesis in male. atlas evolved from the fusion of a protein-coding gene that arose at the base of Drosophila genus and a conserved non-coding RNA. Comparative analysis of the transcriptomes of testis and accessory glands, a somatic tissue of males that is important for fertility, of D. melanogaster suggests that de novo genes make greater contribution to the transcriptomic complexity of testis as compared to accessory glands. Single-cell RNA-seq of D. melanogaster testis revealed that the expression pattern of de novo genes was biased toward early spermatogenesis. In humans, a study that identified 60 human-specific de novo genes found that their average expression, as measured by RNA-seq, was highest in the testes. Another study looking at mammalian-specific genes more generally also found enriched expression in the testes. Transcription in mammalian testes is thought to be particularly promiscuous, due in part to elevated expression of the transcription machinery and an open chromatin environment. Along with the immune-privileged nature of the testes, this promiscuous transcription is thought to create the ideal conditions for the expression of non-genic sequences required for de novo gene birth. Testes-specific expression seems to be a general feature of all novel genes, as an analysis of Drosophila and vertebrate species found that young genes showed testes-biased expression regardless of their mechanism of origination. Preadaptation model The preadaptation model of de novo gene birth uses mathematical modeling to show that when sequences that are normally hidden are exposed to weak or shielded selection, the resulting pool of “cryptic” sequences (i.e. 
proto-genes) can be purged of “self-evidently deleterious” variants, such as those prone to lead to protein aggregation, and thus enriched in potential adaptations relative to a completely non-expressed and unpurged set of sequences. This revealing and purging of cryptic deleterious non-genic sequences is a byproduct of pervasive transcription and translation of intergenic sequences, and is expected to facilitate the birth of functional de novo protein-coding genes. This is because by eliminating the most deleterious variants, what is left is, by a process of elimination, more likely to be adaptive than expected from random sequences. Using the evolutionary definition of function (i.e. that a gene is by definition under purifying selection against loss), the preadaptation model assumes that “gene birth is a sudden transition to functionality” that occurs as soon as an ORF acquires a net beneficial effect. In order to avoid being deleterious, newborn genes are expected to display exaggerated versions of genic features associated with the avoidance of harm. This is in contrast to the proto-gene model, which expects newborn genes to have features intermediate between old genes and non-genes. The mathematics of the preadaptation model assume that the distribution of fitness effects is bimodal, with new sequences of mutations tending to break something or tinker, but rarely in between. Following this logic, populations may either evolve local solutions, in which selection operates on each individual locus and a relatively high error rate is maintained, or a global solution with a low error rate which permits the accumulation of deleterious cryptic sequences. De novo gene birth is thought to be favored in populations that evolve local solutions, as the relatively high error rate will result in a pool of cryptic variation that is “preadapted” through the purging of deleterious sequences. Local solutions are more likely in populations with a high effective population size. In support of the preadaptation model, an analysis of ISD in mice and yeast found that young genes have higher ISD than old genes, while random non-genic sequences tend to show the lowest levels of ISD. Although the observed trend may have partly resulted from a subset of young genes derived by overprinting, higher ISD in young genes is also seen among overlapping viral gene pairs. With respect to other predicted structural features such as β-strand content and aggregation propensity, the peptides encoded by proto-genes are similar to non-genic sequences and categorically distinct from canonical genes. Proto-gene model This proto-gene model agrees with the preadaptation model about the importance of pervasive expression, and refers to the set of pervasively expressed sequences that do not meet all definitions of a gene as “proto-genes”. In contrast to the preadaptation model, the proto-gene model, suggests newborn genes have features intermediate between old genes and non-genes. Specifically this model envisages a more gradual process under selection from non-genic to genic state, rejecting the binary classification of gene and non-gene. In an extension of the proto-gene model, it has been proposed that as proto-genes become more gene-like, their potential for adaptive change gives way to selected effects; thus, the predicted impact of mutations on fitness is dependent on the evolutionary status of the ORF. This notion is supported by the fact that overexpression of established ORFs in S. 
cerevisiae tends to be less beneficial (and more harmful) than does overexpression of emerging ORFs. Several features of ORFs correlate with ORF age as determined by phylostratigraphic analysis, with young ORFs having properties intermediate between old ORFs and non-genes; this has been taken as evidence in favor of the proto-gene model, in which proto-gene state is a continuum . This evidence has been criticized, because the same apparent trends are also expected under a model in which identity as a gene is a binary. Under this model, when each age group contains a different ratio of genes vs. non-genes, Simpson's paradox can generate correlations in the wrong direction. Grow slow and moult model The “grow slow and moult” model describes a potential mechanism of de novo gene birth, particular to protein-coding genes. In this scenario, existing protein-coding ORFs expand at their ends, especially their 3’ ends, leading to the creation of novel N- and C-terminal domains. Novel C-terminal domains may first evolve under weak selection via occasional expression through read-through translation, as in the preadaptation model, only later becoming constitutively expressed through a mutation that disrupts the stop codon. Genes experiencing high translational readthrough tend to have intrinsically disordered C-termini. Furthermore, existing genes are often close to repetitive sequences that encode disordered domains. These novel, disordered domains may initially confer some non-specific binding capability that becomes gradually refined by selection. Sequences encoding these novel domains may occasionally separate from their parent ORF, leading or contributing to the creation of a de novo gene. Interestingly, an analysis of 32 insect genomes found that novel domains (i.e. those unique to insects) tend to evolve fairly neutrally, with only a few sites under positive selection, while their host proteins remain under purifying selection, suggesting that new functional domains emerge gradually and somewhat stochastically. Escape from adaptive conflict The evolutionary model escape from adaptive conflict (EAC) proposes a possible way for new gene duplication to be fixed: conflict due to contrasting function within a single gene drives the fixation of new duplication. Pleiotropy-barrier model The 'pleiotropy-barrier' model suggests that newly evolved genes, including de novo genes and duplication-related genes, could facilitate evolutionary innovation or evolution of specific functions due to their low (or no) pleiotropic effect, when facing new selective force, based on observations from human gene-disease data. Human health In addition to its significance for the field of evolutionary biology, de novo gene birth has implications for human health. It has been speculated that novel genes, including de novo genes, may play an outsized role in species-specific traits; however, many species-specific genes lack functional annotation. Nevertheless, there is evidence to suggest that human-specific de novo genes are involved in diseases such as cancer. NYCM, a de novo gene unique to humans and chimpanzees, regulates the pathogenesis of neuroblastomas in mouse models, and the primate-specific PART1, an lncRNA gene, has been identified as both a tumor suppressor and an oncogene in different contexts. Several other human- or primate-specific de novo genes, including PBOV1, GR6, MYEOV, ELFN1-AS1, and CLLU1, are also linked to cancer. 
Some have even suggested considering tumor-specifically expressed, evolutionary novel genes as their own class of genetic elements, noting that many such genes are under positive selection and may be neofunctionalized in the context of tumors. The specific expression of many de novo genes in the human brain also raises the intriguing possibility that de novo genes influence human cognitive traits. One such example is FLJ33706, a de novo gene that was identified in GWAS and linkage analyses for nicotine addiction and shows elevated expression in the brains of Alzheimer's patients. Generally speaking, expression of young, primate-specific genes is enriched in the fetal human brain relative to the expression of similarly young genes in the mouse brain. Most of these young genes, several of which originated de novo, are expressed in the neocortex, which is thought to be responsible for many aspects of human-specific cognition. Many of these young genes show signatures of positive selection, and functional annotations indicate that they are involved in diverse molecular processes, but are enriched for transcription factors. In addition to their roles in cancer processes, de novo originated human genes have been implicated in the maintenance of pluripotency and in immune function. The preferential expression of de novo genes in the testes is also suggestive of a role in reproduction. Given that the function of many de novo human genes remains uncharacterized, it seems likely that an appreciation of their contribution to human health and development will continue to grow. Genome-scale studies of orphan and de novo genes in various lineages. Note: For purposes of this table, genes are defined as orphan genes (when species-specific) or TRGs (when limited to a closely related group of species) when the mechanism of origination has not been investigated, and as de novo genes when de novo origination has been inferred, irrespective of method of inference. The designation of de novo genes as “candidates” or “proto-genes” reflects the language used by the authors of the respective studies. See also Molecular evolution Population genetics Evolvability Overlapping gene Orphan gene References Genes Modification of genetic information
De novo gene birth
Biology
8,808
53,821,188
https://en.wikipedia.org/wiki/Startle-evoked%20movement
Startle-evoked movement (SEM or startReact) is the involuntary initiation of a planned action in response to a startling stimulus. While the classic startle reflex involves involuntary protective movements, SEMs can be a variety of arm, hand and leg actions, including wrist flexion and rising onto tiptoes. SEMs are performed faster than voluntary movements, but retain the same muscle activation characteristics. SEM has been used to study how the brain, spinal cord and brainstem can interact to produce movement, and provides a potential avenue of exploration for rehabilitation strategies for those with neurological impairments. Neurophysiology Reticulospinal tract connection Muscles lacking reticulospinal tract inputs are not susceptible to SEM. The ability to elicit SEM has been used as evidence for reticulospinal tract connections to the muscles governing grasp of the human hand. Neurological impairment People who have suffered cortical damage such as a stroke are capable of performing SEM. In one experiment, SEM caused stroke survivors to perform arm movements as fast as unimpaired people, despite being slower when performing the same action voluntarily. Furthermore, people with pure hereditary spastic paraplegia, a condition affecting the corticospinal tract, are susceptible to SEM as well. Involuntary initiation SEM is typified by a reduction in the time to perform an action. Voluntary arm extension, for example, occurs roughly 170 milliseconds after a "Go" signal, while SEMs for arm extension occur between 65 and 77 milliseconds. Due to this faster reaction time, and considering conduction velocity, cortical involvement for these movements is unlikely. Movement preparation In order for SEM to occur, a subject must be waiting to perform an action when a startling stimulus is encountered. Typically, this is achieved by first presenting the subject with a "Ready" signal, indicating that the subject should prepare to conduct a specified action; then playing a startling acoustic stimulus (SAS) before the "Go" signal the subject is anticipating. References Neurophysiology Ethology Reflexes
Startle-evoked movement
Biology
431
63,651,657
https://en.wikipedia.org/wiki/Mechanoreceptors%20%28in%20plants%29
A mechanoreceptor is a sensory organ or cell that responds to mechanical stimulation such as touch, pressure, vibration, and sound from both the internal and external environment. Mechanoreceptors are well-documented in animals and are integrated into the nervous system as sensory neurons. While plants do not have nerves or a nervous system like animals, they also contain mechanoreceptors that perform a similar function. Mechanoreceptors detect mechanical stimuli originating from within the plant (intrinsic) and from the surrounding environment (extrinsic). The ability to sense vibrations, touch, or other disturbance is an adaptive response to herbivory and attack so that the plant can appropriately defend itself against harm. Mechanoreceptors can be organized into three levels: molecular, cellular, and organ-level. Mechanism of sensation Signal There is a growing body of knowledge about how mechanoreceptors in plant cells receive information about a mechanical stimulation, but there are many gaps in the current understanding. While a complete model cannot yet be formed, we do know much of what is happening at the plasma membrane. The plasma membrane is full of membrane proteins and ion channels. One such type of ion channel is the mechanosensitive (MS) ion channel. MS channels are different from other membrane proteins in that their primary gating stimulus is force, such that they open conduits for ions to pass through the membrane in response to mechanical stimuli. This system allows physical force to create an ion flux, which then results in signal integration and response (as detailed below). MS channels are hypothesized to be the working mechanism in the perception of gravity, vibration, touch, hyper-osmotic and hypo-osmotic stress, pathogenic invasion, and interaction with commensal microbes. MS channels have been discovered across a diverse array of genera as well as in different plant organs, like leaves and stems, and localize to diverse cellular membranes. Not only can mechanoreceptors be present within the plasma membrane of cells, but they can also exist as whole cells whose primary purpose is to detect mechanical stimuli. A well-known example is the trigger hairs on the Venus flytrap. When repeatedly touched within a certain time span, the plant will snap shut, entrapping and digesting its prey. Integration and response Once the plant perceives a mechanical stimulus via mechanoreceptor cells or mechanoreceptor proteins within the plasma membrane of a cell, the resulting ion flux is integrated through signaling pathways resulting in a response. The signaling cascade (integration) and response are dependent on the type of stimulus and the particular species. For instance, it can manifest as a change in turgor pressure resulting in movement, secretion of defense chemicals, and the closing of stomata. Examples Venus flytrap Dionaea muscipula (Venus flytrap) is known to rapidly close its lobes when touched to capture and digest its prey. The unique carnivorous plant has extremely sensitive mechanosensory hairs located on the surface of its trap. When one hair is touched by its prey, anion channels will open and depolarize the plasma membrane, thus firing an action potential (AP) through the phloem. The AP results in the accumulation of Ca2+ ions. If the hairs are then left alone, the Ca2+ will dissipate. If another hair is stimulated within 30 seconds of the first hair, however, another AP will fire and the [Ca2+] will reach a threshold triggering changes in cell turgor in the petiole.
This will cause the trap to swiftly snap shut, trapping the prey inside its lobes. As the prey moves around within the trap, it bumps the mechanosensory hairs further, thus inducing repetitive firing of APs. Just three APs (including the initial two) initiate the production of jasmonic acid hormone signaling pathways, creating an airtight seal, beginning the secretion of digestive enzymes and up-regulating the production of transporters for nutrient uptake. Arabidopsis thaliana When caterpillars chew on leaves, they create a very specific vibrational pattern. Arabidopsis thaliana plants have adapted to elicit chemical defenses when they detect these mechanical vibration patterns to protect themselves from continued herbivory. While the signal perception, integration, and response for this system have not yet been thoroughly researched, the general guidelines for mechanosensory stimulation are thought to hold true. Mechanoreception is thought to start with the triggering of mechanosensors in the cell wall and/or plasma membrane of the leaf cells, causing ion fluxes of Ca2+, Reactive Oxygen Species (ROS), and H−. These fluxes initiate signaling pathways which involve many plant hormones and rapid expression of genes that respond early to many plant stresses. These genes up-regulate the production of chemical defense molecules like glucosinolates, polyphenol anthocyanins and a suite of volatile compounds. The plant not only secretes these chemicals in the leaf that is being attacked, but also in other leaves on the plant. It is hypothesized that while there are other signals that inform the plant of herbivory, it is the mechanical vibrations that are eliciting the whole-plant response. References Plant physiology
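The counting-and-timing behaviour of the flytrap described above can be illustrated with a toy model. The following is a minimal sketch, not a published model: the 30-second memory window follows the text, but the thresholds, state names, and the function flytrap_response are assumptions made purely for illustration.

# Toy model of the trigger-hair logic described above: two action potentials
# within ~30 s shut the trap; a third AP initiates jasmonic acid signaling
# and digestion. Thresholds and names are illustrative assumptions only.
def flytrap_response(stimulus_times, memory_window=30.0):
    """Return the trap's state after a sequence of trigger-hair touches.

    stimulus_times: times (seconds) at which a trigger hair is touched.
    memory_window: how long the Ca2+ signal from one AP persists.
    """
    ap_times = []          # times at which an action potential fired
    state = "open"
    for t in sorted(stimulus_times):
        ap_times.append(t)                      # each touch fires one AP
        # count APs whose Ca2+ contribution has not yet dissipated
        recent = [s for s in ap_times if t - s <= memory_window]
        if state == "open" and len(recent) >= 2:
            state = "closed"                    # threshold [Ca2+] reached
        if state == "closed" and len(ap_times) >= 3:
            state = "digesting"                 # JA signaling, digestive enzymes
    return state

# Touches at 0 s and 12 s close the trap; a third touch at 20 s starts digestion.
print(flytrap_response([0.0, 12.0, 20.0]))   # -> "digesting"
print(flytrap_response([0.0, 50.0]))         # -> "open" (the first signal dissipated)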
Mechanoreceptors (in plants)
Biology
1,105
615,574
https://en.wikipedia.org/wiki/Berm
A berm is a level space, shelf, or raised barrier (usually made of compacted soil) separating areas in a vertical way, especially partway up a long slope. It can serve as a terrace road, track, path, a fortification line, a border/separation barrier for navigation, good drainage, industry, or other purposes. Etymology The word is from Middle Dutch and came into usage in English via French. Military use History In medieval military engineering, a berm (or berme) was a level space between a parapet or defensive wall and an adjacent steep-walled ditch or moat. It was intended to reduce soil pressure on the walls of the excavated part to prevent its collapse. It also meant that debris dislodged from fortifications would not fall into (and fill) a ditch or moat. In the trench warfare of World War I, the name was applied to a similar feature at the lip of a trench, which served mainly as an elbow-rest for riflemen. Modern usage In modern military engineering, a berm is the earthen or sod wall or parapet, especially a low earthen wall adjacent to a ditch. The digging of the ditch (often by a bulldozer or military engineering vehicle) can provide the soil from which the berm is constructed. Walls constructed in this manner are an obstacle to vehicles, including most armoured fighting vehicles but are easily crossed by infantry. Because of the ease of construction, such walls can be made hundreds or thousands of kilometres long. A prominent example of such a berm is the Moroccan Western Sahara Wall. Erosion control Berms are also used to control soil erosion and sedimentation by reducing the rate of surface runoff. The berms either reduce the velocity of the water, or direct water to areas that are not susceptible to erosion, thereby reducing the adverse effects of running water on exposed topsoil. Following the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, the construction of berms designed to prevent oil from reaching the fragile Louisiana wetlands (which would result in massive erosion) was proposed early on, and was officially approved by the federal government in mid-June, 2010, after numerous failures to stop and contain the oil leak with more advanced technologies. Geography In coastal geography, a berm is a bank of sand or gravel ridge parallel to the shoreline and a few tens of centimetres high, created by wave action throwing material beyond the average level of the sea. House construction Earth is piled up against exterior walls and packed, sloping down away from the house. The roof may or may not be fully earth covered, and windows/openings may occur on one or more sides of the shelter. Due to the building being above ground, fewer moisture problems are associated with earth berming in comparison to underground/fully recessed construction. Other applications For general applications, a berm is a physical, stationary barrier of some kind. For example, in highway construction, a berm is a noise barrier constructed of earth, often landscaped, running along a highway to protect adjacent land users from noise pollution. The shoulder of a road is also called a berm and in New Zealand the word describes a publicly owned grassed nature strip sometimes planted with trees alongside urban roads (usually called a verge). In snowboard cross, a berm is a wall of snow built up in a corner. In mountain biking, a berm is a banked turn formed by soil, commonly dug from the track, being deposited on the outer rim of the turn. 
In coastal systems, a berm is a raised ridge of pebbles or sand found at high tide or storm tide marks on a beach. In snow removal, a berm or windrow refers to the linear accumulation of snow cast aside by a plow. Earth berms are used above particle accelerator tunnels to provide shielding from radiation. In open-pit mining, a berm refers to dirt and rock piled alongside a haulage road or along the edge of a dump point. Intended as a safety measure, they are commonly required by government organizations to be at least half as tall as the wheels of the largest mining machine on-site. Physical security systems employ berms to exclude hostile vehicles and slow attackers on foot (similar to the military application without the trench). Security berms are common around military and nuclear facilities. An example is the berm proposed for Vermont Yankee nuclear power plant in Vermont. At Baylor Ballpark, a baseball stadium on the campus of Baylor University, a berm is constructed down the right field line. The berm replaces bleachers, and general admission tickets are sold for fans who wish to sit on the grass or watch the game from the top of the hill. Berms are also used as a method of environmental spill containment and liquid spill control. Bunding is the construction of a secondary impermeable barrier around and beneath storage or processing plant, sufficient to contain the plant's volume after a spill. This is often achieved on large sites by surrounding the plant with a berm. The US Environmental Protection Agency (EPA) requires that oils and fuels stored over certain volume levels be placed in secondary spill containment. Berms for spill containment are typically manufactured from polyvinyl chloride (PVC) or geomembrane fabric that provide a barrier to keep spills from reaching the ground or navigable waterways. Most berms have sidewalls to keep liquids contained for future capture and safe disposal. See also Road verge Earthworks (engineering) Bund Moroccan Wall Marches Limes (Roman Empire) Long acre Flood-meadow Floodplain References External links Engineering barrages Archaeological features Artificial landforms Fortification (architectural elements) Fortification lines Snow removal
Berm
Engineering
1,156
35,238,282
https://en.wikipedia.org/wiki/Abstract%20cell%20complex
In mathematics, an abstract cell complex is an abstract set with Alexandrov topology in which a non-negative integer number called dimension is assigned to each point. The complex is called “abstract” since its points, which are called “cells”, are not subsets of a Hausdorff space as is the case in Euclidean and CW complexes. Abstract cell complexes play an important role in image analysis and computer graphics. History The idea of abstract cell complexes (also named abstract cellular complexes) goes back to J. Listing (1862) and E. Steinitz (1908). A.W. Tucker (1933), K. Reidemeister (1938), P.S. Aleksandrov (1956) as well as R. Klette and A. Rosenfeld (2004) have also described abstract cell complexes. E. Steinitz defined an abstract cell complex as a triple C = (E, B, dim), where E is an abstract set, B is an asymmetric, irreflexive and transitive binary relation called the bounding relation among the elements of E, and dim is a function assigning a non-negative integer to each element of E in such a way that if B(a, b), then dim(a) < dim(b). V. Kovalevsky (1989) described abstract cell complexes for 3D and higher dimensions. He also suggested numerous applications to image analysis. In his book (2008) he suggested an axiomatic theory of locally finite topological spaces which are a generalization of abstract cell complexes. The book contains new definitions of topological balls and spheres independent of metric, a new definition of combinatorial manifolds and many algorithms useful for image analysis. Basic results The topology of abstract cell complexes is based on a partial order in the set of its points or cells. The notion of the abstract cell complex defined by E. Steinitz is related to the notion of an abstract simplicial complex, and it differs from a simplicial complex by the property that its elements are not simplices: an n-dimensional element of an abstract complex need not have n+1 zero-dimensional sides, and not each subset of the set of zero-dimensional sides of a cell is a cell. This is important since the notion of an abstract cell complex can be applied to the two- and three-dimensional grids used in image processing, which is not true for simplicial complexes. A non-simplicial complex is a generalization which makes the introduction of cell coordinates possible: there are non-simplicial complexes which are Cartesian products of "linear" one-dimensional complexes in which each zero-dimensional cell, apart from two of them, bounds exactly two one-dimensional cells. Only such Cartesian complexes make it possible to introduce coordinates such that each cell has a set of coordinates and any two different cells have different coordinate sets. The coordinate set can serve as a name of each cell of the complex, which is important for processing complexes. Abstract complexes allow the introduction of classical topology (Alexandrov topology) in grids, which are the basis of digital image processing. This possibility defines the great advantage of abstract cell complexes: it becomes possible to exactly define the notions of connectivity and of the boundary of subsets. The definition of dimension of cells and of complexes is in the general case different from that of simplicial complexes (see below). The notion of an abstract cell complex differs essentially from that of a CW-complex because an abstract cell complex is not a Hausdorff space. This is important from the point of view of computer science since it is impossible to explicitly represent a non-discrete Hausdorff space in a computer.
(The neighborhood of each point in such a space must have infinitely many points). The book by V. Kovalevsky contains the description of the theory of locally finite spaces which are a generalization of abstract cell complexes. A locally finite space S is a set of points where a subset of S is defined for each point P of S. This subset containing a limited number of points is called the smallest neighborhood of P. A binary neighborhood relation is defined in the set of points of the locally finite space S: The element (point) b is in the neighborhood relation with the element a if b belongs to the smallest neighborhood of the element a. New axioms of a locally finite space have been formulated, and it was proven that the space S is in accordance with the axioms only if the neighborhood relation is anti-symmetric and transitive. The neighborhood relation is the reflexive hull of the inverse bounding relation. It was shown that classical axioms of the topology can be deduced as theorems from the new axioms. Therefore, a locally finite space satisfying the new axioms is a particular case of a classical topological space. Its topology is a poset topology or Alexandrov topology. An abstract cell complex is a particular case of a locally finite space in which the dimension is defined for each point. It was demonstrated that the dimension of a cell c of an abstract cell complex is equal to the length (number of cells minus 1) of the maximum bounding path leading from any cell of the complex to the cell c. The bounding path is a sequence of cells in which each cell bounds the next one. The book contains the theory of digital straight segments in 2D complexes, numerous algorithms for tracing boundaries in 2D and 3D, for economically encoding the boundaries and for exactly reconstructing a subset from the code of its boundary. Using the abstract cell complexes, efficient algorithms for tracing, coding and polygonization of boundaries, as well as for the edge detection, are developed and described in the book Abstract Cell Complex Digital Image Representation A digital image may be represented by a 2D Abstract Cell Complex (ACC) by decomposing the image into its ACC dimensional constituents: points (0-cell), cracks/edges (1-cell), and pixels/faces (2-cell). This decomposition together with a coordinate assignment rule to unambiguously assign coordinates from the image pixels to the dimensional constituents permit certain image analysis operations to be carried out on the image with elegant algorithms such as crack boundary tracing, digital straight segment subdivision, etc. One such rule maps the points, cracks, and faces to the top left coordinate of the pixel. These dimensional constituents require no explicit translation into their own data structures but may be implicitly understood and related to the 2D array which is the usual data structure representation of a digital image. This coordinate assignment rule and the renderings of each cell incident to this image is depicted in the image at right. See also Simplicial complex Cubical complex References Topology
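The pixel decomposition and coordinate assignment rule described above can be made concrete in several ways. The sketch below is one common convention and not necessarily the one used in Kovalevsky's book: pixel (x, y) becomes the 2-cell with doubled-plus-one ("combinatorial") coordinates, cracks get one even and one odd coordinate, points get two even coordinates, and the dimension of a cell is simply the number of odd coordinates. The function names are assumptions for illustration.

# Minimal sketch of a combinatorial-coordinate decomposition of a 2D image
# into the cells of an abstract cell complex (one possible convention only).
def cell_dimension(cell):
    """Dimension of a cell given in combinatorial coordinates:
    the number of odd coordinates (0 = point, 1 = crack, 2 = face)."""
    return sum(c % 2 for c in cell)

def pixel_to_cells(x, y):
    """Decompose the pixel (x, y) into its incident cells: one 2-cell (face),
    its four bounding cracks (1-cells) and four bounding points (0-cells)."""
    cx, cy = 2 * x + 1, 2 * y + 1            # the face of the pixel
    face = (cx, cy)
    cracks = [(cx - 1, cy), (cx + 1, cy), (cx, cy - 1), (cx, cy + 1)]
    points = [(cx + dx, cy + dy) for dx in (-1, 1) for dy in (-1, 1)]
    return face, cracks, points

face, cracks, points = pixel_to_cells(3, 5)
assert cell_dimension(face) == 2
assert all(cell_dimension(c) == 1 for c in cracks)
assert all(cell_dimension(p) == 0 for p in points)
# Two pixels are adjacent in this complex exactly when their faces share a
# bounding crack, which is the adjacency used for crack boundary tracing.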
Abstract cell complex
Physics,Mathematics
1,333
4,564,285
https://en.wikipedia.org/wiki/Worry%20stone
Worry stones are smooth, polished gemstones, usually in the shape of an oval with a thumb-sized indentation, used for relaxation or anxiety relief. Worry stones are typically around in size. They are used by holding the stone between the index finger and thumb and gently moving one's thumb back and forth across the stone. The action of moving one's thumb back and forth across the stone is thought to reduce stress, but there is no conclusive scientific evidence to support this. Worry stones may also be called palm stones, thumb stones, fidget stones, soothing stones, or sensory stones. History As a folk practice implement, worry stones have many origins. Variations on the concept originate in ancient Greece, Tibet, Ireland, and multiple Native American tribes. The concept of a worry stone began by the simple action of picking a smooth stone and fiddling with the stone. Worry stones made by sea water were generally used by Ancient Greeks. The smoothness of the stone was most often created naturally by running water. Usage From the perspective of cognitive behavior therapy, the use of worry stones is one of many folk practices that can function as psychologically healthy self-soothing exercises. Such techniques are imparted at an early stage of treatment, displacing any familiar but destructive coping methods (nail-biting, scratching, lip-biting, etc.) that the patient may have developed. This helps ready the patient to safely confront anxiety or trauma. Worry stones are simple and intuitive enough to be useful in therapeutic contexts where complexity and unfamiliarity are paramount concerns, such as when offering short-term treatment to refugees. After a patient has mastered a more sophisticated relaxation script for anxiety management, the worry stone itself can serve as a physical 'relaxation script reminder'; the patient may notice an impulse to use the object, and thereby become aware of their own anxiety. See also Fidget Cube Fidget spinner Kombolói – Greek worry beads Mood ring Stress ball References Culture of ancient Greece Gemstones Stress (biological and psychological)
Worry stone
Physics
410
70,800,665
https://en.wikipedia.org/wiki/Neodymium%28III%29%20perchlorate
Neodymium(III) perchlorate is an inorganic compound. It is a salt of neodymium and perchloric acid with the chemical formula Nd(ClO4)3 – it is soluble in water, forming purple-pink, hydrated crystals. Properties Physical properties Neodymium(III) perchlorate forms pale purple crystals in its anhydrous form. It is soluble in water. It forms hydrated crystals Nd(ClO4)3·nH2O: the hydrates with n = 4 and 4.5 are purple-pink crystals, and the hexahydrate (n = 6) forms pale pink to lavender crystals. Alkaline salts Nd(ClO4)3 can form alkaline salts with the general formula Nd(OH)x(ClO4)3−x. The salt with x = 1.5 (hydrated with 5 water molecules) is a light purple crystal with a density of 2.88 g/cm³. Other compounds Nd(ClO4)3 can form compounds with hydrazine, such as Nd(ClO4)3·6N2H4·4H2O, which forms small white crystals that are soluble in water, methanol, ethanol and acetone, and insoluble in toluene, with a density of 2.3271 g/cm³ at 20 °C. References Perchlorates Neodymium(III) compounds
Neodymium(III) perchlorate
Chemistry
295
42,954,332
https://en.wikipedia.org/wiki/Christopher%20Neyor
Christopher Z. Neyor is a Liberian international energy analyst who is the former President/CEO of the National Oil Company of Liberia. Education Christopher Neyor is a former visiting scholar at the Center for Energy and the Environment of the University of Pennsylvania. He completed his undergraduate studies in Systems Engineering at Wright State University in Dayton, Ohio and pursued graduate work in energy economics at the University of Denver and in management at the Stanford University Graduate School of Business. He is a member of the Institute of Electrical and Electronics Engineers and a registered Professional Engineer in Texas. Career Neyor served as President/CEO of the National Oil Company of Liberia before stepping down in 2012. He is the current president and chief executive officer (CEO) of Morweh Energy Group, an energy consultancy firm based in Monrovia, Liberia. He spent a decade with the Liberia Electricity Corporation and served as its final Managing Director before the 1989 outbreak of the Liberian Civil Wars. During the 2006 to 2018 administration of former President Ellen Johnson Sirleaf, he served as an advisor on energy issues and helped create the 2015 Liberia National Energy Policy. Neyor has also served as a representative for Liberia at the United Nations Framework Convention on Climate Change. Neyor is noted for his reformist agenda and the contributions he has made to the energy and educational sectors in Liberia. Book contributions In 2013, Neyor contributed to a textbook on environmental policy called The Globalization of Cost-Benefit Analysis in Environmental Policy, in Chapter 19, titled "Assessing Potential Revenues from Reduced Forest Cover Loss in Liberia", alongside Jessica Donovan, Keith Lawrence, Eduard Niesten, and Eric Werker. The Globalization of Cost-Benefit Analysis in Environmental Policy (2013), published by Oxford University Press, is by Michael Livermore, Dean King, Lawrence King, and Richard Revesz. See also Energy in Liberia References Year of birth missing (living people) Living people 21st-century Liberian people Wright State University alumni University of Denver alumni Stanford University alumni Electrical engineers
Christopher Neyor
Engineering
406
201,629
https://en.wikipedia.org/wiki/Gradian
In trigonometry, the gradian, also known as the gon, grad, or grade, is a unit of measurement of an angle, defined as one-hundredth of the right angle; in other words, 100 gradians is equal to 90 degrees. It is equivalent to 1/400 of a turn, 9/10 of a degree, or π/200 of a radian. Measuring angles in gradians (gons) is said to employ the centesimal system of angular measurement, initiated as part of metrication and decimalisation efforts. In continental Europe, the French word centigrade, also known as centesimal minute of arc, was in use for one hundredth of a grade; similarly, the centesimal second of arc was defined as one hundredth of a centesimal arc-minute, analogous to decimal time and the sexagesimal minutes and seconds of arc. The chance of confusion was one reason for the adoption of the term Celsius to replace centigrade as the name of the temperature scale. Gradians (gons) are principally used in surveying (especially in Europe), and to a lesser extent in mining and geology. The gon (gradian) is a legally recognised unit of measurement in the European Union and in Switzerland. However, this unit is not part of the International System of Units (SI). History and name The unit originated in France in connection with the French Revolution as the grade, along with the metric system, hence it is occasionally referred to as a metric degree. Due to confusion with the existing term grad(e) in some northern European countries (meaning a standard degree, 1/360 of a turn), the name gon was later adopted, first in those regions, and later as the international standard. In German, the unit was formerly also called Neugrad (new degree) (whereas the standard degree was referred to as Altgrad (old degree)), likewise in Danish, Swedish and Norwegian (also gradian), and in Icelandic. Although attempts at a general introduction were made, the unit was only adopted in some countries, and for specialised areas such as surveying, mining and geology. Today, the degree (1/360 of a turn) or the mathematically more convenient radian (1/2π of a turn, used in the SI system of units) is generally used instead. In the 1990s, most scientific calculators offered the gon (gradian), as well as radians and degrees, for their trigonometric functions. In the 2010s, some scientific calculators lack support for gradians. Symbol The international standard symbol for this unit is "gon" (see ISO 31-1, Annex B). Other symbols used in the past include "gr", "grd", and "g", the last sometimes written as a superscript, similarly to a degree sign: 50g = 45°. A metric prefix is sometimes used, as in "dgon", "cgon", "mgon", denoting respectively 0.1 gon, 0.01 gon, 0.001 gon. Centesimal arc-minutes and centesimal arc-seconds were also denoted with superscripts c and cc, respectively. Advantages and disadvantages Each quadrant is assigned a range of 100 gon, which eases recognition of the four quadrants, as well as arithmetic involving perpendicular or opposite angles: 0° = 0 gradians; 90° = 100 gradians; 180° = 200 gradians; 270° = 300 gradians; 360° = 400 gradians.
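These equivalences and the quadrant arithmetic are straightforward to express in code. The following is a minimal sketch (the function names are illustrative, not from any standard library); it also reproduces the compass example and the fractional 30° value discussed next.

import math

# Conversions implied by 400 gon = 360 degrees = 2*pi radians.
def gon_to_degrees(gon):
    return gon * 360.0 / 400.0      # 1 gon = 0.9 degrees

def degrees_to_gon(deg):
    return deg * 400.0 / 360.0

def gon_to_radians(gon):
    return gon * math.pi / 200.0

# Quadrant arithmetic: perpendicular and opposite directions differ from a
# bearing by 100, 200 or 300 gon (modulo 400).
def left_right_back(bearing_gon):
    return ((bearing_gon - 100) % 400,   # to the left
            (bearing_gon + 100) % 400,   # to the right
            (bearing_gon + 200) % 400)   # behind

print(gon_to_degrees(100))       # 90.0
print(degrees_to_gon(30))        # 33.33... gon, the awkward fraction noted below
print(left_right_back(117))      # (17, 217, 317), matching the compass example below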
If one is sighting down a compass course of 117 gon, the direction to one's left is 17 gon, to one's right 217 gon, and behind one 317 gon. A disadvantage is that the common angles of 30° and 60° in geometry must be expressed in fractions (as 33 1/3 gon and 66 2/3 gon respectively). Conversion Relation to the metre In the 18th century, the metre was defined as the 10-millionth part of a quarter meridian. Thus, 1 gon corresponds to an arc length along the Earth's surface of approximately 100 kilometres; 1 centigon to 1 kilometre; 10 microgons to 1 metre. (The metre has been redefined with increasing precision since then.) Relation to the SI system of units The gradian is not part of the International System of Units (SI). The EU directive on the units of measurement notes that the gradian "does not appear in the lists drawn up by the CGPM, CIPM or BIPM." The most recent, 9th edition of the SI Brochure does not mention the gradian at all. The previous edition mentioned it only in a footnote. See also Angular mil (primarily military use) Steradian (the "square radian") Notes References External links Ask Dr Math Definitions of grade, gon and centigrade on sizes.com Dictionary of Units Units of plane angle Decimalisation Metrication Non-SI metric units
Gradian
Mathematics
1,187
4,135,156
https://en.wikipedia.org/wiki/Crashworthiness
Crashworthiness is the ability of a structure to protect its occupants during an impact. This is commonly tested when investigating the safety of aircraft and vehicles. Different criteria are used to assess how safe a structure is in a crash, depending on the type of impact and the vehicle involved. Crashworthiness may be assessed either prospectively, using computer models (e.g., RADIOSS, LS-DYNA, PAM-CRASH, MSC Dytran, MADYMO) or experiments, or retrospectively, by analyzing crash outcomes. Several criteria are used to assess crashworthiness prospectively, including the deformation patterns of the vehicle structure, the acceleration experienced by the vehicle during an impact, and the probability of injury predicted by human body models. Injury probability is defined using injury criteria, which are mechanical parameters (e.g., force, acceleration, or deformation) that correlate with injury risk. A common injury criterion is the head injury criterion (HIC). Crashworthiness is measured retrospectively by examining injury risk in real-world crashes. Often, regression or other statistical methods are used to account for the many other factors that can affect the outcome of a crash. History Aviation The history of human tolerance to deceleration can likely be traced to the studies by John Stapp in the 1940s and 1950s investigating the limits of human tolerance. In the 1950s and 1960s, the US Army began serious accident analysis into crashworthiness as a result of fixed-wing and rotary-wing accidents. As the US Army's doctrine changed, helicopters became the primary mode of transportation in Vietnam. Due to fires and the forces of deceleration on the spine, pilots were sustaining spinal injuries in crashes that they would otherwise have survived. Work began to develop energy-absorbing seats to reduce the chance of spinal injuries during training and combat in Vietnam. Extensive research was conducted on human tolerance limits, on energy absorption, and on structural design to protect the occupants of military helicopters. The primary reason is that ejecting from or exiting a helicopter is impractical given the rotor system and typical altitude at which Army helicopters fly. In the late 1960s, the Army published the Aircraft Crash Survival Design Guide. The guide was revised several times and expanded into a multi-volume set covering different aircraft systems. The goal of the guide is to set out the considerations engineers must address when designing military aircraft that can survive a crash. Consequently, the Army established a military standard (MIL-STD-1290A) for light fixed- and rotary-wing aircraft. The standard sets minimum requirements for the safety of human occupants in a crash. These requirements are based on the need to maintain a survivable (livable) volume and the need to reduce the deceleration loads on the occupant. Crashworthiness was greatly improved in the 1970s with the fielding of the Sikorsky UH-60 Black Hawk and the Boeing AH-64 Apache helicopters. Primary crash injuries were reduced, but secondary injuries within the cockpit continued to occur. This led to the consideration of additional protective devices such as airbags. Airbags were considered a viable solution to reducing the incidence of head strikes in the cockpit of Army helicopters.
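The head injury criterion (HIC) mentioned above is commonly computed from the resultant head acceleration a(t), expressed in g, as the maximum over time windows [t1, t2] of (t2 − t1)·[(1/(t2 − t1))·∫a dt]^2.5, with the window length usually limited to 15 or 36 ms. The following is a minimal brute-force sketch under those assumptions, not any agency's reference implementation; the test pulse is made up for illustration.

import numpy as np

def head_injury_criterion(t, a, max_window=0.036):
    """Brute-force HIC: t in seconds, a = resultant head acceleration in g,
    max_window = maximum allowed window length (36 ms here; 15 ms is also used)."""
    # Cumulative trapezoidal integral so the integral over [t[i], t[j]] is I[j] - I[i].
    integral = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (a[:-1] + a[1:]))))
    hic = 0.0
    n = len(t)
    for i in range(n - 1):
        for j in range(i + 1, n):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg_a = (integral[j] - integral[i]) / dt
            hic = max(hic, dt * avg_a ** 2.5)
    return hic

# Example with an invented half-sine pulse: 80 g peak, 20 ms duration.
t = np.linspace(0.0, 0.1, 1001)
a = np.where(t < 0.02, 80.0 * np.sin(np.pi * t / 0.02), 0.0)
print(round(head_injury_criterion(t, a), 1))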
Regulatory agencies The National Highway Traffic Safety Administration, the Federal Aviation Administration, the National Aeronautics and Space Administration, and the Department of Defense have been the leading proponents of crash safety in the United States. Each has issued its own official safety regulations and has conducted extensive research and development in the field. See also Airbag Airworthiness Anticlimber Automobile safety Buff strength of rail vehicles Bumper (car) Compressive strength Container compression test Crash test Crash test dummy Hugh DeHaven Jerome F. Lederer Railworthiness Roadworthiness Seakeeping Seat belt Seaworthiness Self-sealing fuel tank Spaceworthiness Telescoping (rail cars) References Further reading RDECOM TR 12-D-12, Full Spectrum Crashworthiness Criteria for Rotorcraft, Dec 2011. USAAVSCOM TR 89-D-22A, Aircraft Crash Survival Design Guide, Volume I - Design Criteria and Checklists, Dec 1989. USAAVSCOM TR 89-D-22B, Aircraft Crash Survival Design Guide, Volume II - Aircraft Design Crash Impact Conditions and Human Tolerance, Dec 1989. USAAVSCOM TR 89-D-22C, Aircraft Crash Survival Design Guide, Volume III - Aircraft Structural Crash Resistance, Dec 1989. USAAVSCOM TR 89-D-22D, Aircraft Crash Survival Design Guide, Volume IV - Aircraft Seats, Restraints, Litters, and Cockpit/Cabin Delethalization, Dec 1989. USAAVSCOM TR 89-D-22E, Aircraft Crash Survival Design Guide, Volume V - Aircraft Postcrash Survival, Dec 1989. External links Army Helicopter Crashworthiness at DTIC Basic Principle of Helicopter Crashworthiness at US Army Aeromedical Laboratory National Crash Analysis Center NHTSA Crashworthiness Rulemaking Activities History of Energy Absorption Systems for Crashworthy Helicopter Seats at FAA MIT Impact and Crashworthiness Lab School Bus Crashworthiness Research Rail Equipment Crashworthiness Transport safety Aviation accidents and incidents
Crashworthiness
Physics
1,056
61,791,014
https://en.wikipedia.org/wiki/Standards%20for%20Reporting%20Enzymology%20Data
Standards for Reporting Enzymology Data (STRENDA) is an initiative, part of the Minimum Information Standards, which specifically focuses on the development of guidelines for reporting (i.e., describing the metadata of) enzymology experiments. The initiative is supported by the Beilstein Institute for the Advancement of Chemical Sciences. STRENDA establishes both publication standards for enzyme activity data and STRENDA DB, an electronic validation and storage system for enzyme activity data. Launched in 2004, the foundation of STRENDA is the result of a detailed analysis of the quality of enzymology data in written and electronic publications. Organization The STRENDA project is driven by 15 scientists from all over the world, who form the STRENDA Commission and support the work with expertise in biochemistry, enzyme nomenclature, bioinformatics, systems biology, modelling, mechanistic enzymology and theoretical biology. Reporting guidelines The STRENDA Guidelines propose the minimum information that is needed to comprehensively report kinetic and equilibrium data from investigations of enzyme activities, including the corresponding experimental conditions. This minimum information is suggested to be addressed in a scientific publication when enzymology research data are reported, to ensure that data sets are comprehensively described. This allows scientists not only to review, interpret and corroborate the data but also to reuse the data for modelling and simulation of biocatalytic pathways. In addition, the guidelines support researchers in making their experimental data reproducible and transparent. As of March 2020, more than 55 international biochemistry journals have included the STRENDA Guidelines in their authors' instructions as recommendations when reporting enzymology data. The STRENDA project is registered with FAIRsharing.org and the Guidelines are part of the FAIRDOM Community standards for Systems Biology. Applications STRENDA DB STRENDA DB is a web-based storage and search platform that has incorporated the Guidelines and automatically checks the submitted data for compliance with the STRENDA Guidelines, thus ensuring that the manuscript data sets are complete and valid. A valid data set is awarded a STRENDA Registry Number (SRN) and a fact sheet (PDF) is created containing all submitted data. Each dataset is registered at DataCite and assigned a DOI so that the data can be referenced and tracked. After the publication of the manuscript in a peer-reviewed journal, the data in STRENDA DB are made openly accessible. STRENDA DB is a repository recommended by re3data and OpenDOAR. It is harvested by OpenAIRE. The database service is recommended in the authors' instructions of more than 10 biochemistry journals, including Nature, The Journal of Biological Chemistry, eLife, and PLoS. It has been referred to as a standard tool for the validation and storage of enzyme kinetics data in multiple publications. A recent study examining eleven publications, including Supporting Information, from two leading journals revealed that at least one omission was found in every one of these papers. The authors concluded that using STRENDA DB in its current version would ensure that about 80% of the relevant information would be made available. Data Management STRENDA DB is considered a tool for research data management by the research community (e.g. EU project CARBAFIN).
References External References Record in FAIRSharing.org for STRENDA DB, https://fairsharing.org/FAIRsharing.ekj9zx Biochemistry Proteins Enzymes Standards Biological databases
Standards for Reporting Enzymology Data
Chemistry,Biology
678
1,025,538
https://en.wikipedia.org/wiki/Findability
Findability is the ease with which information contained on a website can be found, both from outside the website (using search engines and the like) and by users already on the website. Although findability has relevance outside the World Wide Web, the term is usually used in that context. Most relevant websites do not come up in the top results because designers and engineers do not cater to the way ranking algorithms work currently. Its importance can be determined from the first law of e-commerce, which states "If the user can’t find the product, the user can’t buy the product." As of December 2014, out of 10.3 billion monthly Google searches by Internet users in the United States, an estimated 78% are made to research products and services online. Findability encompasses aspects of information architecture, user interface design, accessibility and search engine optimization (SEO), among others. Introduction Findability is similar to discoverability, which is defined as the ability of something, especially a piece of content or information, to be found. It is different from web search in that the word find refers to locating something in a known space while 'search' is in an unknown space or not in an expected location. Mark Baker, the author of Every Page is Page One, mentions that findability "is a content problem, not a search problem". Even when the right content is present, users often find themselves deep within the content of a website but not in the right place. He further adds that findability is intractable, perfect findability is unattainable, but we need to focus on reducing the effort for finding that a user would have to do for themselves. Findability can be divided into external findability and on-site findability, based on where the customers need to find the information. History Heather Lutze is thought to have created the term in the early 2000s. The popularization of the term findability for the Web is usually credited to Peter Morville. In 2005 he defined it as: "the ability of users to identify an appropriate Web site and navigate the pages of the site to discover and retrieve relevant information resources", though it appears to have been first coined in a public context referring to the web and information retrieval by Alkis Papadopoullos in a 2005 article entitled "Findability". External findability External findability is the domain of Internet marketing and search engine optimization (SEO) tactics. External findability can be very influential for businesses. Smaller companies may have trouble influencing external findability, due to being less aware to consumers. Other means are taken to make sure that they are found in search results. Several factors affect external findability: Search engine indexing: As the very first step, webpages need to be found by indexing crawler in order to be shown in the search results. It would be helpful to avoid factors that may lead to webpages being ignored by indexing crawlers. Those factors may include elements that require user interaction, such as entering log-in credentials. Algorithms for indexing vary by the search engine which means the number of webpages of a website successfully being indexed may be very different between Google and Yahoo!'s search engines. Also, in countries like China, government policies could significantly influence the indexing algorithms. In this case, local knowledge about laws and policies could be valuable. 
Page descriptions in search results: Once the webpages are successfully indexed by web crawlers and appear in the search results with a decent ranking, the next step is to attract customers to click the link to the web pages. However, customers cannot see the whole web page at this point; they can only see an excerpt of the webpage's content and metadata. Therefore, displaying meaningful information in a limited space, usually a couple of sentences, in search results is important for increasing click traffic to the webpages, and thus the findability of the web content on those webpages. Keyword matching: At a semantic level, the terminology used by the searcher and the content producer may differ. Bridging the gap between the terms used by customers and developers is helpful for making web content more findable to more potential content consumers. On-site findability On-site findability is concerned with the ability of a potential customer to find what they are looking for within a specific site. More than 90 percent of customers use internal searches in a website compared to browsing. Of those, only 50 percent find what they are looking for. Improving the quality of on-site searches greatly improves the business of the website. Several factors affect findability on a website: Site search: If searchers within a site do not find what they are looking for, they tend to leave rather than browse through the website. Users who had successful site searches are twice as likely to ultimately convert. Related links and products: User experience can be enhanced by trying to understand the needs of the customer and providing suggestions for other, related information. Site match to customer needs and preferences: Site design, content creation, and recommendations are major factors affecting the customer experience. Evaluation and measures Baseline findability is the existing findability before changes are made in order to improve it. This is measured by participants who represent the customer base of the website, who try to locate a sample set of items using the existing navigation of the website. In order to evaluate how easily information can be found by searching a site using a search engine or information retrieval system, retrievability measures were developed, and similarly, navigability measures now measure ease of information access through browsing a site (e.g. PageRank, MNav, InfoScent (see Information foraging), etc.). Findability can also be evaluated via the following techniques: Usability testing: Conducted to find out how and why users navigate through a website to accomplish tasks. Tree testing: An information architecture based technique, to determine if critical information can be found on the website. Closed card sorting: A usability technique based on information architecture, for evaluating the strength of categories. Click testing: Accounts for the implicit data collected through clicks on the user interface. Beyond findability Findability Sciences defines a findability index in terms of each user's influence, context, and sentiments. For seamless search, current websites focus on a combination of structured hypertext-based information architectures and rich Internet application-enabled visualization techniques. See also References Further reading Morville, P. (2005) Ambient findability. Sebastopol, CA: O'Reilly Wurman, R.S. (1996). Information architects. New York: Graphis.
External links : a collection of links to people, software, organizations, and content related to findability The age of findability (article) Use Old Words When Writing for Findability (article on the findability impact of a site's choice of words) Building Findable Websites: Web Standards SEO and Beyond (book) The Findability Formula: The Easy, Non-Technical Guide to Search Engine Marketing by Heather Lutze Web design Knowledge representation Information science Information architecture
Findability
Engineering
1,448
22,637,027
https://en.wikipedia.org/wiki/Principle%20of%20typification
In biological nomenclature, the principle of typification is one of the guiding principles. The International Code of Zoological Nomenclature provides that any named taxon in the family group, genus group, or species group have a name-bearing type which allows the name of the taxon to be objectively applied. The type does not define the taxon: that is done by a taxonomist; and an indefinite number of competing definitions can exist side by side. Rather, a type is a point of reference. A name has a type, and a taxonomist (having defined the taxon) can determine which existing types fall within the scope of the taxon. They can then use the rules in the Code to determine the valid name for the taxon. See also Type (biology) Type species Type genus References Botanical nomenclature Taxonomy (biology) Zoological nomenclature
Principle of typification
Biology
167
19,767,753
https://en.wikipedia.org/wiki/Einasto%20profile
The Einasto profile (or Einasto model) is a mathematical function that describes how the density of a spherical stellar system varies with distance from its center. Jaan Einasto introduced his model at a 1963 conference in Alma-Ata, Kazakhstan. The Einasto profile possesses a power-law logarithmic slope of the form

\gamma(r) \equiv -\frac{d\ln\rho(r)}{d\ln r} \propto r^{\alpha},

which can be rearranged to give

\rho(r) \propto \exp\left(-A r^{\alpha}\right).

The parameter \alpha controls the degree of curvature of the profile. This can be seen by computing the slope on a log-log plot:

\frac{d\ln\rho}{d\ln r} = -A\alpha\, r^{\alpha}.

The larger \alpha, the more rapidly the slope varies with radius (see figure). Einasto's law can be described as a generalization of a power law, \rho \propto r^{-N}, which has a constant slope on a log-log plot. Einasto's model has the same mathematical form as Sersic's law, which is used to describe the surface brightness (i.e. projected density) profile of galaxies, except that the Einasto model describes a spherically symmetric density distribution in 3 dimensions, whereas the Sersic law describes a circularly symmetric surface density distribution in two dimensions. Einasto's model has been used to describe many types of system, including galaxies and dark matter halos. See also NFW profile References External links Spherical galaxy models with power-law logarithmic slope. A comprehensive paper that derives many properties of stellar systems obeying Einasto's law. Astrophysics Dark matter Equations of astronomy
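To make the shape of the profile concrete, the short Python sketch below evaluates one widely used parameterization of Einasto's law, rho(r) = rho_{-2} exp{ -(2/alpha) [ (r/r_{-2})^alpha - 1 ] }, where r_{-2} is the radius at which the logarithmic slope equals -2; the parameter values are illustrative only and are not taken from the article above:

import numpy as np

def einasto_density(r, alpha=0.17, r_minus2=20.0, rho_minus2=1.0):
    """Einasto density in the (alpha, r_-2, rho_-2) parameterization:
    rho(r) = rho_-2 * exp(-(2/alpha) * ((r/r_-2)**alpha - 1))."""
    r = np.asarray(r, dtype=float)
    return rho_minus2 * np.exp(-(2.0 / alpha) * ((r / r_minus2) ** alpha - 1.0))

def log_slope(r, alpha=0.17, r_minus2=20.0):
    """Logarithmic slope d ln(rho) / d ln(r) = -2 * (r / r_-2)**alpha."""
    return -2.0 * (np.asarray(r, dtype=float) / r_minus2) ** alpha

for radius in np.logspace(-1, 2, 7):  # 0.1 to 100 in arbitrary length units
    print(f"r = {radius:8.3f}   rho = {einasto_density(radius):.4e}   slope = {log_slope(radius):+.3f}")

At r = r_{-2} the printed slope is exactly -2, and the rate at which it steepens or flattens away from that radius is set by alpha, which is the curvature behaviour described above.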
Einasto profile
Physics,Astronomy
299
77,469,193
https://en.wikipedia.org/wiki/Louis%20Meeus
Meeus Distillery, in the Belgian municipality of Wijnegem, was founded in 1863. In the early 20th century it grew to become Europe’s largest distillery. Origins of Distillery Meeus In the 18th century, the Meeus family was already a wealthy entrepreneurial family active in the distilling industry. As a Kempen family, they worked in Antwerp, among other places. In 1820, Paul Jacques van Reeth founded the distillery De Sleutel (La Clef) in the Lange Winkelstraat in Antwerp. His son-in-law Louis Jean Meeus later took the reins. When Louis Jean’s son, Louis Meeus (1816 – 1903), went looking for a suitable location to grow the company into an industrial distillery, he found one in Wijnegem. The location on the Kempen Canal, with direct access for the supply of raw materials and the transport of the finished jenever, was an important factor in his choice. After the plans had been approved, work started in 1869. By the beginning of the 20th century, the distillery had become one of the largest distilleries in Europe. 1870-1914: Growth of the company brings prosperity to the village When he founded the distillery, Louis Meeus could count on the help of his three brothers Théophile, Hippolyte and Prosper. A malt house, office buildings, workers' houses, warehouses and livestock sheds were built alongside the distillery. In 1879 part of the distillery was destroyed by a fire, but this did not stop its growth: in 1884, the first distillery, De Sleutel (La Clef), was joined by a second one, distillery De Pijl (La Flèche). The company meanwhile had twelve grain mills and nineteen fourteen-story grain silos at its disposal. In 1885 the distillery produced 101,359 litres of jenever and employed 200 to 250 people. It was paying one of the highest excise duties applied in Belgium. At the Antwerp International Exhibition (1885), its stand was described as 'l'Arc de Triomphe de la distillerie'. The distillery had an internal rail network (track gauge 600 millimetres), a connection to the local railway line to Turnhout and a connection to the wider rail network via line 205. When Louis and Théophile died, Prosper withdrew from the distillery (1895). From then on, it was managed by Hippolyte Meeus. Prosper took care of distribution, from the Lange Winkelstraat. At the beginning of the twentieth century, Hippolyte designated his sons Louis and Robert as his right-hand men. Many of the distillery’s workers lived in the surrounding workers' houses. To cater for them and for itself, the distillery not only had its own fire brigade, but also a school, a chapel and a steamboat for sailing on the canal. The distillery gave many of the villagers – and also residents of neighbouring communities – job security. For a long time it also paid the village’s municipal taxes. Via political channels, the Meeus brothers also tried to apply their expertise from the distillery to areas of benefit to the municipality, such as traffic infrastructure and fire safety. In recognition of their services, the Meeus brothers were dubbed Knights of the Order of Leopold (Belgium) by the Belgian government. When Hippolyte Meeus became mayor of Wijnegem (1892-1914) the distillery’s economic ties with the municipality of Wijnegem became even stronger. During his mayoralty, he spent a lot of the money he earned from the distillery’s prosperity on construction projects in the municipality, such as the building of the community centre ‘t Gasthuis and the town hall.
World War I: beginning of an end During World War I, the German occupiers removed all the distillery’s copper, thus halting production. Moreover, the Meeus family had fled to England, so the business remained closed until 1918 when it was started up again under Louis and Robert Meeus, the sons of Hippolyte. Alcohol consumption was, however, reduced thanks to increasingly high excise duties and the Vandervelde Act of 1919, which prohibited the drinking of spirits in public places. In 1930, the distillery stopped producing alcohol, but the malt house remained active. After 1930: from industrial site to housing project When the distillery closed, the company management looked for other options. Liqueur production was transferred to Lange Winkelstraat, in Antwerp. After Robert's death, Frans Hol took over liqueur production (1957), distilling liqueurs in Limburg under the name Bal & Louis Meeùs. The malting plant passed into the hands of Dutch brewery Albert Heineken. In 1980 the Albert Malt House moved to Ruisbroek on the Sea Canal. On 8 August 1930, 'Semina' ('Société Immobilière et Industrielle Anversoise') was set up in some of the buildings of the former distillery. The other buildings were sold to various companies. Some of the buildings were listed in Wijnegem's cultural heritage inventory. In 1998 the non-listed buildings on the distillery site were purchased by art dealer Axel Vervoordt. Since 2012 he has owned the entire site and has developed the prestigious 'Kanaal' construction project, with residential, office and arts units. Apartments have been built in the former silos. References Distilleries Wijnegem
Louis Meeus
Chemistry
1,129
6,368,894
https://en.wikipedia.org/wiki/Unfinished%20building
An unfinished building is a building (or other architectural structure, such as a bridge, a road or a tower) where construction work was abandoned or on hold at some stage, or which only exists as a design. It may also refer to buildings that are currently being built, particularly those that have been delayed or at which construction work progresses extremely slowly. Many construction or engineering projects have remained unfinished at various stages of development. The work may be finished as a blueprint or whiteprint and never be realised, or be abandoned during construction. One of the best-known perennially incomplete buildings is Antoni Gaudí's basilica Sagrada Família in Barcelona. It has been under construction since 1882 and planned to be complete by 2026, Gaudí's death centenary. Partially constructed buildings There are numerous unfinished buildings that remain partially constructed in countries around the world, some of which can be used in their incomplete state but with others remaining as a mere shell. Some projects are intentionally left with an unfinished appearance, particularly the follies of the late 16th to 18th century. Some buildings are in a cycle of near-perpetual construction, with work lasting for decades or even centuries. Antoni Gaudí's Sagrada Família in Barcelona, Spain, has been under construction for around 140 years, having started in the 1880s. Work was delayed by the Spanish Civil War, during which the original models and parts of the building itself were destroyed. Today, even with portions of the basilica incomplete, it is still the most popular tourist destination in Barcelona with 1.5 million visitors every year. Gaudí spent 40 years of his life overseeing the project and is buried in the crypt. Germany's Cologne Cathedral took even longer to complete; construction started in 1248 and finished in 1880, a total of 632 years. Buildings (and other architectural structures) never completed Buildings that were never completed and remain in that state include: Duomo di Siena (Siena Cathedral), Italy Abbey of the Santissima Trinità, Venosa, Italy San Petronio Basilica, Bologna, Italy Goodwood House, West Sussex, England, UK Klosterneuburg Monastery, Austria Herrenchiemsee, Bavaria, Germany Prora, island of Rügen, Germany Goldin Finance 117, Xiqing District, Tianjin, China Woodchester Mansion, Stroud, Gloucester, England, UK Parliament House, Wellington, New Zealand Bishop Castle, San Isabel National Forest, Colorado, US Boldt Castle, Thousand Islands, New York, US National Monument, Edinburgh, Scotland, UK Ajuda National Palace, Lisbon, Portugal Cuenca Cathedral, Cuenca, Spain Al-Rahman Mosque, Baghdad, Iraq Plaza Rakyat, Kuala Lumpur, Malaysia Ilot Voyageur, Montreal, Quebec, Canada Cathedral of St. Alban the Martyr, Toronto, Ontario, Canada Centro Financiero Confinanzas, Caracas, Venezuela Aspotogan Sea Spa, Nova Scotia, Canada (demolished) Chicago Spire, Chicago, Illinois, US Monumento a la Revolución, Mexico City, Mexico Beaumaris Castle, Anglesey, Wales, UK 2 World Trade Center, New York City, US Ryugyong Hotel, Pyongyang, North Korea Sathorn Unique Tower, Bangkok, Thailand Plaza Tower, New Orleans, US The Harmon, Las Vegas, US (demolished) In other cases, construction work proceeds extremely slowly, so these too can be said to form incomplete structures. Examples are: Cathedral of St.
John the Divine, New York City, New York, US Sagrada Família, Barcelona, Spain Kaliakra transmitter, Cape Kaliakra, Bulgaria Westminster Cathedral, London, UK Mosul Grand Mosque, Mosul, Iraq Other unfinished structures There are also roads, railway lines and channels which remained unfinished. Roads MP-203, Madrid, Spain Interstate 710, Los Angeles County, California Route 11 Expressway, New London County, Connecticut LaSalle Expressway, Niagara County, New York South Mall Arterial/Dunn Memorial Bridge, Albany and Rensselaer, New York Seaford–Oyster Bay Expressway, Nassau County, New York Korean War Veterans Parkway (Richmond Parkway), Staten Island, New York Willowbrook Expressway/Parkway, Staten Island, New York Foothills Parkway, Tennessee Amstutz Expressway, Waukegan, Illinois Olimpijka in Poland M8 Bridge to Nowhere, Glasgow, Scotland The Foreshore Freeway Bridge in Cape Town, South Africa B 464 near Sindelfingen, Germany Strecke 19 (Berlin - Hamburg) Strecke 20 (Berlin - Wittstock) Strecke 24 (Basel - Hamburg) Strecke 37 (Koblenz - Mehren) Strecke 46 (Fulda - Würzburg) Strecke 73 (Dresden - Görlitz) Strecke 77 (Hamm - Kassel) Strecke 78 (Eisenach - Kassel) Strecke 85 (Bamberg - Eisenach) Strecke 86 (Nürnberg - Regensburg) Interstate I-170 Baltimore, MD New Central Cross-Island Highway, Taiwan (including Provincial Highway 14, 18, 21) Railway infrastructure Cincinnati Subway Mosel Railway (German: Moselbahn) Ahrtal Railway (German: Ahrtalbahn) Old Railway at Willebadessen Strategic Railway Embankment (German: Strategischer Bahndamm) Arenas Deutsches Stadion Nou Mestalla Lithuania National Stadium Ferris wheels New York Wheel, New York City, New York, US Skyvue, Las Vegas, Nevada, US Turn of Fortune, Changzhou, China Industrial plants Kramatorsk Metallurgical Plant GRES-2 Power Station, Ekibastus Nuclear power plants Crimean Atomic Energy Station, Shcholkine, Crimea Fast Breeder nuclear reactor SNR-300, Kalkar, Germany Lemoniz Nuclear Power Plant, Lemoniz, Spain Marble Hill Nuclear Power Plant, New Washington, IN, United States Satsop Nuclear Power Plant, Satsop, Washington, United States Stendal Nuclear Power Plant, Arneburg, Germany Unit 5, 6, 7, 8 of Chernobyl Nuclear Power Station Valdecaballeros Nuclear Power Plant, Valdecaballeros, Spain Lungmen Nuclear Power Plant, Taiwan Juragua Nuclear Power Plant, Cuba Żarnowiec Nuclear Power Plant, Poland Electric power transmission systems Wolmirstedt HVDC-back-to-back plant Elbe Project HVDC Ekibastuz–Centre Towers Watkins' Tower, London, UK Yekaterinburg TV Tower, Yekaterinburg, Russia (demolished) Berlin-Müggelberge TV Tower, Berlin, Germany Dubai Creek Tower, Dubai, United Arab Emirates Belgorod TV Tower, Belgorod, Russia Deutschlandsender Herzberg/Elster, Germany Wardenclyffe Tower, Shoreham, New York, USA Hakell Creative Educational Media, Haskell, Oklahoma, USA at 35°53'0"N 95°46'15"W Galich TV Mast Visions and plans Many projects do not get to the construction phase, halted during or after planning. Ludwig II of Bavaria commissioned several designs for Castle Falkenstein, with the fourth plan being vastly different from that of the first. The first two designs were turned down, one because of costs and one because the design displeased Ludwig, and the third designer withdrew from the project. The fourth and final plan was completed and some infrastructure was prepared for the site but Ludwig died before construction work began. The Palace of Whitehall, at the time the largest palace in Europe, was mostly destroyed by a fire in 1698. 
Sir Christopher Wren, most famous for his role in rebuilding several churches after the Great Fire of London in 1666, sketched a proposed replacement for part of the palace but financial constraints prevented construction. Even without being constructed, many architectural designs and ideas have had a lasting influence. The Russian constructivism movement started in 1914 and was taught in the Bauhaus and other architecture schools, leading to numerous architects integrating it into their style. Further examples Construction never started Cenotaph for Sir Isaac Newton The Illinois Millennium Tower Palace of the Soviets Point Park Civic Center Project of Filippo Juvarra for the Royal Palace of Madrid Pyramid City Sky City 1000 Tatlin's Tower Ville Contemporaine Volkshalle Centennial Tower, Manila or Pasig Philippines X-Seed 4000 Use of computer technology Computer technology has allowed for 3D representations of projects to be shown before they are built. In some cases the construction is never started and the computer model is the nearest that anyone will ever get to seeing the finished piece. For example, in 1999 Kent Larson's exhibition "Unbuilt Ruins: Digital Interpretations of Eight Projects by Louis I. Kahn" showed computer images of designs completed by noted architect Louis Kahn but never built. Computer simulations can also be used to create prototypes of projects and test them before they are actually built; this has allowed the design process to be more successful and efficient. See also Unfinished creative work List of visionary tall buildings and structures Off-plan property References External links Rick Edmondson's Unfinished Buildings Unbuilt British motorways at Pathetic Motorways Building engineering
Unfinished building
Engineering
1,868
4,166,591
https://en.wikipedia.org/wiki/Dashboard%20%28computing%29
In computer information systems, a dashboard is a type of graphical user interface which often provides at-a-glance views of data relevant to a particular objective or process through a combination of visualizations and summary information. In other usage, "dashboard" is another name for "progress report" or "report" and is considered a form of data visualization. The dashboard is often accessible by a web browser and is typically linked to regularly updating data sources. Dashboards are often interactive and allow users to explore the data themselves, usually by clicking into elements to view more detailed information. The term dashboard originates from the automobile dashboard where drivers monitor the major functions at a glance via the instrument panel. History The idea of digital dashboards followed the study of decision support systems in the 1970s. Early predecessors of the modern business dashboard were first developed in the 1980s in the form of Executive Information Systems (EISs). Due to problems primarily with data refreshing and handling, it was soon realized that the approach wasn't practical as information was often incomplete, unreliable, and spread across too many disparate sources. Thus, EISs hibernated until the 1990s, when the information age quickened pace and data warehousing and online analytical processing (OLAP) allowed dashboards to function adequately. Despite the availability of enabling technologies, dashboard use didn't become popular until later in that decade, with the rise of key performance indicators (KPIs) and the introduction of Robert S. Kaplan and David P. Norton's balanced scorecard. In the late 1990s, Microsoft promoted a concept known as the Digital Nervous System and "digital dashboards" were described as being one leg of that concept. Today, the use of dashboards forms an important part of Business Performance Management (BPM). Initially dashboards were used for monitoring purposes; now, with the advancement of technology, they are also used for more analytical purposes. Dashboards now commonly incorporate scenario analysis, drill-down capabilities, and flexible presentation formats. Benefits Digital dashboards allow managers to monitor the contribution of the various departments in their organization. In addition, they enable “rolling up” of information to present a consolidated view across an organization. To gauge exactly how well an organization is performing overall, digital dashboards allow specific data points from each department within the organization to be captured and reported, thus providing a "snapshot" of performance. Benefits of using digital dashboards include: Visual presentation of performance measures Ability to identify and correct negative trends Measure efficiencies/inefficiencies Ability to generate detailed reports showing new trends Ability to make more informed decisions based on collected business intelligence Dashboards offer a holistic view of the entire business, giving the manager a bird's-eye view of sales performance, inventory data, web traffic, social media analytics and other associated data, all presented visually on a single dashboard. Dashboards lead to better management of marketing and financial strategies, as a dashboard displaying marketing data makes the marketing process easier and more reliable than handling it manually. Web analytics play a crucial role in shaping the marketing strategy of many businesses.
Dashboards also facilitate better tracking of sales and financial reporting, as the data is more precise and kept in one place. Lastly, dashboards offer better customer service through monitoring, because they keep both managers and clients updated on project progress through automated emails and notifications. Align strategies and organizational goals Saves time compared to running multiple reports Gain total visibility of all systems instantly Quick identification of data outliers and correlations Consolidated reporting into one location Available on mobile devices to quickly access metrics Classification Dashboards can be broken down according to role and are either strategic, analytical, operational, or informational. Dashboards are the 3rd step on the information ladder, demonstrating the conversion of data to increasingly valuable insights. Strategic dashboards support managers at any level in an organization and provide the quick overview that decision-makers need to monitor the health and opportunities of the business. Dashboards of this type focus on high-level measures of performance and forecasts. Strategic dashboards benefit from static snapshots of data (daily, weekly, monthly, and quarterly) that are not constantly changing from one moment to the next. Dashboards for analytical purposes often include more context, comparisons, and history, along with subtler performance evaluators. In addition, analytical dashboards typically support interactions with the data, such as drilling down into the underlying details. Dashboards for monitoring operations are often designed differently from those that support strategic decision making or data analysis and often require monitoring of activities and events that are constantly changing and might require attention and response at a moment's notice. Types of dashboards Digital dashboards may be laid out to track the flows inherent in the business processes that they monitor. Graphically, users may see the high-level processes and then drill down into low-level data. This level of detail is often buried deep within the corporate enterprise and otherwise unavailable to the senior executives. Three main types of digital dashboard dominate the market today: desktop software applications, web-browser-based applications, and desktop applications also known as desktop widgets; the last are driven by a widget engine. Both desktop and browser-based providers enable the distribution of dashboards via a web browser. An example of the latter is the web-browser-based application Asana, which helps teams orchestrate their work, from daily tasks to strategic cross-functional initiatives. With it, teams can manage everything from company objectives to digital transformation to product launches and marketing campaigns. Specialized dashboards may track all corporate functions. Examples include human resources, recruiting, sales, operations, security, information technology, project management, customer relationship management, digital marketing and many more departmental dashboards. For a smaller organization such as a startup, a compact scorecard dashboard can track important activities across many domains, ranging from social media to sales. Digital dashboard projects involve business units as the driver and the information technology department as the enabler. Therefore, the success of dashboard projects depends on the relevancy/importance of information provided within the dashboard.
This includes the metrics chosen to monitor and the timeliness of the data forming those metrics; data must be up to date and accurate. Key performance indicators, balanced scorecards, and sales performance figures are some of the content appropriate on business dashboards. Performance Dashboards Dashboards involve the combination of visual and functional features. This combination of features helps improve cognition and interpretation. A performance dashboard sits at the intersection of two powerful disciplines: business intelligence and performance management. Therefore, different users may use these dashboards for different reasons. For example, one level of workers could monitor inventory, while those in more managerial roles could look at lagging measures. Executives could then utilize the dashboard to evaluate strategic performance against objectives. Dashboards and scorecards Balanced scorecards and dashboards have been linked together as if they were interchangeable. However, although both visually display critical information, the difference is in the format: Scorecards can open the quality of an operation while dashboards provide calculated direction. A balanced scorecard has what is called a "prescriptive" format. It should always contain these components: Perspectives – group Objectives – verb-noun phrases pulled from a strategy plan Measures – also called metrics or key performance indicators (KPIs) Spotlight indicators – red, yellow, or green symbols that provide an at-a-glance view of a measure's performance. Each of these sections ensures that a Balanced Scorecard is essentially connected to the business's critical strategic needs. The design of a dashboard is more loosely defined. Dashboards are usually a series of graphics, charts, gauges and other visual indicators that can be monitored and interpreted. Even when there is a strategic link on a dashboard, it may not be noticed as such, since objectives are not normally present on dashboards. However, dashboards can be customized to link their graphs and charts to strategic objectives. Design Digital dashboard technology is available "out-of-the-box" from many software providers. Some companies, however, continue to do in-house development and maintenance of dashboard applications. For example, GE Aviation has developed a proprietary software/portal called "Digital Cockpit" to monitor the trends in the aircraft spare parts business. Good dashboard design practices take into account and address the following: the medium it is designed for (desktop, laptop, mobile, tablet) use of visuals over the tabular presentation of data bar charts: to visualize one or more series of data line charts: to track changes in several dependent data sets over a period of time sparklines: to show the trend in a single data set scorecards: to monitor KPIs and trends use of legends anytime more than one color or shape is present on a graph spatial arrangement: place your most important view on the top left (if the language is written left to right) then arrange the following views in a Z pattern with the most important information following the top-to-bottom, left-to-right pattern use colorblind friendly palettes with color used consistently and only where necessary A good information design will clearly communicate key information to users and make supporting information easily accessible. Assessing the quality of dashboards There are a few key elements to a good dashboard:
Simple, communicates easily Minimal distractions, which could otherwise cause confusion Supports an organized business with meaningful and useful data Applies human visual perception to the visual presentation of information Can be accessed easily by its intended audience A research-based framework for Business Intelligence dashboard design suggests that "cross-visual interactivity" is the most impactful of all features. Dashboard software Dashboards serve as a visual representation for a company to monitor progress and trends, not only among themselves but against other companies as well. Dashboards and visualizations contain data that is updated in real time. For example, if the underlying data in an Excel spreadsheet were to change, so would the visualization. Power BI Power BI provides the tools for a user to create different types of visualizations to communicate the data that they are using. Some examples of these visualizations include graphs, maps, and clustered columns. Power BI pulls data from Excel that can be used to create dashboards and visualizations, whereas Excel does not import data from Power BI. Excel is typically used for smaller amounts of data, and Power BI for more complex analyses. Power BI can be used to display trends over time. For example, a company can create a time plot that shows its costs and revenues over a certain period. The data can then be arranged to be shown per day, month, quarter, year, etc. This requires simple formatting tools so the data can quickly be changed and compared. Power BI allows the user to customize their visualizations by adding colors and labels. In addition, when the user clicks a data point, they are able to understand what the point or selection is showing. Power BI also has a commonly used map feature where businesses can view their sales and earnings across different states and countries. Places with the highest amounts of data will appear larger in size. For example, a state with the most revenue will be bigger than states with less data. Power BI is also interactive in that, in any type of map, a person can expand a specific category to look deeper into the data it contains. Tableau Tableau is another program that allows users to create dashboards. Allegedly, one of Tableau's biggest advantages is how much data it can hold. Tableau can hold an unlimited amount, whereas an Excel spreadsheet has a capacity of 1,048,576 rows. However, it is possible to hold and analyze billions of rows effectively in Excel using its Power Pivot feature. Tableau can make dashboards interactive, with more detail revealed by clicking into a specific point. For example, data can be displayed within a map, and clicking a specific state or city gives a closer look at the data contained within that location. Filters and parameters can also be added. For example, if one were analyzing revenues across the United States, a parameter could be set to show only revenues within a particular range. Once set, only states within this range will be highlighted. Excel Excel has many tools that help the user not only enter data but also visualize it. Excel has many built-in functions that can help break down data and also separate data by scenarios. The user can easily download and add files to their Excel sheets to use for their data. Other tools Excel offers include conditional formatting and basic pivot tables and charts. Excel allows the user to reference other cells, which ultimately allows complex computations to be made and conclusions to be drawn from the data.
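As an illustrative aside (not drawn from any of the products above), the following short Python sketch lays out a minimal, static dashboard-style figure with matplotlib, combining a line chart for trends, a bar chart for a comparison, and a sparkline-style panel; all data values and labels are invented for the example:

import matplotlib.pyplot as plt

# Invented sample data for the sketch.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
revenue = [120, 135, 128, 150, 162, 171]   # e.g. thousands of dollars
costs = [90, 95, 92, 101, 108, 112]
regions = ["North", "South", "East", "West"]
region_sales = [340, 280, 410, 220]

fig, axes = plt.subplots(1, 3, figsize=(12, 3.5))

# Most important view first (left): revenue vs. costs over time (line chart).
axes[0].plot(months, revenue, marker="o", label="Revenue")
axes[0].plot(months, costs, marker="o", label="Costs")
axes[0].set_title("Revenue vs. costs")
axes[0].legend()

# Comparison of one series across categories (bar chart).
axes[1].bar(regions, region_sales)
axes[1].set_title("Sales by region")

# Sparkline-style trend: a single data set with minimal decoration.
axes[2].plot(months, revenue, color="gray")
axes[2].set_title("Revenue trend")
axes[2].axis("off")

fig.suptitle("Minimal KPI dashboard sketch")
fig.tight_layout()
plt.show()

Unlike the commercial tools described above, this static figure has no drill-down, filtering, or live data refresh; those interactive features are precisely what dedicated dashboard software adds.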
Arena Calibrate Arena Calibrate provides a comprehensive business intelligence reporting tool, accompanied by hands-on data and BI support. Its offerings cater to startups, agencies, and SMEs, helping them leverage advertising, sales, email, CRM, web, and analytics data. Offering ETL data integration, flexible data warehousing, and custom data visualization, Arena Calibrate is used by businesses such as Amex, Gentle Dental, and the National Golf Foundation. Its account managers and BI specialists work as an extension of the client's team, aiming to realize each client's reporting vision. Guidelines for dashboard design There are certain guidelines that can be useful when creating dashboards and other visualizations, beginning with reading direction. When arranging information on a dashboard or visualization, it can be helpful to think about reading direction: the general reading direction is from left to right and from top to bottom, and having information flow in this way allows others to read and understand the visuals more naturally. Local proximity is another idea to keep in mind when creating a visual or a dashboard: placing related information close together improves effectiveness and helps users draw conclusions. When creating a visual it is also important to make sure the user is not overloaded with information; a few key figures of significant importance can be more helpful than too much information, and too much information presented without structure is known as a “data graveyard”. Another aspect related to information overload is interaction within the visual: interacting with the dashboard allows further detail to be obtained on demand and helps users better understand the information presented. Chart visualization, diagrams in particular, is an important aspect of dashboard creation. With complex data it can be difficult to draw conclusions, and combining different visual elements within the dashboard can help give a broader overview of the material. A visual reporting system allows multiple processing operations to be carried out, which can increase the effectiveness of decisions. Different types of visualizations can be more or less effective depending on the data type and the recipient. Presenting traditional business graphics in an interactive form is another aspect to keep in mind when creating a dashboard; business charts are used mainly in the form of interactive dashboards, and a major advantage of business charts is that the majority of users understand them. There are many connections between dashboards and accounting. Dashboards aid with budgeting, management control, and wage control. Dashboards are used to present data in a quick and easy-to-read way, and the ability to present data quickly and visually allows more data to be processed and understood. Dashboards are used for performance reports, sales analysis by sector, and inventory rotation. Dashboards should be quick visualizations that allow decisions to be made more quickly than they would be without access to dashboard technology.
Dashboards are also used in accounting decision-making settings. The data can help show whether a change is efficient or inefficient and therefore help with improving systems throughout an organization. In order for a dashboard to be effective, the individual creating it needs to make sure that the information is simple and easy to read and interpret. See also Business activity monitoring Complex event processing Corporate performance management Data presentation architecture Event stream processing Information graphics Information design Scientific visualization Control panel (software) References Further reading Business software Business terms Computing terminology Content management systems Data warehousing Data management Information systems Website management
Dashboard (computing)
Technology
3,336
43,469,462
https://en.wikipedia.org/wiki/Entoloma%20quadratum
Entoloma quadratum is a species of agaric fungus in the family Entolomataceae. The fungus was originally described as Agaricus quadratus by Miles Joseph Berkeley and Moses Ashley Curtis in 1859; Egon Horak transferred it to Entoloma in 1976. It is found in Africa, Asia, Europe, and North America. References External links Entolomataceae Fungi described in 1859 Fungi of Africa Fungi of Asia Fungi of Europe Fungi of North America Taxa named by Miles Joseph Berkeley Taxa named by Moses Ashley Curtis Fungus species
Entoloma quadratum
Biology
116
22,203,805
https://en.wikipedia.org/wiki/Easterhegg
The Easterhegg (also Easter(H)egg or EH) is an annual hacker event, created by the German Chaos Computer Club. Since 2001 the Easterhegg has taken place during the Easter celebrations. Most participants are from German-speaking countries, with others from the rest of Europe or further afield. The Easterhegg consists mostly of workshops with some lectures, with topics covering the whole spectrum from tech to culture and hackerspaces. Furthermore, the Easterhegg is non-commercial and all the workers are volunteers. External links Official Easterhegg Portal Official Easterhegg Portal (old portal, not maintained anymore – now redirects to easterhegg.eu) Easterhegg Basel 2012 Easterhegg Hamburg 2011 Easterhegg München 2010 Easterhegg Hamburg 2009 Easterhegg Köln 2008 Easterhegg Hamburg 2007 Easterhegg Hamburg 2005 Easterhegg München 2004 Easterhegg Hamburg 2003 Easterhegg Düsseldorf 2002 Easterhegg Hamburg 2001 Chaos Computer Club Events Hacker conventions
Easterhegg
Technology
199
43,380,079
https://en.wikipedia.org/wiki/Copper%28II%29%20perchlorate
Copper(II) perchlorate is an inorganic compound with the chemical formula Cu(ClO4)2. The anhydrous solid is rarely encountered, but several hydrates are known. Most important is the perchlorate salt of the aquo complex, copper(II) perchlorate hexahydrate, [Cu(H2O)6](ClO4)2. Infrared spectroscopic studies of anhydrous copper(II) perchlorate provided some of the first evidence for the binding of the perchlorate anion to a metal ion. The structure of this compound was eventually deduced by X-ray crystallography. Copper resides in a distorted octahedral environment and the perchlorate ligands bridge between the Cu(II) centers. Safety Like other perchlorates, copper(II) perchlorate is a strong oxidant. References Copper(II) compounds Perchlorates
Copper(II) perchlorate
Chemistry
172
21,103,815
https://en.wikipedia.org/wiki/Ethopabate
Ethopabate is a coccidiostat used in poultry. References Antiparasitic agents 4-Aminobenzoate esters Salicylate esters Acetanilides Salicylyl ethers Methyl esters Ethoxy compounds
Ethopabate
Biology
54
22,763,157
https://en.wikipedia.org/wiki/Asger%20Aaboe
Asger Hartvig Aaboe (26 April 1922 – 19 January 2007) was a Danish historian of the exact sciences and mathematics who was best known for his contributions to the history of ancient Babylonian astronomy. In his studies of Babylonian astronomy, he went beyond analyses in terms of modern mathematics to seek to understand how the Babylonians conceived their computational schemes. Aaboe studied mathematics and astronomy at the University of Copenhagen, and in 1957 obtained a PhD in the History of Science from Brown University, where he studied under Otto Neugebauer, writing a dissertation "On Babylonian Planetary Theories". In 1961, he joined the Department of the History of Science and Medicine at Yale University, serving as chair from 1968 to 1971, and continuing an active career there until retiring in 1992. At Yale, his doctoral students included Alice Slotsky and Noel Swerdlow. He was elected to the Royal Danish Academy of Sciences and Letters in 1975, served as president of the Connecticut Academy of Arts and Sciences from 1970 to 1980, and was a member of many other scholarly societies. In 1987, a festschrift was published in honor of Asger Aaboe's 65th birthday. Aaboe married Joan Armstrong on 14 July 1950. The marriage produced four children: Kirsten Aaboe, Erik Harris Aaboe, Anne Aaboe, Niels Peter Aaboe. Selected publications Episodes from the Early History of Mathematics, New York: Random House, 1964. "Scientific Astronomy in Antiquity", Philosophical Transactions of the Royal Society of London, A.276, (1974: 21–42). "Mesopotamian Mathematics, Astronomy, and Astrology", The Cambridge Ancient History (2nd. ed.), Vol. III, part 2, chap. 28b, Cambridge: Cambridge University Press, 1991, ; chapter summary Episodes from the Early History of Astronomy, New York: Springer, 2001, , Notes References (Len Berggren, Professor Emeritus, Simon Fraser University) (John P. Britton's Ph.D. thesis was supervised by Asger Aaboe and Bernard R. Goldstein. See: ) (John M. Steele, Charles Edwin Wilbour Professor of Egyptology and Assyriology, Brown University) 1922 births 2007 deaths 20th-century Danish mathematicians 21st-century Danish mathematicians Historians of science Danish historians of mathematics Historians of astronomy Danish expatriates in the United States University of Copenhagen alumni Brown University Graduate School alumni Yale University faculty
Asger Aaboe
Astronomy
500
3,248,957
https://en.wikipedia.org/wiki/Orange%20B
Orange B is a food dye from the azo dye group. It is approved by the United States Food and Drug Administration (FDA) for use only in hot dog and sausage casings or surfaces, only up to 150 parts per million of the finished food weight. It is typically prepared as a disodium salt. Orange B was first listed as an approved food dye by the FDA in 1966. In 1978, the FDA proposed removing it from the list citing concerns about the presence of carcinogenic contaminants (specifically 2-naphthylamine). Around the same time, the only supplier in the United States, the William J. Stange Company, stopped manufacturing it. Despite not having been used in any food products since then, it was not removed from the list. References Azo dyes Food colorings Benzenesulfonates Naphthalenesulfonates Ethyl esters Pyrazolones Organic sodium salts Acid dyes
Orange B
Chemistry
196
18,484,432
https://en.wikipedia.org/wiki/PacketVideo
PacketVideo Corporation or PV was a San Diego–based company that produced software for wireless multimedia, including the display of video on mobile handsets. The PacketVideo name wasn't actively used after being acquired by Lynx Technology. History PV was founded in San Diego, California on 10 August 1998 by James C. Brailean, Cheuk Chan, Osama Al-shaykh, Gene Wen and Mark R Banham. In 2003 PacketVideo sold its infrastructure division to Alcatel. In 2005, PacketVideo was acquired by NextWave Wireless. In 2010, PacketVideo became a wholly owned subsidiary of NTT DoCoMo. Corporate parent NTT DOCOMO sold PacketVideo North America and Europe to Lynx Technology on 10 May 2015 and the remaining portion, PacketVideo Japan, exactly one year later on 10 May 2016. Products PV's customers include mobile operators such as Verizon Wireless, NTT DoCoMo and Orange, handset manufacturers, and consumer electronics companies. PV's software is embedded in more than 249 million devices worldwide and more than 248 different products. Major product groups from PV are: CORE, which provides a universal structure for mobile multimedia applications. CORE is an established framework that can support any media service. MediaFusion, a white-label client-server software application to develop and launch on-device portals for rich media services. TwonkyMedia, a suite of products that connect home entertainment devices. Locations PacketVideo has a presence on three continents, with its headquarters in San Diego. Major development centers are in San Diego, Chicago, Chandigarh, Charlotte, Tokyo, Tampere, Berlin, Boston, and Basel. Sales and customer support centers are in San Diego, Chicago, Charlotte, Tokyo, Tampere, Nice, and Basel. Affiliations PacketVideo is a member of industry forums and consortia, including the Open Handset Alliance, 3rd Generation Partnership Project, Broadcast Mobile Convergence Forum, FLO Forum, International Multimedia Telecommunications Consortium, Mobile DTV Alliance, MPEG Industry Forum, Open Mobile Alliance, and the Digital Living Network Alliance References Companies formerly listed on the Nasdaq Software companies based in California Companies based in San Diego Mobile technology Defunct software companies of the United States
PacketVideo
Technology
455
6,682,192
https://en.wikipedia.org/wiki/Gabedit
Gabedit is a graphical user interface to the GAMESS (US), Gaussian, MOLCAS, MOLPRO, MPQC, OpenMopac, PC GAMESS, ORCA and Q-Chem computational chemistry packages. Major features Builds molecules by atom, ring, group, amino acid and nucleoside. Creates an input file for computational chemistry packages. Reads output from the ab initio packages, and supports a number of other formats. Displays molecular orbitals or electron density as contour plots or 3D grid plots and outputs to a number of graphical formats. Animates molecular vibrations, contours, isosurfaces and rotation. See also List of molecular graphics systems PC GAMESS ORCA Quantum chemistry computer programs SAMSON External links Gabedit official website Computational chemistry software Science software that uses GTK Free chemistry software Chemistry software for Linux
Gabedit
Chemistry
177
2,606,126
https://en.wikipedia.org/wiki/Complete%20numbering
In computability theory complete numberings are generalizations of Gödel numbering first introduced by A.I. Mal'tsev in 1963. They are studied because several important results like Kleene's recursion theorem and Rice's theorem, which were originally proven for the Gödel-numbered set of computable functions, still hold for arbitrary sets with complete numberings. Definition A numbering \nu of a set A is called complete (with respect to an element a \in A) if for every partial computable function f there exists a total computable function h so that (Ershov 1999:482)

\nu \circ h(n) = \begin{cases} \nu \circ f(n) & \text{if } n \in \operatorname{dom}(f), \\ a & \text{otherwise.} \end{cases}

Ershov refers to the element a as a "special" element for the numbering. A numbering \nu is called precomplete if the weaker property holds:

\nu \circ f(n) = \nu \circ h(n) \quad \text{for all } n \in \operatorname{dom}(f).

Examples Any numbering of a singleton set is complete The identity function on the natural numbers is not complete A Gödel numbering is precomplete References Y.L. Ershov (1999), "Theory of numberings", Handbook of Computability Theory, E.R. Griffor (ed.), Elsevier, pp. 473–506. A.I. Mal'tsev, Sets with complete numberings. Algebra i Logika, 1963, vol. 2, no. 2, 4–29 (Russian) Computability theory
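As a brief illustration of the first example above (an argument supplied here, not taken from the article), the completeness of any numbering of a singleton set can be checked directly against the definition:

\text{Let } A = \{a\} \text{ and let } \nu \colon \mathbb{N} \to A \text{ be any numbering, so } \nu(n) = a \text{ for every } n.
\text{Given a partial computable } f, \text{ choose } h(n) = n, \text{ which is total and computable. Then}
\nu(h(n)) = a = \nu(f(n)) \text{ whenever } n \in \operatorname{dom}(f), \text{ and } \nu(h(n)) = a \text{ otherwise,}
\text{so the defining condition holds with } a \text{ as the special element.}

By contrast, for the identity numbering on the natural numbers, a partial computable f that never takes the value a on its domain would force any such total computable h to decide membership in dom(f); choosing f with an undecidable domain then gives a contradiction, which is why the second example fails.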
Complete numbering
Mathematics
275