id: int64 (580 – 79M)
url: string (lengths 31 – 175)
text: string (lengths 9 – 245k)
source: string (lengths 1 – 109)
categories: string (160 classes)
token_count: int64 (3 – 51.8k)
40,253,680
https://en.wikipedia.org/wiki/Pyrenocine
Pyrenocines are antibiotic mycotoxins. Chemical structures References Antibiotics Mycotoxins
Pyrenocine
Biology
23
26,880,598
https://en.wikipedia.org/wiki/ITW%20Mima%20Packaging%20Systems
ITW Mima Packaging Systems is the European marketing division of ITW's Specialty Systems businesses, manufacturing fully automatic stretch wrapping machines in Finland, semi-automatic and automatic machines in Bulgaria, and manufacturing film in Belgium and Ireland. Overview Mima was founded in 1976 in the United States to manufacture stretch wrapping machinery. Mima was acquired by ITW in 1986. Alongside this, Matti Haloila started his own company in Finland and began manufacturing Haloila semi-automatic stretch wrappers in 1976. In 1983, Haloila launched an automatic, rotating ring stretch wrapper with the brand name Octopus. Haloila became part of Illinois Tool Works (ITW) in 1995, and shortly after this, ITW acquired the stretch film business from Mobil and ITW Mima Packaging Systems was formed. ITW Mima Packaging Systems manufactures stretch films in Belgium and Ireland and stretch wrappers in Bulgaria and Finland. In 2006 Mima launched the Octopus Twin, a wrapping machine capable of wrapping 150 pallets per hour. ITW Mima delivered its 3000th Octopus stretch wrapper in 2008. The Octopus is also currently being manufactured for the US market in Canada by ITW Muller. Haloila's other wrapping machines include the Cobra, Ecomat and Rolle. ITW Mima Packaging Systems is a member of the Process and Packaging Machinery Association (PPMA). References Packaging machinery
ITW Mima Packaging Systems
Engineering
286
20,932,009
https://en.wikipedia.org/wiki/Bethesda%20system
The Bethesda system (TBS), officially called The Bethesda System for Reporting Cervical Cytology, is a system for reporting cervical or vaginal cytologic diagnoses, used for reporting Pap smear results. It was introduced in 1988 and revised in 1991, 2001, and 2014. The name comes from the location (Bethesda, Maryland) of the conference, sponsored by the National Institutes of Health, that established the system. Since 2010, there has also been a Bethesda system used for cytopathology of thyroid nodules, which is called The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC or BSRTC). Like TBS, it was the result of a conference sponsored by the NIH and is published in book editions (currently by Springer). Mentions of "the Bethesda system" without further specification usually refer to the cervical system, unless the thyroid context of a discussion is implicit. Cervix Abnormal results include: Atypical squamous cells Atypical squamous cells of undetermined significance (ASC-US) Atypical squamous cells – cannot exclude HSIL (ASC-H) Low-grade squamous intraepithelial lesion (LGSIL or LSIL) High-grade squamous intraepithelial lesion (HGSIL or HSIL) Squamous cell carcinoma Atypical Glandular Cells not otherwise specified (AGC-NOS) Atypical Glandular Cells, suspicious for AIS or cancer (AGC-neoplastic) Adenocarcinoma in situ (AIS) The results are calculated differently following a Pap smear of the cervix. Squamous cell abnormalities LSIL: low-grade squamous intraepithelial lesion A low-grade squamous intraepithelial lesion (LSIL or LGSIL) indicates possible cervical dysplasia. LSIL usually indicates mild dysplasia (CIN 1), more than likely caused by a human papillomavirus infection. It is usually diagnosed following a Pap smear. CIN 1 is the most common and most benign form of cervical intraepithelial neoplasia and usually resolves spontaneously within two years. Because of this, LSIL results can be managed with a simple "watch and wait" philosophy. However, because there is a 12–16% chance of progression to more severe dysplasia, the physician may want to follow the results more aggressively by performing a colposcopy with biopsy. If the dysplasia progresses, treatment may be necessary. Treatment involves removal of the affected tissue, which can be accomplished by LEEP, cryosurgery, cone biopsy, or laser ablation. HSIL: high-grade squamous intraepithelial lesion High-grade squamous intraepithelial lesion (HSIL or HGSIL) indicates moderate or severe cervical intraepithelial neoplasia or carcinoma in situ. It is usually diagnosed following a Pap test. In some cases these lesions can lead to invasive cervical cancer if not followed appropriately. HSIL does not mean that cancer is present. Of all women with HSIL results, 2% or less have invasive cervical cancer at that time; however, about 20% would progress to invasive cervical cancer without treatment. To combat this progression, HSIL is usually followed by an immediate colposcopy with biopsy to sample or remove the dysplastic tissue. This tissue is sent for pathology testing to assign a histologic classification that is more definitive than a Pap smear result (which is a cytologic finding). HSIL generally corresponds to the histological classification of CIN 2 or 3. HSIL treatment involves the removal or destruction of the affected cells, usually by LEEP. Other methods include cryotherapy, cautery, or laser ablation, but none are performed on pregnant women for fear of disrupting the pregnancy. 
Any of these procedures is 85% likely to cure the problem. Glandular cell abnormalities Adenocarcinoma Adenocarcinoma can arise from the endocervix, endometrium and extrauterine sites. AGC AGC, formerly AGUS, is a term for atypical glandular cells of undetermined significance. It was renamed AGC to avoid confusion with ASC-US. The management of AGC is colposcopy with or without an endometrial biopsy. Thyroid nodules The Bethesda System for Reporting Thyroid Cytopathology is the system used to report whether the thyroid cytological specimen is benign or malignant on fine-needle aspiration cytology (FNAC). It is divided into six categories: repeat FNAC is recommended for Category I, clinical follow-up for Category II, repeat FNAC for Category III, lobectomy for Category IV, near-total thyroidectomy/lobectomy for Category V, and near-total thyroidectomy for Category VI. The risk of malignancy in a malignant FNAC report is 93.7%, while for a suspicious FNAC report it is 18.9%. See also American Society for Clinical Pathology References External links ASCP: The Bethesda System Website Atlas Bethesda 2001 Workshop Diagnostic endocrinology Gynaecological cancer Medical terminology Papillomavirus-associated diseases Pathology Thyroid Cervical cancer Bethesda, Maryland
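For reference, the thyroid category-to-management pairing described above can be written as a simple lookup table. The following Python sketch is illustrative only, not clinical guidance; the category names in parentheses follow the standard TBSRTC labels, which are not spelled out in the text above and are included as an assumption for context.

```python
# Illustrative lookup table for the TBSRTC categories and the management
# suggested in the text above. Not clinical guidance.
THYROID_BETHESDA = {
    "I":   ("Nondiagnostic / unsatisfactory", "repeat FNAC"),
    "II":  ("Benign", "clinical follow-up"),
    "III": ("Atypia of undetermined significance (AUS/FLUS)", "repeat FNAC"),
    "IV":  ("Follicular neoplasm / suspicious for follicular neoplasm", "lobectomy"),
    "V":   ("Suspicious for malignancy", "near-total thyroidectomy or lobectomy"),
    "VI":  ("Malignant", "near-total thyroidectomy"),
}

def recommended_management(category: str) -> str:
    """Return the management paired with a TBSRTC category in the text above."""
    name, action = THYROID_BETHESDA[category]
    return f"Category {category} ({name}): {action}"

print(recommended_management("V"))
```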
Bethesda system
Biology
1,142
185,239
https://en.wikipedia.org/wiki/Thermal%20radiation
Thermal radiation is electromagnetic radiation emitted by the thermal motion of particles in matter. All matter with a temperature greater than absolute zero emits thermal radiation. The emission of energy arises from a combination of electronic, molecular, and lattice oscillations in a material. Kinetic energy is converted into electromagnetic radiation through charge acceleration or dipole oscillation. At room temperature, most of the emission is in the infrared (IR) spectrum, though above around 525 °C (977 °F) enough of it becomes visible for the matter to visibly glow. This visible glow is called incandescence. Thermal radiation is one of the fundamental mechanisms of heat transfer, along with conduction and convection. The primary method by which the Sun transfers heat to the Earth is thermal radiation. This energy is partially absorbed and scattered in the atmosphere, the latter process being the reason why the sky is visibly blue. Much of the Sun's radiation transmits through the atmosphere to the surface where it is either absorbed or reflected. Thermal radiation can be used to detect objects or phenomena normally invisible to the human eye. Thermographic cameras create an image by sensing infrared radiation. These images can represent the temperature gradient of a scene and are commonly used to locate objects at a higher temperature than their surroundings. In a dark environment where visible light is at low levels, infrared images can be used to locate animals or people due to their body temperature. Cosmic microwave background radiation is another example of thermal radiation. Blackbody radiation is a concept used to analyze thermal radiation in idealized systems. This model applies if a radiating object meets the physical characteristics of a black body in thermodynamic equilibrium. Planck's law describes the spectrum of blackbody radiation, and relates the radiative heat flux from a body to its temperature. Wien's displacement law determines the most likely frequency of the emitted radiation, and the Stefan–Boltzmann law gives the radiant intensity. Where blackbody radiation is not an accurate approximation, emission and absorption can be modeled using quantum electrodynamics (QED). Overview Thermal radiation is the emission of electromagnetic waves from all matter that has a temperature greater than absolute zero. Thermal radiation reflects the conversion of thermal energy into electromagnetic energy. Thermal energy is the kinetic energy of random movements of atoms and molecules in matter. It is present in all matter of nonzero temperature. These atoms and molecules are composed of charged particles, i.e., protons and electrons. The kinetic interactions among matter particles result in charge acceleration and dipole oscillation. This results in the electrodynamic generation of coupled electric and magnetic fields, resulting in the emission of photons, radiating energy away from the body. Electromagnetic radiation, including visible light, will propagate indefinitely in vacuum. The characteristics of thermal radiation depend on various properties of the surface from which it is emanating, including its temperature and its spectral emissivity, as expressed by Kirchhoff's law. The radiation is not monochromatic, i.e., it does not consist of only a single frequency, but comprises a continuous spectrum of photon energies, its characteristic spectrum. 
If the radiating body and its surface are in thermodynamic equilibrium and the surface has perfect absorptivity at all wavelengths, it is characterized as a black body. A black body is also a perfect emitter. The radiation of such perfect emitters is called black-body radiation. The ratio of any body's emission relative to that of a black body is the body's emissivity, so a black body has an emissivity of one. Absorptivity, reflectivity, and emissivity of all bodies are dependent on the wavelength of the radiation. Due to reciprocity, absorptivity and emissivity for any particular wavelength are equal at equilibrium – a good absorber is necessarily a good emitter, and a poor absorber is a poor emitter. The temperature determines the wavelength distribution of the electromagnetic radiation. The distribution of power that a black body emits with varying frequency is described by Planck's law. At any given temperature, there is a frequency fmax at which the power emitted is a maximum. Wien's displacement law, and the fact that the frequency is inversely proportional to the wavelength, indicates that the peak frequency fmax is proportional to the absolute temperature T of the black body. The photosphere of the sun, at a temperature of approximately 6000 K, emits radiation principally in the (human-)visible portion of the electromagnetic spectrum. Earth's atmosphere is partly transparent to visible light, and the light reaching the surface is absorbed or reflected. Earth's surface emits the absorbed radiation, approximating the behavior of a black body at 300 K with spectral peak at fmax. At these lower frequencies, the atmosphere is largely opaque and radiation from Earth's surface is absorbed or scattered by the atmosphere. Though about 10% of this radiation escapes into space, most is absorbed and then re-emitted by atmospheric gases. It is this spectral selectivity of the atmosphere that is responsible for the planetary greenhouse effect, contributing to global warming and climate change in general (but also critically contributing to climate stability when the composition and properties of the atmosphere are not changing). History Ancient Greece Burning glasses are known to date back to about 700 BC. One of the first accurate mentions of burning glasses appears in Aristophanes's comedy, The Clouds, written in 423 BC. According to the Archimedes' heat ray anecdote, Archimedes is purported to have developed mirrors to concentrate heat rays in order to burn attacking Roman ships during the Siege of Syracuse (c. 213–212 BC), but no sources from the time have been confirmed. Catoptrics is a book attributed to Euclid on how to focus light in order to produce heat, but the book might have been written in 300 AD. Renaissance During the Renaissance, Santorio Santorio came up with one of the earliest thermoscopes. In 1612 he published his results on the heating effects from the Sun, and his attempts to measure heat from the Moon. Earlier, in 1589, Giambattista della Porta reported on the heat felt on his face, emitted by a remote candle and facilitated by a concave metallic mirror. He also reported the cooling felt from a solid ice block. Della Porta's experiment would be replicated many times with increasing accuracy. It was replicated by astronomers Giovanni Antonio Magini and Christopher Heydon in 1603, and supplied instructions for Rudolf II, Holy Roman Emperor who performed it in 1611. 
In 1660, della Porta's experiment was updated by the Accademia del Cimento using a thermometer invented by Ferdinand II, Grand Duke of Tuscany. Enlightenment In 1761, Benjamin Franklin wrote a letter describing his experiments on the relationship between color and heat absorption. He found that darker-colored clothes got hotter when exposed to sunlight than lighter-colored clothes. One experiment he performed consisted of placing square pieces of cloth of various colors out in the snow on a sunny day. He waited some time and then measured that the black pieces sank furthest into the snow of all the colors, indicating that they got the hottest and melted the most snow. Caloric theory Antoine Lavoisier considered that radiation of heat was concerned with the condition of the surface of a physical body rather than the material of which it was composed. Lavoisier described a poor radiator as a substance with a polished or smooth surface, as its molecules lay in a plane closely bound together, creating a surface layer of caloric fluid which insulated the release of the rest within. He described a good radiator as a substance with a rough surface, as only a small proportion of molecules held caloric within a given plane, allowing for greater escape from within. Count Rumford would later cite this explanation of caloric movement as insufficient to explain the radiation of cold, which became a point of contention for the theory as a whole. In his first memoir, Augustin-Jean Fresnel responded to a view he extracted from a French translation of Isaac Newton's Optics. He says that Newton imagined particles of light traversing space uninhibited by the caloric medium filling it, and refutes this view (never actually held by Newton) by saying that a body under illumination would increase indefinitely in heat. In Marc-Auguste Pictet's famous experiment of 1790, it was reported that a thermometer detected a lower temperature when a set of mirrors were used to focus "frigorific rays" from a cold object. In 1791, Pierre Prevost, a colleague of Pictet, introduced the concept of radiative equilibrium, wherein all objects both radiate and absorb heat. When an object is cooler than its surroundings, it absorbs more heat than it emits, causing its temperature to increase until it reaches equilibrium. Even at equilibrium, it continues to radiate heat, balancing absorption and emission. The discovery of infrared radiation is ascribed to astronomer William Herschel. Herschel published his results in 1800 before the Royal Society of London. Herschel used a prism to refract light from the sun and detected the calorific rays, beyond the red part of the spectrum, by an increase in the temperature recorded on a thermometer in that region. Electromagnetic theory At the end of the 19th century it was shown that the transmission of light or of radiant heat was allowed by the propagation of electromagnetic waves. Television and radio broadcasting waves are types of electromagnetic waves with specific wavelengths. All electromagnetic waves travel at the same speed; therefore, shorter wavelengths are associated with higher frequencies. All bodies generate and receive electromagnetic waves at the expense of heat exchange. In 1860, Gustav Kirchhoff published a mathematical description of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). 
By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. This relation is known as the Stefan–Boltzmann law. Quantum theory The microscopic theory of radiation is best known as the quantum theory and was first offered by Max Planck in 1900. According to this theory, energy emitted by a radiator is not continuous but is in the form of quanta. Planck noted that energy was emitted in quanta proportional to the frequency of vibration, similarly to the wave theory. The energy E of an electromagnetic wave in vacuum is found by the expression E = hf, where h is the Planck constant and f is its frequency. Bodies at higher temperatures emit radiation at higher frequencies with an increasing energy per quantum. While the propagation of electromagnetic waves of all wavelengths is often referred to as "radiation", thermal radiation is often constrained to the visible and infrared regions. For engineering purposes, it may be stated that thermal radiation is a form of electromagnetic radiation which varies with the nature of a surface and its temperature. Radiation waves may travel in unusual patterns compared to conduction heat flow. Radiation allows waves to travel from a heated body through a cold non-absorbing or partially absorbing medium and reach a warmer body again. An example is the case of the radiation waves that travel from the Sun to the Earth. Characteristics Frequency Thermal radiation emitted by a body at any temperature consists of a wide range of frequencies. The frequency distribution is given by Planck's law of black-body radiation for an idealized emitter as shown in the diagram at top. The dominant frequency (or color) range of the emitted radiation shifts to higher frequencies as the temperature of the emitter increases. For example, a red hot object radiates mainly in the long wavelengths (red and orange) of the visible band. If it is heated further, it also begins to emit discernible amounts of green and blue light, and the spread of frequencies in the entire visible range causes it to appear white to the human eye; it is white hot. Even at a white-hot temperature of 2000 K, 99% of the energy of the radiation is still in the infrared. This is determined by Wien's displacement law. In the diagram the peak value for each curve moves to the left as the temperature increases. Relationship to temperature The total radiation intensity of a black body rises as the fourth power of the absolute temperature, as expressed by the Stefan–Boltzmann law. A kitchen oven, at a temperature about double room temperature on the absolute temperature scale (600 K vs. 300 K), radiates 16 times as much power per unit area. An object at the temperature of the filament in an incandescent light bulb—roughly 3000 K, or 10 times room temperature—radiates 10,000 times as much energy per unit area. As for photon statistics, thermal light obeys Super-Poissonian statistics. Appearance When the temperature of a body is high enough, its thermal radiation spectrum becomes strong enough in the visible range to visibly glow. The visible component of thermal radiation is sometimes called incandescence, though this term can also refer to thermal radiation in general. The term derives from the Latin verb incandescere, 'to glow white'. 
In practice, virtually all solid or liquid substances start to glow at around 525 °C (977 °F), with a mildly dull red color, whether or not a chemical reaction takes place that produces light as a result of an exothermic process. This limit is called the Draper point. The incandescence does not vanish below that temperature, but it is too weak in the visible spectrum to be perceptible. Reciprocity The rate of electromagnetic radiation emitted by a body at a given frequency is proportional to the rate that the body absorbs radiation at that frequency, a property known as reciprocity. Thus, a surface that absorbs more red light thermally radiates more red light. This principle applies to all properties of the wave, including wavelength (color), direction, polarization, and even coherence. It is therefore possible to have thermal radiation which is polarized, coherent, and directional; though polarized and coherent sources are fairly rare in nature. Fundamental principles Thermal radiation is one of the three principal mechanisms of heat transfer. It entails the emission of a spectrum of electromagnetic radiation due to an object's temperature. Other mechanisms are convection and conduction. Electromagnetic waves Thermal radiation is characteristically different from conduction and convection in that it does not require a medium and, in fact, it reaches maximum efficiency in a vacuum. Thermal radiation is a type of electromagnetic radiation which is often modeled by the propagation of waves. These waves have the standard wave properties of frequency, ν, and wavelength, λ, which are related by the equation c = λν, where c is the speed of light in the medium. Irradiation Thermal irradiation is the rate at which radiation is incident upon a surface per unit area. It is measured in watts per square meter. Irradiation can either be reflected, absorbed, or transmitted. The components of irradiation can then be characterized by the equation α + ρ + τ = 1, where α, ρ, and τ represent the absorptivity, reflectivity and transmissivity. These components are a function of the wavelength of the electromagnetic wave as well as the material properties of the medium. Absorptivity and emissivity The spectral absorptivity αλ is equal to the emissivity ελ; this relation is known as Kirchhoff's law of thermal radiation. An object is called a black body if this holds for all frequencies, and the following formula applies: αλ = ελ = 1. If objects appear white (reflective in the visual spectrum), they are not necessarily equally reflective (and thus non-emissive) in the thermal infrared – see the diagram at the left. Most household radiators are painted white, which is sensible given that they are not hot enough to radiate any significant amount of heat, and are not designed as thermal radiators at all – instead, they are actually convectors, and painting them matt black would make little difference to their efficacy. Acrylic- and urethane-based white paints have 93% blackbody radiation efficiency at room temperature (meaning the term "black body" does not always correspond to the visually perceived color of an object). Materials that do not follow the "black color = high emissivity/absorptivity" caveat will most likely have a functional (wavelength-dependent) spectral emissivity/absorptivity. Only truly gray systems (relative equivalent emissivity/absorptivity and no directional transmissivity dependence in all control volume bodies considered) can achieve reasonable steady-state heat flux estimates through the Stefan–Boltzmann law. 
Encountering this "ideally calculable" situation is almost impossible (although common engineering procedures surrender the dependency of these unknown variables and "assume" this to be the case). Optimistically, these "gray" approximations will get close to real solutions, as most divergence from Stefan–Boltzmann solutions is very small (especially in most standard temperature and pressure lab controlled environments). Reflectivity Reflectivity deviates from the other properties in that it is bidirectional in nature. In other words, this property depends on the direction of the incident radiation as well as the direction of the reflection. Therefore, the reflected rays of a radiation spectrum incident on a real surface in a specified direction form an irregular shape that is not easily predictable. In practice, surfaces are often assumed to reflect either in a perfectly specular or a diffuse manner. In a specular reflection, the angles of reflection and incidence are equal. In diffuse reflection, radiation is reflected equally in all directions. Reflection from smooth and polished surfaces can be assumed to be specular reflection, whereas reflection from rough surfaces approximates diffuse reflection. In radiation analysis a surface is defined as smooth if the height of the surface roughness is much smaller than the wavelength of the incident radiation. Transmissivity A medium that experiences no transmission (τ = 0) is opaque, in which case absorptivity and reflectivity sum to unity: α + ρ = 1. Radiation intensity Radiation emitted from a surface can propagate in any direction from the surface. Irradiation can also be incident upon a surface from any direction. The amount of irradiation on a surface is therefore dependent on the relative orientation of both the emitter and the receiver. The parameter radiation intensity, I, is used to quantify how much radiation makes it from one surface to another. Radiation intensity is often modeled using a spherical coordinate system. Emissive power Emissive power is the rate at which radiation is emitted per unit area. It is a measure of heat flux. The total emissive power from a surface is denoted as E and can be determined by E = ∫ I cos θ dω, where the integral is taken over the hemisphere above the surface, dω is the solid angle element in units of steradians, and I is the total intensity. The total emissive power can also be found by integrating the spectral emissive power over all possible wavelengths. This is calculated as E = ∫ Eλ dλ, integrated from λ = 0 to ∞, where λ represents wavelength. The spectral emissive power can also be determined from the spectral intensity, Iλ, as Eλ = πIλ for a diffuse emitter, where both spectral emissive power and emissive intensity are functions of wavelength. Blackbody radiation A "black body" is a body which has the property of allowing all incident rays to enter without surface reflection and not allowing them to leave again. Blackbodies are idealized surfaces that act as the perfect absorber and emitter. They serve as the standard against which real surfaces are compared when characterizing thermal radiation. A blackbody is defined by three characteristics: A blackbody absorbs all incident radiation, regardless of wavelength and direction. No surface can emit more energy than a blackbody for a given temperature and wavelength. A blackbody is a diffuse emitter. The Planck distribution The spectral intensity of a blackbody, Iλ,b, was first determined by Max Planck. 
It is given by Planck's law per unit wavelength as Iλ,b(λ, T) = (2hc²/λ⁵) · 1/(exp(hc/(λkBT)) − 1). This formula mathematically follows from calculation of the spectral distribution of energy in a quantized electromagnetic field which is in complete thermal equilibrium with the radiating object. Planck's law shows that radiative energy increases with temperature, and explains why the peak of an emission spectrum shifts to shorter wavelengths at higher temperatures. It can also be found that energy emitted at shorter wavelengths increases more rapidly with temperature relative to longer wavelengths. The equation is derived as an infinite sum over all possible frequencies in a semi-sphere region. The energy, hν, of each photon is multiplied by the number of states available at that frequency, and the probability that each of those states will be occupied. Stefan–Boltzmann law The Planck distribution can be used to find the spectral emissive power of a blackbody, as follows: Eλ,b = πIλ,b. The total emissive power of a blackbody is then calculated as Eb = ∫ Eλ,b dλ, integrated over all wavelengths. The solution of the above integral yields a remarkably elegant equation for the total emissive power of a blackbody, the Stefan–Boltzmann law, which is given as Eb = σT⁴, where σ is the Stefan–Boltzmann constant. Wien's displacement law The wavelength for which the emission intensity is highest is given by Wien's displacement law as λmax = b/T, where the constant b is approximately 2898 μm·K. Constants Definitions of constants used in the above equations: Variables Definitions of variables, with example values: Emission from non-black surfaces For surfaces which are not black bodies, one has to consider the (generally frequency dependent) emissivity factor ε. This factor has to be multiplied with the radiation spectrum formula before integration. If it is taken as a constant, the resulting formula for the power output can be written in a way that contains ε as a factor: P = εσAT⁴, where A is the radiating surface area. This type of theoretical model, with frequency-independent emissivity lower than that of a perfect black body, is often known as a grey body. For frequency-dependent emissivity, the solution for the integrated power depends on the functional form of the dependence, though in general there is no simple expression for it. Practically speaking, if the emissivity of the body is roughly constant around the peak emission wavelength, the gray body model tends to work fairly well since the weight of the curve around the peak emission tends to dominate the integral. Heat transfer between surfaces Calculation of radiative heat transfer between groups of objects, including a 'cavity' or 'surroundings', requires solution of a set of simultaneous equations using the radiosity method. In these calculations, the geometrical configuration of the problem is distilled to a set of numbers called view factors, which give the proportion of radiation leaving any given surface that hits another specific surface. These calculations are important in the fields of solar thermal energy, boiler and furnace design and raytraced computer graphics. The net radiative heat transfer from one surface to another is the radiation leaving the first surface for the other minus that arriving from the second surface. Formulas for radiative heat transfer can be derived for more particular or more elaborate physical arrangements, such as between parallel plates, concentric spheres and the internal surfaces of a cylinder. Applications Thermal radiation is an important factor in many engineering applications, especially for those dealing with high temperatures. Solar energy Sunlight is the incandescence of the "white hot" surface of the Sun. 
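The relations above (Planck's law, the Stefan–Boltzmann integral, Wien's displacement law, and the gray-body factor ε) lend themselves to a quick numerical check. The following Python sketch is illustrative only and not part of the article; it uses standard values for the physical constants, and the temperature, emissivity, and area in the final lines are arbitrary example choices.

```python
# Sketch: evaluate Planck's law numerically, recover the Stefan–Boltzmann total,
# and locate the Wien peak. Constants are standard rounded values.
import math

H = 6.626e-34      # Planck constant, J·s
C = 2.998e8        # speed of light, m/s
KB = 1.381e-23     # Boltzmann constant, J/K
SIGMA = 5.670e-8   # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def spectral_emissive_power(lam, T):
    """Blackbody spectral emissive power Eλ,b = π·Iλ,b (W·m⁻² per metre of wavelength)."""
    x = H * C / (lam * KB * T)
    if x > 700:                      # avoid overflow; contribution is negligible there
        return 0.0
    return (2 * math.pi * H * C**2 / lam**5) / (math.exp(x) - 1)

def total_emissive_power(T, lam_min=1e-7, lam_max=1e-3, n=100_000):
    """Midpoint-rule integral of Eλ,b over wavelength; should approach σT⁴."""
    dlam = (lam_max - lam_min) / n
    return sum(spectral_emissive_power(lam_min + (i + 0.5) * dlam, T)
               for i in range(n)) * dlam

T = 300.0                                       # K, roughly Earth's surface temperature
print(total_emissive_power(T), SIGMA * T**4)    # both ≈ 459 W/m²
print(2.898e-3 / T)                             # Wien peak ≈ 9.7e-6 m, i.e. in the infrared
print(0.95 * SIGMA * 1.0 * T**4)                # gray-body example: assumed ε = 0.95, A = 1 m²
```

With these inputs the numerical integral of the Planck spectrum reproduces σT⁴ to within the quadrature error, and the Wien peak at 300 K falls near 10 μm, consistent with the earlier statement that room-temperature emission lies in the infrared.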
Electromagnetic radiation from the sun has a peak wavelength of about 550 nm, and can be harvested to generate heat or electricity. Thermal radiation can be concentrated on a tiny spot via reflecting mirrors, which concentrating solar power takes advantage of. Instead of mirrors, Fresnel lenses can also be used to concentrate radiant energy. Either method can be used to quickly vaporize water into steam using sunlight. For example, the sunlight reflected from mirrors heats the PS10 Solar Power Plant, and during the day it can heat water enough to produce steam. A selective surface can be used when energy is being extracted from the sun. Selective surfaces are surfaces tuned to maximize the amount of energy they absorb from the sun's radiation while minimizing the amount of energy they lose to their own thermal radiation. Selective surfaces can also be used on solar collectors. Incandescent light bulbs The incandescent light bulb creates light by heating a filament to a temperature at which it emits significant visible thermal radiation. For a tungsten filament at a typical temperature of 3000 K, only a small fraction of the emitted radiation is visible, and the majority is infrared light. This infrared light does not help a person see, but still transfers heat to the environment, making incandescent lights relatively inefficient as a light source. If the filament could be made hotter, efficiency would increase; however, there are currently no materials able to withstand such temperatures which would be appropriate for use in lamps. More efficient light sources, such as fluorescent lamps and LEDs, do not function by incandescence. Thermal comfort Thermal radiation plays a crucial role in human comfort, influencing perceived temperature sensation. Various technologies have been developed to enhance thermal comfort, including personal heating and cooling devices. The mean radiant temperature is a metric used to quantify the exchange of radiant heat between a human and their surrounding environment. Personal heating Radiant personal heaters are devices that convert energy into infrared radiation and are designed to increase a user's perceived temperature. They typically are either gas-powered or electric. In domestic and commercial applications, gas-powered radiant heaters can produce a higher heat flux than electric heaters, which are limited by the amount of current that can be drawn through a circuit breaker. Personal cooling Personalized cooling technology is an example of an application where optical spectral selectivity can be beneficial. Conventional personal cooling is typically achieved through heat conduction and convection. However, the human body is a very efficient emitter of infrared radiation, which provides an additional cooling mechanism. Most conventional fabrics are opaque to infrared radiation and block thermal emission from the body to the environment. Fabrics have been proposed for personalized cooling applications that allow infrared radiation to pass directly through clothing, while remaining opaque at visible wavelengths, allowing the wearer to remain cooler. Windows Low-emissivity windows in houses are a more complicated technology, since they must have low emissivity at thermal wavelengths while remaining transparent to visible light. To reduce the heat transfer from a surface, such as a glass window, a clear reflective film with a low emissivity coating can be placed on the interior of the surface. 
"Low-emittance (low-E) coatings are microscopically thin, virtually invisible, metal or metallic oxide layers deposited on a window or skylight glazing surface primarily to reduce the U-factor by suppressing radiative heat flow". By adding this coating, we limit the amount of radiation that leaves the window, thus increasing the amount of heat that is retained inside the window. Spacecraft Shiny metal surfaces have low emissivities both in the visible wavelengths and in the far infrared. Such surfaces can be used to reduce heat transfer in both directions; an example of this is the multi-layer insulation used to insulate spacecraft. Since any electromagnetic radiation, including thermal radiation, conveys momentum as well as energy, thermal radiation also induces very small forces on the radiating or absorbing objects. Normally these forces are negligible, but they must be taken into account when considering spacecraft navigation. The Pioneer anomaly, where the motion of the craft slightly deviated from that expected from gravity alone, was eventually tracked down to asymmetric thermal radiation from the spacecraft. Similarly, the orbits of asteroids are perturbed since the asteroid absorbs solar radiation on the side facing the Sun, but then re-emits the energy at a different angle as the rotation of the asteroid carries the warm surface out of the Sun's view (the YORP effect). Nanostructures Nanostructures with spectrally selective thermal emittance properties offer numerous technological applications for energy generation and efficiency, e.g., for daytime radiative cooling of photovoltaic cells and buildings. These applications require high emittance in the frequency range corresponding to the atmospheric transparency window in the 8 to 13 micron wavelength range. A selective emitter radiating strongly in this range is thus exposed to the clear sky, enabling the use of the outer space as a very low temperature heat sink. Health and safety Metabolic temperature regulation In a practical, room-temperature setting, humans lose considerable energy due to infrared thermal radiation in addition to that lost by conduction to air (aided by concurrent convection, or other air movement like drafts). The heat energy lost is partially regained by absorbing heat radiation from walls or other surroundings. Human skin has an emissivity very close to 1.0. A human, having roughly 2 m2 of surface area and a temperature of about 307 K, continuously radiates approximately 1000 W. If people are indoors, surrounded by surfaces at 296 K, they receive back about 900 W from the wall, ceiling, and other surroundings, resulting in a net loss of 100 W. These estimates are highly dependent on extrinsic variables, such as wearing clothes. Lighter colors and also whites and metallic substances absorb less of the illuminating light, and as a result heat up less. However, color makes little difference in the heat transfer between an object at everyday temperatures and its surroundings. This is because the dominant emitted wavelengths are not in the visible spectrum, but rather infrared. Emissivities at those wavelengths are largely unrelated to visual emissivities (visible colors); in the far infra-red, most objects have high emissivities. Thus, except in sunlight, the color of clothing makes little difference as regards warmth; likewise, paint color of houses makes little difference to warmth except when the painted part is sunlit. 
Burns Thermal radiation is a phenomenon that can burn skin and ignite flammable materials. The time to damage from exposure to thermal radiation is a function of the rate of delivery of the heat. Radiative heat flux and effects are given as follows: Near-field radiative heat transfer At distances on the scale of the wavelength of a radiated electromagnetic wave or smaller, Planck's law is not accurate. For objects this small and close together, the quantum tunneling of EM waves has a significant impact on the rate of radiation. A more sophisticated framework involving electromagnetic theory must be used for smaller distances from the thermal source or surface. For example, although far-field thermal radiation at distances from surfaces of more than one wavelength is generally not coherent to any extent, near-field thermal radiation (i.e., radiation at distances of a fraction of various radiation wavelengths) may exhibit a degree of both temporal and spatial coherence. Planck's law of thermal radiation has been challenged in recent decades by predictions and successful demonstrations of the radiative heat transfer between objects separated by nanoscale gaps that deviate significantly from the law predictions. This deviation is especially strong (up to several orders of magnitude) when the emitter and absorber support surface polariton modes that can couple through the gap separating cold and hot objects. However, to take advantage of the surface-polariton-mediated near-field radiative heat transfer, the two objects need to be separated by ultra-narrow gaps on the order of microns or even nanometers. This limitation significantly complicates practical device designs. Another way to modify the object thermal emission spectrum is by reducing the dimensionality of the emitter itself. This approach builds upon the concept of confining electrons in quantum wells, wires and dots, and tailors thermal emission by engineering confined photon states in two- and three-dimensional potential traps, including wells, wires, and dots. Such spatial confinement concentrates photon states and enhances thermal emission at select frequencies. To achieve the required level of photon confinement, the dimensions of the radiating objects should be on the order of or below the thermal wavelength predicted by Planck's law. Most importantly, the emission spectrum of thermal wells, wires and dots deviates from Planck's law predictions not only in the near field, but also in the far field, which significantly expands the range of their applications. See also Incandescence Infrared photography Interior radiation control coating Heat transfer Microwave Radiation Planck radiation Radiant cooling Sakuma–Hattori equation Thermal dose unit View factor References Further reading E.M. Sparrow and R.D. Cess. Radiation Heat Transfer. Hemisphere Publishing Corporation, 1978. Kuenzer, C. and S. Dech (2013): Thermal Infrared Remote Sensing: Sensors, Methods, Applications (= Remote Sensing and Digital Image Processing 17). Dordrecht: Springer. External links Black Body Emission Calculator Heat transfer Atmospheric Radiation Infrared Temperature Calibration 101 Electromagnetic radiation Heat transfer Thermodynamics Temperature Infrared
Thermal radiation
Physics,Chemistry,Mathematics
6,631
16,817,373
https://en.wikipedia.org/wiki/Volta%20Prize
The Volta Prize was originally established by Napoleon III during the Second French Empire in 1852 to honor Alessandro Volta, an Italian physicist noted for developing the electric battery. This international prize awarded 50,000 French francs for extraordinary scientific discoveries related to electricity. The prize was instituted by the Ministry of Public Instruction with the personal funding of the French Emperor; the selection committee was usually composed of members of the French Academy of Sciences. Notable recipients have included Heinrich Ruhmkorff, who commercialised the induction coil, and Zénobe Gramme, inventor of the Gramme dynamo and the first practical electric motor used in industry. One of its most notable awards was made in 1880, when Alexander Graham Bell received the fourth edition of the Volta Prize for the invention of the telephone. Among the committee members who judged were Victor Hugo and Alexandre Dumas, fils. Since Bell was himself becoming more affluent, he used the prize money to create institutions in and around Washington, D.C., including the prestigious Volta Laboratory Association in 1880 (also known as the 'Volta Laboratory' and as the 'Alexander Graham Bell Laboratory'), a precursor to Bell Labs, with his endowment fund (the 'Volta Fund'), and then in 1887 the 'Volta Bureau', which later became the Alexander Graham Bell Association for the Deaf and Hard of Hearing (AG Bell). The prize was discontinued in 1888. Inspiration Galvanism Prize The Volta Prize was inspired by the earlier French Academy of Sciences Galvanism Prize, created by Napoleon Bonaparte in 1801. A Grand Prize of 60,000 francs and a medal of 30,000 francs were to be given for discoveries similar to those of Volta and Benjamin Franklin. The Grand Prize never found a deserving recipient. Only four recipients received a secondary reward of 30,000 francs from the Galvanism Prize: 1806 Paul Erman. 1807 Sir Humphry Davy. 1809 Shared between Joseph Louis Gay-Lussac and Baron Louis Jacques Thénard. Napoleon III's interest in science Additionally, the founder of the Volta Prize and future Emperor of the French, Louis-Napoléon Bonaparte (Napoleon III), nephew of Bonaparte, was himself very invested in the development of electric science. He presented his own voltaic pile at the French Academy of Sciences in 1843, made out of a single metal and two acid solutions. Nomination rules and prize The rules of the Volta Prize were decreed by Napoleon III in Paris on 23 February 1852. The decree contains five articles: Article 1: A prize of 50,000 French francs is to be awarded for new applications of the voltaic pile in the fields of industry and heat sources, public lighting, chemistry, mechanics, and/or medicine. Article 2: Scientists and inventors of all nationalities are admitted to the competition. Article 3: The prize is open to claim for five years. Article 4: A committee is to be established to analyse the breakthrough of each of the contestants and to recognize whether it fulfils the necessary conditions. Article 5: The Ministers of France are in charge of the execution of the present decree. The article descriptions above are not a literal translation of the original French articles. The sum of money, 50,000 francs, was approximately US$10,000 at that time, more than five times the annual salary of a Paris faculty professor. Among the members of the committee, Edmond Becquerel and Jean-Baptiste Dumas were known to have served as reporters in certain editions. 
Recipients All the Volta Prize editions are listed below: 1858 No prize awarded. Honorary medals were awarded to Heinrich Ruhmkorff, Paul-Gustave Froment and Duchenne de Boulogne. 1863 Heinrich Ruhmkorff, for developing the Ruhmkorff coil. 1871 No prize awarded. 1880 Alexander Graham Bell, for invention of the telephone. A secondary prize of 20,000 francs was awarded to Zénobe Gramme. 1888 Zénobe Gramme, for his labours in introducing and perfecting the continuous-current dynamo. Other minor recognitions were also given to Paul-Gustave Froment for the electric motor, to Auguste Achard for the electric brake, to Gaetan Bonelli for the electric loom, to David Edward Hughes for the printing telegraph, to Giovanni Caselli for the pantelegraph, to Victor Serrin for his lighting system, to Leopold Oudry for galvanoplasty, to Duchenne de Boulogne for the applications of electricity in medicine, to Gaston Planté for a development of a secondary battery, and one further award for research on electric currents. See also Alexander Graham Bell honors and tributes Edison Volta Prize List of physics awards Notes References Physics awards French awards Awards established in 1852 Alessandro Volta 1852 establishments in France
Volta Prize
Technology
961
22,923,580
https://en.wikipedia.org/wiki/Alfred%20Brauer
Alfred Theodor Brauer (April 9, 1894 – December 23, 1985) was a German-American mathematician who did work in number theory. He was born in Charlottenburg, and studied at the University of Berlin. Because he had served Germany in World War I, and had even been wounded in the war, he was able to keep his position longer than many other Jewish academics, who had been forced out after Hitler's rise to power. In 1935 he lost his position, and in 1938 he tried to leave Germany, but was not able to until the following year. He initially worked in the Northeast, but in 1942 he settled into a position at the University of North Carolina at Chapel Hill. A good deal of his work, as well as the Alfred T. Brauer Library, is associated with this university. He occasionally taught at Wake Forest University after he retired from Chapel Hill at 70. He died in North Carolina, aged 91. He was the brother of the mathematician Richard Brauer, who was the founder of modular representation theory. See also Brauer chain Scholz–Brauer conjecture References Further reading External links 20th-century German mathematicians 20th-century American mathematicians Number theorists Academic staff of the Humboldt University of Berlin University of North Carolina at Chapel Hill faculty Wake Forest University faculty Jewish American scientists Scientists from Berlin 1894 births 1985 deaths Jewish emigrants from Nazi Germany to the United States German Jewish military personnel of World War I People from Charlottenburg People from the Province of Brandenburg 20th-century American Jews Humboldt University of Berlin alumni
Alfred Brauer
Mathematics
301
1,765,418
https://en.wikipedia.org/wiki/Ecosystem%20ecology
Ecosystem ecology is the integrated study of living (biotic) and non-living (abiotic) components of ecosystems and their interactions within an ecosystem framework. This science examines how ecosystems work and relates this to their components such as chemicals, bedrock, soil, plants, and animals. Ecosystem ecology examines physical and biological structures and examines how these ecosystem characteristics interact with each other. Ultimately, this helps us understand how to maintain high-quality water and economically viable commodity production. A major focus of ecosystem ecology is on functional processes, ecological mechanisms that maintain the structure and services produced by ecosystems. These include primary productivity (production of biomass), decomposition, and trophic interactions. Studies of ecosystem function have greatly improved human understanding of sustainable production of forage, fiber, fuel, and provision of water. Functional processes are mediated by regional-to-local level climate, disturbance, and management. Thus ecosystem ecology provides a powerful framework for identifying ecological mechanisms that interact with global environmental problems, especially global warming and degradation of surface water. This example demonstrates several important aspects of ecosystems: Ecosystem boundaries are often nebulous and may fluctuate in time Organisms within ecosystems are dependent on ecosystem level biological and physical processes Adjacent ecosystems closely interact and often are interdependent for maintenance of community structure and functional processes that maintain productivity and biodiversity These characteristics also introduce practical problems into natural resource management. Who will manage which ecosystem? Will timber cutting in the forest degrade recreational fishing in the stream? These questions are difficult for land managers to address while the boundary between ecosystems remains unclear, even though decisions in one ecosystem will affect the other. We need a better understanding of the interactions and interdependencies of these ecosystems and the processes that maintain them before we can begin to address these questions. Ecosystem ecology is an inherently interdisciplinary field of study. An individual ecosystem is composed of populations of organisms, interacting within communities, and contributing to the cycling of nutrients and the flow of energy. The ecosystem is the principal unit of study in ecosystem ecology. Population, community, and physiological ecology provide many of the underlying biological mechanisms influencing ecosystems and the processes they maintain. The flow of energy and cycling of matter at the ecosystem level are often examined in ecosystem ecology, but, as a whole, this science is defined more by subject matter than by scale. Ecosystem ecology approaches organisms and abiotic pools of energy and nutrients as an integrated system, which distinguishes it from associated sciences such as biogeochemistry. Biogeochemistry and hydrology focus on several fundamental ecosystem processes such as biologically mediated chemical cycling of nutrients and physical-biological cycling of water. Ecosystem ecology forms the mechanistic basis for regional or global processes encompassed by landscape-to-regional hydrology, global biogeochemistry, and earth system science. History Ecosystem ecology is philosophically and historically rooted in terrestrial ecology. 
The ecosystem concept has evolved rapidly during the last 100 years with important ideas developed by Frederic Clements, a botanist who argued for specific definitions of ecosystems and that physiological processes were responsible for their development and persistence. Although most of Clements's ecosystem definitions have been greatly revised, initially by Henry Gleason and Arthur Tansley, and later by contemporary ecologists, the idea that physiological processes are fundamental to ecosystem structure and function remains central to ecology. Later work by Eugene Odum and Howard T. Odum quantified flows of energy and matter at the ecosystem level, thus documenting the general ideas proposed by Clements and his contemporary Charles Elton. In this model, energy flows through the whole system were dependent on biotic and abiotic interactions of each individual component (species, inorganic pools of nutrients, etc.). Later work demonstrated that these interactions and flows applied to nutrient cycles, changed over the course of succession, and held powerful controls over ecosystem productivity. Transfers of energy and nutrients are innate to ecological systems regardless of whether they are aquatic or terrestrial. Thus, ecosystem ecology has emerged from important biological studies of plants, animals, terrestrial, aquatic, and marine ecosystems. Ecosystem services Ecosystem services are ecologically mediated functional processes essential to sustaining healthy human societies. Water provision and filtration, production of biomass in forestry, agriculture, and fisheries, and removal of greenhouse gases such as carbon dioxide (CO2) from the atmosphere are examples of ecosystem services essential to public health and economic opportunity. Nutrient cycling is a process fundamental to agricultural and forest production. However, like most ecosystem processes, nutrient cycling is not an ecosystem characteristic which can be “dialed” to the most desirable level. Maximizing production in degraded systems is an overly simplistic solution to the complex problems of hunger and economic security. For instance, intensive fertilizer use in the midwestern United States has resulted in degraded fisheries in the Gulf of Mexico. Regrettably, a “Green Revolution” of intensive chemical fertilization has been recommended for agriculture in developed and developing countries. These strategies risk alteration of ecosystem processes that may be difficult to restore, especially when applied at broad scales without adequate assessment of impacts. Ecosystem processes may take many years to recover from significant disturbance. For instance, large-scale forest clearance in the northeastern United States during the 18th and 19th centuries has altered soil texture, dominant vegetation, and nutrient cycling in ways that impact forest productivity in the present day. An appreciation of the importance of ecosystem function in maintenance of productivity, whether in agriculture or forestry, is needed in conjunction with plans for restoration of essential processes. Improved knowledge of ecosystem function will help to achieve long-term sustainability and stability in the poorest parts of the world. Operation Biomass productivity is one of the most apparent and economically important ecosystem functions. Biomass accumulation begins at the cellular level via photosynthesis. Photosynthesis requires water and consequently global patterns of annual biomass production are correlated with annual precipitation. 
Amounts of productivity are also dependent on the overall capacity of plants to capture sunlight, which is directly correlated with plant leaf area and N content. Net primary productivity (NPP) is the primary measure of biomass accumulation within an ecosystem. Net primary productivity can be calculated by a simple formula where the total amount of productivity is adjusted for total productivity losses through maintenance of biological processes: NPP = GPP − Rproducer, where GPP is gross primary productivity and Rproducer is photosynthate (carbon) lost via cellular respiration. NPP is difficult to measure, but a new technique known as eddy covariance has shed light on how natural ecosystems influence the atmosphere. Figure 4 shows seasonal and annual changes in CO2 concentration measured at Mauna Loa, Hawaii from 1987 to 1990. CO2 concentration steadily increased, but within-year variation has been greater than the annual increase since measurements began in 1957. These variations were thought to be due to seasonal uptake of CO2 during summer months. A newly developed technique for assessing ecosystem NPP has confirmed that seasonal variations are driven by seasonal changes in CO2 uptake by vegetation. This has led many scientists and policy makers to speculate that ecosystems can be managed to ameliorate problems with global warming. This type of management may include reforesting or altering forest harvest schedules for many parts of the world. Decomposition and nutrient cycling Decomposition and nutrient cycling are fundamental to ecosystem biomass production. Most natural ecosystems are nitrogen (N) limited and biomass production is closely correlated with N turnover. Typically external input of nutrients is very low and efficient recycling of nutrients maintains productivity. Decomposition of plant litter accounts for the majority of nutrients recycled through ecosystems (Figure 3). Rates of plant litter decomposition are highly dependent on litter quality; high concentration of phenolic compounds, especially lignin, in plant litter has a retarding effect on litter decomposition. More complex C compounds are decomposed more slowly and may take many years to break down completely. Decomposition is typically described with exponential decay and has been related to the mineral concentrations, especially manganese, in the leaf litter. Globally, rates of decomposition are mediated by litter quality and climate. Ecosystems dominated by plants with low-lignin concentration often have rapid rates of decomposition and nutrient cycling (Chapin et al. 1982). Simple carbon (C) containing compounds are preferentially metabolized by decomposer microorganisms, which results in rapid initial rates of decomposition (see Figure 5A), in contrast to models that depend on constant rates of decay, so-called “k” values (see Figure 5B). In addition to litter quality and climate, the activity of soil fauna is very important. However, these models do not reflect simultaneous linear and non-linear decay processes which likely occur during decomposition. For instance, proteins, sugars and lipids decompose exponentially, but lignin decays at a more linear rate. Thus, litter decay is inaccurately predicted by simplistic models. A simple alternative model presented in Figure 5C shows significantly more rapid decomposition than the standard model of Figure 5B. 
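As a concrete illustration of the two relations discussed above, the Python sketch below computes NPP from NPP = GPP − Rproducer and evaluates the single-pool exponential ("k value") litter decay model. The numeric inputs (GPP, producer respiration, decay constant, initial litter mass) are arbitrary illustrative values, not measurements from the text.

```python
# Minimal sketch of the productivity and litter-decay relations described above.
import math

def net_primary_productivity(gpp, r_producer):
    """NPP = GPP − Rproducer, e.g. in g C per m² per year."""
    return gpp - r_producer

def litter_mass_remaining(m0, k, t_years):
    """Single-pool exponential decay: litter mass left after t years with decay constant k (per year)."""
    return m0 * math.exp(-k * t_years)

# Illustrative values only
print(net_primary_productivity(gpp=1500.0, r_producer=700.0))       # 800.0
for year in range(6):
    print(year, round(litter_mass_remaining(100.0, k=0.4, t_years=year), 1))
```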
Better understanding of decomposition models is an important research area of ecosystem ecology because this process is closely tied to nutrient supply and the overall capacity of ecosystems to sequester CO2 from the atmosphere. Trophic dynamics Trophic dynamics refers to the process of energy and nutrient transfer between organisms. Trophic dynamics is an important part of the structure and function of ecosystems. Figure 3 shows energy transfer for an ecosystem at Silver Springs, Florida. Energy gained by primary producers (plants, P) is consumed by herbivores (H), which are consumed by carnivores (C), which are themselves consumed by "top-carnivores" (TC). One of the most obvious patterns in Figure 3 is that as one moves up to higher trophic levels (i.e. from plants to top-carnivores) the total amount of energy decreases. Plants exert a "bottom-up" control on the energy structure of ecosystems by determining the total amount of energy that enters the system. However, predators can also influence the structure of lower trophic levels from the top down. These influences can dramatically shift dominant species in terrestrial and marine systems. The interplay and relative strength of top-down vs. bottom-up controls on ecosystem structure and function is an important area of research in the greater field of ecology. Trophic dynamics can strongly influence rates of decomposition and nutrient cycling in time and in space. For example, herbivory can increase litter decomposition and nutrient cycling via direct changes in litter quality and altered dominant vegetation. Insect herbivory has been shown to increase rates of decomposition and nutrient turnover due to changes in litter quality and increased frass inputs. However, insect outbreak does not always increase nutrient cycling. Stadler showed that C-rich honeydew produced during aphid outbreaks can result in increased N immobilization by soil microbes, thus slowing down nutrient cycling and potentially limiting biomass production. North Atlantic marine ecosystems have been greatly altered by overfishing of cod. Cod stocks crashed in the 1990s, which resulted in increases in their prey such as shrimp and snow crab. Human intervention in ecosystems has resulted in dramatic changes to ecosystem structure and function. These changes are occurring rapidly and have unknown consequences for economic security and human well-being. Applications and importance Lessons from two Central American cities The biosphere has been greatly altered by the demands of human societies. Ecosystem ecology plays an important role in understanding and adapting to the most pressing current environmental problems. Restoration ecology and ecosystem management are closely associated with ecosystem ecology. Restoring highly degraded resources depends on integration of functional mechanisms of ecosystems. Without these functions intact, the economic value of ecosystems is greatly reduced and potentially dangerous conditions may develop in the field. For example, areas within the mountainous western highlands of Guatemala are more susceptible to catastrophic landslides and crippling seasonal water shortages due to loss of forest resources. In contrast, cities such as Totonicapán that have preserved forests through strong social institutions have greater local economic stability and overall greater human well-being.
This situation is striking considering that these areas are close to each other, the majority of inhabitants are of Mayan descent, and the topography and overall resources are similar. This is a case of two groups of people managing resources in fundamentally different ways. Ecosystem ecology provides the basic science needed to avoid degradation and to restore ecosystem processes that provide for basic human needs. See also Biogeochemistry Community ecology Earth system science Holon (philosophy) Landscape ecology Systems ecology MuSIASEM References Systems ecology Global natural environment Ecological processes Ecosystems
Ecosystem ecology
Physics,Biology,Environmental_science
2,503
14,827,806
https://en.wikipedia.org/wiki/Tau%20Geminorum
Tau Geminorum, Latinized from τ Geminorum, is a star in the northern zodiac constellation of Gemini. It has an apparent visual magnitude of +4.42, making it visible to the naked eye under suitably good seeing conditions. This star is close enough to the Earth that its distance can be measured using the parallax technique, which yields a value of roughly . It is an evolved giant star of the spectral type K2 III. It has double the mass of the Sun and has expanded to 30 times the Sun's radius. Tau Geminorum radiates 364 times as much energy as the Sun from its expanded outer atmosphere at an effective temperature of 4,583 K, giving it the characteristic orange-hued glow of a K-type star. It appears to be rotating slowly with a projected rotational velocity of . Substellar companion This star has a brown dwarf companion designated Tau Geminorum b, whose mass is at least 20.6 Jupiter masses. It was discovered in 2004 by Mitchell and colleagues, who also discovered Nu Ophiuchi b at the same time. This brown dwarf takes to revolve around Tau Gem. It may also have a stellar companion: a magnitude 11, K0 dwarf at a projected separation of about . References K-type giants Brown dwarfs Gemini (constellation) Geminorum, Tau Durchmusterung objects Geminorum, 46 054719 034693 2697
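The quoted radius, temperature, and luminosity can be cross-checked with the Stefan-Boltzmann relation, L/Lsun = (R/Rsun)^2 x (T/Tsun)^4. The short Python sketch below performs this check; the solar effective temperature of 5,772 K is the only value not taken from the article text, and the small difference from the quoted 364 solar luminosities is consistent with rounding of the radius estimate.

# Cross-check of Tau Geminorum's quoted parameters using the
# Stefan-Boltzmann relation: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.
# R/Rsun = 30 and T = 4,583 K are taken from the article text;
# Tsun = 5772 K (the Sun's effective temperature) is an assumed constant.

R_RATIO = 30.0        # stellar radius in solar radii (from the article)
T_STAR = 4583.0       # effective temperature in kelvin (from the article)
T_SUN = 5772.0        # solar effective temperature in kelvin (assumption)

luminosity_ratio = R_RATIO ** 2 * (T_STAR / T_SUN) ** 4
print(f"Estimated luminosity: {luminosity_ratio:.0f} L_sun")  # prints roughly 358

The result, about 358 solar luminosities, agrees with the article's figure of 364 to within the rounding of the radius estimate.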
Tau Geminorum
Astronomy
295
809,774
https://en.wikipedia.org/wiki/Torquetum
The torquetum or turquet is a medieval astronomical instrument designed by persons unknown to take and convert measurements made in three sets of coordinates: horizon, equatorial, and ecliptic. It is characterised by R. P. Lorch as a combination of Ptolemy's astrolabon () and the plane astrolabe. In a sense, the torquetum is an analog computer. Invention The origins of the torquetum are unclear. Its invention has been credited to multiple figures, including Jabir ibn Aflah, Bernard of Verdun and Franco of Poland. Jabir ibn Aflah of Al-Andalus in the early 12th century has been assumed by several historians to be the inventor of the torquetum, based on a similar instrument he described in his Islah Almajisti. While his device is similar in function, it has not been identified as a torquetum, though evidence suggests it inspired the torquetum. The earliest explicit accounts of the torquetum appear in the 13th-century writings of Bernard of Verdun and Franco of Poland. Franco of Poland's work was published in 1284; however, Bernard of Verdun's work does not contain a date. Therefore, it is impossible to know which work was written first. Franco's work was more widely known and is credited with the distribution of knowledge about the torquetum. The instrument was first created sometime in the 12th or 13th century. However, the only surviving examples of the torquetum date from the 16th century. In the middle of the 16th century, the torquetum underwent numerous structural changes to the original design. The most important change was made by the instrument-maker Erasmus Habermel. His alteration allowed astronomers to make observations on all three of the scales. A torquetum can be seen in the famous portrait The Ambassadors (1533) by Hans Holbein the Younger. It is placed on the right side of the table, next to and above the elbow of the ambassador clad in a long brown coat or robe. The painting shows much of the detail of the inscriptions on the disk and half disk, which make up the top of this particular kind of torquetum. A 14th-century instrument, the rectangulus, was invented by Richard of Wallingford. This carried out the same task as the torquetum, but was calibrated with linear scales, read by plumb lines. This simplified the spherical trigonometry by resolving the polar measurements directly into their Cartesian components. Notable historic uses Following the conception of the torquetum, the device was put to many of the following uses. The astronomer Peter of Limoges used this device for his observation of what is known today as Halley's Comet at the turn of the 14th century. In the early 1300s, John of Murs mentioned the torquetum in his defence "of the reliability of observational astronomy", thus further solidifying its practicality and viability in ancient astronomy. Additionally, Johannes Schoner built a torquetum model for his own personal use in the observation of Halley's Comet in the 1500s. The best-documented account of the torquetum was made by Peter Apian in 1532. Peter Apian was a German humanist, specializing in astronomy, mathematics, and cartography. In his book Astronomicum Caesareum (1540), Apian gives a description of the torquetum near the end of the second part. He also details how the device is used. Apian explains that the torquetum was used for astronomical observations and how the description of the instrument was used as a basis for common astronomical instruments.
He also notes the manufacturing process of the instrument and the use of the torquetum for astronomical measurements. Components The torquetum is a complex medieval analog computer that measures three sets of astronomical coordinates: horizon, equatorial, and ecliptic. One of the defining attributes of the torquetum is its ability to interconvert between these three sets of coordinates without the use of calculations, as well as to demonstrate the relationships between them. However, it is a device that requires a thorough understanding of the components and how they work together to make relative positional measurements of certain celestial objects. The anatomy of the torquetum involves many different components, which can be grouped into subdivisions of the torquetum structure: the base, the midframe, and the upper frame. The base starts with the tabula orizontis, which is the bottommost rectangular piece in contact with the ground; this component represents the horizon of the Earth, relative to the point of measurement. Hinged to the tabula orizontis is a similarly shaped component, the tabula equinoctialis, which represents the latitude of the Earth. This piece can rotate up to 90 degrees, coinciding with the latitudinal lines of the Earth from the equator to the poles. This angle of rotation is created by the stylus, an arm mechanism that pins into slotted holes in the tabula orizontis. The midframe of the torquetum consists of a free-spinning disk (unnamed) that can be locked into place, and the tabula orbis signorum, directly hinged to it above. The angle between these two pieces is defined by the basilica, a solid stand piece, which is used to set the draft angle at either 0 degrees (where the basilica is removed) or 23.5 degrees, representing the offset of the axis of rotation of the Earth. Whether or not the basilica is included depends on whether the point of measurement lies below or above the tropical latitudinal lines. Inscribed on the tabula equinoctialis along, although separate from, the outer perimeter of the bottom disk is a 24-hour circle, which is used to measure the angle between the longitudinal line facing the poles and the line to the object being measured. Lastly, the upper frame is made up of the crista, the semis, and the perpendiculum. The base of the crista is joined to another free-spinning disk directly above the tabula orbis signorum. Similarly, on the outer edge of the tabula orbis signorum is a zodiacal calendar and degree scale, with each of the 12 signs divided amongst it. This scale measures the zodiacal sector of the sky in which the object being measured lies. The crista itself is a circular piece that corresponds with the meridian of the celestial sphere; it has four quadrants inscribed along the edges, each starting at 0 degrees along the horizontal and reaching 90 degrees along the vertical. Adjacent to, and locked with, the crista at a 23.5-degree angle is the semis, which is a half-circle composed of two quadrants starting at 0 degrees along the vertical (relative to the 23.5-degree placement) and 90 degrees at the horizontal. Finally, the last major component is the perpendiculum, a free-hanging pendulum which measures the angle between the radial line of the Earth and the measured object using the semis. Parts and configurations The base of the instrument represents the horizon and is built on a hinge, and a part known as the stylus holds the instrument up to the viewer's complementary latitude.
This represents the celestial equator, and the angle varies depending on where the viewer is located on Earth. The several plates and circles that make up the upper portion of the instrument represent the celestial sphere. These parts are built on top of the base and above the basilica, which rotates on a pin to represent the axis of the Earth. The zodiac calendar is inscribed on the tabula orbis signorum; this is part of the mechanical aspects of the instrument that take away the tedious calculations required in previous instruments. The versatility of the torquetum can be seen in its three possible configurations for taking measurements. The first configuration lays the instrument flat on a table with no angles set within the instrument. This configuration gives the coordinates of celestial bodies as related to the horizon. The basilica is set so that the 0-degree mark faces north. The user can now measure the altitude of the target celestial body as well as use the base as a compass for viewing the possible paths it travels. The second configuration uses the stylus to elevate the base set at co-latitude of 90 degrees. The position of the celestial bodies can now be measured in hours, minutes, and seconds using the inscribed clock on the almuri. This helps give the right ascension and declination coordinates of the celestial bodies as they travel through space. The zero point for right ascension is set to the vernal equinox, while declination is measured from the equator; this would put the North Pole at the 90-degree point. The third and most commonly seen configuration of the torquetum uses all its assets to make measurements. The upper portion is now set at an angle equal to the obliquity of the ecliptic, which allows the instrument to give ecliptic coordinates. This measures the celestial bodies on celestial latitude and longitude scales, which allow for greater precision and accuracy in making measurements. These three differing configurations allowed for added convenience in taking readings and made once tedious and complicated measuring more streamlined and simple. Further reading Astrolabe Jabir ibn Aflah List of astronomical instruments Notes and references Ralf Kern: Wissenschaftliche Instrumente in ihrer Zeit. Vom 15. – 19. Jahrhundert. Verlag der Buchhandlung Walther König 2010, External links Instructions for the construction of a Torquetum Sidereal pointer – to determine RA/DEC. Navigational equipment Historical scientific instruments Astronomical instruments Arab inventions
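The conversion that the third configuration performs mechanically, tilting the equatorial frame by the obliquity of the ecliptic, amounts to a single rotation about the direction of the vernal equinox. The Python sketch below applies that rotation to turn right ascension and declination into ecliptic longitude and latitude; it is offered only as an illustration of the underlying spherical geometry, not as a description of any historical procedure, and the modern obliquity value and the sample coordinates are assumptions.

import math

OBLIQUITY_DEG = 23.44   # obliquity of the ecliptic (modern value, assumed)

def equatorial_to_ecliptic(ra_deg, dec_deg, eps_deg=OBLIQUITY_DEG):
    """Rotate equatorial coordinates (right ascension, declination)
    about the vernal-equinox axis to obtain ecliptic longitude and latitude."""
    ra, dec, eps = map(math.radians, (ra_deg, dec_deg, eps_deg))
    sin_beta = (math.sin(dec) * math.cos(eps)
                - math.cos(dec) * math.sin(eps) * math.sin(ra))
    beta = math.asin(sin_beta)
    y = math.sin(ra) * math.cos(eps) + math.tan(dec) * math.sin(eps)
    x = math.cos(ra)
    lam = math.atan2(y, x) % (2 * math.pi)
    return math.degrees(lam), math.degrees(beta)

# Example with made-up coordinates: right ascension 100 degrees, declination +20 degrees.
print(equatorial_to_ecliptic(100.0, 20.0))

With these assumed inputs, a body at right ascension 100 degrees and declination +20 degrees comes out near ecliptic longitude 99 degrees and latitude -3 degrees, the kind of reading the third configuration would give directly on its latitude and longitude scales.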
Torquetum
Astronomy
1,990
16,384,764
https://en.wikipedia.org/wiki/Busou%20Shinki
Busou Shinki is a Japanese media mix franchise from Konami Digital Entertainment, first launched in Japan in 2006 with a line of action figures followed by a companion online game. The franchise encompasses various manga, anime, novels, video games, and more. The online game was shut down in 2011, and the original toy line was discontinued in 2012. A revival of the series was teased in December 2017 and later revealed to be centered around a smartphone game, but the game was still in development hell as of February 2020. Action Figures and Model Kits Original Line and MMS The action figure line was launched in Japan in September 2006. Many were based on character designs by prolific Japanese artists. A few of the figures have been released for distribution outside Japan. Busou Shinki action figures are presented as 1:1 scale, drawing from a fictional world featuring action figure-sized androids. The various media all take place in this same setting, though in different time periods. Busou Shinki are feminine androids with stylized body armor and/or mechanical parts (such as the mermaid-themed Ianeira having a mechanical fish tail), but do have considerable variation in aesthetic between models that reflects the artistic license given to the different designers. Due to the setting, joints and screws in the action figures are considered to be part of the designs, and are frequently depicted in art, video games, and other media, though they are sometimes omitted or less significantly depicted such as in the TV anime. All of the figures use a common 'MMS' (Multi Moveable System) body designed by Masaki Asai. MMS figures have multiple highly articulated joints, which give them a wide range of possible poses, including a special swinging leg joint that allows for near-180 degree vertical articulation on legs. Additionally, multiple body parts are interchangeable, allowing a wide variety of customization without tools. There are three iterations of MMS (1st, 2nd, and 3rd), with 3rd coming in two body types (Short and Tall) to allow for different proportions depending on the character. Busou Shinki only uses MMS 1st and MMS 3rd Short/Tall: MMS 2nd was only used for action figures for other IPs such as Beatmania and Gurren Lagann. The series uses a 3.3/4mm standard for parts (both body parts and equipment) that allow them to be connected to other parts. This ensures compatibility throughout the line, but deviates from the 3mm standard used by most other Japanese lines, meaning that they are only compatible with each other. Busou Shinki product packages come in several varieties such as full sets, EX sets, Light Armor sets, and bodies. Full sets come with a unique painted MMS body with head and a full set of equipment. EX sets only include a head and a small assortment of equipment, with no MMS body. Light Armor sets are complete sets of unique MMS with head and equipment but with a significantly smaller amount of equipment and accessories and a smaller stand compared to regular sets. Bodies are sold in blister packages that only contain an MMS body with no equipment. The Arnval Mk 2 and Strarf Mk 2 also had Full Arms Package releases, which had more weapons and equipment in addition to the original full set releases. There were also multiple exclusive repaint versions only available from Dengeki Hobby, Konami Style, or events.
Konami also released action figures for various other IPs such as Sky Girls, Otomedius, Beatmania, Gurren Lagann, and using MMS bodies that are compatible with the Busou Shinki line: These were branded under the MMS label, but not the Busou Shinki label. A Hayate no Gotoku collaboration figure that came with a limited edition version of the Hayate no Gotoku game Nightmare Paradise, however, was under the Busou Shinki label as it included Busou Shinki equipment (repaints of the Valona equipment). List of Action Figure Releases Wave 1 Japanese Release Date: 7 September 2006 US Release Date: 18 April 2007 Arnval (アーンヴァル, Ānvaru), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf (ストラーフ, Sutorāfu), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 2 Japanese Release Date: 28 September 2006 US Release Date: 22 March 2007 Howling (ハウリン, Haurin), Dog Type, Full SetCharacter Designer: BLADE Maochao (マオチャオ, Maochao), Cat Type, Full SetCharacter Designer: BLADE Waffebunny (ヴァッフェバニー, Vaffebanī), Rabbit Type, EX SetCharacter Designer: Tetsurō Kasahara (カサハラテツロー) Wave 3 Japanese Release Date: 7 December 2006 US Release Date: 22 March 2007 (Note, this release consisted of Benio only) Xiphos (サイフォス, Saifosu), Knight Type, Full SetCharacter Designer: Rokurō Shinofusa (篠房六郎) Benio (紅緒, Benio), Samurai Type, Full SetCharacter Designer: Rokurō Shinofusa (篠房六郎) Tsugaru (ツガル, Tsugaru), Santa Claus Type, EX SetCharacter Designer: Goli Wave 4 Japanese Release Date: 22 February 2007 Zyrdarya (ジルダリア, Jirudaria), Flower Type, Full SetCharacter Designer: Okama Juvisy (ジュビジー, Jubijī), Seed Type, Full SetCharacter Designer: Okama Fort Bragg (フォートブラッグ, Fōto Buraggu), Battery Type, EX SetCharacter Designer: Takayuki Yanase (柳瀬敬之) Wave 5 Japanese Release Date: 31 May 2007 Eukrante (エウクランテ, Eukurante), Seiren Type, Full SetCharacter Designer: Ryōta Magaki (間垣亮太) Ianeira (イーアネイラ, ĪANEIRA), Mermaid Type, Full SetCharacter Designer: Ryōta Magaki (間垣亮太) Waffedolphin (ヴァッフェドルフィン, VAFFEDORUFIN), Dolphin Type, EX SetCharacter Designer: Tetsurō Kasahara (カサハラテッロー) Wave 6 Japanese Release Date: 30 August 2007 Tigris (ティグリース, Tigurīsu), Tiger Type, Full SetCharacter Designers: Eiichi Shimizu (清水栄一), Tomohiro Shimoguchi (下口智裕) Vitulus (ウィトゥルース, Witurūsu), Calf Type, Full SetCharacter Designers: Eiichi Shimizu (清水栄一), Tomohiro Shimoguchi (下口智裕) Grapprap (グラップラップ, Gurappurappu), Builder Type, EX SetCharacter Designer: Eisaku Kitō (鬼頭栄作) Wave 7 Japanese Release Date: 29 November 2007 ACH (アーク, Āku), High Speed Trike Type, Full SetCharacter Designer: Choco YDA (イーダ, Īda), High Maneuver Trike Type, Full SetCharacter Designer: Choco Schmetterling (シュメッターリング, Shumettāringu), Butterfly Type, EX SetCharacter Designer: Chibisuke Machine (ちびすけマシーン) Wave 8 Japanese Release Date: 5 April 2008 Murmeltier (ムルメルティア, Murumerutia), Panzer Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Asuka (飛鳥, Asuka), Fighter Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Zelnogrard (ゼルノグラード, Zerunogurādo), Firearms Type, EX Set+BodyCharacter Designer: Takayuki Yanase (柳瀬敬之) Wave 9 Japanese Release Date: 10 July 2008 Lançamento (ランサメント, Ransamento), Rhinoceras Beetle Type, Full SetCharacter Designer: Tanimeso (たにめそ) Espadia (エスパディア, Esupadia), Stag Beetle Type, Full SetCharacter Designer: Tanimeso (たにめそ) Wave 10 Japanese Release Date: 20 November 2008 Graffias (グラフィオス, Gurafiosu), Scorpion Type, Full SetCharacter Designer: Ryōta Magaki (間垣亮太) Vespelio (ウェスペリオー, Wesuperiō), Bat Type, Full 
SetCharacter Designer: Ryōta Magaki (間垣亮太) Wave 1 Renewal Version Japanese Release Date: 4 December 2008 Arnval Tranche 2 (アーンヴァル トランシェ2, Ānvaru Toranshe 2), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf bis (ストラーフ bis, Sutorāfu bis), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 11 Japanese Release Date: 27 March 2010 Altlene (アルトレーネ), Valkyrie Type, Full SetCharacter Designer: Taraku Uon (羽音たらく), Armament Redesigner: Takayuki Yanase (柳瀬敬之), Concept and Original Armament Desighner: Kem by Bokusin-Contest Grand Prix Altines (アルトアイネス), Valkyrie Type, Full SetLE Konamistyle Japanese and Dengeki ExclusiveCharacter Designer: Taraku Uon (羽音たらく),Armament Redesigner: Takayuki Yanase (柳瀬敬之), Concept and Original Armament Desighner: Kem by Bokusin-Contest Grand Prix Wave 12 Japanese Release Date: 30 September 2010 Baby Razz (ベイビーラズ, Beibī Razu), Electric Guitar Type, Full SetCharacter Designer: Choco Sharatang (紗羅檀, Sharatan), Violin Type, Full SetCharacter Designer: Choco Wave 13 Japanese Release Date: 28 October 2010 Gabrine (ガブリーヌ, Gaburīnu), Hellhound Type, Full SetCharacter Designer: Yoshitsune Izuna (いずなよしつね) Renge (蓮華, Renge), Ninetailed Fox Type, Full SetCharacter Designer: Yoshitsune Izuna (いずなよしつね) Wave 14 Japanese Release Date: 16 December 2010 Artille (アーティル, Ātiru), Lynx Type, Full SetCharacter Designer: Kazuhiko Kakoi (かこいかずひこ) Raptias (ラプティアス, Raputiasu), Eagle Type, Full SetCharacter Designer: Kazuhiko Kakoi (かこいかずひこ) Wave 15 Japanese Release Date: 27 January 2011 Maryceles (マリーセレス, Marīseresu), Tentacles Type, Full SetCharacter Designer: Niθ Proxima (プロキシマ, Purokishima), Centaurus Type, Full SetCharacter Designer: Niθ Wave 16 Japanese Release Date: 24 February 2011 Oorbellen (オールベルン, Ōruberun), Fencer-Pearl Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Zielbellen (ジールベルン, Jīruberun), Fencer-Obsidian Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave 17 Japanese Release Date: 17 March 2011 Arnval Mk.2 Tempesta (アーンヴァルMk.2 テンペスタ), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 Lavina (ストラーフMk.2 ラヴィーナ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Wave18 Japanese Release Date: 17 December 2011 Vervietta (ヴェルヴィエッタ), Vicviper Type, Full Lirbiete (リルビエート), Vicviper Type, FullCharacter Designer: Mika Akitaka (明貴美加) Wave19 Japanese Release Date: 23 February 2012 Fubuki type 2 (フブキ弐型), Ninja Type, Full Set Mizuki Type 2 (ミズキ弐型), Ninja Type, Full SetCharacter Designer: Humikane Shimada (島田フミカネ) Wave 20 Japanese Release Date: 15 March 2012 Arnval Mk.2 Tempesta Full Arms Package (アーンヴァルMk.2 テンペスタ フルアームズパッケージ), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 Lavina Full Arms Package (ストラーフMk.2 ラヴィーナ フルアームズパッケージ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Light Armor Wave 1 Japanese Release Date: 4 October 2008 Valona (ヴァローナ, Varōna), Succubus Type, Light Armour Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Werkstra (ウェルクストラ, Werukusutora), Commando Angel Type, Light Armour Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Light Armor Wave 2 Japanese Release Date: 30 October 2008 Bright Feather (ブライトフェザー, Buraitofezā), Nurse Type, Light Armour Full SetCharacter Designer: Mercy Rabbit (マーシーラビット) Harmony Grace (ハーモニーグレイス, Hāmonī Gureisu), Sister (nun) Type, Light Armour Full SetCharacter Designer: Mercy Rabbit (マーシーラビット) Light Armor Wave 3 Japanese Release Date: 29 February 2009 Partio (パーティオ, Pātio), 
Ferret Type, Light Armor Full SetCharacter Designer: BLADE Pomock (ポモック, Pomokku), Squirrel Type, Light Armor Full SetCharacter Designer: BLADE Light Armor Wave 4 Japanese Release Date: 25 February 2010 Kohiru (こひる, Kohiru), Chopsticks Type, Light Armor Full SetCharacter Designer: Dogmask Merienda (メリエンダ, Merienda ), Spoon Type, Light Armor Full SetCharacter Designer: Dogmask Special Releases Japanese Release Date: 26 December 2008 Fubuki (フブキ, Fubuki), Ninja Type, Full SetCharacter Designer: nunoLE Konamistyle Japanese Exclusive Mizuki (ミズキ, Mizuki), Ninja Type, Full SetCharacter Designer: nunoLE Konamistyle Japanese Exclusive Japanese Release Date: 26 March 2009 Nagi (ナギ, Nagi), Ojousama Type, Light Armor Full SetCharacter Designer: Kenjiro HataOnly available from a special release with the Hayate no Gotoku PSP game Nightmare Paradise Konami Style Japanese exclusive limited edition.Character Redesigner: Fumikane Shimada (島田フミカネ), Original Character Designer: Kenjirou Hata (畑健二郎) Japanese Release Date: 15 July 2010 Arnval Mk.2 (アーンヴァルMk.2), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 (ストラーフMk.2 ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Only available from a special release with the Busou Shinki Battle Masters PSP game Konami Style Japanese exclusive limited edition. Japanese Release Date: 22 September 2011 Arnval Mk.2 Full Arms Package (アーンヴァルMk.2 フルアームズパッケージ), Angel Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Strarf Mk.2 Full Arms Package (ストラーフMk.2 フルアームズパッケージ), Devil Type, Full SetCharacter Designer: Fumikane Shimada (島田フミカネ) Swimsuit Body for Arnval MK.2 Swimsuit Body for Strarf MK.2 Only available from a special release with the Busou Shinki Battle Masters Mk.2 PSP game Konami Style Japanese exclusive limited edition Busou Shinki Variants Several limited edition versions of the Busou Shinki figures have also been released. These variants sport alternate color schemes and additional parts. Dengeki Exclusive Devil Type, Strarf Dengeki Exclusive Angel Type, Arnval Dengeki Exclusive Cat Type, Maochao Dengeki Exclusive Dog Type, Howling Wonder Festival 2008 Seiren Type, Eukrante Wonder Festival 2008 Mermaid Type, Ianeira Konami Style Exclusive Blue Santa Claus Type, Tsugaru Konami Style / Chara Hobby 2008 Prototype Squirrel Type, Pomock Konami Style / Chara Hobby 2008 Prototype Ferret Type, Partio Dengeki Hobby Magazine ed. High Speed Trike Type, ACH Stradale Dengeki Hobby Magazine ed. 
High Maneuver Trike Type, YDA Stradale Konami Style Exclusive Angel Type, Arnval Tranche2 Konami Style Exclusive Devil Type, Strarf Bis Konami Style Exclusive Ninja Type, Mizuki Konami Style Exclusive Commando Angel Type, Werkstra Konami Style Exclusive Succubus Type, Valona Konami Style Exclusive Panzer Type, Murmeltier Konami Style Exclusive Fighter Type, Asuka Konami Style Exclusive Valkyrie Type, Altlene Viola Konami Style Exclusive Valkyrie Type, Altines Rosa Konami Style Exclusive Fencer-Garnet Type, Oorbellen Konami Style Exclusive Fencer-Sapphire Type, Zielbellen Konami Style Exclusive Fencer-Moonstone Type, Oorbellen Lunaria Konami Style Exclusive Fencer-Amethyst Type, Zielbellen Konami Style Exclusive Lynx Type, Artille Full-Barrel Konami Style Exclusive Eagle Type, Raptias Air-Dominance Konami Style Exclusive Battery Type, Fort Bragg Dusk Konami Style Exclusive Firearms Type, Zelnogrard Belik Konami Style Exclusive Tentacles Type, Maryceles Lemuria Konami Style Exclusive Centaurus Type, Proxima Spinel MMS Naked Body Releases Exclusively sold on the Konami Style Japan page, these are unpainted, featureless MMS Figures meant for use with EX sets or for customization. They come in a variety of colors and shades of skintone intended to match other MMS figures. The MMS Naked bodies are available, like the Busou Shinki figures themselves, in three different body archetypes: MMS 1st, MMS 3rd (small) and MMS 3rd (tall). Although similar in form and construction, not all body parts are compatible among them. MMS 1st Naked White Naked Black Naked Flesh ver. 1 Naked Flesh ver. 1 - Gym Uniform Wine Red Naked Flesh ver. 2 Naked Flesh ver. 2 - Gym Uniform Navy Blue Naked Flesh ver. 2 - School Swimsuit Navy Blue Type Naked Flesh ver. 2 - School Swimsuit White Type Naked Flesh ver. 3 MMS 3rd (small) Naked White (small) Naked Black (small) Naked Flesh ver. 2 (small) Naked Flesh ver. 2 (small) - School Swimsuit White Type Naked Flesh ver. 4 (small) Naked Flesh ver. 4 (small) - School Swimsuit Navy Blue Type Naked Flesh ver. 5 (small) MMS 3rd (tall) Naked White (tall) Naked Black (tall) Naked Flesh ver. 2 (tall) Naked Flesh ver. 2 (tall) - School Swimsuit White Type Naked Flesh ver. 4 (tall) Naked Flesh ver. 4 (tall) - School Swimsuit Navy Blue Type Naked Flesh ver. 5 (tall) 2016 Reproductions Though the original line was discontinued in 2012, purchasers of the 2015 anime Blu-Ray box set received a serial code allowing them to purchase limited run reproductions of the MMS bodies of the four main Shinki from the anime (Arnval mk. 2, Strarf mk. 2, Altines, and Altlene). These were reproductions of the MMS bodies only, and did not include the equipment or most of the accessories from the original releases, and came in entirely new packaging. The reproductions were priced at 8000 yen per MMS, or 25,000 yen for a set of all four. The reproductions were popular enough that additional batches had to be produced, alongside additional Blu-Ray box sets. Though originally announced for an April 2016 release, the reproduction set was delayed to December 2016. Megami Device Collaboration Model Kits After the 2017 revival, the series has been getting collaboration model kit releases from 's line, which many former Busou Shinki key staff such as series/MMS creator , former series producer Toriyama Toriwo, and designers Fumikane Shimada and are involved with. 
The first revival shinki, Edelweiss, was released in January 2019 as part of a tie-in with the upcoming smartphone game Busou Shinki R (then unnamed), but the game was further delayed. The Edelweiss kit thus uses the generic Busou Shinki series logo instead of the Busou Shinki R logo in its branding, while the card from the Battle Conductor arcade game, which was released after the Busou Shinki R logo was revealed, uses the R logo. Releases of Arnval and Strarf as Megami Device collaboration model kits were also announced in 2015, predating the revival announcement, but as of December 2020 no release date had been announced yet. The kits are not based on the old MMS design, and instead use the new Machinica standard designed by Asai. The Megami Device line uses the Japanese model kit industry standard of 3mm joints, meaning that they are by default incompatible with the original Busou Shinki line, but Kotobukiya has released joint adapters that allow one to establish compatibility between the 3.0 and 3.3mm standards. It was initially announced in early 2018 that other new designs from the revival would get Megami Device model kits, but no progress was made on them, and with Asai distancing himself from the project in 2020 it is unclear if they will be made. Toriyama is still attached to the mobile game project and its kits, however, and a prototype body for a second collaboration kit with character design by BLADE was shown at Wonder Festival Summer 2018, but as of 2024 this kit has not been released. BLADE was later commissioned for similar but different designs for Megami Device, unrelated to the Busou Shinki brand, which were announced in 2023 and had kits released in 2024. List of Megami Device Collaboration Model Kit Releases Megami Device Japanese Release Date: 25 January 2019 Edelweiss (エーデルワイス, Ēderuwaisu), Jaeger Type (猟兵型). Character Designer: Fumikane Shimada (島田フミカネ) Japanese Release Date: 25 November 2022 Arnval (アーンヴァル, Ānvaru), Angel Type. Character Designer: Fumikane Shimada Japanese Release Date: 24 May 2023 Strarf (ストラーフ, Sutorāfu), Devil Type. Character Designer: Fumikane Shimada Video games Busou Shinki Battle Rondo On 23 April 2007 Konami released Battle Rondo. Battle Rondo was a free multiplayer online raising sim set in the fictional Busou Shinki universe. Players could unlock in-game versions of the figures, including their armor and weapons, and other gear, by inputting codes that came with each Busou Shinki figure, or through micropayments. The game consisted primarily of automated one-on-one battles with NPC or player-owned Shinki, and the main objective of the game was to have Shinki participate in battles while maintaining high win ratios in order to raise their ranks. The game also had time-limited event quests with their own storylines. The game used a battle system which had Shinki fight automatically, with the player "training" the AI to fight more effectively through feedback after each match. The game would also output battle logs as text files in which the reasons for actions taken in battle would be detailed, allowing the player to give more accurate feedback to the Shinki. Shinki personalities (influencing actions they take in battle and how they respond to feedback) and stats were affected by the initial setup, in which the player selected three "CSC" core crystals. CSCs could not be changed without resetting a Shinki entirely, which would reset them to level 1 and, lore-wise, erase their memories.
The game was discontinued on 31 October 2011, and the official web portal was closed down. Busou Shinki Battle Masters Busou Shinki Battle Masters was developed by Konami for the PlayStation Portable and released on 15 July 2010. A sequel/updated version, also for the PSP, Busou Shinki Battle Masters Mk.2, was released on 22 September 2011. Both games had limited editions from Konami's online store Konami Style which included exclusive limited-release action figures. Both were conventional action games, with the player taking direct control of shinki (unlike Battle Rondo). The in-world lore justification for this is that Battle Masters takes place in 2040 as opposed to Battle Rondo's 2036, and that taking control of shinki is made possible by new virtual reality technology. Busou Shinki Battle Communication Busou Shinki: Battle Communication was a social game developed by Mobage for feature phones that was launched on 31 October 2010. The service was discontinued on 22 May 2012. Busou Shinki Armored Princess Battle Conductor Busou Shinki: Armored Princess Battle Conductor is an arcade game with four-player online battle royale gameplay, developed by Konami and released on 24 December 2020, in which players take control of teams of three shinki and compete to collect the greatest number of gems in a match. The game also makes use of a holographic display, and players save their progress through use of a Konami e-Amusement IC pass and by outputting shinki as physical trading cards via a Card Connect machine. The appearance of Edelweiss in the game was promoted as being a crossover/collaboration with Busou Shinki R, even though Busou Shinki R has not been released yet. The game has also had collaborations with other Konami IPs such as the Bemani series, Quiz Magic Academy series, Tokimeki Memorial series, Sky Girls series, and LovePlus series. Busou Shinki R (Tentative Title) Busou Shinki R was initially teased with no title in December 2017 before being officially announced as a smartphone game in February 2020. No release date has been revealed yet, and the title is tentative. Bibliography Busou Shinki 2036 is a manga series by BLADE. The series began its serialization in Dengeki Hobby Magazine in June 2007, with the first tankobon volume published under the Dengeki Comics label in 2008. The fifth and last volume was published in March 2013. Busou Shinki Zero A different manga series by Yuji Ihara that was also published under the Dengeki Comics label. Busou Shinki Always Together A novel by Hibiki Yu, published by Konami Novels. Gagaga Bunko Novel Series A series of novels based on the franchise by Kuga Buncho, published by Gagaga Bunko. Other manga and novels Busou Shinki: Forget-me-not was a manga by Wasaba that was serialized on Konami's feature phone portal Shukan Konami from 20 April 2007 to 26 December 2008, with 64 chapters. No books were ever released. Busou Shinki Light! is a manga by BLADE that was serialized in the magazine Figure Maniacs Otome-gumi. It did not get its own releases, but was included in volumes 2-4 of Busou Shinki 2036, which is also by BLADE. Hibusou Shinki is a webcomic by Karashiichi that ran on the official Busou Shinki website from 2008 to 2010. No books were ever released, but its first appearance, later relabelled "episode 0" when released on the website, was first published in the mook Busou Shinki Master's Book.
Other books Several other books and mooks related to the franchise have also been published by Kadokawa and Konami Digital Entertainment. Radio shows Busou Shinki Radio Rondo An internet radio show hosted by Kana Asumi and Eri Kitamura to promote and discuss Battle Rondo, broadcast weekly on i-revo and Onsen.ag from 26 April 2007 to 25 October 2007 (episodes on Onsen were released one week after i-Revo). Special additional recordings were also included on the Battle Rondo soundtrack and on the Character Song & Special Radio Rondo albums. Recordings of the radio show were compiled and released on CD in 2008. Busou Shinki Master no tame no Radio desu An internet radio show hosted by Kana Asumi and Minori Chihara to promote and discuss the TV anime series, broadcast on Onsen.ag from 24 September 2012 to 1 October 2013. Episodes were released weekly up to episode 27, and then once every two weeks after. A special episode was released in 2015 to coincide with the TV series Blu-ray box release, with another in 2017 for the Blu-ray box re-release. Recordings of the radio show were compiled and released on CD in four volumes from 2012-2014. Discography Character song albums, video game soundtracks, and radio show recordings have been released on CD. Many of the individual tracks from the music CDs (but not the radio show CDs) are also available for sale by download in MP3 file format from online stores such as Amazon and iTunes. Video game soundtracks Character song albums Radio talk show recordings TV anime-related music Anime Busou Shinki has had two anime series, a 2011 OVA and a 2012 13-episode TV series. OVA is an original video animation produced by Kinema Citrus and TNK. The OVA was originally released as DLC for the PSP video game Battle Masters Mk 2, viewable through an in-game menu. The ten installments were later assembled into a 40-minute OVA that had a limited release on DVD and Blu-ray Disc via Konami's Konami Style online shop in Japan. TV series The TV series was broadcast in Japan in 2012. Individual DVD and Blu-Ray volumes were released in 2011-2012, and a Blu-Ray-only box set was released in 2015. Episode 13 of the series was not broadcast on TV and only released on disc. The TV series was licensed for distribution in North America by Sentai Filmworks and began streaming on Anime Network in 2012. Legacy After the discontinuing of the action figure line, key staff such as series/MMS creator Masaki Asai and former series producer Toriwo Toriyama went on to work on the Megami Device line of model kits for Kotobukiya, which has a similar premise and concept and is considered by many as a spiritual successor. Designers Fumikane Shimada and Takayuki Yanase, who had previously worked on Busou Shinki, also worked on designs for the line. The official Megami Device webcomic is also drawn by Karashiichi, who previously did the official Busou Shinki website webcomic Hibusou Shinki, and the comic has returning characters from Hibusou Shinki. As part of a tie-in with the upcoming smartphone game Busou Shinki R, Megami Device has also seen the release of one of the new shinki from the game, Edelweiss, as a collaboration model kit. Releases of Arnval and Strarf as Megami Device collaboration model kits were also announced in 2015. Pyramid Inc., which developed the Battle Masters games, developed the smartphone game Alice Gear Aegis which is also a "mecha girl" genre action game. 
Fumikane Shimada and Takayuki Yanase also worked on designs for the game, and Alice Gear Aegis has also had collaboration model kit releases from Megami Device. Pyramid staff such as president Junichi Kashiwagi are frequently present at Megami Device-related events as well, such as Wonder Festival talk shows. After Konami's Busou Shinki revival was announced in December 2017, Asai announced in early 2018 that he was officially involved with the project at Konami's request, and he worked on the Edelweiss, Arnval, and Strarf model kits for Megami Device as part of this. It was also announced that more new designs from the revival would be getting releases as model kits in the future. But though the Edelweiss kit (released January 2019) was supposed to be released alongside the Busou Shinki R smartphone game, the game ended up in development hell, with the title not announced yet at the time of the Edelweiss' release. Asai reported in February 2020 that development on Busou Shinki R had recently restarted from scratch. This, combined with Konami not keeping him up to date on developments (he did not learn about the Battle Conductor arcade game until seeing announcements on Twitter), resulted in him releasing a statement on his personal blog saying that he no longer considers himself to be part of the project, citing the aforementioned incidents and how he is being kept out of the loop. No work had been done on any collaboration model kits aside from the Edelweiss, Arnval, and Strarf, and so it is unclear if any other revival designs will be released. See also Frame Arms Girl Little Battlers Experience Alice Gear Aegis Arcanadea References External links (archive) Shinki-NET (archive) Battle Masters website Battle Masters Mk 2 website Battle Conductor website 2012 anime television series debuts 2000s toys 2007 video games 2010 video games 2011 video games 2020 video games Japan-exclusive video games Video games developed in Japan Action figures Konami Mecha anime and manga Raising sims Windows games Windows-only games Inactive multiplayer online games PlayStation Portable games PlayStation Portable-only games Arcade video games Konami arcade games Toy robots Kinema Citrus TNK (company) Eight Bit (studio) Dengeki Comics Gagaga Bunko Internet radio
Busou Shinki
Technology
7,410
20,596,852
https://en.wikipedia.org/wiki/KFUPM%20Program%20of%20Industrial%20and%20Systems%20Engineering
The Industrial & Systems Engineering Program offers a Bachelor of Science degree in industrial engineering at the King Fahd University of Petroleum & Minerals (KFUPM) in the Kingdom of Saudi Arabia. With a total of 133 credit hours, the program covers the major areas of industrial engineering, such as operations research, production planning, inventory control, methods engineering, quality control, facility location, manufacturing, and facility layout. History The Industrial & Systems Engineering (ISE) program in the Systems Engineering Department was first introduced in 1984 and was revised in 1996 based on the Accreditation Board for Engineering and Technology (ABET) recommendation after their first visit in 1993. In the 1996 revision, the number of credit hours of the Bachelor of Science (B.Sc.) was reduced from 141 to 133. The program received an ABET accreditation extension in 2010. Program courses The ISE program has a total of 50 credit hours of required ISE courses, consisting of the following: Introduction to I&SE Probability & Statistics Regression for Industrial Engineering Linear Control Systems Numerical Methods Operations Research I Statistical Quality Control Principles of Industrial Costing Engineering Economics Manufacturing Technology Work and Process Improvement Fundamental of Database Systems Seminar Industrial Engineering Design Production Systems Stochastic Systems Simulation Operations Research II Facility Layout and Location Senior Design External links Codes of courses and description Department website Industrial engineering
KFUPM Program of Industrial and Systems Engineering
Engineering
268
36,674,345
https://en.wikipedia.org/wiki/Information%20technology
Information technology (IT) is a set of related fields that encompass computer systems, software, programming languages, data and information processing, and storage. IT forms part of information and communications technology (ICT). An information technology system (IT system) is generally an information system, a communications system, or, more specifically speaking, a computer system — including all hardware, software, and peripheral equipment — operated by a limited group of IT users, and an IT project usually refers to the commissioning and implementation of an IT system. IT systems play a vital role in facilitating efficient data management, enhancing communication networks, and supporting organizational processes across various industries. Successful IT projects require meticulous planning and ongoing maintenance to ensure optimal functionality and alignment with organizational objectives. Although humans have been storing, retrieving, manipulating, analysing and communicating information since the earliest writing systems were developed, the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)." Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs. The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several products or services within an economy are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, and e-commerce. Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450 – 1840), electromechanical (1840 – 1940), and electronic (1940 to present). Information technology is a branch of computer science, defined as the study of procedures, structures, and the processing of various types of data. As this field continues to evolve globally, its priority and importance have grown, leading to the introduction of computer science-related courses in K-12 education. History Ideas of computer science were first discussed before the 1950s at the Massachusetts Institute of Technology (MIT) and Harvard University, where researchers had begun to think about computer circuits and numerical calculations. As time went on, the field of information technology and computer science became more complex and was able to handle the processing of more data. Scholarly articles began to be published from different organizations. In the era of early computing, Alan Turing, J. Presper Eckert, and John Mauchly were considered some of the major pioneers of computer technology in the mid-1900s; most of their efforts were focused on designing the first digital computer. Along with that, topics such as artificial intelligence began to be brought up as Turing was beginning to question such technology of the time period. Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick.
The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed. Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. During the Second World War, Colossus, the first electronic digital computer, was developed to decrypt German messages. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring. The first recognizably modern electronic digital stored-program computer was the Manchester Baby, which ran its first program on 21 June 1948. The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison, the first transistorized computer, developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version. Several other breakthroughs in semiconductor technology include the integrated circuit (IC) invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor in 1959, silicon dioxide surface passivation by Carl Frosch and Lincoln Derick in 1955, the first planar silicon dioxide transistors by Frosch and Derick in 1957, the MOSFET demonstration by a Bell Labs team, the planar process by Jean Hoerni in 1959, and the microprocessor invented by Ted Hoff, Federico Faggin, Masatoshi Shima, and Stanley Mazor at Intel in 1971. These important inventions led to the development of the personal computer (PC) in the 1970s, and the emergence of information and communications technology (ICT). By 1984, according to the National Westminster Bank Quarterly Review, the term information technology had been redefined as "The development of cable television was made possible by the convergence of telecommunications and computing technology (…generally known in Britain as information technology)." The term then began to appear in 1990 in documents for the International Organization for Standardization (ISO). Innovations in technology had already revolutionized the world by the twenty-first century as people were able to access different online services. This changed the workforce drastically, as thirty percent of U.S. workers were already in careers in this profession. 136.9 million people were personally connected to the Internet, which was equivalent to 51 million households. Along with the Internet, new types of technology were also being introduced across the globe, which improved efficiency and made things easier. Along with technology revolutionizing society, millions of processes could be done in seconds.
Innovations in communication were also crucial as people began to rely on the computer to communicate through telephone lines and cable. The introduction of email was considered revolutionary as "companies in one part of the world could communicate by e-mail with suppliers and buyers in another part of the world..." Beyond personal use, computers and technology have also revolutionized the marketing industry, resulting in more buyers of products. In 2002, Americans spent more than $28 billion on goods over the Internet alone, while e-commerce a decade later resulted in $289 billion in sales. As computers rapidly become more sophisticated by the day, they see ever wider use, and people have become more reliant on them during the twenty-first century. Data processing Storage Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay-line memory was developed to remove the clutter from radar signals, the first practical application of which was the mercury delay line. The first random-access digital storage device was the Williams tube, which was based on a standard cathode ray tube. However, the information stored in it and in delay-line memory was volatile in that it had to be continuously refreshed, and thus was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer. IBM introduced the first hard disk drive in 1956, as a component of their 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007, almost 94% of the data stored worldwide was held digitally: 52% on hard disks, 28% on optical devices, and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years. Databases Database Management Systems (DMS) emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. An early such system was IBM's Information Management System (IMS), which is still widely deployed more than 50 years later. IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows, and columns. In 1981, the first commercially available relational database management system (RDBMS) was released by Oracle. All DMS consist of components; they allow the data they store to be accessed simultaneously by many users while maintaining its integrity. All databases have one point in common: the structure of the data they contain is defined and stored separately from the data itself, in a database schema. In recent years, the extensible markup language (XML) has become a popular format for data representation.
Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort." As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine- and human-readable. Transmission Data transmission has three aspects: transmission, propagation, and reception. It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels. XML has been increasingly employed as a means of data interchange since the early 2000s, particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP, describing "data-in-transit rather than... data-at-rest". Manipulation Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore's law): machines' application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world's general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world's storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years. Massive amounts of data are stored worldwide every day, but unless they can be analyzed and presented effectively they essentially reside in what have been called data tombs: "data archives that are seldom visited". To address that issue, the field of data mining — "the process of discovering interesting patterns and knowledge from large amounts of data" — emerged in the late 1980s. Services Email Email is the technology and the associated services for sending and receiving electronic messages (called "letters" or "electronic letters") over a distributed (including global) computer network. In terms of the composition of elements and the principle of operation, electronic mail practically repeats the system of regular (paper) mail, borrowing both terms (mail, letter, envelope, attachment, box, delivery, and others) and characteristic features — ease of use, message transmission delays, sufficient reliability and at the same time no guarantee of delivery. The advantages of e-mail are: addresses of the form user_name@domain_name (for example, somebody@example.com) that are easily perceived and remembered by a person; the ability to transfer both plain and formatted text, as well as arbitrary files; independence of servers (in the general case, they address each other directly); sufficiently high reliability of message delivery; ease of use by humans and programs. Disadvantages of e-mail: the presence of such a phenomenon as spam (massive advertising and viral mailings); the theoretical impossibility of guaranteed delivery of a particular letter; possible delays in message delivery (up to several days); limits on the size of one message and on the total size of messages in the mailbox (personal for users). Search system A search system is a software and hardware complex with a web interface that provides the ability to search for information on the Internet. A search engine usually means a site that hosts the interface (front-end) of the system.
The software part of a search system is the search engine proper — a set of programs that provides the search functionality and is usually a trade secret of the developer company. Most search engines look for information on World Wide Web sites, but there are also systems that can look for files on FTP servers, items in online stores, and information on Usenet newsgroups. Improving search is one of the priorities of the modern Internet (see the Deep Web article about the main problems in the work of search engines). Commercial effects Companies in the information technology field are often discussed as a group as the "tech sector" or the "tech industry." These titles can be misleading at times and should not be mistaken for "tech companies", which are generally large-scale, for-profit corporations that sell consumer technology and software. It is also worth noting that from a business perspective, information technology departments are a "cost center" the majority of the time. A cost center is a department or staff which incurs expenses, or "costs", within a company rather than generating profits or revenue streams. Modern businesses rely heavily on technology for their day-to-day operations, so the expenses allocated to cover technology that facilitates business in a more efficient manner are usually seen as "just the cost of doing business." IT departments are allocated funds by senior leadership and must attempt to achieve the desired deliverables while staying within that budget. Government and the private sector might have different funding mechanisms, but the principles are more-or-less the same. This constant pressure to do more with less is an often-overlooked reason for the rapid interest in automation and artificial intelligence, and it is opening the door for automation to take control of at least some minor operations in large companies. Many companies now have IT departments for managing the computers, networks, and other technical areas of their businesses. Companies have also sought to integrate IT with business outcomes and decision-making through a BizOps or business operations department. In a business context, the Information Technology Association of America has defined information technology as "the study, design, development, application, implementation, support, or management of computer-based information systems". The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization's technology life cycle, by which hardware and software are maintained, upgraded, and replaced. Information services Information services is a term somewhat loosely applied to a variety of IT-related services offered by commercial companies, as well as data brokers. Ethics The field of information ethics was established by mathematician Norbert Wiener in the 1940s. Some of the ethical issues associated with the use of information technology include: Breaches of copyright by those downloading files stored without the permission of the copyright holders Employers monitoring their employees' emails and other Internet usage Unsolicited emails Hackers accessing online databases Web sites installing cookies or spyware to monitor a user's online activities, which may be used by data brokers IT projects Research suggests that IT projects in business and public administration can easily become significant in scale.
Work conducted by McKinsey in collaboration with the University of Oxford suggested that half of all large-scale IT projects (those with initial cost estimates of $15 million or more) failed to keep costs within their initial budgets or to finish on time. See also Information and communications technology (ICT) IT infrastructure Outline of information technology Knowledge society Notes References Citations Bibliography Further reading Gitta, Cosmas and South, David (2011). Southern Innovator Magazine Issue 1: Mobile Phones and Information Technology: United Nations Office for South-South Cooperation. Gleick, James (2011). The Information: A History, a Theory, a Flood. New York: Pantheon Books. Shelly, Gary, Cashman, Thomas, Vermaat, Misty, and Walker, Tim. (1999). Discovering Computers 2000: Concepts for a Connected World. Cambridge, Massachusetts: Course Technology. Webster, Frank, and Robins, Kevin. (1986). Information Technology — A Luddite Analysis. Norwood, NJ: Ablex. External links Computers Intellectual capital Mass media technology
Information technology
Technology
3,484
10,907,834
https://en.wikipedia.org/wiki/Mesocarb
Mesocarb, sold under the brand name Sidnocarb or Sydnocarb and known by the developmental code name MLR-1017, is a psychostimulant medication which has been used in the treatment of psychiatric disorders and for a number of other indications in the Soviet Union and Russia. It is currently under development for the treatment of Parkinson's disease and sleep disorders. It is taken by mouth. The drug is a selective dopamine reuptake inhibitor (DRI). It is an unusual and unique DRI, acting as a negative allosteric modulator and non-competitive inhibitor of the dopamine transporter (DAT). Chemically, mesocarb contains amphetamine within its structure but has been modified and extended at the amine with a sydnone imine-containing moiety. Mesocarb was first described by 1971. It was used as a pharmaceutical drug until 2008. In 2021, its nature as a DAT allosteric modulator was reported. As of February 2023, mesocarb was in phase 1 clinical trials for Parkinson's disease. The active enantiomer, armesocarb, is also being developed. Medical uses Mesocarb was originally developed in the Soviet Union in the 1970s for a variety of indications including asthenia, apathy, adynamia, and some clinical aspects of depression and schizophrenia. Mesocarb was used for counteracting the sedative effects of benzodiazepines, increasing workload capacity and cardiovascular function, treatment of attention deficit hyperactivity disorder (ADHD) in children, as a nootropic, and as a drug to enhance resistance to extremely cold temperatures. It has also been reported to have antidepressant and anticonvulsant properties. Available forms Mesocarb was sold in Russia as 5 mg oral tablets under the brand name Sydnocarb. Pharmacology Pharmacodynamics Mesocarb has been found to act as a selective dopamine reuptake inhibitor (DRI) by blocking the actions of the dopamine transporter (DAT), and lacks the dopamine release characteristic of stimulants such as dextroamphetamine. It was the most selective DAT inhibitor amongst an array of other DAT inhibitors to which it was compared and, in 2017, was reported as the most selective DAT inhibitor described to date. The affinities (Ki) of mesocarb at the human monoamine transporters in vitro have been reported to be 8.3 nM for the dopamine transporter (DAT), 1,500 nM for the norepinephrine transporter (NET) (181-fold lower than for the DAT), and >10,000 nM for the serotonin transporter (SERT) (>1,205-fold lower than for the DAT). The inhibitory potencies (IC50) of mesocarb at the human monoamine transporters in vitro have been reported to be 0.49 ± 0.14 μM at the DAT, 34.9 ± 14.08 μM at the NET (71-fold lower than for the DAT), and 494.9 ± 17.00 μM at the SERT (1,010-fold lower than for the DAT). In 2021, it was discovered that mesocarb is not a conventional DRI but acts as a DAT allosteric modulator or non-competitive inhibitor. In accordance with its nature as an atypical DAT blocker, the drug has atypical effects relative to conventional DRIs. As an example, it shows greater antiparkinsonian activity relative to other DRIs in animals. Similarly to other DRIs, mesocarb has been found to possess wakefulness-promoting effects. Pharmacokinetics Hydroxylated metabolites can be detected in urine for up to 10 days after consumption. Mesocarb had erroneously been referred to as a prodrug of amphetamine. However, this was based on older literature that relied on gas chromatography as an analytical method.
Subsequently, with the advent of mass spectrometry, it has been shown that the presence of amphetamine in prior studies was an artifact of the gas chromatography method. More recent studies using mass spectrometry show that negligible levels of amphetamine are released from mesocarb metabolism. Chemistry Mesocarb, also known as 3-(β-phenylisopropyl)-N-phenylcarbamoylsydnonimine, is a substituted phenethylamine and amphetamine and a mesoionic sydnone imine. It retains the amphetamine backbone, except that the amine nitrogen bears an extended imine-containing side chain. Whereas mesocarb (MLR-1017) is a racemic mixture, the enantiopure levorotatory or (R)-enantiomer is known as armesocarb (MLR-1019). Armesocarb is described as the active enantiomer of mesocarb, whereas the (S)- or D-enantiomer is said to be virtually inactive. It is structurally related to feprosidnine (Sydnophen; 3-(α-methylphenylethyl)sydnone imine). Synthesis Feprosidnine (Sydnophen) is converted from the hydrochloride salt (1) into the freebase amine (2). This is then treated with phenylisocyanate (3). History Mesocarb was first described in the scientific literature by 1971. It is said to have been used as a pharmaceutical drug from 1971 until 2008. It was said to have been discontinued by its manufacturer in 2008 for business reasons unrelated to the drug itself. Society and culture Names Mesocarb is the generic name of the drug and its INN. It is also known by the synonym fensidnimine as well as by the brand names Sydnocarb and Synocarb. The drug is additionally known by its developmental code name MLR-1017 (for Parkinson's disease). Status Mesocarb is almost unknown in the western world and is neither used in medicine nor studied scientifically to any great extent outside of Russia and other countries in the former Soviet Union. It has, however, been added to the list of drugs under international control and is a scheduled substance in most countries, despite its multiple therapeutic applications and reported lack of significant abuse potential. Research Parkinson's disease Mesocarb has been under development for the treatment of Parkinson's disease since 2016. As of February 2023, it is in phase 1 clinical trials for this indication. However, no recent development has been reported. Mesocarb's active enantiomer armesocarb is also under development. See also List of Russian drugs References Antidepressants Antiparkinsonian agents Dopamine reuptake inhibitors Drugs in the Soviet Union Experimental drugs Imines Oxadiazoles Russian drugs Stimulants Substituted amphetamines Ureas Wakefulness-promoting agents Withdrawn drugs
Mesocarb
Chemistry
1,517
2,152,318
https://en.wikipedia.org/wiki/Armature%20%28electrical%29
In electrical engineering, the armature is the winding (or set of windings) of an electric machine which carries alternating current. The armature windings conduct AC even on DC machines, due to the commutator action (which periodically reverses current direction) or due to electronic commutation, as in brushless DC motors. The armature can be on either the rotor (rotating part) or the stator (stationary part), depending on the type of electric machine. Shapes of armature used in motors include double-T and triple-T armatures. The armature windings interact with the magnetic field (magnetic flux) in the air-gap; the magnetic field is generated either by permanent magnets, or electromagnets formed by a conducting coil. The armature must carry current, so it is always a conductor or a conductive coil, oriented normal to both the field and to the direction of motion, torque (rotating machine), or force (linear machine). The armature's role is twofold. The first is to carry current across the field, thus creating shaft torque in a rotating machine or force in a linear machine. The second role is to generate an electromotive force (EMF). In the armature, an electromotive force is created by the relative motion of the armature and the field. When the machine or motor is used as a motor, this EMF opposes the armature current, and the armature converts electrical power to mechanical power in the form of torque, and transfers it via the shaft. When the machine is used as a generator, the armature EMF drives the armature current, and the shaft's movement is converted to electrical power. In an induction generator, generated power is drawn from the stator. A growler is used to check the armature for short and open circuits and leakages to ground. Terminology The word armature was first used in its electrical sense, i.e. keeper of a magnet, in mid 19th century. The parts of an alternator or related equipment can be expressed in either mechanical terms or electrical terms. Although distinctly separate these two sets of terminology are frequently used interchangeably or in combinations that include one mechanical term and one electrical term. This may cause confusion when working with compound machines like brushless alternators, or in conversation among people who are accustomed to work with differently configured machinery. In most generators, the field magnet is rotating, and is part of the rotor, while the armature is stationary, and is part of the stator. Both motors and generators can be built either with a stationary armature and a rotating field or a rotating armature and a stationary field. The pole piece of a permanent magnet or electromagnet and the moving, iron part of a solenoid, especially if the latter acts as a switch or relay, may also be referred to as armatures. Armature reaction in a DC machine In a DC machine, two sources of magnetic fluxes are present; 'armature flux' and 'main field flux'. The effect of armature flux on the main field flux is called "armature reaction". The armature reaction changes the distribution of the magnetic field, which affects the operation of the machine. The effects of the armature flux can be offset by adding a compensating winding to the main poles, or in some machines adding intermediate magnetic poles, connected in the armature circuit. Armature reaction is essential in amplidyne rotating amplifiers. Armature reaction drop is the effect of a magnetic field on the distribution of the flux under main poles of a generator. 
Since an armature is wound with coils of wire, a magnetic field is set up in the armature whenever a current flows in the coils. This field is at right angles to the generator field and is called cross magnetization of the armature. The effect of the armature field is to distort the generator field and shift the neutral plane. The neutral plane is the position where the armature windings are moving parallel to the magnetic flux lines, which is why an axis lying in this plane is called the magnetic neutral axis (MNA). This effect is known as armature reaction and is proportional to the current flowing in the armature coils. The geometrical neutral axis (GNA) is the axis that bisects the angle between the centre line of adjacent poles. The magnetic neutral axis (MNA) is the axis drawn perpendicular to the mean direction of the flux passing through the centre of the armature. No e.m.f. is produced in the armature conductors along this axis because they then cut no flux. When there is no current in the armature conductors, the MNA coincides with the GNA. The brushes of a generator must be set in the neutral plane; that is, they must contact segments of the commutator that are connected to armature coils having no induced emf. If the brushes were contacting commutator segments outside the neutral plane, they would short-circuit "live" coils and cause arcing and loss of power. Without armature reaction, the magnetic neutral axis (MNA) would coincide with the geometrical neutral axis (GNA). Armature reaction causes the neutral plane to shift in the direction of rotation, and if the brushes are in the neutral plane at no load, that is, when no armature current is flowing, they will not be in the neutral plane when armature current is flowing. For this reason it is desirable to incorporate a corrective system into the generator design. There are two principal methods by which the effect of armature reaction is overcome. The first method is to shift the position of the brushes so that they are in the neutral plane when the generator is producing its normal load current. In the other method, special field poles, called interpoles, are installed in the generator to counteract the effect of armature reaction. The brush-setting method is satisfactory in installations in which the generator operates under a fairly constant load. If the load varies to a marked degree, the neutral plane will shift proportionately, and the brushes will not be in the correct position at all times. The brush-setting method is the most common means of correcting for armature reaction in small generators (those producing approximately 1,000 W or less). Larger generators require the use of interpoles. Winding circuits Coils of the winding are distributed over the entire surface of the air gap, which may be the rotor or the stator of the machine. In a "lap" winding, there are as many current paths between the brush (or line) connections as there are poles in the field winding. In a "wave" winding, there are only two paths, and there are as many coils in series as half the number of poles. So, for a given rating of machine, a lap winding is more suitable for large currents and low voltages, while a wave winding suits high voltages and low currents. Windings are held in slots in the rotor or armature covered by stator magnets. The exact distribution of the windings and selection of the number of slots per pole of the field greatly influences the design of the machine and its performance, affecting such factors as commutation in a DC machine or the waveform of an AC machine.
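As a rough numerical illustration of the winding-circuit difference, for the simplex case in which a lap winding has one parallel path per pole and a wave winding has two (the armature current and pole count below are assumed figures, not taken from any particular machine):

```python
def parallel_paths(winding, poles):
    """Simplex case: a lap winding has one parallel path per pole, a wave winding has two."""
    return poles if winding == "lap" else 2

armature_current = 400.0  # total armature current in amperes (assumed figure)
poles = 6                 # number of field poles (assumed figure)

for winding in ("lap", "wave"):
    paths = parallel_paths(winding, poles)
    print(f"{winding}: {paths} paths, {armature_current / paths:.0f} A per path")
# lap: 6 paths, 67 A per path
# wave: 2 paths, 200 A per path
```

The two paths of the wave winding must each carry far more of the armature current, which is why lap windings are preferred for high-current, low-voltage machines.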
Winding materials Armature wiring is made from copper or aluminum. Copper armature wiring enhances electrical efficiencies due to its higher electrical conductivity. Aluminum armature wiring is lighter and less expensive than copper. See also Balancing machine Commutator References External links Example Diagram of an Armature Coil and data used to specify armature coil parameters How to Check a Motor Armature for Damaged Windings Electromagnetic components Electric motors
Armature (electrical)
Technology,Engineering
1,634
58,663,888
https://en.wikipedia.org/wiki/High%20Energy%20and%20Particle%20Physics%20Prize
The High Energy and Particle Physics Prize, established in 1989, is awarded every two years by the European Physical Society (EPS) for an outstanding contribution to high energy and particle physics. Recipients Source: 1989 Georges Charpak 1991 Nicola Cabibbo 1993 Martinus Veltman 1995 Paul Söding, Bjørn Wiik, Günter Wolf, Sau Lan Wu 1997 Robert Brout, François Englert, Peter Higgs 1999 Gerard ’t Hooft 2001 Don Perkins 2003 David Gross, David Politzer, Frank Wilczek 2005 Heinrich Wahl and the NA31 Collaboration 2007 Makoto Kobayashi, Toshihide Maskawa 2009 The Gargamelle collaboration 2011 Sheldon Glashow, John Iliopoulos, Luciano Maiani 2013 The ATLAS and CMS collaborations, Michel Della Negra, Peter Jenni, Tejinder Virdee 2015 James D. Bjorken, Guido Altarelli, Yuri Dokshitzer, Lev Lipatov, Giorgio Parisi 2017 Erik Heijne, Robert Klanner, Gerhard Lutz 2019 The CDF and D0 collaborations 2021 Torbjörn Sjöstrand, Bryan Webber 2023 Cecilia Jarlskog and Daya Bay / RENO collaborations See also List of physics awards References Awards of the European Physical Society Physics awards
High Energy and Particle Physics Prize
Technology
242
26,306,022
https://en.wikipedia.org/wiki/A.C.%20Redfield%20Lifetime%20Achievement%20Award
The Lifetime Achievement Award was first presented in 1994 to honor major long-term achievements in the fields of limnology and oceanography, including research, education and service to the community and society. In 2004, the Association for the Sciences of Limnology and Oceanography board renamed the award in honor of Alfred C. Redfield. Recipients Notes The information in the table is according to the "A.C. Redfield Lifetime Achievement Award" webpage of the Association for the Sciences of Limnology and Oceanography unless otherwise specified by additional citations. References External links ASLO Awards and Nominations Awards established in 1994 Science and technology awards Lifetime achievement awards 1994 establishments in the United States
A.C. Redfield Lifetime Achievement Award
Technology
135
56,272,178
https://en.wikipedia.org/wiki/Carbon%20budget
A carbon budget is a concept used in climate policy to help set emissions reduction targets in a fair and effective way. It examines the "maximum amount of cumulative net global anthropogenic carbon dioxide (CO2) emissions that would result in limiting global warming to a given level". It can be expressed relative to the pre-industrial period (the year 1750). In this case, it is the total carbon budget. Or it can be expressed from a recent specified date onwards. In that case it is the remaining carbon budget. A carbon budget that will keep global warming below a specified temperature limit is also called an emissions budget or quota, or allowable emissions. Apart from limiting the global temperature increase, another objective of such an emissions budget can be to limit sea level rise. Scientists combine estimates of various contributing factors to calculate the carbon budget. The estimates take into account the available scientific evidence as well as value judgments or choices. Global carbon budgets can be further sub-divided into national emissions budgets. This can help countries set their own emission goals. Emissions budgets indicate a finite amount of carbon dioxide that can be emitted over time, before resulting in dangerous levels of global warming. The change in global temperature is independent of the source of these emissions, and is largely independent of the timing of these emissions. To translate global carbon budgets to the country level, a set of value judgments has to be made on how to distribute the remaining carbon budget over all the different countries. This should take into account aspects of equity and fairness between countries as well as other methodological choices. There are many differences between nations, such as population size, level of industrialisation, historic emissions, and mitigation capabilities. For this reason, scientists are attempting to allocate global carbon budgets among countries using various principles of equity. Definition The IPCC Sixth Assessment Report defines carbon budget as the following two concepts: "An assessment of carbon cycle sources and sinks on a global level, through the synthesis of evidence for fossil fuel and cement emissions, emissions and removals associated with land use and land-use change, ocean and natural land sources and sinks of carbon dioxide (CO2), and the resulting change in atmospheric CO2 concentration. This is referred to as the global carbon budget."; or "The maximum amount of cumulative net global anthropogenic CO2 emissions that would result in limiting global warming to a given level with a given probability, taking into account the effect of other anthropogenic climate forcers. This is referred to as the total carbon budget when expressed starting from the pre-industrial period, and as the remaining carbon budget when expressed from a recent specified date." Global carbon budgets can be further divided into national emissions budgets, so that countries can set specific climate mitigation goals. An emissions budget may be distinguished from an emissions target, as an emissions target may be internationally or nationally set in accordance with objectives other than a specific global temperature and is commonly applied to the emissions of a single year as well.
Estimations Recent and currently remaining carbon budget Several organisations provide annual updates to the remaining carbon budget, including the Global Carbon Project, the Mercator Research Institute on Global Commons and Climate Change (MCC) and the CONSTRAIN project. In March 2022, before formal publication of the "Global Carbon Budget 2021" preprint, scientists reported, based on Carbon Monitor (CM) data, that after COVID-19-pandemic-caused record-level declines in 2020, global emissions rebounded sharply by 4.8% in 2021, indicating that at the current trajectory, the carbon budget for a ⅔ likelihood of limiting warming to 1.5 °C would be used up within 9.5 years. In April 2022, the peer-reviewed and officially published Global Carbon Budget 2021 concluded that fossil emissions rebounded from pandemic levels by around +4.8% relative to 2020 emissions – returning to 2019 levels. It identifies three major issues for improving reliable accuracy of monitoring, shows that China and India surpassed 2019 levels (by 5.7% and 3.2%) while the EU and the US stayed beneath 2019 levels (by 5.3% and 4.5%), quantifies various changes and trends, for the first time provides models' estimates that are linked to the official country GHG inventories reporting, and suggests that the remaining carbon budget as of 1 January 2022 for a 50% likelihood of limiting global warming to 1.5 °C (albeit a temporary exceedance is to be expected) is 120 GtC (420 GtCO2) – or 11 years of 2021 emissions levels. This does not mean that 11 years likely remain to cut emissions, but rather that if emissions stayed the same, instead of increasing as in 2021, 11 years of constant GHG emissions would be left in the hypothetical scenario that all emissions suddenly ceased in the 12th year. (The 50% likelihood may be describable as a kind of minimum plausible deniability requirement, as lower likelihoods would make the 1.5 °C goal "unlikely".) Moreover, other trackers show (or highlight) different amounts of carbon budget left, such as the MCC, which as of May 2022 shows "7 years 1 month left", and different likelihoods have different carbon budgets: an 83% likelihood would mean 6.6 ±0.1 years left (ending in 2028) according to CM data. In October 2023 a group of researchers updated the carbon budget, including the CO2 emitted in 2020–2022 and new findings about the role of a reduced presence of polluting particles in the atmosphere. They found that, starting from January 2023, humanity can emit 250 GtCO2, or about 6 years of emissions at the current level, for a 50% chance of staying below 1.5 degrees. To reach this target, humanity will need to bring CO2 emissions to zero by the year 2034. To have a 50% chance of staying below 2 degrees, humanity can emit 1220 GtCO2, or about 30 years of emissions at the current level. Carbon budget in gigatonnes and factors The finding of an almost linear relationship between global temperature rise and cumulative carbon dioxide emissions has encouraged the estimation of global emissions budgets in order to remain below dangerous levels of warming. From the pre-industrial period (the year 1750) to 2019, approximately 2,390 gigatonnes of CO2 (Gt CO2) had already been emitted globally. Scientific estimations of the remaining global emissions budgets/quotas differ due to varied methodological approaches and considerations of thresholds. Estimations might not include all amplifying climate change feedbacks, although the most authoritative carbon budget assessments as summarised by the IPCC do account explicitly for these.
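The "years of emissions at the current level" figures quoted above are simple quotients of a remaining budget by an assumed constant annual emission rate; a minimal sketch, where the 40 GtCO2 per year rate is an assumed round figure for current global CO2 emissions rather than a value taken from the studies above:

```python
def years_remaining(budget_gtco2, annual_emissions_gtco2):
    """Years left if global emissions stayed constant at the given annual rate."""
    return budget_gtco2 / annual_emissions_gtco2

# Figures of the kind quoted above: a remaining budget of about 250 GtCO2 from
# the start of 2023 for a 50% chance of 1.5 degrees C, and 1220 GtCO2 for 2 degrees C,
# against assumed constant emissions of roughly 40 GtCO2 per year.
print(round(years_remaining(250, 40), 1))   # about 6 years
print(round(years_remaining(1220, 40), 1))  # about 30 years
```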
Scientists assess the size of remaining carbon budgets using estimates of: past warming caused by human activities, the amount of warming per cumulative unit of CO2 emissions (also known as the Transient Climate Response to cumulative Emissions of carbon dioxide, or TCRE), the amount of warming that could still occur once all emissions of CO2 are halted (known as the Zero Emissions Commitment), and the impact of Earth system feedbacks that would otherwise not be covered. The estimates vary according to the global temperature target that is chosen, the probability of staying below that target, and the emission of other non-CO2 greenhouse gases (GHGs). This approach was first applied in the 2018 Special Report on Global Warming of 1.5 °C by the IPCC, and was also used in its 2021 Working Group I Contribution to the Sixth Assessment Report. Carbon budget estimates depend on the likelihood or probability of avoiding a temperature limit, and the assumed warming that is projected to be caused by non-CO2 emissions. These estimates assume non-CO2 emissions are also reduced in line with deep decarbonisation scenarios that reach global net zero emissions. Carbon budget estimates thus depend on how successful society is in reducing non-CO2 emissions together with carbon dioxide emissions. Scientists estimated that remaining carbon budgets can be 220 GtCO2 higher or lower depending on how successfully non-CO2 emissions are reduced. National emissions budgets Carbon budgets are applicable to the global level. To translate these global carbon budgets to the country level, a set of value judgments has to be made on how to distribute the total and remaining carbon budget. In light of the many differences between nations, including but not limited to population, level of industrialisation, national emissions histories, and mitigation capabilities, scientists have made attempts to allocate global carbon budgets among countries using methods that follow various principles of equity. Allocating national emissions budgets is comparable to sharing the effort to reduce global emissions, underlined by some assumptions of state-level responsibility for climate change. Many authors have conducted quantitative analyses which allocate emissions budgets, often simultaneously addressing disparities in historical GHG emissions between nations. One guiding principle that is used to allocate global emissions budgets to nations is the principle of "common but differentiated responsibilities and respective capabilities" that is included in the United Nations Framework Convention on Climate Change (UNFCCC). This principle is not defined in further detail in the UNFCCC but is broadly understood to recognize nations' different cumulative historical contributions to global emissions as well as their different development stages. From this perspective, those countries with greater emissions during a set time period (for example, from the pre-industrial era to the present) are the most responsible for addressing excess emissions, as are countries that are richer. Thus, their national emissions budgets have to be smaller than those from countries that have polluted less in the past, or are poorer. The concept of national historical responsibility for climate change has prevailed in the literature since the early 1990s and has been part of the key international agreements on climate change (UNFCCC, the Kyoto Protocol and the Paris Agreement).
Consequently, those countries with the highest cumulative historical emissions have the most responsibility to take the strongest actions and help developing countries to mitigate their emissions and adapt to climate change. This principle is recognized in international treaties and has been part of the diplomatic strategies by developing countries, that argue that they need larger emissions budgets to reduce inequity and achieve sustainable development. Another common equity principle for calculating national emissions budgets is the "egalitarian" principle. This principle stipulates individuals should have equal rights, and therefore emissions budgets should be distributed proportionally according to state populations. Some scientists have thus reasoned the use of national per-capita emissions in national emissions budget calculations. This principle may be favoured by nations with larger or rapidly growing populations, but raises the question whether individuals can have a right to pollute. A third equity principle that has been employed in national budget calculations considers national sovereignty. The "sovereignty" principle highlights the equal right of nations to pollute. The grandfathering method for calculating national emissions budgets uses this principle. Grandfathering allocates these budgets proportionally according to emissions at a particular base year, and has been used under international regimes such as the Kyoto Protocol and the early phase of the European Union Emissions Trading Scheme (EU ETS) This principle is often favoured by developed countries, as it allocates larger emissions budgets to them. However, recent publications highlight that grandfathering is unsupported as an equity principle as it "creates 'cascading biases' against poorer states, is not a 'standard of equity'". Other scholars have highlighted that "to treat states as the owners of emission rights has morally problematic consequences". Pathways to stay within carbon budget The steps that can be taken to stay within one's carbon budget are explained within the concept of climate change mitigation. See also Global Carbon Project References External links Global Carbon Project The CONSTRAIN Project - 4-year project (2020 to 2024) funded by European Union Horizon 2020 Greenhouse gas emissions Environmental science Climate change mitigation
Carbon budget
Chemistry,Environmental_science
2,371
32,726,780
https://en.wikipedia.org/wiki/Cray%20XK6
The Cray XK6 made by Cray is an enhanced version of the Cray XE6 supercomputer, announced in May 2011. The XK6 uses the same "blade" architecture as the XE6, with each XK6 blade comprising four compute "nodes". Each node consists of a 16-core AMD Opteron 6200 processor with 16 or 32 GB of DDR3 RAM and an Nvidia Tesla X2090 GPGPU with 6 GB of GDDR5 RAM, the two connected via PCI Express 2.0. Two Gemini router ASICs are shared between the nodes on a blade, providing a 3-dimensional torus network topology between nodes. An XK6 cabinet accommodates 24 blades (96 nodes), giving a single cabinet a total of 576 GB of graphics memory and 1,536 CPU cores. Each of the Tesla processors is rated at 665 double-precision gigaflops, giving 63.8 teraflops per cabinet. The XK6 is capable of scaling to 500,000 Opteron cores, giving up to 50 petaflops total hybrid peak performance. The XK6 runs the Cray Linux Environment. This incorporates SUSE Linux Enterprise Server and Cray's Compute Node Linux. The first order for an XK6 system was an upgrade of an existing XE6m at the Swiss National Supercomputing Centre (CSCS). References External links Cray XK6 press release Xk6 Petascale computers X86 supercomputers de:Cray XK6
Cray XK6
Technology
343
74,148,465
https://en.wikipedia.org/wiki/Marketing%20automation%20in%20email%20campaigns
Marketing automation in email campaigns refers to the numerous methods used in marketing for segmenting, targeting, scheduling, automating, and tracking marketing messages. Overview Marketing automation in email campaigns primarily involves the use of software or web-based services to execute, manage, and automate marketing tasks and processes. Automation methods are extensively used to replace manual and repetitive tasks where possible and to implement more personalized approaches for interactions. Features and components Segmentation Processing large volumes of marketing data requires segmentation. This means dividing the email list into smaller, more targeted groups based on various criteria such as demographics, psychographics, past purchases, and behavioral data. Personalization Personalization allows businesses to tailor their email content to each recipient. This could involve customizing the greeting or delivering personalized product recommendations based on previous purchases or browsing history. Scheduling Automated emails can be scheduled to be sent at optimal times based on data like when recipients are most likely to open and read emails. This increases the chance of engagement and interaction. Analytics and Reporting Most email marketing automation tools provide detailed analytics and reporting features. This enables marketers to measure the performance of their email campaigns and make data-driven decisions to improve future campaigns. See also Email marketing Digital marketing Marketing automation References Digital marketing Email Spamming Automation software Marketing software
Marketing automation in email campaigns
Engineering
259
69,050,582
https://en.wikipedia.org/wiki/Estradiol%20dicypionate
Estradiol dicypionate (EDC), also known as estradiol 3,17β-dicypionate, is an estrogen ester which was never marketed. It is the C3 and C17β cypionate (cyclopentylpropionate) diester of estradiol. See also List of estrogen esters § Estradiol esters References Abandoned drugs Cypionate esters Estradiol esters Synthetic estrogens
Estradiol dicypionate
Chemistry
101
47,827,936
https://en.wikipedia.org/wiki/Console%20Enterprises
Console Enterprises (commonly known as Console) is an American technology company headquartered in Chico, California, that focuses on high-performance Android platform design. It is best known for its Console OS Kickstarter campaign, a project intended on developing a native Android distribution for the PC. Console was originally titled Mobile Media Ventures, Inc. In mid-2015 the company announced its intention to do business as Console, Inc. going forward. In January 2017, the company rebranded to Console Enterprises, resolving a branding dispute with another company also calling itself Console Inc. That other company renamed itself to Console Connect Inc., and Console Enterprises claims to continue to use Console Inc. as a brand for B2B consulting services. The company was founded by Christopher Price. It is a privately held startup. The current number of employees in the company is unknown. Products Console OS Console OS is the first commercial distribution of the Android operating system, designed for traditional PC hardware. It debuted on Kickstarter in June, 2014. The funding campaign was successful, raising $78,497 from 5,695 backers. The distribution differs from open-source options such as Android-x86 by including commercial, closed-source drivers, codecs, and players. The Console OS platform, effectively, is the Intel Architecture equivalent to CyanogenMod. Console OS runs as a native operating system. Unlike alternative solutions for the PC, such as BlueStacks, it does not run Android in an emulator. This provides superior performance, particularly on lower-end systems - but with the disadvantage that the end-user must install the operating system, and cannot easily uninstall the software from inside the original operating system. According to an update on Console OS's Kickstarter page, Console OS is temporarily offline. Console cited the uncertain future regarding Intel support of Android source code in the open source community until Intel resumes phone development in a couple years. Console says they still plan to ship Marshmallow later this summer, and is focusing on hardware development to adjust to Intel's reduced processor support for Android. While Intel has discontinued formal support for Android on PC hardware - which Console has repeatedly noted/claimed upstream support a "stated risk" in its risk disclosure section of the Kickstarter - the company has committed to offering backers a courtesy refund as part of their pivot to hardware, once their new products reach general availability. Controversy, Fork from Android-x86.org The initial 2014 releases of Console OS KitKat supported most target Kickstarter devices - but not key/major tablets such as the Dell Venue 8 Pro or ASUS's Transformer Book T100, as it committed to. Releases became stalled. In 2015, the company released a Lollipop preview release, but took it offline citing major issues. Releases then stalled for most of a year. Later Console announced that Intel had discontinued Android-IA for PC hardware. Console claims this decision was made in January 2015. Console claims at this point it was unable to refund Kickstarter backers, citing that Kickstarter will not reverse payment transactions after 90 days. Despite this, Console said it had a plan to continue development. Later, Console announced that it new releases would fork the Android-x86.org kernel, to continue development. 
In December 2015, the creator/administrator of Android-x86.org, Chih-Wei Huang, published an article claiming Console OS "stole" Android-x86.org, and called founder Christopher Price a "cancer" on Android-x86, arguing that a fork could deprive Android.x86.org of community attention. Console, Inc. responded with evidence claiming that Chih-Wei Huang demanded a payment of $50,000 to collaborate on changes and contributions. Additionally, Console called Chih-Wei Huang's effort a "shakedown" - and responded that his letter was "... unfortunate and it’s a disgrace to open-source." Chih-Wei Huang later confirmed and admitted that he explicitly demanded the money. Later he claimed that the refusal to donate, and his criticism of Console OS shortly thereafter, were not directly linked. A technical analysis by the site XDA-Developers's own staff reporters showed that Console was under no obligation to pay funds sought or demanded by Chih-Wei Huang. Its analysis further affirmed that Console OS did not steal Android-x86 and forked it properly, with attribution on its GitHub site. However, the same analysis by XDA was critical of Console for delayed development, missing certain features, and past failures. It also was critical of Intel for a lack of any public explanation for why Android-IA for PC hardware was discontinued, shortly after Console OS began releasing code based on it. The controversy received considerable attention on several Android news and open-source community web sites. Other Products Console's first product was the (code-named "Unit 00"). The developer kit was sold from 2013 to 2014. Positioned to be a future-generation Android development system, it was built using PC hardware - but ran Android 4.2 Jelly Bean. It was the first Android device to formally ship with an Intel Core processor, the most powerful Android device sold at its time. Console announced at Mobile World Congress 2014 in Barcelona, Spain. It was shown under glass at Intel's booth. The company stated they hope to ship it by the end of 2015, and that it intends to be the most powerful Android TV stick on the market. In August 2016, Console announced that was not going to be launched. No orders or pre-orders were taken for the product. The company has cited Intel's pullbacks/downsizing in Android development as a reason for its discontinuation. The company announced at the fall 2017 Intel Developer Forum their new product, ConsoleTab, which is based on Intel technology. ConsoleTab's auxiliary battery (a planned feature) was soon removed due to hardware problems in the manufacturing process. As of June 2017, the tablet has not been launched, as Console has cited Intel possibly withdrawing from Android on the processor ConsoleTab depends on. References External links Console Enterprises homepage Console OS Wiki Console OS Kickstarter Campaign Computer companies of the United States Computer hardware companies Electronics companies of the United States
Console Enterprises
Technology
1,292
7,850,102
https://en.wikipedia.org/wiki/Quantum%20mind
The quantum mind or quantum consciousness is a group of hypotheses proposing that local physical laws and interactions from classical mechanics or connections between neurons alone cannot explain consciousness, positing instead that quantum-mechanical phenomena, such as entanglement and superposition that cause nonlocalized quantum effects, interacting in smaller features of the brain than cells, may play an important part in the brain's function and could explain critical aspects of consciousness. These scientific hypotheses are as yet unvalidated, and they can overlap with quantum mysticism. History Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that "mind, as manifested by the capacity to make choices, is to some extent inherent in every electron". Other contemporary physicists and philosophers considered these arguments unconvincing. Victor Stenger characterized quantum consciousness as a "myth" having "no scientific basis" that "should take its place along with gods, unicorns and dragons". David Chalmers argues against quantum consciousness. He instead discusses how quantum mechanics may relate to dualistic consciousness. Chalmers is skeptical that any new physics can resolve the hard problem of consciousness. He argues that quantum theories of consciousness suffer from the same weakness as more conventional theories. Just as he argues that there is no particular reason why particular macroscopic physical features in the brain should give rise to consciousness, he also thinks that there is no particular reason why a particular quantum feature, such as the EM field in the brain, should give rise to consciousness either. Approaches Bohm David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe. He claimed that both quantum theory and relativity pointed to this deeper theory, a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it. Bohm's proposed order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness. Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his "implicate order" could emerge in a way relevant to consciousness. He later collaborated on Karl Pribram's holonomic brain theory as a model of quantum consciousness. David Bohm also collaborated with Basil Hiley on work that claimed mind and matter both emerge from an "implicate order". Hiley in turn worked with philosopher Paavo Pylkkänen. According to Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level". Penrose and Hameroff Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as "orchestrated objective reduction" (Orch-OR). 
Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. They reviewed and updated their theory in 2013. Penrose's argument stemmed from Gödel's incompleteness theorems. In his first book on consciousness, The Emperor's New Mind (1989), he argued that while a formal system cannot prove its own consistency, Gödel's unprovable results are provable by human mathematicians. Penrose took this to mean that human mathematicians are not formal proof systems and not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on fallacious equivocation on the meaning of computation. In the same book, Penrose wrote: "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity." Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, he proposed a new form of wave function collapse that occurs in isolation and called it objective reduction. He suggested each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length, they become unstable and collapse. Penrose suggested that objective reduction represents neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derives. Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior. Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and may contain delocalized π electrons. Tubulins have other smaller non-polar regions that contain π-electron-rich indole rings separated by about 2 nm. Hameroff proposed that these electrons are close enough to become entangled. He originally suggested that the tubulin-subunit electrons would form a Bose–Einstein condensate, but this was discredited. He then proposed a Frohlich condensate, a hypothetical coherent oscillation of dipolar molecules, but this too was experimentally discredited. In other words, there is a missing link between physics and neuroscience. For instance, the proposed predominance of A-lattice microtubules, more suitable for information processing, was falsified by Kikkawa et al., who showed that all in vivo microtubules have a B lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified. Orch-OR predicted that microtubule coherence reaches the synapses through dendritic lamellar bodies (DLBs), but De Zeeuw et al. proved this impossible by showing that DLBs are micrometers away from gap junctions. In 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013 corroborates Orch-OR theory. Experiments that showed that anaesthetic drugs reduce how long microtubules can sustain suspected quantum excitations appear to support the quantum theory of consciousness. 
In April 2022, the results of two related experiments at the University of Alberta and Princeton University were announced at The Science of Consciousness conference, providing further evidence to support quantum processes operating within microtubules. In a study Stuart Hameroff was part of, Jack Tuszyński of the University of Alberta demonstrated that anesthetics hasten the duration of a process called delayed luminescence, in which microtubules and tubulins trapped light. Tuszyński suspects that the phenomenon has a quantum origin, with superradiance being investigated as one possibility. In the second experiment, Gregory D. Scholes and Aarat Kalra of Princeton University used lasers to excite molecules within tubulins, causing a prolonged excitation to diffuse through microtubules further than expected, which did not occur when repeated under anesthesia. However, diffusion results have to be interpreted carefully, since even classical diffusion can be very complex due to the wide range of length scales in the fluid filled extracellular space. Nevertheless, University of Oxford quantum physicist Vlatko Vedral told that this connection with consciousness is a really long shot. Also in 2022, a group of Italian physicists conducted several experiments that failed to provide evidence in support of a gravity-related quantum collapse model of consciousness, weakening the possibility of a quantum explanation for consciousness. Although these theories are stated in a scientific framework, it is difficult to separate them from scientists' personal opinions. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote: [M]y own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer.... If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, "Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot." Other people would say, "No, you can't say it feels something merely because it behaves as though it feels something." My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was—which I say it couldn't be, if it's entirely computationally controlled. Penrose continues: A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either—although I'm saying that it's beyond the physics we know now.... My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior—that is, where quantum measurement comes in. Umezawa, Vitiello, Freeman Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage. 
Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain. Their quantum field theory models of brain dynamics are fundamentally different from the Penrose–Hameroff theory. Quantum brain dynamics As described by Harald Atmanspacher, "Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness." The original motivation in the early 20th century for relating quantum theory to consciousness was essentially philosophical. It is fairly plausible that conscious free decisions (“free will”) are problematic in a perfectly deterministic world, so quantum randomness might indeed open up novel possibilities for free will. (On the other hand, randomness is problematic for goal-directed volition!) Ricciardi and Umezawa proposed in 1967 a general theory of quanta of long-range coherent waves within and between brain cells, and showed a possible mechanism of memory storage and retrieval in terms of Nambu–Goldstone bosons. Mari Jibu and Kunio Yasue later popularized these results under the name "quantum brain dynamics" (QBD) as the hypothesis to explain the function of the brain within the framework of quantum field theory, with implications for consciousness. Pribram Karl Pribram's holonomic brain theory (quantum holography) invoked quantum mechanics to explain higher-order processing by the mind. He argued that his holonomic model solved the binding problem. Pribram collaborated with Bohm in his work on quantum approaches to mind and he provided evidence on how much of the processing in the brain was done in wholes. He proposed that ordered water at dendritic membrane surfaces might operate by structuring Bose–Einstein condensation supporting quantum dynamics. Stapp Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action. The collapse, therefore, takes place in the expectation of the observer associated with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev. Catecholaminergic Neuron Electron Transport (CNET) CNET is a hypothesized neural signaling mechanism in catecholaminergic neurons that would use quantum mechanical electron transport. The hypothesis is based in part on the observation by many independent researchers that electron tunneling occurs in ferritin, an iron storage protein that is prevalent in those neurons, at room temperature and ambient conditions. The hypothesized function of this mechanism is to assist in action selection, but the mechanism itself would be capable of integrating millions of cognitive and sensory neural signals using a physical mechanism associated with strong electron-electron interactions. Each tunneling event would involve a collapse of an electron wave function, but the collapse would be incidental to the physical effect created by strong electron-electron interactions. CNET predicted a number of physical properties of these neurons that have been subsequently observed experimentally, such as electron tunneling in substantia nigra pars compacta (SNc) tissue and the presence of disordered arrays of ferritin in SNc tissue.
The hypothesis also predicted that disordered ferritin arrays like those found in SNc tissue should be capable of supporting long-range electron transport and providing a switching or routing function, both of which have also been subsequently observed. Another prediction of CNET was that the largest SNc neurons should mediate action selection. This prediction was contrary to earlier proposals about the function of those neurons at that time, which were based on predictive reward dopamine signaling. A team led by Dr. Pascal Kaeser of Harvard Medical School subsequently demonstrated that those neurons do in fact code movement, consistent with the earlier predictions of CNET. While the CNET mechanism has not yet been directly observed, it may be possible to do so using quantum dot fluorophores tagged to ferritin or other methods for detecting electron tunneling. CNET is applicable to a number of different consciousness models as a binding or action selection mechanism, such as Integrated Information Theory (IIT) and Sensorimotor Theory (SMT). It is noted that many existing models of consciousness fail to specifically address action selection or binding. For example, O’Regan and Noë call binding a “pseudo problem,” but also state that “the fact that object attributes seem perceptually to be part of a single object does not require them to be ‘represented’ in any unified kind of way, for example, at a single location in the brain, or by a single process. They may be so represented, but there is no logical necessity for this.” Simply because there is no “logical necessity” for a physical phenomenon does not mean that it does not exist, or that once it is identified that it can be ignored. Likewise, global workspace theory (GWT) models appear to treat dopamine as modulatory, based on the prior understanding of those neurons from predictive reward dopamine signaling research, but GWT models could be adapted to include modeling of moment-by-moment activity in the striatum to mediate action selection, as observed by Kaiser. CNET is applicable to those neurons as a selection mechanism for that function, as otherwise that function could result in seizures from simultaneous actuation of competing sets of neurons. While CNET by itself is not a model of consciousness, it is able to integrate different models of consciousness through neural binding and action selection. However, a more complete understanding of how CNET might relate to consciousness would require a better understanding of strong electron-electron interactions in ferritin arrays, which implicates the many-body problem. Criticism These hypotheses of the quantum mind remain hypothetical speculation, as Penrose admits in his discussions. Until they make a prediction that is tested by experimentation, the hypotheses are not based on empirical evidence. In 2010, Lawrence Krauss was guarded in criticising Penrose's ideas. He said: "Roger Penrose has given lots of new-age crackpots ammunition... Many people are dubious that Penrose's suggestions are reasonable, because the brain is not an isolated quantum-mechanical system. To some extent it could be, because memories are stored at the molecular level, and at a molecular level quantum mechanics is significant." According to Krauss, "It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact, we can make weird quantum phenomena happen. 
But what quantum mechanics doesn't change about the universe is, if you want to change things, you still have to do something. You can't change the world by thinking about it." The process of testing the hypotheses with experiments is fraught with conceptual/theoretical, practical, and ethical problems. Conceptual problems The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett proposed a theory called multiple drafts model, which doesn't indicate that quantum effects are needed, in his 1991 book Consciousness Explained. A philosophical argument on either side is not a scientific proof, although philosophical analysis can indicate key differences in the types of models and show what type of experimental differences might be observed. But since there is no clear consensus among philosophers, there is no conceptual support that a quantum mind theory is needed. A possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's survival depended on the state of a radioactive atom—whether it had decayed and emitted radiation. According to Schrödinger, the Copenhagen interpretation implies that the cat is both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; he intended the example to illustrate the absurdity of the existing view of quantum mechanics. But since Schrödinger's time, physicists have given other interpretations of the mathematics of quantum mechanics, some of which regard the "alive and dead" cat superposition as quite real. Schrödinger's famous thought experiment poses the question of when a system stops existing as a quantum superposition of states. In the same way, one can ask whether the act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. This analogy of decision-making uses a formalism derived from quantum mechanics, but does not indicate the actual mechanism by which the decision is made. In this way, the idea is similar to quantum cognition. This field clearly distinguishes itself from the quantum mind, as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm, generalized quantum paradigm, or quantum structure paradigm that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which it operates. 
For example, quantum cognition proposes that some decisions can be analyzed as if there is interference between two alternatives, but it is not a physical quantum interference effect. Practical problems The main theoretical argument against the quantum-mind hypothesis is the assertion that quantum states in the brain would lose coherency before they reached a scale where they could be useful for neural processing. This supposition was elaborated by Max Tegmark. His calculations indicate that quantum systems in the brain decohere at sub-picosecond timescales. No brain response has shown computational results or reactions on so fast a timescale. Typical reactions are on the order of milliseconds, trillions of times longer than sub-picosecond timescales. In support of his multiple drafts model, Daniel Dennett cites an experimental result concerning an optical illusion that happens on a timescale of less than a second or so. In this experiment, two different-colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change color as it moves across the visual field. A green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change color before the second light is observed. Velmans argues that the cutaneous rabbit illusion, another illusion that happens in about a second, demonstrates that there is a delay while modelling occurs in the brain and that this delay was discovered by Libet. These slow illusions, which happen over less than a second, do not support a proposal that the brain functions on the picosecond timescale. Penrose says: The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn't even attempt to build a quantum computer out of ordinary nerve signals, because they're just too big and in an environment that's too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there's an extremely good chance that you can get quantum-level activity inside them. For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large-scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules. There are various avenues of attack. One is directly on the physics, on quantum theory, and there are certain experiments that people are beginning to perform, and various schemes for a modification of quantum mechanics. I don't think the experiments are sensitive enough yet to test many of these specific ideas. One could imagine experiments that might test these things, but they'd be very hard to perform. Penrose also said in an interview: ...whatever consciousness is, it must be beyond computable physics....
It's not that consciousness depends on quantum mechanics, it's that it depends on where our current theories of quantum mechanics go wrong. It's to do with a theory that we don't know yet. A demonstration of a quantum effect in the brain has to explain this problem or explain why it is not relevant, or that the brain somehow circumvents the problem of the loss of quantum coherency at body temperature. As Penrose proposes, it may require a new type of physical theory, something "we don't know yet." Ethical problems Deepak Chopra has referred to a "quantum soul" existing "apart from the body", to human "access to a field of infinite possibilities", and to other quantum mysticism topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum-mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself", as determined by one's state of mind. Robert Carroll states that Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings. Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are based on the same principles as quantum mechanics. This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body. Chopra said: "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there’s a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics." On the other hand, he also claims that quantum effects are "just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness." In his book Quantum Healing, Chopra stated the conclusion that quantum entanglement links everything in the Universe, and therefore it must create consciousness. According to Daniel Dennett, "On this topic, Everybody's an expert... but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable." While quantum effects are significant in the physiology of the brain, critics of quantum mind hypotheses challenge whether the effects of known or speculated quantum phenomena in biology scale up to have significance in neuronal computation, much less the emergence of consciousness as a phenomenon. Daniel Dennett said, "Quantum effects are there in your car, your watch, and your computer. But most things—most macroscopic objects—are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them." See also Artificial consciousness Bohm interpretation of quantum mechanics Coincidence detection in neurobiology Critical brain hypothesis Electromagnetic theories of consciousness Evolutionary neuroscience Hameroff-Penrose Orchestrated Objective Reduction Hard problem of consciousness Holonomic brain theory Many-minds interpretation Mechanism (philosophy) Neuroplasticity Quantum cognition Quantum neural network References Further reading McFadden, Johnjoe (2000) Quantum Evolution HarperCollins. Final chapter on the quantum mind.
External links Center for Consciousness Studies, directed by Stuart Hameroff PhilPapers on Philosophy of Mind, edited by David Bourget and David Chalmers Quantum Approaches to Consciousness, entry in Stanford Encyclopedia of Philosophy Fringe science Quantum mechanics Theory of mind
Quantum mind
Physics
5,579
712,166
https://en.wikipedia.org/wiki/Wagstaff%20prime
In number theory, a Wagstaff prime is a prime number of the form (2^p + 1)/3 where p is an odd prime. Wagstaff primes are named after the mathematician Samuel S. Wagstaff Jr.; the Prime Pages credit François Morain for naming them in a lecture at the Eurocrypt 1990 conference. Wagstaff primes appear in the New Mersenne conjecture and have applications in cryptography. Examples The first three Wagstaff primes are 3, 11, and 43 because 3 = (2^3 + 1)/3, 11 = (2^5 + 1)/3, and 43 = (2^7 + 1)/3. Known Wagstaff primes The first few Wagstaff primes are: 3, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, ... Exponents which produce Wagstaff primes or probable primes are: 3, 5, 7, 11, 13, 17, 19, 23, 31, 43, 61, 79, 101, 127, 167, 191, 199, 313, 347, 701, 1709, 2617, 3539, 5807, ... Generalizations It is natural to consider more generally numbers of the form Q(b, n) = (b^n + 1)/(b + 1) where the base b ≥ 2. Since for odd n we have (b^n + 1)/(b + 1) = ((−b)^n − 1)/((−b) − 1), these numbers are called "Wagstaff numbers base b", and sometimes considered a case of the repunit numbers with negative base −b. For some specific values of b, all Q(b, n) (with a possible exception for very small n) are composite because of an "algebraic" factorization. Specifically, if b has the form of a perfect power with odd exponent (like 8, 27, 32, 64, 125, 128, 216, 243, 343, 512, 729, 1000, etc.), say b = a^m with m odd, then the fact that x^m + 1, with m odd, is divisible by x + 1 shows that b^n + 1 = (a^n)^m + 1 is divisible by a^n + 1, so Q(b, n) admits an algebraic factor in these special cases. Another case is b = 4k^4, with k a positive integer (like 4, 64, 324, 1024, 2500, 5184, etc.), where we have the aurifeuillean factorization. However, when b does not admit an algebraic factorization, it is conjectured that an infinite number of values of n make Q(b, n) prime; notice that all such n are odd primes. For b = 10, the primes themselves have the following appearance: 9091, 909091, 909090909090909091, 909090909090909090909090909091, …, and these n are: 5, 7, 19, 31, 53, 67, 293, 641, 2137, 3011, 268207, ... . See Repunit#Repunit primes for the list of the generalized Wagstaff primes base b. (Generalized Wagstaff primes base b are generalized repunit primes base −b with odd n.) The least primes p such that Q(n, p) is prime are (starts with n = 2, 0 if no such p exists) 3, 3, 3, 5, 3, 3, 0, 3, 5, 5, 5, 3, 7, 3, 3, 7, 3, 17, 5, 3, 3, 11, 7, 3, 11, 0, 3, 7, 139, 109, 0, 5, 3, 11, 31, 5, 5, 3, 53, 17, 3, 5, 7, 103, 7, 5, 5, 7, 1153, 3, 7, 21943, 7, 3, 37, 53, 3, 17, 3, 7, 11, 3, 0, 19, 7, 3, 757, 11, 3, 5, 3, ... The least bases b such that Q(b, prime(n)) is prime are (starts with n = 2) 2, 2, 2, 2, 2, 2, 2, 2, 7, 2, 16, 61, 2, 6, 10, 6, 2, 5, 46, 18, 2, 49, 16, 70, 2, 5, 6, 12, 92, 2, 48, 89, 30, 16, 147, 19, 19, 2, 16, 11, 289, 2, 12, 52, 2, 66, 9, 22, 5, 489, 69, 137, 16, 36, 96, 76, 117, 26, 3, ... References External links Chris Caldwell, The Top Twenty: Wagstaff at The Prime Pages. Renaud Lifchitz, "An efficient probable prime test for numbers of the form (2^p + 1)/3". Tony Reix, "Three conjectures about primality testing for Mersenne, Wagstaff and Fermat numbers based on cycles of the Digraph under x^2 − 2 modulo a prime". List of repunits in base -50 to 50 List of Wagstaff primes base 2 to 160 Classes of prime numbers Eponymous numbers in mathematics Unsolved problems in number theory
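As a concrete illustration of the definition above, the following Python sketch generates the numbers (2^p + 1)/3 for small odd prime exponents p and keeps the (probable) primes among them. This is only a toy search, not the method used to find the large known Wagstaff probable primes; it assumes the third-party sympy library for prime generation and primality testing.

```python
# Toy search for small Wagstaff primes (2^p + 1)/3 with p an odd prime.
# Assumes sympy is installed; for large values isprime() acts as a strong
# probable-prime test.
from sympy import isprime, primerange

def wagstaff_number(p: int) -> int:
    """Return (2**p + 1) // 3 for an odd prime exponent p."""
    return (2**p + 1) // 3

found = []
for p in primerange(3, 200):              # odd primes 3, 5, 7, 11, ...
    w = wagstaff_number(p)
    if isprime(w):
        found.append((p, w))

for p, w in found[:8]:
    print(f"p = {p:3d}  ->  (2^p + 1)/3 = {w}")
```

For the exponents 3, 5, 7, 11, 13, ... listed above, the printed values reproduce the first few Wagstaff primes 3, 11, 43, 683, 2731, ...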
Wagstaff prime
Mathematics
1,029
3,877,464
https://en.wikipedia.org/wiki/C21H30O2
The molecular formula C21H30O2 (molar mass: 314.46 g/mol) may refer to: Cannabinoid Abnormal cannabidiol Cannabichromene Cannabicitran Cannabicyclol Cannabidiol Cis-THC Delta-6-Cannabidiol Isotetrahydrocannabinol Tetrahydrocannabinol Delta-3-Tetrahydrocannabinol Delta-4-Tetrahydrocannabinol Delta-7-Tetrahydrocannabinol Delta-8-Tetrahydrocannabinol Delta-10-Tetrahydrocannabinol Steroid 17α-Allyl-19-nortestosterone 20α-Dihydrodydrogesterone 5α-Dihydrolevonorgestrel 5α-Dihydroethisterone Ethinylandrostenediol Hydroxytibolones 3α-Hydroxytibolone 3β-Hydroxytibolone 17α-Methyl-19-norprogesterone Metynodiol Progesterone Retroprogesterone Vinyltestosterone
C21H30O2
Chemistry
264
8,635,864
https://en.wikipedia.org/wiki/Amorphous%20computing
Amorphous computing refers to computational systems that use very large numbers of identical, parallel processors each having limited computational ability and local interactions. The term amorphous computing was coined at MIT in 1996 in a paper entitled "Amorphous Computing Manifesto" by Abelson, Knight, Sussman, et al. Examples of naturally occurring amorphous computations can be found in many fields, such as developmental biology (the development of multicellular organisms from a single cell), molecular biology (the organization of sub-cellular compartments and intra-cell signaling), neural networks, and chemical engineering (non-equilibrium systems). The study of amorphous computation is hardware agnostic—it is not concerned with the physical substrate (biological, electronic, nanotech, etc.) but rather with the characterization of amorphous algorithms as abstractions with the goal of both understanding existing natural examples and engineering novel systems. Amorphous computers tend to have many of the following properties: Implemented by redundant, potentially faulty, massively parallel devices. Devices having limited memory and computational abilities. Devices being asynchronous. Devices having no a priori knowledge of their location. Devices communicating only locally. Exhibit emergent or self-organizational behavior (patterns or states larger than an individual device). Fault-tolerant, especially to the occasional malformed device or state perturbation. Algorithms, tools, and patterns (Some of these algorithms have no known names. Where a name is not known, a descriptive one is given.) "Fickian communication". Devices communicate by generating messages which diffuse through the medium in which the devices dwell. Message strength will follow the inverse square law as described by Fick's law of diffusion. Examples of such communication are common in biological and chemical systems. "Link diffusive communication". Devices communicate by propagating messages down links wired from device to device. Unlike "Fickian communication", there is not necessarily a diffusive medium in which the devices dwell and thus the spatial dimension is irrelevant and Fick's law is not applicable. Examples are found in Internet routing algorithms such as the diffusing update algorithm. Most algorithms described in the amorphous computing literature assume this kind of communication. "Wave Propagation". (Ref 1) A device emits a message with an encoded hop-count. Devices which have not seen the message previously, increment the hop count, and re-broadcast. A wave propagates through the medium and the hop-count across the medium will effectively encode a distance gradient from the source. "Random ID". Each device gives itself a random id, the random space being sufficiently large to preclude duplicates. "Growing-point program". (Coore). Processes that move among devices according to 'tropism' (movement of an organism due to external stimuli). "Wave coordinates". DARPA PPT slides. To be written. "Neighborhood query". (Nagpal) A device samples the state of its neighbors by either a push or pull mechanism. "Peer-pressure". Each device maintains a state and communicates this state to its neighbors. Each device uses some voting scheme to determine whether or not to change state to its neighbor's state. The algorithm partitions space according to the initial distributions and is an example of a clustering algorithm. "Self maintaining line". (Lauren Lauren, Clement). 
A gradient is created from one end-point on a plane covered with devices via Link Diffusive Communication. Each device is aware of its value in the gradient and the id of its neighbor that is closer to the origin of the gradient. The opposite end-point detects the gradient and informs its closer neighbor that it is part of a line. This propagates up the gradient forming a line which is robust against disruptions in the field. (Illustration needed). "Club Formation". (Coore, Coore, Nagpal, Weiss). Local clusters of processors elect a leader to serve as a local communication hub. "Coordinate formation" (Nagpal). Multiple gradients are formed and used to form a coordinate system via triangulation. Researchers and labs Hal Abelson, MIT Jacob Beal, graduate student MIT (high level languages for amorphous computing) Daniel Coore, University of West Indies (growing point language, tropism, grown inverter series) Nikolaus Correll, University of Colorado (robotic materials) Tom Knight, MIT (computation with synthetic biology) Radhika Nagpal, Harvard (self-organizing systems) Zack Booth Simpson, Ellington Lab, Univ. of Texas at Austin. (Bacterial edge detector) Gerry Sussman, MIT AI Lab Ron Weiss, MIT (rule triggering, microbial colony language, coli pattern formation) See also Unconventional computing Documents The Amorphous Computing Home Page A collection of papers and links at the MIT AI lab Amorphous Computing (Communications of the ACM, May 2000) A review article showing examples from Coore's Growing Point Language as well as patterns created from Weiss's rule triggering language. "Amorphous computing in the presence of stochastic disturbances" A paper investigating the ability of Amorphous computers to deal with failing components. Amorphous Computing Slides from DARPA talk in 1998 An overview of ideas and proposals for implementations Amorphous and Cellular Computing PPT from 2002 NASA Lecture Almost the same as above, in PPT format Infrastructure for Engineered Emergence on Sensor/Actuator Networks, Beal and Bachrach, 2006. An amorphous computing language called "Proto". Self-repairing Topological Patterns Clement, Nagpal. Algorithms for self-repairing and self-maintaining line. Robust Methods of Amorphous Synchronization, Joshua Grochow Methods for inducing global temporal synchronization. Programmable Self-Assembly: Constructing Global Shape Using Biologically-Inspired Local Interactions and Origami Mathematics and Associated Slides Nagpal PhD Thesis A language to compile local-interaction instructions from a high-level description of an origami-like folded structure. Towards a Programmable Material, Nagpal Associated Slides Similar outline to previous paper Self-Healing Structures in Amorphous Computing Zucker Methods for detecting and maintaining topologies inspired by biological regeneration. Resilient serial execution on amorphous machines, Sutherland Master's Thesis A language for running serial processes on amorphous computers Paradigms for Structure in an Amorphous Computer, 1997 Coore, Nagpal, Weiss Techniques for creating hierarchical order in amorphous computers. Organizing a Global Coordinate System from Local Information on an Amorphous Computer, 1999 Nagpal. Techniques for creating coordinate systems by gradient formation and analyzes precision limits. Amorphous Computing: examples, mathematics and theory, 2013 W Richard Stark. 
This paper presents nearly 20 examples varying from simple to complex, standard mathematical tools are used to prove theorems and compute expected behavior, four programming styles are identified and explored, three uncomputability results are proved, and the computational foundations of a complex, dynamic intelligence system are sketched. Parallel computing Classes of computers
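To make the "wave propagation" pattern described above concrete, here is a minimal Python sketch, not code from the amorphous-computing literature (it is not Proto or a growing-point program): devices scattered at random in a unit square communicate only with neighbours within a fixed radius, and a hop count floods outward from a single source so that every reached device holds an approximate distance gradient. The device count, communication radius and synchronous flooding are simplifying assumptions; real amorphous models are asynchronous and tolerate faulty devices.

```python
# Toy "wave propagation" gradient: devices broadcast a hop count to their
# local neighbours; each device stores and re-broadcasts the first value it
# hears, yielding an approximate distance gradient from the source device.
import random
from collections import deque

random.seed(0)
N, RADIUS = 400, 0.08                      # device count and comm range (assumed)
devices = [(random.random(), random.random()) for _ in range(N)]

def neighbors(i):
    xi, yi = devices[i]
    return [j for j, (xj, yj) in enumerate(devices)
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIUS ** 2]

hops = [None] * N                          # None = message has not arrived yet
source = 0
hops[source] = 0
frontier = deque([source])
while frontier:                            # the wave spreads one hop per step
    i = frontier.popleft()
    for j in neighbors(i):
        if hops[j] is None:                # re-broadcast only on first receipt
            hops[j] = hops[i] + 1
            frontier.append(j)

reached = [h for h in hops if h is not None]
print(f"devices reached: {len(reached)}/{N}, maximum hop count: {max(reached)}")
```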
Amorphous computing
Technology
1,468
34,257,312
https://en.wikipedia.org/wiki/Association%20A%C3%A9ronautique%20et%20Astronautique%20de%20France
Association Aéronautique et Astronautique de France (3AF or AAAF) is the French national aeronautical and astronautical association. It is located in Paris. It was created in 1971 from the Association Française des Ingénieurs et Techniciens de l'Aéronautique et de l'Espace (AFITAE), created in 1945, and the Société Française d'Astronautique (SFA), created in 1955. The activity of 3AF is to bring together people interested, for professional or personal reasons, in the aerospace sector, to represent their point of view and to help the development of scientific and technological knowledge related to the aerospace industry and its history. Its members are mostly technicians, engineers and researchers. Its industrial partners include some of the largest companies in the French and European aerospace industry, such as Alcatel Space, EADS and Arianespace. It also has honorary members. 3AF is a founding member of the Confederation of European Aerospace Societies (CEAS), together with the equivalent German association Deutsche Gesellschaft für Luft- und Raumfahrt Lilienthal-Oberth e.V. (DGLR), Britain's Royal Aeronautical Society (RAeS) and the Italian Association of Aeronautics and Astronautics (AIDAA). References External links Aerospace engineering organizations Clubs and societies in France Organizations established in 1971 1971 establishments in France
Association Aéronautique et Astronautique de France
Engineering
270
47,572,400
https://en.wikipedia.org/wiki/R%C3%B6mpp%20Encyclopedia%20Natural%20Products
The Römpp Encyclopedia Natural Products is an encyclopedia of natural products written by German chemists who specialize in this area of science. It is published by Thieme Medical Publishers. See also Römpp's Chemistry Lexicon References Further reading External links German encyclopedias Natural products Encyclopedias of science
Römpp Encyclopedia Natural Products
Chemistry
61
24,731,079
https://en.wikipedia.org/wiki/Observer%20%28quantum%20physics%29
Some interpretations of quantum mechanics posit a central role for an observer of a quantum phenomenon. The quantum mechanical observer is tied to the issue of observer effect, where a measurement necessarily requires interacting with the physical object being measured, affecting its properties through the interaction. The term "observable" has gained a technical meaning, denoting a Hermitian operator that represents a measurement. Foundation The theoretical foundation of the concept of measurement in quantum mechanics is a contentious issue deeply connected to the many interpretations of quantum mechanics. A key focus point is that of wave function collapse, for which several popular interpretations assert that measurement causes a discontinuous change into an eigenstate of the operator associated with the quantity that was measured, a change which is not time-reversible. More explicitly, the superposition principle (ψ = Σ_n a_n ψ_n) of quantum physics dictates that for a wave function ψ, a measurement will result in a state of the quantum system with one of the possible eigenvalues f_n of the operator F̂ that represents the measurement, the system being left in the corresponding eigenfunction ψ_n. Once one has measured the system, one knows its current state; and this prevents it from being in one of its other states — it has apparently decohered from them without prospects of future strong quantum interference. This means that the type of measurement one performs on the system affects the end-state of the system. An experimentally studied situation related to this is the quantum Zeno effect, in which a quantum state would decay if left alone, but does not decay because of its continuous observation. The dynamics of a quantum system under continuous observation are described by a quantum stochastic master equation known as the Belavkin equation. Further studies have shown that even observing the results after the photon is produced leads to collapsing the wave function and loading a back-history as shown by delayed choice quantum eraser. When discussing the wave function ψ which describes the state of a system in quantum mechanics, one should be cautious of a common misconception that assumes that the wave function ψ amounts to the same thing as the physical object it describes. This flawed concept must then require the existence of an external mechanism, such as a measuring instrument, that lies outside the principles governing the time evolution of the wave function ψ, in order to account for the so-called "collapse of the wave function" after a measurement has been performed. But the wave function ψ is not a physical object like, for example, an atom, which has an observable mass, charge and spin, as well as internal degrees of freedom. Instead, ψ is an abstract mathematical function that contains all the statistical information that an observer can obtain from measurements of a given system. In this case, there is no real mystery in that this mathematical form of the wave function ψ must change abruptly after a measurement has been performed. A consequence of Bell's theorem is that measurement on one of two entangled particles can appear to have a nonlocal effect on the other particle. Additional problems related to decoherence arise when the observer is modeled as a quantum system. Description The Copenhagen interpretation, which is the most widely accepted interpretation of quantum mechanics among physicists, posits that an "observer" or a "measurement" is merely a physical process.
One of the founders of the Copenhagen interpretation, Werner Heisenberg, wrote: Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the "possible" to the "actual," is absolutely necessary here and cannot be omitted from the interpretation of quantum theory. Niels Bohr, also a founder of the Copenhagen interpretation, wrote: all unambiguous information concerning atomic objects is derived from the permanent marks such as a spot on a photographic plate, caused by the impact of an electron left on the bodies which define the experimental conditions. Far from involving any special intricacy, the irreversible amplification effects on which the recording of the presence of atomic objects rests rather remind us of the essential irreversibility inherent in the very concept of observation. The description of atomic phenomena has in these respects a perfectly objective character, in the sense that no explicit reference is made to any individual observer and that therefore, with proper regard to relativistic exigencies, no ambiguity is involved in the communication of information. Likewise, Asher Peres stated that "observers" in quantum physics are similar to the ubiquitous "observers" who send and receive light signals in special relativity. Obviously, this terminology does not imply the actual presence of human beings. These fictitious physicists may as well be inanimate automata that can perform all the required tasks, if suitably programmed. Critics of the special role of the observer also point out that observers can themselves be observed, leading to paradoxes such as that of Wigner's friend; and that it is not clear how much consciousness is required. As John Bell inquired, "Was the wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some highly qualified measurer—with a PhD?" Anthropocentric interpretation The prominence of seemingly subjective or anthropocentric ideas like "observer" in the early development of the theory has been a continuing source of disquiet and philosophical dispute. A number of new-age religious or philosophical views give the observer a more special role, or place constraints on who or what can be an observer. There is no credible peer-reviewed research that backs such claims. As an example of such claims, Fritjof Capra declared, "The crucial feature of atomic physics is that the human observer is not only necessary to observe the properties of an object, but is necessary even to define these properties." Confusion with uncertainty principle The uncertainty principle has been frequently confused with the observer effect, evidently even by its originator, Werner Heisenberg. The uncertainty principle in its standard form describes how precisely it is possible to measure the position and momentum of a particle at the same time. If the precision in measuring one quantity is increased, the precision in measuring the other decreases. 
An alternative version of the uncertainty principle, more in the spirit of an observer effect, fully accounts for the disturbance the observer has on a system and the error incurred, although this is not how the term "uncertainty principle" is most commonly used in practice. See also Observer effect (physics) Quantum foundations References Concepts in physics Quantum mechanics Interpretations of quantum mechanics
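As a minimal numerical sketch of the measurement described above (not tied to any particular interpretation), the following Python code expands a qubit state in the eigenbasis of an observable, computes the Born-rule probabilities of the outcomes, and then "collapses" the state onto the eigenvector of the outcome that was drawn. NumPy is assumed, and the particular state and observable are arbitrary choices made for illustration.

```python
# Projective measurement sketch: expand a state in the eigenbasis of an
# observable, draw an outcome with Born-rule probability, and replace the
# state with the corresponding eigenvector (the "collapse").
import numpy as np

rng = np.random.default_rng(1)

Z = np.array([[1, 0], [0, -1]], dtype=complex)     # observable (Pauli-Z)
eigenvalues, eigenvectors = np.linalg.eigh(Z)      # eigenvalues -1 and +1

psi = np.array([3, 4j], dtype=complex)             # a superposition state
psi = psi / np.linalg.norm(psi)

amplitudes = eigenvectors.conj().T @ psi           # components in the eigenbasis
probs = np.abs(amplitudes) ** 2                    # Born rule
print("outcome probabilities:", dict(zip(eigenvalues.round(3), probs.round(3))))

k = rng.choice(len(eigenvalues), p=probs)          # one simulated measurement
post_state = eigenvectors[:, k]
print("measured eigenvalue:", eigenvalues[k])
print("post-measurement state:", post_state)
```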
Observer (quantum physics)
Physics
1,373
72,101,233
https://en.wikipedia.org/wiki/Malamba%20%28drink%29
Malamba is a traditional alcoholic beverage in Cameroon, Equatorial Guinea, and Gabon made by fermenting sugarcane juice. The canes are crushed in a mortar, and the juice is left to ferment for approximately two weeks. The flavor and texture are similar to the Latin American drink guarapo. To accelerate the process of fermentation, bark from the Garcinia kola (bitter kola in English, known as essoc or onaé in Cameroon) can be added to the juice. Corn is also sometimes added during the fermentation process to increase the alcohol content. In Gabon, the drink is also known as musungu or vin de canne (cane wine) in French. See also Palm wine References Fermented drinks Gabonese cuisine Cameroonian cuisine
Malamba (drink)
Biology
161
518,104
https://en.wikipedia.org/wiki/Utility%20fog
Utility fog (also referred to as foglets) is a hypothetical collection of tiny nanobots that can replicate a physical structure. As such, it is a form of self-reconfiguring modular robotics. Conception The term was coined by John Storrs Hall in 1989. Hall thought of it as a nanotechnological replacement for car seatbelts. The robots would be microscopic, with extending arms reaching in several different directions, and could perform three-dimensional lattice reconfiguration. Grabbers at the ends of the arms would allow the robots (or foglets) to mechanically link to one another and share both information and energy, enabling them to act as a continuous substance with mechanical and optical properties that could be varied over a wide range. Each foglet would have substantial computing power, and would be able to communicate with its neighbors. In the original application as a replacement for seatbelts, the swarm of robots would be widely spread out, and the arms loose, allowing air flow between them. In the event of a collision the arms would lock into their current position, as if the air around the passengers had abruptly frozen solid. The result would be to spread any impact over the entire surface of the passenger's body. While the foglets would be micro-scale, construction of the foglets would require full molecular nanotechnology. Hall suggests that each bot may be in the shape of a dodecahedron with twelve arms extending outwards. Each arm would have four degrees of freedom. The foglets' bodies would be made of aluminum oxide rather than combustible diamond to avoid creating a fuel air explosive. Hall and his correspondents soon realized that utility fog could be manufactured en masse to occupy the entire atmosphere of a planet and replace any physical instrumentality necessary to human life. By foglets exerting concerted force, an object or human could be carried from location to location. Virtual buildings could be constructed and dismantled within moments, enabling the replacement of existing cities and roads with farms and gardens. While molecular nanotech might also replace the need for biological bodies, utility fog would remain a useful peripheral with which to perform physical engineering and maintenance tasks. Thus, utility fog also came to be known as "the machine of the future". See also Grey goo Molecular machines Nanorobotics Nanotechnology Programmable matter Self-reconfiguring modular robotics Smartdust Neural dust Synthetic biology The Invincible, a 1964 science fiction novel with intrigue centered on nanobotic swarms References External links Utility Fog at Nanotech Now, many links. Nanotechnology Hypothetical technology
Utility fog
Materials_science,Engineering
528
207,790
https://en.wikipedia.org/wiki/G%C3%B6del%20numbering
In mathematical logic, a Gödel numbering is a function that assigns to each symbol and well-formed formula of some formal language a unique natural number, called its Gödel number. Kurt Gödel developed the concept for the proof of his incompleteness theorems. A Gödel numbering can be interpreted as an encoding in which a number is assigned to each symbol of a mathematical notation, after which a sequence of natural numbers can then represent a sequence of symbols. These sequences of natural numbers can again be represented by single natural numbers, facilitating their manipulation in formal theories of arithmetic. Since the publishing of Gödel's paper in 1931, the term "Gödel numbering" or "Gödel code" has been used to refer to more general assignments of natural numbers to mathematical objects. Simplified overview Gödel noted that each statement within a system can be represented by a natural number (its Gödel number). The significance of this was that properties of a statement—such as its truth or falsehood—would be equivalent to determining whether its Gödel number had certain properties. The numbers involved might be very large indeed, but this is not a barrier; all that matters is that such numbers can be constructed. In simple terms, Gödel devised a method by which every formula or statement that can be formulated in the system gets a unique number, in such a way that formulas and Gödel numbers can be mechanically converted back and forth. There are many ways to do this. A simple example is the way in which English is stored as a sequence of numbers in computers using ASCII. Since ASCII codes are in the range 0 to 127, it is sufficient to pad them to 3 decimal digits and then to concatenate them: for example, the word HELLO is represented by 072069076076079 (H = 072, E = 069, L = 076, O = 079), and the logical formula x=y => y=x is represented by 120061121032061062032121061120. Gödel's encoding Gödel used a system based on prime factorization. He first assigned a unique natural number to each basic symbol in the formal language of arithmetic with which he was dealing. To encode an entire formula, which is a sequence of symbols, Gödel used the following system. Given a sequence (x_1, x_2, x_3, ..., x_n) of positive integers, the Gödel encoding of the sequence is the product of the first n primes raised to their corresponding values in the sequence: enc(x_1, x_2, x_3, ..., x_n) = 2^(x_1) · 3^(x_2) · 5^(x_3) · ... · p_n^(x_n). According to the fundamental theorem of arithmetic, any number (and, in particular, a number obtained in this way) can be uniquely factored into prime factors, so it is possible to recover the original sequence from its Gödel number (for any given number n of symbols to be encoded). Gödel specifically used this scheme at two levels: first, to encode sequences of symbols representing formulas, and second, to encode sequences of formulas representing proofs. This allowed him to show a correspondence between statements about natural numbers and statements about the provability of theorems about natural numbers, the proof's key observation. There are more sophisticated (and more concise) ways to construct a Gödel numbering for sequences. Example In the specific Gödel numbering used by Nagel and Newman, the Gödel number for the symbol "0" is 6 and the Gödel number for the symbol "=" is 5. Thus, in their system, the Gödel number of the formula "0 = 0" is 2^6 × 3^5 × 5^6 = 243,000,000. Lack of uniqueness Infinitely many different Gödel numberings are possible.
For example, supposing there are K basic symbols, an alternative Gödel numbering could be constructed by invertibly mapping this set of symbols (through, say, an invertible function h) to the set of digits of a bijective base-K numeral system. A formula consisting of a string of n symbols s_1 s_2 ... s_n would then be mapped to the number h(s_1) × K^(n−1) + h(s_2) × K^(n−2) + ... + h(s_(n−1)) × K + h(s_n). In other words, by placing the set of K basic symbols in some fixed order, such that the i-th symbol corresponds uniquely to the i-th digit of a bijective base-K numeral system, each formula may serve just as the very numeral of its own Gödel number. For example, the numbering described here has K = 1000. Application to formal arithmetic Recursion One may use Gödel numbering to show how functions defined by course-of-values recursion are in fact primitive recursive functions. Expressing statements and proofs by numbers Once a Gödel numbering for a formal theory is established, each inference rule of the theory can be expressed as a function on the natural numbers. If f is the Gödel mapping and r is an inference rule, then there should be some arithmetical function g_r of natural numbers such that if formula C is derived from formulas A and B through an inference rule r, i.e. r(A, B) = C, then g_r(f(A), f(B)) = f(C). This is true for the numbering Gödel used, and for any other numbering where the encoded formula can be arithmetically recovered from its Gödel number. Thus, in a formal theory such as Peano arithmetic in which one can make statements about numbers and their arithmetical relationships to each other, one can use a Gödel numbering to indirectly make statements about the theory itself. This technique allowed Gödel to prove results about the consistency and completeness properties of formal systems. Generalizations In computability theory, the term "Gödel numbering" is used in settings more general than the one described above. It can refer to: Any assignment of the elements of a formal language to natural numbers in such a way that the numbers can be manipulated by an algorithm to simulate manipulation of elements of the formal language. More generally, an assignment of elements from a countable mathematical object, such as a countable group, to natural numbers to allow algorithmic manipulation of the mathematical object. Also, the term Gödel numbering is sometimes used when the assigned "numbers" are actually strings, which is necessary when considering models of computation such as Turing machines that manipulate strings rather than numbers. Gödel sets Gödel sets are sometimes used in set theory to encode formulas, and are similar to Gödel numbers, except that one uses sets rather than numbers to do the encoding. In simple cases when one uses a hereditarily finite set to encode formulas this is essentially equivalent to the use of Gödel numbers, but somewhat easier to define because the tree structure of formulas can be modeled by the tree structure of sets. Gödel sets can also be used to encode formulas in infinitary languages. See also Church encoding Description number Gödel numbering for sequences Gödel's incompleteness theorems Chaitin's incompleteness theorem Notes References Gödel's Proof by Ernest Nagel and James R. Newman (1959). This book provides a good introduction and summary of the proof, with a large section dedicated to Gödel's numbering. Further reading Gödel, Escher, Bach: an Eternal Golden Braid, by Douglas Hofstadter. This book defines and uses an alternative Gödel numbering. I Am a Strange Loop by Douglas Hofstadter.
This is a newer book by Hofstadter that includes the history of Gödel's numbering. Visualizing the Turing Tarpit by Jason Hemann and Eric Holk. Uses Gödel numbering to encode programs. Mathematical logic Theory of computation Numbering
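The prime-factorization encoding described above is straightforward to try out. The Python sketch below is only an illustration; it reuses the Nagel–Newman symbol codes quoted earlier ("0" → 6, "=" → 5) and assumes the sympy library for prime generation and factorization. Encoding multiplies prime powers together; decoding recovers the exponents by factoring, as guaranteed by the fundamental theorem of arithmetic.

```python
# Gödel's prime-power encoding of a sequence of symbol codes, and its
# inverse via prime factorization.
from sympy import prime, factorint

def godel_encode(codes):
    """Encode a sequence (x1, x2, ..., xn) as 2**x1 * 3**x2 * 5**x3 * ..."""
    number = 1
    for i, x in enumerate(codes, start=1):
        number *= prime(i) ** x            # prime(1) = 2, prime(2) = 3, ...
    return number

def godel_decode(number, length):
    """Recover the exponent sequence from a Gödel number."""
    factors = factorint(number)            # {prime: exponent}
    return [factors.get(prime(i), 0) for i in range(1, length + 1)]

symbol_codes = {"0": 6, "=": 5}            # the Nagel-Newman assignment above
formula = ["0", "=", "0"]
codes = [symbol_codes[s] for s in formula]

g = godel_encode(codes)                    # 2**6 * 3**5 * 5**6
print("Gödel number of '0 = 0':", g)       # 243000000
print("decoded symbol codes:", godel_decode(g, len(formula)))
```

Running the sketch reproduces the value 2^6 × 3^5 × 5^6 = 243,000,000 given in the example above and recovers the code sequence [6, 5, 6].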
Gödel numbering
Mathematics
1,462
34,201,152
https://en.wikipedia.org/wiki/Windows%20Server%202012
Windows Server 2012, codenamed "Windows Server 8", is the ninth major version of the Windows NT operating system produced by Microsoft to be released under the Windows Server brand name. It is the server version of Windows based on Windows 8 and succeeds Windows Server 2008 R2, which is derived from the Windows 7 codebase, released nearly three years earlier. Two pre-release versions, a developer preview and a beta version, were released during development. The software was officially launched on September 4, 2012, which was the month before the release of Windows 8. It was succeeded by Windows Server 2012 R2 in 2013. Mainstream support for Windows Server 2012 ended on October 9, 2018, and extended support ended on October 10, 2023. Windows Server 2012 is eligible for the paid Extended Security Updates (ESU) program, which offers continued security updates until October 13, 2026. Windows Server 2012 removed support for Itanium and processors without PAE, SSE2 and NX. Four editions were released. Various features were added or improved over Windows Server 2008 R2 (with many placing an emphasis on cloud computing), such as an updated version of Hyper-V, an IP address management role, a new version of Windows Task Manager, and ReFS, a new file system. Windows Server 2012 received generally good reviews in spite of having included the same controversial Metro-based user interface seen in Windows 8, which includes the Charms Bar for quick access to settings in the desktop environment. Windows Server 2012 is the final version of Windows Server that supports processors without CMPXCHG16b, PrefetchW, LAHF and SAHF. Its successor, Windows Server 2012 R2, requires a processor with CMPXCHG16b, PrefetchW, LAHF and SAHF in any supported architecture. As of April 2017, 35% of servers were running Windows Server 2012, surpassing usage share of Windows Server 2008. History Windows Server 2012, codenamed "Windows Server 8", is the fifth release of Windows Server family of operating systems developed concurrently with Windows 8. Microsoft introduced Windows Server 2012 and its developer preview in the BUILD 2011 conference on September 9, 2011. However, unlike Windows 8, the developer preview of Windows Server 2012 was only made available to MSDN subscribers. It included a graphical user interface (GUI) based on Metro design language and a new Server Manager, a graphical application used for server management. On February 16, 2012, Microsoft released an update for developer preview build that extended its expiry date from April 8, 2012 to January 15, 2013. Before Windows Server 2012 was finalized, two test builds were made public. A public beta version of Windows Server 2012 was released along with the Windows 8 Consumer Preview on February 29, 2012. On April 17, 2012, Microsoft revealed "Windows Server 2012" as the final name for the operating system. The release candidate of Windows Server 2012 was released on May 31, 2012, along with the Windows 8 Release Preview. The product was released to manufacturing on August 1, 2012 (along with Windows 8) and became generally available on September 4, that year. However, not all editions of Windows Server 2012 were released at the same time. Windows Server 2012 Essentials was released to manufacturing on October 9, 2012 and was made generally available on November 1, 2012. As of September 23, 2012, all students subscribed to DreamSpark program can download Windows Server 2012 Standard or Datacenter free of charge. 
Windows Server 2012 is based on Windows 8 and is the second version of Windows Server which runs only on 64-bit CPUs. Coupled with fundamental changes in the structure of the client backups and the shared folders, there is no clear method for migrating from the previous version to Windows Server 2012. Features Installation options Unlike its predecessor, Windows Server 2012 users can switch between "Server Core" and "Server with a GUI" installation options without a full re-installation. Server Core – an option with a command-line interface only – is now the recommended configuration. There is also a third installation option that allows some GUI elements such as MMC and Server Manager to run, but without the normal desktop, shell or default programs like File Explorer. User interface Server Manager has been redesigned with an emphasis on easing management of multiple servers. The operating system, like Windows 8, uses the Metro-based user interface unless installed in Server Core mode. The Windows Store is available by installing the desktop experience feature from the server manager, but is not installed by default. Windows PowerShell in this version has over 2300 commandlets, compared to around 200 in Windows Server 2008 R2. Task Manager Windows Server 2012 includes a new version of Windows Task Manager together with the old version. In the new version the tabs are hidden by default, showing applications only. In the new Processes tab, the processes are displayed in varying shades of yellow, with darker shades representing heavier resource use. Information found in the older versions are now moved to the new Details tab. The Performance tab shows "CPU", "Memory", "Disk", "Wi-Fi" and "Ethernet" graphs. Unlike the Windows 8 version of Task Manager (which looks similar), the "Disk" activity graph is not enabled by default. The CPU tab no longer displays individual graphs for every logical processor on the system by default, although that remains an option. Additionally, it can display data for each non-uniform memory access (NUMA) node. When displaying data for each logical processor for machines with more than 64 logical processors, the CPU tab now displays simple utilization percentages on heat-mapping tiles. The color used for these heat maps is blue, with darker shades again indicating heavier utilization. Hovering the cursor over any logical processor's data now shows the NUMA node of that processor and its ID, if applicable. Additionally, a new Startup tab has been added that lists startup applications, however this tab does not exist in Windows Server 2012. The new task manager recognizes when a Windows Store app has the "Suspended" status. IP address management (IPAM) Windows Server 2012 has an IP address management role for discovering, monitoring, auditing, and managing the IP address space used on a corporate network. The IPAM is used for the management and monitoring of Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) servers. Both IPv4 and IPv6 are fully supported. Active Directory Windows Server 2012 has a number of changes to Active Directory from the version shipped with Windows Server 2008 R2. The Active Directory Domain Services installation wizard has been replaced by a new section in Server Manager, and a GUI has been added to the Active Directory Recycle Bin. Multiple password policies can be set in the same domain. 
Active Directory in Windows Server 2012 is now aware of any changes resulting from virtualization, and virtualized domain controllers can be safely cloned. Upgrades of the domain functional level to Windows Server 2012 are simplified; it can be performed entirely in Server Manager. Active Directory Federation Services is no longer required to be downloaded when installed as a role, and claims which can be used by the Active Directory Federation Services have been introduced into the Kerberos token. Windows Powershell commands used by Active Directory Administrative Center can be viewed in a "Powershell History Viewer". Hyper-V Windows Server 2012, along with Windows 8, includes a new version of Hyper-V, as presented at the Microsoft BUILD event. Many new features have been added to Hyper-V, including network virtualization, multi-tenancy, storage resource pools, cross-premises connectivity, and cloud backup. Additionally, many of the former restrictions on resource consumption have been greatly lifted. Each virtual machine in this version of Hyper-V can access up to 64 virtual processors, up to 1 terabyte of memory, and up to 64 terabytes of virtual disk space per virtual hard disk (using a new format). Up to 1024 virtual machines can be active per host, and up to 8000 can be active per failover cluster. SLAT is a required processor feature for Hyper-V on Windows 8, while for Windows Server 2012 it is only required for the supplementary RemoteFX role. ReFS Resilient File System (ReFS), codenamed "Protogon", is a new file system in Windows Server 2012 initially intended for file servers that improves on NTFS in some respects. Major new features of ReFS include: Improved reliability for on-disk structures ReFS uses B+ trees for all on-disk structures including metadata and file data. Metadata and file data are organized into tables similar to a relational database. The file size, number of files in a folder, total volume size and number of folders in a volume are limited by 64-bit numbers; as a result ReFS supports a maximum file size of 16 exabytes, a maximum of 18.4 × 1018 folders and a maximum volume size of 1 yottabyte (with 64 KB clusters) which allows large scalability with no practical limits on file and folder size (hardware restrictions still apply). Free space is counted by a hierarchical allocator which includes three separate tables for large, medium, and small chunks. File names and file paths are each limited to a 32 KB Unicode text string. Built-in resilience ReFS employs an allocation-on-write update strategy for metadata, which allocates new chunks for every update transaction and uses large IO batches. All ReFS metadata has built-in 64-bit checksums which are stored independently. The file data can have an optional checksum in a separate "integrity stream", in which case the file update strategy also implements allocation-on-write; this is controlled by a new "integrity" attribute applicable to both files and directories. If nevertheless file data or metadata becomes corrupt, the file can be deleted without taking the whole volume offline. As a result of built-in resiliency, administrators do not need to periodically run error-checking tools such as CHKDSK when using ReFS. Compatibility with existing APIs and technologies ReFS does not require new system APIs and most file system filters continue to work with ReFS volumes. 
ReFS supports many existing Windows and NTFS features such as BitLocker encryption, Access Control Lists, USN Journal, change notifications, symbolic links, junction points, mount points, reparse points, volume snapshots, file IDs, and oplock. ReFS seamlessly integrates with Storage Spaces, a storage virtualization layer that allows data mirroring and striping, as well as sharing storage pools between machines. ReFS resiliency features enhance the mirroring feature provided by Storage Spaces and can detect whether any mirrored copies of files become corrupt using background data scrubbing process, which periodically reads all mirror copies and verifies their checksums then replaces bad copies with good ones. Some NTFS features are not supported in ReFS, including object IDs, short names, file compression, file level encryption (EFS), user data transactions, hard links, extended attributes, and disk quotas. Sparse files are supported. Support for named streams is not implemented in Windows 8 and Windows Server 2012, though it was later added in Windows 8.1 and Windows Server 2012 R2. ReFS does not itself offer data deduplication. Dynamic disks with mirrored or striped volumes are replaced with mirrored or striped storage pools provided by Storage Spaces. In Windows Server 2012, automated error-correction with integrity streams is only supported on mirrored spaces; automatic recovery on parity spaces was added in Windows 8.1 and Windows Server 2012 R2. Booting from ReFS is not supported either. IIS 8.0 Windows Server 2012 includes version 8.0 of Internet Information Services (IIS). The new version contains new features such as SNI, CPU usage caps for particular websites, centralized management of SSL certificates, WebSocket support and improved support for NUMA, but few other substantial changes were made. Remote Desktop Protocol 8.0 Remote Desktop Protocol has new functions such as Adaptive Graphics (progressive rendering and related techniques), automatic selection of TCP or UDP as transport protocol, multi touch support, DirectX 11 support for vGPU, USB redirection supported independently of vGPU support, etc. A "connection quality" button is displayed in the RDP client connection bar for RDP 8.0 connections; clicking on it provides further information about connection, including whether UDP is in use or not. Scalability Windows Server 2012 supports the following maximum hardware specifications. Windows Server 2012 improves over its predecessor Windows Server 2008 R2: System requirements Windows Server 2012 runs only on x86-64 processors. Unlike older versions, Windows Server 2012 does not support Itanium. Upgrades from Windows Server 2008 and Windows Server 2008 R2 are supported, although upgrades from prior releases are not. Editions Windows Server 2012 has four editions: Foundation, Essentials, Standard and Datacenter. Reception Reviews of Windows Server 2012 have been generally positive. Simon Bisson of ZDNet described it as "ready for the datacenter, today," while Tim Anderson of The Register said that "The move towards greater modularity, stronger automation and improved virtualisation makes perfect sense in a world of public and private clouds" but remarked that "That said, the capability of Windows to deliver obscure and time-consuming errors is unchanged" and concluded that "Nevertheless, this is a strong upgrade overall." 
InfoWorld noted that Server 2012's use of Windows 8's panned "Metro" user interface was countered by Microsoft's increasing emphasis on the Server Core mode, which had been "fleshed out with new depth and ease-of-use features" and increased use of the "practically mandatory" PowerShell. However, Michael Otey of Windows IT Pro expressed dislike of the new Metro interface and the inability to use the older desktop interface alone, saying that most users of Windows Server manage their servers using the graphical user interface rather than PowerShell. Paul Ferrill wrote that "Windows Server 2012 Essentials provides all the pieces necessary to provide centralized file storage, client backups, and remote access," but Tim Anderson contended that "Many businesses that are using SBS2011 and earlier will want to stick with what they have", citing the absence of Exchange, the lack of ability to synchronize with Active Directory Federation Services and the 25-user limit, while Paul Thurrott wrote "you should choose Foundation only if you have at least some in-company IT staff and/or are comfortable outsourcing management to a Microsoft partner or solution provider" and "Essentials is, in my mind, ideal for any modern startup of just a few people." Windows Server 2012 R2 A second release, Windows Server 2012 R2, which is derived from the Windows 8.1 codebase, was released to manufacturing on August 27, 2013, and became generally available on October 18, 2013. An updated version, formally designated Windows Server 2012 R2 Update, was released in April 2014. Support lifecycle Microsoft originally planned to end mainstream support for Windows Server 2012 and Windows Server 2012 R2 on January 9, 2018, with extended support ending on January 10, 2023. To provide customers with the standard transition lifecycle timeline, Microsoft extended support for Windows Server 2012 and 2012 R2 by nine months in March 2017. Windows Server 2012 reached the end of mainstream support on October 9, 2018, and entered the extended support phase, which ended on October 10, 2023. Microsoft announced in July 2021 that it would distribute paid Extended Security Updates for volume-licensed editions of Windows Server 2012 and Windows Server 2012 R2 for up to three years after the end of extended support. These updates will last until October 13, 2026, marking the final end of all security updates for the Windows NT 6.2 product line after 14 years, 2 months and 12 days, and for the Windows NT 6.3 product line after 13 years, 1 month and 16 days. See also Comparison of Microsoft Windows versions Comparison of operating systems History of Microsoft Windows List of operating systems Microsoft Servers Notes References Further reading External links Windows Server 2012 R2 and Windows Server 2012 on TechNet Windows Server 2012 R2 on MSDN Windows Server 2012 on MSDN Tutorials and Lab Manual Articles of Windows Server 2012 R2 2012 X86-64 operating systems 2012 software Server 2012
Windows Server 2012
Technology
3,388
38,100,116
https://en.wikipedia.org/wiki/Murder%20of%20Travis%20Alexander
Travis Victor Alexander (July 28, 1977 – June 4, 2008) was an American salesman who was murdered by his ex-girlfriend, Jodi Ann Arias (born July 9, 1980), in his house in Mesa, Arizona, while he was in the shower. Arias was convicted of first-degree murder on May 8, 2013, and sentenced to life in prison without the possibility of parole on April 13, 2015. Alexander sustained 27 stab wounds, a slit throat and a single gunshot wound to the forehead. Arias testified that she killed him in self-defense, but she was convicted by the jury of first-degree murder. During the sentencing phase, the jury deadlocked on the death penalty option, and Arias was sentenced to life imprisonment without the possibility of parole. Alexander's death and the subsequent investigation and trial attracted widespread media coverage in the United States. Background Travis Victor Alexander Travis was born on July 28, 1977, in Riverside, California, to Gary David Alexander and Pamela Elizabeth Morgan Alexander. At the age of 8, Travis moved in with his paternal grandparents. After his father's death in July 1997, his seven siblings were also taken in by their paternal grandmother. Alexander told a friend that, prior to joining a church, he would frequently engage in fights. He performed stand-up comedy under the alter ego "Eddie Snell". Alexander was a salesman and motivational speaker for Pre-Paid Legal Services (PPL). Jodi Ann Arias Arias was born on July 9, 1980, in Salinas, California, to William and Sandra (née Allen) Arias. She attended school until 11th grade, at which point she dropped out of Yreka Union High School. She was an aspiring photographer and worked odd jobs until she got a sales position with PPL. Arias and Alexander met in September 2006 at a work conference in Las Vegas, Nevada. Arias converted to the Church of Jesus Christ of Latter-day Saints, of which Alexander was a member, and was baptized by him on November 26, 2006, in a ceremony in Southern California. Alexander and Arias began dating in February 2007, and Arias moved to Mesa to live closer to Alexander, but in April 2008, she moved to Yreka, California, and lived there with her grandparents. Alexander and Arias dated intermittently for a year and a half, often in a long-distance relationship, taking turns traveling between their respective Arizona and California homes. Even when Alexander was in a different relationship, he and Arias would sext each other. Alexander's friends who knew Arias and observed them together reportedly had a negative opinion of her, stating that the relationship was unusually tumultuous and that Arias's behavior was worrying. Murder Alexander was murdered at his house in Mesa, Arizona, on Wednesday, June 4, 2008, while he was taking a shower. He sustained 27 stab wounds, a slit throat, and a gunshot wound to the head. Medical examiner Kevin Horn would later testify that Alexander's jugular vein, common carotid artery, and trachea had been slashed and that he had defensive wounds on his hands. Horn further testified that Alexander might have been dead at the time the gunshot was inflicted and that the back wounds were shallow. Alexander's death was ruled a homicide. He was buried at Riverside's Olivewood Memorial Park cemetery. Discovery and investigation Alexander missed an important conference call on the evening of June 4. The following day, Arias met Ryan Burns in the Salt Lake City suburb of West Jordan and attended business meetings for the conference.
Burns later said that he noticed that Arias's formerly blonde hair was now dark brown and that she had cuts on her hands. On June 6, she left Salt Lake City and drove west toward California. She called Alexander several times and left several voicemail messages for him. She also accessed his cell-phone voicemail system. When Arias returned the car on June 7, it had been driven about . The rental clerk testified that the car was missing its floor mats and had red stains on its front and rear seats. However, it could not be verified that the car had floor mats when Arias had picked it up, and the red stains could not be analyzed as the car had been cleaned before police could examine it. On June 9, having been unable to reach Alexander, a concerned group of friends went to his home. His roommates had not seen him for several days, but they believed that he was out of town and thus did not suspect that anything was amiss. After finding a key to Alexander's bedroom, the group entered and found large pools of blood in the hallway to the master bathroom and Alexander's body in the shower. In the 9-1-1 call (not heard by the jury), the dispatcher asked whether Alexander had been suicidal or if anyone was angry enough to hurt him. Alexander's friends mentioned Arias by name as a possible suspect, stating that Alexander had told them that she had been stalking him, accessing his Facebook account, and slashing his car's tires. While searching Alexander's home, police found his recently purchased digital camera damaged in the washing machine. Police were able to recover deleted images showing Arias and Alexander in sexually suggestive poses taken at approximately 1:40 p.m. on June 4. The final photograph of Alexander alive, showing him in the shower, was taken at 5:29 p.m. that day. Photos taken moments later show an individual believed to be Alexander "profusely bleeding" on the bathroom floor. A bloody palm print was discovered along the wall in the bathroom hallway; it contained DNA from both Arias and Alexander. On July 9, 2008, Arias was indicted by a grand jury in Maricopa County, Arizona for the first-degree murder of Alexander. She was arrested at her home six days later and was extradited to Arizona on September 5. Arias pleaded not guilty on September 11. During this time, she provided several different accounts about her involvement in Alexander's death. She first told police that she had not been in Mesa on the day of the murder and had last seen Alexander in March 2008. Arias later told police that two intruders had broken into Alexander's home, murdering him and attacking her. Two years after her arrest, Arias told police that she killed Alexander in self-defense, stating she had been a victim of domestic violence. Criminal action Pre-trial On April 6, 2009, a motion to reconsider the defendant's motion to disqualify the Maricopa County District Attorney's office was denied. On May 18, the court ordered Arias to submit to IQ and competency testing. In January 2011, a defense filing detailed Arias's attorneys' efforts to obtain text messages and emails. The prosecution initially told defense attorneys that no text messages that had been sent or received by Alexander were available, but the prosecution was then ordered to turn over several hundred such messages. Mesa police detective Esteban Flores told defense attorneys that there was nothing out of the ordinary among Alexander's emails; about 8,000 were turned over to the defense in June 2009. 
Trial Arias was represented by appointed counsel L. Kirk Nurmi and Jennifer Willmott. Jury selection The trial commenced in Maricopa County Superior Court before Judge Sherry K. Stephens. The voir dire proceedings began on December 10, 2012. On December 20, Arias's attorneys argued that the prosecution was "systematically excluding" women and black people; prosecutor Juan Martinez said that race and sex were irrelevant to his decisions to strike certain jurors. Stephens ruled that the prosecution had shown no bias in the jury selection. Guilt phase In opening arguments on January 2, 2013, the prosecution portrayed Arias as a jealous person who attacked Alexander, a "good man", after he attempted to end their relationship. Arias's defense, conversely, said that Alexander had been violent and abusive, and that Arias had killed him only after he had "lunged at [her] in anger". The prosecution alleged that Arias had premeditated the murder. They contended that Arias had staged a robbery at her grandparents' residence, where she was staying, in order to take a handgun to kill Alexander. A police detective who investigated the putative robbery testified that the gun was of the same caliber (.25 ACP) used in Alexander's shooting. They said that Arias had used a gas can and purchased gas in advance in order to hide her trip to Alexander's. The prosecution called Ryan Burns, who testified Arias was acting "normal[ly]" when she visited him the day after Alexander's death. Burns said that Arias told him that she had cut her hands on broken glass while working at a restaurant called Margaritaville, though a detective later testified that no such restaurant existed. Arias took the stand in her own defense on February 4, 2013, testifying for a total of 18 days, a duration described by criminal defense attorney Mark Geragos as "unprecedented". Arias detailed the abuse she had suffered at the hands of her parents and described her sex life with Alexander. A phone sex tape was played in court in which Alexander described wanting to tie Arias to a tree and sodomize her, and Arias responded, "[T]hat is so debasing; I like it." Arias testified that Alexander harbored pedophilic desires and that she tried to help him with those urges. She also said that her relationship with Alexander became increasingly physically and emotionally abusive. After detailing one argument in which she held out her hand to block Alexander from kicking her, she held up her left hand in the courtroom, showing that her ring finger was crooked. Arias said that she killed Alexander in self-defense after he had attacked her when she dropped his camera, forcing her to fight for her life. Alyce LaViolette, a psychotherapist who specializes in domestic violence, testified for the defense that Arias was a victim of domestic abuse, and that most victims do not tell anyone because they feel ashamed and humiliated. During LaViolette's testimony, the defense team alluded to email between Alexander and his friends, Chris and Sky Hughes. The defense tried to enter the emails into evidence, but the trial judge ruled that they were hearsay. A 2011 court filing revealed the contents of some of the emails the DV expert alluded to, including one in which Alexander expressed anger that Hughes had discouraged Arias from romantically pursuing Alexander. In a response email, Chris Hughes said that he believed that Arias "would be [Travis's] next victim ... and that [she] was just another girl [Travis] was playing". 
Alexander allegedly responded by saying, "I am a bit of a sociopath". Hughes testified during the trial, saying that, while he knew Alexander was seeing multiple women, he and his wife had been manipulated by Arias, and that they had had a falling out with Arias just months after the emails when they twice caught her eavesdropping on their conversations with Alexander. The prosecution called rebuttal witnesses, including several of Alexander's other girlfriends, who stated that they had never seen him exhibit problems with anger or violence. Beginning on March 14, psychologist Richard Samuels testified for the defense for nearly six days. He said that Arias had likely been suffering from acute stress at the time of the murder, sending her body into a "fight or flight" mode to defend herself, which caused her brain to stop retaining memory. In response to a juror's question asking whether this scenario could occur even if this was a premeditated murder, as the prosecution contended, he responded: "Is it possible? Yes. Is it probable? No." Samuels also diagnosed Arias with post-traumatic stress disorder (PTSD). Martinez attacked Samuels' credibility, accusing him of bias and of having formed a relationship with Arias; Samuels had previously testified that he had compassion for Arias. The Arizona Court of Appeals later castigated Martinez's impeachment of Samuels as improper. In rebuttal, prosecution witness Janeen DeMarte, a clinical psychologist, testified that Arias was not a victim of abuse and did not have PTSD, diagnosing her instead with borderline personality disorder. In response to DeMarte's testimony, the defense asked for and received permission to call a rebuttal witness, psychologist Robert Geffner, who said that all tests taken by Arias since her arrest pointed toward an anxiety disorder stemming from trauma. Geffner also suggested that the Minnesota Multiphasic Personality Inventory (MMPI) that DeMarte had used was not geared towards detecting personality disorders, arguing that DeMarte should have used the Millon Clinical Multiaxial Inventory, which Samuels had used. The prosecution's final rebuttal witness, forensic neuropsychologist Jill Hayes, disputed Geffner's testimony that the MMPI test was not geared toward diagnosing borderline personality disorder. In closing arguments, Martinez accused Arias of being a manipulative liar, showed a text that Alexander had sent calling Arias "evil", again displayed the gruesome crime scene photographs, and said that Arias had attempted to manipulate the jury. Arias's defense asked the jury to put aside any personal dislike they may have had for Arias, and said that the prosecution's premeditation theory "[didn't] make any sense", contending that Arias's behavior—including appearing on security cameras, preserving receipts from gas cans she purchased, and spending the night at Alexander's before the killing—was inconsistent with the notion that she was on a "covert mission". In rebuttal, Martinez reemphasized the extent and variety of Alexander's wounds, calling the killing "a slaughter". Three jurors were dismissed over the course of the trial—one for misconduct, one for health-related reasons, and one after being arrested for a DUI offense. At the close of arguments, jurors were instructed that they could find Arias guilty of first-degree murder if each of them, individually, found that she had premeditated the murder or had caused the death while committing a felony.
On May 8, 2013, after 15 hours of deliberation, Arias was found guilty of first-degree murder. All twelve jurors found her guilty of first-degree premeditated murder; seven of the twelve jurors determined she was guilty of felony murder. As the verdict was read, Alexander's family smiled and hugged one another. Crowds outside the courtroom began cheering and chanting. Aggravation phase Following the conviction, the prosecution was required to convince the jury that the murder was "cruel, heinous, or depraved" for them to determine that Arias was eligible for the death penalty. The aggravation phase of the trial started on May 15, 2013. The only witness was the medical examiner who had performed Alexander's autopsy. Arias's attorneys, who had repeatedly asked to step down from the case, provided only brief opening statements and closing arguments in which they said that the adrenaline rushing through Alexander's body may have prevented him from feeling much pain during his death. Prosecutor Martinez showed photos of the corpse and crime scene to the jury, then paused for two minutes of silence to illustrate how long he claimed that it took for Alexander to die. After less than three hours of consideration, the jury determined that Arias was eligible for the death penalty. Penalty phase The penalty phase began on May 16, 2013, when prosecutors called Alexander's family members to offer victim impact statements in an effort to convince the jury that Arias's crime merited a death sentence. On May 21, Arias offered an allocution, during which she pleaded for a life sentence. Arias acknowledged that her plea for life was a reversal of remarks that she made to a television reporter shortly after her conviction in which she had said that she preferred the death penalty. "Each time I said that, I meant it, but I lacked perspective," she said. "Until very recently, I could not imagine standing before you and asking you to give me life." She said that she changed her mind to avoid bringing more pain to members of her family, who were in the courtroom. At one point, Arias held up a white T-shirt with the word "Survivor" written across it, telling the jurors that she would sell the clothing and donate all proceeds to victims of domestic abuse. She also said that she would donate her hair to Locks of Love while in prison, and had already done so three times while in jail. That evening, in a joint jailhouse interview with The Arizona Republic, KPNX-TV and NBC's Today, Arias said that she did not know whether the jury would decide on life or death. "Whatever they come back with I will have to deal with it; I have no other choice." Regarding the verdict, she said, "It felt like a huge sense of unreality. I felt betrayed, actually, by the jury. I was hoping they would see things for what they are. I felt really awful for my family and what they were thinking." On May 23, the sentencing phase of Arias's trial resulted in a hung jury, prompting the judge to declare a mistrial for that phase. The jury had reached an 8–4 decision in favor of the death penalty. After the jury was discharged, jury foreman Zervakos stated that the jury found the responsibility of weighing the death sentence overwhelming, but were horrified when their efforts ended in a mistrial. "By the end of it, we were mentally and emotionally exhausted," he said. "I think we were horrified when we found out that they had actually called a mistrial, and we felt like we had failed." 
On May 30, Maricopa County Attorney Bill Montgomery said he was confident that an impartial jury could be seated, but that it was possible that lawyers and the victim's family could agree to scrap the trial in favor of a life sentence with no parole. The defense responded, "If the diagnosis made by the State's psychologist is correct, the Maricopa County Attorney's Office is seeking to impose the death penalty upon a mentally ill woman who has no prior criminal history. It is not incumbent upon Ms. Arias'[s] defense counsel to resolve this case." Arias, while reaffirming her belief in the criminal justice system, questioned whether an impartial jury could be seated in light of the coverage of the trial. Mistrial motions and mid-trial appeal During the trial, defense attorneys filed for mistrial in January, April and May 2013. Arias's lawyers argued in January that Esteban Flores, the lead Mesa police detective on the case, perjured himself during a 2009 pretrial hearing aimed at determining whether the death penalty should be considered an option for jurors. Flores testified at the 2009 hearing that based on his own review of the scene and a discussion with the medical examiner, it was apparent that Alexander had been shot in the forehead first. Contrary to Flores' testimony at the 2009 hearing, the medical examiner told jurors the gunshot probably would have incapacitated Alexander. Given his extensive defensive wounds, including stab marks and slashes to his hands, arms and legs, it was not likely the shot came first. Flores denied perjury and said during his trial testimony that he had simply misunderstood what the medical examiner told him. On May 20, 2013, defense attorneys filed a motion alleging that a defense witness who had been due to testify the preceding Friday, the 17th, had begun receiving death threats over her scheduled testimony on Arias's behalf. The day before the filing, the witness contacted counsel for Arias, stating that she was no longer willing to testify because of the threats. The motion continued, "It should also be noted that these threats follow those made to Alyce LaViolette, a record of which was made ex-parte and under seal." The motion was denied, as was a motion for a stay in the proceedings that had been sought to give time to appeal the decisions to the Arizona Supreme Court. On May 29, 2013, the Arizona Supreme Court declined to hear an appeal filed three months earlier, which had also been refused by the mid-level Arizona Court of Appeals. Nurmi had asked the high court to throw out the aggravating factor of cruelty because the judge had allowed it to go forward based on a different theory of how the murder occurred. The lead detective originally claimed that the gunshot occurred first, followed by the stabbing and slitting of the throat. Based on that theory, Stephens ruled there was probable cause to find the crime had been committed in an especially cruel manner, an aggravating factor under state law. Subsequent to this initial hearing, the medical examiner testified that the gunshot occurred postmortem. Sentencing retrial and incarceration On October 21, 2014, Arias's sentencing retrial began. Opening statements were given, and a hearing on evidence was held. Prosecution witness Amanda Webb, called in the first trial to rebut Arias's testimony that she returned a gas can to Walmart on May 8, 2007, admitted she did not know if all records were transferred after the store relocated. After a holiday break, the retrial resumed in January 2015.
Mesa police experts admitted that Alexander's laptop had viruses and pornography, contrary to testimony in the first trial in 2013. Jury deliberations began on February 12, 2015. On March 2, 2015, the jury informed Judge Stephens that they were deadlocked. Arias's attorneys requested a mistrial. Stephens denied the request, read additional instructions to the jury, and ordered them to resume deliberations. On March 5, 2015, Stephens declared a mistrial because the jurors, who deliberated for about 26 hours over five days, had deadlocked at an 11–1 vote in favor of the death penalty. The 11 jurors in favor of death tried, unsuccessfully, to get the holdout juror removed from the jury, arguing that the juror was biased. After the result, the holdout juror reported that she received threats, and her name, address, and phone number were leaked online. Dennis Elias, a jury consultant, said "The very fact that people are making death threats and trying to out her, it is not a proud day for any single one of those people and they should be ashamed." Maricopa County Attorney Bill Montgomery released a statement calling for the attacks on the juror to "cease". Sentencing was scheduled for April 7, 2015, with Stephens having the option to sentence Arias to either life imprisonment without the possibility of parole or life imprisonment with the possibility of parole after 25 years. On April 13, Stephens sentenced Arias to life imprisonment without the possibility of parole. By March 5, 2015, Arias's trials had cost an estimated $3 million. In June 2015, following a restitution hearing, Arias was ordered to pay more than $32,000 to Alexander's siblings. Her attorney stated this was about one-third of the amount requested. As of 2023, Arias (Arizona Department of Corrections inmate #281129) is housed at the Arizona State Prison Complex - Perryville. She started her sentence in the complex's maximum-security Lumley Unit, but has since been reclassified to the medium security level. Post-verdict appeal On July 6, 2018, Arias's current attorneys, Margaret M. Green (a.k.a. Peg Green) and Corey Engle, filed a 324-page appeal with the Court of Appeals seeking to have her murder conviction overturned. On October 17, 2019, Arias's attorneys argued to the Court of Appeals that her sentence should be overturned on the basis that Martinez acted inappropriately throughout the trial, resulting in a media frenzy and affecting the outcome of the trial. On March 24, 2020, the court held that notwithstanding "egregious" and "self-promoting" misconduct by the prosecutor, Arias had been convicted "based upon the overwhelming evidence of her guilt," and upheld the conviction. On November 4, 2020, the Arizona Supreme Court declined to review the case. Media The Associated Press reported that the public would be able to watch testimony in the Jodi Arias trial. This decision, made by a three-judge panel of the Arizona Court of Appeals, overruled Maricopa County Superior Court Judge Sherry Stephens' original decision, which would "allow a witness to testify in private, as jurors [weighed] whether to give [Arias] the death penalty." Judge Stephens held secret (non-public) hearings. As a result of the move for secrecy, an unidentified defense witness was permitted to testify in private. Though Judge Stephens' decision had been overruled by the Arizona Court of Appeals, "the mystery witness who testified ... at the start of the defense case" was not revealed to the public.
The case, featured on an episode of 48 Hours Mystery: Picture Perfect in 2008, included an interview which, for the first time in the history of 48 Hours, was used as evidence in a death penalty trial. On September 24, 2008, Inside Edition interviewed Arias at the Maricopa County Jail where she stated, "No jury is going to convict me...because I am innocent and you can mark my words on that. No jury is going to convict me." The Associated Press said the case was a "circus", a "runaway train" and said the case "grew into a worldwide sensation as thousands followed the trial via a live, unedited Web feed." They added that the trial garnered "daily coverage from cable news networks and spawned a virtual cottage industry for talk shows" and at the courthouse, "the entire case devolved into a circus-like spectacle attracting dozens of enthusiasts each day to the courthouse as they lined up for a chance to score just a few open public seats in the gallery;" "For its fans, the Arias trial became a live daytime soap opera." The Toronto Star stated, "With its mix of jealousy, religion, murder, and sex, the Jodi Arias case shows what happens when the justice system becomes entertainment." During the trial, public figures freely expressed their opinions. Arizona Governor Jan Brewer told reporters after an unrelated press event that she believed Arias to be guilty. She sidestepped a question about whether she believed Arias was guilty of manslaughter, second-degree murder or first-degree murder, but said "I don't have all the information, but I think she's guilty." After the trial, jury foreman William Zervakos told ABC's Good Morning America that Arias's long testimony had hampered her defense: "I think eighteen days hurt her. I think she was not a good witness." HLN sent out a press release titled "HLN No. 1 Among Ad-Supported Cable as Arias Pleads for Her Life," bragging that they led in the ratings. The release stated: "HLN continues to be the ratings leader and complete source for coverage of the Jodi Arias Trial. On May 21, HLN ranked No.1 among ad-supported cable networks from 1:56p to 2:15p (ET) as Arias took the stand to plead for her life in front of the jury that found her guilty of Alexander's murder. During that time period, HLN out-delivered the competition among both total viewers (2,540,000) and 25–54 demo viewers (691,000). HLN also ranked No.1 among ad-supported cable networks for the 2p hour delivering 2,227,000 total viewers and 620,000 25–54 viewers." Social media In late January 2013, artwork drawn by Arias began selling on eBay. The seller was her brother; he claimed that the profits went towards covering the family's travel expenses to the trial and "better food" for Arias while she was in jail. On April 11, USA Today reported that during the testimony of defense witness Alyce LaViolette, public outrage was extreme concerning her assertions that Arias was a victim of domestic violence. Tweets and other social media posts attacked LaViolette's reputation. More than 500 negative reviews of LaViolette's yet-to-be-released book appeared on Amazon.com calling LaViolette a fraud and a disgrace. "It's the electronic version of a lynch mob," said retired Maricopa County Superior Court Judge Kenneth Fields. Attorney Anne Bremner, who said she received death threats after she provided legal counsel in the Amanda Knox case, told The Huffington Post that the kind of online ridicule and threats LaViolette received could affect attorneys and witnesses in high-profile trials. 
"It's something to take into account," Bremner said. "If I had kids I would consider it even more so." On May 9, The Republic commented: "The Jodi Arias trial has been a social-media magnet. And when Arias was convicted Wednesday of first-degree murder, Twitter and Facebook exploded with reaction. Much of it was aimed at Arias, though plenty of people tweeted at the media coverage, such as the antics of HLN host Nancy Grace. During the trial, hardcore followers of the proceedings were accused of trying to use social media to intimidate witnesses, or otherwise influence the outcome. Whether it had any effect is questionable, but it's a notable development." On May 24, Victoria Washington, who was one of Arias's attorneys until she had to resign in 2011 because of a conflict, said Arias's lead attorney, Nurmi, "was pilloried in social media. At one point, an Internet denizen digitally superimposed his face onto a crime-scene photo of Alexander dead in the shower of his Mesa home. I know people were aggravated with him constantly filing for mistrial, but you have to make and preserve the record for federal review (on appeal). If you don't file for mistrial, the appeals courts will say you waived it." On May 28, Radar Online reported the jury foreman had been receiving threats ever since the panel deadlocked on the sentencing phase, and now the foreman's son was claiming that the foreman was receiving death threats. "Today I read hate mail my dad had gotten. Some person had sent him a threatening message complete with his email address, full name, and phone number (which at the very least means that this guy should retake Hate Mail 101). I also read some comments on an article online about my dad. Surreal. They say my dad was fooled by the defendant, that he was taken with her, that he hated the prosecutor," the foreman's son wrote on his public blog. In an interview on April 8, 2015, Arias's attorney Jennifer Willmott discussed the social media furor, death threats she received, Arias's statements at the sentencing, the holdout juror, and stated that she believed that Arias testified truthfully. The Twitter account in Arias's name is operated by Arias's friends on her behalf. On June 22, 2013, from that account, Arias tweeted, "Just don't know yet if I will plea or appeal." Adaptations Jodi Arias: Dirty Little Secret, a made-for-television movie, stars Lost actress Tania Raymonde as Arias and Jesse Lee Soffer, of The Mob Doctor and Chicago P.D., as Alexander. Prosecutor Juan Martinez was played by Ugly Betty actor Tony Plana and David Zayas, of Dexter, portrays detective Esteban Flores. Created for and distributed by the Lifetime Network, the film premiered June 22, 2013. On January 21, 2023, Lifetime released another Jodi Arias movie titled Bad Behind Bars: Jodi Arias which stars Celina Sinden as Jodi Arias, Tricia Black as Donovan Bering, and Lynn Rafferty as Tracy Brown. See also Attorney–client privilege Courtroom photography and broadcasting Trial by media Crime in Arizona Murder of Dale Harrell Murder of Ryan Poston Notes References External links People murdered in Arizona Deaths by person in Arizona Burials at Olivewood Memorial Park Events in Maricopa County, Arizona History of Mesa, Arizona 2008 murders in the United States June 2008 crimes in the United States 2008 in Arizona Violence against men in the United States Stalking Knife attacks in the United States Stabbing attacks in 2008
Murder of Travis Alexander
Biology
6,637
27,384,016
https://en.wikipedia.org/wiki/Lycoperdon%20mammiforme
Lycoperdon mammiforme is a rare, inedible type of puffball mushroom in the genus Lycoperdon, found in deciduous forest on chalk soil. It is found in Europe. The fruit body is spherical to pear shaped, at first pure white with slightly grainy inner skin and an outer skin which disintegrates in flakes that are soon shed, later ochre, chocolate-brown when old, up to in diameter. References External links Fungi of Europe Puffballs Fungi described in 1801 Taxa named by Christiaan Hendrik Persoon mammiforme Fungus species
Lycoperdon mammiforme
Biology
124
179,088
https://en.wikipedia.org/wiki/SPSS
SPSS Statistics is a statistical software suite developed by IBM for data management, advanced analytics, multivariate analysis, business intelligence, and criminal investigation. Long produced by SPSS Inc., it was acquired by IBM in 2009. Versions of the software released since 2015 have the brand name IBM SPSS Statistics. The software name originally stood for Statistical Package for the Social Sciences (SPSS), reflecting the original market, then later changed to Statistical Product and Service Solutions. Overview SPSS is a widely used program for statistical analysis in social science. It is also used by market researchers, health researchers, survey companies, government, education researchers, industries, marketing organizations, data miners, and others. The original SPSS manual (Nie, Bent & Hull, 1970) has been described as one of "sociology's most influential books" for allowing ordinary researchers to do their own statistical analysis. In addition to statistical analysis, data management (case selection, file reshaping and creating derived data) and data documentation (a metadata dictionary is stored in the datafile) are features of the base software. The many features of SPSS Statistics are accessible via pull-down menus or can be programmed with a proprietary 4GL command syntax language. Command syntax programming has the benefits of reproducible output, simplifying repetitive tasks, and handling complex data manipulations and analyses. Additionally, some complex applications can only be programmed in syntax and are not accessible through the menu structure. The pull-down menu interface also generates command syntax: this can be displayed in the output, although the default settings have to be changed to make the syntax visible to the user. They can also be pasted into a syntax file using the "paste" button present in each menu. Programs can be run interactively or unattended, using the supplied Production Job Facility. A "macro" language can be used to write command language subroutines. A Python programmability extension can access the information in the data dictionary and data and dynamically build command syntax programs. This extension, introduced in SPSS 14, replaced the less functional SAX Basic "scripts" for most purposes, although SaxBasic remains available. In addition, the Python extension allows SPSS to run any of the statistics in the free software package R. From version 14 onwards, SPSS can be driven externally by a Python or a VB.NET program using supplied "plug-ins". (From version 20 onwards, these two scripting facilities, as well as many scripts, are included on the installation media and are normally installed by default.) SPSS Statistics places constraints on internal file structure, data types, data processing, and matching files, which together considerably simplify programming. SPSS datasets have a two-dimensional table structure, where the rows typically represent cases (such as individuals or households) and the columns represent measurements (such as age, sex, or household income). Only two data types are defined: numeric and text (or "string"). All data processing occurs sequentially case-by-case through the file (dataset). Files can be matched one-to-one and one-to-many, but not many-to-many. In addition to that cases-by-variables structure and processing, there is a separate Matrix session where one can process data as matrices using matrix and linear algebra operations. The graphical user interface has two views which can be toggled. 
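Before turning to the graphical interface, the command syntax and Python programmability features described above can be sketched as follows. This is a minimal illustration, assuming a Python-enabled SPSS Statistics installation with the IBM-supplied spss module; the data file name and the choice to run frequencies on every numeric variable are illustrative assumptions rather than part of any standard workflow.

# Would normally appear inside a BEGIN PROGRAM PYTHON3. / END PROGRAM. block
# in an SPSS syntax window or production job.
import spss

# Run ordinary SPSS command syntax from Python (hypothetical file name).
spss.Submit("GET FILE='survey.sav'.")

# Walk the active dataset's dictionary and build command syntax dynamically:
# here, request frequency tables for every numeric variable found.
numeric_vars = [
    spss.GetVariableName(i)
    for i in range(spss.GetVariableCount())
    if spss.GetVariableType(i) == 0   # 0 = numeric; >0 = string width
]

if numeric_vars:
    spss.Submit("FREQUENCIES VARIABLES=%s." % " ".join(numeric_vars))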
The 'Data View' shows a spreadsheet view of the cases (rows) and variables (columns). Unlike spreadsheets, the data cells can only contain numbers or text, and formulas cannot be stored in these cells. The 'Variable View' displays the metadata dictionary, where each row represents a variable and shows the variable name, variable label, value label(s), print width, measurement type, and a variety of other characteristics. Cells in both views can be manually edited, defining the file structure and allowing data entry without using command syntax. This may be sufficient for small datasets. Larger datasets such as statistical surveys are more often created in data entry software, or entered during computer-assisted personal interviewing, by scanning and using optical character recognition and optical mark recognition software, or by direct capture from online questionnaires. These datasets are then read into SPSS. SPSS Statistics can read and write data from ASCII text files (including hierarchical files), other statistics packages, spreadsheets and databases. It can also read and write to external relational database tables via ODBC and SQL. Statistical output is to a proprietary file format (*.spv file, supporting pivot tables) for which, in addition to the in-package viewer, a stand-alone reader can be downloaded. The proprietary output can be exported to text or Microsoft Word, PDF, Excel, and other formats. Alternatively, output can be captured as data (using the OMS command), as text, tab-delimited text, PDF, XLS, HTML, XML, SPSS dataset or a variety of graphic image formats (JPEG, PNG, BMP and EMF). Several variants of SPSS Statistics exist. SPSS Statistics Gradpacks are highly discounted versions sold only to students. SPSS Statistics Server is a version of the software with a client/server architecture. Add-on packages can enhance the base software with additional features (examples include complex samples, which can adjust for clustered and stratified samples, and custom tables, which can create publication-ready tables). SPSS Statistics is available under either an annual or a monthly subscription license. Version 25 of SPSS Statistics launched on August 8, 2017. This added new and advanced statistics, such as random effects solution results (GENLINMIXED), robust standard errors (GLM/UNIANOVA), and profile plots with error bars within the Advanced Statistics and Custom Tables add-on. Version 25 also includes new Bayesian statistics capabilities (a method of statistical inference) and publication-ready charts, with new charting capabilities that include new default templates and the ability to share output with Microsoft Office applications. Versions and ownership history
SPSS 1 - 1968
SPSS 2 - 1983
SPSS 5 - 1993
SPSS 6.1 - 1995
SPSS 7.5 - 1997
SPSS 8 - 1998
SPSS 9 - 1999
SPSS 10 - 1999
SPSS 11 - 2002
SPSS 12 - 2004
SPSS 13 - 2005
SPSS 14 - 2006
SPSS 15 - 2006
SPSS 16 - 2007
SPSS 17 - 2008
PASW 17 - 2009
PASW 18 - 2009
SPSS 19 - 2010
SPSS 20 - 2011
SPSS 21 - 2012
SPSS 22 - 2013
SPSS 23 - 2015
SPSS 24 - 2016, March
SPSS 25 - 2017, July
SPSS 26 - 2018
SPSS 27 - 2019, June (and 27.0.1 in November, 2020)
SPSS 28 - 2021, May
SPSS 29 - 2022, Sept
SPSS 30 - 2024, Sept
SPSS was released in its first version in 1968 as the Statistical Package for the Social Sciences (SPSS) after being developed by Norman H. Nie, Dale H. Bent, and C. Hadlai Hull. Those principals incorporated as SPSS Inc. in 1975.
Early versions of SPSS Statistics were written in Fortran and designed for batch processing on mainframes, including for example IBM and ICL versions, originally using punched cards for data and program input. A processing run read a command file of SPSS commands and either a raw input file of fixed-format data with a single record type, or a 'getfile' of data saved by a previous run. To save precious computer time, an 'edit' run could be done to check command syntax without analysing the data. From version 10 (SPSS-X) in 1983, data files could contain multiple record types. Prior to SPSS 16.0, different versions of SPSS were available for Windows, Mac OS X and Unix. SPSS Statistics version 13.0 for Mac OS X was not compatible with Intel-based Macintosh computers, due to the Rosetta emulation software causing errors in calculations. SPSS Statistics 15.0 for Windows needed a downloadable hotfix to be installed in order to be compatible with Windows Vista. From version 16.0, the same version runs under Windows, Mac, and Linux. The graphical user interface is written in Java. The Mac OS version is provided as a Universal binary, making it fully compatible with both PowerPC and Intel-based Mac hardware. SPSS Inc. announced on July 28, 2009, that it was being acquired by IBM for US$1.2 billion. Because of a dispute about ownership of the name "SPSS", between 2009 and 2010, the product was referred to as PASW (Predictive Analytics SoftWare). As of January 2010, it became "SPSS: An IBM Company". Complete transfer of business to IBM was done by October 1, 2010. By that date, SPSS: An IBM Company ceased to exist. IBM SPSS is now fully integrated into the IBM Corporation, and is one of the brands under IBM Software Group's Business Analytics Portfolio, together with IBM Algorithmics, IBM Cognos and IBM OpenPages. Companion software in the "IBM SPSS" family is used for data mining and text analytics (IBM SPSS Modeler), real-time credit scoring services (IBM SPSS Collaboration and Deployment Services), and structural equation modeling (IBM SPSS Amos). SPSS Data Collection and SPSS Dimensions were sold in 2015 to UNICOM Systems, Inc., a division of UNICOM Global, and merged into the integrated software suite UNICOM Intelligence (survey design, survey deployment, data collection, data management and reporting). IDA (Interactive Data Analysis) IDA was a software package that originated at what was formerly the National Opinion Research Center (NORC), at the University of Chicago. It was initially offered on the HP-2000; somewhat later, under the ownership of SPSS, it was also available on MUSIC/SP. Regression analysis was one of IDA's strong points. SCSS - Conversational / Columnar SPSS SCSS was a software product intended for online use of IBM mainframes. Although the "C" was for "conversational", it also represented a distinction regarding how the data was stored: it used a column-oriented rather than a row-oriented (internal) database. This gave good interactive response time for the SPSS Conversational Statistical System (SCSS), whose strong point, as with SPSS, was cross-tabulation. Project NX In October 2020, IBM announced the start of an Early Access Program for the "New SPSS Statistics", codenamed Project NX. It contains "many of your favorite SPSS capabilities presented in a new easy to use interface, with integrated guidance, multiple tabs, improved graphs and much more".
In December 2021, IBM opened the Early Access Program for the next generation of SPSS Statistics to more users and shared more visuals of it. See also Comparison of statistical packages JASP and jamovi, both open-source and free-of-charge alternatives, offering frequentist and Bayesian models PSPP, a free SPSS replacement from the GNU Project SPSS Modeler References Further reading External links Official SPSS User Community 50 years of SPSS history Raynald Levesque's SPSS Tools – library of worked solutions for SPSS programmers (FAQ, command syntax; macros; scripts; Python) Archives of SPSSX-L Discussion – SPSS Listserv active since 1996. Discusses programming, statistics and analysis UCLA ATS Resources to help you learn SPSS – Resources for learning SPSS UCLA ATS Technical Reports – Report 1 compares Stata, SAS, and SPSS against R (R is a language and environment for statistical computing and graphics). SPSS Community – Support for developers of applications using SPSS products, including materials and examples of the Python and R programmability features Biomedical Statistics - An educational website dedicated to statistical evaluation of biomedical data using SPSS software IBM software Business intelligence software Java platform software Science software for Linux Proprietary commercial software for Linux Data mining and machine learning software Statistical software Statistical programming languages Econometrics software Time series software Data warehousing Proprietary cross-platform software Extract, transform, load tools Mathematical optimization software Numerical software
SPSS
Mathematics
2,557
5,718,732
https://en.wikipedia.org/wiki/R%C3%A9union%20swamphen
The Réunion swamphen (Porphyrio caerulescens), also known as the Réunion gallinule or (French for "blue bird"), is a hypothetical extinct species of rail that was endemic to the Mascarene island of Réunion. While only known from 17th- and 18th-century accounts by visitors to the island, it was scientifically named in 1848, based on the 1674 account by Sieur Dubois. A considerable literature was subsequently devoted to its possible affinities, with current researchers agreeing it was derived from the swamphen genus Porphyrio. It has been considered mysterious and enigmatic due to the lack of any physical evidence of its existence. This bird was described as entirely blue in plumage with a red beak and legs. It was said to be the size of a Réunion ibis or chicken, which could mean in length, and it may have been similar to the takahē. While easily hunted, it was a fast runner and able to fly, though it did so reluctantly. It may have fed on plant matter and invertebrates, as do other swamphens, and was said to nest among grasses and aquatic ferns. It was only found on the Plaine des Cafres plateau, to which it may have retreated during the latter part of its existence, whereas other swamphens inhabit lowland swamps. While the last unequivocal account is from 1730, it may have survived until 1763, but overhunting and the introduction of cats likely drove it to extinction. Taxonomy Visitors to the Mascarene island of Réunion during the 17th and 18th centuries reported blue birds ( in French). The first such account is that of the French traveller Sieur Dubois, who was on Réunion from 1669 to 1672, which was published in 1674. The British naturalist Hugh Edwin Strickland stated in 1848 that he would have thought Dubois' account referred to a member of the swamphen genus Porphyrio if not for its large size and other features (and noted the term had also been erroneously used for bats on Réunion in an old account). Strickland expressed hope that remains of this and other extinct Mascarene birds would be found there. Responding to Strickland's book later that year, the Belgian scientist Edmond de Sélys Longchamps coined the scientific name Apterornis coerulescens based on Dubois' account. The specific name is Latin for "bluish, becoming blue". Sélys Longchamps also included two other Mascarene birds, at the time only known from contemporary accounts, in the genus Apterornis: the Réunion ibis (now Threskiornis solitarius); and the red rail (now Aphanapteryx bonasia). He thought them related to the dodo and Rodrigues solitaire, due to their shared rudimentary wings, tail, and the disposition of their digits. The name Apterornis had already been used for a different extinct bird genus from New Zealand (originally spelled Aptornis, the adzebills) by the British biologist Richard Owen earlier in 1848, and the French biologist Charles Lucien Bonaparte coined the new binomial Cyanornis erythrorhynchus for the in 1857. The same year, the German ornithologist Hermann Schlegel moved the species to the genus Porphyrio, as P. (Notornis) caerulescens, indicating an affinity with the takahē (now called Porphyrio hochstetteri, then also referred to as Notornis by some authors) of New Zealand. Schlegel argued that the discovery of the takahē showed that members of Porphyrio could be large, thereby disproving Strickland's earlier doubts based on size. The British ornithologist Richard Bowdler Sharpe simply used the name Porphyrio caerulescens in 1894. 
The British zoologist Walter Rothschild retained the name Apterornis for the bird in 1907, and considered it similar to Aptornis and the takahē, believing Dubois's account indicated it was related to those birds. The Japanese ornithologist Masauji Hachisuka used the new combination Cyanornis coerulescens for the bird in 1953 (with the specific name misspelled), also considering it related to the takahē due to its size. Throughout the 20th century the bird was usually considered a member of Porphyrio or Notornis, and the latter genus was eventually itself considered a junior synonym of Porphyrio. Some writers equated the bird with extant swamphens, including African swamphens by the French ornithologist Jacques Berlioz in 1946, and western swamphens by the French ornithologist Nicolas Barré in 1996, despite their different habitat. The French ornithologist Philippe Milon doubted the Porphyrio affiliation in 1951, since Dubois's account stated the Réunion bird was palatable, while extant swamphens are not. In 1967, the American ornithologist James Greenway stated that the bird "must remain mysterious" until Porphyrio bones are one day uncovered. In 1974, an attempt was made to find fossil localities on the Plaine des Cafres plateau, where the bird was said to have lived. No caves, which might contain kitchen middens where early settlers discarded bones of local birds, were found, and it was determined that a more careful study of the area was needed before excavations could be made. In 1977, the American ornithologist Storrs L. Olson found the old accounts consistent with an endemic derivative of Porphyrio, and considered it a probable species whose remains might one day be discovered. The British ecologist Anthony S. Cheke considered previous arguments about the bird's affinities in 1987, and supported it being a Porphyrio relative, while noting that there were two further contemporary accounts. The same year, the British writer Errol Fuller listed the bird as a hypothetical species, and expressed puzzlement as to how a considerable literature had been derived from such "flimsy material". The French palaeontologist Cécile Mourer-Chauviré and colleagues listed the bird as Cyanornis (?=Porphyrio) caerulescens in 2006, indicating the uncertainty of its classification. They stated the cause of the scarcity of its fossil remains was probably that it did not live in the parts of Réunion where fossils might have been preserved. Cheke and the British palaeontologist Julian P. Hume stated in 2008 that, since the mystery of the "Réunion solitaire" had been solved after it was identified with ibis remains, the Réunion swamphen remains the most enigmatic of the Mascarene birds from the old accounts. In his 2012 book about extinct birds and his 2019 monograph about extinct Mascarene rails, Hume stated that the Réunion swamphen had been mentioned by trustworthy observers, but was "perhaps the most enigmatic of all rails" with no evidence to resolve its taxonomy. He thought there was no doubt that it was a derivative of Porphyrio, as the all-blue colouration is only found in that genus among rails. While it may have been derived from Africa or Madagascar, genetic studies have shown that other rails have dispersed to unexpectedly great distances from their closest relatives, making alternative explanations possible. 
Description The Réunion swamphen was described as having entirely blue plumage with a red beak and legs, and is generally agreed to have been a large, terrestrial swamphen, with features indicative of reduced flight capability, such as larger size and more robust legs. There has been disagreement over the size of the bird, as Dubois' account compared its size with that of a Réunion ibis while that of the French engineer Jean Feuilley from 1704 compared it to a domestic chicken. Cheke stated in 1987 that Feuilley's account would indicate the bird was not unusually large, perhaps the size of a swamphen. Hume pointed out in 2019 that the Réunion ibis would have been at most, similar to the extant African sacred ibis (including the tail), while chickens could be in length (the size of their ancestor, the wild red junglefowl), and there was therefore no contradiction. The Réunion swamphen would thereby have been about the same size as the takahē. The first description of the Réunion swamphen is that of Dubois from 1674: The last definite account of the bird is that of the priest Father Brown from around 1730 (expanded from a 1717 account by Le Gentil): Olson stated the comparison to a "wood pigeon" was a reference to the common wood pigeon, implying that Brown described it as smaller than Dubois did, while Hume suggested it could be the extinct Réunion blue pigeon. The 1708 account of Hébert does not add much information, though he qualified its colouration as "dark blue". While the bird is only known from written accounts, reconstructions of it appear in Rothschild's 1907 book Extinct Birds, and Hachisuka's 1953 book The Dodo and Kindred Birds. Rothschild stated he had the Dutch artist John Gerrard Keulemans depict it as intermediate between the takahē and Aptornis, which he thought its closest relatives. Fuller found Frohawk's illustration to be a well-produced work, though almost entirely conjectural in depicting it like a slimmed-down takahē. Behaviour and ecology Little is known about the ecology of the Réunion swamphen; it was easily caught and killed, unlike other swamphens (which avoid predators by flying or hiding), though it was able to run fast. While some early researchers thought the bird to be flightless, Brown's account states it could fly, and it is thought to have been a reluctant flier. Hume suggested it may have fed on plant matter and invertebrates, as other swamphens do. At least in the latter part of its existence, it appears to have been confined to mountains (retreating there between the 1670s and 1705), in particular to the Plaine des Cafres plateau, situated at an altitude of about in south-central Réunion. The environment of this area consists of open woodland in a subalpine forest steppe, and has marshy pools. The Réunion swamphen was termed a land-bird by Dubois, while other swamphens inhabit lowland swamps. This is similar to the Réunion ibis, which lived in forest rather than wetlands, which is otherwise typical ibis habitat. Cheke and Hume proposed that the ancestors of these birds colonised Réunion before swamps had developed, and had therefore become adapted to the available habitats. They were perhaps prevented from colonising Mauritius as well due to the presence of red rails there, which may have occupied a similar ecological niche. 
Feuilley described some characteristics of the bird in 1704: The only account of its nesting behaviour is that of La Roque from 1708: Many other endemic species on Réunion became extinct after the arrival of humans and the resulting disruption of the island's ecosystem. The Réunion swamphen lived alongside other now-extinct birds, such as the Réunion ibis, the Mascarene parrot, the hoopoe starling, the Réunion parakeet, the Réunion scops owl, the Réunion night heron, and the Réunion pink pigeon. Extinct Réunion reptiles include the Réunion giant tortoise and an undescribed Leiolopisma skink. The small Mauritian flying fox and the snail Tropidophora carinata lived on Réunion and Mauritius before vanishing from both islands. Extinction Many terrestrial rails are flightless, and island populations are particularly vulnerable to man-made changes; as a result, rails have suffered more extinctions than any other family of birds. All six endemic species of Mascarene rails are extinct, all as a result of human activities. Overhunting was the main cause of the Réunion swamphen's extinction (it was considered good game and was easy to catch), but according to Cheke and Hume, the introduction of cats at the end of the 17th century could have contributed to the elimination of the bird once these became feral and reached its habitat. Today, cats are still a serious threat to native birds, in particular Barau's petrel, since they occur all over Réunion, including the most remote and high peaks. The eggs and chicks would also have been vulnerable to rats after their accidental introduction in 1676. On the other hand, the Réunion swamphen and other birds of the island appear to have successfully survived feral pigs. Cattle grazing on Plaine des Cafres was promoted by the French explorer Jean-Baptiste Charles Bouvet de Lozier in the 1750s, which may have also had an impact on the bird. While the last unequivocal account of the Réunion swamphen is from 1730, an anonymous account from 1763, possibly by the British Brigadier-General Richard Smith, may be the last mention of this bird, though no description of it was provided, and it might refer to another species. It is also impossible to say whether this writer saw the bird himself. It gives a contemporary impression of the Réunion swamphen's habitat, Plaine des Cafres, and of how birds were hunted there: If the Réunion swamphen did survive until 1763, it persisted far longer than many of Réunion's other extinct birds. If so, its survival was likely because of the remoteness of its habitat. See also List of extinct animals of Réunion References Extinct birds of Indian Ocean islands Porphyrio Bird extinctions since 1500 Birds of Réunion Birds described in 1848 Hypothetical species
Réunion swamphen
Biology
2,773
74,871,318
https://en.wikipedia.org/wiki/TMEM61
Transmembrane protein 61 (TMEM61) is a protein that is encoded by the TMEM61 gene in humans. The gene is located on chromosome 1 in humans, and the protein is highly expressed predominantly in the kidney, adrenal gland and pituitary tissues. Unlike some other transmembrane proteins encoded in the same region, TMEM61 does not promote cancer growth. However, when TMEM61 is inhibited by secondary factors, normal activity in the kidney is restricted. The human protein has orthologs in many species and has existed for millions of years. Gene Aliases There are no known aliases of TMEM61. The human protein can be identified in tools that use UniProt by the accession Q8N0U2. Location TMEM61 is located on the plus strand of human chromosome 1 at locus 1p32.3. The gene is 11,661 base pairs long, spanning positions 54,980,628 to 54,992,288 on chromosome 1. TMEM61 lies between LOC124904184 and BSND. Transcript variants NCBI RefSeq contains seven mRNA transcript variants for TMEM61. The predicted variants (X1, X2, and so on) are alternative splices of the primary transcript, and each encodes its own isoform. The variants do not all share the same exon boundaries, domains or disordered regions. Protein Isoforms There are six known isoforms of the TMEM61 protein: isoform X1 is encoded by transcript variant X1, isoform X2 by variant X2, and so on. There are two different X2 isoforms, but both have the same amino acid sequence; both lack five amino acids at the start of the protein, which distinguishes them from isoform X1, whose sequence and size match the original protein. Protein characteristics Isoform 1 of the TMEM61 protein is made up of 210 amino acids. The protein has a predicted molecular weight of about 22.2 kDa and a theoretical isoelectric point of about 4.54. In terms of amino acid composition, TMEM61 is relatively rich in the hydrophobic proline and the hydrophilic serine. It is relatively poor in the hydrophilic asparagine and lysine, and also poor in the hydrophobic isoleucine and phenylalanine. The protein is acidic overall, since its content of glutamic acid and aspartic acid exceeds its content of arginine and lysine. Domains TMEM61 isoform 1 contains two transmembrane domains, one of which encompasses a DUF domain. TMEM61 also contains an MTP domain; unlike the transmembrane domains, this domain is located in the Golgi apparatus and is involved in transport across the membrane. All four domain regions received low scores, except for the second transmembrane domain, which could not be scored. Secondary structure The Ali2D and I-TASSER models predict that the secondary structure of TMEM61 contains both alpha helices and beta strands. Tertiary structure There is no confident model of the tertiary structure of TMEM61. Post-translational modifications Predicted modifications are few; phosphorylation is not expected to change the amino acids of TMEM61, a consequence of the lack of glycosylation in the sequence. Subcellular localization Immunofluorescent staining experiments have detected the TMEM61 protein in endocrine tissues, the kidney and urinary bladder, and the proximal digestive tract. The experiments also found slight expression in brain tissues. Regulation and expression Transcription factors Tissue specificity According to the Human Protein Atlas, GEO Profiles, and NCBI, TMEM61 is highly expressed in the kidney, pituitary gland, salivary gland, adrenal gland, and brain tissues, in decreasing order. 
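The sequence-derived figures quoted in the protein characteristics section above (210 residues, roughly 22.2 kDa, theoretical isoelectric point near 4.54, amino-acid composition) are the kind of values that can be recomputed directly from the primary sequence. Below is a minimal sketch using Biopython's ProtParam module; the short peptide used here is a placeholder, not the real TMEM61 sequence (UniProt Q8N0U2), which is not reproduced in this article.

from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder peptide only; substitute the full 210-residue TMEM61 sequence
# (UniProt Q8N0U2) to reproduce the values cited in the article.
sequence = "MDAQTEERRAEKQAQWKDDSN"

analysis = ProteinAnalysis(sequence)
print("length:", len(sequence), "residues")
print("molecular weight: %.1f Da" % analysis.molecular_weight())
print("theoretical pI: %.2f" % analysis.isoelectric_point())
print("composition:", analysis.get_amino_acids_percent())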
Embryonic development In situ hybridization staining of a mouse embryo detected high levels of TMEM61 in the kidneys and found no other tissues expressing the protein. Immunochemistry TMEM61 was found to be very abundant in the human body in comparison to other proteins. Western blotting showed overexpression in mammalian lysate, in this case from rabbit. Staining of the human pancreas shows cytoplasmic positivity in exocrine cells. Interacting proteins The IntAct, STRING, and BioGRID databases list eight relevant proteins that interact with TMEM61. Other TMEM proteins such as TMEM124, encoded in the same region, have been monitored together with TMEM61 for cancer-related expression, but neither promoted cancer growth. Homology and evolution Orthologs and paralogs TMEM61 has orthologs in mammals, reptiles, birds, amphibians, and fish. There is no known paralog of TMEM61. Evolutionary history The West African lungfish, which diverged from the human lineage approximately 408 million years ago, is the most distantly related organism known to express TMEM61. The closely related orthologs of TMEM61 all show high expression in the kidney. Based on a molecular clock analysis, the protein sequence of TMEM61 has on average evolved faster than cytochrome c but slower than fibrinogen alpha. Clinical significance TMEM61 was anticipated to be associated with the formation of brain tumors, but this was later ruled out because only low levels were expressed; the tests did, however, indicate its location in the mitochondrial neural membrane region. TMEM61 has been hypothesized to promote cancer or tumor growth, but no clinical research supports this idea. Expression in the kidney has been observed in organisms other than humans, and studies show that MIF limits the expression of TMEM61. Aquaporin-11 deficiency, in which water channels are closed or disrupted, limits protein expression in the membrane, restricts TMEM61 expression, and inhibits kidney function. Interacting proteins Reported interacting proteins include PMP22 and YAP1. References Genes DNA Immunology
TMEM61
Biology
1,361
22,079,394
https://en.wikipedia.org/wiki/Sethi-Skiba%20point
Sethi-Skiba points, also known as DNSS points, arise in optimal control problems that exhibit multiple optimal solutions. A Sethi-Skiba point is an indifference point in an optimal control problem such that, starting from such a point, the problem has more than one distinct optimal solution. A good discussion of such points can be found in Grass et al. Definition Of particular interest here are discounted infinite horizon optimal control problems that are autonomous. These problems can be formulated as $\max_{u}\int_0^\infty e^{-\rho t}\,\varphi(x(t),u(t))\,dt$ s.t. $\dot{x}(t)=f(x(t),u(t)),\ x(0)=x_0,\ u(t)\in\Omega$, where $\rho>0$ is the discount rate, $x(t)$ and $u(t)$ are the state and control variables, respectively, at time $t$, functions $\varphi$ and $f$ are assumed to be continuously differentiable with respect to their arguments and they do not depend explicitly on time $t$, and $\Omega$ is the set of feasible controls and it also is explicitly independent of time $t$. Furthermore, it is assumed that the integral converges for any admissible solution $u(\cdot)$. In such a problem with one-dimensional state variable $x$, an initial state $\bar{x}$ is called a Sethi-Skiba point if the system starting from it exhibits multiple optimal solutions or equilibria. Thus, at least in the neighborhood of $\bar{x}$, the system moves to one equilibrium for $x_0>\bar{x}$ and to another for $x_0<\bar{x}$. In this sense, $\bar{x}$ is an indifference point from which the system could move to either of the two equilibria. For two-dimensional optimal control problems, Grass et al. and Zeiler et al. present examples that exhibit DNSS curves. Some references on the applications of Sethi-Skiba points are Caulkins et al., Zeiler et al., and Carboni and Russu. History Suresh P. Sethi identified such indifference points for the first time in 1977. Further, Skiba, Sethi, and Dechert and Nishimura explored these indifference points in economic models. The term DNSS (Dechert, Nishimura, Sethi, Skiba) points, introduced by Grass et al., recognizes (alphabetically) the contributions of these authors. These indifference points have also been referred to as Skiba points or DNS points in earlier literature. Example A simple problem exhibiting this behavior is given by and . It is shown in Grass et al. that is a Sethi-Skiba point for this problem because the optimal path can be either or . Note that for , the optimal path is and for , the optimal path is . Extensions For further details and extensions, the reader is referred to Grass et al. References Optimal control Mathematical economics
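To make the notion of an indifference point concrete, the following is a minimal numerical sketch; the two discounted-payoff functions are hypothetical stand-ins (they are not the example from Grass et al.), and the point located is simply the initial state at which committing to one long-run equilibrium stops being better than committing to the other.

from scipy.optimize import brentq

# Hypothetical discounted payoffs of committing to equilibrium A or B,
# written directly as functions of the initial state x0 (stand-ins only).
def value_A(x0):
    return 1.0 - (x0 - 1.0) ** 2        # better when starting near x0 = 1

def value_B(x0):
    return 0.5 - 0.2 * (x0 + 1.0) ** 2  # better when starting near x0 = -1

# A Sethi-Skiba (indifference) point is an initial state where both are equally good.
def difference(x0):
    return value_A(x0) - value_B(x0)

x_indifference = brentq(difference, -1.0, 1.0)  # bracket chosen where the sign changes
print("indifference point near x0 =", round(x_indifference, 4))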
Sethi-Skiba point
Mathematics
500
23,423,743
https://en.wikipedia.org/wiki/Advanced%20Telescope%20for%20High%20Energy%20Astrophysics
Advanced Telescope for High-ENergy Astrophysics (Athena) is an X-ray observatory mission selected by European Space Agency (ESA) within its Cosmic Vision program to address the Hot and Energetic Universe scientific theme. Athena will operate in the energy range of 0.2–12 keV and will offer spectroscopic and imaging capabilities exceeding those of currently operating X-ray astronomy satellites – e.g. the Chandra X-ray Observatory and XMM-Newton – by at least one order of magnitude on several parameter spaces simultaneously. Mission The primary goals of the mission are to map hot gas structures, determine their physical properties, and search for supermassive black holes. History and development The mission has its roots in two concepts from the early 2000s, XEUS of ESA and Constellation-X Observatory (Con-X) of NASA. Around 2008, these two proposals were merged into the joint NASA/ESA/JAXA International X-ray Observatory (IXO) proposal. In 2011, IXO was withdrawn and then ESA decided to proceed with a cost-reduced modification, which became known as ATHENA. Athena was selected in 2014 to become the second (L2) L-class Cosmic Vision mission, addressing the Hot and Energetic Universe science theme. The scientific advice for the Athena mission is provided by the Athena Science Study Team (ASST) composed of expert scientists from the community. The ASST was appointed by ESA on 16 July 2014. The ESA Study Scientist and Study Manager are Dr Matteo Guainazzi and Dr Mark Ayre respectively. Athena completed successfully its Phase A with the Mission Formulation Review on 12 November 2019. The next key milestone will be the mission adoption by ESA's Science Programme Committee (SPC) expected in 2023, leading to launch in 2035. In 2023, the mission was rescoped as NewAthena, with launch date moved to 2037. Orbit In 2035, an Ariane 64 launch vehicle will lift Athena into a large amplitude halo orbit around the point of the Sun-Earth system. The orbit around was selected due to its stable thermal environment, good sky visibility, high observing efficiency, and stable particle background. Athena will perform pre-planned scheduled observations of up to 300 celestial locations per year. A special Target of Opportunity mode will allow a re-point manoeuvre within 4 hours for 50% of any randomly occurring events in the sky. Optics and instruments The Athena X-ray observatory consists of a single X-ray telescope with a 12 m focal length, with an effective area of approx. 1.4 m2 (at 1 keV) and a spatial resolution of 5 arcseconds on-axis, degrading gracefully to less than 10 arcseconds at 30 arcminutes off-axis. The mirror is based on ESA's Silicon Pore Optics (SPO) technology. SPO provides an excellent ratio of collecting area to mass, while still offering a good angular resolution. It also benefits from a high technology readiness level and a modular design highly amenable to mass production necessary to achieve the unprecedented telescope collecting area. A movable mirror assembly can focus X-rays onto either one of Athena two instruments (WFI and X-IFU, see below) at any given time. Both the WFI and X-IFU successfully passed their Preliminary Requirements Reviews, on 31 October 2018 and 11 April 2019 respectively. Wide Field Imager (WFI) The Wide Field Imager (WFI) is a large field of view spectral-imaging camera based on the unique Silicon DEPFET technology developed in the semiconductor laboratory of the Max Planck Society. 
The DEPFETs provide an excellent energy resolution (<170eV at 7keV), low noise, fast readout and high time resolution, with good radiation hardness. The instrument combines the Large Detector Array, which is optimized for a wide field of view observations over a 40' x 40' instantaneous sky area, with a separate Fast Detector tailored to observe the brightest point sources of the X-ray sky with high throughput and low pile-up. These capabilities, in combination with the unprecedented effective area and wide field of the Athena telescope, will provide breakthrough capabilities in X-ray imaging spectroscopy. The WFI is developed by an international consortium composed of ESA member states. It is led by the Max Planck Institute for Extraterrestrial Physics (DEU) with partners in Germany (ECAP, IAA Tübingen), Austria (University of Vienna), Denmark (DTU), France (CEA Saclay, Strasbourg), Italy (INAF, Bologna, Palermo), Poland (SRC PAS, NCAC PAS), the United Kingdom (University of Leicester, Open University), the United States (Pennsylvania State University (Penn State), SLAC, Massachusetts Institute of Technology (MIT), SAO), Switzerland (University of Geneva), Portugal (IA), and Greece (Athens Observatory, University of Crete). The principal investigator is Prof. Kirpal Nandra, Director of the High-Energy Group at MPE. X-ray Integral Field Unit (X-IFU) The X-ray Integral Field Unit is the cryogenic X-ray spectrometer of Athena X-IFU will deliver spatially resolved X-ray spectroscopy, with a spectral resolution requirement of 2.5 eV up to 7 keV over a hexagonal field of view of 5 arc minutes (equivalent diameter). The prime detector of X-IFU is made of a large format array of Molybdenum Gold transition-edge sensors coupled to absorbers made of Au and Bi to provide the required stopping power. The pixel size corresponds to slightly less than 5 arc seconds on the sky, thus matching the angular resolution of the X-ray optics. A large part of the X-IFU related Athena science objectives relies on the observation of faint extended sources (e.g. hot gas in cluster of galaxies to measure bulk motions and turbulence or its chemical composition), imposing the lowest possible instrumental background. This is achieved by the addition of a second cryogenic detector underneath the prime focal plane array. This way non-X-ray events such as particles can be vetoed using the temporal coincidence of detecting energy in both detectors simultaneously. The focal plane array, the sensors and the cold front end electronics are cooled at a stable temperature less than 100 mK by a multi-stage cryogenic chain, assembled by a series of mechanical coolers, with interface temperatures at 15 K, 4 K and 2 K and 300 mK, pre-cooling a sub Kelvin cooler made of a 3He adsorption cooler coupled with an Adiabatic Demagnetization Refrigerator. Calibration data are acquired along with each observation from modulated X-ray sources to enable the energy calibration required to reach the targeted spectral resolution. Although an integral field unit where each and every pixel delivers a high resolution X-ray spectrum, the defocussing capability of the Athena mirror will enable the focal beam to be spread over hundreds of sensors. The X-IFU will thus be able to observe very bright X-ray sources. It will do so either with the nominal resolution, e.g. 
for detecting the baryons thought to reside in the Warm Hot Intergalactic Medium, using bright gamma-ray burst afterglows, as background sources shining through the cosmic web, or with a spectral resolution of 3–10 eV, e.g. for measuring the spins and characterizing the winds and outflows of bright X-ray binaries at energies where their spectral signatures are the strongest (above 5 keV). As of December 2018, when the X-IFU consortium was formally endorsed by ESA as being responsible for the procurement of the instrument to Athena, the X-IFU consortium gathered 11 European countries (Belgium, Czech Republic, Finland, France, Germany, Ireland, Italy, Netherlands, Poland, Spain, Switzerland), plus Japan and the United States. More than 50 research institutes are involved in the X-IFU consortium. The principal investigator of X-IFU is Dr Didier Barret, Director of research at the research institute in astrophysics and planetology of Toulouse (IRAP-OMP, CNRS UT3-Paul Sabatier/CNES, France). Dr Jan-Willem den Herder (SRON, The Netherlands) and Dr Luigi Piro (INAF-IAPS, Italy) are co-principal investigators of the X-IFU. CNES manages the project, and on behalf of the X-IFU consortium, is responsible for the delivery of the instrument to ESA. Athena science goals The "Hot and Energetic Universe" science theme revolves around two fundamental questions in astrophysics: How does ordinary matter assemble into the large-scale structures that we see today? And how do black holes grow and shape the Universe? Both questions can only be answered using a sensitive X-ray space observatory. Its combination of scientific performance exceeds any existing or planned X-ray missions by over one order of magnitude on several parameter spaces: effective area, weak line sensitivity, survey speed, just to mention a few. Athena will perform very sensitive measurements on a wide range of celestial objects. It will investigate the chemical evolution of the hot plasma permeating the intergalactic space in cluster of galaxies, search for elusive observational features of the Warm-Hot Intergalactic Medium, investigate powerful outflows ejected from accreting black holes across their whole mass spectrum, and study their impact on the host galaxy, and identify sizeable samples of comparatively rare populations of Active Galactic Nuclei (AGN)  that are key to understanding the concurrent cosmological evolution of accreting black holes and galaxies. Among them are highly obscured and high-redshift (z≥6) AGN. Furthermore, Athena will be an X-ray observatory open to the whole astronomical community, poised to provide wide-ranging discoveries in almost all fields of modern astrophysics, with a large discovery potential of still unknown and unexpected phenomena. It represents the X-ray contribution to the fleet of large-scale observational facilities to be operational in the 2030s (incl. SKA, ELT, ALMA, LISA...). The Athena Community Office The Athena Science Study Team (ASST) established the Athena Community Office (ACO) to obtain support in performing its tasks assigned by ESA, and most especially in the ASST role as "a focal point for the interests of the broad scientific community". Currently, this community is formed by more than 800 members spread around the world. The ACO is meant to become a focal point to facilitate the scientific exchange between the Athena activities and the scientific community at large, and to disseminate the Athena science objectives to the general public. 
The main tasks of the ACO can be divided into three categories: Organisational aspects and optimisation of community efforts, assisting the ASST in several areas, for instance by helping to promote Athena's science capabilities in the research world through conferences and workshops, or by supporting the production of ASST documents, including the White Papers identifying the scientific synergies of Athena with other observational facilities in the early 2030s. Keeping the Athena community informed on the status of the project through the regular release of the newsletter, brief news items, and weekly news on the Athena web portal and on its social channels. Developing communication and outreach activities; of particular interest are the Athena nuggets. The ACO is led by the Instituto de Física de Cantabria (CSIC-UC). Further ACO contributors are the Université de Genève, Max Planck Institute for Extraterrestrial Physics (MPE) and L'Institut de Recherche en Astrophysique et Planétologie (IRAP). See also Spektr-RG List of proposed space observatories Lynx X-ray Observatory, a proposed space telescope with greater angular resolution, sensitivity, and spectroscopic power XRISM, pathfinder mission for Athena References External links The Athena X-ray observatory: Community Support Portal Athena on ESA Cosmic Vision website Athena mission proposal video on YouTube The Athena Wide Field Imager website The Athena X-ray Integral Field Unit website X-IFU, unveiling the secrets of the hot and energetic Universe video on YouTube Silicon pore optics mirror video Silicon pore optics mirror animation Space telescopes X-ray telescopes European Space Agency facilities Future spaceflights Cosmic Vision 2035 in science
Advanced Telescope for High Energy Astrophysics
Astronomy
2,544
71,672,474
https://en.wikipedia.org/wiki/WISEA%201810%E2%88%921010
WISEA J181006.18-101000.5 or WISEA 1810-1010 is a substellar object in the constellation Serpens about 8.9 parsecs (29 light-years) from Earth. It stands out because of its peculiar colors, which match both L-type and T-type objects, likely due to its very low metallicity. Together with WISEA 0414−5854, it is among the first discovered extreme subdwarfs (esd) of spectral type T. Lodieu et al. describe WISEA 1810-1010 as a water vapor dwarf due to its atmosphere being dominated by hydrogen and water vapor. Discovery WISEA 1810-1010 was first identified with the NEOWISE proper motion survey in 2016, but the proper motion could not be confirmed because of the high density of background stars in this field near the galactic plane. In 2020 the object was re-examined with the WiseView tool by the researchers of the Backyard Worlds project and was found to have significant proper motion. Additionally the object was independently discovered by the citizen scientist Arttu Sainio via the Backyard Worlds project. Observations The object was initially observed by the Backyard Worlds researchers from the US and Canada with Keck/NIRES and Palomar/TripleSpec. Later it was observed by another team from Spain, the UK and Poland with NOT/ALFOSC, GTC/multiple instruments and Calar Alto/Omega2000. Analysis of the Keck and Palomar spectra found that WISEA 1810-1010 has much deeper 1.15 μm (Y/J-band) absorption than 2MASS 0532+8246, an extreme subdwarf of spectral type L7, but the shape of the H-band is similar to this esdL7. The Y- and J-band spectrum matches better with spectra of subdwarfs of early T spectral type. Distance and physical properties The distance was first poorly constrained at either 14 or 67 parsecs, but using archived and new data the parallax was measured, which constrained the distance to . The object has a mass of , which makes it a brown dwarf or a sub-brown dwarf, with a temperature of 700 to 900 K. A spectral type of esdT3: was estimated based on new work that introduced a classification scheme for cold subdwarfs. The prefix esd stands for "extreme subdwarf" and the colon stands for a highly uncertain numerical spectral type. Best-fitted SAND models find a temperature and radius similar to the previous estimate by Lodieu et al. The motion of WISEA 1810-1010 was used to predict a 91% probability of thin disk membership and a 9% probability of thick disk membership. It is however noted that a high probability of thin disk membership does not rule out thick disk membership. Atmosphere The only chemical species detected in the atmosphere of WISEA 1810-1010 are hydrogen and water vapor, the latter through strong absorption. This is surprising because T-dwarfs are defined by methane in their atmosphere and the hotter L-dwarfs are partly defined by carbon monoxide in their atmosphere. Both are missing in WISEA 1810-1010. The absence of carbon monoxide and methane can be explained by a carbon-deficient and metal-poor atmosphere. Alternatively the spectrum could be explained by an oxygen-enhanced atmosphere. Model spectra suggest a very metal-poor atmosphere with . Spectral type Schneider et al. first noted the similarities of the spectrum to both L-dwarfs and T-dwarfs. The tentative classification as esdT0.0±1.0 was given due to the low estimated temperature. The discovery by Lodieu et al. that methane was not present in the near-infrared spectrum raised the question of whether a T-dwarf classification was possible. Methane is a key diagnostic feature for T-dwarfs. Jun-Yan Zhang et al. 
noted that WISEA 1810 cannot be classified as an L-dwarf either, because of some key differences: a redder W1−W2 color; missing hydrides (such as FeH), which become stronger in metal-poor L-dwarfs; and deep water absorption, whereas L-subdwarfs show little water absorption. JWST observations of the methane band and of other molecules in the mid-infrared of WISEA 1810 or other proposed esdT objects might resolve the question of whether these objects can be classified as T-dwarfs. If these objects cannot be classified as T-dwarfs, they might be given a new spectral type; Jun-Yan Zhang et al. proposed the letters H or Z (hence H-dwarf or Z-dwarf). New esdT objects (or H/Z-dwarfs) might be discovered in the future with ESA's Euclid and the Rubin Observatory. See also 2MASSI J0937347+293142, the first subdwarf of spectral type T WISE 1534–1043, likely the first subdwarf of spectral type Y List of star systems within 25–30 light-years References Brown dwarfs Serpens Subdwarfs
WISEA 1810−1010
Astronomy
1,044
187,317
https://en.wikipedia.org/wiki/Antenna%20%28radio%29
In radio engineering, an antenna (American English) or aerial (British English) is an electronic device that converts an alternating electric current into radio waves (transmitting), or radio waves into an electric current (receiving). It is the interface between radio waves propagating through space and electric currents moving in metal conductors, used with a transmitter or receiver. In transmission, a radio transmitter supplies an electric current to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). In reception, an antenna intercepts some of the power of a radio wave in order to produce an electric current at its terminals, that is applied to a receiver to be amplified. Antennas are essential components of all radio equipment. An antenna is an array of conductors (elements), electrically connected to the receiver or transmitter. Antennas can be designed to transmit and receive radio waves in all horizontal directions equally (omnidirectional antennas), or preferentially in a particular direction (directional, or high-gain, or "beam" antennas). An antenna may include components not connected to the transmitter, parabolic reflectors, horns, or parasitic elements, which serve to direct the radio waves into a beam or other desired radiation pattern. Strong directivity and good efficiency when transmitting are hard to achieve with antennas with dimensions that are much smaller than a half wavelength. The first antennas were built in 1886 by German physicist Heinrich Hertz in his pioneering experiments to prove the existence of electromagnetic waves predicted by the 1867 electromagnetic theory of James Clerk Maxwell. Hertz placed dipole antennas at the focal point of parabolic reflectors for both transmitting and receiving. Starting in 1895, Guglielmo Marconi began development of antennas practical for long-distance, wireless telegraphy, for which he received the 1909 Nobel Prize in physics. Terminology The words antenna and aerial are used interchangeably. Occasionally the equivalent term "aerial" is used to specifically mean an elevated horizontal wire antenna. The origin of the word antenna relative to wireless apparatus is attributed to Italian radio pioneer Guglielmo Marconi. In the summer of 1895, Marconi began testing his wireless system outdoors on his father's estate near Bologna and soon began to experiment with long wire "aerials" suspended from a pole. In Italian a tent pole is known as l'antenna centrale, and the pole with the wire was simply called l'antenna. Until then wireless radiating transmitting and receiving elements were known simply as "terminals". Because of his prominence, Marconi's use of the word antenna spread among wireless researchers and enthusiasts, and later to the general public. Antenna may refer broadly to an entire assembly including support structure, enclosure (if any), etc., in addition to the actual RF current-carrying components. A receiving antenna may include not only the passive metal receiving elements, but also an integrated preamplifier or mixer, especially at and above microwave frequencies. Overview Antennas are required by any radio receiver or transmitter to couple its electrical connection to the electromagnetic field. Radio waves are electromagnetic waves which carry signals through the air (or through space) at the speed of light with almost no transmission loss. 
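As a rough numerical illustration of the size constraint mentioned above, the free-space wavelength is the speed of light divided by the frequency, and a resonant half-wave dipole is roughly half that long (real dipoles are trimmed a few percent shorter). The short sketch below uses illustrative frequencies chosen for this example rather than taken from the article.

c = 299_792_458.0  # speed of light in m/s

# Illustrative frequencies (not from the article): AM broadcast, FM broadcast, Wi-Fi.
for f_hz in (1.0e6, 100.0e6, 2.4e9):
    wavelength = c / f_hz  # free-space wavelength
    print(f"{f_hz / 1e6:10.1f} MHz  wavelength {wavelength:10.3f} m  "
          f"half-wave dipole about {wavelength / 2:8.3f} m")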
Antennas can be classified as omnidirectional, radiating energy approximately equally in all horizontal directions, or directional, where radio waves are concentrated in some direction(s). A so-called beam antenna is unidirectional, designed for maximum response in the direction of the other station, whereas many other antennas are intended to accommodate stations in various directions but are not truly omnidirectional. Since antennas obey reciprocity the same radiation pattern applies to transmission as well as reception of radio waves. A hypothetical antenna that radiates equally in all directions (vertical as well as all horizontal angles) is called an isotropic radiator; however, these cannot exist in practice nor would they be particularly desired. For most terrestrial communications, rather, there is an advantage in reducing radiation toward the sky or ground in favor of horizontal direction(s). A dipole antenna oriented horizontally sends no energy in the direction of the conductor – this is called the antenna null – but is usable in most other directions. A number of such dipole elements can be combined into an antenna array such as the Yagi–Uda in order to favor a single horizontal direction, thus termed a beam antenna. The dipole antenna, which is the basis for most antenna designs, is a balanced component, with equal but opposite voltages and currents applied at its two terminals. The vertical antenna is a monopole antenna, not balanced with respect to ground. The ground (or any large conductive surface) plays the role of the second conductor of a monopole. Since monopole antennas rely on a conductive surface, they may be mounted with a ground plane to approximate the effect of being mounted on the Earth's surface. More complex antennas increase the directivity of the antenna. Additional elements in the antenna structure, which need not be directly connected to the receiver or transmitter, increase its directionality. Antenna "gain" describes the concentration of radiated power into a particular solid angle of space. "Gain" is perhaps an unfortunately chosen term, by comparison with amplifier "gain" which implies a net increase in power. In contrast, for antenna "gain", the power increased in the desired direction is at the expense of power reduced in undesired directions. Unlike amplifiers, antennas are electrically "passive" devices which conserve total power, and there is no increase in total power above that delivered from the power source (the transmitter), only improved distribution of that fixed total. A phased array consists of two or more simple antennas which are connected together through an electrical network. This often involves a number of parallel dipole antennas with a certain spacing. Depending on the relative phase introduced by the network, the same combination of dipole antennas can operate as a "broadside array" (directional normal to a line connecting the elements) or as an "end-fire array" (directional along the line connecting the elements). Antenna arrays may employ any basic (omnidirectional or weakly directional) antenna type, such as dipole, loop or slot antennas. These elements are often identical. Log-periodic and frequency-independent antennas employ self-similarity in order to be operational over a wide range of bandwidths. 
The most familiar example is the log-periodic dipole array which can be seen as a number (typically 10 to 20) of connected dipole elements with progressive lengths in an endfire array making it rather directional; it finds use especially as a rooftop antenna for television reception. On the other hand, a Yagi–Uda antenna (or simply "Yagi"), with a somewhat similar appearance, has only one dipole element with an electrical connection; the other parasitic elements interact with the electromagnetic field in order to realize a highly directional antenna but with a narrow bandwidth. Even greater directionality can be obtained using aperture antennas such as the parabolic reflector or horn antenna. Since high directivity in an antenna depends on it being large compared to the wavelength, highly directional antennas (thus with high antenna gain) become more practical at higher frequencies (UHF and above). At low frequencies (such as AM broadcast), arrays of vertical towers are used to achieve directionality and they will occupy large areas of land. For reception, a long Beverage antenna can have significant directivity. For non directional portable use, a short vertical antenna or small loop antenna works well, with the main design challenge being that of impedance matching. With a vertical antenna a loading coil at the base of the antenna may be employed to cancel the reactive component of impedance; small loop antennas are tuned with parallel capacitors for this purpose. An antenna lead-in is the transmission line, or feed line, which connects the antenna to a transmitter or receiver. The "antenna feed" may refer to all components connecting the antenna to the transmitter or receiver, such as an impedance matching network in addition to the transmission line. In a so-called "aperture antenna", such as a horn or parabolic dish, the "feed" may also refer to a basic radiating antenna embedded in the entire system of reflecting elements (normally at the focus of the parabolic dish or at the throat of a horn) which could be considered the one active element in that antenna system. A microwave antenna may also be fed directly from a waveguide in place of a (conductive) transmission line. An antenna counterpoise, or ground plane, is a structure of conductive material which improves or substitutes for the ground. It may be connected to or insulated from the natural ground. In a monopole antenna, this aids in the function of the natural ground, particularly where variations (or limitations) of the characteristics of the natural ground interfere with its proper function. Such a structure is normally connected to the return connection of an unbalanced transmission line such as the shield of a coaxial cable. An electromagnetic wave refractor in some aperture antennas is a component which due to its shape and position functions to selectively delay or advance portions of the electromagnetic wavefront passing through it. The refractor alters the spatial characteristics of the wave on one side relative to the other side. It can, for instance, bring the wave to a focus or alter the wave front in other ways, generally in order to maximize the directivity of the antenna system. This is the radio equivalent of an optical lens. An antenna coupling network is a passive network (generally a combination of inductive and capacitive circuit elements) used for impedance matching in between the antenna and the transmitter or receiver. 
This may be used to minimize losses on the feed line, by reducing the transmission line's standing wave ratio, and to present the transmitter or receiver with a standard resistive impedance needed for its optimum operation. The feed point location(s) is selected, and antenna elements electrically similar to tuner components may be incorporated in the antenna structure itself, to improve the match. Reciprocity It is a fundamental property of antennas that most of the electrical characteristics of an antenna, such as those described in the next section (e.g. gain, radiation pattern, impedance, bandwidth, resonant frequency and polarization), are the same whether the antenna is transmitting or receiving. For example, the "receiving pattern" (sensitivity to incoming signals as a function of direction) of an antenna when used for reception is identical to the radiation pattern of the antenna when it is driven and functions as a radiator, even though the current and voltage distributions on the antenna itself are different for receiving and sending. This is a consequence of the reciprocity theorem of electromagnetics. Therefore, in discussions of antenna properties no distinction is usually made between receiving and transmitting terminology, and the antenna can be viewed as either transmitting or receiving, whichever is more convenient. A necessary condition for the aforementioned reciprocity property is that the materials in the antenna and transmission medium are linear and reciprocal. Reciprocal (or bilateral) means that the material has the same response to an electric current or magnetic field in one direction, as it has to the field or current in the opposite direction. Most materials used in antennas meet these conditions, but some microwave antennas use high-tech components such as isolators and circulators, made of nonreciprocal materials such as ferrite. These can be used to give the antenna a different behavior on receiving than it has on transmitting, which can be useful in applications like radar. Resonant antennas The majority of antenna designs are based on the resonance principle. This relies on the behaviour of moving electrons, which reflect off surfaces where the dielectric constant changes, in a fashion similar to the way light reflects when optical properties change. In these designs, the reflective surface is created by the end of a conductor, normally a thin metal wire or rod, which in the simplest case has a feed point at one end where it is connected to a transmission line. The conductor, or element, is aligned with the electrical field of the desired signal, normally meaning it is perpendicular to the line from the antenna to the source (or receiver in the case of a broadcast antenna). The radio signal's electrical component induces a voltage in the conductor. This causes an electrical current to begin flowing in the direction of the signal's instantaneous field. When the resulting current reaches the end of the conductor, it reflects, which is equivalent to a 180 degree change in phase. If the conductor is a quarter of a wavelength long, current from the feed point will undergo a 90 degree phase change by the time it reaches the end of the conductor, reflect through 180 degrees, and then another 90 degrees as it travels back. That means it has undergone a total 360 degree phase change, returning it to the original signal. The current in the element thus adds to the current being created from the source at that instant. 
This process creates a standing wave in the conductor, with the maximum current at the feed. The ordinary half-wave dipole is probably the most widely used antenna design. This consists of two quarter-wavelength elements arranged end-to-end, and lying along essentially the same axis (or collinear), each feeding one side of a two-conductor transmission wire. The physical arrangement of the two elements places them 180 degrees out of phase, which means that at any given instant one of the elements is driving current into the transmission line while the other is pulling it out. The monopole antenna is essentially one half of the half-wave dipole, a single quarter-wavelength element with the other side connected to ground or an equivalent ground plane (or counterpoise). Monopoles, which are one-half the size of a dipole, are common for long-wavelength radio signals where a dipole would be impractically large. Another common design is the folded dipole which consists of two (or more) half-wave dipoles placed side by side and connected at their ends but only one of which is driven. The standing wave forms with this desired pattern at the design operating frequency, f0, and antennas are normally designed to be this size. However, feeding that element with 3f0 (whose wavelength is one third that of f0) will also lead to a standing wave pattern. Thus, an antenna element is also resonant when its length is three quarters of a wavelength. This is true for all odd multiples of a quarter wavelength. This allows some flexibility of design in terms of antenna lengths and feed points. Antennas used in such a fashion are known to be harmonically operated. Resonant antennas usually use a linear conductor (or element), or pair of such elements, each of which is about a quarter of the wavelength in length (an odd multiple of quarter wavelengths will also be resonant). Antennas that are required to be small compared to the wavelength sacrifice efficiency and cannot be very directional. Since wavelengths are so small at higher frequencies (UHF, microwaves) trading off performance to obtain a smaller physical size is usually not required. Current and voltage distribution The quarter-wave elements imitate a series-resonant electrical element due to the standing wave present along the conductor. At the resonant frequency, the standing wave has a current peak and voltage node (minimum) at the feed. In electrical terms, this means that at that position, the element has minimum impedance magnitude, generating the maximum current for minimum voltage. This is the ideal situation, because it produces the maximum output for the minimum input, producing the highest possible efficiency. Contrary to an ideal (lossless) series-resonant circuit, a finite resistance remains (corresponding to the relatively small voltage at the feed-point) due to the antenna's resistance to radiating, as well as any conventional electrical losses from producing heat. Recall that a current will reflect when there are changes in the electrical properties of the material. In order to efficiently transfer the received signal into the transmission line, it is important that the transmission line has the same impedance as its connection point on the antenna, otherwise some of the signal will be reflected backwards into the body of the antenna; likewise part of the transmitter's signal power will be reflected back to the transmitter, if there is a change in electrical impedance where the feedline joins the antenna. 
This leads to the concept of impedance matching, the design of the overall system of antenna and transmission line so that their impedances are as close as possible, thereby reducing these losses. Impedance matching is accomplished by a circuit called an antenna tuner or impedance matching network between the transmitter and antenna. The impedance match between the feedline and antenna is measured by a parameter called the standing wave ratio (SWR) on the feedline. Consider a half-wave dipole designed to work with signals with wavelength 1 m, meaning the antenna would be approximately 50 cm from tip to tip. If the element has a length-to-diameter ratio of 1000, it will have an inherent impedance of about 63 ohms resistive. Using the appropriate transmission wire or balun, we match that resistance to ensure minimum signal reflection. Feeding that antenna with a current of 1 ampere will require 63 volts, and the antenna will radiate 63 watts (ignoring losses) of radio frequency power. Now consider the case when the antenna is fed a signal with a wavelength of 1.25 m; in this case the current induced by the signal would arrive at the antenna's feedpoint out-of-phase with the signal, causing the net current to drop while the voltage remains the same. Electrically this appears to be a very high impedance. The antenna and transmission line no longer have the same impedance, and the signal will be reflected back into the antenna, reducing output. This could be addressed by changing the matching system between the antenna and transmission line, but that solution only works well at the new design frequency. The result is that the resonant antenna will efficiently feed a signal into the transmission line only when the source signal's frequency is close to that of the design frequency of the antenna, or one of the resonant multiples. This makes resonant antenna designs inherently narrow-band: Only useful for a small range of frequencies centered around the resonance(s). Electrically short antennas It is possible to use simple impedance matching techniques to allow the use of monopole or dipole antennas substantially shorter than the quarter- or half-wavelength, respectively, at which they are resonant. As these antennas are made shorter (for a given frequency) their impedance becomes dominated by a series capacitive (negative) reactance; by adding an appropriate size "loading coil" – a series inductance with equal and opposite (positive) reactance – the antenna's capacitive reactance may be cancelled leaving only a pure resistance. Sometimes the resulting (lower) electrical resonant frequency of such a system (antenna plus matching network) is described using the concept of electrical length, so an antenna used at a lower frequency than its resonant frequency is called an electrically short antenna. For example, at 30 MHz (10 m wavelength) a true resonant quarter-wave monopole would be almost 2.5 meters long, and using an antenna only 1.5 meters tall would require the addition of a loading coil. Then it may be said that the coil has lengthened the antenna to achieve an electrical length of 2.5 meters. However, the resulting resistive impedance achieved will be quite a bit lower than that of a true quarter-wave (resonant) monopole, often requiring further impedance matching (a transformer) to the desired transmission line. 
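As a numerical check of the half-wave dipole feed example given earlier in this section (a roughly 63-ohm resistive feed point driven with 1 ampere), the quoted voltage and radiated power follow from Ohm's law and the power relation. The sketch below reproduces them and, purely as an added illustration not drawn from the text, also shows the modest mismatch that the same antenna would present to a 50-ohm line.

R_antenna = 63.0   # approximate feed-point resistance of the example dipole, ohms
I_feed = 1.0       # drive current, amperes

V_feed = I_feed * R_antenna             # Ohm's law: about 63 V
P_radiated = I_feed ** 2 * R_antenna    # radiated power, losses ignored: about 63 W
print(f"feed voltage {V_feed:.0f} V, radiated power {P_radiated:.0f} W")

# Illustrative only: reflection if the same antenna were fed from a 50-ohm line.
Z0 = 50.0
gamma = (R_antenna - Z0) / (R_antenna + Z0)  # reflection coefficient (purely resistive case)
swr = (1 + abs(gamma)) / (1 - abs(gamma))    # standing wave ratio
print(f"reflection coefficient {gamma:.3f}, SWR {swr:.2f}")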
For ever shorter antennas (requiring greater "electrical lengthening") the radiation resistance plummets (approximately according to the square of the antenna length), so that the mismatch due to a net reactance away from the electrical resonance worsens. Or one could as well say that the equivalent resonant circuit of the antenna system has a higher Q factor and thus a reduced bandwidth, which can even become inadequate for the transmitted signal's spectrum. Resistive losses due to the loading coil, relative to the decreased radiation resistance, entail a reduced electrical efficiency, which can be of great concern for a transmitting antenna, but bandwidth is the major factor that sets the size of antennas at 1 MHz and lower frequencies. Arrays and reflectors The radiant flux as a function of the distance from the transmitting antenna varies according to the inverse-square law, since that describes the geometrical divergence of the transmitted wave. For a given incoming flux, the power acquired by a receiving antenna is proportional to its effective area. This parameter compares the amount of power captured by a receiving antenna to the flux of an incoming wave (measured in terms of the signal's power density in watts per square metre). A half-wave dipole has an effective area of about 0.13 square wavelengths seen from the broadside direction. If higher gain is needed one cannot simply make the antenna larger. Due to the constraint on the effective area of a receiving antenna detailed below, one sees that for an already-efficient antenna design, the only way to increase gain (effective area) is by reducing the antenna's gain in another direction. If a half-wave dipole is not connected to an external circuit but rather shorted out at the feedpoint, then it becomes a resonant half-wave element which efficiently produces a standing wave in response to an impinging radio wave. Because there is no load to absorb that power, it retransmits all of that power, possibly with a phase shift which is critically dependent on the element's exact length. Thus such a conductor can be arranged in order to transmit a second copy of a transmitter's signal in order to affect the radiation pattern (and feedpoint impedance) of the element electrically connected to the transmitter. Antenna elements used in this way are known as passive radiators. A Yagi–Uda array uses passive elements to greatly increase gain in one direction (at the expense of other directions). A number of parallel approximately half-wave elements (of very specific lengths) are situated parallel to each other, at specific positions, along a boom; the boom is only for support and not involved electrically. Only one of the elements is electrically connected to the transmitter or receiver, while the remaining elements are passive. The Yagi produces a fairly large gain (depending on the number of passive elements) and is widely used as a directional antenna with an antenna rotor to control the direction of its beam. It suffers from having a rather limited bandwidth, restricting its use to certain applications. Rather than using one driven antenna element along with passive radiators, one can build an array antenna in which multiple elements are all driven by the transmitter through a system of power splitters and transmission lines in relative phases so as to concentrate the RF power in a single direction. 
What's more, a phased array can be made "steerable", that is, by changing the phases applied to each element the radiation pattern can be shifted without physically moving the antenna elements. Another common array antenna is the log-periodic dipole array which has an appearance similar to the Yagi (with a number of parallel elements along a boom) but is totally dissimilar in operation as all elements are connected electrically to the adjacent element with a phase reversal; using the log-periodic principle it obtains the unique property of maintaining its performance characteristics (gain and impedance) over a very large bandwidth. When a radio wave hits a large conducting sheet it is reflected (with the phase of the electric field reversed) just as a mirror reflects light. Placing such a reflector behind an otherwise non-directional antenna will ensure that the power that would have gone in its direction is redirected toward the desired direction, increasing the antenna's gain by a factor of at least 2. Likewise, a corner reflector can ensure that all of the antenna's power is concentrated in only one quadrant of space (or less) with a consequent increase in gain. Practically speaking, the reflector need not be a solid metal sheet, but can consist of a curtain of rods aligned with the antenna's polarization; this greatly reduces the reflector's weight and wind load. Specular reflection of radio waves is also employed in a parabolic reflector antenna, in which a curved reflecting surface effects focussing of an incoming wave toward a so-called feed antenna; this results in an antenna system with an effective area comparable to the size of the reflector itself. Other concepts from geometrical optics are also employed in antenna technology, such as with the lens antenna. Characteristics Antennas are characterized by a number of performance measures which a user would be concerned with in selecting or designing an antenna for a particular application. A plot of the directional characteristics in the space surrounding the antenna is its radiation pattern. The antenna's power gain (or simply "gain") also takes into account the antenna's efficiency, and is often the primary figure of merit. Bandwidth The frequency range or bandwidth over which an antenna functions well can be very wide (as in a log-periodic antenna) or narrow (as in a small loop antenna); outside this range the antenna impedance becomes a poor match to the transmission line and transmitter (or receiver). Use of the antenna well away from its design frequency affects its radiation pattern, reducing its directive gain. Generally an antenna will not have a feed-point impedance that matches that of a transmission line; a matching network between antenna terminals and the transmission line will improve power transfer to the antenna. A non-adjustable matching network will most likely place further limits on the usable bandwidth of the antenna system. It may be desirable to use tubular elements, instead of thin wires, to make an antenna; these will allow a greater bandwidth. Or, several thin wires can be grouped in a cage to simulate a thicker element. This widens the bandwidth of the resonance. Amateur radio antennas that operate at several frequency bands which are widely separated from each other may connect elements resonant at those different frequencies in parallel. Most of the transmitter's power will flow into the resonant element while the others present a high impedance. 
Another solution uses traps, parallel resonant circuits which are strategically placed in breaks created in long antenna elements. When used at the trap's particular resonant frequency the trap presents a very high impedance (parallel resonance) effectively truncating the element at the location of the trap; if positioned correctly, the truncated element makes a proper resonant antenna at the trap frequency. At substantially higher or lower frequencies the trap allows the full length of the broken element to be employed, but with a resonant frequency shifted by the net reactance added by the trap. The bandwidth characteristics of a resonant antenna element can be characterized according to its Q, where the resistance involved is the radiation resistance, which represents the emission of energy from the resonant antenna to free space. The Q of a narrow band antenna can be as high as 15. On the other hand, the reactance at the same off-resonant frequency of one using thick elements is much less, consequently resulting in a Q as low as 5. These two antennas may perform equivalently at the resonant frequency, but the second antenna will perform over a bandwidth 3 times as wide as the antenna consisting of a thin conductor. Antennas for use over much broader frequency ranges are achieved using further techniques. Adjustment of a matching network can, in principle, allow for any antenna to be matched at any frequency. Thus the small loop antenna built into most AM broadcast (medium wave) receivers has a very narrow bandwidth, but is tuned using a parallel capacitance which is adjusted according to the receiver tuning. On the other hand, log-periodic antennas are not resonant at any single frequency but can (in principle) be built to attain similar characteristics (including feedpoint impedance) over any frequency range. These are therefore commonly used (in the form of directional log-periodic dipole arrays) as television antennas. Gain Gain is a parameter which measures the degree of directivity of the antenna's radiation pattern. A high-gain antenna will radiate most of its power in a particular direction, while a low-gain antenna will radiate over a wide angle. The antenna gain, or power gain of an antenna is defined as the ratio of the intensity (power per unit surface area) radiated by the antenna in the direction of its maximum output, at an arbitrary distance, divided by the intensity radiated at the same distance by a hypothetical isotropic antenna which radiates equal power in all directions. This dimensionless ratio is usually expressed logarithmically in decibels; these units are called decibels-isotropic (dBi). A second unit used to measure gain is the ratio of the power radiated by the antenna to the power radiated by a half-wave dipole antenna; these units are called decibels-dipole (dBd). Since the gain of a half-wave dipole is 2.15 dBi and the logarithm of a product is additive, the gain in dBi is just 2.15 decibels greater than the gain in dBd. High-gain antennas have the advantage of longer range and better signal quality, but must be aimed carefully at the other antenna. An example of a high-gain antenna is a parabolic dish such as a satellite television antenna. Low-gain antennas have shorter range, but the orientation of the antenna is relatively unimportant. An example of a low-gain antenna is the whip antenna found on portable radios and cordless phones. 
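The decibel bookkeeping described above can be checked in a few lines; the sketch below converts a hypothetical 6 dBd rating to dBi using the 2.15 dB half-wave dipole figure quoted in the text, and then back to a linear power ratio.

DIPOLE_GAIN_DBI = 2.15   # gain of a half-wave dipole over isotropic, as quoted above

def dbd_to_dbi(gain_dbd):
    # dBd and dBi differ only by the dipole's own gain over isotropic
    return gain_dbd + DIPOLE_GAIN_DBI

def db_to_linear(db):
    # decibels are ten times the base-10 logarithm of a power ratio
    return 10 ** (db / 10)

g_dbd = 6.0   # hypothetical antenna rated at 6 dBd
g_dbi = dbd_to_dbi(g_dbd)
print(f"{g_dbd} dBd = {g_dbi:.2f} dBi = {db_to_linear(g_dbi):.2f} times isotropic")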
Antenna gain should not be confused with amplifier gain, a separate parameter measuring the increase in signal power due to an amplifying device placed at the front-end of the system, such as a low-noise amplifier. Effective area or aperture The effective area or effective aperture of a receiving antenna expresses the portion of the power of a passing electromagnetic wave which the antenna delivers to its terminals, expressed in terms of an equivalent area. For instance, if a radio wave passing a given location has a flux of 1 pW/m² (10⁻¹² watts per square meter) and an antenna has an effective area of 12 m², then the antenna would deliver 12 pW of RF power to the receiver (30 microvolts RMS at 75 ohms). Since the receiving antenna is not equally sensitive to signals received from all directions, the effective area is a function of the direction to the source. Due to reciprocity (discussed above), the gain of an antenna used for transmitting must be proportional to its effective area when used for receiving. Consider an antenna with no loss, that is, one whose electrical efficiency is 100%. It can be shown that its effective area averaged over all directions must be equal to λ²/4π, the wavelength squared divided by 4π. Gain is defined such that the average gain over all directions for an antenna with 100% electrical efficiency is equal to 1. Therefore, the effective area A_eff in terms of the gain G in a given direction is given by: A_eff = (λ²/4π) G. For an antenna with an efficiency of less than 100%, both the effective area and gain are reduced by that same amount. Therefore, the above relationship between gain and effective area still holds. These are thus two different ways of expressing the same quantity. A_eff is especially convenient when computing the power that would be received by an antenna of a specified gain, as illustrated by the above example. Radiation pattern The radiation pattern of an antenna is a plot of the relative field strength of the radio waves emitted by the antenna at different angles in the far field. It is typically represented by a three-dimensional graph, or polar plots of the horizontal and vertical cross sections. The pattern of an ideal isotropic antenna, which radiates equally in all directions, would look like a sphere. Many nondirectional antennas, such as monopoles and dipoles, emit equal power in all horizontal directions, with the power dropping off at higher and lower angles; this is called an omnidirectional pattern and when plotted looks like a torus or donut. The radiation of many antennas shows a pattern of maxima or "lobes" at various angles, separated by "nulls", angles where the radiation falls to zero. This is because the radio waves emitted by different parts of the antenna typically interfere, causing maxima at angles where the radio waves arrive at distant points in phase, and zero radiation at other angles where the radio waves arrive out of phase. In a directional antenna designed to project radio waves in a particular direction, the lobe in that direction is designed larger than the others and is called the "main lobe". The other lobes usually represent unwanted radiation and are called "sidelobes". The axis through the main lobe is called the "principal axis" or "boresight axis". The polar diagrams (and therefore the efficiency and gain) of Yagi antennas are tighter if the antenna is tuned for a narrower frequency range, e.g. the grouped antenna compared to the wideband. 
Similarly, the polar plots of horizontally polarized yagis are tighter than for those vertically polarized. Field regions The space surrounding an antenna can be divided into three concentric regions: The reactive near-field (also called the inductive near-field), the radiating near-field (Fresnel region) and the far-field (Fraunhofer) regions. These regions are useful to identify the field structure in each, although the transitions between them are gradual; there are no clear boundaries. The far-field region is far enough from the antenna to ignore its size and shape: It can be assumed that the electromagnetic wave is purely a radiating plane wave (electric and magnetic fields are in phase and perpendicular to each other and to the direction of propagation). This simplifies the mathematical analysis of the radiated field. Efficiency Efficiency of a transmitting antenna is the ratio of power actually radiated (in all directions) to the power absorbed by the antenna terminals. The power supplied to the antenna terminals which is not radiated is converted into heat. This is usually through loss resistance in the antenna's conductors, or loss between the reflector and feed horn of a parabolic antenna. Antenna efficiency is separate from impedance matching, which may also reduce the amount of power radiated using a given transmitter. If an SWR meter reads 150 W of incident power and 50 W of reflected power, that means 100 W have actually been absorbed by the antenna (ignoring transmission line losses). How much of that power has actually been radiated cannot be directly determined through electrical measurements at (or before) the antenna terminals, but would require (for instance) careful measurement of field strength. The loss resistance and efficiency of an antenna can be calculated once the field strength is known, by comparing it to the power supplied to the antenna. The loss resistance will generally affect the feedpoint impedance, adding to its resistive component. That resistance will consist of the sum of the radiation resistance R_rad and the loss resistance R_loss. If a current I is delivered to the terminals of an antenna, then a power of I²R_rad will be radiated and a power of I²R_loss will be lost as heat. Therefore, the efficiency of an antenna is equal to R_rad / (R_rad + R_loss). Only the total resistance R_rad + R_loss can be directly measured. According to reciprocity, the efficiency of an antenna used as a receiving antenna is identical to its efficiency as a transmitting antenna, described above. The power that an antenna will deliver to a receiver (with a proper impedance match) is reduced by the same amount. In some receiving applications, very inefficient antennas may have little impact on performance. At low frequencies, for example, atmospheric or man-made noise can mask antenna inefficiency. For example, CCIR Rep. 258-3 indicates man-made noise in a residential setting at 40 MHz is about 28 dB above the thermal noise floor. Consequently, an antenna with a 20 dB loss (due to inefficiency) would have little impact on system noise performance. The loss within the antenna will affect the intended signal and the noise/interference identically, leading to no reduction in signal to noise ratio (SNR). Antennas which are not a significant fraction of a wavelength in size are inevitably inefficient due to their small radiation resistance. AM broadcast radios include a small loop antenna for reception which has an extremely poor efficiency. 
This has little effect on the receiver's performance, but simply requires greater amplification by the receiver's electronics. Contrast this tiny component to the massive and very tall towers used at AM broadcast stations for transmitting at the very same frequency, where every percentage point of reduced antenna efficiency entails a substantial cost. The definition of antenna gain or power gain already includes the effect of the antenna's efficiency. Therefore, if one is trying to radiate a signal toward a receiver using a transmitter of a given power, one need only compare the gain of various antennas rather than considering the efficiency as well. This is likewise true for a receiving antenna at very high (especially microwave) frequencies, where the point is to receive a signal which is strong compared to the receiver's noise temperature. However, in the case of a directional antenna used for receiving signals with the intention of rejecting interference from different directions, one is no longer concerned with the antenna efficiency, as discussed above. In this case, rather than quoting the antenna gain, one would be more concerned with the directive gain, or simply directivity which does not include the effect of antenna (in)efficiency. The directive gain of an antenna can be computed from the published gain divided by the antenna's efficiency. In equation form, gain = directivity × efficiency. Polarization The orientation and physical structure of an antenna determine the polarization of the electric field of the radio wave transmitted by it. For instance, an antenna composed of a linear conductor (such as a dipole or whip antenna) oriented vertically will result in vertical polarization; if turned on its side the same antenna's polarization will be horizontal. Reflections generally affect polarization. Radio waves reflected off the ionosphere can change the wave's polarization. For line-of-sight communications or ground wave propagation, horizontally or vertically polarized transmissions generally remain in about the same polarization state at the receiving location. Using a vertically polarized antenna to receive a horizontally polarized wave (or visa-versa) results in relatively poor reception. An antenna's polarization can sometimes be inferred directly from its geometry. When the antenna's conductors viewed from a reference location appear along one line, then the antenna's polarization will be linear in that very direction. In the more general case, the antenna's polarization must be determined through analysis. For instance, a turnstile antenna mounted horizontally (as is usual), from a distant location on Earth, appears as a horizontal line segment, so its radiation received there is horizontally polarized. But viewed at a downward angle from an airplane, the same antenna does not meet this requirement; in fact its radiation is elliptically polarized when viewed from that direction. In some antennas the state of polarization will change with the frequency of transmission. The polarization of a commercial antenna is an essential specification. In the most general case, polarization is elliptical, meaning that over each cycle the electric field vector traces out an ellipse. Two special cases are linear polarization (the ellipse collapses into a line) as discussed above, and circular polarization (in which the two axes of the ellipse are equal). In linear polarization the electric field of the radio wave oscillates along one direction. 
In circular polarization, the electric field of the radio wave rotates around the axis of propagation. Circular or elliptically polarized radio waves are designated as right-handed or left-handed using the "thumb in the direction of the propagation" rule. Note that for circular polarization, optical researchers use the opposite right-hand rule from the one used by radio engineers. It is best for the receiving antenna to match the polarization of the transmitted wave for optimum reception. Otherwise there will be a loss of signal strength: when a linearly polarized antenna receives linearly polarized radiation at a relative angle of θ, the received power will be reduced by the factor cos²θ. A circularly polarized antenna can be used to match vertical or horizontal linear polarizations equally well, suffering a 3 dB signal reduction. However it will be blind to a circularly polarized signal of the opposite orientation. Impedance matching Maximum power transfer requires matching the impedance of an antenna system (as seen looking into the transmission line) to the complex conjugate of the impedance of the receiver or transmitter. In the case of a transmitter, however, the desired matching impedance might not exactly correspond to the dynamic output impedance of the transmitter as analyzed as a source impedance but rather the design value (typically 50 Ohms) required for efficient and safe operation of the transmitting circuitry. The intended impedance is normally resistive, but a transmitter (and some receivers) may have limited additional adjustments to cancel a certain amount of reactance, in order to "tweak" the match. When a transmission line is used in between the antenna and the transmitter (or receiver) one generally would like an antenna system whose impedance is resistive and nearly the same as the characteristic impedance of that transmission line, in addition to matching the impedance that the transmitter (or receiver) expects. The match is sought to minimize the amplitude of standing waves (measured via the standing wave ratio; SWR) that a mismatch raises on the line, and the increase in transmission line losses it entails. Antenna tuning at the antenna Antenna tuning, in the strict sense of modifying the antenna itself, generally refers only to cancellation of any reactance seen at the antenna terminals, leaving only a resistive impedance which might or might not be exactly the desired impedance (that of the available transmission line). Although an antenna may be designed to have a purely resistive feedpoint impedance (such as a dipole 97% of a half wavelength long) at just one frequency, this will very likely not be exactly true at other frequencies that the antenna is eventually used for. In most cases, in principle the physical length of the antenna can be "trimmed" to obtain a pure resistance, although this is rarely convenient. On the other hand, the addition of a contrary inductance or capacitance can be used to cancel a residual capacitive or inductive reactance, respectively, and may be more convenient than lowering and trimming or extending the antenna, then hoisting it back. Antenna reactance may be removed using lumped elements, such as capacitors or inductors in the main path of current traversing the antenna, often near the feedpoint, or by incorporating capacitive or inductive structures into the conducting body of the antenna to cancel the feedpoint reactance – such as open-ended "spoke" radial wires, or looped parallel wires – thereby genuinely tuning the antenna to resonance. 
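To make the idea of cancelling feedpoint reactance concrete, the short Python sketch below computes the series inductance that would cancel a given capacitive reactance (the antenna reactance and frequency are invented example values; a real design would use measured or modelled figures):

    import math

    def loading_inductance(reactance_ohms, freq_hz):
        # an inductor with X_L = 2*pi*f*L cancels a capacitive reactance of equal magnitude
        return reactance_ohms / (2 * math.pi * freq_hz)

    # Hypothetical electrically short antenna presenting -j300 ohms at 7 MHz
    L = loading_inductance(300.0, 7e6)
    print(round(L * 1e6, 1), "microhenries")  # about 6.8 uH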
In addition to those reactance-neutralizing add-ons, antennas of any kind may include a transformer and/or a transformer balun at their feedpoint, to change the resistive part of the impedance to more nearly match the feedline's characteristic impedance. Line matching at the radio Antenna tuning in the loose sense, performed by an impedance matching device (somewhat inappropriately named an "antenna tuner", or the older, more appropriate term transmatch) goes beyond merely removing reactance and includes transforming the remaining resistance to match the feedline and radio. An additional problem is matching the remaining resistive impedance to the characteristic impedance of the transmission line: A general impedance matching network (an "antenna tuner" or ATU) will have at least two adjustable elements to correct both components of impedance. Any matching network will have both power losses and power restrictions when used for transmitting. Commercial antennas are generally designed to approximately match standard 50 Ohm coaxial cables, at standard frequencies; the design expectation is that a matching network will be merely used to 'tweak' any residual mismatch. Extreme examples of loaded small antennas In some cases matching is done in a more extreme manner, not simply to cancel a small amount of residual reactance, but to resonate an antenna whose resonance frequency is quite different from the intended frequency of operation. Short vertical "whip" For instance, for practical reasons a "whip antenna" can be made significantly shorter than a quarter-wavelength and then resonated, using a so-called loading coil. The physically large inductor at the base of the antenna has an inductive reactance which is the opposite of the capacitive reactance that the short vertical antenna has at the desired operating frequency. The result is a pure resistance seen at the feedpoint of the loading coil; although, without further measures, the resistance will be somewhat lower than would be desired to match commercial coax. Small "magnetic" loop Another extreme case of impedance matching occurs when using a small loop antenna (usually, but not always, for receiving) at a relatively low frequency, where it appears almost as a pure inductor. When such an inductor is resonated via a capacitor attached in parallel across its feedpoint, the capacitor not only cancels the reactance but also greatly magnifies the very small radiation resistance of a small loop to produce a better-matched feedpoint resistance. This is the type of antenna used in most portable AM broadcast receivers (other than car radios): The standard AM antenna is a loop of wire wound around a ferrite rod (a "loopstick antenna"). The loop is resonated by a coupled tuning capacitor, which is configured to match the receiver's tuning, in order to keep the antenna resonant at the chosen receive frequency over the AM broadcast band. Effect of ground Ground reflection is one of the common types of multipath. The radiation pattern and even the driving point impedance of an antenna can be influenced by the dielectric constant and especially conductivity of nearby objects. For a terrestrial antenna, the ground is usually one such object of importance. The antenna's height above the ground, as well as the electrical properties (permittivity and conductivity) of the ground, can then be important. 
Also, in the particular case of a monopole antenna, the ground (or an artificial ground plane) serves as the return connection for the antenna current, thus having an additional effect, particularly on the impedance seen by the feed line. When an electromagnetic wave strikes a plane surface such as the ground, part of the wave is transmitted into the ground and part of it is reflected, according to the Fresnel coefficients. If the ground is a very good conductor then almost all of the wave is reflected (180° out of phase), whereas a ground modeled as a (lossy) dielectric can absorb a large amount of the wave's power. The power remaining in the reflected wave, and the phase shift upon reflection, strongly depend on the wave's angle of incidence and polarization. The dielectric constant and conductivity (or simply the complex dielectric constant) depend on the soil type and are a function of frequency. From very low frequencies up to high frequencies (< 30 MHz), the ground behaves as a lossy dielectric, thus the ground is characterized both by a conductivity and permittivity (dielectric constant) which can be measured for a given soil (but is influenced by fluctuating moisture levels) or can be estimated from certain maps. At the lower medium wave frequencies the ground acts mainly as a good conductor, which AM broadcast (0.5–1.7 MHz) antennas depend on. At frequencies between 3–30 MHz, a large portion of the energy from a horizontally polarized antenna reflects off the ground, with almost total reflection at the grazing angles important for ground wave propagation. That reflected wave, with its phase reversed, can either cancel or reinforce the direct wave, depending on the antenna height in wavelengths and elevation angle (for a sky wave). On the other hand, vertically polarized radiation is not well reflected by the ground except at grazing incidence or over very highly conducting surfaces such as sea water. However the grazing angle reflection important for ground wave propagation, using vertical polarization, is in phase with the direct wave, providing a boost of up to 6 dB, as is detailed below. At VHF and above (> 30 MHz) the ground becomes a poorer reflector, although it remains a reasonably good reflector especially for horizontal polarization and grazing angles of incidence. That is important as these higher frequencies usually depend on horizontal line-of-sight propagation (except for satellite communications), the ground then behaving almost as a mirror. The net quality of a ground reflection depends on the topography of the surface. When the irregularities of the surface are much smaller than the wavelength, the dominant regime is that of specular reflection, and the receiver sees both the real antenna and an image of the antenna under the ground due to reflection. But if the ground has irregularities not small compared to the wavelength, reflections will not be coherent but shifted by random phases. With shorter wavelengths (higher frequencies), this is generally the case. Whenever either the receiving or transmitting antenna is placed at significant heights above the ground (relative to the wavelength), waves reflected specularly by the ground will travel a longer distance than direct waves, inducing a phase shift which can sometimes be significant. When a sky wave is launched by such an antenna, that phase shift is always significant unless the antenna is very close to the ground (compared to the wavelength). 
The phase of reflection of electromagnetic waves depends on the polarization of the incident wave. Given the larger refractive index of the ground (typically n ≈ 2) compared to air (n = 1), the phase of horizontally polarized radiation is reversed upon reflection (a phase shift of π radians, or 180°). On the other hand, the vertical component of the wave's electric field is reflected at grazing angles of incidence approximately in phase. These phase shifts apply as well to a ground modeled as a good electrical conductor. This means that a receiving antenna "sees" an image of the emitting antenna but with 'reversed' currents (opposite in direction and phase) if the emitting antenna is horizontally oriented (and thus horizontally polarized). However, the received current will be in the same absolute direction and phase if the emitting antenna is vertically polarized. The actual antenna which is transmitting the original wave then also may receive a strong signal from its own image from the ground. This will induce an additional current in the antenna element, changing the current at the feedpoint for a given feedpoint voltage. Thus the antenna's impedance, given by the ratio of feedpoint voltage to current, is altered due to the antenna's proximity to the ground. This can be quite a significant effect when the antenna is within a wavelength or two of the ground. But as the antenna height is increased, the reduced power of the reflected wave (due to the inverse square law) allows the antenna to approach its asymptotic feedpoint impedance given by theory. At lower heights, the effect on the antenna's impedance is very sensitive to the exact distance from the ground, as this affects the phase of the reflected wave relative to the currents in the antenna. Changing the antenna's height by a quarter wavelength then changes the phase of the reflection by 180°, with a completely different effect on the antenna's impedance. The ground reflection has an important effect on the net far field radiation pattern in the vertical plane, that is, as a function of elevation angle, which is thus different between a vertically and horizontally polarized antenna. Consider an antenna at a height h above the ground, transmitting a wave considered at the elevation angle θ. For a vertically polarized transmission the magnitude of the electric field of the electromagnetic wave produced by the direct ray plus the reflected ray is: |E_V| = 2 |E_0| |cos((2πh/λ) sin θ)|. Thus the power received can be as high as 4 times that due to the direct wave alone (such as when θ = 0), following the square of the cosine. The sign inversion for the reflection of horizontally polarized emission instead results in: |E_H| = 2 |E_0| |sin((2πh/λ) sin θ)|, where: E_0 is the electrical field that would be received by the direct wave if there were no ground; θ is the elevation angle of the wave being considered; λ is the wavelength; h is the height of the antenna (half the distance between the antenna and its image). For horizontal propagation between transmitting and receiving antennas situated near the ground reasonably far from each other, the distances traveled by the direct and reflected rays are nearly the same. There is almost no relative phase shift. If the emission is polarized vertically, the two fields (direct and reflected) add and there is a maximum of received signal. If the signal is polarized horizontally, the two signals subtract and the received signal is largely cancelled. The vertical plane radiation patterns are shown in the image at right. 
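As a rough numerical check of the two expressions above (an illustrative sketch only; the height of 1.25 wavelengths, corresponding to an antenna–image spacing of 2.5 wavelengths, and the sample angles are arbitrary choices, and a perfectly conducting ground is assumed), the relative field strengths can be evaluated directly:

    import math

    def rel_field(h_over_lambda, elev_deg, polarization):
        # two-ray model over a perfectly conducting ground, field relative to the direct wave E_0
        x = 2 * math.pi * h_over_lambda * math.sin(math.radians(elev_deg))
        if polarization == "vertical":
            return 2 * abs(math.cos(x))   # in-phase ground reflection
        return 2 * abs(math.sin(x))       # phase-reversed ground reflection

    # Antenna 1.25 wavelengths above ground
    for angle in (0, 10, 30):
        print(angle,
              round(rel_field(1.25, angle, "vertical"), 2),
              round(rel_field(1.25, angle, "horizontal"), 2))

At zero elevation the vertical case gives twice the direct-wave field (four times the power), while the horizontal case cancels, consistent with the discussion that follows.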
With vertical polarization there is always a maximum for θ = 0, horizontal propagation (left pattern). For horizontal polarization, there is cancellation at that angle. The above formulae and these plots assume the ground as a perfect conductor. These plots of the radiation pattern correspond to a distance between the antenna and its image of 2.5 λ. As the antenna height is increased, the number of lobes increases as well. The difference in the above factors for the case of θ = 0 is the reason that most broadcasting (transmissions intended for the public) uses vertical polarization. For receivers near the ground, horizontally polarized transmissions suffer cancellation. For best reception the receiving antennas for these signals are likewise vertically polarized. In some applications where the receiving antenna must work in any position, as in mobile phones, the base station antennas use mixed polarization, such as linear polarization at an angle (with both vertical and horizontal components) or circular polarization. On the other hand, analog television transmissions are usually horizontally polarized, because in urban areas buildings can reflect the electromagnetic waves and create ghost images due to multipath propagation. Using horizontal polarization, ghosting is reduced because the amount of reflection in the horizontal polarization off the side of a building is generally less than in the vertical direction. Vertically polarized analog television transmissions have been used in some rural areas. In digital terrestrial television such reflections are less problematic, due to the robustness of binary transmissions and error correction. Modeling antennas with line equations In the first approximation, the current in a thin antenna is distributed exactly as in a transmission line. — Schelkunoff & Friis (1952) The flow of current in wire antennas is identical to the solution of counter-propagating waves in a single conductor transmission line, which can be solved using the telegrapher's equations. Solutions of currents along antenna elements are more conveniently and accurately obtained by numerical methods, so transmission-line techniques have largely been abandoned for precision modelling, but they continue to be a widely used source of useful, simple approximations that describe well the impedance profiles of antennas. Unlike transmission lines, currents in antennas contribute power to the radiated part of the electromagnetic field, which can be modeled using radiation resistance. The end of an antenna element corresponds to an unterminated (open) end of a single-conductor transmission line, resulting in a reflected wave identical to the incident wave, with its voltage in phase with the incident wave and its current in the opposite phase (thus net zero current, where there is, after all, no further conductor). The combination of the incident and reflected wave, just as in a transmission line, forms a standing wave with a current node at the conductor's end, and a voltage node one-quarter wavelength from the end (if the element is at least that long). In a resonant antenna, the feedpoint of the antenna is at one of those voltage nodes. Due to discrepancies from the simplified version of the transmission line model, the voltage one quarter wavelength from the current node is not exactly zero, but it is near a minimum, and small compared to the much larger voltage at the conductor's end. 
Hence, a feed point matching the antenna at that spot requires a relatively small voltage but large current (the currents from the two waves add in-phase there), thus a relatively low feedpoint impedance. Feeding the antenna at other points involves a large voltage, thus a large impedance, and usually one that is primarily reactive (low power factor), which is a terrible impedance match to available transmission lines. Therefore, it is usually desired for an antenna to operate as a resonant element with each conductor having a length of one quarter wavelength (or any other odd multiples of a quarter wavelength). For instance, a half-wave dipole has two such elements (one connected to each conductor of a balanced transmission line) about one quarter wavelength long. Depending on the conductors' diameters, a small deviation from this length is adopted in order to reach the point where the antenna current and the (small) feedpoint voltage are exactly in phase. Then the antenna presents a purely resistive impedance, and ideally one close to the characteristic impedance of an available transmission line. Despite these useful properties, resonant antennas have the disadvantage that they achieve resonance (purely resistive feedpoint impedance) only at a fundamental frequency, and perhaps some of its harmonics, and the feedpoint resistance is larger at higher-order resonances. Therefore, resonant antennas can only achieve their good performance within a limited bandwidth, depending on the Q at resonance. Mutual impedance and interaction between antennas The electric and magnetic fields emanating from a driven antenna element will generally affect the voltages and currents in nearby antennas, antenna elements, or other conductors. This is particularly true when the affected conductor is a resonant element (multiple of half-wavelengths in length) at about the same frequency, as is the case where the conductors are all part of the same active or passive antenna array. Because the affected conductors are in the near-field, one cannot simply treat two antennas as transmitting and receiving a signal according to the Friis transmission formula for instance, but must calculate the mutual impedance matrix which takes into account both voltages and currents (interactions through both the electric and magnetic fields). Thus using the mutual impedances calculated for a specific geometry, one can solve for the radiation pattern of a Yagi–Uda antenna or the currents and voltages for each element of a phased array. Such an analysis can also describe in detail reflection of radio waves by a ground plane or by a corner reflector and their effect on the impedance (and radiation pattern) of an antenna in its vicinity. Often such near-field interactions are undesired and pernicious. Currents induced in random metal objects near a transmitting antenna will often flow in poor conductors, causing loss of RF power in addition to unpredictably altering the characteristics of the antenna. By careful design, it is possible to reduce the electrical interaction between nearby conductors. For instance, the 90 degree angle between the two dipoles composing the turnstile antenna ensures no interaction between them, allowing them to be driven independently (but actually with the same signal in quadrature phases in the turnstile antenna design). Antenna types Antennas can be classified by operating principles or by their application. Different authorities place antennas in narrower or broader categories. 
Generally these include Dipole and monopole antennas Array antennas Loop antennas Aperture antennas Traveling wave antennas Log-periodic antenna Spiral antenna Horn antenna Adcock antenna Sector antenna Helical antenna These antenna types and others are summarized in greater detail in the overview article, Antenna types, as well as in each of the linked articles in the list above, and in even more detail in articles which those link to. See also Antenna feed :Category:Radio frequency antenna types :Category:Radio frequency propagation Cellular repeater Counterpoise DXing Electromagnetism Feedline matching unit Mobile broadband modem Numerical Electromagnetics Code Radial (radio) Radio masts and towers RF connector Smart antenna TETRA Shortwave broadband antenna Personal RF safety monitor Footnotes References Radio electronics
Antenna (radio)
Engineering
12,375
41,204,100
https://en.wikipedia.org/wiki/Citizens%20for%20Conservation
Citizens for Conservation (commonly called CFC) is a nonprofit organization, centered in Barrington, Illinois, established in 1971. CFC's motto is Saving Living Space for Living Things through protection, restoration and stewardship of land, conservation of natural resources and education. It is a member of Chicago Wilderness and the Land Trust Alliance. CFC specializes in habitat restoration, both on properties it owns and nearby forest preserves of Lake County Forest Preserve District and Forest Preserve District of Cook County. CFC relies almost entirely on volunteers, meeting at least once a week year-round. In addition, student interns are hired during the summer. CFC received the 2011 Conservation and Native Landscaping award from the U.S. EPA and Chicago Wilderness for its restoration work on the Flint Creek Savanna, their largest property and location of their headquarters. CFC properties As of early 2020, CFC owned 12 properties for a total of 476 acres. Much of this is agricultural land that was donated or purchased, and restored back to natural habitat, primarily oak savanna, tallgrass prairie, and wetlands. Removal of invasive species and re-seeding of native species from local seed sources is the main focus of habitat restoration. It has the largest holding of fee simple lands (direct ownership) of any non-profit in Lake County, Illinois. Education CFC offers periodic programs for children as part of the No Child Left Inside project, and works with the local school district to introduce 3rd and 4th graders to the prairie. It also provides occasional community education programs for adults. References Nature conservation organizations based in the United States Ecological restoration
Citizens for Conservation
Chemistry,Engineering
322
58,230,115
https://en.wikipedia.org/wiki/Ragulator-Rag%20complex
The Ragulator-Rag complex is a regulator of lysosomal signalling and trafficking in eukaryotic cells, which plays an important role in regulating cell metabolism and growth in response to nutrient availability in the cell. The Ragulator-Rag Complex is composed of five LAMTOR subunits, which work to regulate MAPK and mTOR complex 1. The LAMTOR subunits form a complex with Rag GTPase and v-ATPase, which sits on the cell’s lysosomes and detects the availability of amino acids. If the Ragulator complex receives signals for low amino acid count, it will start the process of catabolizing the cell. If there is an abundance of amino acids available to the cell, the Ragulator complex will signal that the cell can continue to grow. Ragulator proteins come in two different forms: Rag A/Rag B and Rag C/Rag D. These interact to form heterodimers with one another. History mTORC1 is a complex within the lysosome membrane that initiates growth when promoted by a stimulus, such as growth factors. A GTPase is a key component in cell signaling, and there were, in 2010, four RAG complexes discovered within the lysosomes of cells. In 2008, it was thought that these RAG complexes would slow down autophagy and activate cell growth by interacting with mTORC1. However, in 2010, the Ragulator was discovered. Researchers determined that the function of this Ragulator was to interact with the RAG A, B, C, and D complexes to promote cell growth. This discovery also led to the first use of the term “Rag-Ragulator” complex, because of the interaction between these two. The amino acid level, cell growth, and other important factors are influenced by the mTOR Complex 1 pathway. On the lysosomal surface, the amino acids signal the activation of the four Rag proteins (RagA, RagB, RagC, and RagD) to translocate mTORC1 to the site of activation. A 2014 study noted that AMPK (AMP-activated protein kinase) and mTOR play important roles in managing different metabolic programs. It was also found that the protein complex v-ATPase-Ragulator was essential for activation of mTOR and AMPK. The v-ATPase-Ragulator complex is also used as an initiating sensor for energy stress, and serves as an endosomal docking site for LKB1-mediated AMPK activation by forming the v-ATPase-Ragulator-AXIN/LKB1-AMPK complex. This allows a switch between catabolism and anabolism. In 2016, it was established that RagA and Lamtor4 were key to microglia functioning and biogenesis regulation within the lysosome. Further studies also indicate that the Ragulator-Rag complex interacts with proteins other than mTORC1, including an interaction with v-ATPase, which facilitates functions within microglia of the lysosome. In 2017, the Ragulator was thought to regulate the position of the lysosome, and interact with BORC, a multi subunit complex located on the surface of the lysosomal membrane. Both BORC and mTORC1 work together in activating the GTPases to change the position of the lysosome. It was concluded that BORC and GTPases compete for a binding site in the LAMTOR 2 protein to reposition the lysosome. Function While the intricate functions of the Ragulator-Rag Complex are not fully understood, it is known that the Ragulator-Rag Complex associates with the lysosome and plays a key role in mTOR (mammalian target of rapamycin) signaling regulation. mTOR signaling is sensitive to amino acid concentrations in the cytoplasm of the cell, and the Ragulator complex works to detect amino acid concentration and transmit signals that activate, or inhibit, mTORC1. 
The Ragulator, along with the Rag GTPases and v-ATPases, are part of an amino acid identifying pathway, and are necessary for the localization of the mTORC1 to the lysosome surface. The Ragulator and v-ATPases reside on the lysosomal surface. The Rag GTPases cannot be directly bound to the lysosome because they lack the proteins necessary to bind to its lipid bilayer, so Rag GTPases must instead be anchored to the Ragulator. The Ragulator is bound to the surface via the V-ATPase. The Ragulator is a crystalized structure composed of five different subunits; LAMTOR 1, LAMTOR 2, LAMTOR 3, LAMTOR 4, LAMTOR 5. There are two sets of obligate heterodimers in the complex, LAMTOR 2/3, which sits right above LAMTOR 4/5. The LAMTOR 1 dimer does not have the same structure as the other subunits. LAMTOR 1 surrounds most of the two heterodimers, providing structural support and keeping the heterodimers in place. When amino acids are present, the subunits are folded and positioned in such a way that allows for the Rag-GTPases to be anchored to its primary docking site of LAMTOR 2/3 on the Ragulator. The Rag-GTPases consist of two sets of heterodimers; RAGs A/B and RAGs C/D. Before Rag-GTPases can bind to the Ragulator, Rag A/B must be GTP loaded via guanine nucleotide exchange factors (GEFs), and RAG C/D must be GDP loaded. Once Rag-GTPases are bound to the regulator complex, the mTORC1 can be translocated to the surface of the lysosome. At the lysosomal surface, the mTORC1 will then bind to Rheb, but only if Rheb was first loaded to a GTP via GEFs. If the amount of nutrients and the concentration of amino acids are sufficient, mTORC1 will be activated. Activation of mTORC1 The lysosomal membrane is the main area in which mTORC1 is activated. However, some activation can occur in the Golgi apparatus and the peroxisome. In mammalian cells, GTPase RagA and RagB are heterodimers with RagC and RagD, respectively. When enough amino acids are present, RagA/B GTPase becomes activated, which leads to the translocation of mTORC1 from the cytoplasm to the lysosome surface, via the Raptor. This process brings mTORC1 in close enough proximity to Rheb for Rheb to either (1) cause a conformational change to mTORC1, leading to and increase in substrate turnover, or (2) induce kinase activity of mTORC1. Rags do not contain membrane-targeting sequences, and as a result, depend on the entire Ragulator-Rag Complex to bind to the lysosome, activating mTORC1. While most amino acids indirectly activate mTORC1 in mammals, Leucine has the ability to directly activate mTORC1 in cells that are depleted of amino acids. Yeast contain LRS (leucyltRNA synthetase), which is a molecule that can interact with Rags, directly activating the molecule. Structure The complex consists of five subunits, named LAMTOR 1-5 (Late endosomal/lysosomal adaptor, mapk and mtor activator 1), however several have alternative names. LAMTOR1 LAMTOR2 LAMTOR3 (MAP2K1IP1) LAMTOR4 LAMTOR5 (HBXIP) References Cell biology
Ragulator-Rag complex
Biology
1,616
33,486,619
https://en.wikipedia.org/wiki/Society%20for%20Psychophysiological%20Research
The Society for Psychophysiological Research is an international scientific organization with over 800 members worldwide. The society is composed of scientists whose research is focused on the study of the interrelationships between the physiological and psychological aspects of behavior. Psychophysiology “The body is the medium of experience and the instrument of action. Through its actions we shape and organize our experiences and distinguish our perceptions of the outside world from sensations that arise within the body itself.” (Jonathan Miller, The Body in Question, 1978) Like anatomy and physiology, psychophysiology is a branch of science interested in bodily systems. However, anatomy is primarily concerned with body structures and relationships amongst structures, and physiology is primarily interested in the function of these structures or systems—or with how different parts of the body work. Psychophysiological research covers both of these concerns, but is also interested in connecting anatomy and physiology with psychological phenomena. In other words, psychophysiological research can consist of the study of social, psychological, and/or behavioral phenomena as they are reflected in the body. A great deal of psychophysiological research has focused on the physiological instantiation of emotion, but with increased access to measures of the central nervous system, psychophysiological research has also examined cognitive processes. Psychophysiological methods Skin conductance (level and response) Cardiac measures (heart rate, heart rate variability, contractility, both sympathetic nervous system and parasympathetic nervous system measures, blood pressure, plethysmography) Oculomotor and pupilometric measures Electromyographic activity Respiration Gastrointestinal activity Penile and vaginal plethysmography Electroencephalography Event-related potentials (ERP) Event-related frequency changes Hormonal and endocrinological measures Immune function Functional neuroimaging Positron emission tomography Functional magnetic resonance imaging (fMRI) Optical imaging Magnetoencephalography (MEG) History As late as the 1950s, the field of psychophysiology was not a fully unified discipline. Psychophysiologists published in multiple non-specialist journals and were often not abreast of their colleagues’ work. However, in 1955, the influential early psychophysiologist Albert F. Ax (1913–1994) began circulating The Psychophysiology Newsletter, a slight collection of methodological observations and bibliographies for various psychophysiological methods. The first volume was free to subscribers, and for several years the newsletter circulated to fewer than 50 members. Nonetheless, his work on the newsletter allowed Ax to organize and open communication amongst psychophysiologists from across North America. Through his work, the discipline and field of psychophysiology began to cohere. Scientists were better able to communicate not only their scientific findings, but also methodological advances they’d made in what was—at the time—a relatively crude and fledgling science. In the 1950s, Ax also began arranging formal meetings of these early psychophysiologists in what became known as the “Psychophysiology Group.” For several years, the group met regularly at the annual American Psychological Association conference. 
And at the 1959 meeting in Cincinnati, Ohio, the group decided to establish its own society, in part in order to oversee the transformation of The Psychophysiology Newsletter into a peer-reviewed scientific journal (which became the journal Psychophysiology). Aside from Ax, many scientists who became officers of the fledgling society were present, including R.C. Davis (chair of the organizing board), Marion Augustus “Gus” Wenger, Robert Edelberg, Martine Orne, Clinton C. Brown, and William W. Grings. The society took the name Society for Psychophysiological Research, and since its first informal gatherings, has grown to over 800 members worldwide and has held 51 annual meetings in North America and Europe. The society continues to publish Psychophysiology, an influential monthly peer-reviewed journal interested in advancing psychophysiological science and human neuroscience, covering research on the interrelationships between the physiological and psychological aspects of brain and behavior. Annual meeting The annual meeting of Society for Psychophysiological Research is attended by scientists from around the world. The meeting includes presentations of new theory, methods, and research in the form of invited addresses, symposia, poster sessions, and Presidential and Award addresses. At each meeting, the society also typically offers preconference workshops on specific topics or methodological advances. Topics covered in the 2011 preconference workshops included a bootcamp on Event-related potential Methodologies, Genetic Approaches to the Biology of Complex Traits, and Fundamentals of Pupillary Measures and Eye tracking. Recent meetings have been held in Portland, OR, Berlin, Germany, Vancouver, British Columbia, New Orleans, Louisiana, and Florence, Italy. Meetings have been scheduled to be held at various locations around the world. Awards Distinguished Contributions to Psychophysiology Past Awardees: Chester W. Darrow (1969) Roland Clark Davis (1969) Marion A. Wenger (1970) John I. Lacey (1970) Albert F. Ax (1973) Robert Edelberg (1974) William W. Grings (1978) Frances K. Graham (1981) Donald B. Lindsley (1984) Paul A. Obrist (1985) Peter H. Venables (1987) David Shapiro (1988) Eugene Sokolov (1988) Peter J. Lang (1990) John A. Stern (1993) Emanuel Donchin (1994) Risto Naatanen (1995) David T. Lykken (1998) Steven A. Hillyard (1999) John Cacioppo (2000) Arne Ohman (2001) Michael G.H. Coles (2002) Robert M. Stern (2004) Kees Brunia (2005) Marta Kutas (2007) William Iacono (2008) Niels Birbaumer (2009) Judith M. Ford (2010) Margaret Bradley (2011) Donald Fowles (2012) Gregory A. Miller (2013) Distinguished Early Career Contributions to Psychophysiology Past awardees: Connie Duncan (1980) Kathleen C. Light (1980) John Cacioppo (1981) William Iacono (1982) Graham Turpin (1984) Ray Johnson Jr. (1985) Alan J. Fridlund (1986) J. Rick Turner (1988) Ulf Dimberg (1988) Kimmo Alho (1990) Thomas W. Kamarck (1991) Steven A. Hackley (1992) George R. Mangun (1993) Christopher J. Patrick (1993) Cyma Van Petten (1994) Friedemann Pulvermuller (1995) Erich Schroger (1996) Brett A. Clementz (1997) Gabriele Gratton (1997) Christopher R. France (1998) Axel Mecklinger (1999) John J.B. Allen (2000) James Gross (2000) Martin Heil (2001) Eddie Harmon-Jones (2002) Thomas Ritz (2003) Frank Wilhelm (2004) Kent A. Kiehl (2005) Kara Federmeier (2006) Diego Pizzagalli (2006) Bruce D. 
Bartholow (2007) Markus Ullsperger (2008) Sander Nieuwenhuis (2009) James Coan (2010) Eveline Crone (2011) Greg Hajcak (2012) Ilse Van Dienst (2013) Training Award Fellowships Award funds graduate students and post-doctoral students who wish to obtain training in psychophysiology which falls outside of the scope of their home labs. Student Poster Awards Award signals excellence in research presented in a poster format by a student member. References The Handbook of Psychophysiology (2007), John T. Cacioppo, Louis G. Tassinary, Gary Berntson (Eds.), Cambridge University Press Psychological societies Psychophysics
Society for Psychophysiological Research
Physics
1,624
7,460,839
https://en.wikipedia.org/wiki/Carsten%20Thomassen%20%28mathematician%29
Carsten Thomassen (born August 22, 1948 in Grindsted) is a Danish mathematician. He has been a Professor of Mathematics at the Technical University of Denmark since 1981, and since 1990 a member of the Royal Danish Academy of Sciences and Letters. His research concerns discrete mathematics and more specifically graph theory. Thomassen received his Ph.D. in 1976 from the University of Waterloo. He is editor-in-chief of the Journal of Graph Theory and the Electronic Journal of Combinatorics, and editor of Combinatorica, the Journal of Combinatorial Theory Series B, Discrete Mathematics, and the European Journal of Combinatorics. He was awarded the Dedicatory Award of the 6th International Conference on the Theory and Applications of Graphs by the Western Michigan University in May 1988, the Lester R. Ford Award by the Mathematical Association of America in 1993, and the Faculty of Mathematics Alumni Achievement Medal by the University of Waterloo in 2005. In 1990, he was an invited speaker (Graphs, random walks and electric networks) at the ICM in Kyōto. He was included on the ISI Web of Knowledge list of the 250 most cited mathematicians. Selected works with Bojan Mohar: Graphs on surfaces, Johns Hopkins University Press 2001 5-choosability of planar graphs (see List coloring) works on Hypohamiltonian graphs Hamilton connectivity of Tournaments (see Tournament (graph theory)) and of 4-connected planar graphs his proof of Grötzsch's theorem See also List of University of Waterloo people References 1948 births Living people People from Billund Municipality 20th-century Danish mathematicians Graph theorists University of Waterloo alumni
Carsten Thomassen (mathematician)
Mathematics
337
41,356,580
https://en.wikipedia.org/wiki/Ionic%20hydrogenation
Ionic hydrogenation refers to hydrogenation achieved by the addition of a hydride to substrate that has been activated by an electrophile. Some ionic hydrogenations entail addition of H2 to the substrate and some entail replacement of a heteroatom with hydride. Traditionally, the method was developed for acid-induced reductions with hydrosilanes. Alternatively ionic hydrogenation can be achieved using H2. Ionic hydrogenation is employed when the substrate can produce a stable carbonium ion. Polar double bonds are favored substrates. Using hydrosilanes Because silicon (electronegativity 1.90) is more electropositive than hydrogen (2.20), hydrosilanes exhibit (mild) hydridic character. Hydrosilanes can serve as hydride donors to highly electrophilic organic substrates. Many alcohols, alkyl halides, acetals, orthoesters, alkenes, aldehydes, ketones, and carboxylic acid derivatives are suitable substrates. Such reactions often require Lewis acids. Because only reactive electrophiles undergo reduction, selectivity is possible in reactions of substrates with multiple reducible functional groups. Upon the generation of a carbocation, rate-determining hydride transfer from the organosilane occurs to yield a reduced product. Retention of configuration at silicon has been observed in silane reductions of chiral triaryl methyl chlorides in benzene. This result suggests that the exchange of chlorine for hydrogen occurs through σ-bond metathesis. Reductions in more polar solvents may involve silicenium ions. Polymeric hydrosilanes, such as polymethylhydrosiloxane (PMHS), may be employed to facilitate separation of the reduced products from silicon-containing byproducts. Using H2 The proton and hydride transfers are usually sequential or concerted. Usually ionic hydrogenation is shown to occur in two steps, starting with protonation. R2C=Y + H+ → R2C+-YH R2C+-YH + "H−" → R2CH-YH Substrates In the case of metal-catalyzed ionic hydrogenation, the substrates and their products must not bind to metal sites, as this would interfere with H2 activation. Ketones are the most common substrates. Less common are imines and N-heterocycles. The reaction can also be performed in reverse to effect hydrogenolysis. Liquid substrates can sometimes be hydrogenated without solvent, a goal of green chemistry. Proton and hydride pairs The most common hydrogenating pair is an organosilane as the hydride source (e.g. triethylsilane), and a strong oxyacid as the proton source (e.g. trifluoroacetic acid or triflic acid). The hydride and proton source cannot combine to give H2, which limits the hydricity and acidity of the H− and H+ sources, respectively. Transition metal hydride complexes can be used in place of organosilanes as the hydride source. In these cases, triflic acid is a typical proton donor. Ketones such as benzophenones, and 1,1-disubstituted olefins are typical substrates. Hydrides of tungsten, chromium, osmium, and molybdenum complexes have also been reported. Tungsten dihydride complexes can hydrogenate ketones stoichiometrically with no external acids. One hydride serves as the hydride source, and the other serves as a proton source. In the case of ionic hydrogenation, a dihydride complex is regenerated by hydrogen gas following hydrogenation. Typical catalysts are tungsten or molybdenum complexes. An example of such a catalyst is [CpM(CO)2(PR3)(OCR'2)]+, where M = W or Mo. Related reactions Transfer hydrogenation (TH) catalysts, e.g. Shvo catalyst, are related to catalysts used for ionic hydrogenation. 
TH catalysts however do not employ strong acids and both the H− and H+ components are covalently bonded to the complex prior to transfer to the unsaturated substrates. Typically, TH catalysts are more widely employed in organic synthesis. Older literature References Hydrogenation
Ionic hydrogenation
Chemistry
909
52,768,223
https://en.wikipedia.org/wiki/Pr0201%20b
Pr0201 b (also written Pr 0201 b) is an exoplanet orbiting around the F-type main-sequence star Pr0201. Pr0201 b along with Pr0211 b are notable for being the first exoplanets discovered in the Beehive Cluster located in the constellation Cancer. Since Pr0201 b has a mass of about half of Jupiter and an orbital period of about 4 days, it is likely a hot Jupiter. Its host star, Pr0201, is rotationally variable and has a rotation period of 5.63 days. Discovery Pr0201 b and Pr 0211 b were discovered in 2012 by Sam Quinn and his colleagues while observing 53 stars in the Beehive cluster using the telescope at the University of Georgia in the United States. References Exoplanets discovered in 2012 Exoplanets detected by radial velocity Cancer (constellation) Hot Jupiters
Pr0201 b
Astronomy
192
23,658,163
https://en.wikipedia.org/wiki/Instream%20use
Instream use refers to water use taking place within a stream channel. Examples are hydroelectric power generation, navigation, fish propagation and use, and recreational activities. Some instream uses, usually associated with fish populations and navigation, require a minimum amount of water to be viable. The term is often used in discussions concerning water resources allocation and/or water rights. See also Water law International trade and water References Hydrology Water resources management
Instream use
Chemistry,Engineering,Environmental_science
87
74,731,872
https://en.wikipedia.org/wiki/Bay%20of%20Biscay%20soil
Bay of Biscay is a term used in South Australia for a dark clay soil of a highly reactive nature, forming a sticky mass when wet and shrinking during long dry spells, developing deep cracks. Though found elsewhere, it is prevalent in many parts of the Adelaide plain. It is a particular challenge to all-masonry structures, resulting in fractured foundations and vertical cracks in walls. It is not uncommon to see older buildings with walls braced with railway iron or having long steel rods at ceiling level, holding opposite walls together. A common type of construction in such areas is brick-veneer — essentially a timber-framed building with non-structural brick outer walls — accepting cracks as a likely, but cosmetic outcome, not affecting the building's performance. See also Gilgai References Environment of South Australia Geology of South Australia Building defects
Bay of Biscay soil
Materials_science
167
20,918,408
https://en.wikipedia.org/wiki/Personal%20numbering
Personal numbering is the name for the virtual telephone number service in the UK. Typically the national destination code used for this service is (0)70. The service provides a flexible virtual telephone number able to be routed to any other number, including international mobiles. For example, the UK number +44 70 0585 0070 might route to an Inmarsat satellite phone number, allowing the user to have a UK number while roaming globally. This service has however been reported as having "significant scamming activity" of various sorts as users can mistakenly assume they are calling a UK mobile telephone number that generally costs far less. (For the telephone numbering plan context of 070 numbers see Telephone numbers in the United Kingdom). History In the United States, AT&T ran a trial in 1991 which led, in 1992, to the AT&T EasyReach 700 service of follow me numbers, on area code 700. Early days After protracted lobbying of Oftel throughout 1992, FleXtel launched the UK's first Personal Telephone Number Service, using the 09567 number range in December 1993, 070 introduction In 1995 the UK telecoms regulator, Oftel (now Ofcom), reserved the whole of the 070 range exclusively for personal numbering, imitating the USA area code 700. FleXtel migrated its existing customers across during a two-year transition phase. Fraudulent use Call cost scams A range of scams revolve around UK residents being tricked into making calls to 070 numbers that attract much larger than normal call costs. False UK number scams In its full format (e.g. +44 70 0585 0070) an 070 number will be internationally recognisable as a UK number - even though it might in fact terminate to a mobile number anywhere - this feature is used in a variety of scams. Ofcom reforms Concerned at the number of scams, Ofcom consulted on removing revenue share from the 070 range and this took effect in 2009. They had previously considered other options such as moving this service to the unused 06 number range or enforcing a pre-call announcement of the call charges. Although consulted on, those other remedies were never put into effect. Concerned at the lack of transparency and the high retail charges for calls to 070 numbers, Ofcom launched a call cost review in 2017. This led to a consultation in 2018 which recommended capping the termination rate or wholesale rate at no more than the rate for calling a mobile number. Those changes took effect on 1 October 2019 and several phone providers have already passed the saving on by now including calls to 070 numbers within inclusive allowances. See also Personal Numbers, similar service in Spain Area code 700, similar US service Follow-me, similar concept for PBXs Virtual number Universal Personal Telecommunications References Telephone numbers
Personal numbering
Mathematics
572
9,701,506
https://en.wikipedia.org/wiki/Efungumab
Efungumab (trade name Mycograb) was a drug developed by NeuTec Pharma (a subsidiary of Novartis), intended to treat candidemia (a bloodstream infection caused by pathogenic yeast) in combination with amphotericin B. The European Medicines Agency has twice refused to grant marketing authorization for Mycograb, citing product safety and quality issues. Chemically, efungumab is a single-chain variable fragment of a human monoclonal antibody. As such, it "grabs" onto fungal hsp90, hence its proposed trade name. Its ability to potentiate the effects of the antifungal amphotericin B in culture were later found to be non-specific. References Drugs developed by Novartis Abandoned drugs
Efungumab
Chemistry
165
21,606,083
https://en.wikipedia.org/wiki/Construction%20surveying
Construction surveying or building surveying (otherwise known as "staking", "stake-out", "lay-out", or "setting-out") is to provide dimensional control for all stages of construction work, including the stake out of reference points and markers that will guide the construction of new structures such as roads, rail, or buildings. These markers are usually staked out according to a suitable coordinate system selected for the project. History of construction surveying The nearly perfect squareness and north–south orientation of the Great Pyramid of Giza, built c. 2700 BC, affirm the Egyptians' command of surveying. A recent reassessment of Stonehenge (c.2500 BC) suggests that the monument was set out by prehistoric surveyors using peg and rope geometry. In the sixth century BC geometric based techniques were used to construct the tunnel of Eupalinos on the island of Samos. Modern technology advanced surveying's accuracy and efficiency. For example, surveyors used to use two posts joined with a chain to measure distance. This technology could only account for distance and not elevation. Current technology uses Global Navigation Satellite Systems (GNSS) that can measure the distance from point A to point B as well as differences in elevation. Elements of the construction survey Survey existing conditions of the future work site, including topography, existing buildings and infrastructure, and underground infrastructure whenever possible (for example, measuring invert elevations and diameters of sewers at manholes) Stake out lot corners, stake limit of work and stake location of construction trailer (clear of all excavation and construction) Stake out reference points and markers that will guide the construction of new structures Verify the location of structures during construction Provide horizontal control on multiple floors Conduct an As-Built survey: a survey conducted at the end of the construction project to verify that the work authorized was completed to the specifications set on plans Coordinate systems used in construction Land surveys and surveys of existing conditions are generally performed according to geodesic coordinates. However, for the purposes of construction a more suitable coordinate system will often be used. During construction surveying, the surveyor will often have to convert from geodesic coordinates to the coordinate system used for that project. Chainage or station In the case of roads or other linear infrastructure, a chainage (derived from Gunter's Chain - 1 chain is equal to 66 feet or 100 links) will be established, often to correspond with the centre line of the road or pipeline. During construction, structures would then be located in terms of chainage, offset and elevation. Offset is said to be "left" or "right" relative to someone standing on the chainage line who is looking in the direction of increasing chainage. Plans would often show plan views (viewed from above), profile views (a "transparent" section view collapsing all section views of the road parallel to the chainage) or cross-section views (a "true" section view perpendicular to the chainage). In a plan view, chainage generally increases from left to right, or from the bottom to the top of the plan. Profiles are shown with the chainage increasing from left to right, and cross-sections are shown as if the viewer is looking in the direction of increasing chainage (so that the "left" offset is to the left and the "right" offset is to the right). "Chainage" may also be referred to as "Station". 
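The chainage/offset convention just described maps onto plane coordinates in a simple way when the alignment is treated as a straight baseline. The following Python sketch is illustrative only; real alignments include curves and transitions, and the start coordinates and bearing used here are made up:

```python
import math

def chainage_offset_to_en(e0, n0, bearing_deg, chainage, offset):
    """Convert (chainage, offset) on a straight baseline to Easting/Northing.

    e0, n0      -- plane coordinates of chainage zero (made-up values below)
    bearing_deg -- whole-circle bearing of increasing chainage, in degrees
    offset      -- positive to the right when facing increasing chainage
    """
    b = math.radians(bearing_deg)
    e = e0 + chainage * math.sin(b)           # point on the centre line
    n = n0 + chainage * math.cos(b)
    e += offset * math.sin(b + math.pi / 2)   # step perpendicular to the line
    n += offset * math.cos(b + math.pi / 2)
    return e, n

# 250 m along a due-east baseline, 3.5 m to the right of the centre line
print(chainage_offset_to_en(3000.0, 1000.0, 90.0, 250.0, 3.5))
```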
Building grids In the case of buildings, an arbitrary system of grids is often established so as to correspond to the rows of columns and the major load-bearing walls of the building. The grids may be identified alphabetically in one direction, and numerically in the other direction (as in a road map). The grids are usually but not necessarily perpendicular, and are often but not necessarily evenly spaced. Floors and basement levels are also numbered. Structures, equipment or architectural details may be located in reference to the floor and the nearest intersection of the arbitrary axes. Low distortion engineering grids Typically national mapping grids have significant distortion and are often not suitable for precise engineering design and construction. For major infrastructure projects specifically designed low distortion engineering grids can be used, an example being the Transport for London London Survey Grid, or tailored snake projections which can be suitable for long linear infrastructure such as high speed rail. Such grids not only minimise the impact of distortion due to the Earth's curvature but also have the benefit of defined relationships to a geodetic datum and therefore lack the arbitrary nature of localized grids. Other coordinate systems In other types of construction projects, arbitrary "plan north" reference lines may be established, using Cartesian coordinates that may or may not necessarily correspond to true coordinates. The technique is called localized grid. This method uses the plan building grids as their own ordinates. A point of beginning is established at the southwest cross grid, e.g. [N1000.000,E3000.000]. The grids are added together heading north and east to make each line its own ordinate. Equipment and techniques used in construction surveying Surveying equipment, such as levels and theodolites, are used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerisation, electronic distance measurement (EDM), total stations, GNSS surveying and laser scanning have supplemented (and to a large extent supplanted) the traditional optical instruments. The builder's level measures neither horizontal nor vertical angles. It simply combines a spirit level and telescope to allow the user to visually establish a line of sight along a level plane. When used together with a graduated staff it can be used to transfer elevations from one location to another. An alternative method to transfer elevation is to use water in a transparent hose as the level of the water in the hose at opposite ends will be at the same elevation. A double right angle prism verifies grid patterns, isolating layout errors. Survey Stakes Control of alignment and grade during construction may be established through the use of survey stakes. Stakes are generally made of wood in different sizes. Based on the use of the stake they are called alignment stakes, offset stakes, grade stakes, and slope stakes. Survey stakes are markers surveyors use in surveying projects to prepare job sites, mark out property boundaries, and provide information about claims on natural resources like timber and minerals. The stakes can be made from wood, metal, plastic, and other materials and typically come in a range of sizes and colors for different purposes. Sources can include surveying and construction suppliers, and people can also make or order their own for custom applications. 
A survey stake is typically small, with a pointed end to make it easy to drive into the earth. It may be color-coded or have a space for people to write information on the stake. Surveyors use stakes when assessing sites to mark out boundaries, record data, and convey information to other people. On a job site, for example, survey stakes indicate where it is necessary to backfill with soil to raise the elevation, or to cut soil away to lower it. Stakes can also provide information about slope and grading for people getting a job site ready for construction. Equipment and techniques used in mining and tunnelling Total stations are the primary survey instrument used in mining surveying. Underground mining A total station is used to record the absolute location of the tunnel walls' (stopes), ceilings (backs), and floors as the drifts of an underground mine are driven. The recorded data is then downloaded into a CAD programme, and compared to the designed layout of the tunnel. The survey party installs control stations at regular intervals. These are small steel plugs installed in pairs in holes drilled into walls or the back. For wall stations, two plugs are installed in opposite walls, forming a line perpendicular to the drift. For back stations, two plugs are installed in the back, forming a line parallel to the drift. A set of plugs can be used to locate the total station set up in a drift or tunnel by processing measurements to the plugs by intersection and resection. Profession Building Surveying emerged in the 1970s as a profession in the United Kingdom by a group of technically minded General Practice Surveyors. Building Surveying is a recognized profession within Britain and Australia. In Australia in particular, due to risk mitigation/limitation factors the employment of surveyors at all levels of the construction industry is widespread. There are still many countries where it is not widely recognized as a profession. The Services that Building Surveyors undertake are broad but include: Construction design and building works Project Management and monitoring CDM Co-ordinator under the Construction (Design & Management) Regulations 2015 Property Legislation adviser Insurance assessment and claims assistance Defect investigation and maintenance adviser Building Surveys and measured surveys Handling Planning applications Building Inspection to ensure compliance with building regulations Undertaking pre-acquisition surveys Negotiating dilapidations claims Building Surveyors also advise on many aspects of construction including: design maintenance repair refurbishment restoration conservation Clients of a building surveyor can be the public sector, Local Authorities, Government Departments as well as private sector organisations and work closely with architects, planners, homeowners and tenants groups. Building Surveyors may also be called to act as an expert witness. It is usual for building surveyors to undertake an accredited degree qualification before undertaking structured training to become a member of a professional organisation. For Chartered Building Surveyors, these courses are accredited by the Royal Institution of Chartered Surveyors. Other professional organisations that have building surveyor members include CIOB, ABE, HKIS and RICS. With the enlargement of the European community, the profession of the Chartered Building Surveyor is becoming more widely known in other European states, particularly France. 
There, many English-speaking people buying second homes engage Chartered Building Surveyors. Distinction from land surveyors In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a distinct profession. Land surveyors have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary (also known as cadastral) surveys for creating new boundaries sanctioned by landowners by way of subdivision plans or plats, and for relocating the boundaries of existing land parcels using legal descriptions, registered documents, surveyors' field notes and plans, and evidence of monumentation and other marks on or under ground. See also References External links Surveying outline University of British Columbia, Carlos E. Ventura As-builts – Problems & Proposed Solutions — Discussion on Building Surveys within Construction industry by Stephen R. Pettee, CCM Further reading DELANEY, Miriam and Anne GORMAN, ‘Surveying’, in Studio Craft & Technique for Architects (London, 2015) pp. 284–317 WELLS, Matthew: Survey: Architectural Iconographies (Zurich: Park Books, 2021) YEOMANS, David: ‘The Geometry of a Piece of String’, Architectural History 54 (2011) 23-47 Construction surveying Building engineering
Construction surveying
Engineering
2,222
53,545,809
https://en.wikipedia.org/wiki/Motolimod
Motolimod (VTX-2337) is a drug which acts as a potent and selective agonist of toll-like receptor 8 (TLR8), a receptor involved in the regulation of the immune system. It is used to stimulate the immune system, and has potential application as an adjuvant therapy in cancer chemotherapy, although clinical trials have shown only modest benefits. It also worsens neuropathic pain in animal models and has been used to research the potential of targeting TLR8 in some kinds of chronic pain syndromes. See also Imiquimod Vesatolimod References Nitrogen heterocycles Amides Amines
Motolimod
Chemistry
135
18,692,872
https://en.wikipedia.org/wiki/Decidable%20sublanguages%20of%20set%20theory
In mathematical logic, various sublanguages of set theory are decidable. These include: Sets with Monotone, Additive, and Multiplicative Functions. Sets with restricted quantifiers. References Proof theory Logic in computer science Model theory
Decidable sublanguages of set theory
Mathematics
52
54,364,429
https://en.wikipedia.org/wiki/Umbralisib
Umbralisib, sold under the brand name Ukoniq, is an anti-cancer medication for the treatment of marginal zone lymphoma (MZL) and follicular lymphoma (FL). It is taken by mouth. Umbralisib is an inhibitor of kinases including PI3K-delta and casein kinase CK1-epsilon. The most common side effects include increased creatinine, diarrhea-colitis, fatigue, nausea, neutropenia, transaminase elevation, musculoskeletal pain, anemia, thrombocytopenia, upper respiratory tract infection, vomiting, abdominal pain, decreased appetite, and rash. Umbralisib was granted accelerated approval for medical use in the United States in February 2021. However, due to concerns about increased long-term side effects leading to inferior overall survival, which led to increased FDA scrutiny in the form of an ODAC review, it has been withdrawn from the US market. Medical uses In April 2022, TG Therapeutics announced the voluntary withdrawal of Ukoniq (umbralisib) from sale for its approved use in the treatment of marginal zone lymphoma and follicular lymphoma. Furthermore, the company withdrew the pending Biologics License Application (BLA) and supplemental New Drug Application (sNDA) for the treatment of chronic lymphocytic leukemia (CLL) and small lymphocytic lymphoma (SLL), which utilized umbralisib in tandem with ublituximab, known as the "U2" regimen. The decision was based on the overall survival (OS) data from the phase III trial, Unity-CLL, that illustrated an increasing imbalance in OS. Umbralisib is indicated for adults with relapsed or refractory marginal zone lymphoma (MZL) who have received at least one prior anti-CD20-based regimen; and adults with relapsed or refractory follicular lymphoma (FL) who have received at least three prior lines of systemic therapy. Adverse effects The prescribing information provides warnings and precautions for adverse reactions including infections, neutropenia, diarrhea and non-infectious colitis, hepatotoxicity, and severe cutaneous reactions. History It has undergone clinical studies for chronic lymphocytic leukemia (CLL). Three-year data (including follicular lymphoma and DLBCL) was announced in June 2016. It is in combination trials for various leukemias and lymphomas, such as mantle cell lymphoma (MCL) and other lymphomas. Umbralisib was granted breakthrough therapy designation by the U.S. Food and Drug Administration (FDA) for use in people with marginal zone lymphoma (MZL), a type of cancer with no specifically approved therapies. FDA approval was based on two single-arm cohorts of an open-label, multi-center, multi-cohort trial, UTX-TGR-205 (NCT02793583), in 69 participants with marginal zone lymphoma (MZL) who received at least one prior therapy, including an anti-CD20 containing regimen, and in 117 participants with follicular lymphoma (FL) after at least two prior systemic therapies. The application for umbralisib was granted priority review for the marginal zone lymphoma (MZL) indication and orphan drug designation for the treatment of MZL and follicular lymphoma (FL). Society and culture Legal status In June 2022, due to safety concerns, the US Food and Drug Administration (FDA) withdrew its approval for Ukoniq (umbralisib). Updated findings from the UNITY-CLL clinical trial show a possible increased risk of death in people receiving Ukoniq. As a result, the FDA determined the risks of treatment with Ukoniq outweigh its benefits. 
Based upon this determination, the drug's manufacturer, TG Therapeutics, announced it was voluntarily withdrawing Ukoniq from the market for the approved uses in MZL and FL. References External links Phosphoinositide 3-kinase inhibitors Cancer treatments Orphan drugs Withdrawn drugs
Umbralisib
Chemistry
908
1,067,485
https://en.wikipedia.org/wiki/Bismanol
Bismanol is a magnetic alloy of bismuth and manganese (manganese bismuthide) developed by the US Naval Ordnance Laboratory. History Bismanol, a permanent magnet made from powder metallurgy of manganese bismuthide, was developed by the US Naval Ordnance Laboratory in the early 1950s – at the time of invention it was one of the highest coercive force permanent magnets available, at 3000 oersteds. Coercive force reached 3650 oersteds and magnetic flux density 4800 gauss by the mid-1950s. The material was generally strong, and stable to shock and vibration, but had a tendency to chip. Slow corrosion of the material occurred under normal conditions. The material was used to make permanent magnets for use in small electric motors. Bismanol magnets have been replaced by neodymium magnets, which are both cheaper and superior in other ways, by samarium-cobalt magnets in more critical applications, and by alnico magnets. References Magnetic alloys Ferromagnetic materials Bismuth alloys Manganese alloys
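For readers more used to SI units, the CGS figures above can be converted with 1 Oe = 1000/(4π) A/m ≈ 79.6 A/m and 1 G = 10⁻⁴ T (a conversion added here for orientation, taking the flux density figure to be in gauss, the usual CGS companion of the oersted):

$$3000\ \mathrm{Oe} \approx 2.4\times10^{5}\ \mathrm{A/m},\qquad 3650\ \mathrm{Oe} \approx 2.9\times10^{5}\ \mathrm{A/m},\qquad 4800\ \mathrm{G} = 0.48\ \mathrm{T}.$$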
Bismanol
Physics,Chemistry,Materials_science,Engineering
220
40,542,571
https://en.wikipedia.org/wiki/Equid%20alphaherpesvirus%209
Equine alphaherpesvirus 9 (EHV-9) is a species of virus in the genus Varicellovirus, subfamily Alphaherpesvirinae, family Herpesviridae, and order Herpesvirales. It was first isolated from a case of epizootic encephalitis in a herd of Thomson's gazelle (Gazella thomsoni) in 1993. Fatal encephalitis was reported from Thomson's gazelle, giraffe, and polar bear in natural infections. The virus was reported in an aborted Persian onager and a polar bear. References External links Varicelloviruses Animal viral diseases
Equid alphaherpesvirus 9
Biology
137
31,744,278
https://en.wikipedia.org/wiki/Spiraprilat
Spiraprilat is the active metabolite of spirapril. References External links ACE inhibitors Human drug metabolites 1,3-Dithiolanes
Spiraprilat
Chemistry
37
74,127,498
https://en.wikipedia.org/wiki/Arsene%20Tema%20Biwole
Arsene Tema Biwole is a Cameroonian nuclear engineer and plasma physicist at the Massachusetts Institute of Technology (MIT). Biography Early life and education Arsene Tema Biwole was born on June 15, 1992, at the "Camp Bamoun" - built during German colonisation - in Bafoussam, western Cameroon. Premature and ill during his childhood, he and his brothers were raised by a single mother of modest means. Arsene studied Newtonian physics in science books at home without electricity, using the light of a lamp. He studied nuclear engineering at the Polytechnic School of Turin, becoming the only Cameroonian engaged in this course. In April 2017, with a grant from the United States Department of Energy, he continued his research for a Master's thesis at General Atomics in San Diego, California, working in the company's Fusion Theory Group. Scientific career In 2017, Arsene Tema Biwole participated in the 59th Meeting of the American Physical Society Division of Plasma Physics with General Atomics, becoming the first Cameroonian to join both General Atomics and the Division of Plasma Physics of the American Physical Society. He holds a doctorate in physics, obtained at the École Polytechnique Fédérale de Lausanne, with a thesis titled "Measuring the electron energy distribution in tokamak plasmas from polarized electron cyclotron radiation". In June 2023, Arsene Tema Biwole joined the Massachusetts Institute of Technology (MIT) to work on the SPARC tokamak, operated by Commonwealth Fusion Systems in collaboration with the Massachusetts Institute of Technology (MIT) Plasma Science and Fusion Center (PSFC). Honors and distinctions Arsene Tema Biwole was cited by Jeune Afrique in 2018 as one of the most promising African scientists. In 2020, Arsene Tema Biwole won the Youth Excellence Prize in Cameroon and was designated Ambassador of the Youth Connekt Cameroon project. During a popular poll carried out by the online information platform Afrik-inform, Arsene Tema Biwole was designated as the favorite Cameroonian personality in the diaspora for the year 2020. From January to February 2021, he travelled through high schools and universities in Cameroon to promote science and encourage vocations among the youth. On February 10, 2021, Arsene Tema Biwole was cited by Paul Biya, President of the Republic of Cameroon, as a role model for the youth. In February 2021, Arsene Tema Biwole received, during a public address, congratulations and encouragement from Maurice Kamto for his ambitions and projects for Africa and humanity. Arsene Tema Biwole was the guest of "Actualités Hebdo", a weekly news program of CRTV, on February 14, 2021. During the program, Arsene discussed nuclear prospects in Africa and the issue of electrification in Cameroon. In March 2023, Arsene Tema Biwole defended his doctoral thesis in physics at the École Polytechnique Fédérale de Lausanne; the thesis was unanimously proposed by the jury for the EPFL doctoral program thesis prize. Honors EPFL Doctoral Program Thesis Distinction, 2023, Nominee. Excellence in Africa Ambassador of the Federal Polytechnic School of Lausanne. Knight of the Order of Cameroonian Merit by decree of August 31, 2021, signed by the President of the Republic of Cameroon. Banca Sella research award, 2016. EDISU Piemonte super merit student prize, 2012. Politecnico di Torino Distinguished academic achievement award, 2012. 
Notes and references See also Henri Hogbe Nlend 1992 births Living people Nuclear engineers Plasma physicists Cameroonian engineers People from Bafoussam École Polytechnique Fédérale de Lausanne alumni Massachusetts Institute of Technology people
Arsene Tema Biwole
Physics
767
3,869,419
https://en.wikipedia.org/wiki/Smooth%20infinitesimal%20analysis
Smooth infinitesimal analysis is a modern reformulation of the calculus in terms of infinitesimals. Based on the ideas of F. W. Lawvere and employing the methods of category theory, it views all functions as being continuous and incapable of being expressed in terms of discrete entities. As a theory, it is a subset of synthetic differential geometry. Terence Tao has referred to this concept under the name "cheap nonstandard analysis." The nilsquare or nilpotent infinitesimals are numbers ε where ε² = 0 is true, but ε = 0 need not be true at the same time. Calculus Made Easy notably uses nilpotent infinitesimals. Overview This approach departs from the classical logic used in conventional mathematics by denying the law of the excluded middle, e.g., NOT (a ≠ b) does not imply a = b. In particular, in a theory of smooth infinitesimal analysis one can prove for all infinitesimals ε, NOT (ε ≠ 0); yet it is provably false that all infinitesimals are equal to zero. One can see that the law of excluded middle cannot hold from the following basic theorem (again, understood in the context of a theory of smooth infinitesimal analysis): Every function whose domain is R, the real numbers, is continuous and infinitely differentiable. Despite this fact, one could attempt to define a discontinuous function f(x) by specifying that f(x) = 1 for x = 0, and f(x) = 0 for x ≠ 0. If the law of the excluded middle held, then this would be a fully defined, discontinuous function. However, there are plenty of x, namely the infinitesimals, such that neither x = 0 nor x ≠ 0 holds, so the function is not defined on the real numbers. In typical models of smooth infinitesimal analysis, the infinitesimals are not invertible, and therefore the theory does not contain infinite numbers. However, there are also models that include invertible infinitesimals. Other mathematical systems exist which include infinitesimals, including nonstandard analysis and the surreal numbers. Smooth infinitesimal analysis is like nonstandard analysis in that (1) it is meant to serve as a foundation for analysis, and (2) the infinitesimal quantities do not have concrete sizes (as opposed to the surreals, in which a typical infinitesimal is , where ω is a von Neumann ordinal). However, smooth infinitesimal analysis differs from nonstandard analysis in its use of nonclassical logic, and in lacking the transfer principle. Some theorems of standard and nonstandard analysis are false in smooth infinitesimal analysis, including the intermediate value theorem and the Banach–Tarski paradox. Statements in nonstandard analysis can be translated into statements about limits, but the same is not always true in smooth infinitesimal analysis. Intuitively, smooth infinitesimal analysis can be interpreted as describing a world in which lines are made out of infinitesimally small segments, not out of points. These segments can be thought of as being long enough to have a definite direction, but not long enough to be curved. The construction of discontinuous functions fails because a function is identified with a curve, and the curve cannot be constructed pointwise. We can imagine the intermediate value theorem's failure as resulting from the ability of an infinitesimal segment to straddle a line. Similarly, the Banach–Tarski paradox fails because a volume cannot be taken apart into points. 
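As a worked illustration (added here; it follows from the Kock–Lawvere axiom that underlies smooth infinitesimal analysis rather than from anything specific in the text above), nilsquare infinitesimals compute derivatives purely algebraically. For f(x) = x² and a nilsquare infinitesimal ε,

$$f(x+\varepsilon) = (x+\varepsilon)^{2} = x^{2} + 2x\varepsilon + \varepsilon^{2} = x^{2} + 2x\,\varepsilon,$$

since ε² = 0; and because the axiom supplies a unique coefficient b with f(x+ε) = f(x) + bε for all nilsquare ε, one reads off f'(x) = 2x exactly, with no limit taken.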
See also Category theory Non-standard analysis Synthetic differential geometry Dual number References Further reading John Lane Bell, Invitation to Smooth Infinitesimal Analysis (PDF file) Ieke Moerdijk and Reyes, G.E., Models for Smooth Infinitesimal Analysis, Springer-Verlag, 1991. External links Michael O'Connor, An Introduction to Smooth Infinitesimal Analysis Nonstandard analysis Mathematics of infinitesimals
Smooth infinitesimal analysis
Mathematics
832
31,907,362
https://en.wikipedia.org/wiki/SWAP%20protein%20domain
In molecular biology, the protein domain SWAP is derived from the term Suppressor-of-White-APricot, a splicing regulator from the model organism Drosophila melanogaster. The domain is found in splicing regulatory proteins. When a gene is expressed, the DNA must be transcribed into messenger RNA (mRNA). However, the transcript sometimes contains intervening or interrupting sequences named introns. mRNA splicing helps to remove these sequences, leaving a more favourable sequence. mRNA splicing is an essential event in the post-transcriptional modification process of gene expression. SWAP helps to control this process in all cells except gametes. Function The role of the protein domain SWAP is to control sex-independent pre-mRNA processing in somatic cells, that is, in every cell except the sex cells. This includes autoregulation, whereby it regulates the splicing of its own pre-mRNA. The mammalian homologue of SWAP acts as a thyroid hormone-regulated gene, meaning that its expression is controlled by thyroid hormone. Structure SWAP proteins share a colinearly arrayed series of novel sequence motifs. This means that they have been conserved over time. The SWAP proteins in different organisms share some sequence similarity and may have been related at some point in evolutionary history. References Protein families Protein domains Genetics
SWAP protein domain
Biology
279
31,975,691
https://en.wikipedia.org/wiki/Hymn%20to%20Enlil
The Hymn to Enlil, Enlil and the Ekur (Enlil A), Hymn to the Ekur, Hymn and incantation to Enlil, Hymn to Enlil the all beneficent or Excerpt from an exorcism is a Sumerian myth, written on clay tablets in the late third millennium BC. Compilation Fragments of the text were discovered in the University of Pennsylvania Museum of Archaeology and Anthropology catalogue of the Babylonian section (CBS) from their excavations at the temple library at Nippur. The myth was first published using tablet CBS 8317, translated by George Aaron Barton in 1918 as "Sumerian religious texts" in "Miscellaneous Babylonian Inscriptions", number ten, entitled "An excerpt from an exorcism". The tablet is at its thickest point. A larger fragment of the text was found on CBS tablet number 14152 and first published by Henry Frederick Lutz as "A hymn and incantation to Enlil" in "Selected Sumerian and Babylonian Texts", number 114 in 1919. Barton's tablet had only containted lines five to twenty four of the reverse of Lutz's, which had already been translated in 1918 and was used to complete several of his damaged lines. Edward Chiera published tablet CBS 7924B from the hymn in "Sumerian Epics and Myths". He also worked with Samuel Noah Kramer to publish three other tablets CBS 8473, 10226, 13869 in "Sumerian texts of varied contents" in 1934. The name given this time was "Hymn to the Ekur", suggesting the tablets were "parts of a composition which extols the ekur of Enlil at Nippur, it may, however be only an extract from a longer text". Further tablets were found to be part of the myth in the Hilprecht collection at the University of Jena, Germany, numbers 1530, 1531, 1532, 1749b, 2610, 2648a and b, 2665, 2685, 1576 and 1577. Further tablets containing the text were excavated at Isin, modern Ishan al-Bahriyat, tablet 923. Another was found amongst the texts in the Iraq Museum, tablet 44351a. Others are held in the collections of the Abbey of Montserrat in Barcelona and the Ashmolean in Oxford. Other translations were made from tablets in the Nippur collection of the Museum of the Ancient Orient in Istanbul (Ni). Samuel Noah Kramer amongst others worked to translate several others from the Istanbul collection including Ni 1039, 1180, 4005, 4044, 4150, 4339, 4377, 4584, 9563 and 9698. More were found at Henri de Genouillac's excavations at Kish (C 53). Another tablet of the myth (Si 231) was excavated at Sippar in the collections of the Istanbul Archaeological Museum. Sir Charles Leonard Woolley unearthed more tablets at Ur contained in the "Ur excavations texts" from 1928. Other tablets and versions were used to bring the myth to its present form with the latest translations presented by Thorkild Jacobsen, Miguel Civil and Joachim Krecher. Composition The hymn, noted by Kramer as one of the most important of its type, starts with praise for Enlil in his awe-inspiring dais: The hymn develops by relating Enlil founding and creating the origin of the city of Nippur and his organization of the earth. In contrast to the myth of Enlil and Ninlil where the city exists before creation, here Enlil is shown to be responsible for its planning and construction, suggesting he surveyed and drew the plans before its creation: The hymn moves on from the physical construction of the city and gives a description and veneration of its ethics and moral code: The last sentence has been compared by R. P. 
Gordon to the description of Jerusalem in the Book of Isaiah (), "the city of justice, righteousness dwelled in her" and in the Book of Jeremiah (), "O habitation of justice, and mountain of holiness." The myth continues with the city's inhabitants building a temple dedicated to Enlil, referred to as the Ekur. The priestly positions and responsibilities of the Ekur are listed along with an appeal for Enlil's blessings on the city, where he is regarded as the source of all prosperity: A similar passage to the last lines above has been noted in the Biblical Psalms () "The voice of the Lord makes hinds to calve and makes goats to give birth (too) quickly". The hymn concludes with further reference to Enlil as a farmer and praise for his wife, Ninlil: Andrew R. George suggested that the hymn to Enlil "can be incorporated into longer compositions" as with the Kesh temple hymn and "the hymn to temples in Ur that introduces a Shulgi hymn." Discussion The poetic form and laudatory content of the hymn have shown similarities to the Book of Psalms in the Bible, particularly Psalm 23 () "The Lord is my shepherd, I shall not want, he maketh me to lie down in green pastures." Line eighty four mentions: and in line ninety one, Enlil is referred to as a shepherd: The shepherd motif originating in this myth is also found describing Jesus in the Book of John (). Joan Westenholz noted that "The farmer image was even more popular than the shepherd in the earliest personal names, as might be expected in an agrarian society." She notes that both Falkenstein and Thorkild Jacobsen consider the farmer refers to the king of Nippur; Reisman has suggested that the farmer or 'engar' of the Ekur was likely to be Ninurta. The term appears in line sixty Wayne Horowitz discusses the use of the word abzu, normally used as a name for an abzu temple, god, cosmic place or cultic water basin. In the hymn to Enlil, its interior is described as a 'distant sea': The foundations of Enlil's temple are made of lapis lazuli, which has been linked to the "soham" stone used in the Book of Ezekiel () describing the materials used in the building of "Eden, the Garden of god" perched on "the mountain of the lord", Zion, and in the Book of Job () "The stones of it are the place of sapphires and it hath dust of gold". Moses also saw God's feet standing on a "paved work of a sapphire stone" in (). Precious stones are also later repeated in a similar context describing decoration of the walls of New Jerusalem in the Apocalypse (). Along with the Kesh Temple Hymn, Steve Tinney has identified the Hymn to Enlil as part of a standard sequence of scribal training scripts he refers to as the Decad. He suggested that "the Decad constituted a required program of literary learning, used almost without exception throughout Babylonia. The Decad thus included almost all literary types available in Sumerian." See also Barton Cylinder Debate between Winter and Summer Debate between sheep and grain Enlil and Ninlil Eridu Genesis Old Babylonian oracle Kesh temple hymn Self-praise of Shulgi (Shulgi D) Lament for Ur Sumerian religion Sumerian literature References Further reading Falkenstein, Adam, Sumerische Götterlieder (Abhandlungen der Heidelberger Akademie der Wissenschaften, Phil.-hist. Kl., Jahrgang 1959, 1. Abh.). Carl Winter UniversitätsVerlag: Heidelberg, 5-79, 1959. Jacobsen, Thorkild, The Harps that Once ... Sumerian Poetry in Translation. Yale University Press: New Haven/London, 151-166: translation, pp 101–111, 1987. 
Reisman, Daniel David, Two Neo-Sumerian Royal Hymns (Ph.D. dissertation). University of Pennsylvania: Philadelphia, 41-102, 1970. Römer, W.H.Ph., 'Review of Jacobsen 1987', Bibliotheca Orientalis 47, 382-390, 1990. External links Barton, George Aaron., Miscellaneous Babylonian Inscriptions, Yale University Press, 1918. Online Version Lutz, Frederick Henry., Selected Sumerian and Babylonian texts, The University Museum, pp. 54-. Online Version Cheira, Edward., Sumerian Epics and Myths, University of Chicago, Oriental Institute Publications, 1934. Online Version Chiera, Edward and Kramer, Samuel Noah., Sumerian texts of varied contents, Number 116, University of Chicago Oriental Institute Publications Volume XVI, Cuneiform series - volume IV, 1934. - Online Version Enlil and the Ekur (Enlil A)., Black, J.A., Cunningham, G., Robson, E., and Zólyomi, G., The Electronic Text Corpus of Sumerian Literature, Oxford 1998-. Enlil A - ETCSL composite text Cuneiform Digital Library Initiative - CBS 08317 Enlil in the Ekur - set to music on Youtube 3rd-millennium BC literature 1918 archaeological discoveries Hymns Sumerian literature Clay tablets Mesopotamian myths Mythological mountains Creation myths Religious cosmologies Comparative mythology Ancient Near East wisdom literature Kish (Sumer) Ur Isin
Hymn to Enlil
Astronomy
1,953
2,938,012
https://en.wikipedia.org/wiki/Constraint-based%20Routing%20Label%20Distribution%20Protocol
Constraint-based Routing Label Distribution Protocol (CR-LDP) is a control protocol used in some computer networks. As of February 2003, the IETF MPLS working group deprecated CR-LDP and decided to focus purely on RSVP-TE. It is an extension of the Label Distribution Protocol (LDP), one of the protocols in the Multiprotocol Label Switching architecture. CR-LDP adds extensions to LDP that extend its capabilities, such as setting up paths beyond what is available through the routing protocol alone. For instance, a label-switched path can be set up based on explicit route constraints, quality of service constraints, and other constraints. Constraint-based routing (CR) is a mechanism used to meet traffic engineering requirements. These requirements are met by extending LDP to support constraint-based routed label-switched paths (CR-LSPs). Other uses for CR-LSPs include MPLS-based virtual private networks. In packet structure, CR-LDP is almost the same as basic LDP, but it carries some extra TLVs which set up the constraint-based LSP. References MPLS networking Network protocols
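To make the TLV remark concrete, here is a minimal Python sketch of packing a generic LDP-style TLV (a 1-bit U flag, 1-bit F flag, 14-bit type and 16-bit length, as in the base LDP encoding that CR-LDP's constraint TLVs reuse). The type code and payload below are placeholders, not actual CR-LDP assignments:

```python
import struct

def encode_ldp_tlv(tlv_type: int, value: bytes, u_bit: int = 0, f_bit: int = 0) -> bytes:
    """Pack an LDP-style TLV: |U|F| 14-bit type | 16-bit length | value octets."""
    if not 0 <= tlv_type < (1 << 14):
        raise ValueError("type field is only 14 bits wide")
    first_word = (u_bit << 15) | (f_bit << 14) | tlv_type
    return struct.pack("!HH", first_word, len(value)) + value

# 0x0321 is a placeholder type code and the payload is dummy data,
# used only to show the framing -- not a real CR-LDP TLV assignment.
tlv = encode_ldp_tlv(0x0321, b"\x01\x02\x03\x04")
print(tlv.hex())    # -> 0321000401020304
```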
Constraint-based Routing Label Distribution Protocol
Technology
239
27,154,688
https://en.wikipedia.org/wiki/Brian%20Jackson%20%28game%20designer%29
Brian Jackson (born 2 November 1972) is an American video game designer, having been in the video game industry since 1995. He has helped produce games for Electronic Arts, Microsoft, Bethesda Softworks, and Nerjyzed Entertainment. Jackson has served as a designer for BCFx, The Elder Scrolls IV: Oblivion, IHRA Drag Racing – Sportsman Edition, NFL Fever, NBA Inside Drive 2004, NCAA March Madness, Madden NFL, and Viewpoint. Jackson received a Bachelor of Business Administration in Computer Based Information Systems from Howard University in 1992, and is a member of Alpha Phi Alpha fraternity. Jackson, with Nerjyzed Entertainment CEO Jackie Beauchamp, was credited in the November 26, 2007 issue of Jet Magazine for helping create the first Black College Football video game. Jackson was featured in a January 2001, article in US Black Engineer: “Their Work Is All Play, Turning Pastimes into Careers in the Video Games Industry.” In September 1998, in an article entitled: “Who Got Game?" Source Magazine credited Jackson and designer Rob Jones with helping make John Madden Football appealing to the Hip Hop community. References American video game designers 1972 births Living people
Brian Jackson (game designer)
Technology
240
69,524,998
https://en.wikipedia.org/wiki/Spectral%20submanifold
In dynamical systems, a spectral submanifold (SSM) is the unique smoothest invariant manifold serving as the nonlinear extension of a spectral subspace of a linear dynamical system under the addition of nonlinearities. SSM theory provides conditions for when invariant properties of eigenspaces of a linear dynamical system can be extended to a nonlinear system, and therefore motivates the use of SSMs in nonlinear dimensionality reduction. SSMs are chiefly employed for the exact model reduction of dynamical systems. For the automated computation of SSMs and the analysis of the reduced dynamics, open source online software packages such as SSMTool and SSMLearn have been published. These tools allow system dynamics to be studied either from the underlying equations of motion or from trajectory data, supporting both analytical and data-driven approaches. Detailed documentation for SSMTool is provided online. Definition Consider a nonlinear ordinary differential equation of the form dx/dt = Ax + f(x), with constant matrix A and the nonlinearities contained in the smooth function f. Assume that Re(λ) < 0 for all eigenvalues λ of A, that is, the origin is an asymptotically stable fixed point. Now select a span E of eigenvectors of A. Then, the eigenspace E is an invariant subspace of the linearized system dx/dt = Ax. Under addition of the nonlinearity f to the linear system, E generally perturbs into infinitely many invariant manifolds. Among these invariant manifolds, the unique smoothest one is referred to as the spectral submanifold. An equivalent result for unstable SSMs holds for Re(λ) > 0. Existence The spectral submanifold tangent to E at the origin is guaranteed to exist provided that certain non-resonance conditions are satisfied by the eigenvalues of A associated with E. In particular, there can be no integer combination of the eigenvalues associated with E equal to one of the eigenvalues of A outside of the spectral subspace. If there is such an outer resonance, one can include the resonant mode into E and extend the analysis to a higher-dimensional SSM pertaining to the extended spectral subspace. Non-autonomous extension The theory on spectral submanifolds extends to nonlinear non-autonomous systems of the form dx/dt = Ax + f(x) + g(x, t), with a quasiperiodic forcing term g. Significance Spectral submanifolds are useful for rigorous nonlinear dimensionality reduction in dynamical systems. The reduction of a high-dimensional phase space to a lower-dimensional manifold can lead to major simplifications by allowing for an accurate description of the system's main asymptotic behaviour. For a known dynamical system, SSMs can be computed analytically by solving the invariance equations, and reduced models on SSMs may be employed for prediction of the response to forcing. Furthermore, these manifolds may also be extracted directly from trajectory data of a dynamical system with the use of machine learning algorithms. See also Invariant manifold Nonlinear dimensionality reduction Lagrangian coherent structure References External links Tool for automated SSM computation Dynamical systems
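As a small, purely illustrative complement to the definition above (this covers only the linear ingredient; computing the SSM itself requires the nonlinear machinery in tools such as SSMTool or SSMLearn), the slow spectral subspace E of the linearization can be picked out with a few lines of numpy. The matrix used here is a made-up toy example:

```python
import numpy as np

# Toy linear part of x' = A x + f(x): a lightly damped slow pair and a fast mode.
A = np.array([[ 0.0,   1.0,   0.0],
              [-1.0,  -0.02,  0.0],
              [ 0.0,   0.0, -50.0]])

eigvals, eigvecs = np.linalg.eig(A)
order = np.argsort(-eigvals.real)       # slowest-decaying eigenvalues first
v = eigvecs[:, order[0]]                # eigenvector of the slow complex pair

# Real basis of the two-dimensional slow spectral subspace E; the slow SSM of
# the nonlinear system is the smoothest invariant manifold tangent to E at 0.
E = np.column_stack([v.real, v.imag])
print("eigenvalues (slow to fast):", np.round(eigvals[order], 3))
print("basis of E:\n", np.round(E, 3))
```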
Spectral submanifold
Physics,Mathematics
585
55,568,980
https://en.wikipedia.org/wiki/Origins%20Space%20Telescope
Origins Space Telescope (Origins) is a concept study for a far-infrared survey space telescope mission. A preliminary concept in pre-formulation, it was presented to the United States Decadal Survey in 2019 for a possible selection to NASA's large strategic science missions. Origins would provide an array of new tools for studying star formation and the energetics and physical state of the interstellar medium within the Milky Way using infrared radiation and new spectroscopic capabilities. Study groups, primarily composed of international community members, prioritized the science identification and science drivers of the mission architecture. The study groups drew upon input from the international astronomical community; such a large mission will need international participation and support to make it a reality. Overview In 2016, NASA began considering four different space telescopes for the Large strategic science missions; they are the Habitable Exoplanet Imaging Mission (HabEx), Large Ultraviolet Optical Infrared Surveyor (LUVOIR), Origins Space Telescope (Origins), and Lynx X-ray Observatory. In 2019, the four teams turned in their final reports to the National Academy of Sciences, whose independent Astronomy and Astrophysics Decadal Survey report advises NASA on which mission should take top priority. If funded, Origins would launch in approximately 2035. An evolving concept The Roadmap envisaged a mid- to far-infrared space telescope (contrasting with the near- to mid-infrared James Webb Space Telescope) with a large gain in sensitivity over the Herschel Space Observatory (a previous far-infrared telescope), and better angular resolution with at least a four-order of magnitude sensitivity improvement over Herschel. The mission development relies on the identification of primary science drivers to establish the technical requirements for the observatory. The workgroups have identified these baseline science topics: Cosmic dawn and reionization Evolution of galaxies and black holes Volume of local galaxies and the Milky Way Interstellar medium Protoplanetary disks, planet formation, exoplanets, star formation, and evolved stars The Solar System. Water transport Early and preliminary goals for the Origins Space Telescope mission include the study of water transport as both ice and gas from the interstellar medium to the inner regions of planet-forming disks, from interstellar clouds, to protoplanetary disks, to Earth itself—in order to understand the abundance and availability of water for habitable planets. In the Solar System, it will chart the role of comets in delivering water to the early Earth by tracing their molecular heredity of deuterium/hydrogen ratio. Preliminary characteristics The Origins Space Telescope would perform astrometry and astrophysics in the mid- to far-infrared range using a telescope with an aperture of 9.1 m (concept 1) or 5.9 m (concept 2). The telescope will require cryocooler systems to actively cool detectors at ~50 mK and the telescope optics at ~4 K. It will attain sensitivities 100–1000 times greater than any previous far-infrared telescope. Targeting exoplanet observations in the 3.3–25 μm wavelength range, it will measure the temperatures and search for basic chemical ingredients for life in the atmospheres of small, warm planets at habitable temperatures (~) and measure their atmospheric composition. This may be accomplished by a combination of transit spectroscopy and direct coronagraphic imaging. 
Important atmospheric diagnostics include spectral bands of ammonia (NH3, a unique tracer of nitrogen), the 9 μm ozone line (ozone is a key biosignature), the 15 μm carbon dioxide band (CO2 is an important greenhouse gas), and many water wavelength bands. Its spectrographs will enable 3D surveys of the sky that will discover and characterize the most distant galaxies, the Milky Way, exoplanets, and the outer reaches of the Solar System. Preliminary payload Based on the final report, three instruments are required, plus a fourth optional upscope: a far-infrared imaging polarimeter; a mid-infrared instrument for exoplanet transit spectroscopy; a versatile far-infrared spectrometer with wide-field low-resolution or single-beam high-resolution capability; and a very high-resolution heterodyne spectrometer. References Space telescopes Infrared telescopes Exoplanetology Exoplanet search projects Proposed NASA space probes
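For orientation, a back-of-the-envelope estimate (added here, not a figure from the mission study): the diffraction-limited angular resolution of an aperture D at wavelength λ scales as θ ≈ 1.22 λ/D, so for the 5.9 m concept at an illustrative far-infrared wavelength of 100 μm

$$\theta \approx 1.22\,\frac{\lambda}{D} = 1.22 \times \frac{100\times10^{-6}\ \mathrm{m}}{5.9\ \mathrm{m}} \approx 2.1\times10^{-5}\ \mathrm{rad} \approx 4.3'' ,$$

compared with roughly 7.2'' for Herschel's 3.5 m mirror at the same wavelength, which is where the claimed gain in angular resolution over Herschel comes from.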
Origins Space Telescope
Astronomy
863
9,704,027
https://en.wikipedia.org/wiki/Ongoing%20reliability%20test
The ongoing reliability test (ORT) is a hardware test process usually used in manufacturing to ensure that the quality of the products still meets the same specifications as on the day the product first went to production or general availability. The products currently in the manufacturing line are randomly picked every day at a predefined percentage or number and then put in a control drop tower or an environmental chamber. The control drop simulates physical interactions on the product, while the environmental chamber simulates the stress profile of thermal cycling, elevated temperature, or combined environmental stresses to induce fatigue damage. The profile should stimulate the precipitation of latent defects that may be introduced from the manufacturing process but not remove significant life from the product or introduce flaws that risk failure during its intended mission. A highly accelerated stress test is an ongoing reliability test that uses the empirical operational limits as the reference for the combined vibration, thermal cycling, and other stresses applied to find latent defects. The quality of the products is then measured with the results of this test. If a unit fails, it goes under investigation to determine what caused the failure and then to remove the cause, whether it came from an assembly process, from a component being incorrectly manufactured, or from any other cause. If it is proven that a real failure has occurred, the batch of units that was produced along with the failed unit is then tagged for re-test or repair to either verify or fix the problem. External links OPS A La Carte's Reliability Services in the Manufacturing Phase On-Going [sic] Reliability Testing (ORT) Sample OEM contracts with contract manufacturers (CM) which specify ORT to be a standard process (see section 7.8) Accelerated Reliability Engineering: HALT and HASS, Gregg K. Hobbs, John Wiley & Sons Ltd., 2000. Statistical process control Hardware testing
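The daily random pick described above can be illustrated with a few lines of Python; the 2% rate, minimum of one unit, and serial numbers below are invented for the example, since a real ORT sampling plan is set by the product's reliability requirements:

```python
import random

def pick_ort_sample(todays_serials, percent=2.0, minimum=1):
    """Randomly pick today's ORT units from the day's production run.

    The 2% default and the minimum of one unit are illustrative values only;
    a real sampling plan follows the product's reliability requirements.
    """
    n = max(minimum, round(len(todays_serials) * percent / 100.0))
    n = min(n, len(todays_serials))
    return random.sample(todays_serials, n)

serials = [f"SN{1000 + i}" for i in range(480)]   # hypothetical day's build
print(pick_ort_sample(serials))                   # e.g. 10 units for the chamber
```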
Ongoing reliability test
Engineering
361
1,117,290
https://en.wikipedia.org/wiki/Ionic%20liquid
An ionic liquid (IL) is a salt in the liquid state at ambient conditions. In some contexts, the term has been restricted to salts whose melting point is below a specific temperature, such as . While ordinary liquids such as water and gasoline are predominantly made of electrically neutral molecules, ionic liquids are largely made of ions. These substances are variously called liquid electrolytes, ionic melts, ionic fluids, fused salts, liquid salts, or ionic glasses. Ionic liquids have many potential applications. They are powerful solvents and can be used as electrolytes. Salts that are liquid at near-ambient temperature are important for electric battery applications, and have been considered as sealants due to their very low vapor pressure. Any salt that melts without decomposing or vaporizing usually yields an ionic liquid. Sodium chloride (NaCl), for example, melts at into a liquid that consists largely of sodium cations () and chloride anions (). Conversely, when an ionic liquid is cooled, it often forms an ionic solid—which may be either crystalline or glassy. The ionic bond is usually stronger than the Van der Waals forces between the molecules of ordinary liquids. Because of these strong interactions, salts tend to have high lattice energies, manifested in high melting points. Some salts, especially those with organic cations, have low lattice energies and thus are liquid at or below room temperature. Examples include compounds based on the 1-ethyl-3-methylimidazolium (EMIM) cation and include: EMIM:Cl, EMIMAc (acetate anion), EMIM dicyanamide, ()()·, that melts at ; and 1-butyl-3,5-dimethylpyridinium bromide which becomes a glass below . Low-temperature ionic liquids can be compared to ionic solutions, liquids that contain both ions and neutral molecules, and in particular to the so-called deep eutectic solvents, mixtures of ionic and non-ionic solid substances which have much lower melting points than the pure compounds. Certain mixtures of nitrate salts can have melting points below 100 °C. History The term "ionic liquid" in the general sense was used as early as 1943. The discovery date of the "first" ionic liquid is disputed, along with the identity of its discoverer. Ethanolammonium nitrate (m.p. 52–55 °C) was reported in 1888 by S. Gabriel and J. Weiner. In 1911 Ray and Rakshit, during preparation of the nitrite salts of ethylamine, dimethylamine, and trimethylamine observed that the reaction between ethylamine hydrochloride and silver nitrate yielded an unstable ethylammonium nitrite ()· , a heavy yellow liquid which on immersion in a mixture of salt and ice could not be solidified and was probably the first report of room-temperature ionic liquid. Later in 1914, Paul Walden reported one of the first stable room-temperature ionic liquids ethylammonium nitrate ()· (m.p. 12 °C). In the 1970s and 1980s, ionic liquids based on alkyl-substituted imidazolium and pyridinium cations, with halide or tetrahalogenoaluminate anions, were developed as potential electrolytes in batteries. For the imidazolium halogenoaluminate salts, their physical properties—such as viscosity, melting point, and acidity—could be adjusted by changing the alkyl substituents and the imidazolium/pyridinium and halide/halogenoaluminate ratios. Two major drawbacks for some applications were moisture sensitivity and acidity or basicity. 
In 1992, Wilkes and Zaworotko obtained ionic liquids with 'neutral' weakly coordinating anions such as hexafluorophosphate (PF6) and tetrafluoroborate (BF4), allowing a much wider range of applications. Characteristics ILs are typically colorless viscous liquids. They are often moderate to poor conductors of electricity, and rarely self-ionize. They do, however, have a very large electrochemical window, enabling electrochemical refinement of otherwise intractable ores. They exhibit low vapor pressure, which can be as low as 10−10 Pa. Many have low combustibility and are thermally stable. The solubility properties of ILs are diverse. Saturated aliphatic compounds are generally only sparingly soluble in ionic liquids, whereas alkenes show somewhat greater solubility, and aldehydes are often completely miscible. Solubility differences can be exploited in biphasic catalysis, such as hydrogenation and hydrocarbonylation processes, allowing for relatively easy separation of products and/or unreacted substrate(s). Gas solubility follows the same trend, with carbon dioxide gas showing good solubility in many ionic liquids. Carbon monoxide is less soluble in ionic liquids than in many popular organic solvents, and hydrogen is only slightly soluble (similar to the solubility in water) and may vary relatively little between the more common ionic liquids. Many classes of chemical reactions can be performed using ionic liquids as solvents. The miscibility of ionic liquids with water or organic solvents varies with side chain lengths on the cation and with choice of anion. They can be functionalized to act as acids, bases, or ligands, and are precursor salts in the preparation of stable carbenes. Because of their distinctive properties, ionic liquids have been investigated for many applications. Some ionic liquids can be distilled under vacuum conditions at temperatures near 300 °C. The vapor is not made up of separated ions, but consists of ion pairs. ILs have a wide liquid range. Some ILs do not freeze down to very low temperatures (even −150 °C). The glass transition temperature was detected below −100 °C in the case of N-methyl-N-alkylpyrrolidinium cations with the fluorosulfonyl-trifluoromethanesulfonylimide (FTFSI) anion. Low-temperature ionic liquids (below 130 K) have been proposed as the fluid base for an extremely large diameter spinning liquid-mirror telescope to be based on the Moon. Water is a common impurity in ionic liquids, as it can be absorbed from the atmosphere and influences the transport properties of RTILs, even at relatively low concentrations. Varieties Classically, ILs consist of salts of unsymmetrical, flexible organic cations with symmetrical weakly coordinating anions. Both cationic and anionic components have been widely varied. Cations Room-temperature ionic liquids (RTILs) are dominated by salts derived from 1-methylimidazole, i.e., 1-alkyl-3-methylimidazolium. Examples include 1-ethyl-3-methyl- (EMIM), 1-butyl-3-methyl- (BMIM), 1-octyl-3-methyl- (OMIM), 1-decyl-3-methyl- (DMIM), 1-dodecyl-3-methyl- (dodecylMIM). Other imidazolium cations are 1-butyl-2,3-dimethylimidazolium (BMMIM or DBMIM) and 1,3-di(N,N-dimethylaminoethyl)-2-methylimidazolium (DAMI). Other N-heterocyclic cations are derived from pyridine: 4-methyl-N-butyl-pyridinium (MBPy) and N-octylpyridinium (C8Py). Conventional quaternary ammonium cations also form ILs, e.g. tetraethylammonium (TEA) and tetrabutylammonium (TBA). 
Anions Typical anions in ionic liquids include the following: tetrafluoroborate (BF4), hexafluorophosphate (PF6), bis-trifluoromethanesulfonimide (NTf2), trifluoromethanesulfonate (OTf), dicyanamide (N(CN)2), hydrogensulfate (), and ethyl sulfate (EtOSO3). Magnetic ionic liquids can be synthesized by incorporating paramagnetic anions, illustrated by 1-butyl-3-methylimidazolium tetrachloroferrate. Specialized ILs Protic ionic liquids are formed via a proton transfer from an acid to a base. In contrast to other ionic liquids, which generally are formed through a sequence of synthesis steps, protic ionic liquids can be created more easily by simply mixing the acid and base. Phosphonium cations (R4P+) are less common but offer some advantageous properties. Some examples of phosphonium cations are trihexyl(tetradecyl)phosphonium (P6,6,6,14) and tributyl(tetradecyl)phosphonium (P4,4,4,14). Poly(ionic liquid)s Polymerized ionic liquids, poly(ionic liquid)s or polymeric ionic liquids, all abbreviated as PIL is the polymeric form of ionic liquids. They have half of the ionicity of ionic liquids since one ion is fixed as the polymer moiety to form a polymeric chain. PILs have a similar range of applications, comparable with those of ionic liquids but the polymer architecture provides a better chance for controlling the ionic conductivity. They have extended the applications of ionic liquids for designing smart materials or solid electrolytes. Commercial applications Many applications have been considered, but few have been commercialized. ILs are used in the production of gasoline by catalyzing alkylation. An IL based on tetraalkylphosphonium iodide is a solvent for tributyltin iodide, which functions as a catalyst to rearrange the monoepoxide of butadiene. This process was commercialized as a route to 2,5-dihydrofuran, but later discontinued. Potential applications Catalysis ILs improve the catalytic performance of palladium nanoparticles. Furthermore, ionic liquids can be used as pre-catalysts for chemical transformations. In this regard dialkylimidazoliums such as [EMIM]Ac have been used in the combination with a base to generate N-heterocyclic carbenes (NHCs). These imidazolium based NHCs are known to catalyse a number transformations such as the benzoin condensation and the OTHO reaction. Pharmaceuticals Recognizing that approximately 50% of commercial pharmaceuticals are salts, ionic liquid forms of a number of pharmaceuticals have been investigated. Combining a pharmaceutically active cation with a pharmaceutically active anion leads to a Dual Active ionic liquid in which the actions of two drugs are combined. ILs can extract specific compounds from plants for pharmaceutical, nutritional and cosmetic applications, such as the antimalarial drug artemisinin from the plant Artemisia annua. Biopolymer processing The dissolution of cellulose by ILs has attracted interest. A patent application from 1930 showed that 1-alkylpyridinium chlorides dissolve cellulose. Following in the footsteps of the lyocell process, which uses hydrated N-methylmorpholine N-oxide as a solvent for pulp and paper. The "valorization" of cellulose, i.e. its conversion to more valuable chemicals, has been achieved by the use of ionic liquids. Representative products are glucose esters, sorbitol, and alkylgycosides. IL 1-butyl-3-methylimidazolium chloride dissolves freeze-dried banana pulp and with an additional 15% dimethyl sulfoxide, lends itself to carbon-13 NMR analysis. 
In this way the entire complex of starch, sucrose, glucose, and fructose can be monitored as a function of banana ripening. Beyond cellulose, ILs have also shown potential in the dissolution, extraction, purification, processing and modification of other biopolymers such as chitin/chitosan, starch, alginate, collagen, gelatin, keratin, and fibroin. For example, ILs allow for the preparation of biopolymer materials in different forms (e.g. sponges, films, microparticles, nanoparticles, and aerogels) and facilitate chemical reactions of biopolymers, leading to biopolymer-based drug/gene-delivery carriers. Moreover, ILs enable the synthesis of chemically modified starches with high efficiency and degrees of substitution (DS) and the development of various starch-based materials such as thermoplastic starch, composite films, solid polymer electrolytes, nanoparticles and drug carriers. Nuclear fuel reprocessing The IL 1-butyl-3-methylimidazolium chloride has been investigated for the recovery of uranium and other metals from spent nuclear fuel and other sources. Solar thermal energy ILs are potential heat transfer and storage media in solar thermal energy systems. Concentrating solar thermal facilities such as parabolic troughs and solar power towers focus the sun's energy onto a receiver, where very high temperatures are generated. This heat can then generate electricity in a steam or other cycle. For buffering during cloudy periods or to enable generation overnight, energy can be stored by heating an intermediate fluid. Although nitrate salts have been the medium of choice since the early 1980s, they freeze at relatively high temperatures and thus require heating to prevent solidification. Ionic liquids such as [C4mim][] have more favorable liquid-phase temperature ranges (−75 to 459 °C) and could therefore be excellent liquid thermal storage media and heat transfer fluids. Waste recycling ILs can aid the recycling of synthetic goods, plastics, and metals. They offer the specificity required to separate similar compounds from each other, such as separating polymers in plastic waste streams. This has been achieved using lower temperature extraction processes than current approaches and could help avoid incinerating plastics or dumping them in landfill. Batteries ILs can replace water as the electrolyte in metal-air batteries. ILs are attractive because of their low vapor pressure. Furthermore, ILs have an electrochemical window of up to six volts (versus 1.23 V for water), supporting more energy-dense metals. Energy densities from 900 to 1600 watt-hours per kilogram appear possible. Dispersing agent ILs can act as dispersing agents in paints to enhance finish, appearance, and drying properties. ILs are used for dispersing nanomaterials at IOLITEC. Carbon capture ILs and amines have been investigated for capturing carbon dioxide and purifying natural gas. Tribology Some ionic liquids have been shown to reduce friction and wear in basic tribological testing, and their polar nature makes them candidate lubricants for tribotronic applications. While the comparatively high cost of ionic liquids currently prevents their use as neat lubricants, adding ionic liquids in concentrations as low as 0.5 wt% may significantly alter the lubricating performance of conventional base oils. Thus, the current focus of research is on using ionic liquids as additives to lubricating oils, often with the motivation to replace widely used, ecologically harmful lubricant additives. 
However, the claimed ecological advantage of ionic liquids has been questioned repeatedly and is yet to be demonstrated from a life-cycle perspective. Safety Ionic liquids' low volatility effectively eliminates a major pathway for environmental release and contamination. Ionic liquids' aquatic toxicity is as severe as, or more severe than, that of many current solvents. Ultrasound can degrade solutions of imidazolium-based ionic liquids with hydrogen peroxide and acetic acid to relatively innocuous compounds. Despite their low vapor pressure, many ionic liquids are combustible. See also 1-Butyl-3-methylimidazolium hexafluorophosphate (BMIM-PF6) for an often encountered ionic liquid Ionic liquids in carbon capture Ion gel Ioliomics, or studies of ions in liquids MDynaMix software for ionic liquids simulations Molten salt nanoFlowcell which uses ionic liquid in its car batteries Trioctylmethylammonium bis(trifluoromethylsulfonyl)imide Further reading References External links Ionic Liquids Biological Effects Database, a free database on toxicology and ecotoxicology of ionic liquids Corresponding states for ionic fluids Ions
Ionic liquid
Physics,Chemistry
3,457
77,676,512
https://en.wikipedia.org/wiki/List%20of%20star%20systems%20within%20400%E2%80%93450%20light-years
This is a list of star systems within 400–450 light-years of Earth. See also List of star systems within 350–400 light-years List of star systems within 450–500 light-years References Lists by distance Star systems Lists of stars
List of star systems within 400–450 light-years
Physics,Astronomy
51
6,019
https://en.wikipedia.org/wiki/Computational%20chemistry
Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, it can occasionally predict unobserved chemical phenomena. Overview Computational chemistry differs from theoretical chemistry, which involves a mathematical description of chemistry. However, computational chemistry involves the usage of computer programs and additional mathematical skills in order to accurately model various chemical problems. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. Historically, computational chemistry has had two different aspects: Computational studies, used to find a starting point for a laboratory synthesis or to assist in understanding experimental data, such as the position and source of spectroscopic peaks. Computational studies, used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments. These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms. History Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow. With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One significant advancement was marked by Clemens C. J. Roothaan's 1951 paper in the Reviews of Modern Physics. This paper focused largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals). For many years, it was the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. 
The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer. In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO. In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, alongside many other programs developed since. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger. One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980. Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems". Applications There are several fields within computational chemistry. The prediction of the molecular structure of molecules by the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the positions of the nuclei are varied. Storing and searching for data on chemical entities (see chemical databases). Identifying correlations between chemical structures and properties (see quantitative structure–property relationship (QSPR) and quantitative structure–activity relationship (QSAR)). Computational approaches to help in the efficient synthesis of compounds. Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis). These fields can give rise to several applications as shown below. Catalysis Computational chemistry is a tool for analyzing catalytic systems without doing experiments. 
Modern electronic structure theory and density functional theory have allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties. Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions. Drug development Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by giving predictions of which experiments would be best to do without conducting other experiments. Computational methods can also find values that are difficult to find experimentally, such as the pKa's of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs. Aside from drug synthesis, drug carriers based on nanomaterials are also researched by computational chemists. Simulations allow researchers to model environments to test the effectiveness and stability of drug carriers. Understanding how water interacts with these nanomaterials ensures stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them. Computational chemistry databases Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data. Empirical data helps researchers validate their methods and basis sets and gives them greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry. Databases can also use purely calculated data. Purely calculated data uses calculated values over experimental values for databases. Purely calculated data avoids having to adjust for different experimental conditions, such as zero-point energy. These calculations can also avoid experimental errors for molecules that are difficult to test. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data. Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Some publicly available chemistry databases include the following. 
BindingDB: Contains experimental information about protein-small molecule interactions. RCSB: Stores publicly available 3D models of macromolecules (proteins, nucleic acids) and small molecules (drugs, inhibitors) ChEMBL: Contains data from research on drug development such as assay results. DrugBank: Data about mechanisms of drugs can be found here. Methods Ab initio method The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called ab initio methods. A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made). Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. A common type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones. In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface. Computational thermochemistry A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. 
To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods. Chemical dynamics After the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. The most popular methods for propagating the wave packet associated with the molecular geometry are: the Chebyshev (real) polynomial, the multi-configuration time-dependent Hartree method (MCTDH), the semiclassical method and the split operator technique explained below. Split operator technique How a computational method solves quantum equations impacts the accuracy and efficiency of the method. The split operator technique is one of these methods for solving differential equations. In computational chemistry, the split operator technique reduces the computational cost of simulating chemical systems. Computational costs are about how much time it takes for computers to calculate these chemical systems, as it can take days for more complex systems. Quantum systems are difficult and time-consuming to solve for humans. Split operator methods help computers calculate these systems quickly by solving the subproblems in a quantum differential equation. The method does this by separating the differential equation into two simpler equations (or more, when there are more than two operators). Once solved, the split equations are combined into one equation again to give an easily calculable solution. This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, a formal solution involving the exponential of a sum of two operators, exp(h(A + B)), can be split into the product exp(hA)exp(hB); the split solution will not be exact, only similar. This is an example of first-order splitting. There are ways to reduce this error, which include taking an average of two split equations. Another way to increase accuracy is to use higher-order splitting. Usually, second-order splitting is the most that is done because higher-order splitting requires much more time to calculate and is not worth the cost. Higher-order methods become too difficult to implement, and are not useful for solving differential equations despite the higher accuracy. Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost. Choosing and calibrating such methods is a major challenge for many chemists trying to simulate molecules or chemical environments. Density functional methods Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. 
In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods. Semi-empirical methods Semi-empirical quantum chemistry methods are based on the Hartree–Fock method formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics. Molecular mechanics In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance, proteins, would be expected to have relevance only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules. Molecular dynamics Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion to examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of particles vary with time. The phase point of a system described by the positions and momenta of all its particles at a previous time point determines the next phase point in time by integrating over Newton's laws of motion. Monte Carlo Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method, which makes use of so-called importance sampling. Importance sampling methods preferentially generate low-energy states, which enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms. 
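As a concrete, hedged illustration of the Monte Carlo approach just described, the following minimal sketch performs Metropolis importance sampling of a small cluster of particles interacting through a Lennard-Jones pair potential. The function names, particle count, and reduced-unit parameters are illustrative assumptions rather than features of any particular simulation package.

```python
import random
import math

def lj_energy(coords, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a set of 3D points (reduced units)."""
    energy = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r2 = sum((coords[i][k] - coords[j][k]) ** 2 for k in range(3))
            sr6 = (sigma * sigma / r2) ** 3
            energy += 4.0 * epsilon * (sr6 * sr6 - sr6)
    return energy

def metropolis_step(coords, beta, max_move=0.1):
    """Displace one random particle; accept or reject via the Metropolis criterion."""
    old_energy = lj_energy(coords)
    i = random.randrange(len(coords))
    old_pos = coords[i][:]
    coords[i] = [x + random.uniform(-max_move, max_move) for x in old_pos]
    new_energy = lj_energy(coords)
    # Accept downhill moves always; accept uphill moves with Boltzmann probability.
    if new_energy > old_energy and random.random() >= math.exp(-beta * (new_energy - old_energy)):
        coords[i] = old_pos  # reject: restore the previous position
        return old_energy
    return new_energy

# Five particles placed randomly in a small box, sampled at reduced temperature 1/beta.
random.seed(0)
cluster = [[random.uniform(0.0, 2.0) for _ in range(3)] for _ in range(5)]
energies = [metropolis_step(cluster, beta=1.0) for _ in range(2000)]
print("final energy:", energies[-1])
```

In production codes the energy change of a trial move is evaluated locally rather than by recomputing the full pairwise sum, but the Metropolis acceptance rule itself is the same.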
Quantum mechanics/molecular mechanics (QM/MM) QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes. Quantum computational chemistry Quantum computational chemistry aims to exploit quantum computing to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum computing methods to represent and process information, such as Hamiltonian operators. Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of the size of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions. Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior. While these techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments may lead to significant progress towards achieving more precise and resource-efficient quantum chemistry simulations. Computational costs in chemistry algorithms The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms/computational methods to use when solving chemical problems. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains. In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately. Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems. Algorithmic complexity examples The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It is important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry. Molecular dynamics Algorithm Solves Newton's equations of motion for atoms and molecules. Complexity The standard pairwise interaction calculation in MD leads to an O(N²) complexity for N particles. This is because each particle interacts with every other particle, resulting in N(N − 1)/2 pairwise interactions. 
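The quadratic growth of this naive pairwise evaluation can be observed directly by counting (or timing) the pair interactions for increasing particle numbers. The following sketch is illustrative only; the particle counts and random coordinates are arbitrary assumptions, not part of any standard MD package.

```python
import time
import random

def pairwise_distances_squared(coords):
    """Evaluate every unique pair once: N(N - 1)/2 interactions in total."""
    pairs = 0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            _ = sum((coords[i][k] - coords[j][k]) ** 2 for k in range(3))
            pairs += 1
    return pairs

random.seed(0)
for n in (100, 200, 400, 800):
    coords = [[random.random() for _ in range(3)] for _ in range(n)]
    start = time.perf_counter()
    pairs = pairwise_distances_squared(coords)
    elapsed = time.perf_counter() - start
    # Doubling N roughly quadruples both the pair count and the runtime: O(N^2).
    print(f"N = {n:4d}  pairs = {pairs:7d}  time = {elapsed:.4f} s")
```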
Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to O(N log N) or even O(N) by grouping distant particles and treating them as a single entity, or by using clever mathematical approximations. Quantum mechanics/molecular mechanics (QM/MM) Algorithm Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment. Complexity The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the cost of the quantum part scales roughly as O(M³) to O(M⁴), where M is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved. Hartree-Fock method Algorithm Finds a single Fock state that minimizes the energy. Complexity NP-hard or NP-complete as demonstrated by embedding instances of the Ising model into Hartree-Fock calculations. The Hartree-Fock method involves solving the Roothaan-Hall equations, which scale as O(N³) to O(N⁴) depending on implementation, with N being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals. This proof of NP-hardness or NP-completeness comes from embedding problems like the Ising model into the Hartree-Fock formalism. Density functional theory Algorithm Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases. Complexity Traditional implementations of DFT typically scale as O(N³), mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements. Standard CCSD and CCSD(T) method Algorithm CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects. Complexity CCSD Scales as O(N⁶), where N is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation. CCSD(T) With the addition of perturbative triples, the complexity increases to O(N⁷). This elevated complexity restricts practical usage to smaller systems, typically up to 20–25 atoms in conventional implementations. Linear-scaling CCSD(T) method Algorithm An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems. Complexity Achieves linear scaling with the system size, a major improvement over the steep seventh-power scaling of conventional CCSD(T). This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy. Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems. 
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behaviour across various systems and implementations. Accuracy Computational chemistry is not an exact description of real-life chemistry, as the mathematical and physical models of nature can only provide an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem at hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of full relativistic-inclusive methods. This complicates the study of molecules interacting with high atomic mass unit atoms, such as transition metals and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT). There is some dispute within the field as to whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM). In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM). Software packages Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in: Biomolecular modelling programs: proteins, nucleic acids. Molecular mechanics programs. Quantum chemistry and solid-state physics software supporting several methods. Molecular design software. Semi-empirical programs. Valence bond programs. 
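Many of these packages can be driven from short scripts. As a minimal, hedged illustration (assuming the open-source PySCF package is installed; other packages expose different interfaces), a restricted Hartree–Fock single-point calculation on the hydrogen molecule might look like the following sketch.

```python
from pyscf import gto, scf

# Define the molecule: two hydrogen atoms 0.74 angstrom apart, minimal STO-3G basis.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g", unit="Angstrom")

# Run a restricted Hartree-Fock calculation and report the total electronic energy.
mf = scf.RHF(mol)
total_energy = mf.kernel()  # converged energy in Hartree atomic units
print("RHF total energy (Hartree):", total_energy)
```

Density functional or post-Hartree–Fock treatments typically start from the same molecule object, with the mean-field solver swapped for a correlated or density functional one.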
Specialized journals on computational chemistry Annual Reports in Computational Chemistry Computational and Theoretical Chemistry Computational and Theoretical Polymer Science Computers & Chemical Engineering Journal of Chemical Information and Modeling Journal of Chemical Software Journal of Chemical Theory and Computation Journal of Cheminformatics Journal of Computational Chemistry Journal of Computer Aided Chemistry Journal of Computer Chemistry Japan Journal of Computer-aided Molecular Design Journal of Theoretical and Computational Chemistry Molecular Informatics Theoretical Chemistry Accounts External links NIST Computational Chemistry Comparison and Benchmark DataBase – Contains a database of thousands of computational and experimental results for hundreds of systems American Chemical Society Division of Computers in Chemistry – American Chemical Society Computers in Chemistry Division, resources for grants, awards, contacts and meetings. CSTB report Mathematical Research in Materials Science: Opportunities and Perspectives – CSTB Report 3.320 Atomistic Computer Modeling of Materials (SMA 5107) Free MIT Course Chem 4021/8021 Computational Chemistry Free University of Minnesota Course Technology Roadmap for Computational Chemistry Applications of molecular and materials modelling. Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology CSTB Report MD and Computational Chemistry applications on GPUs Susi Lehtola, Antti J. Karttunen:"Free and open source software for computational chemistry education", First published: 23 March 2022, https://doi.org/10.1002/wcms.1610 (Open Access) CCL.NET: Computational Chemistry List, Ltd. See also References Computational fields of study Theoretical chemistry Physical chemistry Chemical physics Computational physics
Computational chemistry
Physics,Chemistry,Technology
5,910
5,993,782
https://en.wikipedia.org/wiki/Ingenieurs%20zonder%20Grenzen
Ingenieurs zonder Grenzen (Dutch for Engineers Without Borders) is a name used by two Belgian organizations, both of which are provisional members of the Engineers Without Borders International network. See also Engineers Without Borders (Belgium) External links Ingenieurs zonder Grenzen (in Dutch) the organization started by the Royal Flemish Engineer Association (K VIV). Belgium Development charities based in Belgium
Ingenieurs zonder Grenzen
Engineering
85
52,516,406
https://en.wikipedia.org/wiki/Motion%20capture%20suit
A motion capture suit (or mo-cap suit) is a wearable device that records the body movements of the wearer. Some of these suits also function as haptic suits. History Introduced in the late 1980s, the Data Suit by VPL Research was one of the earliest mo-cap suits in the market. Sensors stitched in the Data Suit were connected by fiber-optic cables to computers that updated the visuals 15 to 30 times a second. The Data Suit was ahead of its time, selling for up to $500,000 for a complete system (along with the EyePhone and the Data Glove). Current market Tesla Suit The Tesla Suit is a mo-cap suit that also uses neuromuscular electrical stimulation (NMES) to give the wearer sensations of touch, force and even warmth. Husky Sense Suit The Husky Sense suit is a mo-cap suit that uses 18 IMU sensors (gyroscope, accelerometer, magnetometer) to track, record and analyze body motions. It is relatively cheap and can be used in various use cases, such as sports, healthcare, defense, the metaverse, gaming, VR training, and animation creation. PrioVR The PrioVR is a mo-cap suit which is available in three versions: the Core, which comes with 8 sensors for upper body tracking; the Lite, with 12 sensors for full body tracking; and the Pro, with 17 sensors, which adds precision tracking of the feet, shoulders and hips. Perception Neuron Perception Neuron by the Chinese company Noitom uses 9-axis IMUs to capture the movements of the wearer. It also comes with motion-capturing gloves. Perception Neuron can be used in AltspaceVR. Smartsuit Pro The Smartsuit Pro by Danish company Rokoko uses an array of 19 embedded 9-degrees-of-freedom (9-DoF) IMU sensors to capture motion data from the person wearing the suit. This data is used to live-stream user movement via WiFi, or recorded for input into software such as Unity, Unreal Engine 4, or MotionBuilder. Xsens At GDC 2016, Xsens announced integration with Unreal Engine 4. Later that month, Xsens collaborated with Dutch technology company Manus VR in order to showcase an immersive VR experience. Holosuit A bi-directional, full body motion controller with haptic feedback, Holosuit comes with a full body suit and can also be used separately as just gloves, jacket or pants. G5 Mocapsuit G5 Mocapsuit by AiQ Synertial is a 17-sensor IMU-based motion capture system with an option for 4-sensor 'Pincer Gloves'. Synertial partnered with 'AiQ Smart Clothing' of Taiwan in 2018 to integrate fabric technology into its suits, significantly reducing sensor artefacts. GPS-enabled sports motion capture models use an Android app to manage onboard recording (for later calibration) as well as live-streaming data via WiFi and Bluetooth. The system is compatible with Unity (game engine), Unreal Engine 4, MotionBuilder, Tecnomatix and MocapBeats software plugins. G5 has various 'Cobra' and 'Exo-Glove' options as well as an HTC Vive plugin for root positioning inside a 6 x 6 meter capture space. e-skin MEVA e-skin MEVA by Japanese company Xenoma uses 6-axis IMUs in pants, shirt and headband to capture the movements. e-skin MEVA is a third-generation e-textile that is convenient to set up and easy to use, even for people with physical disabilities. e-skin MEVA is mainly used for healthcare applications such as gait analysis and workload measurement of workers. See also Data glove Haptic suit Virtual reality headset Virtual reality Head-mounted display References Virtual reality Video game accessories Haptic technology
Motion capture suit
Technology
801
672,740
https://en.wikipedia.org/wiki/Protoavis
Protoavis (meaning "first bird") is a problematic taxon known from fragmentary remains from Late Triassic Norian stage deposits near Post, Texas. The animal's true classification has been the subject of much controversy, and there are many different interpretations of what the taxon actually is. When it was first described, the fossils were described as being from a primitive bird which, if the identification is valid, would push back avian origins some 60–75 million years. The original describer of Protoavis texensis, Sankar Chatterjee of Texas Tech University, interpreted the type specimen to have come from a single animal, specifically a 35 cm tall bird that lived in what is now Texas, USA, around 210 million years ago. Though it existed far earlier than Archaeopteryx, its skeletal structure is more bird-like. Protoavis has been reconstructed as a carnivorous bird that had teeth on the tip of its jaws and eyes located at the front of the skull, suggesting a nocturnal or crepuscular lifestyle. Reconstructions usually depict it with feathers, as Chatterjee originally interpreted structures on the arm to be quill knobs, the attachment point for flight feathers found in some modern birds and non-avian dinosaurs. However, re-evaluation of the fossil material by subsequent authors such as Lawrence Witmer has been inconclusive regarding whether or not these structures are actual quill knobs. However, this description of Protoavis assumes that Protoavis has been correctly interpreted as a bird. Many palaeontologists doubt that Protoavis is a bird, or that all remains assigned to it even come from a single species, because of the circumstances of its discovery and unconvincing avian synapomorphies in its fragmentary material. When they were found at the Tecovas and Bull Canyon Formations in the Texas panhandle in 1973, in sedimentary strata of a Triassic river delta, the fossils were a jumbled cache of disarticulated bones that may reflect an incident of mass mortality following a flash flood. Description Protoavis is usually depicted as being a bipedal archosaur, similar to several poposaurids and rauisuchids that lived at roughly the same time as Protoavis. In a description published by Sankar Chatterjee, structures were identified as quill knobs, although there has been debate as to whether these are actually quill knobs or not. Skull and braincase The braincase of Protoavis is similar in some respects to Troodon, with an enlarged cerebellum that shifted the optic lobes ventrolaterally, and also has a large floccular lobe. The inner ear is also quite similar and bird-like in both taxa. The canalicular systems and the cochlear process differ in both taxa, and the vestibular region is relatively small and located in a ventral position to most of the anterior and posterior semicircular canals. The anterior semicircular canal is significantly longer than the others, and the cochlear process is a relatively long, vertically oriented tube. However, Protoavis is also remarkably non-bird-like in that it possesses only a single exit for the trigeminal nerve. However, these characters are not robust enough to identify Protoavis as a bird. The skull has an extremely narrow parietal with a block-like dorsal aspect, and very broad, T-shaped frontals that form the "lateral wings" that Chatterjee attributes to the lack of postorbitals. 
There are short curved ulnae with olecranon processes, and a possible scapula with bent shaft, and the cervicals have profiles and aspects to their exterior that are very similar to the Megalancosaurus cervical series. All the cervicals but the most posterior and axis/atlas have hypapophyses and those triangular neural spines; all characteristics that have been described in Megalancosaurus. This suggests that portions of Protoavis may be drepanosaurid in nature. Chatterjee presents the skull of Protoavis as complete, although only the caudal aspect of the cranium is represented in the available fossils. Chatterjee argues that the temporal region displays a streptostylic quadrate with orbital process for attachment of the M. protractor pterygoidei et quadrati, with associated confluence of the orbits with the temporal fenestrae, thus facilitating prokinesis. He further asserts that the braincase of Protoavis bears a number of characters seen in Ornithurae, including the structure of the otic capsule, the widespread pneumatization of the braincase elements, a full complement of tympanic recesses, and the presence of an epiotic. Of this material, only the quadrate and orbital roof, in addition to limited portions of the braincase are preserved with enough fidelity to permit any definitive interpretation. The quadrates of TTU P 9200 and TTU P 9201 are not particularly alike; a fact not easily explained away if the material is conspecific, as Chatterjee insists. There does not appear to be an orbital process present on either bone, and the modifications of the proximal condyle permitting wide range of motion against the squamosal, are not readily apparent. Furthermore, the quadratojugal and jugal appear far more robust in the Protoavis specimens themselves, than represented by Chatterjee. The size and development of the quadratojugal seems to contradict Chatterjee's assertion that this bone contacted the quadrate via a highly mobile pin joint. These data render the assertion of prokinesis in the skull of Protoavis questionable at best, and it seems most parsimonious to conclude that the specimen displays a conventional opisthostylic quadrate. The braincase is where Protoavis comes close to being as avian as Chatterjee has maintained. The otic capsule is allegedly organized in avian fashion, with three distinct foramina arranged as such: fenestra ovalis, fenestra pseudorotunda, and the caudal tympanic recess, with a bony metotic strut positioned between the fenestra pseudorotunda and caudal tympanic recess. The claim that the full complement of tympanic recesses seen in ornithurines, are similarly observed in Protoavis is questionable, as the preservation of the braincase is not adequate to permit concrete observations on the matter. Chatterjee omits in his 1987 account of the braincase, the presence of a substantial post-temporal fenestra, which in all Aves (including Archaeopteryx), is reduced or absent altogether, and the lack of a pneumatic sinus on the paroccipital. Furthermore, the braincase possesses multiple characters symplesiomorphic of Coelurosauria, including an expanded cerebellar auricular fossa, and a vagal canal opening into the occiput. What is preserved of the preorbital skull curiously lacks apomorphic characters to be expected in a specimen, which is allegedly more closely allied to Pygostylia than is Archaeopteryx lithographica. Most telling is the complete absence of accessory fenestrae in the antorbital fossa, leading to maxillary sinuses. 
Post-cranial anatomy The post-cranial remains are as badly preserved, or worse, than the cranial elements, and their interpretation by Chatterjee are in many cases unsubstantiated or speculative. Of the postcranial skeleton, Chatterjee has isolated the axial skeleton as displaying a suite of avian characters, including heterocoelus centra, hypapophyses and reduction of the neural spines. First and foremost, the preservation quality of the vertebrae is poor. While the centra are modified, they do not appear to be truly heterocoelus. The presence of incipient hypapophyses in and of itself might be considered indicative of avian affinity, but their poor development and presence on vertebrae otherwise thoroughly non-avian, is most parsimoniously regarded as mild convergence until further material should be brought to light. The reduction of the neural spines is questionable. Curiously, Gregory Paul has noted that the cervicals of Protoavis and drepanosaurs are astonishingly similar, such they are hardly distinguishable from one another. Considering the modification of the drepanosaur neck for the purposes of snap-action predation, it becomes more likely that superficial similarities in the cervicals of both taxa are in fact only convergent with Aves. Chatterjee does not identify the remaining vertebrae as particularly avian in their osteology. Pectoral girdle The pectoral girdle is discussed by Chatterjee as being highly derived in Protoavis, displaying synapomorphies of avialans more derived than Archaeopteryx, including the presence of a hypocleidium-bearing furcula, and a hypertrophied, carinate sternum. Chatterjee's interpretation of the fossils identified as such in his reviews of the Protoavis material are open to question due to the preservation quality of the elements and as of this time, it is not clear whether either character was in fact present in Protoavis. The glenoid appears to be oriented dorsolaterally permitting a wide range of humeral movement. Chatterjee implies that this is a highly derived trait which allies Protoavis to Aves, but why this should be so is not clearly discussed in the descriptions of the animal. In and of itself, the orientation of the glenoid is not a sufficient basis for placing Protoavis within Aves. The scapular blade is far broader than illustrated by Chatterjee in his 1997 account, and not particularly avian in its gross form. The coracoid, identified by Chatterjee as strut-like and retroverted, is, like the supposed furcula and sternum, too poorly preserved to permit accurate identification. Moreover, the original spatial relationship of the alleged coracoid to the scapula is entirely unknown. Uncinate processes and sternal ribs are missing. Pelvic girdle Chatterjee asserts that the pelvic girdle is apomorphic comparative to archaic birds and displays a retroverted pubis, fusion of the ischium and ilium, an antitrochanter, and the presence of a renal fossa. The pubis does appear to display opisthopuby, although this has yet to be verified. The alleged fusion of the ischium and ilium into an ilioischiadic plate is currently not substantiated by the fossils at hand, despite Chatterjee's auspicious illustration to the contrary in The Rise of Birds. At this time the pelvic girdle is not sufficiently well preserved to ascertain whether or not a renal fossa was present, although as no known avian from the Mesozoic displays a renal fossa, it is not clear why Protoavis should, even if it is more derived than Archaeopteryx. 
Similarly, it is unclear if the alleged antitrochanter has been correctly identified as such. Arms and legs The manus and carpus are among the few areas of the Protoavis material which are well preserved, and they are astonishingly non-avian. The distal carpals, while long, are in no way similar to those observed in the urvogel or other archaic birds. There is no semilunate element, and the structure of the radiale and ulnare would have limited the flexibility of the wrist joint. The manus is not tridactyl, and metacarpal V is present. In even the most basal avialian, Archaeopteryx, there is no vestige of the fifth metacarpal and its presence in Protoavis seems incongruous with the claim that it is a bird, let alone one more derived than Archaeopteryx. Chatterjee claims that the humerus of Protoavis is "remarkably avian", but as in all matters with the fossils referred to this taxon, accurate identification of the elaborate trochanters, ridges, etc., attributed to the humerus by Chatterjee is impossible at this time. The expanded distal condyles, which appear to be present in the humerus of Protoavis and enlarged deltopectoral crest (a ridge for the attachment of chest and shoulder muscles), are congruent with the morphology of ceratosaur humeri, as is the apparent presence of a distal brachial depression. The femur of Protoavis is astonishingly similar to non-tetanurans, namely coelophysoids. The proximal femur displays a trochanteric shelf caudal to the lesser and greater trochanters, a feature distinguishing non-tetanurans theropods from Tetanurae. Further similarities between the proximal humerus of Protoavis and that of non-tetanuran theropods are found in the shared presence of an enlarged obturator ridge, whose morphology in Protoavis is again, uncannily like that observed in robust basal theropods, e.g., "Syntarus" kayentakatae. The resemblance between the femur of Protoavis and that of a non-tetanuran theropod becomes ever more pronounced at the distal end of the bone. Both share a crista tibiofibularis groove, a feature of a non-tetanuran theropod separating the medial and lateral condyles. The tibia of Protoavis allegedly possesses both a lateral and cranial cnemial crest, though the validity of this claim is subject to question due to the preservation quality of the material. The fibula is continuous to the astragalocalcaneal unit. A tibiotarsus is absent, unusual considering Chatterjee's claims for the pygostylian affinity of Protoavis, as is a tarsometatarsus. The ascending process of the astragalus is reduced, a character entirely incongruous with a highly derived status for Protoavis. Curiously, such abbreviation of the ascending process is found in ceratosaurs, and in its general osteology, the Protoavis tarsus and pes, is quite similar to those of non-tetanuran theropods. Chatterjee's restoration of the hallux as reversed is nothing more than speculation, as the original spatial relationships of the pedal elements are impossible to ascertain at this time. Quill knobs Reconstructions usually depict Protoavis with feathers, as Chatterjee originally interpreted structures on the arm to be quill knobs, the attachment point for flight feathers found in some modern birds and non-avian dinosaurs. However, re-evaluation of the fossil material by subsequent authors such as Lawrence Witmer have been inconclusive regarding whether or not these structures are actual quill knobs. 
In his 1997 account, Chatterjee infers the presence of feathers from alleged quill knobs on the badly smashed ulna and metacarpals III and IV, and infers the presence of remiges from such structures (though he does caution that this is uncertain). As is the case with the alleged quill knobs on the ulna, the metacarpal structures appear to be attributable to post-mortem damage. Moreover, the thumb, unlike the case in all birds, is not medially divergent. Considering how poorly preserved the ulna is, it is entirely premature to draw any definitive conclusions as to the presence of quill knobs until such time as more adequate material becomes available. Upon further examination of the material, no structures were isolated that could be deemed homologous to remigial papillae. Classification and taxonomy The taxonomy of Protoavis is controversial, with several palaeornithologists considering it to be an early ancestor of modern birds, while most others in the palaeontological community regard it as a chimaera, a mixture of several specimens. American palaeontologist Gregory Paul suggested that Protoavis is a herrerasaur. In a paper by Phil Currie and X.J. Zhao discussing a braincase of a Troodon formosus, they compared the bird-like characters of Troodon and Protoavis. In the paper, they made a number of corrections involving both Chatterjee's and Currie's own misinterpretations of parts of Troodon cranial anatomy before the particular braincase being described was found. At least a couple of the corrections (the anterior tympanic recess, and the relatively kinetic quadrate-squamosal contact) made Troodon more bird-like than Chatterjee made out in his Protoavis paper, but overall these particular corrections seemed to have little bearing on the avian features of Protoavis. Currie and Zhao did not explicitly state whether or not they considered Protoavis to be a theropod; however, they suggested that although Protoavis has characters suggesting avian affinities, most of these are also found in theropod dinosaurs. Protoavis is a bird Sankar Chatterjee and a few other palaeornithologists claimed that this material documents a Triassic origin of birds and the presence of a bird more advanced than Archaeopteryx. Though it existed approximately 75 million years before the oldest known bird, its skeletal structure is allegedly more bird-like. Protoavis has been reconstructed as a carnivorous bird that had teeth on the tip of its jaws and eyes located at the front of the skull, suggesting a nocturnal or crepuscular lifestyle. The fossil bones are too badly preserved to allow an estimate of flying ability; although reconstructions usually show feathers, judging from thorough study of the fossil material there is no indication that these were present. However, this description of Protoavis assumes that Protoavis has been correctly interpreted as a bird. Almost all palaeontologists doubt that Protoavis is a bird, or that all remains assigned to it even come from a single species, because of the circumstances of its discovery and weak avialan synapomorphies in its fragmentary material. When they were found at a Dockum Group quarry in the Texas panhandle in 1984, in a sedimentary stratum of a Triassic river delta, the fossils were a jumbled cache of disarticulated bones reflecting an incident of mass mortality following a flash flood. Protoavis is a chimaera Chatterjee was convinced that some of these crushed bones belonged to two individuals – one old, one young – of the same species. 
However, only a few parts were found, primarily a skull and some limb bones which moreover do not well agree in their proportions respective to each other, and this has led many to believe that the Protoavis fossil is chimaeric, made up of more than one organism: the pieces of skull appear like those of a coelurosaur, while the femur and ankle bone catalogued under TTU P-9200 and TTU P-9201 respectively suggest affinities to non-tetanuran theropods and at least some vertebrae are most similar to those of Megalancosaurus, a drepanosaurid. However, those supposed similarities between the cervicals of Protoavis and drepanosaurids were the same similarities that Feduccia and Wild (1993) used to argue for an affinity between Archaeopteryx and drepanosaurids. "Everywhere one turns; the very fossils ascribed thereto challenge the validity of Protoavis. The most parsimonious conclusion to be inferred from these data is that Chatterjee's contentious find is nothing more than a chimera, a morass of long-dead archosaurs." If it really is a single animal and not a chimera, Protoavis would raise questions about when birds began to diverge from other theropods, if they are a lineage of theropod dinosaurs at all, but until better evidence is produced, the animal's status currently remains uncertain. Furthermore, paleobiogeography suggests that true birds did not colonize the Americas until the Cretaceous; the most primitive undisputed bird-like maniraptorans found to date are all Eurasian. Certainly, the fossils are most parsimoniously attributed to primitive dinosaurian and other reptiles as outlined above. However, coelurosaurs and ceratosaurs are in any case not too distantly related to the ancestors of birds and in some aspects of the skeleton not unlike them, explaining how their fossils could be mistaken as avian. Palaeontologist Zhonghe Zhou stated: "[Protoavis] has neither been widely accepted nor seriously considered as a Triassic bird ... [Witmer], who has examined the material and is one of the few workers to have seriously considered Chatterjee's proposal, argued that the avian status of P. texensis is probably not as clear as generally portrayed by Chatterjee, and further recommended minimization of the role that Protoavis plays in the discussion of avian ancestry." Welman has argued that the quadrate of Protoavis displays synapomorphies of Theropoda. Paul has demonstrated the drepanosaur affinities of the cervical vertebrae. Gauthier & Rowe, and Dingus & Rowe have argued convincingly for identifying the hind limb of Protoavis as belonging to a ceratosaur. Feduccia has argued that Protoavis represents an arboreal "thecodont". In a study of early ornithischian dinosaurs, Sterling Nesbitt and others determined some of the partial remains of Protoavis to be a non-tetanuran theropod. The entire skull and neck are considered to be most likely from a drepanosaurid because the skull and neck are too big compared to the dorsal vertebrae of Protoavis. In discussions of evolution Scientists such as Alan Feduccia have cited Protoavis in an attempt to refute the hypothesis that birds evolved from dinosaurs. However, some scientists have claimed the only consequence would be to push the point of bird divergence further back in time. At the time when such claims were originally made, the affiliation of birds and maniraptoran theropods which today is well-supported and generally accepted by most ornithologists was much more contentious; most Mesozoic birds have only been discovered since then. 
Chatterjee himself has since used Protoavis to support a close relationship between dinosaurs and birds. "As there remains no compelling data to support the avian status of Protoavis or taxonomic validity thereof, it seems mystifying that the matter should be so contentious. The author very much agrees with Chiappe in arguing that at present, Protoavis is irrelevant to the phylogenetic reconstruction of Aves. While further material from the Dockum beds may vindicate this peculiar archosaur, for the time being, the case for Protoavis is non-existent." Phylogenetic implications It has been argued that, if valid, Protoavis would represent the death knell for the theropod descent of birds. Palaeontologists counter that, if valid, Protoavis in no way falsifies the theropod origin of birds. The very fact that Chatterjee used his putative bird to defend theropod origin seems to contradict the argument of Alan Feduccia that a true bird from the Triassic would bring about the collapse of the theropod "dogma". Discovery and history Archosaur discoveries are comparatively abundant in Texas, and have been recovered in some quantity since E. D. Cope worked the redbeds of the panhandle over a century ago. The holotype specimen of Protoavis (TTU P 9200), the paratype (TTU P 9201), and all referred materials were discovered in the Dockum Group, from the panhandle of Texas. The Dockum dates from the Carnian through the early Norian in the Late Triassic and is composed of five units of decreasing age: the Santa Rosa Formation, the Tecovas Formation, the Trujillo Formation, the Cooper Canyon Formation, and the Bull Canyon Formation. Many skeletal elements and partial elements of Protoavis were collected from the Post (Miller) Quarry of the Bull Canyon Formation in the 1980s, and other specimens referred to Protoavis were collected from the underlying Kirkpatrick Quarry of the Tecovas Formation. The specimens altogether consist of a partial skull and postcranial remains belonging to possibly several large individuals. The bones were completely freed of the surrounding matrix, some were heavily reconstructed, and the identification of some of the elements has been questioned by other palaeornithologists and palaeontologists. The type material was collected from mudstone deposits in June 1973 and initially identified as a juvenile Coelophysis bauri. The level of the Dockum Group from which the Protoavis material was recovered was most likely deposited in a deltaic river system. The bone bed excavated by Sankar Chatterjee and his students of Texas Tech University, in which Protoavis was discovered, likely reflects an incident of mass mortality following a flash flood. Chatterjee, who first described Protoavis, has assigned the binomial Protoavis texensis ("first bird from Texas") to the small cache of allegedly conspecific bones. He interpreted the type specimen to have come from a single animal, specifically a 35 cm tall bird that lived in what is now Texas, USA, between 225 and 210 million years ago. Because the bones were jumbled into sandstone nodules and completely disarticulated, it has been suggested that Protoavis was reworked from later sediments. However, a basic stratigraphic principle, the "principle of inclusions", is a special case of the principle of cross-cutting relationships. It states that rock has to exist before it can be included in other sedimentary rock. 
Reworking is the process of weathering fossils or rock containing fossils out of rocks already present, transporting them, and redepositing them in sediments which are later lithified as new sedimentary rocks. Since Jurassic rocks formed after the Triassic sediments of the Dockum Group, they could not have been reworked into the Dockum sediments as inclusions. Palaeoenvironment The inferred palaeoclimate of the Dockum Group would have been subtropical and governed by a distinct dry/wet season pattern, with the latter marked by monsoonal rains. The botanical evidence indicates that the area was densely forested, and the abundance of both invertebrate and vertebrate material from the site suggests that the locale was in general richly populated by a wide variety of species. Dinosaurs were still fairly rare in the Dockum Group, and only some ceratosaurs and other basal forms are well documented. The principal carnivores of the locale would have been poposaurids such as Postosuchus, a genus well represented in the Triassic redbeds of Texas. Other archaic archosauromorphs, such as rhynchosaurs and aetosaurs, were also fairly common. Taphonomy Both the holotype and paratype were recovered from disparate locations, both disarticulated and unassociated. Consequently, spatial relationships are impossible to determine. No record exists of the original orientation of the material, even as recovered. Further material assigned to the taxon has been recovered in isolation, with no apparent spatial relationship to the other remains, and has more or less been referred to Protoavis spuriously. Thus, the presentation of the holotype and paratype as coherent skeletons by Chatterjee is fallacious. Such representations are ad hoc conglomerations of bone whose status as conspecific is not apparent from their taphonomy. Not only were the remains recovered disarticulated and unassociated, there are glaring morphometric differences in the various components of the holotype and paratype. For instance, the scapulae and coracoids are so reduced that the association with the axial skeleton is extremely difficult to support. Juvenile ontogeny cannot be invoked credibly to explain this discrepancy. Furthermore, the degree of morphometric variation in the holotype and paratype seems incongruent with the component material representing a conspecific assemblage of bones. The fossils themselves display significant postmortem damage, and are in some cases so badly crushed and distorted by geological processes that accurate interpretation is impossible. Consequently, the lucid analyses offered by Chatterjee are in many cases more artistic invention than accurate description. In his definitive analysis of the material, The Rise of Birds (1997), Chatterjee failed to illustrate the Protoavis fossils via pictures or sketches of the fossils proper, and instead offered the reader artistic reconstructions. For this, Chatterjee has been sharply criticized. Such an approach in science is entirely intolerable in that it idealizes the material at hand and obscures the very fragmentary nature of the fossils and their poor state of preservation. See also Proavis Origin of birds Origin of avian flight Feathered dinosaurs Temporal paradox (paleontology) Notes References External links Protoavis at the Fossil Wiki, from which this article is adapted. Controversial taxa Fossil taxa described in 1991 Late Triassic archosaurs of North America Nomina dubia Paleontological chimeras Prehistoric reptile genera
Protoavis
Biology
6,158
77,959,269
https://en.wikipedia.org/wiki/WISE%20J2354%2B0240
WISE J2354+0240 (WISE J235402.77+024015.0, WISE 2354+0240) is a brown dwarf or free-floating planetary-mass object. It is a Y-dwarf, meaning it is one of the coldest directly imaged astronomical objects. It was discovered in 2015 using the Wide-field Infrared Survey Explorer and spectroscopy from the Hubble Space Telescope. The discoverers found that the J-band peak in the spectrum is narrower than in the Y0 standard and therefore assigned a spectral type of Y1, with an estimated temperature of 300−400 Kelvin. The age was estimated to be at least 1.5 billion years. Parallax measurement places this object at 7.7 parsecs from the solar system. Near-infrared photometry was later obtained with Hubble, and a temperature of 335 ±11 K and a mass of 11 ±3 Jupiter masses were estimated. WISE 2354+0240 was observed with the JWST and the temperature was estimated to be K. The object is not described in detail in this work. The authors, however, mention that they see a number of absorption features in their sample, including water vapor, methane, ammonia, carbon monoxide and carbon dioxide. They note that none of their objects show absorption due to phosphine, which is predicted to occur in these objects. See also List of Y-dwarfs WISE J0825+2805, another Y-dwarf discovered by Schneider et al. 2015 Notes References Y-type brown dwarfs Brown dwarf stubs Astronomical objects discovered in 2015 WISE objects Rogue planets Pisces (constellation)
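The quoted distance of 7.7 parsecs is related to the measured parallax by the standard relation d[pc] = 1 / p[arcsec]. The short sketch below illustrates this conversion in Python; the parallax value of roughly 130 milliarcseconds used here is back-calculated from the quoted distance for illustration and is not a measurement reported above.

```python
# Minimal illustration of the parallax-distance relation d [pc] = 1 / p [arcsec].
# The ~130 mas parallax used here is back-calculated from the quoted 7.7 pc
# distance purely for illustration; it is not a measured value from the article.

def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Convert a trigonometric parallax in milliarcseconds to a distance in parsecs."""
    parallax_arcsec = parallax_mas / 1000.0
    return 1.0 / parallax_arcsec

if __name__ == "__main__":
    assumed_parallax_mas = 130.0          # hypothetical value consistent with ~7.7 pc
    d_pc = parallax_to_distance_pc(assumed_parallax_mas)
    d_ly = d_pc * 3.2616                  # 1 parsec is about 3.2616 light-years
    print(f"{assumed_parallax_mas} mas -> {d_pc:.1f} pc ({d_ly:.1f} light-years)")
```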
WISE J2354+0240
Astronomy
330
59,632,528
https://en.wikipedia.org/wiki/Second%20neighborhood%20problem
In mathematics, the second neighborhood problem is an unsolved problem about oriented graphs posed by Paul Seymour. Intuitively, it suggests that in a social network described by such a graph, someone will have at least as many friends-of-friends as friends. The problem is also known as the second neighborhood conjecture or Seymour's distance two conjecture. Statement An oriented graph is a finite directed graph obtained from a simple undirected graph by assigning an orientation to each edge. Equivalently, it is a directed graph that has no self-loops, no parallel edges, and no two-edge cycles. The first neighborhood of a vertex v (also called its open neighborhood) consists of all vertices at distance one from v, and the second neighborhood of v consists of all vertices at distance two from v. These two neighborhoods form disjoint sets, neither of which contains v itself. In 1990, Paul Seymour conjectured that, in every oriented graph, there always exists at least one vertex whose second neighborhood is at least as large as its first neighborhood. Equivalently, in the square of the graph, the degree of such a vertex is at least doubled. The problem was first published by Nathaniel Dean and Brenda J. Latka in 1995, in a paper that studied the problem on a restricted class of oriented graphs, the tournaments (orientations of complete graphs). Dean had previously conjectured that every tournament obeys the second neighborhood conjecture, and this special case became known as Dean's conjecture. A vertex in a directed graph whose second neighborhood is at least as large as its first neighborhood is called a Seymour vertex. In the second neighborhood conjecture, the condition that the graph have no two-edge cycles is necessary, for in graphs that have such cycles (for instance the complete directed graph) all second neighborhoods may be empty or small. Partial results Dean's conjecture, the special case of the second neighborhood problem for tournaments, has been proved. For some graphs, a vertex of minimum out-degree will be a Seymour vertex. For instance, if a directed graph has a sink, a vertex of out-degree zero, then the sink is automatically a Seymour vertex, because its first and second neighborhoods both have size zero. In a graph without sinks, a vertex of out-degree one is always a Seymour vertex. In the orientations of triangle-free graphs, any vertex v of minimum out-degree is again a Seymour vertex, because for any edge from v to another vertex u, the out-neighbors of u all belong to the second neighborhood of v. For arbitrary graphs with higher vertex degrees, the vertices of minimum degree might not be Seymour vertices, but the existence of a low-degree vertex can still lead to the existence of a nearby Seymour vertex. Using this sort of reasoning, the second neighborhood conjecture has been proven to be true for any oriented graph that contains at least one vertex of out-degree ≤ 6. Random tournaments and some random directed graphs have many Seymour vertices with high probability. Every oriented graph has a vertex whose second neighborhood is at least γ times as big as the first neighborhood, where γ ≈ 0.657 is the real root of the polynomial 2x³ + x² − 1. See also Friendship paradox References External links Seymour's 2nd Neighborhood Conjecture, Open Problems in Graph Theory and Combinatorics, Douglas B. West. Unsolved problems in graph theory Conjectures
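The definitions above translate directly into a brute-force check. The following minimal Python sketch (illustrative only, not taken from the cited references) computes the first and second out-neighborhoods of every vertex of an oriented graph given as a list of directed edges and reports its Seymour vertices; the example digraph is made up for demonstration.

```python
from collections import defaultdict

def seymour_vertices(n, edges):
    """Return the Seymour vertices of an oriented graph on vertices 0..n-1.

    A Seymour vertex is one whose second out-neighborhood (vertices at
    directed distance exactly two) is at least as large as its first
    out-neighborhood (vertices at directed distance one).
    """
    out = defaultdict(set)
    for u, v in edges:
        out[u].add(v)

    result = []
    for v in range(n):
        first = out[v]
        # Vertices reachable in exactly two steps, excluding v and its out-neighbors.
        second = set()
        for u in first:
            second |= out[u]
        second -= first
        second.discard(v)
        if len(second) >= len(first):
            result.append(v)
    return result

# A small made-up oriented graph (no two-edge cycles): a directed 5-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(seymour_vertices(5, edges))  # every vertex has one first and one second neighbor, so all qualify
```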
Second neighborhood problem
Mathematics
661
38,966,621
https://en.wikipedia.org/wiki/Monokub
Monokub is a computer motherboard based on the Russian Elbrus 2000 computer architecture, which forms the basis for the Monoblock PC office workstation. The motherboard has a mini-ITX form factor and contains a single Elbrus-2C+ microprocessor with a clock frequency of 500 MHz. The memory controller provides a dual-channel memory mode. The board has two DDR2-800 memory slots, allowing up to 16 GB of RAM (using ECC modules). It also supports expansion boards using a PCI Express x16 bus. In addition, there are an on-board Gigabit Ethernet interface, four USB 2.0 ports, an RS-232 interface, a DVI connector and audio input/output ports. References Motherboard Embedded systems
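As a rough illustration of what the dual-channel DDR2-800 configuration implies, the sketch below computes the theoretical peak memory bandwidth from generic DDR2-800 characteristics (800 megatransfers per second over a 64-bit channel). These figures are standard DDR2 properties and are not taken from Monokub documentation.

```python
# Back-of-the-envelope peak bandwidth for the dual-channel DDR2-800 setup described above.
# These are generic DDR2-800 figures (800 MT/s, 64-bit channel), not Monokub-specific data.

transfers_per_second = 800e6   # DDR2-800: 800 million transfers per second
bytes_per_transfer = 8         # 64-bit wide channel = 8 bytes per transfer
channels = 2                   # dual-channel mode

per_channel_gbps = transfers_per_second * bytes_per_transfer / 1e9
total_gbps = per_channel_gbps * channels
print(f"Peak bandwidth: {per_channel_gbps:.1f} GB/s per channel, {total_gbps:.1f} GB/s total")
# -> Peak bandwidth: 6.4 GB/s per channel, 12.8 GB/s total
```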
Monokub
Technology,Engineering
162
18,827,986
https://en.wikipedia.org/wiki/Circle%20of%20antisimilitude
In inversive geometry, the circle of antisimilitude (also known as mid-circle) of two circles, α and β, is a reference circle for which α and β are inverses of each other. If α and β are non-intersecting or tangent, a single circle of antisimilitude exists; if α and β intersect at two points, there are two circles of antisimilitude. When α and β are congruent, the circle of antisimilitude degenerates to a line of symmetry through which α and β are reflections of each other. Properties If the two circles α and β cross each other, another two circles γ and δ are each tangent to both α and β, and in addition γ and δ are tangent to each other, then the point of tangency between γ and δ necessarily lies on one of the two circles of antisimilitude. If α and β are disjoint and non-concentric, then the locus of points of tangency of γ and δ again forms two circles, but only one of these is the (unique) circle of antisimilitude. If α and β are tangent or concentric, then the locus of points of tangency degenerates to a single circle, which again is the circle of antisimilitude. If the two circles α and β cross each other, then their two circles of antisimilitude each pass through both crossing points, and bisect the angles formed by the arcs of α and β as they cross. If a circle γ crosses circles α and β at equal angles, then γ is crossed orthogonally by one of the circles of antisimilitude of α and β; if γ crosses α and β in supplementary angles, it is crossed orthogonally by the other circle of antisimilitude, and if γ is orthogonal to both α and β then it is also orthogonal to both circles of antisimilitude. For three circles Suppose that, for three circles α, β, and γ, there is a circle of antisimilitude for the pair (α,β) that crosses a second circle of antisimilitude for the pair (β,γ). Then there is a third circle of antisimilitude for the third pair (α,γ) such that the three circles of antisimilitude cross each other in two triple intersection points. Altogether, at most eight triple crossing points may be generated in this way, for there are two ways of choosing each of the first two circles and two points where the two chosen circles cross. These eight or fewer triple crossing points are the centers of inversions that take all three circles α, β, and γ to become equal circles. For three circles that are mutually externally tangent, the (unique) circles of antisimilitude for each pair again cross each other at 120° angles in two triple intersection points that are the isodynamic points of the triangle formed by the three points of tangency. See also Inversive geometry Limiting point (geometry), the center of an inversion that transforms two circles into concentric position Radical axis References External links Circles Inversive geometry
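The defining property stated above, that α and β are inverses of each other in a circle of antisimilitude, can be checked numerically. The sketch below uses a standard construction that is not spelled out in this article: each candidate mid-circle is centered at a center of similitude of the two circles, with squared radius equal to (r2/r1) times the power of that center with respect to α, taken with a positive sign for the external center and a negative sign for the internal one; only candidates with positive squared radius are real circles. The code then verifies the result by inverting sample points of α and confirming that they land on β. The specific circles used are arbitrary examples.

```python
import math

def similitude_centers(A, r1, B, r2):
    """Return [(center, sign)] for the external (+1) and internal (-1) centers of similitude."""
    centers = []
    if not math.isclose(r1, r2):  # external center is at infinity for congruent circles
        ext = ((r2 * A[0] - r1 * B[0]) / (r2 - r1), (r2 * A[1] - r1 * B[1]) / (r2 - r1))
        centers.append((ext, +1))
    inte = ((r2 * A[0] + r1 * B[0]) / (r2 + r1), (r2 * A[1] + r1 * B[1]) / (r2 + r1))
    centers.append((inte, -1))
    return centers

def mid_circles(A, r1, B, r2):
    """Candidate circles of antisimilitude: center at a similitude center P with
    squared radius k = sign * (r2/r1) * power(P, alpha); real only when k > 0."""
    result = []
    for P, sign in similitude_centers(A, r1, B, r2):
        power = (P[0] - A[0]) ** 2 + (P[1] - A[1]) ** 2 - r1 ** 2
        k = sign * (r2 / r1) * power
        if k > 0:
            result.append((P, math.sqrt(k)))
    return result

def invert(P, R, X):
    """Invert point X in the circle with center P and radius R."""
    dx, dy = X[0] - P[0], X[1] - P[1]
    d2 = dx * dx + dy * dy
    return (P[0] + R * R * dx / d2, P[1] + R * R * dy / d2)

# Arbitrary example: two disjoint circles -> exactly one circle of antisimilitude.
A, r1 = (0.0, 0.0), 1.0
B, r2 = (5.0, 0.0), 2.0
for P, R in mid_circles(A, r1, B, r2):
    ok = True
    for i in range(12):  # sample points around alpha
        t = 2 * math.pi * i / 12
        x, y = invert(P, R, (A[0] + r1 * math.cos(t), A[1] + r1 * math.sin(t)))
        ok = ok and math.isclose(math.hypot(x - B[0], y - B[1]), r2, rel_tol=1e-9)
    print(f"mid-circle center={P}, radius={R:.4f}, alpha maps onto beta: {ok}")
```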
Circle of antisimilitude
Mathematics
664
29,946,214
https://en.wikipedia.org/wiki/Energy%20Regulators%20Regional%20Association
The Energy Regulators Regional Association (ERRA) is a voluntary organization of independent energy regulatory bodies primarily from the Central European and Eurasian region, with Affiliates from Africa, Asia the Middle East and the US. Purpose and objectives The purpose and objectives of the association are: To improve national energy regulation in member countries; To foster development of stable energy regulators with autonomy and authority; To improve cooperation among energy regulators; To facilitate the exchange of information, research, training and experience among members and other regulators around the world. History The first energy regulatory bodies of the ERRA region were established in the mid-1990s as an essential part of restructuring and reforms taking place in these countries. ERRA began as a cooperative initiative of 12 energy regulatory bodies. They were then supported from 1999 to 2008 by the US National Association of Regulatory Utility Commissioners (NARUC), which, with the participation of USAID, arranged technical forums, meetings and study tours for mutual training and development. As a consequence, fifteen energy regulators established ERRA on 11 December 2000 in Bucharest. The association was registered in Hungary in April 2001 and its secretariat is based in Budapest. To date ERRA lists 23 full and 14 associate members. Members Current full members Albanian Energy Regulator * Public Services Regulatory Commission of Armenia * Tariff (Price) Council of Azerbaijan (joined in 2007) State Electricity Regulatory Commission of Bosnia and Herzegovina (joined in 2004) Energy and Water Regulatory Commission of Bulgaria * Croatian Energy Regulatory Agency (joined in 2002) Estonian Competition Authority * Georgian National Energy and Water Supply Regulatory Commission * Hungarian Energy and Public Utility Regulatory Authority * Committee for Regulation of Natural Monopolies and Protection of Competition at the Ministry of National Economy of Kazakhstan * State Agency for Fuel and Energy Complex Regulation under the Government of the Kyrgyz Republic * Public Utilities Commission of Latvia * National Commission for Energy Control and Prices of Lithuania * Nigerian Electricity Regulatory Commission (NERC) Energy Regulatory Commission of Macedonia (joined in 2004) National Energy Regulatory Agency of Moldova * Energy Regulatory Commission of Mongolia (joined in 2001) Energy Regulatory Office of Poland * Romanian Energy Regulatory Authority * Federal Tariff Service of the Russian Federation * Energy Agency of Serbia (joined in 2006) Regulatory Office for Network Industries of Slovakia * Energy Market Regulatory Authority of Turkey (joined in 2002) National Energy and Utilities Regulatory Commission of Ukraine * (Founding Members are marked with * above.) 
Current associate members Regulatory Commission for Energy in Federation of Bosnia and Herzegovina (joined in 2010) Regulatory Commission for Energy of Republika Srpska, Bosnia and Herzegovina (joined in 2010) Electricity Sector Regulatory Agency of Cameroon (joined in 2013) ERERA: ECOWAS (Economic Community of West African States) Regional Electricity Regulatory Authority (joined in 2011) Public Utilities Regulatory Commission of Ghana (joined in 2015) Energy and Mineral Regulatory Commission of Jordan (joined in 2007) Nigerian Electricity Regulatory Commission (joined in 2010) Authority for Electricity Regulation of Oman (joined in 2015) National Electric Power Regulatory Authority of Pakistan (joined in 2015) Regional Energy Commission of Moscow City, Russian Federation (joined in 2013) Electricity and Co-Generation Regulatory Authority of Saudi Arabia (joined in 2008) Regulatory and Supervisory Bureau for Electricity and Water of Dubai, UAE (joined in 2015) Energy Regulatory Office of UNMIK (joined in 2005) National Association of Regulatory Utility Commissioners, USA (joined in 2001) Languages ERRA activities are conducted in both English and Russian. Notes External links ERRA website International energy organizations Energy economics Energy regulatory authorities
Energy Regulators Regional Association
Engineering,Environmental_science
697
18,762,262
https://en.wikipedia.org/wiki/TT%20Aquilae
TT Aquilae (TT Aql) is a Classical Cepheid (δ Cep) variable star in the constellation Aquila. The visual apparent magnitude of TT Aql ranges from 6.52 to 7.65 over 13.7546 days. The light curve is asymmetric, with the rise from minimum to maximum brightness only taking half the time of the fall from maximum to minimum. The announcement that the star's brightness varies was made in 1907 by Annie Jump Cannon. It had been observed on 506 photographs taken from May 22, 1888 through November 9, 1906, from which a period of 13.75 days had been derived. TT Aql is a yellow-white supergiant around five thousand times brighter than the sun. It pulsates and varies in temperature between about 5,000 K and 6,000 K, and the spectral type varies between F6 and G5. The radius is at maximum brightness, varying between and as the star pulsates. Cepheid masses can be estimated using Baade-Wesselink relations and this gives . The mass estimated by matching to evolutionary tracks is . The mass calculated by modelling the pulsations is . The discrepancies between the masses obtained by the different methods occurs for most Cepheid variables. References External links INTEGRAL-OMC catalogue Aquila (constellation) Classical Cepheid variables 178359 Aquilae, TT F-type supergiants G-type supergiants 093390 Durchmusterung objects
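The quoted visual magnitude range corresponds to a modest change in brightness: a difference of Δm magnitudes corresponds to a flux ratio of 10^(0.4 Δm). The short sketch below applies this standard relation to the values given above; it is an illustrative calculation, not a result from the cited references.

```python
# Convert TT Aquilae's visual magnitude range (6.52 at maximum, 7.65 at minimum)
# into a brightness (flux) ratio using the standard relation F1/F2 = 10**(0.4 * (m2 - m1)).

m_max, m_min = 6.52, 7.65          # apparent magnitudes at maximum and minimum light
amplitude = m_min - m_max          # 1.13 magnitudes
flux_ratio = 10 ** (0.4 * amplitude)

print(f"Amplitude: {amplitude:.2f} mag")
print(f"The star is about {flux_ratio:.1f} times brighter at maximum than at minimum")
# -> roughly 2.8 times brighter
```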
TT Aquilae
Astronomy
319
14,428,025
https://en.wikipedia.org/wiki/Nylatron
Nylatron is a tradename for a family of nylon plastics, typically filled with molybdenum disulfide lubricant powder. It is used to cast plastic parts for machines, because of its mechanical properties and wear-resistance. Nylatron is a brand name of Mitsubishi Chemical Advanced Materials, Inc. and was originally developed and manufactured by Nippon Polypenco Limited. Nylatron is used in several applications such as: rotary lever actuators where unusual shapes are required heavy-duty caster wheels, normally as a replacement for cast iron or forged steel plain bearing material, especially in screw conveyor applications References External links Matweb datasheets Nylatron Plastic Material Datasheets Polymers
Nylatron
Chemistry,Materials_science
149
3,524,328
https://en.wikipedia.org/wiki/Hauppauge%20MediaMVP
The Hauppauge MediaMVP is a network media player. It consists of a hardware unit with remote control, along with software for a Windows PC. Out of the box, it is capable of playing video and audio, displaying pictures, and "tuning in" to Internet radio stations. Alternative software is also available to extend its capabilities. It can be used as a front-end for various PVR projects. Capabilities The MediaMVP can stream audio and video content from a host PC running Windows. It can display photos stored on the host PC. It can stream Internet radio via the host PC as well. It can display live TV with full PVR features with SageTV PVR software for Windows or Linux. The capabilities listed below refer to the official software and firmware supplied by Hauppauge. Video The MediaMVP supports only the MPEG (MPEG-1 and MPEG-2) video format. However, depending on the MediaMVP host software running on the host computer, the host software may be able to transcode other video file formats before sending them to the MediaMVP in the MPEG format. The maximum un-transcoded playable video size is SDTV (480i). HDTV MPEG streams (e.g. 720p) need to be transcoded in real-time on the computer to SD format. Transcoding video can tax some slower computers. With a hardware MPEG decoder as part of its PowerPC processor, it renders moving video images more smoothly than many software PVR implementations. Audio Supported audio file formats include MP3 and WMA. Playlist formats supported include M3U, PLS, ASX and B4S. See also Internet radio below. Photos Supported image file formats include JPG and GIF. Slideshows are supported. Listening to music (including streaming Internet radio) during slideshows is supported as well. Internet radio Supports streaming Internet radio stations via the host PC. Other capabilities Can schedule recording of television broadcasts when using a Hauppauge WinTV TV tuner card with the Hauppauge WinTV recording software. Hardware The MediaMVP hardware consists of a small set-top box and an infrared remote control. It can be oriented either horizontally or vertically (using a supplied base). It's normally operated via the supplied remote control. Behind the unit's red translucent front panel is a single red LED. The LED is used as a power indicator, and also flashes when the unit's remote control is used. Typical (wired) units consume less than 5W. The power supply for the original MediaMVP consists of 6VDC, 1.66A coaxial DC power connector. The outer sleeve is the negative terminal and the inner tip is the positive terminal. Connectivity The rear of the MediaMVP unit has a plug for 6 VDC power, an Ethernet port, and in the US edition, S-Video out, composite video out, and stereo audio out, while the European edition has instead a single "SCART out" connector, offering additional RGB output possibilities, and bundles a SCART lead with 2 extra stereo audio cables with female RCA connectors coming out of one of the plugs. Model 1016, the "wMVP", has Wireless G connectivity. however, this connection method can be inadequate for viewing digital television recordings depending on signal strength and reliability. A modified version of the unit is rebranded as Helius Media Stream and is fitted with a standard channel 3/4 RF modulator. This box is a better choice for TVs that lack separate RCA audio/video inputs. Processor, RAM and firmware The MediaMVP uses the IBM STB02500, a PowerPC 405 CPU integrated with functionality especially suited for use in set-top boxes, like MPEG2 decoder. 
The unit has 16MB of SDRAM. It runs on Linux-based firmware. Hauppauge has delivered enhancements and new functionality to the MediaMVP from time to time by releasing updated firmware. Firmware updates are delivered to the device when it is powered up. Notes Internet radio See list of Internet radio stations. Can play MMS streams, e.g. mms://some.radio.station/path/to/stream/ Can play HTTP streams, e.g. http://some.radio.station/path/to/stream/ See also Home theater PC Media center (disambiguation) Dreambox Moxi Telly (home entertainment server) External links Hauppauge MediaMVP Interactive television Digital television Digital video recorders Linux-based devices Digital media players
Hauppauge MediaMVP
Technology
967
1,673,297
https://en.wikipedia.org/wiki/Shotcrete
Shotcrete, gunite (), or sprayed concrete is concrete or mortar conveyed through a hose and pneumatically projected at high velocity onto a surface. This construction technique was invented by Carl Akeley and first used in 1907. The concrete is typically reinforced by conventional steel rods, steel mesh, or fibers. The concrete or mortar is formulated to be sticky and resist flowing when at rest to allow use on walls and ceilings, but exhibit sufficient shear thinning to be easily plumbable through hoses. Shotcrete is usually an all-inclusive term for both the wet-mix and dry-mix versions invented by Akeley. In pool construction, however, shotcrete refers to wet mix and gunite to dry mix. In this context, these terms are not interchangeable. Shotcrete is placed and compacted/consolidated at the same time, due to the force with which it is ejected from the nozzle. It can be sprayed onto any type or shape of surface, including vertical or overhead areas. Shotcrete has the characteristics of high compressive strength, good durability, water tightness and frost resistance. History Shotcrete, then known as gunite, was invented in 1907 by American taxidermist Carl Akeley to repair the crumbling façade of the Field Columbian Museum in Chicago (the old Palace of Fine Arts from the World's Columbian Exposition). He used the method of blowing dry material out of a hose with compressed air, injecting water at the nozzle as it was released. In 1911, he was granted a patent for his inventions: the "cement gun", the equipment used; and "gunite", the material that was produced. There is no evidence that Akeley ever used sprayable concrete in his taxidermy work, as is sometimes suggested. F. Trubee Davison covered this and other Akeley inventions in a special issue of Natural History magazine. The dry-mix process was used until the wet-mix process was devised in the 1950s. In the 1960s, an alternative method for gunning dry material with a rotary gun appeared, using a continuously fed open hopper. The nozzle is controlled by hand on small jobs, such as a modest swimming pool. On larger work it is attached to mechanical arms and operated by hand-held remote control. Dry vs. wet mix The dry mix method involves placing the dry ingredients into a hopper and then conveying them pneumatically through a hose to the nozzle. The nozzle operator controls the addition of water at the nozzle. The water and the dry mixture is not completely mixed, but is completed as the mixture hits the receiving surface. This requires a skilled nozzle operator, especially in the case of thick or heavily reinforced sections. Advantages of the dry mix process are that the water content can be adjusted instantaneously by the nozzle operator, allowing more effective placement in overhead and vertical applications without using accelerators. The dry mix process is useful in repair applications when it is necessary to stop frequently, as the dry material is easily discharged from the hose. Wet-mix shotcrete involves pumping of a previously prepared concrete, typically ready-mixed concrete, to the nozzle. Compressed air is introduced at the nozzle to impel the mixture onto the receiving surface. The wet-process procedure generally produces less rebound, waste (when material falls to the floor), and dust compared to the dry-mix process. The greatest advantage of the wet-mix process is all the ingredients are mixed with the water and additives required, and also larger volumes can be placed in less time than the dry process concrete. 
Shotcrete machines Shotcrete machines are available which control the complete process and make it very fast and easy. Manual and mechanical methods are used for the wet spraying process but wet sprayed concrete is traditionally applied by machine. The high spray outputs and large cross-sections require the work to be mechanised. Concrete spraying systems with duplex pumps are mainly used for working with wet mixes. Unlike conventional concrete pumps, these systems have to meet the additional requirement of delivering a concrete flow that is as constant as possible, and therefore continuous, to guarantee homogeneous spray application. Depending on the fineness of the filler, mortar shotcrete (fraction size up to 2.5 mm) is distinguished from shotcrete (up to 10 mm), and syringe concrete, or sprayed concrete (up to 25 mm). Shotcrete vs. gunite Gunite was originally a trademarked name that specifically referred to the dry-mix pneumatic cement application process. In the dry-mix process, the dry sand and cement mixture is blown through a hose using compressed air, with water being injected at the nozzle to hydrate the mixture, immediately before it is discharged onto the receiving surface. Gunite was the original term coined by Akeley, trademarked in 1909 and patented in North Carolina. The concrete mixture is applied by pneumatic pressure from a gun, hence gun-ite. The term Gunite became the registered trademark of Allentown Equipment, the oldest manufacturer of gunite equipment. Other manufacturers were thus compelled to use other terminology to describe the process such as shotcrete, pneumatic concrete, guncrete, etc. Shotcrete is an all-inclusive term for spraying concrete or mortar with either a dry or wet mix process. However, shotcrete may also sometimes be used to distinguish wet-mix from the dry-mix method. The term shotcrete was first defined by the American Railway Engineers Association (AREA) in the early 1930s. By 1951, shotcrete had become the official generic name of the sprayed concrete process—whether it utilizes the wet or dry process. Applications Shotcrete is commonly used to line tunnel walls, in mines, subways, and automobile tunnels. Fire-resistant shotcrete developed in Norway is used on the Marmaray tunnel in Istanbul. Shotcrete is used to reinforce both temporary and permanent excavations. It may be employed, in concert with lagging and other forms of earth anchor, to stabilize an excavation for an underground parking structure or hi-rise buildings during construction. This provides a large waterproof enclosure in which a structure can be erected. Once the structure is completed the area between its foundation and the shotcrete is backfilled and compacted. Shotcrete is also a viable means and method for placing structural concrete. Shotcrete is very useful in hard rock mining. Development of decline pathway to go underground is critical for movement of heavy machinery, miners, and material. Shotcrete helps make these paths safe from any ground fall. Also, the shotcrete is carried out much faster than the repair mixtures usual non-mechanized application. See also Cement Rebar Reinforced concrete Slurry wall Structural engineering Flintstone House – an example of shotcrete housing construction Notes External links Concrete Tunnel construction
Shotcrete
Engineering
1,407
53,494,176
https://en.wikipedia.org/wiki/List%20of%20largest%20biomedical%20companies%20by%20revenue
The following is a list of independent pharmaceutical, biotechnology and medical companies listed on a stock exchange (as indicated) that have generated a revenue of at least , ranked by their revenue in the respective financial year. It does not include biotechnology companies that are now owned by, or form a part of, larger pharmaceutical groups. Ranking by revenue The following table lists the largest biotechnology and pharmaceutical companies ranked by revenue in billion USD. The change column indicates the company's relative position in this list compared to its relative position in the preceding year; i.e., an increase would be moving closer to rank 1 and vice versa. Green cells indicate years where revenue increased compared to the preceding year. Red cells indicate those years when there has been a decrease. See also List of largest biomedical companies by market capitalization References Biotechnology Biomedical
List of largest biomedical companies by revenue
Engineering,Biology
165
38,402,739
https://en.wikipedia.org/wiki/MTORC1
mTORC1, also known as mammalian target of rapamycin complex 1 or mechanistic target of rapamycin complex 1, is a protein complex that functions as a nutrient/energy/redox sensor and controls protein synthesis. mTOR Complex 1 (mTORC1) is composed of the mTOR protein complex, regulatory-associated protein of mTOR (commonly known as raptor), mammalian lethal with SEC13 protein 8 (MLST8), PRAS40 and DEPTOR. This complex embodies the classic functions of mTOR, namely as a nutrient/energy/redox sensor and controller of protein synthesis. The activity of this complex is regulated by rapamycin, insulin, growth factors, phosphatidic acid, certain amino acids and their derivatives (e.g., -leucine and β-hydroxy β-methylbutyric acid), mechanical stimuli, and oxidative stress. Recently it has been also demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling. The role of mTORC1 is to activate translation of proteins. In order for cells to grow and proliferate by manufacturing more proteins, the cells must ensure that they have the resources available for protein production. Thus, for protein production, and therefore mTORC1 activation, cells must have adequate energy resources, nutrient availability, oxygen abundance, and proper growth factors in order for mRNA translation to begin. Activation at the lysosome The TSC complex Almost all of the variables required for protein synthesis affect mTORC1 activation by interacting with the TSC1/TSC2 protein complex. TSC2 is a GTPase activating protein (GAP). Its GAP activity interacts with a G protein called Rheb by hydrolyzing the GTP of the active Rheb-GTP complex, converting it to the inactive Rheb-GDP complex. The active Rheb-GTP activates mTORC1 through unelucidated pathways. Thus, many of the pathways that influence mTORC1 activation do so through the activation or inactivation of the TSC1/TSC2 heterodimer. This control is usually performed through phosphorylation of the complex. This phosphorylation can cause the dimer to dissociate and lose its GAP activity, or the phosphorylation can cause the heterodimer to have increased GAP activity, depending on which amino acid residue becomes phosphorylated. Thus, the signals that influence mTORC1 activity do so through activation or inactivation of the TSC1/TSC2 complex, upstream of mTORC1. The Ragulator-Rag complex mTORC1 interacts at the Ragulator-Rag complex on the surface of the lysosome in response to amino acid levels in the cell. Even if a cell has the proper energy for protein synthesis, if it does not have the amino acid building blocks for proteins, no protein synthesis will occur. Studies have shown that depriving amino acid levels inhibits mTORC1 signaling to the point where both energy abundance and amino acids are necessary for mTORC1 to function. When amino acids are introduced to a deprived cell, the presence of amino acids causes Rag GTPase heterodimers to switch to their active conformation. Active Rag heterodimers interact with raptor, localizing mTORC1 to the surface of late endosomes and lysosomes where the Rheb-GTP is located. This allows mTORC1 to physically interact with Rheb. Thus the amino acid pathway as well as the growth factor/energy pathway converge on endosomes and lysosomes. Thus the Ragulator-Rag complex recruits mTORC1 to lysosomes to interact with Rheb. 
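As a way of summarizing the activation logic described above (growth-factor and energy signaling converging on the TSC complex and Rheb, together with amino-acid sensing through the Rag GTPases at the lysosome), the sketch below encodes a deliberately simplified boolean model in Python. It is a didactic abstraction of the prose, not a quantitative or mechanistically complete model of the pathway.

```python
def mtorc1_active(growth_factors: bool, energy_ok: bool, oxygen_ok: bool, amino_acids: bool) -> bool:
    """Toy boolean abstraction of mTORC1 activation as described in the text.

    Growth factors, adequate energy (low AMP/ATP ratio) and oxygen keep the
    TSC1/TSC2 complex inactive, leaving Rheb in its GTP-bound (active) state.
    Amino acids activate the Rag GTPases, which recruit mTORC1 to the lysosome
    where Rheb can stimulate it. Both branches are required.
    """
    tsc_inhibited = growth_factors and energy_ok and oxygen_ok   # any stress signal activates TSC
    rheb_gtp = tsc_inhibited                                     # active TSC would hydrolyze Rheb-GTP
    rag_active = amino_acids                                     # Ragulator-Rag senses amino acids
    recruited_to_lysosome = rag_active                           # Rag heterodimers dock raptor/mTORC1
    return rheb_gtp and recruited_to_lysosome

# Example: plentiful nutrients and growth signals -> active; any missing input -> inactive.
print(mtorc1_active(True, True, True, True))    # True
print(mtorc1_active(True, True, True, False))   # False (amino acid starvation)
print(mtorc1_active(True, False, True, True))   # False (low energy: AMPK activates TSC2)
```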
Regulation of the Ragulator-Rag complex Rag activity is regulated by at least two highly conserved complexes: the "GATOR1" complex containing DEPDC5, NPRL2 and NPRL3 and the ""GATOR2" complex containing Mios, WDR24, WDR59, Seh1L, Sec13. GATOR1 inhibits Rags (it is a GTPase-activating protein for Rag subunits A/B) and GATOR2 activates Rags by inhibiting DEPDC5. Upstream signaling Receptor tyrosine kinases Akt/PKB pathway Insulin-like growth factors can activate mTORC1 through the receptor tyrosine kinase (RTK)-Akt/PKB signaling pathway. Ultimately, Akt phosphorylates TSC2 on serine residue 939, serine residue 981, and threonine residue 1462. These phosphorylated sites will recruit the cytosolic anchoring protein 14-3-3 to TSC2, disrupting the TSC1/TSC2 dimer. When TSC2 is not associated with TSC1, TSC2 loses its GAP activity and can no longer hydrolyze Rheb-GTP. This results in continued activation of mTORC1, allowing for protein synthesis via insulin signaling. Akt will also phosphorylate PRAS40, causing it to fall off of the Raptor protein located on mTORC1. Since PRAS40 prevents Raptor from recruiting mTORC1's substrates 4E-BP1 and S6K1, its removal will allow the two substrates to be recruited to mTORC1 and thereby activated in this way. Furthermore, since insulin is a factor that is secreted by pancreatic beta cells upon glucose elevation in the blood, its signaling ensures that there is energy for protein synthesis to take place. In a negative feedback loop on mTORC1 signaling, S6K1 is able to phosphorylate the insulin receptor and inhibit its sensitivity to insulin. This has great significance in diabetes mellitus, which is due to insulin resistance. MAPK/ERK pathway Mitogens, such as insulin like growth factor 1 (IGF1), can activate the MAPK/ERK pathway, which can inhibit the TSC1/TSC2 complex, activating mTORC1. In this pathway, the G protein Ras is tethered to the plasma membrane via a farnesyl group and is in its inactive GDP state. Upon growth factor binding to the adjacent receptor tyrosine kinase, the adaptor protein GRB2 binds with its SH2 domains. This recruits the GEF called Sos, which activates the Ras G protein. Ras activates Raf (MAPKKK), which activates Mek (MAPKK), which activates Erk (MAPK). Erk can go on to activate RSK. Erk will phosphorylate the serine residue 644 on TSC2, while RSK will phosphorylate serine residue 1798 on TSC2. These phosphorylations will cause the heterodimer to fall apart, and prevent it from deactivating Rheb, which keeps mTORC1 active. RSK has also been shown to phosphorylate raptor, which helps it overcome the inhibitory effects of PRAS40. JNK pathway c-Jun N-terminal kinase (JNK) signaling is part of the mitogen-activated protein kinase (MAPK) signaling pathway essential in stress signaling pathways relating to gene expression, neuronal development, and cell survival. Recent studies have shown there is a direct molecular interaction where JNK phosphorylates Raptor at Ser-696, Thr-706, and Ser-863. Therefore, mTORC1 activity is JNK-dependent. Thus, JNK activation plays a role in protein synthesis via subsequent downstream effectors of mTORC1 such as S6 kinase and eIFs. Wnt pathway The Wnt pathway is responsible for cellular growth and proliferation during organismal development; thus, it could be reasoned that activation of this pathway also activates mTORC1. Activation of the Wnt pathway inhibits glycogen synthase kinase 3 beta (GSK3B). 
When the Wnt pathway is not active, GSK3B is able to phosphorylate TSC2 on Ser1341 and Ser1337 in conjunction with AMPK phosphorylation of Ser1345. It has been found that the AMPK is required to first phosphorylate Ser1345 before GSK3B can phosphorylate its target serine residues. This phosphorylation of TSC2 would activate this complex, if GSK3B were active. Since the Wnt pathway inhibits GSK3 signaling, the active Wnt pathway is also involved in the mTORC1 pathway. Thus, mTORC1 can activate protein synthesis for the developing organism. Cytokines Cytokines like tumor necrosis factor alpha (TNF-alpha) can induce mTOR activity through IKK beta, also known as IKK2. IKK beta can phosphorylate TSC1 at serine residue 487 and TSC1 at serine residue 511. This causes the heterodimer TSC complex to fall apart, keeping Rheb in its active GTP-bound state. Energy and oxygen Energy status In order for translation to take place, abundant sources of energy, particularly in the form of ATP, need to be present. If these levels of ATP are not present, due to its hydrolysis into other forms like AMP, and the ratio of AMP to ATP molecules gets too high, AMPK will become activated. AMPK will go on to inhibit energy consuming pathways such as protein synthesis. AMPK can phosphorylate TSC2 on serine residue 1387, which activates the GAP activity of this complex, causing Rheb-GTP to be hydrolyzed into Rheb-GDP. This inactivates mTORC1 and blocks protein synthesis through this pathway. AMPK can also phosphorylate Raptor on two serine residues. This phosphorylated Raptor recruits 14-3-3 to bind to it and prevents Raptor from being part of the mTORC1 complex. Since mTORC1 cannot recruit its substrates without Raptor, no protein synthesis via mTORC1 occurs. LKB1, also known as STK11, is a known tumor suppressor that can activate AMPK. More studies on this aspect of mTORC1 may help shed light on its strong link to cancer. Hypoxic stress When oxygen levels in the cell are low, it will limit its energy expenditure through the inhibition of protein synthesis. Under hypoxic conditions, hypoxia inducible factor one alpha (HIF1A) will stabilize and activate transcription of REDD1, also known as DDIT4. After translation, this REDD1 protein will bind to TSC2, which prevents 14-3-3 from inhibiting the TSC complex. Thus, TSC retains its GAP activity towards Rheb, causing Rheb to remain bound to GDP and mTORC1 to be inactive. Due to the lack of synthesis of ATP in the mitochondria under hypoxic stress or hypoxia, AMPK will also become active and thus inhibit mTORC1 through its processes. Downstream signaling mTORC1 activates transcription and translation through its interactions with p70-S6 Kinase 1 (S6K1) and 4E-BP1, the eukaryotic initiation factor 4E (eIF4E) binding protein 1, primarily via phosphorylation and dephosphorylation of its downstream targets. S6K1 and 4E-BP1 modulate translation in eukaryotic cells. Their signaling will converge at the translation initiation complex on the 5' end of mRNA, and thus activate translation. 4E-BP1 Activated mTORC1 will phosphorylate translation repressor protein 4E-BP1, thereby releasing it from eukaryotic translation initiation factor 4E (eIF4E). eIF4E is now free to join the eukaryotic translation initiation factor 4G (eIF4G) and the eukaryotic translation initiation factor 4A (eIF4A). This complex then binds to the 5' cap of mRNA and will recruit the helicase eukaryotic translation initiation factor A (eIF4A) and its cofactor eukaryotic translation initiation factor 4B (eIF4B). 
The helicase is required to remove hairpin loops that arise in the 5' untranslated regions of mRNA, which prevent premature translation of proteins. Once the initiation complex is assembled at the 5' cap of mRNA, it will recruit the 40S small ribosomal subunit that is now capable of scanning for the AUG start codon start site, because the hairpin loop has been degraded by the eIF4A helicase. Once the ribosome reaches the AUG codon, translation can begin. S6K Previous studies suggest that S6K signaling is mediated by mTOR in a rapamycin-dependent manner wherein S6K is displaced from the eIF3 complex upon binding of mTOR with eIF3. Hypophosphorylated S6K is located on the eIF3 scaffold complex. Active mTORC1 gets recruited to the scaffold, and once there, will phosphorylate S6K to make it active. mTORC1 phosphorylates S6K1 on at least two residues, with the most critical modification occurring on a threonine residue (T389). This event stimulates the subsequent phosphorylation of S6K1 by PDPK1. Active S6K1 can in turn stimulate the initiation of protein synthesis through activation of S6 Ribosomal protein (a component of the ribosome) and eIF4B, causing them to be recruited to the pre-initiation complex. Active S6K can bind to the SKAR scaffold protein that can get recruited to exon junction complexes (EJC). Exon junction complexes span the mRNA region where two exons come together after an intron has been spliced out. Once S6K binds to this complex, increased translation on these mRNA regions occurs. S6K1 can also participate in a positive feedback loop with mTORC1 by phosphorylating mTOR's negative regulatory domain at two sites thr-2446 and ser-2448; phosphorylation at these sites appears to stimulate mTOR activity. S6K also can phosphorylate programmed cell death 4 (PDCD4), which marks it for degradation by ubiquitin ligase Beta-TrCP (BTRC). PDCD4 is a tumor suppressor that binds to eIF4A and prevents it from being incorporated into the initiation complex. Role in disease and aging mTOR was found to be related to aging in 2001 when the ortholog of S6K, SCH9, was deleted in S. cerevisiae, doubling its lifespan. This greatly increased the interest in upstream signaling and mTORC1. Studies in inhibiting mTORC1 were thus performed on the model organisms of C. elegans, fruitflies, and mice. Inhibition of mTORC1 showed significantly increased lifespans in all model species. Disrupting the gut microbiota of infant mice was found to lead to reduced longevity with signaling of mTORC1 implicated as a potential mechanism. Based on upstream signaling of mTORC1, a clear relationship between food consumption and mTORC1 activity has been observed. Most specifically, carbohydrate consumption activates mTORC1 through the insulin growth factor pathway. In addition, amino acid consumption will stimulate mTORC1 through the branched chain amino acid/Rag pathway. Thus dietary restriction inhibits mTORC1 signaling through both upstream pathways of mTORC that converge on the lysosome. Autophagy Autophagy is the major degradation pathway in eukaryotic cells and is essential for the removal of damaged organelles via macroautophagy or proteins and smaller cellular debris via microautophagy from the cytoplasm. Thus, autophagy is a way for the cell to recycle old and damaged materials by breaking them down into their smaller components, allowing for the resynthesis of newer and healthier cellular structures. Autophagy can thus remove protein aggregates and damaged organelles that can lead to cellular dysfunction. 
Upon activation, mTORC1 will phosphorylate autophagy-related protein 13 (Atg 13), preventing it from entering the ULK1 kinase complex, which consists of Atg1, Atg17, and Atg101. This prevents the structure from being recruited to the preautophagosomal structure at the plasma membrane, inhibiting autophagy. mTORC1's ability to inhibit autophagy while at the same time stimulate protein synthesis and cell growth can result in accumulations of damaged proteins and organelles, contributing to damage at the cellular level. Because autophagy appears to decline with age, activation of autophagy may help promote longevity in humans. Problems in proper autophagy processes have been linked to diabetes, cardiovascular disease, neurodegenerative diseases, and cancer. Lysosomal damage mTORC1 is positioned on lysosomes and is inhibited when lysosomal membrane is damaged through a protein complex termed GALTOR. GALTOR contains galectin-8, a cytosolic lectin, which recognizes damaged lysosomal membranes by binding to the exposed glycoconjugates normally facing lysosomal lumen. Under homeostatic conditions, Galectin-8 associates with active mTOR. Following membrane damage galectin-8 no longer interacts with mTOR but instead switches to complexes containing SLC38A9, RRAGA/RRAGB, and LAMTOR1 (a component of Ragulator) thus inhibiting mTOR, mTOR inhibition in turn activates autophagy and starts a quality control program that removes damaged lysosomes, referred to as lysophagy, Reactive oxygen species Reactive oxygen species can damage the DNA and proteins in cells. A majority of them arise in the mitochondria. Deletion of the TOR1 gene in yeast increases cellular respiration in the mitochondria by enhancing the translation of mitochondrial DNA that encodes for the complexes involved in the electron transport chain. When this electron transport chain is not as efficient, the unreduced oxygen molecules in the mitochondrial cortex may accumulate and begin to produce reactive oxygen species. It is important to note that both cancer cells as well as those cells with greater levels of mTORC1 both rely more on glycolysis in the cytosol for ATP production rather than through oxidative phosphorylation in the inner membrane of the mitochondria. Inhibition of mTORC1 has also been shown to increase transcription of the NFE2L2 (NRF2) gene, which is a transcription factor that is able to regulate the expression of electrophilic response elements as well as antioxidants in response to increased levels of reactive oxygen species. Though AMPK induced eNOS has been shown to regulate mTORC1 in endothelium. Unlike the other cell type in endothelium eNOS induced mTORC1 and this pathway is required for mitochondrial biogenesis. Stem cells Conservation of stem cells in the body has been shown to help prevent against premature aging. mTORC1 activity plays a critical role in the growth and proliferation of stem cells. Knocking out mTORC1 results in embryonic lethality due to lack of trophoblast development. Treating stem cells with rapamycin will also slow their proliferation, conserving the stem cells in their undifferentiated condition. mTORC1 plays a role in the differentiation and proliferation of hematopoietic stem cells. Its upregulation has been shown to cause premature aging in hematopoietic stem cells. Conversely, inhibiting mTOR restores and regenerates the hematopoietic stem cell line. The mechanisms of mTORC1's inhibition on proliferation and differentiation of hematopoietic stem cells has yet to be fully elucidated. 
Rapamycin is used clinically as an immunosuppressant and prevents the proliferation of T cells and B cells. Paradoxically, even though rapamycin is a federally approved immunosuppressant, its inhibition of mTORC1 results in better quantity and quality of functional memory T cells. mTORC1 inhibition with rapamycin improves the ability of naïve T cells to become precursor memory T cells during the expansion phase of T cell development . This inhibition also allows for an increase in quality of these memory T cells that become mature T cells during the contraction phase of their development. mTORC1 inhibition with rapamycin has also been linked to a dramatic increase of B cells in old mice, enhancing their immune systems. This paradox of rapamycin inhibiting the immune system response has been linked to several reasons, including its interaction with regulatory T cells. As a biomolecular target Activators Resistance exercise, the amino acid -leucine, and beta-hydroxy beta-methylbutyric acid (HMB) are known to induce signaling cascades in skeletal muscle cells that result in mTOR phosphorylation, the activation of mTORC1, and subsequently the initiation of myofibrillar protein synthesis (i.e., the production of proteins such as myosin, titin, and actin), thereby facilitating muscle hypertrophy. The NMDA receptor antagonist ketamine has been found to activate the mTORC1 pathway in the medial prefrontal cortex (mPFC) of the brain as an essential downstream mechanism in the mediation of its rapid-acting antidepressant effects. NV-5138 is a ligand and modulator of sestrin2, a leucine amino acid sensor and upstream regulatory pathway of mTORC1, and is under development for the treatment of depression. The drug has been found to directly and selectively activate the mTORC1 pathway, including in the mPFC, and to produce rapid-acting antidepressant effects similar to those of ketamine. Inhibitors There have been several dietary compounds that have been suggested to inhibit mTORC1 signaling including EGCG, resveratrol, curcumin, caffeine, and alcohol. First generation drugs Rapamycin was the first known inhibitor of mTORC1, considering that mTORC1 was discovered as being the target of rapamycin. Rapamycin will bind to cytosolic FKBP12 and act as a scaffold molecule, allowing this protein to dock on the FRB regulatory region (FKBP12-Rapamycin Binding region/domain) on mTORC1. The binding of the FKBP12-rapamycin complex to the FRB regulatory region inhibits mTORC1 through processes not yet known. mTORC2 is also inhibited by rapamycin in some cell culture lines and tissues, particularly those that express high levels of FKBP12 and low levels of FKBP51. Rapamycin itself is not very water soluble and is not very stable, so scientists developed rapamycin analogs, called rapalogs, to overcome these two problems with rapamycin. These drugs are considered the first generation inhibitors of mTOR. These other inhibitors include everolimus and temsirolimus. Compared with the parent compound rapamycin, everolimus is more selective for the mTORC1 protein complex, with little impact on the mTORC2 complex. mTORC1 inhibition by everolimus has been shown to normalize tumor blood vessels, to increase tumor-infiltrating lymphocytes, and to improve adoptive cell transfer therapy. Sirolimus, which is the drug name for rapamycin, was approved by the U.S. Food and Drug Administration (FDA) in 1999 to prevent against transplant rejection in patients undergoing kidney transplantation. 
In 2003, it was approved as a stent covering for widening arteries to prevent future heart attacks. In 2007, mTORC1 inhibitors began to be approved for the treatment of cancers such as renal cell carcinoma. In 2008 they were approved for treatment of mantle cell lymphoma. mTORC1 inhibitors have recently been approved for treatment of pancreatic cancer. In 2010 they were approved for treatment of tuberous sclerosis. Second generation drugs The second generation of inhibitors was created to overcome problems with upstream signaling upon the introduction of first generation inhibitors to the treated cells. One problem with the first generation inhibitors of mTORC1 is that there is a negative feedback loop from phosphorylated S6K that can inhibit the insulin RTK via phosphorylation. When this negative feedback loop is no longer there, the upstream regulators of mTORC1 become more active than they otherwise would have been under normal mTORC1 activity. Another problem is that mTORC2 is resistant to rapamycin, and it too acts upstream of mTORC1 by activating Akt. Thus signaling upstream of mTORC1 still remains very active upon its inhibition via rapamycin and the rapalogs. Rapamycin and its analogues also have procoagulant side effects caused by off-target binding of the activated immunophilin FKBP12, which are not produced by structurally unrelated inhibitors of mTORC such as gedatolisib, WYE-687 and XL-388. Second generation inhibitors are able to bind to the ATP-binding motif on the kinase domain of the mTOR core protein itself and abolish activity of both mTOR complexes. In addition, since the mTOR and the PI3K proteins are both in the same phosphatidylinositol 3-kinase-related kinase (PIKK) family of kinases, some second generation inhibitors have dual inhibition towards the mTOR complexes as well as PI3K, which acts upstream of mTORC1. As of 2011, these second generation inhibitors were in phase II of clinical trials. Third generation drugs The third generation of inhibitors was created following the realization that many of the side effects of rapamycin and rapamycin analogs were mediated not as a result of direct inhibition of mTORC1, but as a consequence of off-target inhibition of mTORC2. Rapamycin analogs such as DL001, which are more selective for mTORC1 than sirolimus, have been developed and have reduced side effects in mice. mTORC1 inhibitors that have novel mechanisms of action, for example peptides like PRAS40 and small molecules like HY-124798 (Rheb inhibitor NR1), which inhibit the interaction of mTORC1 with its endogenous activator Rheb, are also being developed. Some glucose transporter inhibitors such as NV-5440 and NV-6297 are also selective inhibitors of mTORC1. There have been over 1,300 clinical trials conducted with mTOR inhibitors since 1970. References External links Protein complexes EC 2.7.11 Tor signaling pathway Human proteins
MTORC1
Chemistry
5,777
68,169,001
https://en.wikipedia.org/wiki/Nor%C5%9Funtepe
Norşuntepe is a tell, or archaeological settlement mound, in Elazığ Province (Turkey). The site was occupied between the Chalcolithic and Iron Age and is now partially submerged by Lake Keban. It was excavated between 1968 and 1974. The site and its environment Before it was flooded, Norşuntepe was located on the Altınova Plain near the mouth of the Murat River (downstream from the town of Palu, Elazığ). It is now partially submerged by the reservoir created by the Keban Dam; its top is still above the water level. The site consists of a central hill or "acropolis" measuring and high, making it the largest tell in the area. The central hill is surrounded by lower terraces encompassing an area of . History Norşuntepe was occupied from the Chalcolithic to the Iron Age. The excavators have recognized 40 different occupation levels ranging in date from the fifth millennium BC to ca. 600 BC. Its occupation levels overlap to a large degree with those excavated at nearby Arslantepe. Chalcolithic The Chalcolithic occupation at Norşuntepe can be divided into three phases. The oldest, Phase I, dates to the Middle Chalcolithic and included Ubaid-type pottery. Phase II represents the Late Chalcolithic; during its final levels, more complex architecture appeared in the excavated area. Phase II: Metallurgy and arsenical bronze Also during Phase II, copper and arsenical bronze production was practiced at the site. Norşuntepe provides the first clear and unambiguous evidence of arsenical bronze production in this general area before the 4th millennium. It demonstrates that some form of arsenic alloying was being deliberately practised. Since the slag identified at Norşuntepe contains no arsenic, arsenic-bearing materials must have been added separately. The evidence was discovered at the levels with Ubaid style ceramics, where a number of structures related to Mesopotamian architectural traditions were also found. A related site in the area from the same time period is Değirmentepe, where arsenical bronze was also produced around 4200 BC. Phase III The final Chalcolithic phases were characterized by small-scale single-room houses. Radiocarbon dating of the different Chalcolithic levels provided dates between 4300 and 3800 BC. Early Bronze The site reached a size of 3.2 hectares in the Early Bronze I and II periods and then shrank to 0.8 hectares in EB III. After a hiatus, Norşuntepe was again occupied during the Early Bronze Age. During this period, the site was surrounded by a mudbrick city wall built on a stone foundation. There is evidence for copper production, and some sort of palace or large central building appears at the site in the final phases. In terms of material culture and architecture, there are clear parallels with Transcaucasia and the Kura–Araxes culture. The latest Early Bronze Age phase in Norşuntepe ended in a fire. Middle Bronze The Middle Bronze Age settlement is smaller than its precursor and no evidence for a palace has been found. Late Bronze The Late Bronze Age remains at Norşuntepe were heavily disturbed by later Iron Age activity, but some larger buildings have been excavated. Iron Age The Early Iron Age at Norşuntepe (1150–800 BC) is characterized by a shift away from Hittite material culture, possibly as a result of the influx of immigrants such as the Mushki. The settlement seems to have been restricted to the south terrace and may have had a rural character. During its final occupation phases (800–600 BC), Norşuntepe was part of Urartu. 
A building with a large, columned hall was located on the main hill, whereas a second large building, possibly a caravanserai, was excavated on the south terrace. A cemetery located on the hilltop included a burial chamber where three horses together with gear and weapons were buried. The hilltop was again used as a cemetery during the Medieval Period. Excavations It was excavated between 1968 and 1974 under the direction of German archaeologist Harald Hauptmann as part of the salvage project to document archaeological sites that would be flooded by the construction of the Keban Dam. Excavation of the site focused on three areas: the western slope, the so-called "acropolis" area, and the south terrace. See also Aratashen References Archaeological sites in Eastern Anatolia Geography of Elazığ Province Chalcolithic sites of Asia Bronze Age sites in Asia Iron Age sites in Asia Urartian cities Tells (archaeology) Archaeometallurgy Kura-Araxes culture
Norşuntepe
Chemistry,Materials_science
947
11,502,909
https://en.wikipedia.org/wiki/Ribbon%20diagram
Ribbon diagrams, also known as Richardson diagrams, are 3D schematic representations of protein structure and are one of the most common methods of protein depiction used today. The ribbon depicts the general course and organisation of the protein backbone in 3D and serves as a visual framework for hanging details of the entire atomic structure, such as the balls for the oxygen atoms attached to myoglobin's active site in the adjacent figure. Ribbon diagrams are generated by interpolating a smooth curve through the polypeptide backbone. α-helices are shown as coiled ribbons or thick tubes, β-sheets as arrows, and non-repetitive coils or loops as lines or thin tubes. The direction of the polypeptide chain is shown locally by the arrows, and may be indicated overall by a colour ramp along the length of the ribbon. Ribbon diagrams are simple yet powerful, expressing the visual basics of a molecular structure (twist, fold and unfold). This method has successfully portrayed the overall organization of protein structures, reflecting their three-dimensional nature and allowing better understanding of these complex objects both by expert structural biologists and by other scientists, students, and the general public. History The first ribbon diagrams, hand-drawn by Jane S. Richardson in 1980 (influenced by earlier individual illustrations), were the first schematics of 3D protein structure to be produced systematically. They were created to illustrate a classification of protein structures for an article in Advances in Protein Chemistry (now available in annotated form on-line at Anatax). These drawings were outlined in pen on tracing paper over a printout of a Cα trace of the atomic coordinates, and shaded with colored pencil or pastels; they preserved positions, smoothed the backbone path, and incorporated small local shifts to disambiguate the visual appearance. As well as the triose isomerase ribbon drawing at the right, other hand-drawn examples depicted prealbumin, flavodoxin, and Cu,Zn superoxide dismutase. In 1982, Arthur M. Lesk and co-workers first enabled the automatic generation of ribbon diagrams through a computational implementation that uses Protein Data Bank files as input. This conceptually simple algorithm fit cubic polynomial B-spline curves to the peptide planes. Most modern graphics systems provide either B-splines or Hermite splines as a basic drawing primitive. One type of spline implementation passes through each Cα guide point, producing an exact but choppy curve. Both hand-drawn and most computer ribbons (such as those shown here) are smoothed over about four successive guide points (usually the peptide midpoint) to produce a more visually pleasing and understandable representation. To give the right radius for helical spirals while preserving smooth β-strands, the splines can be modified by offsets proportional to local curvature, as first developed by Mike Carson for his Ribbons program and later adopted by other molecular graphics software, such as the open-source Mage program for kinemage graphics that produced the ribbon image at top right (other examples: 1XK8 trimer and DNA polymerase). Since their inception, and continuing in the present, ribbon diagrams have been the single most common representation of protein structure and a common choice of cover image for a journal or textbook. Current computer programs One popular program used for drawing ribbon diagrams is Molscript. 
Molscript utilizes Hermite splines to create coordinates for coils, turns, strands, and helices. The curve passes through all its control points (Cα atoms) guided by direction vectors. The program was built based on traditional molecular graphics by Arthur M. Lesk, Karl Hardman, and John Priestle. Jmol is an open-source Java-based viewer for browsing molecular structures on the web; it includes a simplified "cartoon" version of ribbons. Other graphics programs such as DeepView (example: urease) and MolMol (example: SH2 domain) also produce ribbon images. KiNG is the Java-based successor to Mage (examples: α-hemolysin top view and side view). UCSF Chimera is a powerful molecular modeling program that also includes visualizations such as ribbons, notable especially for the ability to combine them with contoured shapes from cryo-electron microscopy data. PyMOL, by Warren DeLano, is a popular and flexible molecular graphics program (based on Python) that operates in interactive mode and also produces presentation-quality 2D images for ribbon diagrams and many other representations. Features See also Molecular graphics References Protein structure Scientific simulation software
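As a rough illustration of the guide-point smoothing described above (ribbon paths are drawn through coordinates derived from successive Cα positions, smoothed over a few neighbouring guide points), the sketch below averages each Cα position with its neighbours. It is a simplified stand-in for illustration only: real programs such as Molscript or Ribbons fit B-splines or Hermite splines rather than a moving average, and the names used here are mine.

    # Toy smoothing of C-alpha guide points before drawing a ribbon path.

    def smooth_guide_points(ca_coords, half_window=2):
        """Average each C-alpha position with nearby ones to get ribbon guide points."""
        smoothed = []
        n = len(ca_coords)
        for i in range(n):
            lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
            window = ca_coords[lo:hi]
            smoothed.append(tuple(sum(p[k] for p in window) / len(window) for k in range(3)))
        return smoothed

    # Toy backbone: a few C-alpha positions (x, y, z) in angstroms.
    backbone = [(0.0, 0.0, 0.0), (1.5, 1.0, 0.2), (3.0, 0.8, 0.5),
                (4.4, 0.1, 0.9), (5.9, 0.3, 1.4)]
    for p in smooth_guide_points(backbone):
        print(tuple(round(c, 2) for c in p))
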
Ribbon diagram
Chemistry
933
14,167,556
https://en.wikipedia.org/wiki/University%20of%20California%2C%20Riverside%20Insectary%20and%20Quarantine%20Facilities
The UCR Insectary and Quarantine Facilities are where foreign insect and mite predators and parasites are confined and screened before propagation and release in California and the United States. This complex of facilities was first established in 1923 as part of the UC Citrus Experiment Station, and is currently managed by the University of California, Riverside Department of Entomology. The complex supports integrated pest management and biological pest control research, and includes the Insectary, Quarantine Facility, Insect Preparation Facility, eight specialized greenhouses, a lathhouse, and storage. The Quarantine Facility is one of 14 approved biological control quarantine facilities in the U.S. and the oldest non-federal facility in the nation. References Nature conservation in the United States Pest control Biological pest control Quarantine facilities in the United States 1923 establishments in California
University of California, Riverside Insectary and Quarantine Facilities
Biology
172
51,506,398
https://en.wikipedia.org/wiki/Cost-sharing%20mechanism
In economics and mechanism design, a cost-sharing mechanism is a process by which several agents decide on the scope of a public product or service, and how much each agent should pay for it. Cost-sharing is easy when the marginal cost is constant: in this case, each agent who wants the service just pays its marginal cost. Cost-sharing becomes more interesting when the marginal cost is not constant. With increasing marginal costs, the agents impose a negative externality on each other; with decreasing marginal costs, the agents impose a positive externality on each other (see example below). The goal of a cost-sharing mechanism is to divide this externality among the agents. There are various cost-sharing mechanisms, depending on the type of product/service and the type of cost-function. Divisible product, increasing marginal costs In this setting, several agents share a production technology. They have to decide how much to produce and how to share the cost of production. The technology has increasing marginal cost - the more is produced, the harder it becomes to produce more units (i.e., the cost is a convex function of the demand). An example cost-function is: $1 per unit for the first 10 units; $10 per unit for each additional unit. So if there are three agents whose demands are 3, 6 and 10, then the total cost is $100. Definitions A cost-sharing problem is defined by the following functions, where i is an agent and Q is a quantity of the product: Demand(i) = the amount that agent i wants to receive. Cost(Q) = the cost of producing Q units of the product. A solution to a cost-sharing problem is defined by a payment Pay(i) for every agent who is served, such that the total payment equals the total cost: Pay(1) + ... + Pay(n) = Cost(D), where D is the total demand: D = Demand(1) + ... + Demand(n). Several cost-sharing solutions have been proposed. Average cost-sharing In the literature on cost pricing of a regulated monopoly, it is common to assume that each agent should pay its average cost, i.e.: Pay(i) = Demand(i) · Cost(D) / D. In the above example, the payments are 15.8 (for demand 3), 31.6 (for demand 6) and 52.6 (for demand 10). This cost-sharing method has several advantages: It is not affected by manipulations in which two agents openly merge their demand into a single super-agent, or one agent openly splits its demand into two sub-agents. Indeed, it is the only method immune to such manipulations. It is not affected by manipulations in which two agents secretly transfer costs and products between each other. Each agent pays at least its stand-alone cost - the cost he would have paid without the existence of other agents. This is a measure of solidarity: no agent should make a profit from a negative externality. However, it has a disadvantage: An agent might pay more than its unanimous cost - the cost he would have paid if all other agents had the same demand. This is a measure of fairness: no agent should suffer too much from the negative externality. In the above example, the agent with demand 3 can claim that, if all other agents were as modest as he is, there would have been no negative externality and each agent would have paid only $1 per unit, so he should not have to pay more than this. Marginal cost-sharing In marginal cost-sharing, the payment of each agent depends on his demand and on the marginal cost in the current production-state: Pay(i) = Demand(i) · Cost'(D) − (D · Cost'(D) − Cost(D)) / n, where Cost'(D) is the marginal cost at the total demand D and n is the number of agents. In the above example, the payments are 0 (for demand 3), 30 (for demand 6) and 70 (for demand 10). 
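A short sketch may help make the two payment rules above concrete. The Python fragment below is illustrative only: the function names are not from any established library, and the marginal rule is written with an equal lump-sum rebate, an assumption chosen to be consistent with the example payments quoted above. It reproduces those payments for the demands 3, 6 and 10 under the example cost function.

    # Worked example from the text: demands 3, 6, 10; $1/unit for the first
    # 10 units, $10/unit for each additional unit (total cost $100).

    def cost(q):
        """Total cost of producing q units under the example cost function."""
        return min(q, 10) * 1 + max(q - 10, 0) * 10

    def average_cost_sharing(demands):
        D = sum(demands)
        return [d * cost(D) / D for d in demands]

    def marginal_cost_sharing(demands):
        D = sum(demands)
        marginal = cost(D + 1) - cost(D)              # cost of one more unit at D
        rebate = (D * marginal - cost(D)) / len(demands)
        return [d * marginal - rebate for d in demands]

    demands = [3, 6, 10]
    print([round(p, 1) for p in average_cost_sharing(demands)])   # [15.8, 31.6, 52.6]
    print([round(p, 1) for p in marginal_cost_sharing(demands)])  # [0.0, 30.0, 70.0]
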
Marginal cost-sharing guarantees that an agent pays at most its unanimous cost - the cost he would have paid if all other agents had the same demand. However, an agent might pay less than its stand-alone cost. In the above example, the agent with demand 3 pays nothing (in some cases it is even possible that an agent pays a negative value). Serial cost-sharing Serial cost-sharing can be described as the result of the following process. At time 0, all agents enter a room. The machine starts producing one unit per minute. The produced unit and its cost are divided equally among all agents in the room. Whenever an agent feels that his demand is satisfied, he exits the room. So, if the n agents are ordered in ascending order of demand (d1 ≤ d2 ≤ ...): Agent 1 (with the lowest demand) pays: Cost(n·d1)/n; Agent 2 pays: Cost(n·d1)/n plus [Cost(d1 + (n−1)·d2) − Cost(n·d1)]/(n−1); and so on. This method guarantees that each agent pays at least its stand-alone cost and at most its unanimous cost. However, it is not immune to splitting or merging of agents, or to transfer of input and output between agents. Hence, it makes sense only when such transfers are impossible (for example, with cable TV or telephone services). Binary service, decreasing marginal costs In this setting, there is a binary service - each agent is either served or is not served. The cost of the service is higher when more agents are served, but the marginal cost is smaller than when serving each agent individually (i.e., the cost is a submodular set function). As a typical example, consider two agents, Alice and George, who live near a water-source, with the following distances: Source-Alice: 8 km Source-George: 7 km Alice-George: 2 km Suppose that each kilometer of water-pipe costs $1000. We have the following options: Nobody is connected; the cost is 0. Only George is connected; the cost is $7000. Only Alice is connected; the cost is $8000. Both Alice and George are connected; the cost is $9000, since the pipe can go from Source to George and then to Alice. Note that it is much cheaper than the sum of the costs of George and Alice. The choice between these four options should depend on the valuations of the agents - how much each of them is willing to pay for being connected to the water-source. The goal is to find a truthful mechanism that will induce the agents to reveal their true willingness-to-pay. Definitions A cost-sharing problem is defined by the following functions, where i is an agent and S is a subset of agents: Value(i) = the amount that agent i is willing to pay in order to enjoy the service. Cost(S) = the cost of serving all and only the agents in S. E.g., in the above example Cost({Alice,George})=9000. A solution to a cost-sharing problem is defined by: A subset S of agents who should be served; A payment for every agent who is served. A solution can be characterized by: The budget surplus of a solution is the total payment minus the total cost: Surplus = (sum of the payments of the agents in S) − Cost(S). We would like to have budget balance, which means that the surplus should be exactly 0. The social welfare of a solution is the total utility minus the total cost: Welfare = (sum of Value(i) over the agents in S) − Cost(S). We would like to have efficiency, which means that the social welfare is maximized. It is impossible to attain truthfulness, budget-balance and efficiency simultaneously; therefore, there are two classes of truthful mechanisms: Tatonnement mechanisms - budget-balanced but not efficient A budget-balanced cost-sharing mechanism can be defined by a function Payment(i,S) - the payment that agent i has to pay when the subset of served agents is S. 
This function should satisfy the following two properties: budget-balance: the total payment by any subset equals the total cost of serving this subset: for every subset S, the sum of Payment(i,S) over the agents i in S equals Cost(S). So if a single agent is served, he must pay all his cost, but if two or more agents are served, each of them may pay less than his individual cost because of the submodularity. population monotonicity: the payment of an agent weakly increases when the subset of served agents shrinks: if S is contained in T, then Payment(i,S) ≥ Payment(i,T) for every agent i in S. For any such function, a cost-sharing problem with submodular costs can be solved by the following tatonnement process: (1) Initially, let S be the set of all agents. (2) Tell each agent i that he should pay Payment(i,S). (3) Each agent who is not willing to pay his price leaves S. (4) If any agent has left S, return to step 2; otherwise, finish and serve the agents that remain in S. Note that, by the population-monotonicity property, the price always increases when people leave S. Therefore, an agent will never want to return to S, so the mechanism is truthful (the process is similar to an English auction). In addition to truthfulness, the mechanism has the following merits: Group strategyproofness - no group of agents can gain by reporting untruthfully. No positive transfers - no agent is paid money in order to be served. Individual rationality - no agent loses value from participation (in particular, a non-served agent pays nothing and a served agent pays at most his valuation). Consumer sovereignty - every agent can choose to get service, if his willingness-to-pay is sufficiently large. Moreover, any mechanism satisfying budget-balance, no-positive-transfers, individual-rationality, consumer-sovereignty and group-strategyproofness can be derived in this way using an appropriate Payment function. The mechanism can select the Payment function in order to attain such goals as fairness or efficiency. When agents have equal a priori rights, some reasonable payment functions are: The Shapley value, e.g., for two agents, the payments when both agents are served are: Payment(Alice,Both) = [Cost(Both)+Cost(Alice)-Cost(George)]/2, Payment(George,Both) = [Cost(Both)+Cost(George)-Cost(Alice)]/2. The egalitarian solution, e.g. Payment(Alice,Both) = median[Cost(Alice), Cost(Both)/2, Cost(Both)-Cost(George)], Payment(George,Both) = median[Cost(George), Cost(Both)/2, Cost(Both)-Cost(Alice)]. When agents have different rights (e.g. some agents are more senior than others), it is possible to charge the most senior agent only his marginal cost, e.g. if George is more senior, then for every subset S which does not contain George: Payment(George,S+George) = Cost(S+George)−Cost(S). Similarly, the next-most-senior agent can pay his marginal remaining cost, and so on. The above cost-sharing mechanisms are not efficient - they do not always select the allocation with the highest social welfare. But, when the payment function is selected to be the Shapley value, the loss of welfare is minimized. VCG mechanisms - efficient but not budget-balanced A different class of cost-sharing mechanisms is the class of VCG mechanisms. A VCG mechanism always selects the socially-optimal allocation - the allocation that maximizes the total utility of the served agents minus the cost of serving them. Then, each agent receives the welfare of the other agents, and pays an amount that depends only on the valuations of the other agents. Moreover, all VCG mechanisms satisfy the consumer-sovereignty property. 
There is a single VCG mechanism which also satisfies the requirements of no-positive-transfers and individual-rationality - it is the Marginal Cost Pricing mechanism. This is a special VCG mechanism in which each non-served agent pays nothing, and each served agent i pays: Pay(i) = Value(i) − [Welfare(all agents) − Welfare(all agents except i)], where Welfare(S) denotes the maximum attainable social welfare when only the agents in S are considered. I.e., each agent pays his value, but gets back the welfare that is added by his presence. Thus, the interests of the agent are aligned with the interests of society (maximizing the social welfare) so the mechanism is truthful. The problem with this mechanism is that it is not budget-balanced - it runs a deficit. Consider the above water-pipe example, and suppose both Alice and George value the service at $10000. When only Alice is served, the welfare is 10000-8000=2000; when only George is served, the welfare is 10000-7000=3000; when both are served, the welfare is 10000+10000-9000=11000. Therefore, the Marginal Cost Pricing mechanism selects to serve both agents. George pays 10000-(11000-2000)=1000 and Alice pays 10000-(11000-3000)=2000. The total payment is only 3000, which is less than the total cost of 9000. Moreover, the VCG mechanism is not group-strategyproof: an agent can help other agents by raising his valuation, without harming himself. See also Carpool - an application of cost-sharing. Shapley value - a possible rule for cost-sharing. Public good Facility location (cooperative game) Surplus sharing References Mechanism design
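The water-pipe calculation above can be replayed in a few lines of code. The sketch below uses illustrative names and a brute-force search over subsets (fine for two agents, not an optimized implementation); it computes the Marginal Cost Pricing payments and shows the deficit.

    # Water-pipe example from the text: both agents value the service at $10000;
    # serving costs are $8000 (Alice only), $7000 (George only), $9000 (both).
    from itertools import combinations

    values = {"Alice": 10000, "George": 10000}
    cost = {frozenset(): 0,
            frozenset({"George"}): 7000,
            frozenset({"Alice"}): 8000,
            frozenset({"Alice", "George"}): 9000}

    def best_welfare(agents):
        """Maximum social welfare achievable using only the given agents."""
        best = 0
        for k in range(len(agents) + 1):
            for served in combinations(agents, k):
                s = frozenset(served)
                best = max(best, sum(values[a] for a in s) - cost[s])
        return best

    everyone = list(values)
    w_all = best_welfare(everyone)
    payments = {a: values[a] - (w_all - best_welfare([b for b in everyone if b != a]))
                for a in everyone}
    print(payments)                   # {'Alice': 2000, 'George': 1000}
    print(sum(payments.values()))     # 3000 -- a deficit against the $9000 cost
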
Cost-sharing mechanism
Mathematics
2,642
173,457
https://en.wikipedia.org/wiki/9
9 (nine) is the natural number following and preceding . Evolution of the Hindu–Arabic digit Circa 300 BC, as part of the Brahmi numerals, various Indians wrote a digit 9 similar in shape to the modern closing question mark without the bottom dot. The Kshatrapa, Andhra and Gupta started curving the bottom vertical line coming up with a -look-alike. How the numbers got to their Gupta form is open to considerable debate. The Nagari continued the bottom stroke to make a circle and enclose the 3-look-alike, in much the same way that the sign @ encircles a lowercase a. As time went on, the enclosing circle became bigger and its line continued beyond the circle downwards, as the 3-look-alike became smaller. Soon, all that was left of the 3-look-alike was a squiggle. The Arabs simply connected that squiggle to the downward stroke at the middle and subsequent European change was purely cosmetic. While the shape of the glyph for the digit 9 has an ascender in most modern typefaces, in typefaces with text figures the character usually has a descender, as, for example, in . The form of the number nine (9) could possibly derived from the Arabic letter waw, in which its isolated form (و) resembles the number 9. The modern digit resembles an inverted 6. To disambiguate the two on objects and labels that can be inverted, they are often underlined. It is sometimes handwritten with two strokes and a straight stem, resembling a raised lower-case letter q, which distinguishes it from the 6. Similarly, in seven-segment display, the number 9 can be constructed either with a hook at the end of its stem or without one. Most LCD calculators use the former, but some VFD models use the latter. Mathematics 9 is the fourth composite number, and the first odd composite number. 9 is also a refactorable number. Casting out nines is a quick way of testing the calculations of sums, differences, products, and quotients of integers in decimal, a method known as long ago as the 12th century. If an odd perfect number exists, it will have at least nine distinct prime factors. 9 is the sum of the cubes of the first two non-zero positive integers which makes it the first cube-sum number greater than one. A number that is 4 or 5 modulo 9 cannot be represented as the sum of three cubes. There are nine Heegner numbers, or square-free positive integers that yield an imaginary quadratic field whose ring of integers has a unique factorization, or class number of 1. Geometry A polygon with nine sides is called a nonagon. A regular nonagon can be constructed with a regular compass, straightedge, and angle trisector. The lowest number of squares needed for a perfect tiling of a rectangle is 9. 9 is the largest single-digit number in the decimal system. List of basic calculations Culture and mythology Indian culture Nine is a number that appears often in Indian culture and mythology. For example, there are nine influencers attested to in Indian astrology. In the Vaisheshika branch of Hindu philosophy, there are nine universal substances or elements: Earth, Water, Air, Fire, Ether, Time, Space, Soul, and Mind. And Navaratri is a nine-day festival dedicated to the nine forms of Durga. Chinese culture Nine (; ) is considered a good number in Chinese culture because it sounds the same as the word "long-lasting" (; ). Nine is strongly associated with the Chinese dragon, a symbol of magic and power. There are nine forms of the dragon, it is described in terms of nine attributes, and it has nine children. 
It has 117 scales – 81 yang (masculine, heavenly) and 36 yin (feminine, earthly). All three numbers are multiples of 9 (117 = 9 × 13, 81 = 9 × 9, 36 = 9 × 4). Anthropology Idioms "To go the whole nine yards" "A cat has nine lives" "To be on cloud nine" The word "K-9" is pronounced the same as canine and is used in many US police departments to denote the police dog unit. Despite not sounding like the translation of the word canine in other languages, many police and military units around the world use the same designation. Someone dressed "to the nines" is dressed up as much as they can be. In North American urban culture, "nine" is a slang word for a 9mm pistol or homicide, the latter from the Illinois Criminal Code designation for homicide. Religion and philosophy Nine, as the largest single-digit number (in base ten), symbolizes completeness in the Baháʼí Faith. In addition, the word Baháʼ in the Abjad notation has a value of 9, and a 9-pointed star is used to symbolize the religion. The number 9 is revered in Hinduism and considered a complete, perfected and divine number because it represents the end of a cycle in the decimal system, which originated from the Indian subcontinent as early as 3000 BC. In Norse mythology, the number nine is associated with Odin, as that is how many days he hung from the world tree Yggdrasil before attaining knowledge of the runes. Nine is the number associated with Satan in LaVeyan Satanism. Anton LaVey wrote in The Satanic Rituals that this is because nine is the number of the ego since it "always returns to itself" even after being multiplied by any number. Science Chemistry The purity of chemicals (see Nine (purity)). Physiology A human pregnancy normally lasts nine months, the basis of Naegele's rule. Psychology Common terminal digit in psychological pricing. See also 9 (disambiguation) 0.999... Cloud Nine References Further reading Cecil Balmond, "Number 9, the search for the sigma code", 1998; Prestel, 2008 Integers 9 (number) Superstitions about numbers
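The casting-out-nines check mentioned in the Mathematics section above can be sketched in a few lines of code; the function names below are illustrative only, and the check is necessary but not sufficient (it can miss errors that happen to preserve the remainder modulo 9).

    # Casting out nines: a number and its digit sum leave the same remainder
    # mod 9, so remainders give a quick sanity check on sums and products.

    def mod9(n):
        return n % 9

    def check_product(a, b, claimed):
        """Return False if the claimed product is certainly wrong."""
        return mod9(a) * mod9(b) % 9 == mod9(claimed)

    print(check_product(347, 28, 9716))   # True  (347 * 28 = 9716)
    print(check_product(347, 28, 9726))   # False (the mod-9 check fails)
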
9
Mathematics
1,238
65,818,072
https://en.wikipedia.org/wiki/NGC%207513
NGC 7513 is a barred spiral galaxy located in the constellation Sculptor. It is located at a distance of circa 62.5 million light years from Earth, which, given its apparent dimensions, means that NGC 7513 is about 75,000 light years across. It was discovered by Albert Marth on September 24, 1864. A large star cluster has been found in the nucleus, with an estimated mass of 107.0 . There is circumnuclear dust distributed irregularly. NGC 7513 is a member of the NGC 7507 galaxy group, named after NGC 7507, along with some smaller galaxies. NGC 7507 is an elliptical galaxy lying at a projected distance of 18 arcminutes. References External links NGC 7513 on SIMBAD Barred spiral galaxies Ring galaxies Peculiar galaxies Sculptor (constellation) 7513 70714 Discoveries by John Herschel Astronomical objects discovered in 1836
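The size estimate quoted above follows from the small-angle relation between distance and apparent (angular) diameter. The excerpt does not state the apparent diameter, so the value used in the sketch below (about 4.1 arcminutes) is purely an assumption, chosen only so that the arithmetic reproduces the figures quoted above.

    # Rough check of the "about 75,000 light years across" figure using the
    # small-angle relation: physical size ~ distance * angular size (radians).
    import math

    distance_ly = 62.5e6              # distance quoted in the article
    apparent_diameter_arcmin = 4.1    # assumed value, not stated in the article

    size_ly = distance_ly * math.radians(apparent_diameter_arcmin / 60)
    print(f"{size_ly:,.0f} light years")   # roughly 75,000 light years
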
NGC 7513
Astronomy
180
37,928,741
https://en.wikipedia.org/wiki/Video%20interaction%20guidance
Video interaction guidance (VIG) is a video feedback intervention through which a “guider” helps a client to enhance communication within relationships. The client is guided to analyse and reflect on video clips of their own interactions. Applications include a caregiver and infant (often used in attachment-based therapy), and other education and care home interactions. VIG is used in more than 15 countries and by at least 4000 practitioners. Video Interaction Guidance has been used where concerns have been expressed over possible parental neglect in cases where the focus child is aged 2–12, and where the child is not the subject of a child protection plan. History Colwyn Trevarthen, a Professor at Edinburgh University, studied successful interactions between infants and their primary care givers, and found that the mother's responsiveness to her baby's initiatives supported and developed intersubjectivity (shared understanding), which he regarded as the basis of all effective communication, interaction and learning. In the 1980s Harry Biemans, in the Netherlands, applied this research using video clips, creating VIG. Research results Research results include that VIG enhances positive parenting skills, decreases/alleviates parental stress, increases parenting enjoyment, improves parental attitudes to parenting, and is related to more positive development of the children, although the effect at child-level is reduced in high-risk families. One study found an increase in sensitivity of mothers but no impact on infant attachment. VIG has also been found to increase the child sensitivity of teachers. The limitations of the experimental studies undertaken so far, such as their small number of subjects, are acknowledged, and more research is needed. Research linking VIG use to better subsequent long-term mental health of the child has not been published, but parenting is a causal risk factor for mental illness, and some mental health NGOs are pursuing programmes on the expectation of a positive link. Video Interaction Guidance has been used where concerns have been expressed over possible parental neglect in cases where the focus child is aged 2–12, and where the child is not the subject of a child protection plan. An evaluation of the project demonstrated that VIG produced a significant change in the emotional and behavioural difficulties of the population of children who received the service, and an improvement in the reported level of parenting and the reported parental relationship with their children in the population of parents whose children received the service. The data exclude parents who failed to complete the programme, parents who completed the programme but decided not to complete evaluation measures, and, on some measures, parents who completed measures but whose feedback was adjudged to have been positively biased. Parents also reported developing a better understanding of the following aspects of good parenting:
Giving each of their children one-to-one time.
Giving children space to make choices and develop skills.
Listening to children and not interrupting.
Making eye contact when talking to children.
Taking children out to parks and finding activities for them to do.
The importance of good relationships between separated parents.
Theories of effectiveness Theories of why VIG is effective include that the use of video clips enables a shared space to be created, where positive sensitivity and attunement moments can be seen. 
This allows clients to improve their relationship attunement skills, by developing their ability to mentalise about their own and their infants' mental states, and by encouraging mind-minded interactions. (Trevarthen focuses particularly on how babies seek companionship, rather than using the term attachment, and has said "I think the ideal companion... is a familiar person who really treats the baby with playful human respect.") Understanding the mechanisms through which Video Interaction Guidance works Qualitative research studies have also illuminated some of the ways in which Video Interaction Guidance can help individual parents. Social learning theory in action Evaluations have demonstrated that in certain cases parents have learned to improve their parenting in the way described by social learning theory. Social learning theory suggests people learn by observing positive desired outcomes resulting from the observed behaviour. Parents with several children, who traditionally spent all their time with the children together as a group, started spending one-to-one time with individual children after Video Interaction Guidance required them, for the first time, to do one-on-one activities with a particular child. Some parents started to do activities with their children which involved a small element of risk, after having agreed to do them for the first time as part of Video Interaction Guidance. Similar findings are reported in an evaluation of the Triple P intervention. The importance of the relationship between the practitioner and the parent A principal factor which influences parents' engagement and perception is the quality of the relationship that they are able to build up with the practitioner delivering the programme. Key factors in helping practitioners engage parents in the intervention include: Establishing a sense that the practitioner will support the family beyond what is necessary to complete the intervention. Giving family members time to talk about their problems both during and outside appointments. Advocating for the family on issues with which the intervention is not directly concerned. Ensuring that fun forms a part of the interaction. Making family members feel cared for through the provision of clothes, food and gifts. Giving parents a lead in analysing family functioning and parenting. Carrying out the intervention in the home of the parent. Practitioners working on weekday evenings. In the case of Video Interaction Guidance, when parents were asked about their experience of the intervention, parents invariably referred to the care and support provided by the practitioner. Effectively the intervention is experienced as an aspect of the overall relationship of care. Recommendations and use VIG is recommended in the UK by NICE (the National Institute for Health and Clinical Excellence) and is one of two interventions recommended by the NSPCC to improve parenting. It is also recommended for health visitors. The European Union DataPrev database also recommends VIG. VIG is used by the NHS and other health service providers. In 2014 the UK NGO Mental Health Foundation and partners began to use VIG in an early years intervention to prevent mental illness in later life. Training AVIGuk, a UK 'association of supervisors', manages 18-month training programmes in the UK. Most research results have involved guiders who have undertaken such training. 
In the United States, CVIG-USA, The Center for Video Interaction Guidance USA, the national training institute for VIG, trains agency staff and supervisors in applying the model for parent education, family support and therapy, staff training and development and leadership development. Criticisms VIG has been criticised for only focusing on positive factors, but this criticism has not been substantiated in terms of making VIG ineffective. The length and cost of the VIG training that AVIGuk provides has been criticised, on the grounds that this limits scalability and prevents wider use of VIG. This is shown in the emergence of similar video feedback interventions with much shorter training, such as Video Enhanced Reflective Practice (VERP), a particular application of VIG, and Video-feedback Intervention to Promote Positive Parenting and Sensitive Discipline (VIPP-SD), and other 'introductory' VIG courses. See also Attachment-based therapy Attachment theory Attachment measures Attachment in children Child psychotherapy Mental health Colwyn Trevarthen Mental illness References Parenting Infancy Interpersonal relationships Relationship counseling
Video interaction guidance
Biology
1,472
10,094,198
https://en.wikipedia.org/wiki/Hardy%E2%80%93Littlewood%20maximal%20function
In mathematics, the Hardy–Littlewood maximal operator M is a significant non-linear operator used in real analysis and harmonic analysis. Definition The operator takes a locally integrable function f : Rd → C and returns another function Mf. For any point x ∈ Rd, the function Mf returns the supremum of a set of reals, namely the set of average values of |f| over all the balls B(x, r) of any radius r centered at x. Formally, Mf(x) = sup_{r > 0} (1/|B(x, r)|) ∫_{B(x, r)} |f(y)| dy, where |E| denotes the d-dimensional Lebesgue measure of a subset E ⊂ Rd. The averages are jointly continuous in x and r, so the maximal function Mf, being the supremum over r > 0, is measurable. It is not obvious that Mf is finite almost everywhere. This is a corollary of the Hardy–Littlewood maximal inequality. Hardy–Littlewood maximal inequality This theorem of G. H. Hardy and J. E. Littlewood states that M is bounded as a sublinear operator from Lp(Rd) to itself for p > 1. That is, if f ∈ L1(Rd) then Mf satisfies a weak-type L1 bound, and if f ∈ Lp(Rd) with p > 1 then Mf ∈ Lp(Rd). Before stating the theorem more precisely, for simplicity, let {f > t} denote the set {x | f(x) > t}. Now we have: Theorem (Weak Type Estimate). For d ≥ 1, there is a constant Cd > 0 such that for all λ > 0 and f ∈ L1(Rd), we have: |{Mf > λ}| ≤ (Cd/λ) ‖f‖_{L1(Rd)}. With the Hardy–Littlewood maximal inequality in hand, the following strong-type estimate is an immediate consequence of the Marcinkiewicz interpolation theorem: Theorem (Strong Type Estimate). For d ≥ 1, 1 < p ≤ ∞, and f ∈ Lp(Rd), there is a constant Cp,d > 0 such that ‖Mf‖_{Lp(Rd)} ≤ Cp,d ‖f‖_{Lp(Rd)}. In the strong type estimate the best bounds for Cp,d are unknown. However subsequently Elias M. Stein used the Calderón-Zygmund method of rotations to prove the following: Theorem (Dimension Independence). For 1 < p ≤ ∞ one can pick Cp,d = Cp independent of d. Proof While there are several proofs of this theorem, a common one is given below: For p = ∞, the inequality is trivial (since the average of a function is no larger than its essential supremum). For 1 ≤ p < ∞, first we shall use the following version of the Vitali covering lemma to prove the weak-type estimate. (See the article for the proof of the lemma.) Lemma. Let X be a separable metric space and F a family of open balls with bounded diameter. Then F has a countable subfamily F′ consisting of disjoint balls such that the union of the balls in F is contained in the union of the balls 5B with B in F′, where 5B is B with 5 times the radius. For every x such that Mf(x) > t, by definition, we can find a ball Bx centered at x such that (1/|Bx|) ∫_{Bx} |f(y)| dy > t, i.e. |Bx| < (1/t) ∫_{Bx} |f(y)| dy. Thus {Mf > t} is a subset of the union of such balls, as x varies in {Mf > t}. This is trivial since x is contained in Bx. By the lemma, we can find, among such balls, a sequence of disjoint balls Bj such that the union of 5Bj covers {Mf > t}. It follows: |{Mf > t}| ≤ Σj |5Bj| = 5^d Σj |Bj| ≤ (5^d/t) ∫ |f(y)| dy. This completes the proof of the weak-type estimate. The Lp bounds for p > 1 can be deduced from the weak bound by the Marcinkiewicz interpolation theorem. Here is how the argument goes in this particular case. Define the function g by g(x) = f(x) if |f(x)| > t/2 and g(x) = 0 otherwise. We have then |f| ≤ |g| + t/2 and, by the definition of maximal function, Mf ≤ Mg + t/2, so that {Mf > t} ⊆ {Mg > t/2}. By the weak-type estimate applied to g, we have: |{Mf > t}| ≤ |{Mg > t/2}| ≤ (2Cd/t) ∫_{|f| > t/2} |f(y)| dy. Then ‖Mf‖_{Lp}^p = p ∫_0^∞ t^{p−1} |{Mf > t}| dt. By the estimate above we have: ‖Mf‖_{Lp}^p ≤ p ∫_0^∞ t^{p−1} (2Cd/t) (∫_{|f| > t/2} |f(y)| dy) dt, and exchanging the order of integration bounds the right-hand side by (2^p p Cd/(p−1)) ‖f‖_{Lp}^p. This completes the proof of the theorem. Note that the constant Cd = 5^d in the proof can be improved to 3^d by using the inner regularity of the Lebesgue measure, and the finite version of the Vitali covering lemma. See the Discussion section below for more about optimizing the constant. Applications Some applications of the Hardy–Littlewood Maximal Inequality include proving the following results: Lebesgue differentiation theorem Rademacher differentiation theorem Fatou's theorem on nontangential convergence. 
Fractional integration theorem Here we use a standard trick involving the maximal function to give a quick proof of Lebesgue differentiation theorem. (But remember that in the proof of the maximal theorem, we used the Vitali covering lemma.) Let f ∈ L1(Rn) and where We write f = h + g where h is continuous and has compact support and g ∈ L1(Rn) with norm that can be made arbitrary small. Then by continuity. Now, Ωg ≤ 2Mg and so, by the theorem, we have: Now, we can let and conclude Ωf = 0 almost everywhere; that is, exists for almost all x. It remains to show the limit actually equals f(x). But this is easy: it is known that (approximation of the identity) and thus there is a subsequence almost everywhere. By the uniqueness of limit, fr → f almost everywhere then. Discussion It is still unknown what the smallest constants Cp,d and Cd are in the above inequalities. However, a result of Elias Stein about spherical maximal functions can be used to show that, for 1 < p < ∞, we can remove the dependence of Cp,d on the dimension, that is, Cp,d = Cp for some constant Cp > 0 only depending on p. It is unknown whether there is a weak bound that is independent of dimension. There are several common variants of the Hardy-Littlewood maximal operator which replace the averages over centered balls with averages over different families of sets. For instance, one can define the uncentered HL maximal operator (using the notation of Stein-Shakarchi) where the balls Bx are required to merely contain x, rather than be centered at x. There is also the dyadic HL maximal operator where Qx ranges over all dyadic cubes containing the point x. Both of these operators satisfy the HL maximal inequality. See also Rising sun lemma References John B. Garnett, Bounded Analytic Functions. Springer-Verlag, 2006 G. H. Hardy and J. E. Littlewood. A maximal theorem with function-theoretic applications. Acta Math. 54, 81–116 (1930). Antonios D. Melas, The best constant for the centered Hardy–Littlewood maximal inequality, Annals of Mathematics, 157 (2003), 647–688 Rami Shakarchi & Elias M. Stein, Princeton Lectures in Analysis III: Real Analysis. Princeton University Press, 2005 Elias M. Stein, Maximal functions: spherical means, Proc. Natl. Acad. Sci. U.S.A. 73 (1976), 2174–2175 Elias M. Stein, Singular Integrals and Differentiability Properties of Functions. Princeton University Press, 1971 Gerald Teschl, Topics in Real and Functional Analysis (lecture notes) Real analysis Harmonic analysis Types of functions
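A rough numerical illustration of the definition above may also help. The sketch below approximates a one-dimensional maximal function on a finite uniform grid, using symmetric windows of sample points as discrete stand-ins for the balls B(x, r); it is a toy approximation for intuition only, not a statement about the continuous operator, and the names are mine.

    # Discretized 1-D Hardy-Littlewood maximal function on a uniform grid.

    def maximal_function(samples):
        """For each grid point, take the largest average of |f| over symmetric
        windows of increasing radius (finite stand-ins for the balls B(x, r))."""
        n = len(samples)
        out = []
        for i in range(n):
            best = abs(samples[i])          # stands in for the limit of tiny radii
            for r in range(1, n):
                lo, hi = max(0, i - r), min(n, i + r + 1)
                avg = sum(abs(v) for v in samples[lo:hi]) / (hi - lo)
                best = max(best, avg)
            out.append(best)
        return out

    f = [0, 0, 3, 0, 0, 0]                  # a bump concentrated at one point
    print([round(v, 2) for v in maximal_function(f)])
    # The maximal function spreads the bump: every point sees a positive average.
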
Hardy–Littlewood maximal function
Mathematics
1,475
20,014,458
https://en.wikipedia.org/wiki/Jerzy%20Browkin
Jerzy Browkin (5 November 1934 – 23 November 2015) was a Polish mathematician, studying mainly algebraic number theory. He was a professor at the Institute of Mathematics of the Polish Academy of Sciences. In 1994, together with Juliusz Brzeziński, he formulated the n-conjecture—a version of the abc conjecture involving n > 2 integers. References 1934 births 2015 deaths 20th-century Polish mathematicians 21st-century Polish mathematicians Number theorists Abc conjecture
Jerzy Browkin
Mathematics
92
43,580,500
https://en.wikipedia.org/wiki/Glossary%20of%20set%20theory
This is a glossary of terms and definitions related to the topic of set theory. Greek !$@ A B C D E F G H I See proper, below. J K L M N O P Q R S References T U V W XYZ See also Glossary of Principia Mathematica List of topics in set theory Set-builder notation References Set theory Set theory Wikipedia glossaries using description lists
Glossary of set theory
Mathematics
91
15,154
https://en.wikipedia.org/wiki/IBM%203270
The IBM 3270 is a family of block oriented display and printer computer terminals introduced by IBM in 1971 and normally used to communicate with IBM mainframes. The 3270 was the successor to the IBM 2260 display terminal. Due to the text color on the original models, these terminals are informally known as green screen terminals. Unlike a character-oriented terminal, the 3270 minimizes the number of I/O interrupts required by transferring large blocks of data known as data streams, and uses a high speed proprietary communications interface, using coaxial cable. IBM no longer manufactures 3270 terminals, but the IBM 3270 protocol is still commonly used via TN3270 clients, 3270 terminal emulation or web interfaces to access mainframe-based applications, which are sometimes referred to as green screen applications. Principles The 3270 series was designed to connect with mainframe computers, often at a remote location, using the technology then available in the early 1970s. The main goal of the system was to maximize the number of terminals that could be used on a single mainframe. To do this, the 3270 was designed to minimize the amount of data transmitted, and minimize the frequency of interrupts to the mainframe. By ensuring the CPU is not interrupted at every keystroke, a 1970s-era IBM 3033 mainframe fitted with only 16 MB of main memory was able to support up to 17,500 3270 terminals under CICS. Most 3270 devices are clustered, with one or more displays or printers connected to a control unit (the 3275 and 3276 included an integrated control unit). Originally devices were connected to the control unit over coaxial cable; later Token Ring, twisted pair, or Ethernet connections were available. A local control unit attaches directly to the channel of a nearby mainframe. A remote control unit is connected to a communications line by a modem. Remote 3270 controllers are frequently multi-dropped, with multiple control units on a line. IBM 3270 devices are connected to a 3299 multiplexer or to the cluster controller, e.g., 3271, 3272, 3274, 3174, using 93ohm RG-62 coaxial cables in a point-to-point configuration with one dedicated cable per terminal. Data is sent with a bit rate of 2.3587 Mbit/s using a slightly modified differential Manchester encoding. Cable runs of up to are supported, although IBM documents routinely stated the maximum supported coax cable length was . Originally devices were equipped with BNC connectors, which were later replaced with special dual-purpose connectors (DPCs) supporting the IBM shielded twisted-pair cabling system without the need for red baluns. In a data stream, both text and control (or formatting functions) are interspersed allowing an entire screen to be painted as a single output operation. The concept of formatting in these devices allows the screen to be divided into fields (clusters of contiguous character cells) for which numerous field attributes, e.g., color, highlighting, character set, and protection from modification, can be set. A field attribute occupies a physical location on the screen that also determines the beginning and end of a field. There are also character attributes associated with individual screen locations. Using a technique known as read modified, a single transmission back to the mainframe can contain the changes from any number of formatted fields that have been modified, but without sending any unmodified fields or static data. This technique enhances the terminal throughput of the CPU, and minimizes the data transmitted. 
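A toy model of the read-modified idea just described may make it concrete: the terminal keeps a modified-data tag (MDT) per field and, on a read-modified operation, returns only the fields the operator actually changed. The sketch below is purely conceptual — it does not reproduce the real 3270 data-stream encoding, and all names are illustrative.

    # Conceptual sketch of "read modified": only fields with the MDT set are
    # sent back to the host, not the whole screen.

    class Field:
        def __init__(self, address, text, protected=False):
            self.address = address      # buffer address of the field
            self.text = text
            self.protected = protected
            self.mdt = False            # modified-data tag

        def type_into(self, new_text):
            if not self.protected:
                self.text = new_text
                self.mdt = True         # mark the field as modified

    def read_modified(fields):
        """Return (address, text) pairs only for fields whose MDT is set."""
        return [(f.address, f.text) for f in fields if f.mdt]

    screen = [Field(0x040, "NAME:", protected=True),
              Field(0x046, ""),
              Field(0x0C0, "DEPT:", protected=True),
              Field(0x0C6, "")]
    screen[1].type_into("SMITH")
    print(read_modified(screen))        # only the changed field is transmitted
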
Some users familiar with character interrupt-driven terminal interfaces find this technique unusual. There is also a read buffer capability that transfers the entire content of the 3270-screen buffer including field attributes. This is mainly used for debugging purposes to preserve the application program screen contents while replacing it, temporarily, with debugging information. Early 3270s offered three types of keyboards. The typewriter keyboard came in both a 66 key version, with no programmed function (PF) keys, and a 78 key version with twelve. Both versions had two Program Attention (PA) keys. The data entry keyboard had five PF keys and two PA keys. The operator console keyboard had twelve PF keys and two PA keys. Later 3270s had an Attention key, a Cursor Select key, a System Request key, twenty-four PF keys and three PA keys. There was also a TEST REQ key. When one of these keys is pressed, it will cause its control unit to generate an I/O interrupt to the host computer and present an Attention ID (AID) identifying which key was pressed. Application program functions such as termination, page-up, page-down, or help can be invoked by a single key press, thereby reducing the load on very busy processors. A downside to this approach was that vi-like behavior, responding to individual keystrokes, was not possible. For the same reason, a port of Lotus 1-2-3 to mainframes with 3279 screens did not meet with success because its programmers were not able to properly adapt the spreadsheet's user interface to a screen at a time rather than character at a time device. But end-user responsiveness was arguably more predictable with 3270, something users appreciated. Applications Following its introduction the 3270 and compatibles were by far the most commonly used terminals on IBM System/370 and successor systems. IBM and third-party software that included an interactive component took for granted the presence of 3270 terminals and provided a set of ISPF panels and supporting programs. Conversational Monitor System (CMS) in VM has support for the 3270 continuing to z/VM. Time Sharing Option (TSO) in OS/360 and successors has line mode command line support and also has facilities for full screen applications, e.g., ISPF. Device Independent Display Operator Console Support (DIDOCS) in Multiple Console Support (MCS) for OS/360 and successors supports 3270 devices and, in fact, MCS in current versions of MVS no longer supports line mode, 2250 and 2260 devices. The SPF and Program Development Facility (ISPF/PDF) editors for MVS and VM/SP (ISPF/PDF was available for VM, but little used) and the XEDIT editors for VM/SP through z/VM make extensive use of 3270 features. Customer Information Control System (CICS) has support for 3270 panels. Indeed, from the early 1970s on, CICS applications were often written for the 3270. Various versions of Wylbur have support for 3270, including support for full-screen applications. McGill University's MUSIC/SP operating system provided support for 3270 terminals and applications, including a full-screen text editor, a menu system, and a PANEL facility to create 3270 full-screen applications. The modified data tag is well suited to converting formatted, structured punched card input onto the 3270 display device. With the appropriate programming, any batch program that uses formatted, structured card input can be layered onto a 3270 terminal. 
IBM's OfficeVision office productivity software enjoyed great success with 3270 interaction because of its design understanding. And for many years the PROFS calendar was the most commonly displayed screen on office terminals around the world. A version of the WordPerfect word processor ported to System/370 was designed for the 3270 architecture. SNA 3270 devices can be a part of an SNA – System Network Architecture network or non-SNA network. If the controllers are SNA connected, they appear to SNA as PU – Physical Unit type 2.0 (PU2.1 for APPN) nodes typically with LU – Logical Unit type 1, 2, and 3 devices connected. Local, channel attached, controllers are controlled by VTAM – Virtual Telecommunications Access Method. Remote controllers are controlled by the NCP – Network Control Program in the Front End Processor i.e. 3705, 3720, 3725, 3745, and VTAM. Third parties One of the first groups to write and provide operating system support for the 3270 and its early predecessors was the University of Michigan, who created the Michigan Terminal System in order for the hardware to be useful outside of the manufacturer. MTS was the default OS at Michigan for many years, and was still used at Michigan well into the 1990s. Many manufacturers, such as GTE, Hewlett-Packard, Honeywell/Incoterm Div, Memorex, ITT Courier, McData, Harris, Alfaskop and Teletype/AT&T created 3270 compatible terminals, or adapted ASCII terminals such as the HP 2640 series to have a similar block-mode capability that would transmit a screen at a time, with some form validation capability. The industry distinguished between 'System compatible controllers' and 'Plug compatibility controllers', where 'System compatibility' meant that the 3rd party system was compatible with the 3270 data stream terminated in the unit, but not as 'Plug compatibility' equipment, also were compatible at the coax level thereby allowing IBM terminals to be connected to a 3rd party controller or vice versa. Modern applications are sometimes built upon legacy 3270 applications, using software utilities to capture (screen scraping) screens and transfer the data to web pages or GUI interfaces. In the early 1990s a popular solution to link PCs with the mainframes was the Irma board, an expansion card that plugged into a PC and connected to the controller through a coaxial cable. 3270 simulators for IRMA and similar adapters typically provide file transfers between the PC and the mainframe using the same protocol as the IBM 3270 PC. Models The IBM 3270 display terminal subsystem consists of displays, printers and controllers. Optional features for the 3275 and 3277 are the selector-pen, ASCII rather than EBCDIC character set, an audible alarm, and a keylock for the keyboard. A keyboard numeric lock was available and will lock the keyboard if the operator attempts to enter non-numeric data into a field defined as numeric. Later an Operator Identification Card Reader was added which could read information encoded on a magnetic stripe card. Displays Generally, 3277 models allow only upper-case input, except for the mixed EBCDIC/APL or text keyboards, which have lower case. Lower-case capability and dead keys were available as an RPQ (Request Price Quotation); these were added to the later 3278 & 3279 models. A version of the IBM PC called the 3270 PC, released in October 1983, includes 3270 terminal emulation. 
Later, the 3270 PC/G (graphics), 3270 PC/GX (extended graphics), 3270 Personal Computer AT, 3270 PC AT/G (graphics) and 3270 PC AT/GX (extended graphics) followed. CUT vs. DFT There are two types of 3270 displays with respect to where the 3270 data stream terminates. For CUT (Control Unit Terminal) displays, the stream terminates in the display controller; the controller instructs the display to move the cursor, position a character, and so on. EBCDIC is translated by the controller into the '3270 character set', and keyboard scan codes from the terminal, read by the controller through a poll, are translated by the controller into EBCDIC. For DFT (Distributed Function Terminal) type displays, most of the 3270 data stream is forwarded to the display by the controller. The display interprets the 3270 protocol itself. In addition to passing the 3270 data stream directly to the terminal, allowing for features like EAB – Extended Attributes, Graphics, etc., DFT also enabled multiple sessions (up to 5 simultaneous), featured in the 3290 and 3194 multisession displays. This feature was also widely used in second-generation 3270 terminal emulation software. The MLT – Multiple Logical Terminals feature of the 3174 controller also enabled multiple sessions from a CUT type terminal. 3277 3277 model 1: 40×12 terminal 3277 model 2: 80×24 terminal, the biggest success of all 3277 GA: a 3277 with an RS232C I/O, often used to drive a Tektronix 4013 or 4015 graphic screen (monochrome) 3278 3278 models 1–5: next-generation, with accented characters and dead keys in countries that needed them model 1: 80×12 model 2: 80×24 model 2A: 80×24 (console) with 4 lines reserved model 3: 80×32 or 80×24 (switchable) model 4: 80×43 or 80×24 (switchable) model 5: 132×27 or 80×24 (switchable) Extended Highlighting: ability to set highlighting attributes on individual characters as well as on fields. For the 3278 that includes: blinking character set reverse video underscored Programmed Symbols (PS): programmable characters; able to display monochrome graphics The 3278, along with the 3279 color display and the 3287 printer, introduced the Extended Display Stream (EDS) as the framework for new features. 3279 The IBM 3279 was IBM's first color terminal. IBM initially announced four models, and later added a fifth model for use as a processor console. Models model 2A: 80×24 base color model 2B: 80×24 extended color model 2C: 80×24 base color (console) with 4 lines reserved model 3A: 80×32 base color model 3B: 80×32 extended color model S3G: 80×32 extended color with programmed symbol set graphics Base color In base color mode the protection and intensity field attributes determine the color, as in the following table (base color mode):
Protection   Intensity    Color
Unprotected  Normal       Green
Unprotected  Intensified  Red
Protected    Normal       Blue
Protected    Intensified  White
Extended color In extended color mode the color field and character attributes determine the color as one of Neutral (White) Red Blue Green Pink Yellow Turquoise The 3279 was introduced in 1979. The 3279 was widely used as an IBM mainframe terminal before PCs became commonly used for the purpose. It was part of the 3270 series, using the 3270 data stream. 
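The base color scheme above is simple enough to express in a few lines. The following minimal sketch (an illustration written for this article, not IBM code; the function name is chosen here) maps a field's protection and intensity attributes to the base-mode color, following the table above.

```python
# Illustrative sketch: derive the 3279 base-mode color of a field from its
# protection and intensity field attributes, per the base color table above.

def base_color(protected: bool, intensified: bool) -> str:
    """Return the base-color-mode color of a field."""
    if not protected:
        return "Red" if intensified else "Green"
    return "White" if intensified else "Blue"

# Enumerate the four combinations from the table:
for protected in (False, True):
    for intensified in (False, True):
        print(f"protected={protected!s:<5} intensified={intensified!s:<5} -> "
              f"{base_color(protected, intensified)}")
```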
Terminals could be connected to a 3274 controller, either channel connected to an IBM mainframe or linked via an SDLC (Synchronous Data Link Control) link. In the Systems Network Architecture (SNA) protocol these terminals were logical unit type 2 (LU2). The basic models 2A and 3A used red and green for input fields, and blue and white for output fields. However, the models 2B and 3B supported seven colors, and when equipped with the optional Programmed Symbol Set feature had a loadable character set that could be used to show graphics. The Programmed Symbol Set feature could be added in the field, and was standard in the model S3G. The IBM 3279 with its graphics software support, Graphical Data Display Manager (GDDM), was designed at IBM's Hursley Development Laboratory, near Winchester, England. 3290 The 3290 Information Panel is a 17-inch amber monochrome plasma display unit, announced March 8, 1983, capable of displaying in various modes, including four independent 3278 model 2 terminals, or a single 160×62 terminal; it also supports partitioning. The 3290 supports graphics through the use of programmed symbols. A 3290 application can divide its screen area up into as many as 16 separate explicit partitions (logical screens). The 3290 is a Distributed Function Terminal (DFT) and requires that the controller do a downstream load (DSL) of microcode from floppy or hard disk. 317x 3178: lower cost terminal (1983) 3179: low cost color terminal announced March 20, 1984. 3180 The 3180 was a monochrome display, introduced on March 20, 1984, that the user could configure for several different basic and extended display modes; all of the basic modes have a primary screen size of 24x80. Modes 2 and 2+ have a secondary size of 24x80, 3 and 3+ have a secondary size of 32x80, 4 and 4+ have a secondary size of 43x80 and 5 and 5+ have a secondary size of 27x132. An application can override the primary and alternate screen sizes for the extended mode. The 3180 also supported a single explicit partition that could be reconfigured under application control. 3191 The IBM 3191 Display Station is an economical monochrome CRT. Models A and B are 1920-character 12-inch CRTs. Models D, E and L are 1920- or 2560-character 14-inch CRTs. 3192 Model C provides a 7-color 14 inch CRT with 80x24 or 80x32 characters Model D provides a green monochrome 15 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters Model F provides a 7-color high-resolution 14 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters Model G provides a 7-color 14 inch CRT with 80x24 or 80x32 characters Model L provides a green monochrome 15 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters with a selector pen capability Model W provides a black and white 15 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters 3193 The IBM 3193 Display Station is a high-resolution, portrait-type, monochrome, 380 mm (15 inch) CRT image display providing up to letter- or A4-size document display capabilities in addition to alphanumeric data. Compressed images can be sent to the 3193 from a scanner and decompression is performed in the 3193. Image data compression is a technique to save transmission time and reduce storage requirements. 3194 The IBM 3194 is a Display Station that features a 1.44 MB 3.5" floppy drive and IND$FILE transfer. 
Model C provides a 12-inch color CRT with 80x24 or 80x32 characters Model D provides a 15-inch monochrome CRT with 80x24, 80x31, 80x44 or 132x27 characters Model H provides a 14-inch color CRT with 80x24, 80x31, 80x44 or 132x27 characters Subsequent 3104: low-cost R-loop connected terminal for the IBM 8100 system 3472 Infowindow Non-IBM Displays Several third-party manufacturers produced 3270-compatible displays. GTE GTE manufactured the IS/7800 Video Display System, nominally compatible with IBM 3277 displays attached to a 3271 or 3272. An incompatibility with the RA buffer order broke the logon screen in VM/SE (SEPP). Harris Harris manufactured the 8000 Series Terminal Systems, compatible with IBM 3277 displays attached to a 3271 or 3272. Harris later manufactured the 9100–9200 Information Processing Systems, which included 9178 9278 9279-2A 9279-3G 9280 Informer 270 376/SNA Informer Computer Terminals manufactured a special version of their model 270 terminal that was compatible with the IBM 3270 and had an associated coax port to connect to a 3x74. Memorex Telex Memorex 1377, compatible with the IBM 3277; attaches to a 1371 or 1372. Documentation for the following is available at Memorex/Telex 2078 Memorex/Telex 2079 Memorex/Telex 2080 Memorex/Telex 2178 Memorex/Telex 2179 Nokia/Alfaskop Alfaskop Display Unit 4110 Alfaskop Display Unit 4112 AT&T AT&T introduced the Dataspeed 40 terminal/controller, compatible with the IBM 3275, in 1980. Graphics models IBM had two different implementations for supporting graphics. The first was implemented in the optional Programmed Symbol Sets (PSS) of the 3278, 3279 and 3287, which became a standard feature on the later 3279-S3G, a.k.a. 3279G, and was based on piecing together graphics with on-the-fly custom-defined symbols downloaded to the terminal. The second, later implementation provided All Points Addressable (APA) graphics, a.k.a. vector graphics, allowing more efficient graphics than the older technique. The first terminal to support APA / vector graphics was the 3179G, which was later replaced first by the 3192G and then by the 3472G. Both implementations are supported by IBM GDDM – Graphical Data Display Manager, first released in 1979, and by SAS with their SAS/GRAPH software. IBM 3279G The IBM 3279-S3G, a.k.a. 3279G, terminal, announced in 1979, was IBM's graphics replacement for the 3279-3B with PSS. The terminal supported 7 colors, and the graphics were made up of Programmable Symbol sets loaded to the terminal by the graphical application GDDM – Graphical Data Display Manager – using the Write Structured Field command. Programmable Symbols are an addition to the normal base character set consisting of Latin characters, numbers, etc. hardwired into the terminal. The 3279G supports six additional sets of symbols, each holding 190 symbols, resulting in a total of 1,140 programmable symbols. Three of the Programmable Symbol sets have three planes each, enabling the Programmable Symbols downloaded to those sets to be colored (red, blue, green), thereby supporting a total of seven colors. Each 'character' cell consists of a 9x12 or a 9x16 dot matrix depending on the screen model. In order to program a cell with a symbol, 18 bytes of data are needed, making the data load quite heavy in some instances when compared to classic text screens. If one, for example, wishes to draw a hyperbola on the screen, the application must first compute the required Programmable Symbols to make up the hyperbola and load them to the terminal. 
The next step is then for the application to paint the screen by addressing the screen cell positions and selecting the appropriate symbol in one of the Programmable Symbol sets. The 3279G could be ordered with an Attribute Select Keyboard, enabling the operator to select attributes, colors and Programmable Symbol sets, making that version of the terminal quite distinctive. IBM 3179G The IBM 3179G, announced June 18, 1985, is an IBM mainframe computer terminal providing 80×24 or 80×32 characters, 16 colors, plus graphics, and is the first terminal to support APA graphics apart from the 3270 PC/G, 3270 PC/GX, PC AT/G and PC AT/GX. 3179-G terminals combine text and graphics as separate layers on the screen. Although the text and graphics appear combined on the screen, the text layer actually sits over the graphics layer. The text layer contains the usual 3270-style cells which display characters (letters, numbers, symbols, or invisible control characters). The graphics layer is an area of 720×384 pixels. All Points Addressable or vector graphics is used to paint each pixel in one of sixteen colors. As well as being separate layers on the screen, the text and graphics layers are sent to the display in separate data streams, making them completely independent. The application, i.e. GDDM, sends the vector definitions to the 3179-G, and the work of activating the pixels that represent the picture (the vector-to-raster conversion) is done in the terminal itself. The size of the data stream is related to the number of graphics primitives (lines, arcs, and so on) in the picture. Arcs are split into short vectors that are sent to the 3179-G to be drawn. The 3179-G does not store graphic data, and so cannot offload any manipulation function from GDDM. In particular, with user control, each new viewing operation means that the data has to be regenerated and retransmitted. The 3179G is a distributed function terminal (DFT) and requires a downstream load (DSL) to load its microcode from the cluster controller's floppy disk or hard drive. The G10 model has a standard 122-key typewriter keyboard, while the G20 model offers APL on the same layout. Compatible with IBM System/370, IBM 4300 series, 303x, 308x, IBM 3090, and IBM 9370. IBM 3192G The IBM 3192G, announced in 1987, was the successor to the 3179G. It featured 16 colors, and support for printers (e.g., the IBM Proprinter) for local hardcopy with graphical support, or a system printer (text only) implemented as an additional LU. IBM 3472G The IBM 3472G, announced in 1989, was the successor to the 3192G and featured five concurrent sessions, one of which could be graphics. Unlike the 3192-G, it needed no expansion unit to attach a mouse or color plotter, and it could also attach a tablet device for digitised input and a bar code reader. APL / APL2 Most IBM terminals, starting with the 3277, could be delivered with an APL keyboard, allowing the operator/programmer to enter APL symbolic instructions directly into the editor. In order to display APL symbols on the terminal, it had to be equipped with an APL character set in addition to the normal 3270 character set. The APL character set is addressed with a preceding Graphic Escape X'08' instruction. With the advent of the graphic terminal 3179G, the APL character set was expandable to 138 characters, called APL2. The added characters were: Diamond, Quad Null, Iota Underbar, Epsilon Underbar, Left Tack, Right Tack, Equal Underbar, Squished Quad, Quad Slope, and Dieresis Dot. 
Later APL2 symbols were supported by the 3191 Models D, E, L, the CUT version of the 3192, and the 3472. Note that IBM's version of APL is also called APL2. Display-Controller 3275 remote display with controller function (no additional displays; up to one printer) 3276 remote display with controller function. The IBM 3276, announced in 1981, was a combined remote controller and display terminal, offering support for up to 8 displays, the 3276 itself included. By default, the 3276 had two type A coax ports, one for its own display, and one free for an additional terminal or printer. Up to three additional adapters, each supporting two coax devices, could be installed. The 3276 could connect to a non-SNA or SNA host using BSC or SDLC with a line speed of up to 9,600 bit/s. The 3276 looked very much like the 3278 terminal, and the terminal features of the 3276 itself were more or less identical to those of the 3278. Printers 3284 matrix printer 3286 matrix printer 3287 printer, including a color model 3288 line printer 3268-1 R-loop connected stand-alone printer for the IBM 8100 system 4224 matrix printer In 1984 IBM announced IPDS – Intelligent Printer Data Stream – for online printing of AFP – Advanced Function Presentation – documents, using bidirectional communications between the application and the printer. IPDS supports, among other things, printing of text, fonts, images, graphics, and barcodes. The IBM 4224 is one of the IPDS-capable dot matrix printers. With the emergence of printers, including laser printers, from HP, Canon, and others targeting the PC market, 3270 customers got an alternative to IBM 3270 printers by connecting such printers through printer protocol converters from manufacturers like I-data, MPI Tech, Adacom, and others. The printer protocol converters basically emulate a 3287 type printer, and were later extended to support IPDS. The IBM 3482 terminal, announced in 1992, offered a printer port, which could be used for host addressable printing as well as local screen copy. In the later versions of the 3174 the Asynchronous Emulation Adapter (AEA), supporting async RS-232 character-based terminals, was enhanced to support printers equipped with a serial interface. Controllers 3271 remote controller 3272 local controller 3274 cluster controller (different models could be channel-attached or remote via BSC or SDLC communication lines, and had between eight and 32 co-ax ports) 3174 cluster controller On the 3274 and 3174, IBM used the term configuration support letter, sometimes followed by a release number, to designate a list of features together with the hardware and microcode needed to support them. By 1994 the 3174 Establishment Controller supported features such as attachment to multiple hosts via Token Ring, Ethernet, or X.25 in addition to the standard channel attach or SDLC; terminal attachment via twisted pair, Token Ring or Ethernet in addition to co-ax; and TN3270. They also support attachment of asynchronous ASCII terminals, printers, and plotters alongside 3270 devices. 3274 controller IBM introduced the 3274 controller family in 1977, replacing the 3271–2 product line. Whereas the features of the 3271–2 were hardcoded, the 3274 was controlled by microcode read from the 3274's built-in 8-inch floppy drive. 3274 models included 8-, 12-, 16-, and 32-port remote controllers and 32-port local channel-attached units. In total, 16 different models were released to the market over time. 
The 3274-1A was an SNA Physical Unit type 2.0 (PU2.0); it required only a single address on the channel for all 32 devices and was not compatible with the 3272. The 3274-1B and 3274-1D were compatible with the 3272 and were referred to as local non-SNA models. The 3274 controllers introduced a new generation of the coax protocol, named Category A, to differentiate them from the Category B coax devices, such as the 3277 terminal and the 3284 printer. The first Category A coax devices were the 3278 and the first color terminal, the IBM 3279 Color Display Station. To enable backward compatibility, it was possible to install coax boards, so-called 'panels', in groups of 4 or 8, supporting the older Category B coax devices. A maximum of 16 Category B terminals could be supported, and only 8 if the controller was fully loaded with a maximum of 4 panels each supporting 8 Category A devices. During its life span, the 3274 supported several features including: Extended Data Stream Extended Highlighting Programmed Symbol Set (PSS) V.24 interfaces with speeds up to 14.4 kbit/s V.35 interfaces with speeds up to 56 kbit/s X.25 network attachment DFT – Distributed Function Terminal DSL – Downstream load for 3290 and 3179G 9901 and 3299 multiplexers Entry Assist Dual Logic (the feature of having two sessions from a CUT mode display). 3174 controller IBM introduced the 3174 Subsystem Control Unit in 1986, replacing the 3274 product line. The 3174 was designed to enhance the 3270 product line with many new connectivity options and features. Like the 3274, it was customizable; the main differences were that it used smaller (5.25-inch) diskettes than the 3274's 8-inch diskettes, and that the larger floor models had 10 slots for adapters, some of which were by default occupied by a channel adapter/serial interface, coax adapter, etc. Unlike the 3274, any local model could be configured as either local SNA or local non-SNA, including PU2.1 (APPN). The models included: 01L, 01R, 02R, 03R, 51R, 52R, 53R, 81R and 82R. The 01L was local channel-attached, the R models were remotely connected, and the x3R models were Token Ring (upstream) connected. The 0xL/R models were floor units supporting up to 32 coax devices through the use of internal or external multiplexers (TMA/3299). The 5xR models were shelf units with 9 coax ports, expandable to 16 by connecting a 3299 multiplexer. The smallest desktop units, the 8xR, had 4 coax ports, expandable to 8 by connecting a 3299 multiplexer. In the 3174 controller line IBM also slightly altered the classic BNC coax connector, replacing it with the DPC – Dual Purpose Connector. The DPC female connector was a few millimeters longer and had a built-in switch that detected whether a normal BNC connector or a newer DPC connector was attached, changing the physical layer from 93-ohm unbalanced coax to 150-ohm balanced twisted pair, thereby directly supporting the IBM Cabling System without the need for a so-called red balun. Configuration Support A was the first microcode offered with the 3174. It supported all the hardware modules present at the time and almost all the microcode features found in the 3274, and introduced a number of new features including: Intelligent Printer Data Stream (IPDS), Multiple Logical Terminals, Country Extended Code Page (CECP), Response Time Monitor, and Token Ring configured as a host interface. 
Configuration Support S, strangely following release A, introduced the ability for a local or remote controller to act as a 3270 Token-Ring DSPU gateway, supporting up to 80 downstream PUs. In 1989, IBM introduced a new range of 3174 models and changed the name from 3174 Subsystem Control Unit to 3174 Establishment Controller. The main new feature was support for an additional 32 coax ports in floor models. The models included: 11L, 11R, 12R, 13R, 61R, 62R, 63R, 91R, and 92R. The new line of controllers came with Configuration Support B release 1, which increased the number of supported DSPUs on the Token-Ring gateway to 250 units and at the same time introduced 'Group Polling', which offloaded the mainframe/VTAM polling requirement on the channel. Configuration Support B releases 2 to 5 enabled features like Local Format Storage (CICS screen buffer), Type Ahead, Null/Space Processing, and ESCON channel support. In 1990–1991, a total of 7 more models were added: 21R, 21L, 12L, 22L, 22R, 23R, and 90R. The 12L offered ESCON fibre-optic channel attachment. The models with the 2xx designation were equal to the 1xx models but repackaged for rack mounting and offered only 4 adapter slots. The 90R was not intended as a coax controller; it was positioned as a Token Ring 3270 DSPU gateway. However, it did have one coax port for configuring the unit, which with a 3299 multiplexer could be expanded to 8. This line of controllers came with Configuration Support C to support ISDN, APPN and Peer Communication. The ISDN feature allowed downstream devices, typically PCs, to connect to the 3174 via the ISDN network. The APPN support enabled the 3174 to be part of an APPN network, and Peer Communication allowed coax-attached PCs with 'Peer Communication Support' to access resources on the Token-Ring network attached to the 3174. The subsequent releases 2 to 6 of Configuration Support C enabled support for: split screen; copy from session to session; a calculator function; access to AS/400 hosts and 5250 keyboard emulation; numerous APPN enhancements; TCP/IP Telnet support, which allowed 3270 CUT terminals to communicate with TCP/IP servers using Telnet and, at the same time in another screen, to communicate with the mainframe using native 3270; TN3270 support, where the 3174 could connect to a TN3270 host/gateway, eliminating SNA but preserving the 3270 data stream; and IP forwarding, allowing LAN (Token-Ring or Ethernet) devices connected downstream of the 3174 to have their IP traffic routed onto the Frame Relay WAN interface. In 1993, three new models were added with the announcement of the Ethernet Adapter (FC 3045). The models were: 14R, 24R, and 64R. This was also IBM's final hardware announcement for the 3174. The floor models and the rack-mountable units could be expanded with a range of special 3174 adapters, which by 1993 included: Channel adapter, ESCON adapter, Serial (V.24/V.35) adapter, Concurrent Communication Adapter, Coax adapter, Fiber optic "coax" adapter, Async adapter, ISDN adapter, Token-Ring adapter, Ethernet adapter, and line encryption adapter. In 1994, IBM incorporated the functions of RPQ 8Q0935 into Configuration Support C release 3, including the TN3270 client. 
Non-IBM Controllers GTE The GTE IS/7800 Video Display Systems used one of two nominally IBM compatible controllers: 7801 (remote, 3271 equivalent) 7802 (local, 3277 equivalent) Harris The Harris 8000 Series Terminal Systems used one of four controllers: 8171 (remote, 3271 equivalent) 8172 (local, 3277 equivalent) 8181 (remote, 3271 equivalent) 8182 (local, 3277 equivalent) 9116 9210 9220 Home grown An alternative implementation of an establishment controller exists in the form of OEC (Open Establishment Controller). It is a combination of an Arduino shield with a BNC connector and a Python program that runs on a POSIX system. OEC allows connecting a 3270 display to IBM mainframes via TN3270 or to other systems via VT100. Currently only CUT displays, not DFT displays, are supported. Memorex Memorex had two controllers for its 3277-compatible 1377; the 1371 for remote connection and the 1372 for local connection. Later Memorex offered a series of controllers compatible with the IBM 3274 and 3174 2074 2076 2174 2274 Multiplexers IBM offered a device called the 3299 that acted as a multiplexer between an accordingly configured 3274 controller, with the 9901 multiplexer feature, and up to eight displays/printers, thereby reducing the number of coax cables between the 3x74 controller and the displays/printers. With the introduction of the 3174 controller, internal or external multiplexers (3299) became mainstream, as the 3174-1L controller was equipped with four multiplexed ports, each supporting eight devices. The internal 3174 multiplexer card was named TMA – Terminal Multiplexer Adapter 9176. A number of vendors manufactured 3270 multiplexers before and alongside IBM, including Fibronics and Adacom, offering multiplexers that supported TTP – telephone twisted pair – as an alternative to coax, as well as fiber-optic links between the multiplexers. In some instances, the multiplexer worked as an "expansion" unit on smaller remote controllers, including the 3174-81R / 91R, where the 3299 expanded the number of coax ports from four to eight, or the 3174-51R / 61R, where the 3299 expanded the number of coax ports from eight to 16. Manufacture The IBM 3270 display terminal subsystem was designed and developed by IBM's Kingston, New York, laboratory (which later closed during IBM's difficult time in the mid-1990s). The printers were developed by the Endicott, New York, laboratory. As the subsystem expanded, the 3276 display-controller was developed by the Fujisawa laboratory, Japan, and later the Yamato laboratory; and the 3279 color display and 3287 color printer by the Hursley, UK, laboratory. The subsystem products were manufactured in Kingston (displays and controllers), Endicott (printers), and Greenock, Scotland, UK, (most products) and shipped to users in the U.S. and worldwide. 3278 terminals continued to be manufactured in Hortolândia, near Campinas, Brazil, as late as the late 1980s, with their internals redesigned by a local engineering team using modern CMOS technology while retaining the external look and feel. Telnet 3270 Telnet 3270, or tn3270, describes both the process of sending and receiving 3270 data streams using the telnet protocol and the software that emulates a 3270 class terminal that communicates using that process. tn3270 allows a 3270 terminal emulator to communicate over a TCP/IP network instead of an SNA network. Telnet 3270 can be used for either terminal or print connections. 
Standard telnet clients cannot be used as a substitute for tn3270 clients, as they use fundamentally different techniques for exchanging data. TN3270 is typically deployed for online IBM mainframe application access via VTAM. Technical Information 3270 character set The 3270 displays are available with a variety of keyboards and character sets. The following table shows the 3275/3277/3284–3286 character set for US English EBCDIC (optional characters were available for US ASCII, and UK, French, German, and Italian EBCDIC). On the 3275 and 3277 terminals without the text feature, lower case characters display as uppercase. NL, EM, DUP, and FM control characters display and print as 5, 9, *, and ; characters, respectively, except by the printer when WCC or CCC bits 2 and 3 = '00'b, in which case NL and EM serve their control function and do not print. Data stream Data sent to the 3270 consists of commands, a Copy Control Character (CCC) or Write Control Character (WCC) if appropriate, a device address for copy, orders, character data and structured fields. Commands instruct the 3270 control unit to perform some action on a specified device, such as a read or write. Orders are sent as part of the data stream to control the format of the device buffer. Structured fields convey additional control functions and data to or from the terminal. On a local non-SNA controller, the command is a CCW opcode rather than the first byte of the outbound display stream; on all other controllers, the command is the first byte of the display stream, exclusive of protocol headers. Commands The following table includes datastream commands and CCW opcodes for local non-SNA controllers; it does not include CCW opcodes for local SNA controllers. Write control character The data sent by Write or Erase/Write consists of the command code itself followed by a Write Control Character (WCC), optionally followed by a buffer containing orders or data (or both). The WCC controls the operation of the device. Bits may start printer operation and specify a print format. Other bit settings will sound the audible alarm if installed, unlock the keyboard to allow operator entry, or reset all the Modified Data Tags in the device buffer. Orders Orders consist of the order code byte followed by zero to three bytes of variable information. Attributes The 3270 has three kinds of attributes: Field attributes Extended attributes Character attributes Field attributes The original 3277 and 3275 displays used an 8-bit field attribute byte of which five bits were used. Bits 0 and 1 are set so that the attribute will always be a valid EBCDIC (or ASCII) character. Bit 2 is zero to indicate that the associated field is unprotected (the operator can enter data) or one for protected. Bit 3 is zero to indicate that this field, if unprotected, can accept alphanumeric input. One indicates that only numeric input is accepted, and automatically shifts to numeric for some keyboards. Bits 4 and 5 operate in tandem: '00'B indicates that the field is displayed on the screen and is not selector-pen detectable. '01'B indicates that the field is displayable and selector-pen detectable. '10'B indicates that the field is intensified (bright), displayable, and selector-pen detectable. '11'B indicates that the field is non-display, non-printable, and not pen detectable. This last can be used in conjunction with the modified data tag to embed static data on the screen that will be read each time data is read from the device. 
Bit 7 is the "Modified Data Tag", where '0' indicates that the associated field has not been modified by the operator and '1' indicates that it has been modified. As noted above, this bit can be set programmatically to cause the field to be treated as modified. Later models include base color: "Base color (four colors) can be produced on color displays and color printers from current 3270 application programs by use of combinations of the field intensify and field protection attribute bits. For more information on color, refer to IBM 3270 Information System: Color and Programmed Symbols, GA33-3056." Extended attributes The 3278 and 3279 and later models used extended attributes to add support for seven colors, blinking, reverse video, underscoring, field outlining, field validation, and programmed symbols. Character attributes The 3278 and 3279 and later models allowed attributes on individual characters in a field to override the corresponding field attributes. This allowed programs (such as the LEXX text editor) to assign any font (including the programmable fonts), colour, etc. to any character on the screen. Buffer addressing 3270 displays and printers have a buffer containing one byte for every screen position. For example, a 3277 model 2 featured a screen size of 24 rows of 80 columns for a buffer size of 1920 bytes. Bytes are addressed from zero to the screen size minus one, in this example 1919. "There is a fixed relationship between each ... buffer storage location and its position on the display screen." Most orders start operation at the "current" buffer address, and executing an order or writing data will update this address. The buffer address can be set directly using the Set Buffer Address (SBA) order, often followed by Start Field or Start Field Extended. For a device with a 1920 character display a twelve bit address is sufficient. Later 3270s with larger screen sizes use fourteen or sixteen bits. Addresses are encoded within orders in two bytes. For twelve bit addresses the high order two bits of each byte are set to form valid EBCDIC (or ASCII) characters. For example, address 0 is coded as X'4040', or space-space, address 1919 is coded as X'5D7F', or ''. Programmers hand-coding panels usually keep the table of addresses from the 3270 Component Description or the 3270 Reference Card handy. For fourteen and sixteen-bit address, the address uses contiguous bits in two bytes. Example The following data stream writes an attribute in row 24, column 1, writes the (protected) characters '> ' in row 24, columns 2 and 3, and creates an unprotected field on row 24 from columns 5-79. Because the buffer wraps around an attribute is placed on row 24, column 80 to terminate the input field. This data stream would normally be written using an Erase/Write command which would set undefined positions on the screen to '00'x. Values are given in hexadecimal. 
Data: Description
D3: WCC [reset device + restore (unlock) keyboard + reset MDT]
11 5C F0: SBA Row 24 Column 1
1D F0: SF/Attribute [protected, alphanumeric, display normal intensity, not pen-detectable, MDT off]
6E 40: '> '
1D 40: SF/Attribute [unprotected, alphanumeric, display normal intensity, not pen-detectable, MDT off] – SBA is not required here since this is being written at the current buffer position
13: IC – cursor displays at current position: Row 24, column 5
11 5D 7F: SBA Row 24 Column 80
1D F0: SF/Attribute [protected, alphanumeric, display normal intensity, not pen-detectable, MDT off]
Extended Data Stream Most 3270 terminals newer than the 3275, 3277, 3284 and 3286 support an extended data stream (EDS) that allows many new capabilities, including: Display buffers larger than 4096 characters Additional field attributes, e.g., color Character attributes within a field Redefining display geometry Querying terminal characteristics Programmed Symbol Sets All Points Addressable (APA) graphics See also 3270 emulator IBM 5250 display terminal subsystem for IBM AS/400 and IBM System/3X family List of IBM products Notes References External links Partial IBM history noting the unveiling of the 3270 display system in 1971 3270 Data Stream Programming rbanffy/3270font: A TTF remake of the font from the 3270 3270 3270 Block-oriented terminal 3270 Multimodal interaction History of human–computer interaction Computer-related introductions in 1971
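Referring back to the "Buffer addressing" discussion and the worked data stream example above, the twelve-bit address encoding can be made concrete with a short sketch. The translation table is the six-bit-to-character mapping commonly documented for the 3270 data stream; the helper function is written here purely for illustration and is not IBM code. It reproduces the SBA address pairs used in the example (X'5CF0' for row 24, column 1 and X'5D7F' for row 24, column 80 on an 80×24 screen), as well as X'4040' for address 0.

```python
# Sketch of 3270 twelve-bit buffer address encoding (illustrative, not IBM code).
# Each six-bit half of the address is mapped to a byte that is also a valid
# EBCDIC graphic character, using the commonly documented translation table.

ADDRESS_TABLE = [
    0x40, 0xC1, 0xC2, 0xC3, 0xC4, 0xC5, 0xC6, 0xC7,
    0xC8, 0xC9, 0x4A, 0x4B, 0x4C, 0x4D, 0x4E, 0x4F,
    0x50, 0xD1, 0xD2, 0xD3, 0xD4, 0xD5, 0xD6, 0xD7,
    0xD8, 0xD9, 0x5A, 0x5B, 0x5C, 0x5D, 0x5E, 0x5F,
    0x60, 0x61, 0xE2, 0xE3, 0xE4, 0xE5, 0xE6, 0xE7,
    0xE8, 0xE9, 0x6A, 0x6B, 0x6C, 0x6D, 0x6E, 0x6F,
    0xF0, 0xF1, 0xF2, 0xF3, 0xF4, 0xF5, 0xF6, 0xF7,
    0xF8, 0xF9, 0x7A, 0x7B, 0x7C, 0x7D, 0x7E, 0x7F,
]

def encode_address(row: int, col: int, cols: int = 80) -> bytes:
    """Encode a 1-based (row, col) position as a two-byte 12-bit buffer address."""
    addr = (row - 1) * cols + (col - 1)          # 0 .. screen size - 1
    if not 0 <= addr < 4096:
        raise ValueError("12-bit addressing covers at most 4096 positions")
    return bytes([ADDRESS_TABLE[(addr >> 6) & 0x3F], ADDRESS_TABLE[addr & 0x3F]])

# Reproduce the address pairs from the example above:
assert encode_address(24, 1) == b"\x5C\xF0"    # SBA row 24, column 1
assert encode_address(24, 80) == b"\x5D\x7F"   # SBA row 24, column 80
assert encode_address(1, 1) == b"\x40\x40"     # address 0 encodes as X'4040'
```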
IBM 3270
Technology
10,362
77,739,537
https://en.wikipedia.org/wiki/Chaplygin%27s%20theorem
In the mathematical theory of differential equations, Chaplygin's theorem (Chaplygin's method) concerns the existence and uniqueness of the solution to an initial value problem for a first-order explicit ordinary differential equation. The theorem was stated by Sergey Chaplygin. It is one of many comparison theorems. Important definitions Consider an initial value problem: differential equation in , with an initial condition . For the initial value problem described above, the upper boundary solution and the lower boundary solution are the functions and respectively, both of which are smooth in and continuous in , such that the following inequalities hold: ; and for . Statement Source: Given the aforementioned initial value problem and the respective upper boundary solution and lower boundary solution for . If the right-hand side is continuous in , ; and satisfies the Lipschitz condition in the variable between the functions and , that is, there exists a constant such that for every , , the inequality holds, then in there exists one and only one solution of the given initial value problem, and moreover for all . Remarks Source: Weakening inequalities The inequalities in both of the definitions of the upper boundary solution and the lower boundary solution can be changed (all at once) to non-strict ones. As a result, the inequality signs in the conclusion of Chaplygin's theorem change to non-strict ones, and respectively. In particular, any of , could be chosen. Proving the inequality only If is already known to be an existing solution of the initial value problem in , the Lipschitz condition requirement can be omitted entirely for proving the resulting inequality. There are applications of this method in research on whether a solution is stable or not ( pp. 7–9). This is often called the "differential inequality method" in the literature and, for example, Grönwall's inequality can be proven using this technique. Continuation of the solution towards positive infinity Chaplygin's theorem answers the question of existence and uniqueness of the solution in , and the constant from the Lipschitz condition is, generally speaking, dependent on : . If for both functions and retain their smoothness and for a set is bounded, the theorem holds for all . References Further reading Theorems in analysis Ordinary differential equations Uniqueness theorems
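Because the inline formulas above were lost in extraction, the following block restates one standard formulation of a Chaplygin-type comparison statement in LaTeX. The symbols (f, x0, x1, y0, u, v, L) are chosen here for the sketch and may differ from the notation of the original article.

```latex
% One standard way of writing a Chaplygin-type comparison statement.
% Symbols are chosen for this sketch; the article's original notation was lost.
Consider
\[
  y'(x) = f(x, y(x)), \qquad y(x_0) = y_0, \qquad x \in [x_0, x_1],
\]
with a lower boundary solution $u$ and an upper boundary solution $v$ satisfying
\[
  u(x_0) = v(x_0) = y_0, \qquad
  u'(x) - f\bigl(x, u(x)\bigr) < 0 < v'(x) - f\bigl(x, v(x)\bigr), \quad x \in (x_0, x_1].
\]
If $f$ is continuous and Lipschitz in $y$ between $u$ and $v$, i.e.
\[
  \lvert f(x, y_1) - f(x, y_2) \rvert \le L \, \lvert y_1 - y_2 \rvert,
\]
then the initial value problem has exactly one solution $y$ on $[x_0, x_1]$, and
\[
  u(x) < y(x) < v(x), \qquad x \in (x_0, x_1].
\]
```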
Chaplygin's theorem
Mathematics
459
1,460,629
https://en.wikipedia.org/wiki/Effective%20temperature
The effective temperature of a body such as a star or planet is the temperature of a black body that would emit the same total amount of electromagnetic radiation. Effective temperature is often used as an estimate of a body's surface temperature when the body's emissivity curve (as a function of wavelength) is not known. When the star's or planet's net emissivity in the relevant wavelength band is less than unity (less than that of a black body), the actual temperature of the body will be higher than the effective temperature. The net emissivity may be low due to surface or atmospheric properties, such as the greenhouse effect. Star The effective temperature of a star is the temperature of a black body with the same luminosity per surface area () as the star and is defined according to the Stefan–Boltzmann law . Notice that the total (bolometric) luminosity of a star is then , where is the stellar radius. The definition of the stellar radius is obviously not straightforward. More rigorously the effective temperature corresponds to the temperature at the radius that is defined by a certain value of the Rosseland optical depth (usually 1) within the stellar atmosphere. The effective temperature and the bolometric luminosity are the two fundamental physical parameters needed to place a star on the Hertzsprung–Russell diagram. Both effective temperature and bolometric luminosity depend on the chemical composition of a star. The effective temperature of the Sun is around . The nominal value defined by the International Astronomical Union for use as a unit of measure of temperature is . Stars have a decreasing temperature gradient, going from their central core up to the atmosphere. The "core temperature" of the Sun—the temperature at the centre of the Sun where nuclear reactions take place—is estimated to be 15,000,000 K. The color index of a star indicates its temperature from the very cool—by stellar standards—red M stars that radiate heavily in the infrared to the very hot blue O stars that radiate largely in the ultraviolet. Various colour-effective temperature relations exist in the literature. Their relations also have smaller dependencies on other stellar parameters, such as the stellar metallicity and surface gravity. The effective temperature of a star indicates the amount of heat that the star radiates per unit of surface area. From the hottest surfaces to the coolest is the sequence of stellar classifications known as O, B, A, F, G, K, M. A red star could be a tiny red dwarf, a star of feeble energy production and a small surface or a bloated giant or even supergiant star such as Antares or Betelgeuse, either of which generates far greater energy but passes it through a surface so large that the star radiates little per unit of surface area. A star near the middle of the spectrum, such as the modest Sun or the giant Capella radiates more energy per unit of surface area than the feeble red dwarf stars or the bloated supergiants, but much less than such a white or blue star as Vega or Rigel. Planet Blackbody temperature To find the effective (blackbody) temperature of a planet, it can be calculated by equating the power received by the planet to the known power emitted by a blackbody of temperature . Take the case of a planet at a distance from the star, of luminosity . 
Assuming the star radiates isotropically and that the planet is a long way from the star, the power absorbed by the planet is given by treating the planet as a disc of radius , which intercepts some of the power which is spread over the surface of a sphere of radius (the distance of the planet from the star). The calculation assumes the planet reflects some of the incoming radiation by incorporating a parameter called the albedo (a). An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then: The next assumption we can make is that the entire planet is at the same temperature , and that the planet radiates as a blackbody. The Stefan–Boltzmann law gives an expression for the power radiated by the planet: Equating these two expressions and rearranging gives an expression for the effective temperature: Where is the Stefan–Boltzmann constant. Note that the planet's radius has cancelled out of the final expression. The effective temperature for Jupiter from this calculation is 88 K and 51 Pegasi b (Bellerophon) is 1,258 K. A better estimate of effective temperature for some planets, such as Jupiter, would need to include the internal heating as a power input. The actual temperature depends on albedo and atmosphere effects. The actual temperature from spectroscopic analysis for HD 209458 b (Osiris) is 1,130 K, but the effective temperature is 1,359 K. The internal heating within Jupiter raises the effective temperature to about 152 K. Surface temperature of a planet The surface temperature of a planet can be estimated by modifying the effective-temperature calculation to account for emissivity and temperature variation. The area of the planet that absorbs the power from the star is which is some fraction of the total surface area , where is the radius of the planet. This area intercepts some of the power which is spread over the surface of a sphere of radius . We also allow the planet to reflect some of the incoming radiation by incorporating a parameter called the albedo. An albedo of 1 means that all the radiation is reflected, an albedo of 0 means all of it is absorbed. The expression for absorbed power is then: The next assumption we can make is that although the entire planet is not at the same temperature, it will radiate as if it had a temperature over an area which is again some fraction of the total area of the planet. There is also a factor , which is the emissivity and represents atmospheric effects. ranges from 1 to 0 with 1 meaning the planet is a perfect blackbody and emits all the incident power. The Stefan–Boltzmann law gives an expression for the power radiated by the planet: Equating these two expressions and rearranging gives an expression for the surface temperature: Note the ratio of the two areas. Common assumptions for this ratio are for a rapidly rotating body and for a slowly rotating body, or a tidally locked body on the sunlit side. This ratio would be 1 for the subsolar point, the point on the planet directly below the sun and gives the maximum temperature of the planet — a factor of (1.414) greater than the effective temperature of a rapidly rotating planet. Also note here that this equation does not take into account any effects from internal heating of the planet, which can arise directly from sources such as radioactive decay and also be produced from frictions resulting from tidal forces. 
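Since the display equations in this section were lost in extraction, the following is a minimal sketch of the blackbody-temperature calculation described above: absorbed power L(1−a)·πR²/(4πD²) is equated with emitted power 4πR²σT⁴, giving T = [L(1−a)/(16πσD²)]^(1/4). The numerical inputs (solar luminosity, Earth's Bond albedo and orbital distance) are approximate values assumed for the example, so the printed results are only indicative.

```python
# Sketch: planetary effective (blackbody) temperature,
# T = (L * (1 - a) / (16 * pi * sigma * D**2)) ** 0.25.
# Input values are approximate and assumed for this example.
import math

SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(luminosity_w: float, albedo: float, distance_m: float) -> float:
    """Equilibrium temperature of a planet treated as a fast-rotating blackbody."""
    absorbed_fraction = 1.0 - albedo
    return (luminosity_w * absorbed_fraction
            / (16.0 * math.pi * SIGMA * distance_m ** 2)) ** 0.25

if __name__ == "__main__":
    L_SUN = 3.828e26     # solar luminosity, W (nominal value assumed here)
    AU = 1.496e11        # astronomical unit, m
    t_earth = effective_temperature(L_SUN, albedo=0.306, distance_m=AU)
    print(f"Earth effective temperature ~ {t_earth:.0f} K")   # ~254 K with these inputs
    # A slow rotator's sub-solar point is hotter by a factor of 2**0.5 (~1.414):
    print(f"Sub-solar maximum ~ {t_earth * math.sqrt(2):.0f} K")
```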
Earth effective temperature Earth has an albedo of about 0.306 and a solar irradiance () of at its mean orbital radius of 1.5×10⁸ km. The calculation with ε=1 and the remaining physical constants then gives an Earth effective temperature of . The actual temperature of Earth's surface is an average as of 2020. The difference between the two values is called the greenhouse effect. The greenhouse effect results from materials in the atmosphere (greenhouse gases and clouds) absorbing thermal radiation and reducing emissions to space, i.e., reducing the planet's emissivity of thermal radiation from its surface into space. Substituting the surface temperature into the equation and solving for ε gives an effective emissivity of about 0.61 for a 288 K Earth. Furthermore, these values yield an outgoing thermal radiation flux of (with ε=0.61 as viewed from space) versus a surface thermal radiation flux of (with ε≈1 at the surface). Both fluxes are near the confidence ranges reported by the IPCC. See also References External links Effective temperature scale for solar type stars Surface Temperature of Planets Planet temperature calculator Concepts in astrophysics Stellar astronomy Planetary science Thermodynamic properties Electromagnetic radiation Concepts in astronomy
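As a numerical check on the emissivity figure above, here is a short continuation of the previous sketch. It solves the surface-temperature relation for ε and evaluates the corresponding thermal fluxes; the 254 K effective temperature and 288 K surface temperature are the approximate values discussed above, and the printed numbers depend on those assumed inputs.

```python
# Continuation of the previous sketch: effective emissivity and thermal fluxes.
# 254 K (effective) and 288 K (surface) are the approximate values used above.
SIGMA = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4

T_EFF = 254.0            # effective temperature from the previous sketch, K
T_SURF = 288.0           # representative global-mean surface temperature, K

epsilon = (T_EFF / T_SURF) ** 4
print(f"effective emissivity ~ {epsilon:.2f}")                              # ~0.61
print(f"outgoing flux, top of atmosphere ~ {epsilon * SIGMA * T_SURF**4:.0f} W/m^2")
print(f"surface thermal flux (eps ~ 1)   ~ {SIGMA * T_SURF**4:.0f} W/m^2")
```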
Effective temperature
Physics,Chemistry,Astronomy,Mathematics
1,673
30,802
https://en.wikipedia.org/wiki/Tragedy%20of%20the%20commons
The tragedy of the commons is a concept which states that if many people enjoy unfettered access to a finite, valuable resource, such as a pasture, they will tend to overuse it and may end up destroying its value altogether. Even if some users exercised voluntary restraint, the other users would merely replace them, the predictable result being a "tragedy" for all. The concept has been widely discussed, and criticised, in economics, ecology and other sciences. The metaphorical term is the title of a 1968 essay by ecologist Garrett Hardin. The concept itself did not originate with Hardin but rather extends back to classical antiquity, being discussed by Aristotle. The principal concern of Hardin's essay was overpopulation of the planet. To prevent the inevitable tragedy (he argued) it was necessary to reject the principle (supposedly enshrined in the Universal Declaration of Human Rights) according to which every family has a right to choose the number of its offspring, and to replace it by "mutual coercion, mutually agreed upon". Some scholars have argued that over-exploitation of the common resource is by no means inevitable, since the individuals concerned may be able to achieve mutual restraint by consensus. Others have contended that the metaphor is inapposite because its exemplar – unfettered access to common land – did not exist historically, the right to exploit common land being controlled by law. The work of Elinor Ostrom, who received the Nobel Prize in Economics, is seen by some economists as having refuted Hardin's claims. Hardin's views on over-population have been criticised as simplistic and racist. Expositions Classical The concept of unrestricted-access resources becoming spent, where personal use does not incur personal expense, has been discussed for millennia. Aristotle wrote that "That which is common to the greatest number gets the least amount of care. Men pay most attention to what is their own: they care less for what is common." Lloyd's pamphlet In 1833, the English economist William Forster Lloyd published "Two Lectures on the Checks to Population", a pamphlet that included a hypothetical example of over-use of a common resource. This was the situation of cattle herders sharing a common parcel of land on which they were each entitled to let their cows graze. He postulated that if a herder put more than his allotted number of cattle on the common, overgrazing could result. For each additional animal, a herder could receive additional benefits, while the whole group shared the resulting damage to the commons. If all herders made this individually rational economic decision, the common could be depleted or even destroyed, to the detriment of all. Lloyd's pamphlet was written after the enclosure movement had eliminated the open field system of common property as the standard model for land exploitation in England (though there remained, and still remain, millions of acres of "common land": see below, Commons in historical reality). Carl Dahlman and others have asserted that his description was historically inaccurate, pointing to the fact that the system endured for hundreds of years without producing the disastrous effects claimed by Lloyd. Garrett Hardin's article In 1968, ecologist Garrett Hardin explored this social dilemma in his article "The Tragedy of the Commons", published in the journal Science. 
The essay derived its title from the pamphlet by Lloyd, which he cites, on the over-grazing of common land: Hardin discussed problems that cannot be solved by technical means, as distinct from those with solutions that require "a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality". Hardin focused on human population growth, the use of the Earth's natural resources, and the welfare state. Hardin argued that if individuals relied on themselves alone, and not on the relationship between society and man, then people would treat other people as resources, which would lead to the world population continuing to grow. Parents breeding excessively would leave fewer descendants because they would be unable to provide for each child adequately. Such negative feedback is found in the animal kingdom. Hardin said that if the children of improvident parents starved to death, if overbreeding was its own punishment, then there would be no public interest in controlling the breeding of families. Political inferences Hardin blamed the welfare state for allowing the tragedy of the commons; where the state provides for children and supports overbreeding as a fundamental human right, a Malthusian catastrophe is inevitable. Consequently, in his article, Hardin lamented the following proposal from the United Nations: In addition, Hardin also pointed out the problem of individuals acting in rational self-interest by claiming that if all members in a group used common resources for their own gain and with no regard for others, all resources would still eventually be depleted. Overall, Hardin argued against relying on conscience as a means of policing commons, suggesting that this favors selfish individuals – often known as free riders – over those who are more altruistic. In the context of avoiding over-exploitation of common resources, Hardin concluded by restating Hegel's maxim (which was quoted by Engels), "freedom is the recognition of necessity". He suggested that "freedom" completes the tragedy of the commons. By recognizing resources as commons in the first place, and by recognizing that, as such, they require management, Hardin believed that humans "can preserve and nurture other and more precious freedoms". The "Commons" as a modern resource concept Hardin's article marked the mainstream acceptance of the term "commons" as used to connote a shared resource. As Frank van Laerhoven and Elinor Ostrom have stated: "Prior to the publication of Hardin’s article on the tragedy of the commons (1968), titles containing the words 'the commons', 'common pool resources,' or 'common property' were very rare in the academic literature." They go on to say: "In 2002, Barrett and Mabry conducted a major survey of biologists to determine which publications in the twentieth century had become classic books or benchmark publications in biology. They report that Hardin’s 1968 article was the one having the greatest career impact on biologists and is the most frequently cited". However, the Ostroms point out that Hardin's analysis was based on crucial misconceptions about the nature of common property systems. System archetype In systems theory, the commons problem is one of the ten most common system archetypes. The Tragedy of the Commons archetype can be illustrated using a causal loop diagram. 
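As an illustration of the archetype, here is a deliberately minimal toy model. It is not drawn from Hardin or the systems-theory literature, and every number in it is invented for the sketch: each herder captures the full benefit of every animal it adds, while the cost of overgrazing is shared, so individually rational additions push total grazing past the pasture's regrowth rate and the common stock collapses.

```python
# Toy simulation of the commons archetype (illustrative only; parameters invented).
# Each season every herder adds one animal (individually rational: private gain,
# shared cost). Grass regrows at a fixed rate; each animal eats a fixed amount.

HERDERS = 10
REGROWTH_PER_SEASON = 120.0    # units of grass the pasture regrows each season
GRASS_PER_ANIMAL = 2.0         # units of grass one animal eats per season

pasture = 1000.0               # current stock of grass
herd = [5] * HERDERS           # each herder starts with five animals

for season in range(1, 21):
    for i in range(HERDERS):
        herd[i] += 1                               # private benefit, shared cost
    demand = sum(herd) * GRASS_PER_ANIMAL
    pasture = max(0.0, pasture + REGROWTH_PER_SEASON - demand)
    print(f"season {season:2d}: animals={sum(herd):3d} grass left={pasture:7.1f}")
    if pasture == 0.0:
        print("The common is exhausted.")
        break
```

With these invented parameters the pasture is exhausted after roughly a dozen seasons, which is the behaviour the causal loop diagram is meant to capture: individual gain reinforces growth in use while the shared resource erodes.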
Application Metaphoric meaning Like Lloyd and Thomas Malthus before him, Hardin was primarily interested in the problem of human population growth. But in his essay, he also focused on the use of larger (though finite) resources such as the Earth's atmosphere and oceans, as well as pointing out the "negative commons" of pollution (i.e., instead of dealing with the deliberate privatization of a positive resource, a "negative commons" deals with the deliberate commonization of a negative cost, pollution). As a metaphor, the tragedy of the commons should not be taken too literally. The "tragedy" is not in the word's conventional or theatric sense, nor a condemnation of the processes that lead to it. Similarly, Hardin's use of "commons" has frequently been misunderstood, leading him to later remark that he should have titled his work "The Tragedy of the Unregulated Commons". The metaphor illustrates the argument that free access and unrestricted demand for a finite resource ultimately reduces the resource through over-exploitation, temporarily or permanently. This occurs because the benefits of exploitation accrue to individuals or groups, each of whom is motivated to maximize the use of the resource to the point in which they become reliant on it, while the costs of the exploitation are borne by all those to whom the resource is available (which may be a wider class of individuals than those who are exploiting it). This, in turn, causes demand for the resource to increase, which causes the problem to snowball until the resource collapses (even if it retains a capacity to recover). The rate at which depletion of the resource is realized depends primarily on three factors: the number of users wanting to consume the common in question, the consumptive nature of their uses, and the relative robustness of the common. The same concept is sometimes called the "tragedy of the fishers", because fishing too many fish before or during breeding could cause stocks to plummet. Modern commons The tragedy of the commons can be considered in relation to environmental issues such as sustainability. The commons dilemma stands as a model for a great variety of resource problems in society today, such as water, forests, fish, and non-renewable energy sources such as oil, gas, and coal. Hardin's model posits that the tragedy of the commons may emerge if individuals prioritize self-interest. Another case study involves beavers in Canada, historically crucial for natives who, as stewards, organized to hunt them for food and commerce. Non-native trappers, motivated by fur prices, contributed to resource degradation, wresting control from the indigenous population. Conservation laws enacted in the 1930s in response to declining beaver populations led to the expulsion of trappers, legal acknowledgment of natives, and enforcement of customary laws. This intervention resulted in productive harvests by the 1950s. Situations exemplifying the "tragedy of the commons" include the overfishing and destruction of the Grand Banks of Newfoundland, the destruction of salmon runs on rivers that have been dammed (most prominently in modern times on the Columbia River in the Northwest United States and historically in North Atlantic rivers), and the devastation of the sturgeon fishery (in modern Russia, but historically in the United States as well). 
In terms of water supply, another example is the limited water available in arid regions (e.g., the area of the Aral Sea and the Los Angeles water system supply, especially at Mono Lake and Owens Lake). In economics, an externality is a cost or benefit that affects a party who did not choose to incur that cost or benefit. Negative externalities are a well-known feature of the "tragedy of the commons". For example, driving cars has many negative externalities; these include pollution, carbon emissions, and traffic accidents. Every time Person A gets in a car, it becomes more likely that Person Z will suffer in each of those areas. Economists often urge the government to adopt policies that "internalize" an externality. The tragedy of the commons can also refer to the idea of open data. Anonymised data are crucial for useful social research and therefore represent a public resource, or better said, a common good, which is liable to exhaustion. Some feel that the law should provide a safe haven for the dissemination of research data, since it can be argued that current data protection policies overburden valuable research without mitigating realistic risks. An expansive application of the concept can also be seen in Vyse's analysis of differences between countries in their responses to the COVID-19 pandemic. Vyse argues that those who defy public health recommendations can be thought of as spoiling a set of common goods, "the economy, the healthcare system, and the very air we breathe, for all of us". In a similar vein, it has been argued that higher sickness and mortality rates from COVID-19 in individualistic cultures with less obligatory collectivism are another instance of the "tragedy of the commons". Tragedy of the digital commons In the past two decades, scholars have been attempting to apply the concept of the tragedy of the commons to the digital environment. However, scholars differ on some very basic notions inherent to the tragedy of the commons: the idea of finite resources and the extent of pollution. On the other hand, there seems to be some agreement on the role of the digital divide and how to solve a potential tragedy of the digital commons. Resources Many digital resources have properties that make them vulnerable to the tragedy of the commons, including data, virtual artifacts and even limited user attention. Closely related are the physical computational resources, such as CPU, RAM, and network bandwidth, that digital communities on shared servers rely upon and govern. Some scholars argue that digital resources are infinite, and therefore immune to the tragedy of the commons, because downloading a file does not constitute the destruction of the file in the digital environment, and because it can be replicated and disseminated throughout the digital environment. However, it can still be considered a finite resource within the context of privacy laws and regulations that limit access to it. Finite digital resources can thus be digital commons. An example is a database that requires persistent maintenance, such as Wikipedia. As a non-profit, it survives on a network of people contributing to maintain a knowledge base without expectation of direct compensation. This digital resource will deplete, as Wikipedia can only survive if it is contributed to and used as a commons. 
The motivation for individuals to contribute is reflective of the theory because, if humans act in their own immediate interest and no longer participate, then the resource becomes misinformed or depleted. Arguments surrounding the regulation and mitigation requirements for digital resources may become reflective of natural resources. This raises the question whether one can view access itself as a finite resource in the context of a digital environment. Some scholars argue this point, often pointing to a proxy for access that is more concrete and measurable. One such proxy is bandwidth, which can become congested when too many people try to access the digital environment. Alternatively, one can think of the network itself as a common resource which can be exhausted through overuse. Therefore, when talking about resources running out in a digital environment, it could be more useful to think in terms of the access to the digital environment being restricted in some way; this is called information entropy. Pollution In terms of pollution, there are some scholars who look only at the pollution that occurs in the digital environment itself. They argue that unrestricted use of digital resources can cause an overproduction of redundant data which causes noise and corrupts communication channels within the digital environment. Others argue that the pollution caused by the overuse of digital resources also causes pollution in the physical environment. They argue that unrestricted use of digital resources causes misinformation, fake news, crime, and terrorism, as well as problems of a different nature such as confusion, manipulation, insecurity, and loss of confidence. Digital divide and solutions Scholars disagree on the particularities underlying the tragedy of the digital commons; however, there does seem to be some agreement on the cause and the solution. The cause of the tragedy of the commons occurring in the digital environment is attributed by some scholars to the digital divide. They argue that there is too large a focus on bridging this divide and providing unrestricted access to everyone. Such a focus on increasing access without the necessary restrictions causes the exploitation of digital resources for individual self-interest that is underlying any tragedy of the commons. In terms of the solution, scholars agree that cooperation rather than regulation is the best way to mitigate a tragedy of the digital commons. The digital world is not a closed system in which a central authority can regulate the users, as such some scholars argue that voluntary cooperation must be fostered. This could perhaps be done through digital governance structure that motivates multiple stakeholders to engage and collaborate in the decision-making process. Other scholars argue more in favor of formal or informal sets of rules, like a code of conduct, to promote ethical behaviour in the digital environment and foster trust. Alternative to managing relations between people, some scholars argue that it is access itself that needs to be properly managed, which includes expansion of network capacity. Patents and technology Patents are effectively a limited-time exploitation monopoly given to inventors. Once the period has elapsed, the invention is in principle free to all, and many companies do indeed commercialize such products, now market-proven. 
However, around 50% of all patent applications do not reach successful commercialization at all, often due to immature levels of components or marketing failures by the innovators. Scholars have suggested that since investment is often connected to patentability, such inactive patents form a rapidly growing category of underprivileged technologies and ideas that, under current market conditions, are effectively unavailable for use. Thus, "Under the current system, people are encouraged to register new patents, and are discouraged from using publicly available patents." The case might be particularly relevant to technologies that are relatively more environmentally/human damaging but also somewhat costlier than other alternatives developed contemporaneously. Examples More general examples (some alluded to by Hardin) of potential and actual tragedies include: Physical resources Uncontrolled human population growth leading to overpopulation. Atmosphere: through the release of pollution that leads to ozone depletion, global warming, ocean acidification (by way of increased atmospheric CO2 being absorbed by the sea), and particulate pollution. Light pollution: with the loss of the night sky for research and cultural significance, affected human, flora and fauna health, nuisance, trespass and the loss of enjoyment or function of private property. Water: Water pollution, water crisis of over-extraction of groundwater and wasting water due to overirrigation. Forests: Frontier logging of old growth forest and slash and burn. Energy resources and climate: Environmental residue of mining and drilling, burning of fossil fuels and consequential global warming. Animals: Habitat destruction and poaching leading to the Holocene mass extinction. Oceans: Overfishing Space debris in Earth's surrounding space leading to limited locations for new satellites and the obstruction of universal observations. Health Antibiotics and antibiotic resistance: Misuse of antibiotics anywhere in the world may eventually result in global antibiotic resistance, in both human and agricultural settings, which would cause irreparable harm to societal health, seen here as a common good. The survey of Kieran S. O'Brien et al. stated that many consider the misuse of antibiotics to be a case of the "tragedy of the commons"; however, the research results in this respect were inconclusive (as of 2014). Vaccines and herd immunity: Avoiding a vaccine shot and relying on the established herd immunity instead will avoid potential vaccine risks, but if everyone does this, it will diminish herd immunity and bring risk to people who cannot receive vaccines for medical reasons. The analogy with the "tragedy of the commons" is based on the interpretation that the common good here is the pool of vaccinated people, and avoiding vaccination diminishes it. Other Knowledge commons encompass immaterial and collectively owned goods in the information age, including, for example: Source code and software documentation in software projects that can get "polluted" with messy code or inaccurate information. Skills acquisition and training, when all parties involved pass the buck on implementing it. Application to evolutionary biology A parallel was drawn in 2006 between the tragedy of the commons and the competing behaviour of parasites that, through acting selfishly, eventually diminish or destroy their common host.
The idea has also been applied to areas such as the evolution of virulence or sexual conflict, where males may fatally harm females when competing for matings. The idea of evolutionary suicide, where adaptation at the level of the individual causes the whole species or population to be driven extinct, can be seen as an extreme form of an evolutionary tragedy of the commons. From an evolutionary point of view, the creation of the tragedy of the commons in pathogenic microbes may provide us with advanced therapeutic methods. Microbial ecology studies have also addressed whether resource availability modulates cooperative or competitive behaviour in bacterial populations. When resource availability is high, bacterial populations become competitive and aggressive with each other, but when environmental resources are low, they tend to be cooperative and mutualistic. Ecological studies have hypothesised that competitive forces between animals dominate in high carrying capacity zones (i.e., near the Equator), where biodiversity is higher because of the abundance of natural resources. This abundance or excess of resources causes animal populations to adopt r-type reproduction strategies (many offspring, short gestation, less parental care, and a short time until sexual maturity), so competition is affordable for populations. Competition could also select for populations whose behaviour follows positive feedback regulation. Conversely, in low carrying capacity zones (i.e., far from the Equator), where environmental conditions are harsh, K strategies are common (longer life expectancy, relatively fewer offspring, and altricial young requiring extensive parental care) and populations tend to have cooperative or mutualistic behaviours. If populations behave competitively in hostile environmental conditions, they are mostly filtered out (die) by environmental selection; hence, populations in hostile conditions are selected to be cooperative. Climate change The effects of climate change have been given as a mass example of the tragedy of the commons. This perspective proposes that the earth, being the commons, has suffered a depletion of natural resources without regard to the externalities, the impact on neighboring and future populations. The collective actions of individuals, organisations, and governments continue to contribute to environmental degradation. Mitigation of the long-term impacts and tipping points requires strict controls or other solutions, but this may come as a loss to different industries. The sustainability of population and industry growth is the subject of climate change discussion. The global commons of environmental resource consumption or selfishness, as in the fossil fuel industry, has been theorised as not realistically manageable. This is due to the crossing of irreversible thresholds of impact before the costs are entirely realised. Commons dilemma The commons dilemma is a specific class of social dilemma in which people's short-term selfish interests are at odds with long-term group interests and the common good. In academia, a range of related terminology has also been used as shorthand for the theory or aspects of it, including resource dilemma, take-some dilemma, and common pool resource. Commons dilemma researchers have studied conditions under which groups and communities are likely to under- or over-harvest common resources in both the laboratory and field.
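The incentive structure behind the commons dilemma can be made concrete with a small numerical sketch. The following Python snippet is an illustrative toy model only (the growth rate, carrying capacity, group size, and harvest levels are assumptions chosen for illustration, not values from the literature): a shared stock regrows logistically, and the same group of users either sustains it with a restrained harvest or collapses it when each user takes the individually tempting larger share.

# Toy model of a shared renewable resource; all parameter values are illustrative assumptions.
def simulate(stock, users, harvest_per_user, growth_rate=0.25, capacity=100.0, years=50):
    """Return the stock left after `years` of logistic regrowth minus the total harvest."""
    for _ in range(years):
        regrowth = growth_rate * stock * (1 - stock / capacity)
        stock = max(0.0, stock + regrowth - users * harvest_per_user)
        if stock == 0.0:
            break  # the commons has collapsed
    return stock

if __name__ == "__main__":
    # Restrained harvest: the total take stays below what the stock can regrow, so it persists.
    print(round(simulate(stock=50.0, users=10, harvest_per_user=0.3), 1))
    # Each user takes a little more: the total take exceeds regrowth and the stock collapses.
    print(round(simulate(stock=50.0, users=10, harvest_per_user=0.8), 1))

The point of the sketch is only that the collapse is driven by the per-user increment, not by any single user's total take: no individual harvest is large on its own, yet the sum exceeds what the resource can replace.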
Research programs have concentrated on a number of motivational, strategic, and structural factors that might be conducive to management of commons. In game theory, which constructs mathematical models for individuals' behavior in strategic situations, the corresponding "game", developed by Hardin, is known as the Commonize Costs – Privatize Profits Game (CC–PP game). Psychological factors Kopelman, Weber, & Messick (2002), in a review of the experimental research on cooperation in commons dilemmas, identify nine classes of independent variables that influence cooperation in commons dilemmas: social motives, gender, payoff structure, uncertainty, power and status, group size, communication, causes, and frames. They organize these classes and distinguish between psychological individual differences (stable personality traits) and situational factors (the environment). Situational factors include both the task (social and decision structure) and the perception of the task. Empirical findings support the theoretical argument that the cultural group is a critical factor that needs to be studied in the context of situational variables. Rather than behaving in line with economic incentives, people are likely to approach the decision to cooperate with an appropriateness framework. An expanded, four factor model of the Logic of Appropriateness, suggests that the cooperation is better explained by the question: "What does a person like me (identity) do (rules) in a situation like this (recognition) given this culture (group)?" Strategic factors Strategic factors also matter in commons dilemmas. One often-studied strategic factor is the order in which people take harvests from the resource. In simultaneous play, all people harvest at the same time, whereas in sequential play people harvest from the pool according to a predetermined sequence – first, second, third, etc. There is a clear order effect in the latter games: the harvests of those who come first – the leaders – are higher than the harvest of those coming later – the followers. The interpretation of this effect is that the first players feel entitled to take more. With sequential play, individuals adopt a first come-first served rule, whereas with simultaneous play people may adopt an equality rule. Another strategic factor is the ability to build up reputations. Research found that people take less from the common pool in public situations than in anonymous private situations. Moreover, those who harvest less gain greater prestige and influence within their group. Structural factors Hardin stated in his analysis of the tragedy of the commons that "Freedom in a commons brings ruin to all." One of the proposed solutions is to appoint a leader to regulate access to the common. Groups are more likely to endorse a leader when a common resource is being depleted and when managing a common resource is perceived as a difficult task. Groups prefer leaders who are elected, democratic, and prototypical of the group, and these leader types are more successful in enforcing cooperation. A general aversion to autocratic leadership exists, although it may be an effective solution, possibly because of the fear of power abuse and corruption. The provision of rewards and punishments may also be effective in preserving common resources. Selective punishments for overuse can be effective in promoting domestic water and energy conservation – for example, through installing water and electricity meters in houses. 
Selective rewards work, provided that they are open to everyone. An experimental carpool lane in the Netherlands failed because car commuters did not feel they were able to organize a carpool. The rewards do not have to be tangible. In Canada, utilities considered putting "smiley faces" on electricity bills of customers below the average consumption of that customer's neighborhood. Solutions Articulating solutions to the tragedy of the commons is one of the main problems of political philosophy. In some situations, locals implement (often complex) social schemes that work well. When these fail, there are many possible governmental solutions such as privatization, internalizing the externalities, and regulation. Non-governmental solution Robert Axelrod contends that even self-interested individuals will often find ways to cooperate, because collective restraint serves both the collective and individual interests. Anthropologist G. N. Appell criticised those who cited Hardin to "impos[e] their own economic and environmental rationality on other social systems of which they have incomplete understanding and knowledge." Political scientist Elinor Ostrom, who was awarded 2009's Nobel Memorial Prize in Economic Sciences for her work on the issue, and others revisited Hardin's work in 1999. They found the tragedy of the commons not as prevalent or as difficult to solve as Hardin maintained, since locals have often come up with solutions to the commons problem themselves. For example, another group found that a commons in the Swiss Alps has been run by a collective of farmers there to their mutual and individual benefit since 1517, in spite of the farmers also having access to their own farmland. In general, it is in the interest of the users of a commons to keep them functioning and so complex social schemes are often invented by the users for maintaining them at optimum efficiency. Another prominent example of this is the deliberative process of granting legal personhood to a part of nature, for example rivers, with the aim of preserving their water resources and prevent environmental degradation. This process entails that a river is regarded as its own legal entity that can sue against environmental damage done to it while being represented by an independently appointed guardian advisory group. This has happened as a bottom-up process in New Zealand: Here debates initiated by the Whanganui Iwi tribe have resulted in legal personhood for the river. The river is considered as a living whole, stretching from mountain to sea and even includes not only the physical but also its metaphysical elements. Similarly, geographer Douglas L. Johnson remarks that many nomadic pastoralist societies of Africa and the Middle East in fact "balanced local stocking ratios against seasonal rangeland conditions in ways that were ecologically sound", reflecting a desire for lower risk rather than higher profit; in spite of this, it was often the case that "the nomad was blamed for problems that were not of his own making and were a product of alien forces." Independently finding precedent in the opinions of previous scholars such as Ibn Khaldun as well as common currency in antagonistic cultural attitudes towards non-sedentary peoples, governments and international organizations have made use of Hardin's work to help justify restrictions on land access and the eventual sedentarization of pastoral nomads despite its weak empirical basis. 
Examining relations between historically nomadic Bedouin Arabs and the Syrian state in the 20th century, Dawn Chatty notes that "Hardin's argument was curiously accepted as the fundamental explanation for the degradation of the steppe land" in development schemes for the arid interior of the country, downplaying the larger role of agricultural overexploitation in desertification as it melded with prevailing nationalist ideology which viewed nomads as socially backward and economically harmful. Elinor Ostrom and her colleagues looked at how real-world communities manage communal resources, such as fisheries, land irrigation systems, and farmlands, and they identified a number of factors conducive to successful resource management. One factor is the resource itself; resources with definable boundaries (e.g. land) can be preserved much more easily. A second factor is resource dependence; there must be a perceptible threat of resource depletion, and it must be difficult to find substitutes. The third is the presence of a community; small and stable populations with a thick social network and social norms promoting conservation do better. A final condition is that there be appropriate community-based rules and procedures in place with built-in incentives for responsible use and punishments for overuse. When the commons is taken over by non-locals, those solutions can no longer be used. Many of the economic and social structures recommended by Ostrom coincide with the structures recommended by anarchists, particularly green anarchism. The largest contemporary societies that use these organizational strategies are the Rebel Zapatista Autonomous Municipalities and the Autonomous Administration of North and East Syria which have heavily been influenced by anarchism and other versions of libertarian and ecological socialism. Individuals may act in a deliberate way to avoid consumption habits that deplete natural resources. This consciousness promotes the boycotting of products or brands and seeking alternative, more sustainable options. Altruistic punishment Various well-established theories, such as theory of kin selection and direct reciprocity, have limitations in explaining patterns of cooperation emerging between unrelated individuals and in non-repeatable short-term interactions. Studies have shown that punishment is an efficacious motivator for cooperation among humans. Altruistic punishment entails the presence of individuals that punish defectors from a cooperative agreement, although doing so is costly and provides no material gain. These punishments effectively resolve tragedy of the commons scenarios by addressing both first-order free rider problems (i.e. defectors free riding on cooperators) and second-order free rider problems (i.e. cooperators free riding on work of punishers). Such results can only be witnessed when the punishment levels are high enough. While defectors are motivated by self-interest and cooperators feel morally obliged to practice self-restraint, punishers pursue this path when their emotions are clouded by annoyance and anger at free riders. Governmental solutions Governmental solutions are used when the above conditions are not met (such as a community being larger than the cohesion of its social network). Examples of government regulation include population control, privatization, regulation, and internalizing the externalities. 
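Before turning to the specific approaches below, the logic of "internalizing the externalities" can be sketched with a few lines of arithmetic. The figures here are purely illustrative assumptions, not drawn from the source: a charge equal to the cost an individual imposes on everyone else makes the private calculation match the social one.

# Toy arithmetic for internalizing an externality; all numbers are illustrative assumptions.
price = 10.0      # private benefit of grazing one more animal on the commons
damage = 12.0     # total cost that the extra animal imposes on the shared pasture
herders = 10      # the damage is spread over everyone who uses the commons

felt_cost = damage / herders             # the share of the damage the decision-maker actually bears
print(price - felt_cost)                 # 8.8  -> privately profitable, so the herder adds the animal
print(price - damage)                    # -2.0 -> socially, the extra animal destroys value

fee = damage * (herders - 1) / herders   # a charge equal to the cost imposed on the others
print(price - felt_cost - fee)           # -2.0 -> with the externality internalized, the private incentive disappears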
Population control In Hardin's essay, he proposed that the solution to the problem of overpopulation must be based on "mutual coercion, mutually agreed upon" and result in "relinquishing the freedom to breed". Hardin discussed this topic further in a 1979 book, Managing the Commons, co-written with John A. Baden. He framed this prescription in terms of needing to restrict the "reproductive right", to safeguard all other rights. Several countries have a variety of population control laws in place. In the context of United States policy debates, Hardin advocated restrictions on migration, particularly of non-whites. In a 1991 article, he stated Privatization One solution for some resources is to convert a common good into private property (Coase 1960), giving the new owner an incentive to enforce its sustainability. Libertarians and classical liberals cite the tragedy of the commons as an example of what happens when Lockean property rights to homestead resources are prohibited by a government. They argue that the solution to the tragedy of the commons is to allow individuals to take over the property rights of a resource, that is, to privatize it. In England, this solution was attempted in the Inclosure Acts. According to Karl Marx in Capital, this solution leads to increasing numbers of people being pushed into smaller and smaller pockets of common land which has yet to be privatised, thereby merely displacing and exacerbating the problem while putting an increasing number of people in precarious situations. Economic historian Bob Allen coined the term "Engels' pause" to describe the period from 1790 to 1840, when British working-class wages stagnated and per-capita gross domestic product expanded rapidly during a technological upheaval. Regulation In a typical example, governmental regulations can limit the amount of a common good that is available for use by any individual. Permit systems for extractive economic activities including mining, fishing, hunting, livestock raising, and timber extraction are examples of this approach. Similarly, limits to pollution are examples of governmental intervention on behalf of the commons. This idea is used by the United Nations Moon Treaty, Outer Space Treaty and Law of the Sea Treaty as well as the UNESCO World Heritage Convention (treaty) which involves the international law principle that designates some areas or resources the Common Heritage of Mankind. German historian Joachim Radkau thought Hardin advocated strict management of common goods via increased government involvement or international regulation bodies. An asserted impending "tragedy of the commons" is frequently warned of as a consequence of the adoption of policies which restrict private property and espouse expansion of public property. Giving legal rights of personhood to objects in nature is another proposed solution. The idea of giving land a legal personality is intended to enable the democratic system of the rule of law to allow for prosecution, sanction, and reparation for damage to the earth. For example, this has been put into practice in Ecuador in the form of a constitutional principle known as "Pacha Mama" (Mother Earth). Internalizing externalities Privatization works when the person who owns the property (or rights of access to that property) pays the full price of its exploitation.
As discussed above, negative externalities (negative results, such as air or water pollution, that do not proportionately affect the user of the resource) are often a feature driving the tragedy of the commons. Internalizing the externalities, in other words ensuring that the users of a resource pay for all of the consequences of its use, can provide an alternate solution between privatization and regulation. One example is gasoline taxes, which are intended to cover both the cost of road maintenance and the cost of air pollution. This solution can provide the flexibility of privatization while minimizing the amount of government oversight and overhead that is needed. The mid-way solution One potential mid-way solution is to have co-shared communities in which ownership is held partly by the government and partly by the community. Ownership here refers to the planning, sharing, use, benefit, and supervision of the resources, ensuring that power is not concentrated in only one or two hands. Since the involvement of multiple stakeholders is necessary, responsibilities can be shared among them based on their abilities and capacities in terms of human resources, infrastructure development, and legal expertise. Criticism Commons in historical reality The status of common land in England as mentioned in Lloyd's pamphlet has been widely misunderstood. Millions of acres were "common land", but this did not mean public land open to everybody, a popular fallacy. There was no such thing as ownerless land. Every parcel of "common" land had a legal owner, who was a private person or corporation. The owner was called the lord of the manor (which, like landlord, was a legal term denoting ownership, not aristocratic status). It was true that there were local people, called commoners, defined as those who had a legal right to use his land for some purpose of their own, typically grazing their animals. Certainly their rights were strong, because the lord was not entitled to build on his own land, or fence off any part of it, unless he could prove he had left enough pasture for the commoners. But these individuals were not the general public at large: not everyone in the vicinity was a commoner. Furthermore, the commoners' right to graze the lord's land with their animals was restricted by law, precisely in order to prevent overgrazing. If overgrazing did nevertheless occur, which it sometimes did, it was because of incompetent or weak land management, and not because of the pressure of an unlimited right to graze, which did not exist. Hence Christopher Rodgers said that "Hardin's influential thesis on the 'tragedy of the commons' ... has no application to common land in England and Wales. It is based on a false premise". Rodgers, professor of law at Newcastle University, added: Every productive unit ("manor") had a manorial court; without it, the manor ceased to exist. Manorial courts could fine commoners, and the lord of the manor for that matter, for breaches of customary law, e.g. grazing too many cattle on the land. Customary law varied locally. It could not be altered without the consent of the whole body of the commoners, except by getting an Act of Parliament. By the time of Lloyd's pamphlet (1833) the majority of land in England had been enclosed and had ceased to be common land. That which remained may not have been good agricultural land anyway, or the best managed.
Lloyd takes for granted that common lands were inferior and argues his over-grazing theory to explain it. He does not examine other possible causes e.g. common land was difficult to drain, to keep disease-free, and to use for improved cattle breeding. Likewise, Susan Jane Buck Cox argues that the common land example used to argue this economic concept is on very weak historical ground, and misrepresents what she terms was actually the "triumph of the commons": the successful common usage of land for many centuries. She argues that social changes and agricultural innovation, and not the behaviour of the commoners, led to the demise of the commons. In a similar vein, Carl Dahlman argues that commons were effectively managed to prevent overgrazing. Others Hardin's work is criticised as historically inaccurate in failing to account for the demographic transition, and for failing to distinguish between common property and open access resources. Radical environmentalist Derrick Jensen claims the tragedy of the commons is used as propaganda for private ownership. He says it has been used by the political right wing to hasten the final enclosure of the "common resources" of third world and indigenous people worldwide, as a part of the Washington Consensus. He argues that in true situations, those who abuse the commons would have been warned to desist and if they failed would have punitive sanctions against them. He says that rather than being called "The Tragedy of the Commons", it should be called "the Tragedy of the Failure of the Commons". Marxist geographer David Harvey has a similar criticism: "The dispossession of indigenous populations in North America by 'productive' colonists, for instance, was justified because indigenous populations did not produce value", asking: "Why, for instance, do we not focus in Hardin's metaphor on the individual ownership of the cattle rather than on the pasture as a common?" Some authors, like Yochai Benkler, say that with the rise of the Internet and digitalisation, an economics system based on commons becomes possible again. He wrote in his book The Wealth of Networks in 2006 that cheap computing power plus networks enable people to produce valuable products through non-commercial processes of interaction: "as human beings and as social beings, rather than as market actors through the price system". He uses the term networked information economy to refer to a "system of production, distribution, and consumption of information goods characterized by decentralized individual action carried out through widely distributed, nonmarket means that do not depend on market strategies." He also coined the term commons-based peer production for collaborative efforts based on sharing information. Examples of commons-based peer production are Wikipedia, free and open source software and open-source hardware. Tragedy of the commons has served as a pretext for powerful private companies and/or governments to introduce regulatory agents or outsourcing on less powerful entities or governments, for the exploitation of their natural resources. Powerful companies and governments can easily corrupt and bribe less powerful institutions or governments, to allow them exploit or privatize their resources, which causes more concentration of power and wealth in powerful entities. This phenomenon is known as the resource curse. 
Other criticisms have focused on Hardin's racist and eugenicist views, claiming that his arguments are directed towards forcible population control, particularly for people of color. Comedy of the commons In certain cases, exploiting a resource more may be a good thing. Carol M. Rose, in a 1986 article, discussed the concept of the "comedy of the commons", where the public property in question exhibits "increasing returns to scale" in usage (hence the phrase, "the more the merrier"), in that the more people use the resource, the higher the benefit to each one. Rose cites as examples commerce and group recreational activities. According to Rose, public resources with the "comedic" characteristic may suffer from under-investment rather than over usage. A modern example presented by Garrett Richards in environmental studies is that the issue of excessive carbon emissions can be tackled effectively only when the efforts are directly addressing the issues along with the collective efforts from the world economies. Additionally, the more that nations are willing to collaborate and contribute resources, the higher the chances are for successful technological developments. See also Related concepts , depriving commoners of their ancient rights References Notes Bibliography Angus, I. (2008). "The myth of the tragedy of the commons", Climate & Capitalism (August 25). Foddy, M., Smithson, M., Schneider, S., and Hogg, M. (1999). Resolving social dilemmas. Philadelphia: Psychology Press. . pp. 462, 463 External links The Digital Library of the Commons The Myth of the Tragedy of the Commons by Ian Angus "Global Tragedy of the Commons" by John Hickman and Sarah Bartlett "Tragedy of the Commons Explained with Smurfs" by Ryan Somma Public vs. Private Goods & Tragedy of the Commons On averting the Tragedy of the Commons 1968 introductions Economic inequality Environmental economics Environmental social science concepts Inefficiency in game theory Land use Market failure Metaphors Public commons
Tragedy of the commons
Mathematics,Environmental_science
8,848
43,934,042
https://en.wikipedia.org/wiki/Disk%20Detective
Disk Detective is the first NASA-led and funded collaboration project with Zooniverse. It is NASA's largest crowdsourcing citizen science project, aiming to engage the general public in the search for stars surrounded by dust-rich circumstellar disks, where planets usually dwell and are formed. The project was initially launched by NASA Citizen Science Officer Marc Kuchner, and its principal investigation was later turned over to Steven Silverberg. Details Disk Detective was launched in January 2014, and was expected to continue until 2017. In April 2019, Disk Detective uploaded its partly classified subjects again because Zooniverse stopped supporting its old project platform; this was completed in May 2019. The project team began working on Disk Detective 2.0, which was then launched on May 24, 2020, using Zooniverse's new platform. The project invites the public to search through images captured by NASA's Wide-field Infrared Survey Explorer (WISE) and other sky surveys. Disk Detective 1.0 compared images from the WISE mission to the Two Micron All Sky Survey (2MASS), the Digitized Sky Survey (DSS) and the Sloan Digital Sky Survey (SDSS). Version 2.0 compares WISE images to 2MASS, the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS), Australia's SkyMapper telescope, and the unblurred coadds of WISE imaging (unWISE). The images in Disk Detective have all been pre-selected to be extra bright at wavelengths where circumstellar dust emits thermal radiation; they span mid-infrared, near-infrared and optical wavelengths. Disks are not the only heavenly objects that appear bright at infrared wavelengths; active galactic nuclei, galaxies, asteroids and interstellar dust clouds also emit at these wavelengths. Computer algorithms cannot distinguish the difference, so it is necessary to examine all images by "eye" to make sure that the selected candidates are stars with disks, and not other celestial objects. After the Disk Detective science team's initial and subsequent discoveries of several Peter Pan disks (M dwarf primordial gas-rich circumstellar disk systems that retain their gas 2 to 10 times longer than other disks), research began into how these unusual systems fit into disk development. On September 29, 2022, NASA announced version 2.1 of the project, releasing new data containing thousands of images of nearby stars located in young star-forming regions and providing a better view of "extreme" debris disks (circumstellar disks with brighter than expected luminosity) in the galactic plane. The 2.1 dataset targets stars with brightness at a wavelength of 12 μm in an effort to discover more Peter Pan disks. Classification At the Disk Detective website, the images are presented in animated forms called flip books. Each image of the flip book is formatted to focus on the subject of interest within a series of circles and crosshairs. Website visitors, whether or not they are registered member users of Zooniverse, examine the flip book images and classify the target subjects based on simple criteria. Disk Detective 2.0 elimination criteria include whether the subject "moves" off the center crosshairs in 2MASS images only, whether it moves off the crosshairs in two or more images, whether the subject is not round in Pan-STARRS, SkyMapper, or 2MASS images, whether it becomes extended beyond the outer circle in WISE images, and whether two or more images show objects between the inner and outer circles.
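These elimination criteria amount to a simple decision rule. The sketch below expresses that rule as a Python function purely for illustration; the flag names and the dictionary layout are hypothetical and are not the project's actual data fields or pipeline.

# Hypothetical sketch of the Disk Detective 2.0 elimination criteria described above.
# The field names are invented for illustration; they are not the project's real data model.
def is_good_candidate(subject):
    eliminated = (
        subject["moves_off_crosshairs_in_2mass_only"]
        or subject["moves_off_crosshairs_in_two_or_more_images"]
        or subject["not_round_in_panstarrs_skymapper_or_2mass"]
        or subject["extended_beyond_outer_circle_in_wise"]
        or subject["objects_between_circles_in_two_or_more_images"]
    )
    return not eliminated

if __name__ == "__main__":
    example = {
        "moves_off_crosshairs_in_2mass_only": False,
        "moves_off_crosshairs_in_two_or_more_images": False,
        "not_round_in_panstarrs_skymapper_or_2mass": False,
        "extended_beyond_outer_circle_in_wise": False,
        "objects_between_circles_in_two_or_more_images": False,
    }
    print(is_good_candidate(example))  # True -> would be passed on as a "good candidate"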
The ideal target is classified as a "good candidate," and is further vetted by the advanced research group into a list of "debris disk of interest" (DDOI) candidates. Particular interest is paid to good candidates that have two or more images where objects other than the subject are present within the inner circle only. The selected disk candidates will eventually become the future targets for NASA's Hubble Space Telescope and its successor, the James Webb Space Telescope. They will also be the topic for future publications in scientific literature. Seeking objects The disks that NASA's scientists at the Goddard Space Flight Center aim to find are debris disks, which are older than 5 million years; and young stellar object (YSO) disks, which are younger than 5 million years. Advanced user group Volunteers who have registered as citizen scientists with Zooniverse can join an exclusive group on the Disk Detective project, called "advanced users" or "super users," after they have done 300 classifications. Advanced users might then further vet candidates marked as "good," compare candidate subjects with literature, or analyze follow-up data. This advanced user group is similar to other groups that have formed in citizen science projects, such as the Peas Corps in Galaxy Zoo. Discoveries The Disk Detective project discovered the first example of a Peter Pan disk. At the 235th meeting of the American Astronomical Society the discovery of four new Peter Pan disks was presented. Three objects are high-probability members of the Columba and Carina stellar associations. The fourth object has an intermediate likelihood of being part of a moving group. All four objects are young M dwarfs. The project has also discovered the first debris disk with a white dwarf companion (HD 74389) and a new kind of M dwarf disk (WISE J080822.18-644357.3) in a moving group. The project found 37 new disks (including HD 74389) and four Be stars in the first paper and 213 newly-identified disk candidates in the third paper. Together with WISE J080822.18-644357.3, the Disk Detective project found 251 new disks or disk candidates. The third paper also found HD 150972 (WISEA J164540.79-310226.6) as a likely member of the Scorpius–Centaurus moving group, 12 candidates that are co-moving binaries and 31 that are closer than 125 parsecs, making them possible targets for direct imaging of exoplanets. Additionally, the project published the discovery of a nearby young brown dwarf with a warm class-II type circumstellar disk, WISEA J120037.79−784508.3 (W1200−7845), located in the ε Chamaeleontis association. Located 102 parsecs (~333 light-years) from the Sun, it lies within the solar neighborhood, making it ideal for study, since brown dwarfs are very faint due to their low masses of about 13-80 MJ. It is therefore close enough to observe in greater detail with large telescope arrays or space telescopes. W1200-7845 is also very young, with measurements putting it at about 3.7 million years old, meaning that, along with its relatively close proximity, it could serve as a benchmark for future studies of brown dwarf system formation. A study with JWST MIRI found that the disk around WISEA J044634.16-262756.1B, a source first discovered by the Disk Detective project, is carbon-rich. The study found clear evidence that the disk has long-lived primordial gas; 14 molecules were found within the disk, many of them hydrocarbons.
False positive rate and applications The project made estimates of the number of high-quality disk candidates in AllWISE and lower-limit false-positive rates for several catalogs, based on classification false-positive rates, follow-up imaging and literature review. Out of the 149,273 subjects on the Disk Detective website, 7.9±0.2% are likely candidates: 90.2% of the subjects were eliminated by website evaluation, 1.35% by literature review and 0.52% by high-resolution follow-up imaging (Robo-AO + Dupont/Retrocam). From this result, AllWISE might contain ~21,600 high-quality disk candidates, and 4-8% of the disk candidates from high-quality surveys might show background objects in high-resolution images that are bright enough to affect the infrared excess. The project also has a database that is available through the Mikulski Archive for Space Telescopes (MAST). It contains the "goodFraction", describing how often a source was voted as a good source on the website, as well as other information about the source, such as comments from the science team, machine-learned classification, cross-matched catalog information and SED fits. A group at MIT used the Disk Detective classifications to train a machine-learning system. They found that their machine-learning system agreed with user identifications of debris disks 97% of the time. The group has found 367 promising candidates for follow-up observations with this method. See also Exoplanet Kuiper belt Planetesimal Protoplanetary disk T Tauri star WISEA J120037.79-784508.3 Zooniverse projects: Asteroid Zoo Backyard Worlds: Planet 9 Backyard Worlds Galaxy Zoo The Milky Way Project Old Weather Planet Hunters SETILive References External links NASA's Disk Detective page Disk Detective official website Disk Detective Facebook page Disk Detective Twitter page Disk Detective project blog Astronomy websites Astronomy projects Citizen science Human-based computation Internet properties established in 2014
Disk Detective
Astronomy,Technology
1,914
44,927,719
https://en.wikipedia.org/wiki/C8H7NO4
{{DISPLAYTITLE:C8H7NO4}} The molecular formula C8H7NO4 (molar mass: 181.15 g/mol) may refer to: 3-Aminophthalic acid Homoquinolinic acid (HQA) (2-Nitrophenyl)acetic acid Uvitonic acid, or 6-methyl-2,4-pyridinedicarboxylic acid Molecular formulas
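As a quick sanity check, the quoted molar mass follows directly from standard atomic weights; the short sketch below recomputes it (atomic weights rounded to three decimal places).

# Recompute the molar mass of C8H7NO4 from standard atomic weights (rounded values).
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
composition = {"C": 8, "H": 7, "N": 1, "O": 4}
molar_mass = sum(atomic_weight[element] * count for element, count in composition.items())
print(round(molar_mass, 2))  # 181.15 g/mol, matching the value quoted above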
C8H7NO4
Physics,Chemistry
96