| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
6,907,585 | https://en.wikipedia.org/wiki/Ultraviolet%20germicidal%20irradiation | Ultraviolet germicidal irradiation (UVGI) is a disinfection technique employing ultraviolet (UV) light, particularly UV-C (180–280 nm), to kill or inactivate microorganisms. UVGI primarily inactivates microbes by damaging their genetic material, thereby inhibiting their capacity to carry out vital functions.
The use of UVGI extends to an array of applications, encompassing food, surface, air, and water disinfection. UVGI devices can inactivate microorganisms including bacteria, viruses, fungi, molds, and other pathogens. Recent studies have substantiated the ability of UV-C light to inactivate SARS-CoV-2, the strain of coronavirus that causes COVID-19.
UV-C wavelengths demonstrate varied germicidal efficacy and effects on biological tissue. Many germicidal lamps, such as low-pressure mercury (LP-Hg) lamps with peak emissions around 254 nm, emit UV wavelengths that can be hazardous to humans. As a result, UVGI systems have been primarily limited to applications where people are not directly exposed, including hospital surface disinfection, upper-room UVGI, and water treatment. More recently, the application of wavelengths between 200 and 235 nm, often referred to as far-UVC, has gained traction for surface and air disinfection. These wavelengths are regarded as much safer due to their significantly reduced penetration into human tissue. Moreover, their efficacy relies on the fact that, in addition to the DNA damage associated with the formation of pyrimidine dimers, they cause substantial DNA photoionization, leading to oxidative damage.
Notably, UV-C light is virtually absent in sunlight reaching the Earth's surface due to the absorptive properties of the ozone layer within the atmosphere.
History
Origins of UV germicidal action
The development of UVGI traces back to 1878 when Arthur Downes and Thomas Blunt found that sunlight, particularly its shorter wavelengths, hindered microbial growth. Expanding upon this work, Émile Duclaux, in 1885, identified variations in sunlight sensitivity among different bacterial species. A few years later, in 1890, Robert Koch demonstrated the lethal effect of sunlight on Mycobacterium tuberculosis, hinting at UVGI's potential for combating diseases like tuberculosis.
Subsequent studies further defined the wavelengths most efficient for germicidal inactivation. In 1892, it was noted that the UV segment of sunlight had the most potent bactericidal effect. Research conducted in the early 1890s demonstrated the superior germicidal efficacy of UV-C compared to UV-A and UV-B.
The mutagenic effects of UV were first unveiled in a 1914 study that observed metabolic changes in Bacillus anthracis upon exposure to sublethal doses of UV. Frederick Gates, in the late 1920s, offered the first quantitative bactericidal action spectra for Staphylococcus aureus and Bacillus coli, noting peak effectiveness at 265 nm. This matched the absorption spectrum of nucleic acids, hinting at DNA damage as the key factor in bacterial inactivation. This understanding was solidified by the 1960s through research demonstrating the ability of UV-C to form thymine dimers, leading to microbial inactivation. These early findings collectively laid the groundwork for modern UVGI as a disinfection tool.
UVGI for air disinfection
The utilization of UVGI for air disinfection began in earnest in the mid-1930s. William F. Wells demonstrated in 1935 that airborne infectious organisms, specifically aerosolized B. coli exposed to 254 nm UV, could be rapidly inactivated. This built upon earlier theories of infectious droplet nuclei transmission put forth by Carl Flügge and Wells himself. Prior to this, UV radiation had been studied predominantly in the context of liquid or solid media, rather than airborne microbes.
Shortly after Wells' initial experiments, high-intensity UVGI was employed to disinfect a hospital operating room at Duke University in 1936. The method proved a success, reducing postoperative wound infections from 11.62% without the use of UVGI to 0.24% with the use of UVGI. Soon, this approach was extended to other hospitals and infant wards using UVGI "light curtains", designed to prevent respiratory cross-infections, with noticeable success.
Adjustments in the application of UVGI saw a shift from "light curtains" to upper-room UVGI, confining germicidal irradiation above human head level. Despite its dependency on good vertical air movement, this approach yielded favorable outcomes in preventing cross-infections. This was exemplified by Wells' successful usage of upper-room UVGI between 1937 and 1941 to curtail the spread of measles in suburban Philadelphia day schools. His study found that 53.6% of susceptibles in schools without UVGI became infected, while only 13.3% of susceptibles in schools with UVGI were infected.
Richard L. Riley, initially a student of Wells, continued the study of airborne infection and UVGI throughout the 1950s and 60s, conducting significant experiments in a Veterans Hospital TB ward. Riley successfully demonstrated that UVGI could efficiently inactivate airborne pathogens and prevent the spread of tuberculosis.
Despite initial successes, the use of UVGI declined in the second half of the 20th century due to various factors, including a rise in alternative infection control and prevention methods, inconsistent efficacy results, and concerns regarding its safety and maintenance requirements. However, recent events such as the rise of multiple drug-resistant bacteria and the COVID-19 pandemic have renewed interest in UVGI for air disinfection.
UVGI for water treatment
Using UV light for disinfection of drinking water dates back to 1910 in Marseille, France. The prototype plant was shut down after a short time due to poor reliability. In 1955, UV water treatment systems were applied in Austria and Switzerland; by 1985 about 1,500 plants were employed in Europe. In 1998 it was discovered that protozoa such as cryptosporidium and giardia were more vulnerable to UV light than previously thought; this opened the way to wide-scale use of UV water treatment in North America. By 2001, over 6,000 UV water treatment plants were operating in Europe.
Over time, UV costs have declined as researchers develop and use new UV methods to disinfect water and wastewater. Several countries have published regulations and guidance for the use of UV to disinfect drinking water supplies, including the US and the UK.
Method of operation
UV light is electromagnetic radiation with wavelengths shorter than visible light but longer than X-rays. UV is categorised into several wavelength ranges, with short-wavelength UV (UV-C) considered "germicidal UV". Wavelengths between about 200 nm and 300 nm are strongly absorbed by nucleic acids. The absorbed energy can result in defects including pyrimidine dimers. These dimers can prevent replication or can prevent the expression of necessary proteins, resulting in the death or inactivation of the organism. Recently, it has been shown that these dimers are fluorescent.
Mercury-based lamps operating at low vapor pressure emit UV light at the 253.7 nm line.
Ultraviolet light-emitting diode (UV-C LED) lamps emit UV light at selectable wavelengths between 255 and 280 nm.
Pulsed-xenon lamps emit UV light across the entire UV spectrum with a peak emission near 230 nm.
This process is similar to, but stronger than, the effect of longer wavelengths (UV-B) producing sunburn in humans. Microorganisms have less protection against UV and cannot survive prolonged exposure to it.
A UVGI system is designed to expose environments such as water tanks, rooms and forced air systems to germicidal UV. Exposure comes from germicidal lamps that emit germicidal UV at the correct wavelength, thus irradiating the environment. The forced flow of air or water through this environment ensures exposure of that air or water.
Effectiveness
The effectiveness of germicidal UV depends on the UV dose, i.e. how much UV light reaches the microbe (measured as radiant exposure), and on how susceptible the microbe is to the given wavelength(s) of UV light, as described by the germicidal effectiveness curve.
UV dose
The UV dose is measured in light energy per area, i.e. radiant exposure or fluence. The fluence a microbe receives is the product of the light intensity (irradiance) and the time of exposure, according to:
UV dose (μJ/cm2) = UV intensity (μW/cm2) × exposure time (seconds)
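As a minimal illustration of this relationship, the Python sketch below multiplies a measured irradiance by an exposure time to obtain the fluence in the units used above; the numbers are hypothetical examples, not figures from any particular system.

```python
def uv_dose_uj_per_cm2(irradiance_uw_per_cm2: float, exposure_s: float) -> float:
    """Radiant exposure (fluence) = irradiance x time.

    irradiance_uw_per_cm2 : UV irradiance at the microbe, in uW/cm^2
    exposure_s            : exposure time, in seconds
    Returns the UV dose in uJ/cm^2 (1 uW = 1 uJ/s).
    """
    return irradiance_uw_per_cm2 * exposure_s


# Hypothetical example: 100 uW/cm^2 held for 30 s delivers 3,000 uJ/cm^2 (3 mJ/cm^2).
print(uv_dose_uj_per_cm2(100.0, 30.0))
```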
In turn, the irradiance depends on the brightness (radiant intensity, W/sr) of the UV source, the distance between the UV source and the microbe, the attenuation of filters (e.g. fouled glass) in the light path, the attenuation of the medium (e.g. microbes in turbid water), the presence of particles or objects that can shield the microbes from UV, and the presence of reflectors that can direct the same UV light through the medium multiple times. Additionally, if the microbes are not free-flowing, such as in a biofilm, they will block each other from irradiation.
The U.S. Environmental Protection Agency (EPA) published UV dosage guidelines for water treatment applications in 1986. UV dose is difficult to measure directly, but it can be estimated from:
Flow rate (contact time)
Transmittance (light reaching the target)
Turbidity (cloudiness)
Lamp age or fouling or outages (reduction in UV intensity)
Bulbs require periodic cleaning and replacement to ensure effectiveness. The lifetime of germicidal UV bulbs varies depending on design. The material that the bulb is made of can also absorb some of the germicidal rays, and lamp cooling under airflow can lower UV output. The UV dose should be calculated using the end-of-lamp-life output (EOL is specified as the number of hours at which the lamp is expected to reach 80% of its initial UV output). Some shatter-proof lamps are coated with a fluorinated ethylene polymer to contain glass shards and mercury in case of breakage; this coating reduces UV output by as much as 20%.
UV source intensity is sometimes specified as irradiance at a distance of 1 meter, which can be easily converted to radiant intensity. UV intensity is inversely proportional to the square of the distance, so it decreases at longer distances and, conversely, increases rapidly at distances shorter than 1 m. In the above formula, the UV intensity must always be adjusted for distance unless the UV dose is calculated at exactly 1 m from the lamp. The UV dose should be calculated at the furthest distance from the lamp, on the periphery of the target area. Increases in fluence can be achieved by using reflection, such that the same light passes through the medium several times before being absorbed. Aluminum has the highest UV reflectivity among common metals and is recommended for UV applications.
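The distance adjustment described above can be sketched as follows, assuming the lamp is far enough away to be treated as a point source so that the inverse-square approximation holds; the reference irradiance, distance, and exposure time are hypothetical examples.

```python
def irradiance_at_distance(irradiance_at_1m_uw_cm2: float, distance_m: float) -> float:
    """Scale a 1 m reference irradiance using the inverse-square law.

    Only valid where the lamp behaves approximately as a point source;
    close to a long tubular lamp the fall-off is slower than 1/d^2.
    """
    return irradiance_at_1m_uw_cm2 / distance_m ** 2


# Hypothetical example: 50 uW/cm^2 measured at 1 m becomes 12.5 uW/cm^2 at 2 m,
# the distance to the periphery of the target area, where the dose should be computed.
periphery_irradiance = irradiance_at_distance(50.0, 2.0)
print(periphery_irradiance * 60.0)  # dose in uJ/cm^2 for a 60 s exposure
```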
In static applications the exposure time can be as long as needed for an effective UV dose to be reached. In waterflow/airflow disinfection, exposure time can be increased by increasing the illuminated volume, decreasing the fluid speed, or recirculating the air or water repeatedly through the illuminated section. This ensures multiple passes so that the UV is effective against the highest number of microorganisms and will irradiate resistant microorganisms more than once to break them down.
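These exposure-time levers for a flow-through reactor can be sketched as below (all values are hypothetical): residence time per pass is the illuminated volume divided by the flow rate, and recirculating the fluid multiplies the number of passes.

```python
def residence_time_s(illuminated_volume_l: float, flow_rate_l_per_min: float) -> float:
    """Mean time spent in the illuminated section during one pass, in seconds."""
    return illuminated_volume_l / flow_rate_l_per_min * 60.0


def dose_per_pass_mj_cm2(avg_irradiance_mw_cm2: float, residence_s: float) -> float:
    """Average fluence delivered in a single pass, in mJ/cm^2."""
    return avg_irradiance_mw_cm2 * residence_s


# Hypothetical example: a 2 L illuminated volume, 10 L/min flow and 5 mW/cm^2 average
# irradiance give 12 s per pass; recirculating three times triples the delivered dose.
t_pass = residence_time_s(2.0, 10.0)
print(t_pass, 3 * dose_per_pass_mj_cm2(5.0, t_pass))  # 12.0 s, 180 mJ/cm^2
```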
Inactivation of microorganisms
Microbes are more susceptible to certain wavelengths of UV light, a relationship described by the germicidal effectiveness curve. For E. coli, the most effective UV light has a wavelength of around 265 nm; this holds for most bacteria and does not change significantly for other microbes. Dosages for a 90% kill rate of most bacteria and viruses range between 2,000 and 8,000 μJ/cm2. Larger parasites such as Cryptosporidium require a lower dose for inactivation. As a result, the US EPA has accepted UV disinfection as a method for drinking water plants to obtain Cryptosporidium, Giardia or virus inactivation credits. For example, for a 90% reduction of Cryptosporidium, a minimum dose of 2,500 μW·s/cm2 is required based on EPA's 2006 guidance manual.
"Sterilization" is often misquoted as being achievable. While it is theoretically possible in a controlled environment, it is very difficult to prove and the term "disinfection" is generally used by companies offering this service as to avoid legal reprimand. Specialist companies will often advertise a certain log reduction, e.g., 6-log reduction or 99.9999% effective, instead of sterilization. This takes into consideration a phenomenon known as light and dark repair (photoreactivation and base excision repair, respectively), in which a cell can repair DNA that has been damaged by UV light.
Safety
Skin and eye safety
Many UVGI systems use UV wavelengths that can be harmful to humans, resulting in both immediate and long-term effects. Acute impacts on the eyes and skin can include conditions such as photokeratitis (often termed "snow blindness") and erythema (reddening of the skin), while chronic exposure may heighten the risk of skin cancer.
However, the safety and effects of UV vary extensively by wavelength, implying that not all UVGI systems pose the same level of hazards. Humans typically encounter UV light in the form of solar UV, which comprises significant portions of UV-A and UV-B, but excludes UV-C. The UV-B band, able to penetrate deep into living, replicating tissue, is recognized as the most damaging and carcinogenic.
Many standard UVGI systems, such as low-pressure mercury (LP-Hg) lamps, produce broad-band emissions in the UV-C range and also peaks in the UV-B band. This often makes it challenging to attribute damaging effects to a specific wavelength. Nevertheless, longer wavelengths in the UV-C band can cause conditions like photokeratitis and erythema. Hence, many UVGI systems are used in settings where direct human exposure is limited, such as with upper-room UVGI air cleaners and water disinfection systems.
Precautions are commonly implemented to protect users of these UVGI systems, including:
Warning labels: Labels alert users to the dangers of UV light.
Interlocking systems: Shielded systems, such as closed water tanks or air circulation units, often have interlocks that automatically shut off the UV lamps if the system is opened for human access. Clear viewports that block UV-C are also available.
Personal protective equipment: Most protective eyewear, particularly those compliant with ANSI Z87.1, block UV-C. Similarly, clothing, plastics, and most types of glass (excluding fused silica) effectively impede UV-C.
Since the early 2010s there has been growing interest in the far-UVC wavelengths of 200-235 nm for whole-room exposure. These wavelengths are generally considered safer due to their limited penetration depth caused by increased protein absorption. This feature confines far-UVC exposure to the superficial layers of tissue, such as the outer layer of dead skin (the stratum corneum) and the tear film and surface cells of the cornea. As these tissues do not contain replicating cells, damage to them poses less carcinogenic risk. It has also been demonstrated that far-UVC does not cause erythema or damage to the cornea at levels many times that of solar UV or conventional 254 nm UVGI systems.
Exposure limits
Exposure limits for UV, particularly the germicidal UV-C range, have evolved over time due to scientific research and changing technology. The American Conference of Governmental Industrial Hygienists (ACGIH) and the International Commission on Non-Ionizing Radiation Protection (ICNIRP) have set exposure limits to safeguard against both immediate and long-term effects of UV exposure. These limits, also referred to as Threshold Limit Values (TLVs), form the basis for emission limits in product safety standards.
The UV-C photobiological spectral band is defined as 100–280 nm, with limits currently applying only from 180 to 280 nm. This reflects concerns about acute damage such as erythema and photokeratitis as well as long-term delayed effects like photocarcinogenesis. However, with the increased safety evidence surrounding UV-C for germicidal applications, the existing ACGIH TLVs were revised in 2022.
The TLVs for the 222 nm UV-C wavelength (peak emissions from KrCl excimer lamps), following the 2022 revision, are now 161 mJ/cm2 for eye exposure and 479 mJ/cm2 for skin exposure over an eight-hour period. For the 254 nm UV wavelength, the updated exposure limit is now set at 6 mJ/cm2 for eyes and 10 mJ/cm2 for skin.
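Given a TLV expressed as an eight-hour dose and a measured irradiance at the exposed person, the maximum permissible exposure time follows from dose = irradiance × time. The sketch below uses the limits quoted above; the 1 μW/cm2 irradiance is a hypothetical example, not a typical installation value.

```python
def max_exposure_time_s(tlv_mj_cm2: float, irradiance_uw_cm2: float) -> float:
    """Longest daily exposure (in seconds) keeping irradiance x time below the TLV.

    tlv_mj_cm2        : threshold limit value over an 8-hour day, in mJ/cm^2
    irradiance_uw_cm2 : irradiance at the exposed skin or eyes, in uW/cm^2
    """
    return (tlv_mj_cm2 * 1000.0) / irradiance_uw_cm2  # uJ/cm^2 over uW/cm^2 = seconds


# Hypothetical example at 1 uW/cm^2:
print(max_exposure_time_s(161.0, 1.0))  # 222 nm, eyes: 161,000 s (well over 8 hours)
print(max_exposure_time_s(479.0, 1.0))  # 222 nm, skin: 479,000 s
print(max_exposure_time_s(6.0, 1.0))    # 254 nm, eyes: 6,000 s (about 1.7 hours)
```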
Indoor air chemistry
UV can influence indoor air chemistry, leading to the formation of ozone and other potentially harmful pollutants, including particulate pollution. This occurs primarily through photolysis, in which UV photons break molecules apart, producing reactive radicals such as OH. These radicals can react with volatile organic compounds (VOCs) to produce oxidized VOCs (OVOCs) and secondary organic aerosols (SOA).
Wavelengths below 242 nm can also generate ozone, which not only contributes to OVOCs and SOA formation but can be harmful in itself. When inhaled in high quantities, these pollutants can irritate the eyes and respiratory system and exacerbate conditions like asthma.
The specific pollutants produced depend on the initial air chemistry and the UV source power and wavelength. To control ozone and other indoor pollutants, ventilation and filtration methods are used, diluting airborne pollutants and maintaining indoor air quality.
Polymer damage
UVC radiation is able to break chemical bonds. This leads to rapid aging of plastics and other materials, including insulation and gaskets. Plastics sold as "UV-resistant" are tested only for the lower-energy UVB, since UVC does not normally reach the surface of the Earth. When UV is used near plastic, rubber, or insulation, these materials may be protected by metal tape or aluminum foil.
Applications
Air disinfection
UVGI can be used to disinfect air with prolonged exposure. In the 1930s and 40s, an experiment in public schools in Philadelphia showed that upper-room ultraviolet fixtures could significantly reduce the transmission of measles among students.
UV and violet light are able to neutralize the infectivity of SARS-CoV-2. Viral titers usually found in the sputum of COVID-19 patients are completely inactivated by levels of UV-A and UV-B irradiation similar to those experienced from natural sun exposure. This finding suggests that the reduced incidence of SARS-CoV-2 in the summer may be, in part, due to the neutralizing activity of solar UV irradiation.
Various UV-emitting devices can be used for SARS-CoV-2 disinfection, and these devices may help in reducing the spread of infection. SARS-CoV-2 can be inactivated by a wide range of UVC wavelengths, and the wavelength of 222 nm provides the most effective disinfection performance.
Disinfection is a function of UV intensity and time. For this reason, it is in theory not as effective on moving air, or when the lamp is perpendicular to the flow, as exposure times are dramatically reduced. However, numerous professional and scientific publications have indicated that the overall effectiveness of UVGI actually increases when used in conjunction with fans and HVAC ventilation, which facilitate whole-room circulation that exposes more air to the UV source. Air purification UVGI systems can be free-standing units with shielded UV lamps that use a fan to force air past the UV light. Other systems are installed in forced air systems so that the circulation for the premises moves microorganisms past the lamps. Key to this form of sterilization is placement of the UV lamps and a good filtration system to remove the dead microorganisms. For example, forced air systems by design impede line-of-sight, thus creating areas of the environment that will be shaded from the UV light. However, a UV lamp placed at the coils and drain pans of cooling systems will keep microorganisms from forming in these naturally damp places.
Water disinfection
Ultraviolet disinfection of water is a purely physical, chemical-free process. Even parasites such as Cryptosporidium or Giardia, which are extremely resistant to chemical disinfectants, are efficiently reduced. UV can also be used to remove chlorine and chloramine species from water; this process, called photolysis, requires a higher dose than normal disinfection. The dead microorganisms are not removed from the water, and UV disinfection does not remove dissolved organics, inorganic compounds or particles in the water. The world's largest water disinfection plant treats drinking water for New York City. The Catskill-Delaware Water Ultraviolet Disinfection Facility, commissioned on 8 October 2013, incorporates a total of 56 energy-efficient UV reactors treating up to 2.2 billion US gallons (about 8.3 million m³) per day.
Ultraviolet can also be combined with ozone or hydrogen peroxide to produce hydroxyl radicals to break down trace contaminants through an advanced oxidation process.
It used to be thought that UV disinfection was more effective for bacteria and viruses, which have more-exposed genetic material, than for larger pathogens that have outer coatings or that form cyst states (e.g., Giardia) that shield their DNA from UV light. However, it was later discovered that ultraviolet radiation can be somewhat effective for treating the microorganism Cryptosporidium. The findings resulted in the use of UV radiation as a viable method to treat drinking water. Giardia in turn has been shown to be very susceptible to UV-C when the tests were based on infectivity rather than excystation. Protists have been found to survive high UV-C doses in the sense of remaining viable, yet even low doses leave them unable to reproduce and therefore non-infectious.
UV water treatment devices can be used for well water and surface water disinfection. UV treatment compares favourably with other water disinfection systems in terms of cost, labour and the need for technically trained personnel for operation. Water chlorination treats larger organisms and offers residual disinfection, but these systems are expensive because they need special operator training and a steady supply of a potentially hazardous material. Finally, boiling of water is the most reliable treatment method but it demands labour and imposes a high economic cost. UV treatment is rapid and, in terms of primary energy use, approximately 20,000 times more efficient than boiling.
UV disinfection is most effective for treating high-clarity, purified water, such as reverse osmosis or distilled water. Suspended particles are a problem because microorganisms buried within particles are shielded from the UV light and pass through the unit unaffected. However, UV systems can be coupled with a pre-filter to remove those larger organisms that would otherwise pass through the UV system unaffected. The pre-filter also clarifies the water to improve light transmittance and therefore UV dose throughout the entire water column. Another key factor of UV water treatment is the flow rate—if the flow is too high, water will pass through without sufficient UV exposure; if the flow is too low, heat may build up and damage the UV lamp. A disadvantage of UVGI is that while water treated by chlorination is resistant to reinfection (until the chlorine off-gasses), UVGI-treated water is not, so it must be transported or delivered in such a way as to avoid reinfection.
A 2006 project at University of California, Berkeley produced a design for inexpensive water disinfection in resource deprived settings. The project was designed to produce an open source design that could be adapted to meet local conditions. In a somewhat similar proposal in 2014, Australian students designed a system using potato chip (crisp) packet foil to reflect solar UV radiation into a glass tube that disinfects water without power.
Modeling
Sizing of a UV system is affected by three variables: flow rate, lamp power, and UV transmittance in the water. Manufacturers typically develop sophisticated computational fluid dynamics (CFD) models validated with bioassay testing. This involves testing the UV reactor's disinfection performance with either MS2 or T1 bacteriophages at various flow rates, UV transmittances, and power levels in order to develop a regression model for system sizing. Such validation is a requirement for all public water systems in the United States per the EPA UV manual.
The flow profile is produced from the chamber geometry, flow rate, and particular turbulence model selected. The radiation profile is developed from inputs such as water quality, lamp type (power, germicidal efficiency, spectral output, arc length), and the transmittance and dimension of the quartz sleeve. Proprietary CFD software simulates both the flow and radiation profiles. Once the 3D model of the chamber is built, it is populated with a grid or mesh that comprises thousands of small cubes.
Points of interest—such as at a bend, on the quartz sleeve surface, or around the wiper mechanism—use a higher resolution mesh, whilst other areas within the reactor use a coarse mesh. Once the mesh is produced, hundreds of thousands of virtual particles are "fired" through the chamber. Each particle has several variables of interest associated with it, and the particles are "harvested" after the reactor. Discrete phase modeling produces delivered dose, head loss, and other chamber-specific parameters.
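The discrete-phase idea can be illustrated with a highly simplified sketch: each virtual particle accumulates dose as the product of the local fluence rate and the time step along its track. The toy fluence-rate function and particle tracks below are stand-ins for what the CFD flow and radiation models would actually supply.

```python
from typing import Callable, Iterable, List, Tuple

Point = Tuple[float, float, float]


def particle_dose(track: Iterable[Point],
                  fluence_rate: Callable[[Point], float],
                  dt_s: float) -> float:
    """Accumulate UV dose (mJ/cm^2) along one particle track.

    track        : particle positions, one per time step
    fluence_rate : local fluence rate (mW/cm^2) at a position, normally taken
                   from the lamp/radiation model
    dt_s         : time step between positions, in seconds
    """
    return sum(fluence_rate(p) * dt_s for p in track)


def toy_fluence_rate(p: Point) -> float:
    """Illustrative field: fluence rate falls off with radial distance from a lamp on the z-axis."""
    x, y, _ = p
    r = max((x * x + y * y) ** 0.5, 0.01)
    return 1.0 / r  # mW/cm^2, purely illustrative


# Two hypothetical tracks, one passing close to the lamp and one further away;
# the spread of per-particle doses (not just the mean) determines the delivered dose.
near = [(0.02, 0.0, 0.01 * i) for i in range(100)]
far = [(0.10, 0.0, 0.01 * i) for i in range(100)]
print(particle_dose(near, toy_fluence_rate, dt_s=0.1),
      particle_dose(far, toy_fluence_rate, dt_s=0.1))
```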
When the modeling phase is complete, selected systems are validated by a professional third party, which provides oversight and determines how closely the model predicts the reality of system performance. System validation uses non-pathogenic surrogates such as MS2 phage or Bacillus subtilis to determine the Reduction Equivalent Dose (RED) capability of the reactors. Most systems are validated to deliver 40 mJ/cm2 within an envelope of flow and transmittance.
To validate effectiveness in drinking water systems, the method described in the EPA UV guidance manual is typically used by US water utilities, whilst Europe has adopted Germany's DVGW 294 standard. For wastewater systems, the NWRI/AwwaRF Ultraviolet Disinfection Guidelines for Drinking Water and Water Reuse protocols are typically used, especially in wastewater reuse applications.
Wastewater treatment
Ultraviolet in sewage treatment is commonly replacing chlorination. This is in large part because of concerns that reaction of the chlorine with organic compounds in the waste water stream could synthesize potentially toxic and long lasting chlorinated organics and also because of the environmental risks of storing chlorine gas or chlorine containing chemicals. Individual wastestreams to be treated by UVGI must be tested to ensure that the method will be effective due to potential interferences such as suspended solids, dyes, or other substances that may block or absorb the UV radiation. According to the World Health Organization, "UV units to treat small batches (1 to several liters) or low flows (1 to several liters per minute) of water at the community level are estimated to have costs of US$20 per megaliter, including the cost of electricity and consumables and the annualized capital cost of the unit."
Large-scale urban UV wastewater treatment is performed in cities such as Edmonton, Alberta. The use of ultraviolet light has now become standard practice in most municipal wastewater treatment processes. Effluent is now starting to be recognized as a valuable resource, not a problem that needs to be dumped. Many wastewater facilities are being renamed as water reclamation facilities, whether the wastewater is discharged into a river, used to irrigate crops, or injected into an aquifer for later recovery. Ultraviolet light is now being used to ensure water is free from harmful organisms.
Aquarium and pond
Ultraviolet sterilizers are often used to help control unwanted microorganisms in aquaria and ponds. UV irradiation ensures that pathogens cannot reproduce, thus decreasing the likelihood of a disease outbreak in an aquarium.
Aquarium and pond sterilizers are typically small, with fittings for tubing that allows the water to flow through the sterilizer on its way from a separate external filter or water pump. Within the sterilizer, water flows as close as possible to the ultraviolet light source. Water pre-filtration is critical as water turbidity lowers UV-C penetration.
Many of the better UV sterilizers have long dwell times and limit the space between the UV-C source and the inside wall of the UV sterilizer device.
Laboratory hygiene
UVGI is often used to disinfect equipment such as safety goggles, instruments, pipettors, and other devices. Lab personnel also disinfect glassware and plasticware this way. Microbiology laboratories use UVGI to disinfect surfaces inside biological safety cabinets ("hoods") between uses.
Food and beverage protection
Since the U.S. Food and Drug Administration issued a rule in 2001 requiring that virtually all fruit and vegetable juice producers follow HACCP controls, and mandating a 5-log reduction in pathogens, UVGI has seen some use in the sterilization of juices, particularly fresh-pressed juices.
UV sources
Mercury vapor lamps
Germicidal UV for disinfection is most typically generated by a mercury-vapor lamp. Low-pressure mercury vapor has a strong emission line at 254 nm, which is within the range of wavelengths that demonstrate strong disinfection effect. The optimal wavelengths for disinfection are close to 260 nm.
Mercury vapor lamps may be categorized as either low-pressure (including amalgam) or medium-pressure lamps. Low-pressure UV lamps offer high efficiencies (approx. 35% UV-C) but lower power, typically 1 W/cm power density (power per unit of arc length). Amalgam UV lamps utilize an amalgam to control mercury pressure to allow operation at a somewhat higher temperature and power density. They operate at higher temperatures and have a lifetime of up to 16,000 hours. Their efficiency is slightly lower than that of traditional low-pressure lamps (approx. 33% UV-C output), and power density is approximately 2–3 W/cm. Medium-pressure UV lamps operate at much higher temperatures, up to about 800 degrees Celsius, and have a polychromatic output spectrum and a high radiation output but lower UV-C efficiency of 10% or less. Typical power density is 30 W/cm or greater.
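As a rough illustration of how these figures combine, the sketch below estimates a lamp's UV-C radiant output from its arc length, electrical power per unit of arc length, and UV-C efficiency. The 100 cm arc length is a hypothetical example, and the calculation ignores real-world derating such as lamp aging and temperature effects.

```python
def uvc_output_w(arc_length_cm: float,
                 power_per_cm_w: float,
                 uvc_efficiency: float) -> float:
    """Estimate UV-C radiant output of a discharge lamp.

    arc_length_cm  : discharge arc length, in cm
    power_per_cm_w : electrical power per unit of arc length, in W/cm
    uvc_efficiency : fraction of electrical input emitted as UV-C
    """
    return arc_length_cm * power_per_cm_w * uvc_efficiency


# Hypothetical 100 cm arcs using the typical figures quoted above:
print(uvc_output_w(100, 1.0, 0.35))   # low-pressure:    ~35 W of UV-C
print(uvc_output_w(100, 2.5, 0.33))   # amalgam:         ~83 W of UV-C
print(uvc_output_w(100, 30.0, 0.10))  # medium-pressure: ~300 W of UV-C
```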
Depending on the quartz glass used for the lamp body, low-pressure and amalgam UV lamps emit radiation at 254 nm and also at 185 nm, which has chemical effects. UV radiation at 185 nm is used to generate ozone.
The UV lamps for water treatment consist of specialized low-pressure mercury-vapor lamps that produce ultraviolet radiation at 254 nm, or medium-pressure UV lamps that produce a polychromatic output from 200 nm to visible and infrared energy. The UV lamp never contacts the water; it is either housed in a quartz glass sleeve inside the water chamber or mounted externally to the water, which flows through the transparent UV tube. Water passing through the flow chamber is exposed to UV rays, which are absorbed by suspended solids, such as microorganisms and dirt, in the stream.
LEDs
Recent developments in LED technology have led to commercially available UV-C LEDs. UV-C LEDs use semiconductors to emit light between 255 nm and 280 nm; the emission wavelength is tuneable by adjusting the semiconductor material. To date, the electrical-to-UV-C conversion efficiency of LEDs has remained lower than that of mercury lamps. The reduced size of LEDs opens up options for small reactor systems, allowing point-of-use applications and integration into medical devices. The low power consumption of semiconductors also enables UV disinfection systems that use small solar cells in remote or Third World applications.
UV-C LEDs don't necessarily last longer than traditional germicidal lamps in terms of hours used; instead, they have more-variable engineering characteristics and better tolerance of short-term operation, so a UV-C LED can achieve a longer installed time than a traditional germicidal lamp in intermittent use. LED degradation increases with heat, while filament and HID lamp output wavelength depends on temperature, so engineers can design LEDs of a particular size and cost to have either a higher output with faster degradation or a lower output with a slower decline over time.
See also
HEPA filter
Portable water purification
Sanitation
Sanitation Standard Operating Procedures
Solar water disinfection
External links
International Ultraviolet Association
| Ultraviolet germicidal irradiation | [
"Physics",
"Chemistry",
"Engineering",
"Biology"
] | 6,881 | [
"Spectrum (physical sciences)",
"Water treatment",
"Radiobiology",
"Electromagnetic spectrum",
"Ultraviolet radiation",
"Microbiology techniques",
"Sterilization (microbiology)",
"Environmental engineering",
"Waste treatment technology",
"Radioactivity"
] |
6,908,822 | https://en.wikipedia.org/wiki/John%20Michell%20%28writer%29 | John Frederick Carden Michell (9 February 1933 – 24 April 2009) was an English author and esotericist who was a prominent figure in the development of the pseudoscientific Earth mysteries movement. Over the course of his life he published over forty books on an array of different subjects, being a proponent of the Traditionalist school of esoteric thought.
Born in London to a wealthy family, Michell was educated at Cheam School and Eton College before serving as a Russian translator in the Royal Navy for two years. After failing a degree in Russian and German at Trinity College, Cambridge, he qualified as a chartered surveyor then returned to London and worked for his father's property business, there developing his interest in Ufology.
Embracing the counter-cultural ideas of the Earth mysteries movement during the 1960s, in The Flying Saucer Vision he built on Alfred Watkins' ideas of ley lines by arguing that they represented linear marks created in prehistory to guide extraterrestrial spacecraft. He followed this with his most influential work, The View Over Atlantis, in 1969. His ideas were at odds with those of academic archaeologists, for whom he expressed contempt.
Michell believed in the existence of an ancient spiritual tradition that connected humanity to divinity, but which had been lost as a result of modernity. He believed however that this tradition would be revived and that humanity would enter a Golden Age, with Britain as the centre of this transformation.
Michell's other publications covered an eclectic range of topics, and included an overview on the Shakespeare authorship question, a tract condemning Salman Rushdie during The Satanic Verses controversy, and a book of Adolf Hitler's quotations. Keenly interested in the crop circle phenomenon, he co-founded a magazine devoted to the subject, The Cereologist, in 1990, and served as its initial editor. From 1992 until his death he wrote a column for The Oldie magazine, which was largely devoted to his anti-modernist opinions. He accompanied this with a column on esoteric topics for the Daily Mirror tabloid.
A lifelong marijuana smoker, Michell died of lung cancer in 2009.
Michell's impact in the Earth mysteries movement was considerable, and through it he also influenced the British Pagan movement. During the 2000s, his ideas also proved an influence on the Radical Traditionalist sector of the New Right.
Biography
Early life
John Frederick Carden Michell was born in London on 9 February 1933. His father, Alfred Henry Michell, was of Cornish and Welsh descent and worked as a property dealer in the capital, while his mother Enid Evelyn (née Carden) was the daughter of Major Sir Frederick Carden, 3rd Baronet, great-granddaughter of Sir Robert Carden, 1st Baronet, who served as Lord Mayor of London in 1857, and 3x great-granddaughter of John Walter, founder of The Times. Michell was the eldest of three children; his siblings were named Charles and Clare.
Michell was raised at Stargroves, his maternal grandfather's Victorian-era estate on the Berkshire Downs near to Newbury, and it was here that he developed a love of the countryside, learning about the local flora and fauna from a neighbouring naturalist. He was raised into the Anglican denomination of Christianity, although in later life rejected the religion.
Michell was initially educated as a boarder at the preparatory Cheam School, where he was Head Boy and excelled at the high jump. From there he went to study at Eton College, where he was a contemporary of Lord Moyne and Ian Cameron, the father of future Prime Minister David Cameron.
He spent his two years of national service in the Royal Navy, during which time he qualified as a Russian translator at the School of Slavonic Studies. He then went on to study Russian and German at Trinity College, Cambridge, although he was unable to secure a third-class degree. He subsequently qualified as a chartered surveyor at a firm in Gloucestershire, before moving back to London to work for his father's property business. Commenting on this job, he later stated that it was "quite amusing, but of course I wasn't any good at it", with property speculators eroding much of his fortune.
In 1966 one of his properties, the basement of his own residence, became the base of the London Free School. The Black Power activist Michael X, having previously run a gambling club in the basement, had now become active in the organisation of the LFS and brought Michell into counter-culture activities. Michell began to offer courses in UFOs and ley lines.
In 1964, with Jocasta Innes, Michell fathered a son, Jason Goodwin, who also became a writer. The relationship with Innes did not last. Jason Goodwin did not meet his natural father until 1992, at the age of 28, at which point they became quite close.
Embracing the Earth Mysteries movement
Michell developed an interest in Ufology and Earth mysteries after attending a talk given by Jimmy Goddard at Kensington Central Library on the subject of "Leys and Orthotenies" in November 1965. Michell's first publication on the subject of Ufology was the article "Flying Saucers", which appeared in the 30 January 1967 edition of the counter-cultural newspaper International Times. He proceeded to write a book on the subject, but lost the original manuscript after accidentally leaving it in a North London café, after which he had to rewrite it. The book was eventually published in 1967 as The Flying Saucer Vision, when Michell was 34 years old.
The Flying Saucer Vision took the idea of Tony Wedd that ley lines – alleged trackways across the landscape whose existence was first argued by Alfred Watkins – represented markers for the flight of extraterrestrial spacecraft and built on it, arguing that early human society was aided by alien entities who were understood as gods, but that these extraterrestrials had abandoned humanity because of the latter's greed for material and technological development.
According to Lachman, at this time Michell took the view that "an imminent revelation of literally inconceivable scope" was at hand, and that the appearance of UFOs was linked to "the start of a new phase in our history". Many fans of Michell's work consider it to be "by far his most impressive book". In their social history of Ufology, David Clarke and Andy Roberts stated that Michell's work was "the catalyst and helmsman" for the growing interest in UFOs among the hippie sector of the counter-culture.
Subsequently, there was a shift in Michell's emphasis as he became increasingly interested in the landscapes in which he believed that ley lines could be found rather than the UFOs themselves. He wrote an article on "Lung Mei and the Dragon Paths of England" for a September 1967 issue of Image magazine, in which he compared British ley-lines to the Chinese mythological idea of lung mei lines, arguing that this was evidence of a widespread pre-Christian dragon cult in ancient Britain. He built on these ideas for The View Over Atlantis, a book which he privately published in 1969, with a republication following three years later. Believing this earth energy to be a real magnetic phenomenon arising naturally from the ground, Michell argued that an ancient religious-scientific elite had traveled the world constructing the lines and various megalithic monuments in order to channel this energy and direct it for the good of humanity. The tone of his work reflected "a fervent religious feeling", describing the existence of an ancient, universal, and true system of belief that was once spread across the ancient world but which had been lost through the degeneracy of subsequent generations. He added however that this ancient knowledge would be revived with the dawning of the Age of Aquarius, allowing for what Michell described as the "rediscovery of access to the divine will".
The Pagan studies scholar Amy Hale stated that The View Over Atlantis was "a smash countercultural success", while the historian Ronald Hutton described it as "almost the founding document of the modern earth mysteries movement". Fellow ley-hunter and later biographer Paul Screeton considered it to be a "groundbreaking" work which "re-enchanted the British landscape and empowered a generation to seek out and appreciate the spiritual dimension of the countryside, not least attracting them to reawaken the sleepy town of Glastonbury".
The book inspired an array of Earth Mysteries publications in the 1970s and 1980s, accompanied by growth in the ley-hunting movement. Among the most prominent works to build on Michell's ideas during this period were Janet and Colin Bord's Mysterious Britain, which used them in its presentation of a gazetteer of ancient sites, and Paul Screeton's Quicksilver Heritage, which argued that the Neolithic had been a time devoted to spiritual endeavours which had been corrupted by the emergence of metal technologies. Michell associated with many individuals active in this ley-hunting community, and in July 1971 was one of many attendees at a ley-hunters picnic held at Risbury Camp, the largest outdoor gathering of the movement since 1939.
In May 1969 Michell established a group known as the Research Into Lost Knowledge Organisation (RILKO) with his friends Keith Critchlow and Mary Williams. In conjunction with the Garnstone Press, RILKO founded the Prehistory and Ancient Science Library, a book series that brought out reprints of older works, such as Watkins' The Old Straight Track and William Stirling's The Canon, both of which contained forewords by Michell. Michell also founded a small publishing company of his own, West Country Editions, through which he brought out his own A Little History of Bladud in 1973 as well as a reprint of Howard C. Levis's 1919 book Bladud of Bath. With his friend John "Peewee" Michael, who lived in Bristol, Michell also established a second small press, Pentacle Books, although it failed to become a commercial success and was short lived.
Michell was involved in the summer 1971 Glastonbury Fayre music festival near Pilton, Somerset, where the pyramid stage was built to Michell's specifications and situated at what he claimed were the apex of two ley lines. Through Michael Rainey, Michell was introduced to the members of rock band The Rolling Stones at the Courtfield Road home of band member Brian Jones. Michell befriended the band's lead singer, Mick Jagger, and he accompanied the band on a visit to Stonehenge. Michell then went on a visit to Woolhope in Herefordshire with Keith Richards, Anita Pallenberg, Christopher Gibbs, and the filmmaker Kenneth Anger, where they hunted for ley lines and UFOs. Marianne Faithfull later recounted that band member Jones was particularly interested in Michell's ideas. He would later meet with the members of The Grateful Dead on their 1972 European tour; band members Phil Lesh and Jerry Garcia expressed an interest in Michell's Earth Mysteries ideas.
Michell's impact on the hippie subculture was recognised by mainstream media, and he was invited to submit an article titled "Flying saucers" to The Listener in May 1968, which was accompanied by a critical piece by editor Karl Miller, in which Michell was described as "less a hippy, perhaps, than a hippy's counsellor, one of their junior Merlins."
Hale noted that Michell promoted the idea of "England as a site of spiritual redemption in the New Age", bringing together "popular ideas about sacred geometry, Druids, sacred landscapes, earth energies, Atlantis, and UFOs".
In 1972 Michell published a sequel to The View Over Atlantis as City of Revelation. Shortly after publication he stated that he had written the work in "almost two years of near total solitude and intense study in Bath." This work was more complex than its predecessor, including chapters on sacred geometry, numerology, gematria, and the esoteric concept of the New Jerusalem, and required an understanding of mathematics and Classics to follow its arguments.
Bob Rickard, founding editor of Fortean Times, has written that Michell's first three works "provided a synthesis of and a context for all the other weirdness of the era. It’s fair to say that it played a big part in the foundation of Fortean Times itself by helping create a readership that wanted more things to think about and a place to discuss them. The overall effect was to help the burgeoning interest in strange phenomena spread out into mainstream culture."
Challenging academic archaeology
The work of Michell and others in the ley-hunting and Earth mysteries communities were rejected by the professional archaeological establishment, with the prominent British archaeologist Glyn Daniel denouncing what he perceived as the "lunatic fringe". In turn, Michell was hostile to professional and academic archaeologists, accusing them of "treasure hunting and grave robbery" and viewing them as representations of what he interpreted as the evils of modernity. In response to the academic archaeological community's refusal to take the idea of ley lines seriously, in 1970 Michell offered a challenge for professional archaeologists to disprove his ideas regarding the West Peninsula leys. He stated that were he to be proved wrong then he would donate a large sum to charity, but at the time no one took up his offer.
However, in 1983 his case study was analysed by two archaeologists, Tom Williamson and Liz Bellamy, as part of their work Ley Lines in Question, a critical analysis of the evidence for ley-lines. They highlighted that Michell had erroneously included medieval crosses and natural features under his definition of late prehistoric monuments, and that arguments for ley-lines more widely could not be sustained. The impact of their work on the ley-hunting community was substantial, with one section moving in a more fully religious direction by declaring that leys could only be detected by intuition, and the other renouncing a ley line belief in favour of a more ethnographically rooted analysis of linear connections in the landscape. Responding to their work, Michell said that "I just feel sorry for Williamson and Bellamy that the most exciting thing they can find to do with their youth is to discredit the ley vision."
In 1983 Michell published an altered version of his best known work as The New View Over Atlantis.
Ioan Culianu, a specialist in gnosticism and Renaissance esoteric studies, in a review in 1991 of The Dimensions of Paradise: The Proportions and Symbolic Numbers in Ancient Cosmology, expressed the view that, "After some deliberation the reader of this book will oscillate between two hypotheses: either that many mysteries of the universe are based on numbers, or that the book's author is a fairly learned crank obsessed with numbers."
In 1970, Michell founded the Anti-Metrification Board to oppose the adoption of the metric system of measurement in the United Kingdom. Believing that the established imperial system of measurement had both ancient and sacred origins, through the Board he brought out a newsletter, Just Measure. In 1972 he published the first of his "Radical Traditionalist Papers", A Defence of Sacred Measures, in which he laid out his opposition to the metric system.
In his third Radical Traditionalist Paper, published in 1973, he argued against population control, critiquing the ideas of Thomas Robert Malthus and arguing that correct use of resources could maintain an ever-growing human population.
His fifth Radical Traditionalist Paper, Concordance to High Monarchists, offered Michell's proposed solution to The Troubles of Northern Ireland; in his view, Ireland should be divided into four provinces, each administered separately but all ultimately pledging allegiance to a High King, in this way mirroring what Michell believed was the socio-political organisation of prehistoric Ireland.
Other publications
Following the 1975 execution of Michael X for a murder committed in Trinidad, Michell published a souvenir pamphlet to commemorate the execution, claiming that all royalties from its publication would go to Michael X's widow. In 1976 he published The Hip Pocket Hitler, a book containing those quotations from Adolf Hitler, the leader of Nazi Germany, which Michell deemed to be humorous or insightful, thus seeking to portray a side to Hitler more favourable than the dominant paradigm. In 1979 he provided an introduction to a translation of Pliny the Elder's Inventorum Natura, which had been illustrated by Una Woodruff. That same year he brought out Simulacra, a work in which he examined perceived faces in natural forms such as trees. In collaboration with Bob Rickard, in 1977 Michell published Phenomena: A Book of Wonders, an encyclopedic work devoted to paranormal and fortean phenomena which covered such topics as UFOs, werewolves, lake monsters, and spontaneous human combustion. They followed this with a second encyclopedic volume, Living Wonders: Mysteries and Curiosities of the Animal World, which appeared in 1982 and was devoted to fortean topics involving animals, with much of it focusing on cryptozoological topics.
In 1984 he published Eccentric Lives and Peculiar Notions, in which he provided brief biographies of various figures whose ideas had been rejected by mainstream scholarship and society, among them Nesta Webster, Iolo Morganwg, Brinsley Trench, and Comyns Beaumont. In Euphonics: A Poet's Dictionary of Sounds he then argued that every name represents a "vocal imitation" of the subject that it describes, for instance arguing that "s" appears in the words "snake" and "serpent" because it resembles the curved movement of the animal.
Following the controversy that erupted around Salman Rushdie's 1988 book The Satanic Verses, Michell published a tract condemning Rushdie, accusing him of deliberately and provocatively insulting Islam. Titled Rushdie's Insult, Michell later withdrew the publication.
Michell was keenly interested in the crop circle phenomenon, and with Christine Rhone and Richard Adams he established a magazine devoted to the subject in 1990. Initially titled The Cereologist, some issues would be alternately titled The Cerealogist, and although Michell initially served as the magazine's editor, he stepped down after the ninth issue, although continued to contribute articles to it. In 1991, he published a book on the subject, Dowsing the Crop Circles, and in 2001 followed this with a booklet titled The Face and the Message, which was devoted to a circle depicting the face of a Grey alien which had appeared in Hampshire in August 2001. Despite the longstanding animosity with which Michell held academic archaeology, in 1991 the peer-reviewed archaeological journal Antiquity invited him to author a review of a Southbank exhibit, "From Art to Archaeology", which was duly published in the journal.
In the 1980s Michell was a member of the Lindisfarne Association and a teacher at its School of Sacred Architecture. He lectured at the Kairos Foundation, an "educational charity specifically founded to promote the recovery of traditional values in the Arts and Sciences". He was for some years a visiting lecturer at the Prince of Wales' School of Traditional Arts, which had been established by his friend Keith Critchlow. He became a Fellow of the Temenos Academy, a religious organisation which had Traditionalist underpinnings.
Newspaper columnist: 1992–2009
From January 1992 until his death, Michell published a monthly column, "An Orthodox Voice", in The Oldie magazine. He primarily used this as an outlet for condemning the modern world and lambasting what he perceived as the stupidity of most contemporary humans. His first article in this outlet contained an attack on evolution which resulted in a published response from the evolutionary biologist Richard Dawkins. He also used his column to encourage the use of mind-altering drugs, in particular LSD. Two anthologies that collected together some of these Oldie columns would be published; the first appeared in 1995 as An Orthodox Voice while the second was published in 2005 as Confessions of a Radical Traditionalist and contained an introduction from the scholar of esotericism Joscelyn Godwin. During this period, Michell also authored occasional book reviews for the conservative magazine, The Spectator.
In 1996 Michell published Who Wrote Shakespeare?, in which he outlined various candidates in the Shakespeare authorship question. Who Wrote Shakespeare? received mixed reviews: Publishers Weekly was critical, while The Washington Post and The Independent praised his treatment of the subject. To mark their fiftieth anniversary in 1999, the publisher Thames and Hudson – who had published many of Michell's works – suggested that a biography be written by Michell's friend Paul Screeton. Michell however refused to cooperate with the project, which was abandoned. In 2000, Michell published The Temple at Jerusalem: A Revelation, in which he outlined his own interpretation of Jerusalem's Old City.
From 2001 to 2004 he contributed several columns to tabloid newspaper The Mirror as part of an ongoing series run by the astrologer Jonathan Cainer. Cainer had sought to bring together a range of esotericists to write on related topics, with Michell's fellow contributors including Mark Winter, Patty Greenall, Sarah Sirillan, and Uri Geller. The series came to an end when Cainer left The Mirror to work for the rival Daily Mail.
A keen painter, in 2003 an exhibit of his works was held at the Christopher Gibbs Gallery. In April 2007 Michell married Denise Price, the Archdruidess of the Glastonbury Order of Druids, at a ceremony held in Glastonbury's St Benedict's Church, although their relationship ended several months later. A lifelong smoker, Michell contracted lung cancer, and in his final days he was nursed at his son's home in Poole, Dorset, ultimately dying on 24 April 2009, at the age of 76. His body was buried at St Mary's Church in Stoke Abbott on May Day. A high church memorial service was then held at All Saints' Church in Notting Hill, which was attended by around 400 mourners.
His work, How the World is Made – which he regarded as his magnum opus – was published posthumously.
Thought
Throughout his life, Michell's "views remained relatively static", albeit with some exceptions. He characterised his viewpoint as "Radical Traditionalism", which in his words was a perspective "both idealistic and rooted in common sense".
Michell was a proponent of the Traditionalist school of esoteric thought. Michell was also interested in the writings of Traditionalist philosopher Julius Evola, agreeing in particular with the sentiments expressed in Evola's Revolt Against the Modern World. He held to the Traditionalist belief in an ancient perennial tradition found across the world, believing that this was passed on by a priesthood in accordance to divine will. He shared the Traditionalist attitude of anti-modernism, believing that modernity had brought about chaos, destruction of the land, and spiritual degradation. He believed that humanity would return to what he perceived as its natural order and enter a Golden Age.
Screeton believed that despite his "obvious acts of liberalism", Michell also had a "right-wing streak", with Hale describing Michell as being "quite right-wing in many of his views". She thought it would be "apt" to characterise Michell's thought as being "third positionist" in nature.
Angered by the idea of evolution, Michell repeatedly authored articles denouncing it as a false creation myth. Instead he embraced a viewpoint that Screeton referred to as "intelligent design creationism". Accordingly, he was particularly critical of Charles Darwin and Dawkins, lambasting the latter alongside physicist Stephen Hawking as belonging with "the disappointed Marxists, pandering politicians, pettifoggers, grievance-mongers and atheistic bishops who set the tone in modern society." Condemning the scientific community's view of the development of the Earth and humanity, he embraced Richard Milton's claim that the Earth was only 20,000 years old, as well as Rupert Sheldrake's idea regarding "morphogenetic fields", believing that it was these – and not biological evolution – that resulted in changes occurring within species.
Michell's conception of the physical and spiritual worlds was strongly influenced by the ancient Greek philosopher Plato. He believed that sacred geometry revealed a universal scheme in the landscape which reflected the structure of the heavens. His views on geometry led him to the belief that pre-industrial societies across the world respected the Earth as a living creature imbued with its own spirit, and that humans then created permanent residences for this spirit.
He also embraced a belief in the tenets of astrology, alchemy, and prophecy, believing that all had been unfairly rejected by the modern world.
Described as an exponent of "British nativist spirituality", he adopted the view of the British-Israelite movement that the British people represented the descendants of the Ten Lost Tribes who are mentioned in the Old Testament. Michell sometimes referred to his approach as "mystic nationalism" and interpreted the island of Britain as being sacred, connecting this attitude to those of William Blake and Lewis Spence. Adopting a millennialist attitude, he believed that in future Britain would be reborn as the New Jerusalem with the coming of a new Golden Age.
He believed that humans really desired to live in a state of extreme order, deeming a societal hierarchy to be natural and inevitable. Generally opposed to democracy, except within small groups in which every person knew the individual being elected, Michell instead believed that communities should be led by a strong leader who personified the solar deity. This embrace of the Divine Right of Kings led him to believe that Queen Elizabeth II should take control of Britain as an authoritarian leader who could intercede between the British people and the divine. He was critical of multiculturalism in Britain, believing that each ethnic or cultural group should live independently in an area segregated from other groups, stating that this would allow a people's traditions to remain vibrant. He did not espouse racial supremacy, with his ideas on this subject instead being similar to the ethnopluralism of Alain de Benoist and other New Right thinkers. He was an opponent of British membership of the European Union and also opposed the UK's transition to the metric system, instead favouring the continued use of imperial measurement, believing that the latter had links to the divine order used by ancient society.
Personal life
At over six feet in height, Michell was described by biographer and friend Paul Screeton as having "a charismatic personality and imposing presence", being "placidly outgoing and the epitome of gentlemanly charm", and usually appeared "cheerful and optimistic".
In keeping with his upper-class background, he was described as having an "unmistakable patrician hauteur", with "all the self-assurance, impeccable manners and debonair charm of one born to wealth." Screeton described Michell as "gregarious but slightly shy, unassuming but opinionated. Quixotic in behaviour, he was an exemplary host and fastidious and single-minded when embarked upon a project", although also noted that Michell was impatient with those who did not share his Traditionalist beliefs and values.
In keeping with norms within the counter-culture, Michell regularly smoked marijuana, and publicly encouraged the use of mind-altering drugs. His favoured newspaper was The Telegraph, a right-wing daily. One of his hobbies was woodworking, and he constructed some of the bookshelves in his home. Although he had a strong dislike of computers and advised his readers not to possess a personal computer, in later life he obtained one in order to type up his writings using a word processor. For many years, he lived at 11 Powis Gardens in Notting Hill, North London.
Legacy
Screeton described Michell as "a countercultural icon", while Hale stated that on his death, Michell left "a rich legacy of publications and cultural influence". At the time he was remembered as "a charming British eccentric and champion of the outsider". His influence was strongly apparent in the British Pagan community, with many British Pagans being familiar with his writings. The archaeologist Adam Stout noted that Michell played "the major role in the 1960s rediscovery" of the work of Alfred Watkins.
Hutton for instance noted that the influence of Michell's ideas could be seen on the Druidic Order of the Pendragon, a Pagan group based in Leicestershire that arose to public attention in 2004. His ideas about dragon energies across the landscape have been incorporated into novels like Judy Allen's 1973 The Spring of the Mountain and Cara Louise's 2006 Annie and the Dragon.
Michell's books received a broadly positive reception amongst the "New Age" and "Earth mysteries" movements and he is credited as perhaps being "the most articulate and influential writer on the subject of leys and alternative studies of the past". Ronald Hutton describes his research as part of an alternative archaeology "quite unacceptable to orthodox scholarship." Accordingly, Screeton noted that during his life, Michell was considered to be "anathema, lunatic fringe, and cranky" by his critics, although he rejected the idea that Michell was a "crank", claiming that such an accusation was "fundamentally mistaken".
Following his death, various aspects of Michell's work have been adopted by thinkers associated with the European New Right and with related right-wing currents in the United States.
Michell's term "Radical Traditionalism", which he espoused in his self-published series of "Radical Traditionalist Papers" in the 1970s and 1980s, would later be taken up as a self-descriptor by Michael Moynihan and Joshua Buckley, the editors of the right-wing journal Tyr: Myth, Culture and Tradition from their inaugural 2002 edition onward. The editors of Tyr gave the term political overtones which were not present in Michell's original usage of the term. Hale believed that through Radical Traditionalism and the New Right Michell's writings have been brought to "a whole new audience" where they have a "surprisingly different sort of relevance."
Bibliography
1967 The Flying Saucer Vision: the Holy Grail Restored, Sidgwick & Jackson, Abacus Books, Ace.
1969 The View Over Atlantis, HarperCollins, ; first published by Sago Press in Great Britain in 1969; new edition published in Great Britain by Garnstone Press in 1972 and Abacus in 1973, and in the United States by Ballantine Books in 1972.
1972 City of Revelation: On the Proportions and Symbolic Numbers of the Cosmic Temple, Garnstone Press,
1974 The Old Stones of Land's End, Garnstone Press,
1975 The Earth Spirit: Its Ways, Shrines, and Mysteries, Avon,
1977 with R. J. M. Rickard, Phenomena: A Book of Wonders, Thames & Hudson,
1977 A Little History of Astro-Archaeology: Stages in the Transformation of a Heresy, Thames and Hudson, ISBN-10: 0500275572 (reprinted 2001)
1979 Natural Likeness: Faces and Figures in Nature, Thames and Hudson,
1979 Plinius Secundus C., Inventorum Natura, HarperCollins, English/Latin, D. MacSweeney (translator)
1981 Ancient Metrology: the Dimensions of Stonehenge and of the Whole World as Therein Symbolized, Pentacle Books,
1982 Megalithomania: Artists, Antiquarians & Archaeologists at the Old Stone Monuments, Thames and Hudson, Cornell University Press
1983 The New View Over Atlantis, Thames and Hudson, (Much revised edition of The View Over Atlantis.)
1984 Eccentric Lives and Peculiar Notions, Thames and Hudson, reissued Harcourt Brace Jovanovich,
1985 Stonehenge – Its Druids, Custodians, Festival and Future, Richard Adams Associates (June 1985),
1988 Geosophy – An Overview of Earth Mysteries. Paul Devereux, John Steele, John Michell, Nigel Pennick, Martin Brennan, Harry Oldfield and more, a Mystic Fire Video from Trigon Communications, Inc, New York, 1988 (reissued 1990), also by EMPRESS, Wales, UK, 95 minutes, VHS.
1986 commentary, Feng-Shui: The Science of Sacred Landscape in Old China, Ernest J. Eitel, Synergetic Press
1988 The Dimensions of Paradise: The Proportions and Symbolic Numbers of Ancient Cosmology, London : Thames and Hudson, 1988.
1989 The Traveller's Key to Sacred England, reissued 2006, Gothic Image
1989 Secrets of the Stones: New Revelations of Astro-Archaeology and the Mystical Sciences of Antiquity, Destiny Books,
1989 Earth Spirit: Its Ways, Shrines and Mysteries, Thames and Hudson,
1990 New Light on the Ancient Mystery of Glastonbury, Gothic Image Publications (p/b), (h/b)
1991 Dowsing the Crop Circles, (Editor/Contributor), Gothic Image Publications,
1991 Twelve Tribe Nations and the Science of Enchanting the Landscape, with Christine Rhone, Thames and Hudson,
1994 At the Center of the World: Polar Symbolism Discovered in Celtic, Norse and Other Ritualized Landscapes, Thames and Hudson,
1996 Who Wrote Shakespeare?, Thames and Hudson
2000, with Bob Rickard, Unexplained Phenomena: Mysteries and Curiosities of Science, Folklore and Superstition, Rough Guides,
2000 The Temple at Jerusalem: A Revelation, Samuel Weiser. ,
2001 The Dimensions of Paradise: The Proportions and Symbolic Numbers of Ancient Cosmology , Adventures Unlimited,
2002 The Face and the Message: What Do They Mean and Where Are They From?, Gothic Image,
2003 The Traveller's Guide to Sacred England: A Guide to the Legends, Lore and Landscapes of England's Sacred Places, Gothic Image Publications,
2003 Prehistoric Sacred Sites of Cornwall, Wessex Books,
2005 Confessions of a Radical Traditionalist, Dominion Press,
2006 "Prehistoric Sacred Sites of Cornwall", Wessex Books,
2006 Euphonics: A Poet's Dictionary of Sounds, Wooden Books,
2008 Dimensions of Paradise, The Sacred Geometry, Ancient Science and the Heavenly Order on Earth, (revised edition of City of Revelation) Inner Traditions, Bear & Company.
2009 How The World Is Made: The Story of Creation According To Sacred Geometry, (with Allan Brown), Thames & Hudson
2009 Sacred Center: The Ancient Art of Locating Sanctuaries, Inner Traditions,
2010 Michellany, A John Michell Reader, ed. Jonangus Mackay, Michellany Editions, London.
References
Footnotes
Sources
Further reading
White, Rupert (2017). The Re-enchanted Landscape: Earth Mysteries, Paganism and Art in Cornwall Antenna Publications ISBN 9780993216435
External links
The John Michell Network
Michell and the 1971 Glastonbury Festival
International Fortean Organisation
1933 births
2009 deaths
20th-century English novelists
Alumni of Trinity College, Cambridge
Ancient astronauts proponents
Atlantis proponents
English male novelists
English writers on paranormal topics
Mystics
Fortean writers
New Age writers
People educated at Eton College
Pseudohistorians
Sacred geometry
Far-right politics in the United Kingdom | John Michell (writer) | [
"Engineering"
] | 7,213 | [
"Sacred geometry",
"Architecture"
] |
9,173,060 | https://en.wikipedia.org/wiki/Food%20play | Food play, also known as sitophilia, refers to a form of sexual fetishism in which participants are aroused by erotic situations involving food.
Food play overlaps with other fetishes, including wet and messy fetishism, feederism, and nyotaimori. It is differentiated from vorarephilia in that food play fetishizes food while vore fetishizes the act of eating a living creature, or being eaten alive.
Practice
Any food can be considered erotic, depending on the context and the viewer.
Certain foods, such as bananas and hot dogs, are commonly considered fetish objects due to having a phallic shape. Foods that can be eaten off another person, such as whipped cream or melted chocolate, are also popular, especially in popular culture.
Some foods and herbs are purported to cause sexual arousal, and can have sexual connotations, such as oysters.
Home dildo makers are produced to allow food to be sculpted into a phallic shape for easier insertion.
Alcohol
A body shot is a shot of alcohol that is consumed from a person's body. Body shots are done either by taking a shot from a glass on a person's body, or the shot is poured onto a person's body and licked up by another person. The term "body shot" can also mean a shot drinking ritual that involves the use of another person's body, such as taking a shot from a glass and licking salt off a person's body afterwards.
Wakame-zake, also called wakame sake and seaweed sake, involves drinking alcohol from a woman's body. The woman closes her legs tight enough that the triangle between the thighs and mons pubis forms a cup, and then pours sake down her chest into this triangle. Her partner then drinks the sake from there. The name comes from the idea that the woman's pubic hair in the sake resembles soft seaweed (wakame) floating in the sea.
See also
Food and sexuality
Nyotaimori
Vorarephilia
Wet and messy fetishism
References
Sexual acts
Sexual fetishism
Sexuality in Japan
Paraphilias | Food play | [
"Biology"
] | 437 | [
"Sexual acts",
"Behavior",
"Sexuality",
"Mating"
] |
9,173,111 | https://en.wikipedia.org/wiki/FGED%20Society | The Functional GEnomics Data Society (FGED), formerly known as the MGED Society, was a non-profit, volunteer-run international organization of biologists, computer scientists, and data analysts that aimed to facilitate biological and biomedical discovery through data integration. The approach of FGED was to promote the sharing of basic research data generated primarily via high-throughput technologies that generate large data sets within the domain of functional genomics.
Members of the FGED Society worked with other organizations to support the effective sharing and reproducibility of functional genomics data; facilitate the creation of standards and software tools that leverage the standards; and promote the sharing of high-quality, well-annotated data within the life sciences and biomedical communities.
Founded in 1999 as the "Microarray Gene Expression Data (MGED) Society", this organization changed its name to the "Functional Genomics Data Society" in 2010 to reflect the fact that it has broadened its focus beyond the application of DNA microarrays for gene expression analysis to include technologies such as high-throughput sequencing. The scope of the FGED Society includes data generated using any functional genomics technology when applied to genome-scale studies of gene expression, binding, modification and other related applications.
In September 2021, the FGED Society ceased operations.
History
The FGED Society was formed in 1999 at a meeting on Microarray Gene Expression Databases in recognition of the need to establish standards for sharing and storing data from DNA microarray experiments. Originally named the "MGED Society," the society began with a focus on DNA microarrays and gene expression data.
The original MGED Society was incorporated in 2002 as a non-profit public benefit organization with the title Microarray Gene Expression Data Society and obtained permanent charity status in 2007. The MGED name was legally changed in 2007 to Microarray and Gene Expression Data Society to emphasize a broader scope.
In September 2008, the Society decided to promote itself simply as the MGED Society to broaden the Society's scope beyond microarray technology and gene expression applications, yet still retain the recognized value of the MGED name within the community.
In July 2010, the society voted to change its name to the "Functional Genomics Data (FGED) Society" to reflect its current mission which goes beyond microarrays and gene expression to encompass data generated using any functional genomics technology applied to genomic-scale studies of gene expression, binding, modification (such as DNA methylation), and other related applications. This was formally announced on 14 July 2010 at the society's "MGED13" annual meeting.
Presidents of the FGED Society
Board members and officers of the FGED Society are elected annually each May and start serving in June. Presidents of the FGED Society along with their terms in office are as follows:
Francis Ouellette (2013–2021)
John Quackenbush (2011–2013)
Chris Stoeckert (2007–2011)
Catherine Ball (2003–2007)
Alvis Brazma (1999–2003)
Membership
The FGED Board of Directors and Advisory Board consist of volunteers from academia, industry, government, and journals representing a cross-section of those generating, analyzing, archiving, and publishing in the functional genomics area.
Although there is no formal membership, the attendees of the annual FGED meetings are considered to be part of the FGED community.
Standards
To date, FGED has produced a variety of standards specifications pertaining to DNA microarray experiments. These standards are designed to improve the annotation, communication, and sharing of data and findings from such experiments within the life science research community.
MINSEQE
Minimum Information about a high-throughput SEQuencing Experiment (MINSEQE) is a data content minimum information standard that describes the essential information needed to adequately document a high-throughput sequencing experiment for the purpose of interpretation and replication of the results.
MIAME
MIAME (Minimum Information About a Microarray Experiment) is a data content standard that describes the essential information needed to adequately document a DNA microarray experiment for the purpose of interpretation and replication of the results. It was the first published example of a minimum information standard for high-throughput experiments in the life sciences, and as such, laid the groundwork for similar standards in other bioscience domains.
MAGE-OM and MAGE-TAB
MAGE-OM (MicroArray Gene Expression Object Model) is a data exchange and data modeling standard for use in encoding data from microarray experiments for the purpose of export and import into software tools and databases via XML files. MAGE-OM is a platform-independent model implemented in the XML-based MAGE-ML format.
A new version, MAGE-TAB, has been developed to be easier to understand and generate by data producers as it is in a format (tab-delimited) that can be viewed and edited using widely available spreadsheet software, such as Microsoft Excel.
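As a simple illustration of the convenience of a tab-delimited format, such a table can be read with a few lines of standard Python; the file name and column headers below are hypothetical examples and are not the official MAGE-TAB field names.

import csv

# Hypothetical sample-annotation table saved from a spreadsheet as tab-delimited text.
with open("experiment_samples.tsv", newline="") as handle:
    for row in csv.DictReader(handle, delimiter="\t"):
        # Each row is a dictionary keyed by the (hypothetical) column headers.
        print(row["sample_id"], row["treatment"])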
MGED Ontology
The MGED Ontology (MO) provides a standard terminology for describing components of a DNA microarray experiment.
The Ontology for Biomedical Investigations (OBI) is being developed as a replacement for the MO. A mapping of ontology terms from MO to OBI is available.
Annual meeting
A major component of the FGED Society effort has been the annual FGED meeting to showcase cutting-edge scientific work and promote standards.
The FGED Society has held its annual meeting at venues around the world since 1999, coordinating with a local scientific organization that provides space for talks, poster sessions, workshops, and tutorials.
Past meetings of the FGED Society
Here is a list of the annual meeting dates and locations for past meetings of the FGED Society. All meetings from 2010 and prior were held under the name "MGED Society".
See also
Minimum Information Standards
Genomic Standards Consortium
References
Bioinformatics organizations
Genomics organizations
International scientific organizations | FGED Society | [
"Biology"
] | 1,201 | [
"Bioinformatics",
"Bioinformatics organizations"
] |
9,173,273 | https://en.wikipedia.org/wiki/Coronal%20loop | In solar physics, a coronal loop is a well-defined arch-like structure in the Sun's atmosphere made up of relatively dense plasma confined and isolated from the surrounding medium by magnetic flux tubes. Coronal loops begin and end at two footpoints on the photosphere and project into the transition region and lower corona. They typically form and dissipate over periods of seconds to days and may span anywhere from about 10 km to 10,000 km in length.
Coronal loops are often associated with the strong magnetic fields located within active regions and sunspots. The number of coronal loops varies with the 11 year solar cycle.
Origin and physical features
Due to a natural process called the solar dynamo driven by heat produced in the Sun's core, convective motion of the electrically conductive plasma which makes up the Sun creates electric currents, which in turn create powerful magnetic fields in the Sun's interior. These magnetic fields are in the form of closed loops of magnetic flux, which are twisted and tangled by solar differential rotation (the different rotation rates of the plasma at different latitudes of the solar sphere). A coronal loop occurs when a curved arc of the magnetic field projects through the visible surface of the Sun, the photosphere, protruding into the solar atmosphere.
Within a coronal loop, the paths of the moving electrically charged particles which make up its plasma—electrons and ions—are sharply bent by the Lorentz force when moving transverse to the loop's magnetic field. As a result, they can only move freely parallel to the magnetic field lines, tending to spiral around these lines. Thus, the plasma within a coronal loop cannot escape sideways out of the loop and can only flow along its length. This is known as the frozen-in condition.
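For reference, the magnetic part of the Lorentz force on a particle of charge $q$ and velocity $\mathbf{v}$, and the radius of the resulting circular (gyro) motion for a particle of mass $m$ with velocity component $v_\perp$ perpendicular to the field $\mathbf{B}$, are standard plasma-physics relations (they are not stated explicitly in the article text):
$$\mathbf{F} = q\,\mathbf{v} \times \mathbf{B}, \qquad r_g = \frac{m v_\perp}{|q|\,B}$$
The force vanishes for motion parallel to the field and bends any transverse motion into a circle of radius $r_g$, which is why the particles spiral along, rather than cross, the field lines of the loop.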
The strong interaction of the magnetic field with the dense plasma on and below the Sun's surface tends to tie the magnetic field lines to the motion of the Sun's plasma; thus, the two footpoints (the location where the loop enters the photosphere) are anchored to and rotate with the Sun's surface. Within each footpoint, the strong magnetic flux tends to inhibit the convection currents which carry hot plasma from the Sun's interior to the surface, so the footpoints are often (but not always) cooler than the surrounding photosphere. These appear as dark spots on the Sun's surface, known as sunspots. Thus, sunspots tend to occur under coronal loops, and tend to come in pairs of opposite magnetic polarity; a point where the magnetic field loop emerges from the photosphere is a North magnetic pole, and the other where the loop enters the surface again is a South magnetic pole.
Coronal loops form in a wide range of sizes, from 10 km to 10,000 km. Coronal loops have a wide variety of temperatures along their lengths. Loops at temperatures below 1 megakelvin (MK) are generally known as cool loops; those existing at around 1 MK are known as warm loops; and those beyond 1 MK are known as hot loops. Naturally, these different categories radiate at different wavelengths.
A related phenomenon is the open flux tube, in which magnetic fields extend from the surface far into the corona and heliosphere; these are the source of the Sun's large scale magnetic field (magnetosphere) and the solar wind.
Location
Coronal loops have been shown on both active and quiet regions of the solar surface. Active regions on the solar surface take up small areas but produce the majority of activity and are often the source of flares and coronal mass ejections due to the intense magnetic field present. Active regions produce 82% of the total coronal heating energy.
Dynamic flows
Many solar observation missions have observed strong plasma flows and highly dynamic processes in coronal loops. For example, SUMER observations suggest flow velocities of 5–16 km/s in the solar disk, and other joint SUMER/TRACE observations detect flows of 15–40 km/s. Very high plasma velocities (in the range of 40–60 km/s) have been detected by the Flat Crystal Spectrometer (FCS) on board the Solar Maximum Mission.
History of observations
Before 1991
Despite progress made by ground-based telescopes and eclipse observations of the corona, space-based observations became necessary to escape the obscuring effect of the Earth's atmosphere. Rocket missions such as the Aerobee flights and Skylark rockets successfully measured solar extreme ultraviolet (EUV) and X-ray emissions. However, these rocket missions were limited in lifetime and payload. Later, satellites such as the Orbiting Solar Observatory series (OSO-1 to OSO-8), Skylab, and the Solar Maximum Mission (the first observatory to last the majority of a solar cycle: from 1980 to 1989) were able to gain far more data across a much wider range of emission.
1991–present day
In August 1991, the solar observatory spacecraft Yohkoh launched from the Kagoshima Space Center. During its 10 years of operation, it revolutionized X-ray observations. Yohkoh carried four instruments; of particular interest is the SXT instrument, which observed X-ray-emitting coronal loops. This instrument observed X-rays in the 0.25–4.0 keV range, resolving solar features to 2.5 arc seconds with a temporal resolution of 0.5–2 seconds. SXT was sensitive to plasma in the 2–4 MK temperature range, making its data ideal for comparison with data later collected by TRACE of coronal loops radiating in the extreme ultraviolet (EUV) wavelengths.
The next major step in solar physics came in December 1995, with the launch of the Solar and Heliospheric Observatory (SOHO) from Cape Canaveral Air Force Station. SOHO originally had a planned operational lifetime of two years. The mission was extended to March 2007 due to its resounding success, allowing SOHO to observe a complete 11-year solar cycle. SOHO has 12 instruments on board, all of which are used to study the transition region and corona. In particular, the Extreme ultraviolet Imaging Telescope (EIT) instrument is used extensively in coronal loop observations. EIT images the transition region through to the inner corona by using four band passes—171 Å FeIX, 195 Å FeXII, 284 Å FeXV, and 304 Å HeII, each corresponding to different EUV temperatures—to probe the chromospheric network to the lower corona.
In April 1998, the Transition Region and Coronal Explorer (TRACE) was launched from Vandenberg Air Force Base. Its observations of the transition region and lower corona, made in conjunction with SOHO, give an unprecedented view of the solar environment during the rising phase of the solar maximum, an active phase in the solar cycle. Due to the high spatial (1 arc second) and temporal resolution (1–5 seconds), TRACE has been able to capture highly detailed images of coronal structures, whilst SOHO provides the global (lower resolution) picture of the Sun. This campaign demonstrates the observatory's ability to track the evolution of steady-state (or 'quiescent') coronal loops. TRACE uses filters sensitive to various types of electromagnetic radiation; in particular, the 171 Å, 195 Å, and 284 Å band passes are sensitive to the radiation emitted by quiescent coronal loops.
See also
Solar spicule
Solar prominence
Coronal hole
References
External links
TRACE homepage
Solar and Heliospheric Observatory, including near-real-time images of the solar corona
Coronal heating problem at Innovation Reports
NASA/GSFC description of the coronal heating problem
FAQ about coronal heating
Animated explanation of Coronal loops and their role in creating Prominences (University of South Wales)
Sun
Space plasmas
Astrophysics
Articles containing video clips | Coronal loop | [
"Physics",
"Astronomy"
] | 1,594 | [
"Space plasmas",
"Astronomical sub-disciplines",
"Astrophysics"
] |
9,174,370 | https://en.wikipedia.org/wiki/Price%20Medal | Price Medal is a medal of the Royal Astronomical Society, for investigations of outstanding merit in solid-earth geophysics, oceanography, or planetary sciences. The medal is named after Albert Thomas Price. It was first awarded in 1994 and was initially given every three years. In 2005 this switched to every two years, and from 2014 it has been awarded every year.
Price Medallists
Source: Royal Astronomical Society (unless otherwise noted)
1994 J.A. Jacobs
1997 Catherine Constable
2000 Jean-Louis Le Mouël
2003 Y. Kamide
2005 Gillian Foulger
2007 Andrew Jackson
2009 Malcolm Sambridge
2011 Roger Searle
2013 Kathryn Whaler
2014 Seth Stein
2015 John Brodholt
2016 John Tarduno
2017 Richard Holme
2018 Stuart Crampin
2019 Catherine Johnson
2020 Phil Livermore
2021 Emily Brodsky
2022 Hrvoje Tkalcic
2023 Rhian Jones
2024 Chris Davies
See also
List of astronomy awards
List of geophysicists
List of geophysics awards
List of prizes named after people
References
Awards of the Royal Astronomical Society
Awards established in 1994
1994 establishments in the United Kingdom
Geophysics awards | Price Medal | [
"Astronomy"
] | 222 | [
"Awards of the Royal Astronomical Society",
"Astronomy prizes"
] |
9,174,778 | https://en.wikipedia.org/wiki/Microprocessor%20development%20board | A microprocessor development board is a printed circuit board containing a microprocessor and the minimal support logic needed for an electronic engineer or any person who wants to become acquainted with the microprocessor on the board and to learn to program it. It also served users of the microprocessor as a method to prototype applications in products.
Unlike a general-purpose system such as a home computer, usually a development board contains little or no hardware dedicated to a user interface. It will have some provision to accept and run a user-supplied program, such as downloading a program through a serial port to flash memory, or some form of programmable memory in a socket in earlier systems.
History
The reason for the existence of a development board was solely to provide a system for learning to use a new microprocessor, not for entertainment, so everything superfluous was left out to keep costs down. Even an enclosure was not supplied, nor a power supply. This is because the board would only be used in a "laboratory" environment so it did not need an enclosure, and the board could be powered by a typical bench power supply already available to an electronic engineer.
Microprocessor training development kits were not always produced by microprocessor manufacturers. Many systems that can be classified as microprocessor development kits were produced by third parties, one example is the Sinclair MK14, which was inspired by the official SC/MP development board from National Semiconductor, the "NS introkit".
Although these development boards were not designed for hobbyists, they were often bought by them because they were the earliest cheap microcomputer devices available. They often added all kinds of expansions, such as more memory, a video interface etc. It was very popular to use (or write) an implementation of Tiny Basic. The most popular microprocessor board, the KIM-1, received the most attention from the hobby community, because it was much cheaper than most other development boards, and more software was available for it (Tiny Basic, games, assemblers), and cheap expansion cards to add more memory or other functionality. More articles were published in magazines like "Kilobaud Microcomputing" that described home-brew software and hardware for the KIM-1 than for other development boards.
Today some chip producers still release "test boards" to demonstrate their chips and to serve as a "reference design". Their significance these days is much smaller than it was in the days when such boards (the KIM-1 being the canonical example) were the only low-cost way to get "hands-on" experience with microprocessors.
Features
The most important feature of the microprocessor development board was the ROM-based built-in machine language monitor, or "debugger" as it was also sometimes called. Often the name of the board was related to the name of this monitor program, for example the name of the monitor program of the KIM-1 was "Keyboard Input Monitor", because the ROM-based software allowed entry of programs without the rows of cumbersome toggle switches that older systems used. The popular Motorola 6800-based systems often used a monitor with a name with the word "bug" for "debugger" in it, for example the popular "MIKBUG".
Input was normally done with a hexadecimal keyboard, using a machine language monitor program, and the display consisted only of a 7-segment display. Backup storage of written assembler programs was primitive: typically only a cassette-type interface was provided, or the serial Teletype interface was used to read (or punch) a paper tape.
Often the board had some kind of expansion connector that brought out all the necessary CPU signals, so that an engineer could build and test an experimental interface or other electronic device.
External interfaces on the bare board were often limited to a single RS-232 or current loop serial port, so a terminal, printer, or Teletype could be connected.
List of historical development boards
8085AAT, an Intel 8085 microprocessor training unit from Paccom
CDP18S020 evaluation board for the RCA CDP1802 microprocessor
EVK 300 6800 single board from American Microsystems (AMI)
Explorer/85 expandable learning system based on the 8085, by Netronics's research and development ltd.
ITT experimenter used switches and LEDs, and an Intel 8080
JOLT was designed by Raymond M. Holt, co-founder of Microcomputer Associates, Incorporated.
KIM-1 the development board for the MOS Technology/Rockwell/Synertek 6502 microprocessor. The name KIM is short for "keyboard input monitor"
SYM-1 a slightly improved KIM-1 with improved software, more memory, and I/O. Also known as the VIM
AIM-65 an improved KIM-1 with an alpha-numerical LED display, and a built-in printer.
The KIM-1 also led to some unofficial copies, such as the super-KIM and the Junior from the magazine Elektor, and the MCS Alpha 1
LC80 by Kombinat Mikroelektronik Erfurt
MAXBOARD development board for the Motorola 6802.
MEK6800D2 the official development board for the Motorola 6800 microprocessor. The name of the monitor software was MIKBUG
MicroChroma 68 color graphics kit. Developed by Motorola to demonstrate their new 6847 video display processor. The monitor software was called TVBUG
Motorola EXORciser development system (rack based) for the Motorola 6809
Microprofessor I (MPF-1) Z80 development and training system by Acer
Tangerine Microtan 65 6502 development system with VDU, that could be expanded to a more capable system.
MST-80B 8080 training system by the Lawrence Livermore National Laboratory
NS introkit by National Semiconductor featuring the SC/MP, the predecessor to the Sinclair MK14
NRI microcomputer, a system developed to teach computer courses by McGraw-Hill and the National Radio Institute (NRI)
MK14 Training system for the SC/MP microprocessor from Sinclair Research Ltd.
SDK-80 Intel's development board for their 8080 microprocessor
SDK-51 Intel's development board for their Intel MCS-51
SDK-85 Intel's development board for their 8085 microprocessor
SDK-86 Intel's development board for their 8086 microprocessor
Siemens Microset-8080 boxed system based on an 8080.
Signetics Instructor 50 based on the Signetics 2650.
SGS-ATES Nanocomputer Z80.
RCA Cosmac Super Elf, an 1802 learning system with an RCA 1861 Video Display Controller.
TK-80 the development board for NEC's clone of Intel's i8080, the μPD 8080A
TM 990/100M evaluation board for the Texas Instruments TMS9900
TM 990/180M evaluation board for the Texas Instruments TMS9800
XPO-1 Texas Instruments development system for the PPS-4/1 line of microcontrollers
DSP evaluation boards
A DSP evaluation board, sometimes also known as a DSP starter kit (DSK) or a DSP evaluation module, is an electronic board with a digital signal processor used for experiments, evaluation and development. Applications are developed on DSP starter kits using software usually referred to as an integrated development environment (IDE). Texas Instruments and Spectrum Digital are two companies that produce these kits.
Two examples are the DSK 6416 by Texas Instruments, based on the TMS320C6416 fixed point digital signal processor, a member of C6000 series of processors that is based on VelociTI.2 architecture, and the DSK 6713 by Texas Instruments, which was developed in cooperation with Spectrum Digital, based on the TMS320C6713 32-bit floating point digital signal processor, which allows for programming in C and assembly.
See also
Embedded system
Intel system development kit
Single-board computer
Single-board microcontroller
References
Early microcomputers
Telecommunications engineering | Microprocessor development board | [
"Engineering"
] | 1,708 | [
"Electrical engineering",
"Telecommunications engineering"
] |
9,175,084 | https://en.wikipedia.org/wiki/Steinhaus%20theorem | In the mathematical field of real analysis, the Steinhaus theorem states that the difference set of a set of positive measure contains an open neighbourhood of zero. It was first proved by Hugo Steinhaus.
Statement
Let $A$ be a Lebesgue-measurable set on the real line such that the Lebesgue measure of $A$ is not zero. Then the difference set
$$A - A = \{a - b \mid a, b \in A\}$$
contains an open neighbourhood of the origin.
The general version of the theorem, first proved by André Weil, states that if G is a locally compact group, and A ⊂ G a subset of positive (left) Haar measure, then
$$A A^{-1} = \{a b^{-1} \mid a, b \in A\}$$
contains an open neighbourhood of unity.
The theorem can also be extended to nonmeagre sets with the Baire property. The proof of these extensions, sometimes also called Steinhaus theorem, is almost identical to the one below.
Proof
The following simple proof can be found in a collection of problems (in Russian) by the late professor H. M. Martirosian of Yerevan State University, Armenia.
Fix $\varepsilon \in (0, \tfrac{1}{2})$. There exists an open set $\mathcal{U}$ such that $A \subset \mathcal{U}$ and $\mu(\mathcal{U}) < (1+\varepsilon)\,\mu(A)$. Since $\mathcal{U}$ is a union of disjoint open intervals, we can find an interval $\Delta \subset \mathcal{U}$ such that $\mu(A \cap \Delta) > (1-\varepsilon)\,\mu(\Delta)$; otherwise, summing over the component intervals would give $\mu(A) \le (1-\varepsilon)\,\mu(\mathcal{U}) < (1-\varepsilon^{2})\,\mu(A)$, which is impossible.
Let $B = A \cap \Delta$ and $\delta = (1-2\varepsilon)\,\mu(\Delta) > 0$. Suppose for contradiction that there exists $x$ with $|x| < \delta$ such that $B \cap (B + x) = \varnothing$. Then, since $B$ and $B + x$ are disjoint,
$$\mu\bigl(B \cup (B+x)\bigr) = 2\,\mu(B) > 2(1-\varepsilon)\,\mu(\Delta).$$
But we also have, since $B \cup (B+x)$ is contained in an interval of length $\mu(\Delta) + |x|$,
$$\mu\bigl(B \cup (B+x)\bigr) \le \mu(\Delta) + |x| < 2(1-\varepsilon)\,\mu(\Delta),$$
which contradicts the previous inequality.
Hence $B \cap (B + x) \neq \varnothing$ for all $|x| < \delta$; every such $x$ is therefore a difference of two elements of $A$, and it follows immediately that $(-\delta, \delta) \subset A - A$, as desired.
Corollary
A corollary of this theorem is that any measurable proper subgroup of $(\mathbb{R}, +)$ is of measure zero: a measurable subgroup of positive measure would, by the theorem, contain an open interval around the origin and hence, being closed under addition, would be all of $\mathbb{R}$.
See also
Falconer's conjecture
Notes
References
Theorems in measure theory
Articles containing proofs
Theorems in real analysis | Steinhaus theorem | [
"Mathematics"
] | 327 | [
"Theorems in mathematical analysis",
"Theorems in real analysis",
"Theorems in measure theory",
"Articles containing proofs"
] |
9,175,375 | https://en.wikipedia.org/wiki/Prandtl%E2%80%93Meyer%20function | In aerodynamics, the Prandtl–Meyer function describes the angle through which a flow turns isentropically from sonic velocity (M = 1) to a Mach number (M) greater than 1. The maximum angle through which a sonic (M = 1) flow can be turned around a convex corner is calculated for M = ∞. For an ideal gas, it is expressed as follows,
$$\nu(M) = \sqrt{\frac{\gamma+1}{\gamma-1}}\,\arctan\sqrt{\frac{\gamma-1}{\gamma+1}\left(M^{2}-1\right)} \;-\; \arctan\sqrt{M^{2}-1}$$
where $\nu$ is the Prandtl–Meyer function, $M$ is the Mach number of the flow and $\gamma$ is the ratio of the specific heat capacities.
By convention, the constant of integration is selected such that $\nu(1) = 0$.
As the Mach number varies from 1 to $\infty$, $\nu$ takes values from 0 to $\nu_{\max}$, where
$$\nu_{\max} = \frac{\pi}{2}\left(\sqrt{\frac{\gamma+1}{\gamma-1}} - 1\right)$$
For an isentropic turn, the upstream and downstream states are related through the function by $|\nu(M_{2}) - \nu(M_{1})| = |\theta|$, where $\theta$ is the absolute value of the angle through which the flow turns, $M$ is the flow Mach number and the suffixes "1" and "2" denote the initial and final conditions respectively.
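As an illustrative numerical sketch (not part of the original article), the function can be evaluated directly; the value γ = 1.4 below is an assumed specific-heat ratio corresponding to air treated as an ideal diatomic gas.

import math

def prandtl_meyer(M, gamma=1.4):
    # Prandtl-Meyer angle nu(M) in radians for an ideal gas, defined for M >= 1.
    if M < 1.0:
        raise ValueError("the Prandtl-Meyer function is defined for M >= 1")
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    return a * math.atan(math.sqrt((M * M - 1.0) / (a * a))) - math.atan(math.sqrt(M * M - 1.0))

# Accelerating a sonic flow (M = 1, nu = 0) to M = 2 corresponds to a turn of about 26.4 degrees;
# the limiting value nu_max for gamma = 1.4 is about 130.5 degrees.
print(math.degrees(prandtl_meyer(2.0)))
print(math.degrees(prandtl_meyer(1.0e9)))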
See also
Gas dynamics
Prandtl–Meyer expansion fan
References
Aerodynamics
Fluid dynamics | Prandtl–Meyer function | [
"Chemistry",
"Engineering"
] | 189 | [
"Chemical engineering",
"Aerodynamics",
"Aerospace engineering",
"Piping",
"Fluid dynamics stubs",
"Fluid dynamics"
] |
9,175,809 | https://en.wikipedia.org/wiki/Enterogastrone | An enterogastrone is any hormone secreted by the mucosa of the duodenum in the lower gastrointestinal tract in response to dietary lipids that inhibits the caudal (or "forward, analward") motion of the contents of chyme. The function of enterogastrone is almost the same as gastric inhibitor peptide, it inhibits gastric secretion and motility of the stomach.
Examples
Examples include:
Secretin
Cholecystokinin
References
External links
Digestive system | Enterogastrone | [
"Biology"
] | 109 | [
"Digestive system",
"Organ systems"
] |
9,176,384 | https://en.wikipedia.org/wiki/Cathode%20bias | In electronics, cathode bias (also known as self-bias, or automatic bias) is a technique used with vacuum tubes to make the direct current (dc) cathode voltage positive in relation to the negative side of the plate voltage supply by an amount equal to the magnitude of the desired grid bias voltage.
Operation
The most common cathode bias implementation passes the cathode current through a resistor connected between the cathode and the negative side of the plate voltage supply. The cathode current through this resistor causes the desired voltage drop across the resistor and places the cathode at a positive dc voltage equal in magnitude to the negative grid bias voltage required. The grid circuit puts the grid at zero volts dc relative to negative side of the plate voltage supply, causing the grid voltage to be negative with respect to the cathode by the required amount. Directly heated cathode circuits connect the cathode bias resistor to the center tap of the filament transformer secondary or to the center tap of a low resistance connected across the filament.
Design
To find the correct resistor value, first the tube operating point is determined. The plate current, the grid voltage relative to the cathode and the screen current (if applicable) are noted for the operating point. The cathode bias resistor value is found by dividing the absolute value of the operating point grid voltage by the operating point cathode current (plate current plus screen current). The power dissipated by the cathode bias resistor is the product of the square of the cathode current and the resistance in ohms.
Any signal frequency effect of the cathode resistor may be minimized by providing a suitable bypass capacitor in parallel with the resistor. In general, the capacitor value is selected such that the time constant of the capacitor and bias resistor is an order of magnitude greater than the period of the lowest frequency to be amplified. The capacitor makes the gain of the stage, at the signal frequencies, essentially the same as if the cathode was connected directly to the circuit return.
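As a worked illustration of the two rules above, the following sketch uses a hypothetical operating point; the numbers are chosen only to show the arithmetic and do not describe any particular tube.

# Hypothetical operating point, for illustration only.
grid_bias_volts = -7.5          # required grid-to-cathode voltage
plate_current_amps = 0.045      # plate current at the operating point
screen_current_amps = 0.005     # screen current at the operating point
lowest_frequency_hz = 50.0      # lowest signal frequency to be amplified

cathode_current_amps = plate_current_amps + screen_current_amps
resistor_ohms = abs(grid_bias_volts) / cathode_current_amps        # 7.5 V / 0.050 A = 150 ohms
dissipation_watts = cathode_current_amps ** 2 * resistor_ohms      # 0.050^2 * 150 = 0.375 W

# Bypass capacitor chosen so that the R*C time constant is ten times the period
# of the lowest frequency to be amplified.
capacitor_farads = 10.0 / (lowest_frequency_hz * resistor_ohms)    # about 1300 microfarads

print(resistor_ohms, dissipation_watts, capacitor_farads)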
In some designs, the degenerative (negative) feedback caused by the cathode resistor may be desirable. In this case, all or a portion of the cathode resistance is not bypassed by a capacitor.
In class A push-pull circuits a pair of tubes driven by identical signals 180 degrees out of phase may share a common unbypassed cathode resistor. Degeneration will not occur because, if the grid voltage versus plate current characteristics of the two tubes are matched, the current through the cathode resistor will not vary during the 360 degrees of the signal cycle.
Application considerations
The voltage gain of the stage is reduced by the cathode resistor; the cathode resistor appears in series with the plate load impedance in the voltage gain equation (see the expression below).
Local negative feedback (cathode degeneration) is caused by the cathode resistor.
The "B" or plate supply voltage available to the tube is, in effect, reduced by the magnitude of the bias voltage.
Comparison with fixed bias
Cathode bias, as a solution, is often the alternative to using fixed bias. Robert Tomer, in his 1960 book about vacuum tubes, which mainly concerned itself with strategies for improving tube lifespan, condemned fixed bias designs in favor of cathode bias. He said that fixed bias, unlike cathode bias, does not provide a margin for error that protects the system from inevitable differences between vacuum tubes nor does it protect against run-away conditions caused by tube or circuit malfunctions. He also asserted that most tube specialists consider fixed bias operation to be dangerous. Despite this stance, fixed bias is commonly used in tube amplifiers today. Tomer identified the trend toward fixed bias designs in 1960 but was not certain about the reasons for it.
See also
Biasing
References
Further reading
(technical info)
Vacuum tubes | Cathode bias | [
"Physics"
] | 818 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
9,176,452 | https://en.wikipedia.org/wiki/Supramolecular%20polymer | Supramolecular polymers are a subset of polymers where the monomeric units are connected by reversible and highly directional secondary interactions–that is, non-covalent bonds. These non-covalent interactions include van der Waals interactions, hydrogen bonding, Coulomb or ionic interactions, π-π stacking, metal coordination, halogen bonding, chalcogen bonding, and host–guest interaction. Their behavior can be described by the theories of polymer physics in dilute and concentrated solution, as well as in the bulk.
Additionally, some supramolecular polymers have distinctive characteristics, such as the ability to self-heal. Covalent polymers can be difficult to recycle, but supramolecular polymers may address this problem.
History
The preamble of the field of supramolecular polymers can be considered dye aggregates and host–guest complexes. In the early 19th century, it was noticed that dyes aggregate via "a special kind of polymerization". In 1988, Takuzo Aida, a Japanese polymer chemist, reported the concept of cofacial assembly wherein amphiphilic porphyrin monomers are connected via van der Waals interaction, forming one-dimensional architectures in solution, which can be considered a prototype of supramolecular polymers. Soon thereafter, one-dimensional aggregates were described based on hydrogen bonding interaction in the crystalline state. With a different strategy using hydrogen bonds, Jean M. J. Fréchet showed in 1989 that mesogenic molecules with carboxylic acid and pyridyl motifs, upon mixing in bulk, heterotropically dimerize to form a stable liquid crystalline structure. In 1990, Jean-Marie Lehn showed that this strategy can be expanded to form a new category of polymers, which he called "liquid crystalline supramolecular polymer", using complementary triple hydrogen bonding motifs in bulk. In 1993, M. Reza Ghadiri reported a nanotubular supramolecular polymer in which a β-sheet-forming macrocyclic peptide monomer assembled via multiple hydrogen bonds between adjacent macrocycles. In 1994, Anselm C. Griffin showed an amorphous supramolecular material using a single hydrogen bond between homotropic molecules having carboxylic acid and pyridine termini. The idea of making mechanically strong polymeric materials by 1D supramolecular association of small molecules requires a high association constant between the repeating building blocks. In 1997, E.W. "Bert" Meijer reported a telechelic monomer with ureidopyrimidinone termini as a "self-complementary" quadruple hydrogen bonding motif and demonstrated that the resulting supramolecular polymer in chloroform shows a temperature-dependent viscoelastic property in solution. This was the first demonstration that supramolecular polymers, when sufficiently mechanically robust, are physically entangled in solution.
Formation mechanisms
Monomers undergoing supramolecular polymerization are considered to be in equilibrium with the growing polymers, and thermodynamic factors therefore dominate the system. However, when the constituent monomers are connected via strong and multivalent interactions, a "metastable" kinetic state can dominate the polymerization. Externally supplied energy, in the form of heat in most cases, can transform the "metastable" state into a thermodynamically stable polymer. A clear understanding of the multiple pathways that exist in supramolecular polymerization is still under debate; however, the concept of "pathway complexity", introduced by E.W. "Bert" Meijer, shed light on the kinetic behavior of supramolecular polymerization. Since then, many scientists have been expanding the scope of "pathway complexity" because it can produce a variety of interesting assembled structures from the same monomeric units. Along this line of kinetically controlled processes, supramolecular polymers having "stimuli-responsive" and "thermally bisignate" characteristics are also possible.
In conventional covalent polymerization, two models based on step-growth and chain-growth mechanisms are operative. Nowadays, a similar subdivision is applicable to supramolecular polymerization: the isodesmic, also known as equal-K, model (step-growth mechanism) and the cooperative or nucleation–elongation model (chain-growth mechanism). A third category is seeded supramolecular polymerization, which can be considered a special case of the chain-growth mechanism.
Step-growth polymerization
The supramolecular equivalent of the step-growth mechanism is commonly known as the isodesmic or equal-K model (K represents the total binding interaction between two neighboring monomers). In isodesmic supramolecular polymerization, no critical temperature or concentration of monomers is required for the polymerization to occur, and the association constant between polymer and monomer is independent of the polymer chain length. Instead, the length of the supramolecular polymer chains rises as the concentration of monomers in the solution increases, or as the temperature decreases. In conventional polycondensation, the association constant is usually so large that it leads to a high degree of polymerization; however, a byproduct is formed. In isodesmic supramolecular polymerization, due to non-covalent bonding, the association between monomeric units is weak, and the degree of polymerization strongly depends on the strength of interaction, i.e. multivalent interaction between monomeric units. For instance, supramolecular polymers consisting of bifunctional monomers having a single hydrogen bonding donor/acceptor at their termini usually end up with a low degree of polymerization, whereas those with quadruple hydrogen bonding, as in the case of ureidopyrimidinone motifs, result in a high degree of polymerization. In ureidopyrimidinone-based supramolecular polymers, the experimentally observed molecular weight at semi-dilute concentrations is on the order of 10⁶ Dalton, and the molecular weight of the polymer can be controlled by adding mono-functional chain-cappers.
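A commonly quoted consequence of the equal-K model, included here only as an illustrative sketch rather than as text from the article, is that the number-average degree of polymerization grows with the association constant K and the total monomer concentration c_tot as DP_N = (1 + sqrt(1 + 4·K·c_tot))/2, i.e. roughly sqrt(K·c_tot) when K·c_tot is large. The numerical values below are assumptions chosen only to show the trend.

import math

def isodesmic_degree_of_polymerization(k_assoc_per_molar, total_conc_molar):
    # Number-average degree of polymerization predicted by the equal-K (isodesmic) model.
    x = k_assoc_per_molar * total_conc_molar
    return 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * x))

# Illustrative values: a strongly associating motif (K ~ 1e7 per molar) at a
# 10 millimolar monomer concentration gives chains a few hundred units long.
print(isodesmic_degree_of_polymerization(1.0e7, 0.01))   # roughly 317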
Chain-growth polymerization
Conventional chain-growth polymerization involves at least two phases, initiation and propagation, while in some cases termination and chain-transfer phases also occur. Chain-growth supramolecular polymerization in a broad sense involves two distinct phases: a less favored nucleation and a favored propagation. In this mechanism, after the formation of a nucleus of a certain size, the association constant is increased, further monomer addition becomes more favored, and polymer growth is initiated. Long polymer chains will form only above a minimum concentration of monomer and below a certain temperature. However, to realize a supramolecular analogue of covalent chain-growth polymerization, a challenging prerequisite is the design of appropriate monomers that can polymerize only by the action of initiators. Recently, an example of chain-growth supramolecular polymerization with "living" characteristics was demonstrated. In this case, a bowl-shaped monomer with amide-appended side chains forms a kinetically favored intramolecular hydrogen bonding network and does not spontaneously undergo supramolecular polymerization at ambient temperatures. However, an N-methylated version of the monomer serves as an initiator by opening the intramolecular hydrogen bonding network for the supramolecular polymerization, just like ring-opening covalent polymerization. The chain end in this case remains active for further extension of the supramolecular polymer, and hence the chain-growth mechanism allows for the precise control of supramolecular polymer materials.
Seeded polymerization
This is a special category of chain-growth supramolecular polymerization, where the monomer nucleates only in an early stage of polymerization to generate "seeds" and becomes active for polymer chain elongation upon further addition of a new batch of monomer. Secondary nucleation is suppressed in most cases, and it is thus possible to realize a narrow polydispersity of the resulting supramolecular polymer. In 2007, Ian Manners and Mitchell A. Winnik introduced this concept using a polyferrocenyldimethylsilane–polyisoprene diblock copolymer as the monomer, which assembles into cylindrical micelles. When a fresh feed of the monomer is added to the micellar "seeds" obtained by sonication, the polymerization starts in a living polymerization manner. They named this method crystallization-driven self-assembly (CDSA); it is applicable to the construction of micron-scale anisotropic supramolecular structures in 1D–3D. A conceptually different seeded supramolecular polymerization was shown by Kazunori Sugiyasu with a porphyrin-based monomer bearing amide-appended long alkyl chains. At low temperature, this monomer preferentially forms spherical J-aggregates, while fibrous H-aggregates form at higher temperature. By adding a sonicated mixture of the J-aggregates ("seeds") into a concentrated solution of the J-aggregate particles, long fibers can be prepared via living seeded supramolecular polymerization. Frank Würthner achieved similar seeded supramolecular polymerization with an amide-functionalized perylene bisimide as monomer. Importantly, seeded supramolecular polymerization is also applicable to the preparation of supramolecular block copolymers.
Examples
Hydrogen bonding interaction
Monomers capable of forming single, double, triple or quadruple hydrogen bonds have been utilized for making supramolecular polymers, and stronger association of the monomers is obviously possible when they carry a larger number of hydrogen bonding donor/acceptor motifs. For instance, a ureidopyrimidinone-based monomer with self-complementary quadruple hydrogen bonding termini polymerized in solution in accordance with the theory of conventional polymers and displayed a distinct viscoelastic nature at ambient temperatures.
π-π stacking
Monomers with aromatic motifs such as bis(merocyanine), oligo(para-phenylenevinylene) (OPV), perylene bisimide (PBI) dye, cyanine dye, corannulene and nano-graphene derivatives have been employed to prepare supramolecular polymers. In some cases, hydrogen bonding side chains appended onto the core aromatic motif help to hold the monomer strongly in the supramolecular polymer. A notable system in this category is a nanotubular supramolecular polymer formed by the supramolecular polymerization of amphiphilic hexa-peri-hexabenzocoronene (HBC) derivatives. Generally, nanotubes are categorized morphologically as 1D objects; however, their walls adopt a 2D geometry and therefore require a different design strategy. HBC amphiphiles in polar solvents solvophobically assemble into a 2D bilayer membrane, which rolls up into a helical tape or a nanotubular polymer. Conceptually similar amphiphilic designs based on cyanine dye and zinc chlorin dye also polymerize in water, resulting in nanotubular supramolecular polymers.
Host-guest interaction
A variety of supramolecular polymers can be synthesized by using monomers with complementary host–guest binding motifs, such as crown ethers/ammonium ions, cucurbiturils/viologens, calixarene/viologens, cyclodextrins/adamantane derivatives, and pillararene/imidazolium derivatives. When the monomers are "heteroditopic", supramolecular copolymers result, provided the monomers do not homopolymerize. Akira Harada was one of the first to recognize the importance of combining polymers and cyclodextrins. Feihe Huang showed an example of a supramolecular alternating copolymer from two heteroditopic monomers carrying both crown ether and ammonium ion termini. Takeharu Haino demonstrated an extreme example of sequence control in a supramolecular copolymer, where three heteroditopic monomers are arranged in an ABC sequence along the copolymer chain. The design strategy utilizing three distinct binding interactions, ball-and-socket (calix[5]arene/C60), donor-acceptor (bisporphyrin/trinitrofluorenone), and Hamilton's H-bonding interactions, is the key to attaining the high orthogonality needed to form an ABC supramolecular terpolymer.
Chirality
The stereochemical information of a chiral monomer can be expressed in a supramolecular polymer. Helical supramolecular polymers with P- and M-conformation are widely seen, especially those composed of disc-shaped monomers. When the monomers are achiral, both P- and M-helices are formed in equal amounts. When the monomers are chiral, typically due to the presence of one or more stereocenters in the side chains, the diastereomeric relationship between P- and M-helices leads to the preference of one conformation over the other. A typical example is a C3-symmetric disk-shaped chiral monomer that forms helical supramolecular polymers via the "majority rule". A slight excess of one enantiomer of the chiral monomer results in a strong bias towards either the right-handed or left-handed helical geometry at the supramolecular polymer level. In this case, a characteristic nonlinear dependence of the anisotropy factor, g, on the enantiomeric excess of the chiral monomer can generally be observed. As in small-molecule chiral systems, the chirality of a supramolecular polymer is also affected by chiral solvents. Applications such as catalysts for asymmetric synthesis and circularly polarized luminescence have also been observed for chiral supramolecular polymers.
Copolymers
A copolymer is formed from more than one monomeric species. Advanced polymerization techniques have been established for the preparation of covalent copolymers, but supramolecular copolymers are still in their infancy and progressing slowly. In recent years, all plausible categories of supramolecular copolymer, such as random, alternating, block, blocky, and periodic, have been demonstrated in a broad sense.
Properties
Supramolecular polymers are the subject of research in academia and industry.
Reversibility and dynamicity
The stability of a supramolecular polymer can be described using the association constant, Kass. When Kass ≤ 10^4 M^-1, the polymeric aggregates are typically small and do not show any interesting properties, and when Kass ≥ 10^10 M^-1, the supramolecular polymer behaves just like a covalent polymer because it lacks dynamics. So an optimum Kass of 10^4–10^10 M^-1 needs to be attained to produce functional supramolecular polymers. The dynamics and stability of supramolecular polymers are often affected by additives (e.g. a co-solvent or a chain capper). When a good solvent, for instance chloroform, is added to a supramolecular polymer in a poor solvent, for instance heptane, the polymer disassembles. However, in some cases co-solvents contribute to the stabilization or destabilization of the supramolecular polymer. For instance, supramolecular polymerization of a hydrogen-bonded porphyrin-based monomer in a hydrocarbon solvent containing a minute amount of a hydrogen-bond-scavenging alcohol shows distinct pathways, i.e. polymerization favored both by cooling and by heating, known as "thermally bisignate supramolecular polymerization". In another example, minute amounts of molecularly dissolved water in apolar solvents, such as methylcyclohexane, become part of the supramolecular polymer at lower temperatures due to specific hydrogen-bonding interactions between the monomer and water.
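How Kass translates into chain length can be illustrated with a simple calculation. The sketch below assumes an isodesmic (equal-K) growth mechanism, in which every monomer addition is governed by the same association constant; the concentration used is an arbitrary illustrative value rather than data for any particular system.

```python
import math

def isodesmic_dp(k_ass, c_total):
    """Number-averaged degree of polymerization <DP> for an isodesmic (equal-K)
    supramolecular polymerization: <DP> = (1 + sqrt(1 + 4*K*c)) / 2,
    with K in M^-1 and the total monomer concentration c in M."""
    return 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * k_ass * c_total))

c = 1e-3   # 1 mM total monomer concentration (illustrative value)
for k in (1e4, 1e6, 1e8, 1e10):
    print(f"Kass = {k:.0e} M^-1  ->  <DP> ~ {isodesmic_dp(k, c):.0f}")
# Small Kass gives only short oligomers, while Kass >= 1e10 M^-1 gives chains
# thousands of units long that behave much like covalent polymers.
```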
Self-healing
Supramolecular polymers may be relevant to self-healing materials. A supramolecular rubber based on vitrimers can self-heal simply by pressing the two broken edges of the material together. High mechanical strength and self-healing ability are generally mutually exclusive, so a glassy material that can self-heal at room temperature remained a challenge until recently. A supramolecular polymer based on ether-thiourea units is mechanically robust (elastic modulus of 1.4 GPa) yet can self-heal at room temperature by compressing the fractured surfaces together. The invention of a self-healable polymer glass overturned the preconception that only soft, rubbery materials can self-heal.
Another strategy uses bivalent poly(isobutylene)s (PIBs) functionalized with barbituric acid at the head and tail. Multiple hydrogen bonds between the carbonyl and amide groups of barbituric acid enable the formation of a supramolecular network. In this case, small cut PIB-based disks can recover from mechanical damage after several hours of contact at room temperature.
Interactions between catechol and ferric ions yield pH-controlled self-healing supramolecular polymers. The formation of mono-, bis- and tris-catechol–Fe3+ complexes can be manipulated by pH, of which the bis- and tris-catechol–Fe3+ complexes show elastic moduli as well as self-healing capacity. For example, the tris-catechol–Fe3+ network can restore its cohesiveness and shape after being torn. Chain-folding polyimides and pyrenyl-end-capped chains likewise give rise to supramolecular networks.
Optoelectronic
By incorporating electron donors and electron acceptors into the supramolecular polymers, features of artificial photosynthesis can be replicated.
Biocompatible
DNA is a major example of a natural supramolecular polymer, and much effort has been devoted to related but synthetic materials. At the same time, their reversible and dynamic nature makes supramolecular polymers biodegradable, which overcomes the hard-to-degrade problem of covalent polymers and makes supramolecular polymers a promising platform for biomedical applications. Being able to degrade in the biological environment greatly lowers the potential toxicity of the polymers and therefore enhances their biocompatibility.
Biomedical applications
Owing to their biodegradability and biocompatibility, supramolecular polymers show great potential for the development of drug delivery, gene transfection and other biomedical applications.
Drug delivery: Multiple cellular stimuli can induce responses in supramolecular polymers. The dynamic molecular skeletons of supramolecular polymers can be depolymerized when exposed to external stimuli such as pH in vivo. On the basis of this property, supramolecular polymers can serve as drug carriers; for example, hydrogen bonding between nucleobases can be used to induce self-assembly into pH-sensitive spherical micelles.
Gene transfection: Effective and low-toxicity nonviral cationic vectors are highly desired in the field of gene therapy. On account of their dynamic and stimuli-responsive properties, supramolecular polymers offer a cogent platform for constructing vectors for gene transfection. By combining a ferrocene dimer with a β-cyclodextrin dimer, a redox-controlled supramolecular polymer system has been proposed as a vector. In COS-7 cells, this supramolecular polymeric vector released the enclosed DNA upon exposure to hydrogen peroxide and achieved gene transfection.
Adjustable mechanical properties
Basic principle: Noncovalent interactions between polymer molecules significantly affect the mechanical properties of supramolecular polymers. The association and dissociation rates of the interacting groups determine the intermolecular interaction strength, and for supramolecular polymers the dissociation kinetics of the dynamic network plays a critical role in the material design and mechanical properties of supramolecular polymer networks (SPNs). By changing the dissociation rate of the dynamic crosslinks, the mechanical properties of supramolecular polymers can be adjusted: with a slow dissociation rate, glass-like mechanical properties dominate, whereas with a fast dissociation rate, rubber-like mechanical properties dominate. These properties can be obtained by changing the molecular structure of the crosslinking part of the molecule.
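A minimal way to see how the crosslink dissociation rate separates stiff, glass-like behaviour from soft, rubber-like behaviour is a single-mode Maxwell (transient network) picture in which the stress relaxation time is identified with the crosslink lifetime, tau ≈ 1/kd. The sketch below is an illustrative simplification rather than the model used in the studies cited here; the observation frequency, plateau modulus and rate values are arbitrary assumptions.

```python
def maxwell_moduli(omega, g0, k_d):
    """Storage (G') and loss (G'') moduli of a single-mode Maxwell model whose
    stress relaxation time is taken as the crosslink lifetime tau = 1/k_d."""
    wt = omega / k_d                       # omega * tau
    return g0 * wt**2 / (1.0 + wt**2), g0 * wt / (1.0 + wt**2)

omega = 1.0    # observation frequency, rad/s (roughly a 1-second deformation)
g0 = 1.0e6     # plateau modulus, Pa (illustrative)

for k_d in (1e-3, 1.0, 1e3):               # slow, intermediate, fast dissociation (1/s)
    g_el, g_loss = maxwell_moduli(omega, g0, k_d)
    regime = "stiff, elastic response" if g_el > g_loss else "soft, fast-relaxing response"
    print(f"k_d = {k_d:g} 1/s: G' = {g_el:.3g} Pa, G'' = {g_loss:.3g} Pa -> {regime}")
```

When the crosslinks outlive the deformation (small kd), the network behaves as an elastic solid; when they dissociate much faster than the deformation (large kd), stress relaxes quickly and the material is soft and dissipative.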
Experimental examples: One study controlled the molecular design of cucurbit[8]uril (CB[8]) crosslinks. The hydrophobic structure of the second guest in the CB[8]-mediated host-guest interaction can tune the dissociation kinetics of the dynamic crosslinks. To slow the dissociation rate (kd), a stronger enthalpic driving force is needed for the second-guest association (ka), releasing more of the conformationally restricted water from the CB[8] cavity. In other words, the most hydrophobic second guest exhibited the highest Keq and the lowest kd values. Therefore, by polymerizing different concentrations of the guest subgroups, different dynamics of the intermolecular network can be designed, and mechanical properties such as compressive strength can be tuned by this process. When polymerized with different hydrophobic subgroups for CB[8], the compressive strength was found to increase across the series in correlation with a decrease in kd, and could be tuned between 10 and 100 MPa. NVI is the most hydrophobic subgroup of the monomer series, having two benzene rings, whereas BVI, the least hydrophobic subgroup, serves as the control. In addition, with varying concentrations of hydrophobic subgroups for CB[8], the polymerized materials show different compressive properties: polymers with the highest concentration of hydrophobic subgroups show the highest compressive strength, and vice versa.
Biomaterials
Supramolecular polymers can simultaneously meet the requirements of aqueous compatibility, biodegradability, biocompatibility, stimuli-responsiveness and other strict criteria. Consequently, supramolecular polymers could be applicable to biomedical fields.
The reversible nature of supramolecular polymers can produce biomaterials that can sense and respond to physiological cues, or that mimic the structural and functional aspects of biological signaling.
Protein delivery, bio-imaging and diagnosis, and tissue engineering are also well developed.
Further reading
References
Supramolecular chemistry
Polymers | Supramolecular polymer | [
"Chemistry",
"Materials_science"
] | 4,811 | [
"Polymer chemistry",
"nan",
"Polymers",
"Nanotechnology",
"Supramolecular chemistry"
] |
9,176,798 | https://en.wikipedia.org/wiki/Quotient%20of%20subspace%20theorem | In mathematics, the quotient of subspace theorem is an important property of finite-dimensional normed spaces, discovered by Vitali Milman.
Let (X, ||·||) be an N-dimensional normed space. There exist subspaces Z ⊂ Y ⊂ X such that the following holds:
The quotient space E = Y / Z is of dimension dim E ≥ c N, where c > 0 is a universal constant.
The induced norm || · || on E, defined by
||e|| = min { ||y|| : y ∈ Y, y + Z = e },
is uniformly isomorphic to Euclidean. That is, there exists a positive quadratic form ("Euclidean structure") Q on E, such that
(Q(e))^(1/2) ≤ ||e|| ≤ K (Q(e))^(1/2) for every e ∈ E,
with K > 1 a universal constant.
The statement is relatively easy to prove by induction on the dimension of Z (even for Y = X, Z = 0, c = 1) with a K that depends only on N; the point of the theorem is that K is independent of N.
In fact, the constant c can be made arbitrarily close to 1, at the expense of the constant K becoming large. The original proof allowed
Notes
References
Banach spaces
Asymptotic geometric analysis
Theorems in functional analysis | Quotient of subspace theorem | [
"Mathematics"
] | 244 | [
"Theorems in mathematical analysis",
"Theorems in functional analysis"
] |
9,176,966 | https://en.wikipedia.org/wiki/Monotrysia | The Monotrysia are a group of moths in the lepidopteran order, not currently considered to be a natural group or clade. The group is so named because the female has a single genital opening for mating and laying eggs, in contrast to the rest of the Lepidoptera (Ditrysia), which have two female reproductive openings. Later classifications used Monotrysia in a narrower sense for the nonditrysian Heteroneura, but this group was also found to be paraphyletic with respect to Ditrysia. Apart from the recently discovered family Andesianidae, most of the group consists of small, relatively understudied species.
See also
References
Further reading
Davis D. R. (1999). The Monotrysian Heteroneura. Pages 65–90 in: Lepidoptera: Moths and Butterflies. 1. Evolution, Systematics, and Biogeography. Handbook of Zoology Vol. IV, Part 35. N. P. Kristensen, ed. De Gruyter, Berlin and New York.
External links
Neolepidoptera
Obsolete arthropod taxa
Paraphyletic groups | Monotrysia | [
"Biology"
] | 233 | [
"Phylogenetics",
"Paraphyletic groups"
] |
9,177,825 | https://en.wikipedia.org/wiki/Dini%27s%20theorem | In the mathematical field of analysis, Dini's theorem says that if a monotone sequence of continuous functions converges pointwise on a compact space and if the limit function is also continuous, then the convergence is uniform.
Formal statement
If X is a compact topological space, and (f_n) is a monotonically increasing sequence (meaning f_n(x) ≤ f_{n+1}(x) for all n and x) of continuous real-valued functions on X which converges pointwise to a continuous function f, then the convergence is uniform. The same conclusion holds if (f_n) is monotonically decreasing instead of increasing. The theorem is named after Ulisse Dini.
This is one of the few situations in mathematics where pointwise convergence implies uniform convergence; the key is the greater control implied by the monotonicity. The limit function must be continuous, since a uniform limit of continuous functions is necessarily continuous. The continuity of the limit function cannot be inferred from the other hypotheses (consider, for example, x ↦ x^n on [0, 1]).
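The contrast can be checked numerically. In the sketch below, the monotonically increasing partial sums of the exponential series converge to the continuous function e^x on [0, 1], so Dini's theorem guarantees uniform convergence, whereas the sequence x^n has a discontinuous pointwise limit and its sup-norm error does not vanish. The grid-based maximum only approximates the true supremum, so this is an illustration, not a proof.

```python
import math
import numpy as np

x = np.linspace(0.0, 1.0, 2001)     # grid on the compact interval [0, 1]

for n in (5, 10, 20):
    # Monotone increasing partial sums of exp(x): Dini's theorem applies,
    # so the (grid-approximated) sup norm of the error tends to 0.
    partial = sum(x**k / math.factorial(k) for k in range(n + 1))
    err_dini = np.max(np.abs(partial - np.exp(x)))
    # x**n: the pointwise limit is 0 on [0, 1) and 1 at x = 1, hence discontinuous;
    # Dini does not apply and the sup-norm error does not go to 0.
    limit = np.where(x < 1.0, 0.0, 1.0)
    err_counter = np.max(np.abs(x**n - limit))
    print(f"n = {n:2d}: sup|S_n - exp| = {err_dini:.2e}, sup|x^n - limit| = {err_counter:.2f}")
```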
Proof
Let ε > 0 be given. For each n, let g_n = f − f_n, and let E_n be the set of those x ∈ X such that g_n(x) < ε. Each g_n is continuous, and so each E_n is open (because each E_n is the preimage of the open set (−∞, ε) under g_n, a continuous function). Since (f_n) is monotonically increasing, (g_n) is monotonically decreasing, and it follows that the sequence (E_n) is ascending (i.e. E_n ⊆ E_{n+1} for all n). Since (f_n) converges pointwise to f, it follows that the collection (E_n) is an open cover of X. By compactness, there is a finite subcover, and since the E_n are ascending the largest of these is a cover too. Thus we obtain that there is some positive integer N such that E_N = X. That is, if n ≥ N and x is a point in X, then |f(x) − f_n(x)| < ε, as desired.
Notes
References
Bartle, Robert G. and Sherbert Donald R.(2000) "Introduction to Real Analysis, Third Edition" Wiley. p 238. – Presents a proof using gauges.
Jost, Jürgen (2005) Postmodern Analysis, Third Edition, Springer. See Theorem 12.1 on page 157 for the monotone increasing case.
Rudin, Walter R. (1976) Principles of Mathematical Analysis, Third Edition, McGraw–Hill. See Theorem 7.13 on page 150 for the monotone decreasing case.
Theorems in real analysis
Articles containing proofs | Dini's theorem | [
"Mathematics"
] | 449 | [
"Theorems in mathematical analysis",
"Theorems in real analysis",
"Articles containing proofs"
] |
9,178,245 | https://en.wikipedia.org/wiki/Complex%20conjugate%20root%20theorem | In mathematics, the complex conjugate root theorem states that if P is a polynomial in one variable with real coefficients, and a + bi is a root of P with a and b real numbers, then its complex conjugate a − bi is also a root of P.
It follows from this (and the fundamental theorem of algebra) that, if the degree of a real polynomial is odd, it must have at least one real root. That fact can also be proved by using the intermediate value theorem.
Examples and consequences
The polynomial x^2 + 1 has roots ±i.
Any real square matrix of odd order has at least one real eigenvalue, since its characteristic polynomial is a real polynomial of odd degree. For example, if the matrix is orthogonal, then 1 or −1 is an eigenvalue.
The polynomial
has roots
and thus can be factored as
In computing the product of the last two factors, the imaginary parts cancel, and we get
The non-real factors come in pairs which when multiplied give quadratic polynomials with real coefficients. Since every polynomial with complex coefficients can be factored into 1st-degree factors (that is one way of stating the fundamental theorem of algebra), it follows that every polynomial with real coefficients can be factored into factors of degree no higher than 2: just 1st-degree and quadratic factors.
If the roots are and , they form a quadratic
.
If the third root is , this becomes
.
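The pairing of non-real roots into conjugates, and hence into real quadratic factors, can be verified numerically. The polynomial coefficients in the snippet below are arbitrary example values chosen only for illustration.

```python
import numpy as np

# An arbitrary polynomial with real coefficients: x^5 - 2x^4 + 3x^3 - x + 7
coeffs = [1, -2, 3, 0, -1, 7]
roots = np.roots(coeffs)

for r in roots:                 # each conjugate pair is reported twice, once per member
    if abs(r.imag) > 1e-9:      # non-real root
        # Its conjugate must also appear among the roots (numerically).
        assert min(abs(roots - np.conj(r))) < 1e-6
        # The pair gives the real quadratic factor (x - r)(x - conj(r)) = x^2 - 2 Re(r) x + |r|^2.
        print(f"non-real root {r.real:.3f}{r.imag:+.3f}i -> "
              f"factor x^2 - ({2 * r.real:.3f}) x + ({abs(r)**2:.3f})")
    else:
        print(f"real root {r.real:.3f}")
```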
Corollary on odd-degree polynomials
It follows from the present theorem and the fundamental theorem of algebra that if the degree of a real polynomial is odd, it must have at least one real root.
This can be proved as follows.
Since non-real complex roots come in conjugate pairs, there are an even number of them;
But a polynomial of odd degree has an odd number of roots (fundamental theorem of algebra);
Therefore some of them must be real.
This requires some care in the presence of multiple roots; but a complex root and its conjugate do have the same multiplicity (and this lemma is not hard to prove). It can also be worked around by considering only irreducible polynomials; any real polynomial of odd degree must have an irreducible factor of odd degree, which (having no multiple roots) must have a real root by the reasoning above.
This corollary can also be proved directly by using the intermediate value theorem.
Proof
One proof of the theorem is as follows:
Consider the polynomial
P(z) = a_0 + a_1 z + a_2 z^2 + ... + a_n z^n,
where all a_r are real. Suppose some complex number ζ is a root of P, that is P(ζ) = 0. It needs to be shown that
P(ζ̄) = 0
as well, where ζ̄ denotes the complex conjugate of ζ.
If P(ζ) = 0, then
a_0 + a_1 ζ + a_2 ζ^2 + ... + a_n ζ^n = 0.
Taking the complex conjugate of both sides, and using the facts that conjugation commutes with addition and multiplication and leaves each real coefficient a_r unchanged, gives
a_0 + a_1 ζ̄ + a_2 ζ̄^2 + ... + a_n ζ̄^n = 0.
That is,
P(ζ̄) = 0.
Note that this works only because the a_r are real, that is, each coefficient equals its own conjugate. If any of the coefficients were non-real, the roots would not necessarily come in conjugate pairs.
Notes
Theorems in complex analysis
Theorems about polynomials
Articles containing proofs | Complex conjugate root theorem | [
"Mathematics"
] | 620 | [
"Theorems in mathematical analysis",
"Theorems in algebra",
"Theorems in complex analysis",
"Theorems about polynomials",
"Articles containing proofs"
] |
9,179,093 | https://en.wikipedia.org/wiki/List%20of%20kampo%20herbs | Kampō (or Kanpō, 漢方) medicine is the Japanese study and adaptation of traditional Chinese medicine. In 1967, the Japanese Ministry of Health, Labour and Welfare approved four kampo medicines for reimbursement under the National Health Insurance (NHI) program. In 1976, 82 kampo medicines were approved by the Ministry of Health, Labour and Welfare. Currently, 148 kampo medicines are approved for reimbursement.
The 14th edition of the Japanese Pharmacopoeia (JP) (日本薬局方 Nihon yakkyokuhō) lists 165 herbal ingredients that are approved to be used in kampo remedies.
Tsumura (ツムラ) is the leading maker, producing 128 of the 148 kampo medicines. The "count" column shows in how many of these 128 formulae each herb is found. The most common herb is Glycyrrhizae Radix (Chinese liquorice root), which is in 94 of the 128 Tsumura formulae. Other common herbs are Zingiberis Rhizoma (ginger) (51 of 128 formulae) and Paeoniae Radix (Chinese peony root) (44 of 128 formulae).
Note 1: this character cannot be displayed correctly on a computer. "庶" is usually substituted in Chinese and Japanese. The "灬" in "庶" should be replaced with "虫".
Note 2: this character cannot be displayed correctly on a computer. "梨" is usually substituted in Chinese. "梨" or "藜" is usually substituted in Japanese. The "勿" in "藜" should be replaced with "刂".
See also
Kampo list
Chinese classic herbal formula
List of plants used in herbalism
Pharmacopoeia
References
Tsumura Herb Handbook
Bensky, Dan, Steve Clavey, Erich Stöger, and Andrew Gamble "Chinese Herbal Medicine: Materia Medica" 3rd ed. Eastland Press, 2004. () Eastland Press Herb List Arranged by Pinyin
Wiseman, Nigel. "Learner's Character Dictionary of Chinese Medicine"
External links
The World of Kampo
Kampo
Kampo
Kampo | List of kampo herbs | [
"Biology"
] | 448 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
9,179,644 | https://en.wikipedia.org/wiki/Multi-threshold%20CMOS | Multi-threshold CMOS (MTCMOS) is a variation of CMOS chip technology which has transistors with multiple threshold voltages (Vth) in order to optimize delay or power. The Vth of a MOSFET is the gate voltage where an inversion layer forms at the interface between the insulating layer (oxide) and the substrate (body) of the transistor. Low Vth devices switch faster, and are therefore useful on critical delay paths to minimize clock periods. The penalty is that low Vth devices have substantially higher static leakage power. High Vth devices are used on non-critical paths to reduce static leakage power without incurring a delay penalty. Typical high Vth devices reduce static leakage by 10 times compared with low Vth devices.
One method of creating devices with multiple threshold voltages is to apply different bias voltages (Vb) to the base or bulk terminal of the transistors. Other methods involve adjusting the gate oxide thickness, gate oxide dielectric constant (material type), or dopant concentration in the channel region beneath the gate oxide.
A common method of fabricating multi-threshold CMOS involves simply adding additional photolithography and ion implantation steps. For a given fabrication process, the Vth is adjusted by altering the concentration of dopant atoms in the channel region beneath the gate oxide. Typically, the concentration is adjusted by ion implantation method. For example, photolithography methods are applied to cover all devices except the p-MOSFETs with photoresist. Ion implantation is then completed, with ions of the chosen dopant type penetrating the gate oxide in areas where no photoresist is present. The photoresist is then stripped. Photolithography methods are again applied to cover all devices except the n-MOSFETs. Another implantation is then completed using a different dopant type, with ions penetrating the gate oxide. The photoresist is stripped. At some point during the subsequent fabrication process, implanted ions are activated by annealing at an elevated temperature.
In principle, any number of threshold voltage transistors can be produced. For CMOS having two threshold voltages, one additional photomasking and implantation step is required for each of p-MOSFET and n-MOSFET. For fabrication of normal, low, and high Vth CMOS, four additional steps are required relative to conventional single-Vth CMOS.
Implementation
The most common implementation of MTCMOS for reducing power makes use of sleep transistors. Logic is supplied by a virtual power rail. Low Vth devices are used in the logic where fast switching speed is important. High Vth devices connecting the power rails and virtual power rails are turned on in active mode, off in sleep mode. High Vth devices are used as sleep transistors to reduce static leakage power.
The design of the power switch which turns on and off the power supply to the logic gates is essential to low-voltage, high-speed circuit techniques such as MTCMOS. The speed, area, and power of a logic circuit are influenced by the characteristics of the power switch.
In a "coarse-grained" approach, high Vth sleep transistors gate the power to entire logic blocks. The sleep signal is de-asserted during active mode, causing the transistor to turn on and provide virtual power (ground) to the low Vth logic. The sleep signal is asserted during sleep mode, causing the transistor to turn off and disconnect power (ground) from the low Vth logic. The drawbacks of this approach are that:
logic blocks must be partitioned to determine when a block may be safely turned off (on)
sleep transistors are large and must be carefully sized to supply the current required by the circuit block
an always active (never in sleep mode) power management circuit must be added
In a "fine-grained" approach, high Vth sleep transistors are incorporated within every gate. Low Vth transistors are used for the pull-up and pull-down networks, and a high Vth transistor is used to gate the leakage current between the two networks. This approach eliminates problems of logic block partitioning and sleep transistor sizing. However, a large amount of area overhead is added due both to inclusion of additional transistors in every Boolean gate, and in creating a sleep signal distribution tree.
An intermediate approach is to incorporate high Vth sleep transistors into threshold gates having more complicated function. Since fewer such threshold gates are required to implement any arbitrary function compared to Boolean gates, incorporating MTCMOS into each gate requires less area overhead. Examples of threshold gates having more complicated function are found with Null Convention Logic (NCL) and Sleep Convention Logic (SCL). Some art is required to implement MTCMOS without causing glitches or other problems.
References
Electronic design
Digital electronics
Logic families | Multi-threshold CMOS | [
"Engineering"
] | 1,025 | [
"Electronic design",
"Electronic engineering",
"Design",
"Digital electronics"
] |
9,179,665 | https://en.wikipedia.org/wiki/M.%20Riesz%20extension%20theorem | The M. Riesz extension theorem is a theorem in mathematics, proved by Marcel Riesz during his study of the problem of moments.
Formulation
Let E be a real vector space, F ⊂ E be a vector subspace, and K ⊂ E be a convex cone.
A linear functional φ: F → R is called K-positive, if it takes only non-negative values on the cone K: φ(x) ≥ 0 for x ∈ F ∩ K.
A linear functional ψ: E → R is called a K-positive extension of φ, if it is identical to φ in the domain of φ, and also returns a value of at least 0 for all points in the cone: ψ|_F = φ and ψ(x) ≥ 0 for x ∈ K.
In general, a K-positive linear functional on F cannot be extended to a K-positive linear functional on E. Already in two dimensions one obtains a counterexample. Let E = R^2 with K = {(x, y): y > 0} ∪ {(x, 0): x > 0}, and let F be the x-axis. The positive functional φ(x, 0) = x can not be extended to a positive functional on E.
However, the extension exists under the additional assumption that E ⊆ K + F, namely for every y ∈ E, there exists an x ∈ F such that y − x ∈ K.
Proof
The proof is similar to the proof of the Hahn–Banach theorem (see also below).
By transfinite induction or Zorn's lemma it is sufficient to consider the case dim(E/F) = 1.
Choose any . Set
We will prove below that . For now, choose any satisfying , and set , , and then extend to all of by linearity. We need to show that is -positive. Suppose . Then either , or or for some and . If , then . In the first remaining case , and so
by definition. Thus
In the second case, , and so similarly
by definition and so
In all cases, , and so is -positive.
We now prove that . Notice by assumption there exists at least one for which , and so . However, it may be the case that there are no for which , in which case and the inequality is trivial (in this case notice that the third case above cannot happen). Therefore, we may assume that and there is at least one for which . To prove the inequality, it suffices to show that whenever and , and and , then . Indeed,
since is a convex cone, and so
since is -positive.
Corollary: Krein's extension theorem
Let E be a real linear space, and let K ⊂ E be a convex cone. Let x ∈ E \ (−K) be such that R x + K = E. Then there exists a K-positive linear functional φ: E → R such that φ(x) > 0.
Connection to the Hahn–Banach theorem
The Hahn–Banach theorem can be deduced from the M. Riesz extension theorem.
Let V be a linear space, and let N be a sublinear function on V. Let φ be a functional on a subspace U ⊂ V that is dominated by N: φ(x) ≤ N(x) for all x ∈ U.
The Hahn–Banach theorem asserts that φ can be extended to a linear functional on V that is dominated by N.
To derive this from the M. Riesz extension theorem, define a convex cone K ⊂ R×V by K = {(a, x) ∈ R × V : N(x) ≤ a}.
Define a functional φ1 on R×U by φ1(a, x) = a − φ(x).
One can see that φ1 is K-positive, and that K + (R × U) = R × V. Therefore φ1 can be extended to a K-positive functional ψ1 on R×V. Then
ψ(x) = −ψ1(0, x)
is the desired extension of φ. Indeed, if ψ(x) > N(x), we have: (N(x), x) ∈ K, whereas
ψ1(N(x), x) = N(x) − ψ(x) < 0,
leading to a contradiction.
References
Sources
Theorems in convex geometry
Theorems in functional analysis | M. Riesz extension theorem | [
"Mathematics"
] | 707 | [
"Theorems in mathematical analysis",
"Theorems in convex geometry",
"Theorems in functional analysis",
"Theorems in geometry"
] |
9,180,437 | https://en.wikipedia.org/wiki/Methyllysine | Methyllysine is derivative of the amino acid residue lysine where the sidechain ammonium group has been methylated one or more times.
Such methylated lysines play an important role in epigenetics; the methylation of specific lysines of certain histones in a nucleosome alters the binding of the surrounding DNA to those histones, which in turn affects the expression of genes on that DNA. The binding is affected because the effective radius of the positive charge is increased (methyl groups are larger than the hydrogen atoms they replace), reducing the strongest potential electrostatic attraction with the negatively charged DNA.
It is thought that the methylation of lysine (and arginine) on histone tails does not directly affect their binding to DNA. Rather, such methyl marks recruit other proteins that modulate chromatin structure.
In Protein Data Bank files, methylated lysines are indicated by the MLY or MLZ acronyms.
References
Alpha-Amino acids
Basic amino acids
Diamines | Methyllysine | [
"Chemistry",
"Biology"
] | 212 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
9,180,566 | https://en.wikipedia.org/wiki/Sand%20boil | Sand boils, sand volcanoes, or sand blows occur when water under pressure wells up through a bed of sand. The water looks like it is boiling up from the bed of sand, hence the name.
Sand volcano
A sand volcano or sand blow is a cone of sand formed by the ejection of sand onto a surface from a central point. The sand builds up as a cone with slopes at the sand's angle of repose. A crater is commonly seen at the summit. The cone looks like a small volcanic cone and can range in size from millimetres to metres in diameter.
The process is often associated with soil liquefaction and the ejection of fluidized sand that can occur in water-saturated sediments during an earthquake. The New Madrid seismic zone exhibited many such features during the 1811–1812 New Madrid earthquakes. Adjacent sand blows aligned in a row along a linear fracture within fine-grained surface sediments are just as common, and can still be seen in the New Madrid area.
These earthquakes also caused the largest known sand boil in the world, which can still be found near Hayti, Missouri and is locally called "The Beach". It is 2.3 kilometers long and covers 55 hectares.
In the past few years, much effort has gone into the mapping of liquefaction features to study ancient earthquakes. The basic idea is to map zones that are susceptible to the process and then go in for a closer look. The presence or absence of soil liquefaction features, such as clastic dikes, is strong evidence of past earthquake activity, or lack thereof.
These are to be contrasted with mud volcanoes, which occur in areas of geyser or subsurface gas venting.
Flood protection structures
Sand boils can be a mechanism contributing to liquefaction and levee failure during floods. "Boil" refers to the visible "boiling" movement of coarse sand grains retained in the hole even as finer particles (silts and fine sands) are carried out and deposited on the apron around the boil hole. Sand boils are caused by the hydraulic head across a levee or dike pushing water to seep out on the other side, most commonly during a flood. Sand boils start as simple seeps of laminar flow. With increasing head from rising flood waters, turbulent flow will initiate where the laminar flow leaves the soil to flow freely. Turbulent flow produces soil piping, whereby backward erosion results in a pipe-shaped cavity reaching back into the embankment, initiating at the seep and working through the embankment back to the water source. Once initiated, an unmitigated soil pipe can proceed quickly to embankment failure.
The flow in a sand boil can be slowed, but it is impractical to stop completely. The most effective response to an active sand boil is to stand water over the boil deep enough to reduce the hydraulic gradient and slow the water flow to eliminate turbulence and backward erosion at the head of the pipe. Slower, nonturbulent flow will not be able to move soil particles. The suppressing depth of water is created with sandbags forming a stacked ring around the boil.
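The effect of ponding water over a boil can be sketched with Darcy's law, in which the seepage velocity is proportional to the average hydraulic gradient i = Δh / L, where Δh is the driving head and L the seepage path length; every unit of water depth stood over the exit removes the same amount of driving head. The permeability, head and path length in the sketch below are illustrative assumptions only, not design values.

```python
def seepage(k_m_per_s, flood_head_m, ponded_depth_m, path_length_m):
    """Darcy's law: average gradient i = (flood head - ponded depth) / path length,
    seepage velocity v = k * i."""
    i = (flood_head_m - ponded_depth_m) / path_length_m
    return i, k_m_per_s * i

k = 1e-4       # permeability of a sandy foundation layer, m/s (assumed)
flood = 5.0    # flood head above the landside exit point, m (assumed)
L = 40.0       # seepage path length under the levee, m (assumed)

for d in (0.0, 0.5, 1.0, 2.0):            # water depth ponded inside the sandbag ring, m
    i, v = seepage(k, flood, d, L)
    print(f"ponded {d:.1f} m: gradient i = {i:.3f}, Darcy velocity ~ {v:.1e} m/s")
# Each metre of ponded water removes a metre of driving head, lowering the
# gradient and the seepage velocity until grains are no longer carried out.
```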
During the flood of spring 2011, the United States Army Corps of Engineers had to work to contain the largest active sand boil ever discovered. The sand boil measured nine by 12 meters (30 by 40 feet) and was located in the city of Cairo, Illinois, at the confluence of the Mississippi River and the Ohio River.
Earthquakes
An example of this is during the 1989 earthquake in San Francisco when sand boils brought up debris from the 1906 earthquake. This process is a result of liquefaction. By mapping the location of sand boils that erupted in the Marina District during the 1989 Loma Prieta earthquake, scientists discovered the site of a lagoon that existed in 1906. The lagoon developed after the Fair's Seawall was constructed and was later filled in in 1915 in preparation for the Panama–Pacific International Exposition.
See also
Internal erosion
Seepage
References
Sand
Sedimentology
Seismology | Sand boil | [
"Physics"
] | 813 | [
"Soil mechanics",
"Applied and interdisciplinary physics"
] |
9,180,916 | https://en.wikipedia.org/wiki/Open%20Source%20Cluster%20Application%20Resources | Open Source Cluster Application Resources (OSCAR) is a Linux-based software installation for high-performance cluster computing. OSCAR allows users to install a Beowulf type high performance computing cluster.
See also
TORQUE Resource Manager
Maui Cluster Scheduler
Beowulf cluster
External links
Official OSCAR site
github repository
Cluster computing
Parallel computing | Open Source Cluster Application Resources | [
"Technology"
] | 65 | [
"Computing stubs",
"Computer network stubs"
] |
9,181,685 | https://en.wikipedia.org/wiki/Drum%20screen | A drum screen, sometimes referred to as a drum shield or acoustic shield, is a tool used by audio engineers to avoid the sound control problems caused when louder instruments overwhelm quieter instruments and vocals on stage. It is a transparent acoustic panel or system of panels that are used around drums, percussion instruments and possibly other loud musical instruments to acoustically separate unusually loud instruments from other musical instruments and vocalists who may be close by.
Composition
Drum screens are usually made out of a 0.22-inch (5.6 mm) thick clear acrylic sheet material. A more expensive scratch-resistant or AR (abrasion-resistant) acrylic is also sometimes used for rugged use and touring applications.
While plastic drum screens perform fairly well as sound barriers, they reflect most sound that strikes them and so very little sound is actually absorbed. Therefore, it is usually recommended that some type of acoustic absorption product, such as acoustic foam, heavy curtains, acoustic panels, or absorption baffles be used on a significant percentage of the screen surface and opposite the screen in order to soak up and dissipate as much of the direct and reflected sound energy as possible.
Variations
In some applications where the performance area has high ceilings that reflect a large percentage of the sound, acrylic sheet is sometimes used to almost completely surround the loud instrument on all sides and above to create what is commonly known as an isolation booth. While this technique will aid in isolating the loud sound source, the interior acoustics of any booth with such a large percentage of highly reflective surfaces will be difficult or impossible to control. It is therefore recommended that at least 60% or more of the reflective surfaces inside this type of isolation booth be treated with acoustic absorption products. It is also wise to use additional carpet padding and carpet under the entire setup to reduce reflections off the floor and absorb additional sound energy.
See also
References
Drumming
Musical instrument parts and accessories | Drum screen | [
"Technology"
] | 389 | [
"Components",
"Musical instrument parts and accessories"
] |
9,181,701 | https://en.wikipedia.org/wiki/McDiarmid%27s%20inequality | In probability theory and theoretical computer science, McDiarmid's inequality (named after Colin McDiarmid ) is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables. McDiarmid's inequality applies to functions that satisfy a bounded differences property, meaning that replacing a single argument to the function while leaving all other arguments unchanged cannot cause too large of a change in the value of the function.
Statement
A function f(x_1, x_2, ..., x_n) satisfies the bounded differences property if substituting the value of the i-th coordinate x_i changes the value of f by at most c_i. More formally, if there are constants c_1, c_2, ..., c_n such that for all i, all x_1, ..., x_n, and all x_i',
|f(x_1, ..., x_{i-1}, x_i, x_{i+1}, ..., x_n) − f(x_1, ..., x_{i-1}, x_i', x_{i+1}, ..., x_n)| ≤ c_i.
McDiarmid's inequality then states that if X_1, X_2, ..., X_n are independent random variables and f satisfies the bounded differences property, then for any ε > 0,
P( f(X_1, ..., X_n) − E[f(X_1, ..., X_n)] ≥ ε ) ≤ exp( −2ε^2 / (c_1^2 + c_2^2 + ... + c_n^2) ),
and the same bound holds for deviations below the mean.
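As a simple illustration of the inequality, consider the function that counts the number of distinct values among n independent uniform draws from a set of n possible values; changing any single draw changes the count by at most 1, so every c_i = 1. The simulation sketch below compares the empirical tail probability with the two-sided McDiarmid bound, using the sample mean in place of the exact expectation; the sample sizes are arbitrary illustration choices.

```python
import math
import random

def distinct_count(draws):
    """f(x_1, ..., x_n) = number of distinct values among the draws.
    Changing any single draw changes f by at most 1, so every c_i = 1."""
    return len(set(draws))

n, trials = 200, 5000
random.seed(0)
samples = [distinct_count([random.randrange(n) for _ in range(n)]) for _ in range(trials)]
mean = sum(samples) / trials          # stands in for the exact expectation E[f]

for eps in (5, 10, 20):
    empirical = sum(abs(s - mean) >= eps for s in samples) / trials
    bound = 2.0 * math.exp(-2.0 * eps**2 / n)   # two-sided bound with all c_i = 1
    print(f"eps = {eps:2d}: empirical P(|f - mean| >= eps) = {empirical:.4f}, "
          f"McDiarmid bound = {bound:.4f}")
```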
Extensions
Unbalanced distributions
A stronger bound may be given when the arguments to the function are sampled from unbalanced distributions, such that resampling a single argument rarely causes a large change to the function value.
This may be used to characterize, for example, the value of a function on graphs when evaluated on sparse random graphs and hypergraphs, since in a sparse random graph, it is much more likely for any particular edge to be missing than to be present.
Differences bounded with high probability
McDiarmid's inequality may be extended to the case where the function being analyzed does not strictly satisfy the bounded differences property, but large differences remain very rare.
There exist stronger refinements to this analysis in some distribution-dependent scenarios, such as those that arise in learning theory.
Sub-Gaussian and sub-exponential norms
Let the th centered conditional version of a function be
so that is a random variable depending on random values of .
Bennett and Bernstein forms
Refinements to McDiarmid's inequality in the style of Bennett's inequality and Bernstein inequalities are made possible by defining a variance term for each function argument. Let
Proof
The following proof of McDiarmid's inequality constructs the Doob martingale tracking the conditional expected value of the function as more and more of its arguments are sampled and conditioned on, and then applies a martingale concentration inequality (Azuma's inequality).
An alternate argument avoiding the use of martingales also exists, taking advantage of the independence of the function arguments to provide a Chernoff-bound-like argument.
For better readability, we will introduce a notational shorthand: will denote for any and integers , so that, for example,
Pick any . Then, for any , by triangle inequality,
and thus is bounded.
Since is bounded, define the Doob martingale (each being a random variable depending on the random values of ) as
for all and , so that .
Now define the random variables for each
Since are independent of each other, conditioning on does not affect the probabilities of the other variables, so these are equal to the expressions
Note that . In addition,
Then, applying the general form of Azuma's inequality to , we have
The one-sided bound in the other direction is obtained by applying Azuma's inequality to and the two-sided bound follows from a union bound.
See also
References
Probabilistic inequalities
Statistical inequalities
Martingale theory | McDiarmid's inequality | [
"Mathematics"
] | 653 | [
"Theorems in statistics",
"Statistical inequalities",
"Theorems in probability theory",
"Probabilistic inequalities",
"Inequalities (mathematics)"
] |
9,182,754 | https://en.wikipedia.org/wiki/Johann%20Friedrich%20Meckel | Johann Friedrich Meckel (17 October 1781 – 31 October 1833), often referred to as Johann Friedrich Meckel, the Younger, was a German anatomist born in Halle. He worked as a professor of anatomy, pathology and zoology at the University of Halle, Germany.
Life and research
In 1802, he received his medical doctorate from the University of Halle, defending his doctoral thesis De cordis conditionibus abnormibus on 8 April 1802. At Halle his instructors included Kurt Sprengel (1766-1833) and Johann Christian Reil (1759-1813). After graduation, Meckel continued his education in Würzburg, Vienna and Paris. In Paris, he assisted zoologist Georges Cuvier (1769–1832) with systematic analysis of anatomical and zootomical specimens. In 1810 he finished translating Cuvier's five-volume Leçons d'anatomie comparée from French into German.
In 1808, he became a full professor of normal and pathological anatomy, surgery and obstetrics at the University of Halle, replacing Justus Christian Loder (1753-1832). From 1826 to 1833, he was editor of the Archiv für Anatomie und Physiologie. In 1829, he was elected a foreign member of the Royal Swedish Academy of Sciences.
Meckel adopted naturalist Jean-Baptiste Lamarck's (1744–1829) evolutionary beliefs. He was a pioneer in the science of teratology, in particular the study of birth defects and abnormalities that occur during embryonic development. He believed that abnormal development adhered to the same natural laws as did normal development. The "Meckel-Serres Law", named jointly after him and French embryologist Étienne Serres (1786–1868), is defined as a theory of parallelism between the stages of ontogeny and the stages of a unifying pattern in the organic world ("scala naturae").
Associated terms
The following eponymous terms are named after him:
Meckel's diverticulum – an out-pouching of the ileum, part of the small intestine, and found in approximately 2% of the population.
Meckel's cartilage – A cartilaginous bar from which the mandible is formed. Described in 1820.
A syndrome – Meckel syndrome – is also named after him. This condition was described in 1822.
A protein, meckelin, the gene for which is found on chromosome 8 (8q21.3-q22.1), is named after him.
The supposed Meckel-Serres Law of recapitulation in embryology.
Family
His grandfather was also named "Johann Friedrich Meckel". In order to avoid confusion, he is often referred to as Johann Friedrich Meckel, the Elder. The elder Meckel was also a professor of anatomy, and he too has anatomical structures named after him.
His father, Philipp Friedrich Theodor Meckel (1755–1803), was also an anatomist.
His brother, August Albrecht Meckel (1789–1829), practiced legal medicine and investigated avian anatomy but died prematurely from tuberculosis.
August's son, Johann Heinrich Meckel (1821–1856), held the professorship of pathologic anatomy at the University of Berlin's Charité that his great-grandfather had held. After his death, also from pulmonary disease, his position was filled by Rudolf Virchow.
See also
Meckel syndrome
Meckel diverticulum
References
Johann Friedrich Meckel @ Who Named It
External links
1781 births
1833 deaths
Scientists from Halle (Saale)
German anatomists
Teratologists
People from the Duchy of Magdeburg
Proto-evolutionary biologists
University of Halle alumni
Academic staff of the University of Halle
Members of the Royal Swedish Academy of Sciences
Foreign members of the Royal Society
18th-century German biologists | Johann Friedrich Meckel | [
"Biology"
] | 795 | [
"Non-Darwinian evolution",
"Biology theories",
"Proto-evolutionary biologists"
] |
9,183,354 | https://en.wikipedia.org/wiki/Docomo%20Pacific | Docomo Pacific is a wholly owned subsidiary of Japanese mobile phone operator NTT Docomo headquartered in Tamuning, Guam. It is the largest provider of mobile, television, internet and telephone services to the United States territories of Guam and the Northern Mariana Islands.
The company was formed through the merger of cell phone carriers Guamcell Communications and HafaTel and was acquired in December 2006 by NTT Docomo, a spin-off of Japanese communication company Nippon Telegraph and Telephone. In October 2008, Docomo Pacific was the first company on Guam to introduce a HSDPA network. In November 2011, Docomo Pacific launched 4G HSPA+ service on Guam followed by the launch of advanced 4G LTE service in October 2012.
In May 2013, Docomo Pacific acquired cable company MCV Broadband (Marianas Cable Vision Broadband) from Seaport Capital, an investment company based in New York City.
Incidents
March 17, 2023
A cyberattack occurred early in the morning, disabling access to Docomo's call hotline center, website, and internet services.
Immediate failsafe protocols were initiated by Docomo's cybersecurity technicians, shutting down the affected servers and isolating the intrusion.
During the outage of the call center and internet services, Docomo subscribers on Guam took to Facebook to complain about the disruption.
Docomo subscribers still had access to the mobile network, as well as voice, SMS, and fiber internet services.
Subscribers of other telecommunication companies were not affected.
Docomo Pacific posted updates on its Facebook page because its website was offline. After March 17, the posts were removed and the page returned to its normal schedule.
Some Docomo internet services were restored the next day in parts of Guam, while other villages remained without internet; the day after that, internet service returned across Guam.
See also
Communications in Guam
References
Cable television companies of the United States
Broadband
Companies of Guam
Mass media in Guam
Nippon Telegraph and Telephone
NTT Docomo | Docomo Pacific | [
"Technology"
] | 412 | [
"Members of the Conexus Mobile Alliance",
"NTT Docomo"
] |
9,183,439 | https://en.wikipedia.org/wiki/Hawking%20%282004%20film%29 | Hawking is a 2004 biographical drama television film directed by Philip Martin and written by Peter Moffat. Starring Benedict Cumberbatch, it chronicles Stephen Hawking's early years as a PhD student at the University of Cambridge, following his search for the beginning of time, and his struggle against motor neurons disease. It premiered in the UK in April 2004.
The film received positive reviews, with critics particularly lauding Cumberbatch's performance as Hawking. It received two British Academy Television Awards nominations: Best Single Drama and Best Actor (Cumberbatch). Cumberbatch won the Golden Nymph for Best Performance by an Actor in a TV Film or Miniseries.
Cumberbatch's performance marked the first time Hawking had been portrayed on screen by anyone other than himself.
Plot
At Stephen Hawking's 21st birthday party he meets a new friend, Jane Wilde. There is a strong attraction between the two and Jane is intrigued by Stephen's talk of stars and the universe, but realises that there is something very wrong with Stephen when he suddenly finds that he is unable to stand up. A stay in hospital results in a distressing diagnosis. Stephen has motor neurone disease and doctors don't expect him to survive for more than two years. Stephen returns to Cambridge where the new term has started without him. But he cannot hide from the reality of his condition through work because he can't find a subject for his PhD. While his colleagues throw themselves into academic and college life, Stephen's life seems to have been put on hold. He rejects the help of his supervisor Dennis Sciama and sinks into a depression. It is only Stephen's occasional meetings with Jane and her faith in him that seem to keep him afloat. The prevailing theory in cosmology at the time is Steady State, which argues that the universe had no beginning – it has always existed, and always will – and Steady State is dominated by Professor Fred Hoyle, a plain-speaking Yorkshireman, and one of the first science TV pundits.
Stephen gets an early glimpse of a paper by Hoyle that is to be presented at a Royal Society lecture. He works through the calculations, identifies a mistake, and publicly confronts Hoyle after he has finished speaking. The row causes a stir in the department but, more importantly, it seems to give Stephen the confidence to get started on his own work. At almost the same time Stephen is introduced to a new way of thinking about his subject by another physicist, Roger Penrose. Topology is an approach that uses concepts of shape rather than equations to think about the nature of the universe, and this proves to be the perfect tool for Stephen, who is starting to find it very difficult to write. Penrose's great passion is the fate of dying stars. When a star comes to the end of its life, it begins to collapse in on itself. His calculations suggest something extraordinary. The collapse of the dying star appears to continue indefinitely, until the star is infinitely dense, forming a black hole in space. And at the heart of this black hole, Penrose shows, is something scientists call a singularity. It is this which leads Stephen to his PhD subject. He has always had a niggling scepticism about Steady State Theory, and now he can begin to see a way of explaining the revolutionary and highly controversial idea that the universe might have had a beginning. Sciama is sceptical but supportive – glad to see his student fired up and ready to work. Meanwhile, Stephen's condition continues to decline, he writes and walks with difficulty and his speech is starting to slur. But he now has a focus for his energies and, with the support of Jane, enters a new phase. He also commits to his relationship with her, asking her to marry him and in doing so exhibiting a defiant determination to survive.
With his mind fired up, Stephen begins to work away at the implications of Penrose's discovery and starts to home in on the idea of a singularity. With remarkable insight – a real Eureka moment – he asks himself: what would happen if you ran Penrose's maths backwards? Instead of something collapsing into nothingness, what if nothingness exploded into something? And what if you applied this not to a star but to the whole universe? Answer: the universe really could have originated in a big bang. At last, Stephen enters a period of feverish academic work. He applies Penrose's theorems for collapsing stars to the universe itself. Justifying Sciama's faith in him, he produces a PhD of real brilliance and profound implications. In theory, at least, the big bang could have happened. Two years after his initial diagnosis, Stephen is not only still very much alive, but has played a part in a great scientific breakthrough which revolutionises the way people think about the universe. Today, the scientific consensus is that the universe started with a big bang: billions of years ago, a cosmic explosion brought space and time into existence.
A secondary, interwoven storyline follows a different but connected scientific quest. Unbeknownst to Hawking, just as he was being diagnosed in 1963, two American scientists were embarking on their own scientific mission. Their research was to produce hard evidence to support Hawking's theoretical work. Arno Allan Penzias and Robert Woodrow Wilson are encountered in a hotel room in Stockholm in 1978. They are being interviewed about their discovery on the eve of receiving the Nobel Prize for Physics. They describe how, in the hills above New Jersey, they scanned the skies with a radio-telescope, and began to pick up a strange radio signal from space. In time, the two scientists came to realise that they had detected the left-over heat of the first, ancient explosion that had created the universe. They had found the physical proof of the big bang.
Cast
Benedict Cumberbatch as Stephen Hawking
Michael Brandon as Arno Allan Penzias
Tom Hodgkins as Robert Woodrow Wilson
Lisa Dillon as Jane Wilde
Phoebe Nicholls as Isobel Hawking
Adam Godley as Frank Hawking
Peter Firth as Fred Hoyle
Tom Ward as Roger Penrose
John Sessions as Dennis Sciama
Matthew Marsh as Dr. John Holloway
Alice Eve as Martha Guthrie
Rohan Siva as Jayant Narlikar
Reception
Accolades
Hawking received two nominations at the 2005 British Academy Television Awards: Best Single Drama and Best Actor (Cumberbatch). At the Monte-Carlo Television Festival, Benedict Cumberbatch won the Golden Nymph for Best Performance by an Actor in a TV film or miniseries.
Notes
References
External links
2004 television films
2004 films
2000s British films
BBC television dramas
British drama television films
British docudrama films
Cultural depictions of Stephen Hawking
Films about people with paraplegia or tetraplegia
Films directed by Philip Martin (director)
Science docudramas | Hawking (2004 film) | [
"Astronomy"
] | 1,400 | [
"Cultural depictions of astronomers",
"Cultural depictions of Stephen Hawking"
] |
9,183,639 | https://en.wikipedia.org/wiki/Diskless%20shared-root%20cluster | A diskless shared-root cluster is a way to manage several machines at the same time. Instead of each having its own operating system (OS) on its local disk, there is only one image of the OS available on a server, and all the nodes use the same image. (SSI cluster = single-system image)
The simplest way to achieve this is to use an NFS server configured to host the generic boot image for the SSI cluster nodes (PXE + DHCP + TFTP + NFS).
To ensure that there is no single point of failure, the NFS export for the boot-image should be hosted on a two node cluster.
The architecture of a diskless computer cluster makes it possible to separate the servers from the storage array. The operating system as well as the actual reference data (user files, databases or websites) are stored concurrently on the attached storage system in a centralized manner. Any server that acts as a cluster node can easily be exchanged on demand.
The additional abstraction layer between the storage system and the computing power eases scaling out the infrastructure. Most notably, the storage capacity, the computing power and the network bandwidth can each be scaled independently of one another.
A similar technology can be found in VMScluster (OpenVMS) and TruCluster (Tru64 UNIX).
The open-source implementation of a diskless shared-root cluster is known as Open-Sharedroot.
Literature
Marc Grimme, Mark Hlawatschek, Thomas Merz: Data sharing with a Red Hat GFS storage cluster
Marc Grimme, Mark Hlawatschek German Whitepaper: Der Diskless Shared-root Cluster (PDF-Datei; 1,1 MB)
Kenneth W. Preslan: Red Hat GFS 6.1 – Administrator’s Guide
References
Cluster computing
Parallel computing
Computer networking | Diskless shared-root cluster | [
"Technology",
"Engineering"
] | 382 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
9,183,755 | https://en.wikipedia.org/wiki/Multicolumn%20countercurrent%20solvent%20gradient%20purification | Multicolumn countercurrent solvent gradient purification (MCSGP) is a form of chromatography that is used to separate or purify biomolecules from complex mixtures. It was developed at the Swiss Federal Institute of Technology Zürich by Aumann and Morbidelli. The process consists of two to six chromatographic columns which are connected to one another in such a way that as the mixture moves through the columns the compound is purified into several fractions.
Overview
The MCSGP process consists of several chromatographic columns, at least two, which are switched in position opposite to the flow direction. Most of the columns are equipped with a gradient pump to adjust the modifier concentration at the column inlet. Some columns are connected directly, so that non-pure product streams are internally recycled. Other columns are short-circuited, so that they operate in pure batch mode. The system is split into several sections, and each section performs a task analogous to one of the tasks of a batch purification. These tasks are loading the feed, running the gradient elution, recycling of weakly adsorbing side fractions, fractionation of the purified product, recycling of strongly adsorbing side fractions, cleaning the column of strongly adsorbing impurities, cleaning in place, and re-equilibration of the column to start the next purification run. All of the tasks mentioned here are carried out at the same time in one unit. Recycling of non-pure side fractions is performed in countercurrent movement.
Comparison with other purification methods
Biomolecules are often purified via solvent gradient batch chromatography. Here smooth linear solvent gradients are applied to carefully handle the separation between the desired component and hundreds of impurities. The desired product usually elutes between the weakly and strongly adsorbing impurities, so a center cut is required to obtain the pure product. Often the preparative resins have a low efficiency due to strong axial dispersion and slow mass transfer; then a purification in one chromatographic step is not possible, and countercurrent movement as known from the SMB process would be required. For large-scale production and for very valuable molecules, countercurrent solid movement needs to be applied to increase the separation efficiency, the yield and the productivity of the purification. The MCSGP process combines both techniques in one process: the countercurrent SMB principle and the solvent gradient batch technique.
Discontinuous mode consists of equilibration, loading, washing, purification and regeneration steps. The discontinuous mode of operation allows the advantage of solvent gradients to be exploited, but it implies high solvent consumption and low productivity compared with continuous countercurrent processes. An established process of this kind is the simulated moving bed technique (SMB), which requires the solvent-consuming steps of equilibration, washing and regeneration only once per operation and has better resin utilization. However, major drawbacks of SMB are the inability to separate a mixture into three fractions and the lack of solvent gradient applicability.
In the case of antibodies, the state-of-the-art technique is based on batch affinity chromatography (with Protein A or Protein G as ligands) which is able to selectively bind antibody molecules. In general, affinity techniques have the advantage of purifying biomolecules with high yields and purities but the disadvantages are in general the high stationary phase cost, ligand leaching and reduced cleanability.
The MCSGP process can result in purities and yields comparable to those of purification using Protein A. The second application example for the MCSGP prototype is the separation of three MAb variants using a preparative weak cation-exchange resin. Although the intermediately eluting MAb variant can only be obtained with 80% purity at recoveries close to zero in a batch chromatographic process, the MCSGP process can provide 90% purity at 93% yield. A numerical comparison of the MCSGP process with the batch chromatographic process, and a batch chromatographic process including ideal recycling, has been performed using an industrial polypeptide purification as the model system. It shows that the MCSGP process can increase the productivity by a factor of 10 and reduce the solvent requirement by 90%.
The main advantages with respect to solvent gradient batch chromatography are high yields even for difficult separations, lower solvent consumption, higher productivity, and the use of countercurrent solid movement, which increases the separation efficiency. The process is continuous: once a steady state is reached, it delivers purified product continuously in constant quality and quantity. Automatic cleaning in place is integrated. A purely empirical design of the operating conditions from a single solvent gradient batch chromatogram is possible.
Applications
All chromatographic purifications and separations which are executed via solvent gradient batch chromatography can be performed using MCSGP. Typical examples are reversed-phase purification of peptides, hydrophobic interaction chromatography of fatty acids, and ion exchange chromatography of proteins or antibodies. The process can effectively enrich components that have been fed in only small amounts. Continuous capture of antibodies without affinity chromatography can be realized with the MCSGP process.
References
Chromatography
Separation processes | Multicolumn countercurrent solvent gradient purification | [
"Chemistry"
] | 1,100 | [
"Chromatography",
"nan",
"Separation processes"
] |
9,184,570 | https://en.wikipedia.org/wiki/Particle%20mass%20density | The particle mass density or particle density of a material (such as particulate solid or powder) is the mass density of the particles that make up the powder. Particle density is in contrast to the bulk density, which measures the average density of a large volume of the powder in a specific medium (usually air).
The particle density is a relatively well-defined quantity, as it is not dependent on the degree of compaction of the solid, whereas the bulk density has different values depending on whether it is measured in the freely settled or compacted state (tap density).
However, a variety of definitions of particle density are available, which differ in whether internal pores are included in the particle volume, and whether interparticle voids are included.
Measurement
The measurement of particle density can be done in a number of ways:
Archimedes' principle
The powder is placed inside a pycnometer of known volume, and weighed. The pycnometer is then filled with a fluid of known density, in which the powder is not soluble. The volume of the powder is determined by the difference between the volume as shown by the pycnometer, and the volume of liquid added (i.e. the volume of air displaced). A similar method, which does not include pore volume, is to suspend a known mass of particles in molten wax of known density, allow any bubbles to escape, allow the wax to solidify, and then measure the volume and mass of the wax/particulate brick.
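A minimal sketch of the pycnometer calculation, using invented masses and an assumed liquid density purely for illustration:

```python
# Particle density from a liquid pycnometer measurement (illustrative numbers).
pycnometer_volume_cm3 = 50.0   # calibrated volume of the pycnometer
powder_mass_g = 12.0           # mass of powder placed in the pycnometer
liquid_density_g_cm3 = 0.79    # e.g. ethanol, in which the powder is insoluble
liquid_mass_added_g = 31.6     # mass of liquid needed to fill the remaining space

liquid_volume_cm3 = liquid_mass_added_g / liquid_density_g_cm3
powder_volume_cm3 = pycnometer_volume_cm3 - liquid_volume_cm3  # displaced volume
particle_density = powder_mass_g / powder_volume_cm3
print(f"particle density ≈ {particle_density:.2f} g/cm³")      # 1.20 g/cm³ here
```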
A slurry of the powder in a liquid of known density can also be used with a hydrometer to measure particle density by buoyancy.
Another method based on buoyancy is to measure the weight of the sample in air, and also in a liquid of known density.
A column of liquid with a density gradient can also be prepared: The column should contain a liquid of continuously varying composition, so that the maximum density (at the bottom) is higher than that of the solid, and the minimum density is lower. If a small sample of powder is allowed to settle in this column, it will come to rest at the point where the liquid density is equal to the particle density.
Volumetric measurement
A gas pycnometer can be used to measure the volume of a powder sample. A sample of known mass is loaded into a chamber of known volume that is connected by a closed valve to a gas reservoir, also of known volume, at a higher pressure than the chamber. After the valve is opened, the final pressure in the system allows the total gas volume to be determined by application of Boyle's law.
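A sketch of the corresponding calculation, assuming isothermal ideal-gas behaviour; all volumes and pressures below are invented for illustration:

```python
# Gas pycnometer: sample volume via Boyle's law (all numbers illustrative).
V_chamber = 100.0    # cm³, volume of the empty sample chamber
V_reservoir = 50.0   # cm³, volume of the gas reservoir
P_chamber = 100.0    # kPa, initial pressure in the sample chamber
P_reservoir = 230.0  # kPa, initial (higher) pressure in the reservoir
P_final = 150.0      # kPa, measured pressure after the valve is opened

# Isothermal balance:
#   P_chamber*(V_chamber - V_sample) + P_reservoir*V_reservoir
#       = P_final*(V_chamber - V_sample + V_reservoir)
V_sample = V_chamber + V_reservoir * (P_final - P_reservoir) / (P_final - P_chamber)

sample_mass_g = 53.0  # weighed before loading
print(f"sample volume ≈ {V_sample:.1f} cm³")                       # 20.0 cm³ here
print(f"particle density ≈ {sample_mass_g / V_sample:.2f} g/cm³")  # 2.65 g/cm³
```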
A mercury porosimeter is an instrument that allows the total volume of a powder to be determined, as well as the volume of pores of different sizes: A known mass of powder is submerged in mercury. At ambient pressure, the mercury does not invade the interparticle spaces or the pores of the sample. At increasing pressure, the mercury invades smaller and smaller pores, with the relationship between pore diameter and pressure being known. A continuous trace of pressure versus volume can then be generated, which allows for a complete characterization of the sample's porosity.
See also
Archimedes' principle
Bulk density
Porosity
Number density
External links
An excellent overview of particle density and porosity measurements, with references
Mass density
Particulates | Particle mass density | [
"Physics",
"Chemistry"
] | 686 | [
"Mechanical quantities",
"Physical quantities",
"Intensive quantities",
"Mass",
"Volume-specific quantities",
"Particulates",
"Density",
"Particle technology",
"Mass density",
"Matter"
] |
9,184,746 | https://en.wikipedia.org/wiki/Office%20of%20the%20Chief%20Scientist%20%28Australia%29 | The Office of the Chief Scientist (OCS) is part of the Department of Industry, Science and Resources. Its primary responsibilities are to enable growth and productivity for globally competitive industries. To help realise this vision, the department has four key objectives: supporting science and commercialisation, growing business investment and improving business capability, streamlining regulation and building a high performance organisation.
Chief Scientist
The chief scientist is responsible for advising the Government of Australia on scientific and technological issues.
The chief scientist chairs the Research Quality Framework Development Advisory Group, the National Research Priorities Standing Committee and is a member of other key government committees:
Coordination Committee on Science and Technology
Prime Minister's Science Prizes Committee
Cooperative Research Centres Committee
Publicly Funded Research Agencies Committee
Commonwealth, State and Territory Advisory Council on Innovation
National Collaborative Research Infrastructure Strategy Committee
Chief scientists
National Science and Technology Council
The National Science and Technology Council is responsible for providing advice to the prime minister and other ministers on important science and technology issues facing Australia.
The prime minister, Scott Morrison, and the minister for industry, science and technology, the Hon Karen Andrews MP, announced the new council on 28 November 2018.
The council is chaired by the prime minister, with the minister for industry, science and technology as deputy chair. Australia's chief scientist, Cathy Foley, is the executive officer.
History of Australian science councils
Australian Science, Technology and Engineering Council (1977–1997)
Prime Minister's Science, Engineering and Innovation Council (1997–2013)
Commonwealth Science Council (2014–2018)
National Science and Technology Council (2018–present)
See also
Backing Australia's Ability
References
External links
Commonwealth Government agencies of Australia
Scientific organisations based in Australia
Australia | Office of the Chief Scientist (Australia) | [
"Technology"
] | 334 | [
"Scientists in technology assessment and policy",
"Chief scientific advisers by country"
] |
9,184,841 | https://en.wikipedia.org/wiki/Turret%20clock | A turret clock or tower clock is a clock designed to be mounted high in the wall of a building, usually in a clock tower, in public buildings such as churches, university buildings, and town halls. As a public amenity to enable the community to tell the time, it has a large face visible from far away, and often a striking mechanism which rings bells upon the hours.
The turret clock is one of the earliest types of clock. Beginning in 12th century Europe, towns and monasteries built clocks in high towers to strike bells to call the community to prayer. Public clocks played an important timekeeping role in daily life until the 20th century, when accurate watches became cheap enough for ordinary people to afford. Today the time-disseminating functions of turret clocks are not much needed, and they are mainly built and preserved for traditional, decorative, and artistic reasons.
To turn the large hands and run the striking train, the mechanism of turret clocks must be more powerful than that of ordinary clocks. Traditional turret clocks are large pendulum clocks run by hanging weights, but modern ones are often run by electricity.
History
Water clocks
Water clocks are reported as early as the 16th century B.C. and were used in the ancient world, but these were domestic clocks. Beginning in the Middle Ages, around 1000 A.D., striking water clocks were invented, which rang bells on the canonical hours for the purpose of calling the community to prayer. Installed in clock towers in cathedrals, monasteries and town squares so they could be heard at long distances, these were the first turret clocks. By the 13th century, towns in Europe competed with each other to build the most elaborate, beautiful clocks. Water clocks kept time by the rate of water flowing through an orifice. Since the rate of flow varies with pressure, which is proportional to the height of water in the source container, and with viscosity, which varies with temperature during the day, water clocks had limited accuracy. Other disadvantages were that they required water to be hauled manually in a bucket from a well or river to fill the clock reservoir every day, and that they froze solid in winter.
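The sensitivity of the flow rate to the water level can be sketched with Torricelli's law; the model below is idealised (inviscid, small orifice) and the orifice area and heads are arbitrary, serving only to show why an unregulated water clock drifts as its reservoir empties:

```python
import math

# Torricelli's law: outflow speed through a small orifice depends on the head h,
# so a simple water clock runs fast when full and slow when nearly empty.
g = 9.81  # m/s²

def outflow_rate(head_m, orifice_area_m2=1e-5):
    """Volumetric flow rate (m³/s) for an ideal, inviscid orifice."""
    return orifice_area_m2 * math.sqrt(2 * g * head_m)

for h in (0.5, 0.25, 0.1):
    print(f"head {h:4.2f} m -> flow {outflow_rate(h) * 1e6:.1f} mL/s")
```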
Verge and foliot clocks
The first all-mechanical clocks which emerged in Europe in the late 13th century kept time with a verge escapement and foliot (also known as crown and balance wheels). In the second half of the 14th century, over 500 striking turret clocks were installed in public buildings all over Europe. The new mechanical clocks were easier to maintain than water clocks, as the power to run the clock was provided by turning a crank to raise a weight on a cord, and they also did not freeze during winter, so they became the standard mechanism used in the turret clocks being installed in bell towers in churches, cathedrals, monasteries and town halls all over Europe.
The verge and foliot timekeeping mechanism in these early mechanical clocks was very inaccurate, as the primitive foliot balance wheel did not have a balance spring to provide a restoring force, so the balance wheel was not a harmonic oscillator with an inherent resonant frequency or "beat"; its rate varied with variations in the force of the wheel train. The error in the first mechanical clocks may have been several hours per day. Therefore, the clock had to be frequently reset by the passage of the sun or stars overhead.
Pendulum clocks
The pendulum clock was invented and patented in 1657 by Dutch scientist Christiaan Huygens, inspired by the superior timekeeping properties of the pendulum discovered beginning in 1602 by Italian scientist Galileo Galilei. Pendulum clocks were much more accurate than the previous foliot clocks, improving timekeeping accuracy of the best precision clocks from 15 minutes per day to perhaps 10 seconds a day. Within a few decades most tower clocks throughout Europe were rebuilt to convert the previous verge and foliot escapement to pendulums. Almost no examples of the original verge and foliot mechanisms of these early clocks have survived to the present day.
The accuracy of the pendulum clock was increased by the invention of the anchor escapement in 1657 by Robert Hooke, which quickly replaced the primitive verge escapement in pendulum clocks. The first tower clock with the new escapement was the Wadham College Clock, built at Wadham College, Oxford, UK, in 1670, probably by clockmaker Joseph Knibb. The anchor escapement reduced the pendulum's width of swing from the 80–100° of the verge clock to 3–6°. This greatly reduced the energy consumed by the pendulum and allowed longer pendulums to be used. While domestic pendulum clocks usually use a seconds pendulum about 1 metre long, tower clocks often use a 1.5-second pendulum, about 2.25 m long, or a two-second pendulum, about 4 m long.
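The pendulum lengths quoted here follow from the simple-pendulum relation T = 2π√(L/g), where one "swing" is half a period; a quick check:

```python
import math

g = 9.81  # m/s², standard gravity

def pendulum_length(swing_time_s):
    """Length of an ideal simple pendulum whose one-way swing takes `swing_time_s`."""
    period = 2 * swing_time_s          # a full period is two swings
    return g * (period / (2 * math.pi)) ** 2

for swing in (1.0, 1.5, 2.0):
    print(f"{swing} s swing -> {pendulum_length(swing):.2f} m")
    # prints roughly 0.99 m, 2.24 m and 3.98 m
```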
Tower clocks had a source of error not found in other clocks: the varying torque on the wheel train caused by the weight of the huge external clock hands as they turned, which was made worse by seasonal snow, ice and wind loads on the hands. The variations in force, applied to the pendulum by the escape wheel, caused the period of the pendulum to vary. During the 19th century specialized escapements were invented for tower clocks to mitigate this problem. In the most common type, called gravity escapements, instead of applying the force of the gear train to push the pendulum directly, the escape wheel instead lifted a weighted lever, which was then released and its weight gave the pendulum a push during its downward swing. This isolated the pendulum from variations in the drive force. One of the most widely used types was the three-legged gravity escapement invented in 1854 by Edmund Beckett (Lord Grimthorpe).
Electrical clocks
Electric turret clocks and hybrid mechanical/electric clocks were introduced in the late 19th century.
Some mechanical turret clocks are wound by electric motor. These still are considered mechanical clocks.
Table of early public turret clocks
This table shows some of the turret clocks which were installed throughout Europe. It is not complete and mainly serves to illustrate the rate of adoption. There are hardly any surviving turret clock mechanisms that date before 1400, and because of extensive rebuilding of clocks the authenticity of those that do survive is disputed. What little is known of their mechanisms is mostly gleaned from manuscript sources.
The "country" column refers to the present (2012) international boundaries. For example, Colmar was in Germany in 1370, but is now in France.
Thirteenth century
The verge and foliot escapement is thought to have been introduced sometime at the end of the thirteenth century, so very few if any of these clocks had foliot mechanisms; most were water clocks or, in a few cases, possibly mercury clocks.
Fourteenth century
During the fourteenth century, the foliot emerged and replaced the high-maintenance water clocks. It is not known exactly when that happened, or which of the early 14th-century clocks were water clocks and which used a foliot.
The Heinrich von Wieck clock in Paris, dating from 1362, is the first clock known with certainty to have had a foliot and a verge escapement. The sudden increase in the number of recorded turret clock installations from about 1350 onwards suggests that these new clocks used the verge and foliot.
It became apparent that even small towns could afford to put up public striking clocks, and turret clocks became common throughout Europe.
No surviving clock mechanisms (apart from the claims from Salisbury and Wells) are known from this era.
See also
History of timekeeping devices
List of clocks
References
C. F. C. Beeson English Church Clocks London 1971
Christopher McKay (Editor) The Great Salisbury Clock Trial, Antiquarian Horological Society Turret Clock Group, 1993
Alfred Ungerer Les horloges astronomiques et monumentales les plus remarquables de l'antiquité jusquà nos jours, Strasbourg, 1931
Ferdinand Berthoud Histoire de la mesure du temps par les horloges, Imprimerie de la Republique, 1802
Gustav Bilfinger Die Mittelalterlichen Horen und die Modernen Stunden, Stuttgart, 1892
F.J. Britten Old clocks and their makers:an historical and descriptive account of the different styles of clocks of the past in England and abroad : with a list of over eleven thousand makers, London, 1910
Ernst Zinner Aus der Frühzeit der Räderuhr. Von der Gewichtsuhr zur Federzuguhr München, 1954
Clock designs
Turrets
Clocks | Turret clock | [
"Physics",
"Technology",
"Engineering"
] | 1,717 | [
"Physical systems",
"Machines",
"Clocks",
"Measuring instruments"
] |
10,673,638 | https://en.wikipedia.org/wiki/Solar-like%20oscillations | Solar-like oscillations are oscillations in stars that are excited in the same way as those in the Sun, namely by turbulent convection in its outer layers. Stars that show solar-like oscillations are called solar-like oscillators. The oscillations are standing pressure and mixed pressure-gravity modes that are excited over a range in frequency, with the amplitudes roughly following a bell-shaped distribution. Unlike opacity-driven oscillators, all the modes in the frequency range are excited, making the oscillations relatively easy to identify. The surface convection also damps the modes, and each is well-approximated in frequency space by a Lorentzian curve, the width of which corresponds to the lifetime of the mode: the faster it decays, the broader is the Lorentzian. All stars with surface convection zones are expected to show solar-like oscillations, including cool main-sequence stars (up to surface temperatures of about 7000K), subgiants and red giants. Because of the small amplitudes of the oscillations, their study has advanced tremendously thanks to space-based missions (mainly COROT and Kepler).
Solar-like oscillations have been used, among other things, to precisely determine the masses and radii of planet-hosting stars and thus improve the measurements of the planets' masses and radii.
Red giants
In red giants, mixed modes are observed, which are in part directly sensitive to the core properties of the star. These have been used to distinguish red giants burning helium in their cores from those that are still only burning hydrogen in a shell, to show that the cores of red giants are rotating more slowly than models predict, and to constrain the internal magnetic fields of the cores.
Echelle diagrams
The peak of the oscillation power corresponds to lower frequencies and lower radial orders for larger stars. For the Sun, the highest-amplitude modes occur around a frequency of 3 mHz with radial orders around n ≈ 20, and no mixed modes are observed. For more massive and more evolved stars, the modes are of lower radial order and overall lower frequencies. Mixed modes can be seen in the evolved stars. In principle, such mixed modes may also be present in main-sequence stars, but they are at too low frequency to be excited to observable amplitudes. High-order pressure modes of a given angular degree are expected to be roughly evenly spaced in frequency, with a characteristic spacing known as the large separation, Δν. This motivates the echelle diagram, in which the mode frequencies are plotted as a function of the frequency modulo the large separation, and modes of a particular angular degree form roughly vertical ridges.
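A minimal sketch of how such a diagram is assembled, using synthetic frequencies generated from the asymptotic relation (the large separation, phase offset and radial orders below are illustrative values, not measurements of any real star):

```python
import numpy as np

# Build an echelle diagram from synthetic mode frequencies (asymptotic relation).
delta_nu = 135.0                 # µHz, assumed large separation (roughly solar)
eps = 1.45                       # assumed phase offset, illustrative only
n = np.arange(15, 26)            # radial orders

freqs = {0: delta_nu * (n + eps),          # l = 0 modes
         1: delta_nu * (n + 0.5 + eps)}    # l = 1 modes, offset by ~Δν/2

for l, nu in freqs.items():
    x = nu % delta_nu            # echelle x-axis: frequency modulo Δν
    print(f"l={l}: x-values {np.round(x[:3], 1)} ... (a nearly vertical ridge)")
```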
Scaling relations
The frequency of maximum oscillation power is accepted to vary roughly with the acoustic cut-off frequency, above which waves can propagate in the stellar atmosphere, and thus are not trapped and do not contribute to standing modes. This gives
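Written relative to solar reference values (with M, R and T_eff the stellar mass, radius and effective temperature), the relation is usually quoted approximately as:

```latex
\frac{\nu_{\max}}{\nu_{\max,\odot}} \approx
  \left(\frac{M}{M_\odot}\right)
  \left(\frac{R}{R_\odot}\right)^{-2}
  \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{-1/2}
```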
Similarly, the large frequency separation is known to be roughly proportional to the square root of the density:
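In the same solar-scaled notation, this proportionality is usually written as:

```latex
\frac{\Delta\nu}{\Delta\nu_\odot} \approx
  \sqrt{\frac{\bar\rho}{\bar\rho_\odot}} =
  \left(\frac{M}{M_\odot}\right)^{1/2}
  \left(\frac{R}{R_\odot}\right)^{-3/2}
```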
When combined with an estimate of the effective temperature, this allows one to solve directly for the mass and radius of the star, basing the constants of proportionality on the known values for the Sun. These are known as the scaling relations:
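Inverting the two proportionalities above gives the scaling relations in their usual form:

```latex
\frac{M}{M_\odot} \approx
  \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)^{3}
  \left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-4}
  \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{3/2},
\qquad
\frac{R}{R_\odot} \approx
  \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)
  \left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-2}
  \left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{1/2}
```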
Equivalently, if one knows the star's luminosity, then the temperature can be replaced via the blackbody luminosity relationship L = 4πσR²T_eff⁴, which gives
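Using L ∝ R²T_eff⁴ (in solar units) to eliminate the temperature from the expressions above yields the luminosity-based forms:

```latex
\frac{M}{M_\odot} \approx
  \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)^{12/5}
  \left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-14/5}
  \left(\frac{L}{L_\odot}\right)^{3/10},
\qquad
\frac{R}{R_\odot} \approx
  \left(\frac{\nu_{\max}}{\nu_{\max,\odot}}\right)^{4/5}
  \left(\frac{\Delta\nu}{\Delta\nu_\odot}\right)^{-8/5}
  \left(\frac{L}{L_\odot}\right)^{1/10}
```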
Some bright solar-like oscillators
Procyon
Alpha Centauri A and B
Mu Herculis
See also
Asteroseismology
Helioseismology
Variable stars
References
External links
Lecture Notes on Stellar Oscillations published by J. Christensen-Dalsgaard (Aarhus University, Denmark)
Variable stars
Asteroseismology | Solar-like oscillations | [
"Physics"
] | 779 | [
"Physical phenomena",
"Stellar phenomena",
"Astrophysics",
"Asteroseismology"
] |
10,674,243 | https://en.wikipedia.org/wiki/Tzanck%20test | In dermatopathology, the Tzanck test, also Tzanck smear, is scraping of an ulcer base to look for Tzanck cells. It is sometimes also called the chickenpox skin test and the herpes skin test. It is a simple, low-cost, and rapid office based test.
Tzanck cells (acantholytic cells) are found in:
Herpes simplex
Varicella and herpes zoster
Pemphigus vulgaris
Cytomegalovirus
Arnault Tzanck did the first cytological examinations in order to diagnose skin diseases. To diagnose pemphigus he identified acantholytic cells, and to diagnose herpetic infections he identified multinucleated giant cells and acantholytic cells. He extended his cytologic findings to certain skin tumors as well.
Even though cytological examination can provide rapid and reliable diagnosis for many skin diseases, its use is limited to a few diseases. In endemic regions, Tzanck test is used to diagnose leishmaniasis and leprosy. For other regions, Tzanck test is mainly used to diagnose pemphigus and herpetic infections. Some clinics use biopsies even for herpetic infections. This is because the advantages of this test are not well known, and the main textbooks of dermatopathology do not include dedicated sections for cytology or Tzanck smear. A deep learning model called TzanckNet has been developed to lower the experience barrier needed to use this test.
Procedure
Unroof vesicle and scrape base w/ sterile №15 scalpel blade
Smear with cotton stick onto a clean glass slide
Fix w/ gentle heat or air dry
Fix w/ MeOH (Methanol)
Stain w/ Giemsa, methylene blue or Wright’s stain.
Microscopic examination using an oil immersion lens. (Look for multinucleated giant cells)
A modified test can be performed using proprietary agents which requires fewer steps and allows the sample to be fixed quicker.
Cytologic findings
For microscopic evaluation, samples are first scanned with low magnification objectives (X4 and X10) and then examined in detail with the high magnification objective (X100). The X4 objectives are used to select the areas to investigate in detail and to detect some ectoparasites, but the basis of the cytological diagnostic process is the X10 objective. With X10 magnification, the individual characteristics of the cells, the relationship of the cells to each other and the presence of some infection and infestation agents are evaluated. For this reason, most of the cytological examination is spent at this magnification, and most samples are diagnosed at this magnification. The key cytological findings that are observed at low magnification or, in other words, should be investigated according to the clinical characteristics of the patient are as follows: acantholytic cells, tadpole cells, granulomatous inflammation, infectious agents and increases in specific cells.
Major indications, cytologic findings and diagnostic value of Tzanck smear test
Tzanck smear examples
References
External links
Tzanck test - medlineplus.org.
Definition of Tzanck test - medterms.com.
Medical tests
Pathology
Chickenpox | Tzanck test | [
"Biology"
] | 701 | [
"Pathology"
] |
10,674,680 | https://en.wikipedia.org/wiki/Sumitomo%20Electric%20Industries | Sumitomo Electric Industries, Ltd. is a manufacturer of electric wire and optical fiber cables. Its headquarters are in Chūō-ku, Osaka, Japan. The company's shares are listed on the first sections of the Tokyo and Nagoya stock exchanges and on the Fukuoka Stock Exchange. In the period ending March 2021, the company reported consolidated sales of US$26.5 billion (2,918,580 million Japanese yen).
The company was founded in 1897 to produce copper wire for electrical uses. Sumitomo Electric operates in five business fields: Automotive, Information & Communications, Electronics, Environment & Energy, and Industrial materials and is developing in two others: Life Sciences and Materials & Resources. It has more than 400 subsidiaries and over 280,000 employees in more than 30 countries.
Sumitomo Electric has traditionally had an intensive focus on R&D to develop new products. Its technologies have been used in major projects including traffic control in Thailand, improvement of telecom networks in Nigeria, membrane technology for waste water treatment in Korea, and bridge construction in Germany. Sumitomo produces chips for 5G base stations.
Sumitomo Electric's electrical wiring harness systems, which are used to send information and energy to automobiles, hold the largest market share in the world. Sumitomo Electric also continues to be the leading manufacturer of compound semiconductors (GaAs, GaN, InP), which are widely used in semiconductor lasers, LEDs, and mobile telecommunications devices. The company is one of the top three manufacturers in the world of optical fiber.
Sumitomo Electric Industries is a part of the Sumitomo keiretsu.
History
1897 to 1950
1897 - Sumitomo Copper Rolling Works was founded
1900 - Started production of coated wires
1908 - Started production of power cables
1909 - Started trial production of telecommunication cables
1911 - Established Sumitomo Electric Wire & Cable Works; Laid first Japan-made high-voltage underground cables
1916 - Opened a new factory (now the Osaka Works); Started production of enamel wires
1920 - Sumitomo Electric Wire & Cable Works incorporated as a limited company
1931 - Started production of cemented carbide tools
1932 - Started production of special steel wires
1939 - Company name changed to the current name, Sumitomo Electric Industries, Ltd.
1941 - Opened the Itami Works
1943 - Started production of vibration-proof rubber products and fuel tanks
1946 - Opened a brand office in Tokyo (now the Tokyo Head Office)
1948 - Started marketing sintered powder metal products
1949 - Entered into the construction business of overhead transmission lines
1951 to 2000
1957 - Delivered the first Japan-made television broadcasting antennas
1961 - Opened the Yokohama Works; Delivered the wiring harnesses for four-wheel vehicles for the first time in its history
1962 - Started production of the Irrax Tube electron beam irradiation tubes; The head office was moved from Osaka's Konohana Ward to its present location in Chuo Ward
1963 - Started production of disc brakes
1964 - Started production of electron beam irradiation wires
1968 - Entered into the traffic control systems business
1969 - Established the first overseas production subsidiary in Thailand (SIAM Electric Industries Co., Ltd.); Started development of flexible printed circuits (FPCs)
1970 - Started production of compound semiconductors
1971 - Opened the Kanto Works
1974 - Started production of optical fiber cables
1975 - Contracted to construct a power transmission line in Iran
1976 - Received an order for a large telecommunications network construction project in Nigeria
1978 - Delivered and put into operation the world's first bidirectional fiber optics CATV system called “Hi-OVIS”
1981 - Delivered and installed fiber optic LAN systems for the first time in its history
1982 - Succeeded in producing the world's-largest-class (1.2 carats) synthetic diamonds
1996 - Developed a technology for producing long-length oxide high voltage superconducting wires
1998 - Developed and started marketing ecology wires and cables
1999 - Sumitomo Electric Fine Polymer, Inc. (fine polymer products) started operation
2001 to present
2001 - J-Power Systems Corporation (high-voltage power cables) started operation
2002 - Sumitomo Electric Networks, Inc. (network equipment), Sumitomo (SEI) Steel Wire Corp. (special steel wires) and Sumitomo Electric Wintec, Inc. (magnet wires) started operation
2003 - Sumiden Hitachi Cable Ltd. (wires and cables for buildings and industrial equipment) and Sumitomo Electric Hardmetal Corp. (powder metal and diamond products) started operation
2004 - A.L.M.T. Corp. was made a wholly owned subsidiary
2006 - The HTS cable used in a power transmission grid in the U.S. started supplying electricity
2007 - Sumitomo Wiring Systems, Ltd. was made a wholly owned subsidiary; Nissin Electric Co., Ltd. was made a consolidated subsidiary
2008 - Opened Technical Training Center
2009 - Eudyna Devices Inc. was made a wholly owned subsidiary and changed its trade name to Sumitomo Electric Device Innovations, Inc.
2010 - Opened WinD Lab, a new laboratory building; SEI Optifrontier Co., Ltd. started its lightwave network product business
2011 - Commenced “Demonstration of Megawatt-Class Power Generation/Storage System” at Osaka Works, and at Yokohama Works in 2012
2012 - Obtained the world's first Thunderbolt certification for an optical cable, which began commercial production; Established joint research on commercialization and construction technology for offshore wind power generation plants (the "Regional Promotion Aqua-Wind Commercialization Study Group")
2013 - Constructed manufacturing base for automotive aluminum electric wire in Thailand.
Business units
Sumitomo Electric and its global subsidiaries and affiliates undertake product development, manufacturing and marketing, as well as service provision in the five business divisions: “Automotive,” “Infocommunications,” “Electronics,” “Environment and Energy,” and “Industrial Materials & Others.”
Automotive
The automotive segment accounts for 50% of Sumitomo Electric's annual sales. With the aim of realizing an automotive society characterized by safety, comfort, and environmental responsibility, Sumitomo Electric supplies the global market with a broad range of products, including wiring harnesses for in-vehicle data and energy transmission, and anti-vibration rubber.
The automotive wiring harness business commenced in 1949 with supplies to the Occupation Forces for their jeeps. In 1961, for the first time, the company supplied wiring harnesses for four-wheel-drive vehicles. At present, Sumitomo Electric promotes the automotive wiring harness business in a tripartite system, in which Sumitomo Electric takes charge of sales and business planning, Sumitomo Wiring Systems handles design and manufacturing, and AutoNetworks Technologies conduct research and development. As a result, Sumitomo Electric's electrical wiring harness systems, which are used to send information and energy to automobiles, have garnered the second largest market share in the world.
Info-communications
This segment provides key products and devices that support optical communications, such as optical fibers, cables, connectors, fusion splicers, GE-PON (Gigabit Ethernet Passive Optical Network) devices, various network access equipment, as well as electronic devices and antenna products for wireless communications. The division also provides various products for supporting the Information and Communication Technology (ICT) society such as traffic control systems and other intelligent transportation system (ITS) devices.
Sumitomo Electric produced optical fiber well ahead of other manufacturers, taking note of the product's great capacity for voluminous, speedy, and assured data transmission, ideal for the advanced information age that was to come. In 1986, Sumitomo Electric developed Z-fiber, pure silica core fiber with the world's lowest transmission loss. This has supported the construction of optical communication networks, such as its wide use in many submarine cables. Sumitomo Electric's optical fibers rank among the best in optical transmission networks and optical communication devices.
Electronics
The Sumitomo Electric Group's electronics division supplies various products to manufacturers of smartphones, flat-screen televisions, and other highly advanced electronic goods. Products include base material, wiring, and components for compact and lightweight devices with high functionality, such as flexible printed circuits (FPCs), electronic wires, heat-shrinkable tubing, fine polymer products, and compound semiconductors. Capitalizing on compound semiconductor development and manufacturing knowledge accumulated over many years, Sumitomo Electric succeeded in developing and mass-producing the world's first gallium nitride substrates. Sumitomo Electric also continues to be the leading manufacturer of compound semiconductors (GaAs, GaN, InP), which are widely used in semiconductor lasers, LEDs, and mobile telecommunications devices.
Environment and Energy
This division provides electric wire and cable products that underpin stable energy supply. They include copper wire rods from which various types of electric wires and cables are made, power cables that are indispensable for the supply of high-voltage electricity, and trolley wires for railways. This business segment also supplies magnet wires used in household appliances, automotive electric components, and industrial motors- including hybrid products of rubber, plastic, and ceramics resulting from our development of wire coating technologies- to many different branches of industry.
Industrial Materials
Hard metal products, such as cutting tools, are essential for high-speed, high-performance, and high-precision mechanical processing. This division manufactures products used in many industries, including special metal wires for prestressed concrete used in civil engineering and construction projects; special steel wires such as steel cords used as tire-reinforcement materials in the automobile industry; and oil-tempered wires for valve springs. This division also makes sintered parts that are used as structural components in automobiles and home electric appliances, ranking among the top 3 in the world.
Starting in 2013, Sumitomo Electric will expand into two more divisions, “Life Sciences” and “Resources” by making full use of the Group's wide-ranging technological capabilities.
Projects
Americas
Argentina – Optical Cable System
Brazil – Automotive Wiring Harness Business
Mexico – Submarine and underground power transmission installation project
USA – Restoration of I-35 W Highway Bridge, Superconducting Cable Demonstration Test
Middle East
Iran – Major power transmission line installation project
Asia
China – Establishment of optical fiber product joint ventures with Futong Group
India - Sumitomo Riko Co., Ltd. (formerly Tokai Rubber Industries), which produces rubber hose and other synthetic resin products
Japan – World’s first superconducting electric car
Korea – Membrane Technology for waste water treatment
Taiwan – contribution to Taiwan’s high-speed Shinkansen Bullet Train
Thailand – Easing traffic jam, First overseas manufacturing base
Vietnam – New business center for electronic products
Malaysia - Sumitomo Electric Wintec Sdn Bhd
Europe
Germany – Introduction of new bridge building method
UK – SUMICRYSTAL: Synthetic Single Crystal Diamond
Russia – communications equipment and electric wire business
Africa
Nigeria – Telecom network project.
Morocco - SEWS.
Tunisia - SEWS.
Oceania
Australia- Long-distance underground power transmission lines
See also
Sumiflon
References
External links
Sumitomo Electric Industries - Global Website
Sumitomo Electric Industries - Japanese Website
Sumitomo Electric Industries - Chinese Website
Companies listed on the Tokyo Stock Exchange
Companies listed on the Osaka Exchange
Companies listed on the Fukuoka Stock Exchange
Companies in the Nikkei 225
Manufacturing companies based in Osaka
Wire and cable manufacturers
Electrical equipment manufacturers
Manufacturing companies established in 1897
Sumitomo Group
Electronics companies of Japan
Companies listed on the Nagoya Stock Exchange
Japanese companies established in 1897
Electric motor manufacturers | Sumitomo Electric Industries | [
"Engineering"
] | 2,320 | [
"Electrical engineering organizations",
"Electrical equipment manufacturers"
] |
10,674,791 | https://en.wikipedia.org/wiki/Near-field%20electromagnetic%20ranging | Near-field electromagnetic ranging (NFER) refers to any radio technology employing the near-field properties of radio waves as a Real Time Location System (RTLS).
Overview
Near-field electromagnetic ranging is an emerging RTLS technology that employs transmitter tags and one or more receiving units. Operating within a half-wavelength of a receiver, transmitter tags must use relatively low frequencies (less than 30 MHz) to achieve significant ranging. Depending on the choice of frequency, NFER has the potential for range resolution of and ranges up to .
Technical Discussion
The phase relation between the E (electric) and H (magnetic) field components of an electromagnetic field varies with distance around small antennas. This was first discovered by Heinrich Hertz and is formulated in Maxwell's field theory.
Close to a small antenna, the electric and magnetic field components of a radio wave are 90 degrees out of phase. As the distance from the antenna increases, the EH phase difference decreases. Far from a small antenna in the far-field, the EH phase difference goes to zero. Thus a receiver that can separately measure the electric and magnetic field components of a near-field signal and compare their phases can measure the range to the transmitter.
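A rough numerical illustration of this behaviour, using the textbook near-field/far-field factors of an ideal (Hertzian) dipole; the distances are expressed in wavelengths, and the snippet is only a sketch, not a model of an actual NFER receiver:

```python
import cmath
import math

def eh_phase_difference_deg(r_over_lambda):
    """Phase angle between E_theta and H_phi of a small dipole at distance r."""
    kr = 2 * math.pi * r_over_lambda
    E = 1 + 1 / (1j * kr) - 1 / kr**2   # E_theta is proportional to this factor
    H = 1 + 1 / (1j * kr)               # H_phi is proportional to this factor
    return abs(math.degrees(cmath.phase(E / H)))

for r in (0.01, 0.05, 0.1, 0.25, 0.5, 1.0):
    # ~90° deep in the near field, falling toward 0° in the far field
    print(f"r = {r:4.2f} λ : EH phase difference ≈ {eh_phase_difference_deg(r):5.1f}°")
```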
Advantages
NFER technology is a different approach for locating systems. It has several inherent advantages over other RTLS systems.
First, no signal modulation is required, so baseband signals with an arbitrarily small bandwidth may be used for ranging.
Second, precise synchronization is not required between different receivers: in fact, a local range measurement can be made with just a single receiver.
Third, since EH phase differences are preserved when a signal is down-converted to baseband, high range precision may be achieved with relatively low time precision.
For instance, a radio wave at 1 MHz has a period of 1 μs, and the EH phase difference changes by roughly 45 degrees across the useful near-field ranging region. Thus, a 1 degree EH phase difference in a 1 MHz signal corresponds to a small range difference and to 1/360 of the period, or a 2.78 ns difference in time between the electric and magnetic signals. Down-converted to a 1 kHz audio signal, the period becomes 1 ms, and the time difference required to measure becomes 2.78 μs. A comparable time-of-flight (TOF) or time difference of arrival (TDOA) system would require 2 ns to 4 ns timing precision to make the same measurement.
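The timing figures in this example follow from dividing the period by 360; a quick check using only the frequencies quoted above:

```python
# Time resolution corresponding to a 1° phase measurement at two frequencies.
def time_per_degree(frequency_hz):
    period = 1.0 / frequency_hz
    return period / 360.0          # one degree of phase expressed as a time interval

print(f"1 MHz: {time_per_degree(1e6) * 1e9:.2f} ns per degree")   # ≈ 2.78 ns
print(f"1 kHz: {time_per_degree(1e3) * 1e6:.2f} µs per degree")   # ≈ 2.78 µs
```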
Using relatively low frequencies also conveys additional advantages. First, low frequencies are generally more penetrating than higher frequencies. For instance, at 2.4 GHz a reinforced concrete wall might attenuate signals as much as 20 dB. Second, the long wavelengths associated with low frequencies are far less vulnerable to multipath. In dense metallic structures, multipath obscures or destroys the ability of microwave or UHF signals to be used for reliable positioning. Low frequencies are less affected by this problem.
Disadvantages
Operation at low frequencies faces challenges as well. In general, antennas are most efficient at frequencies whose wavelengths are comparable to the antennas' dimensions (e.g., a quarter-wavelength monopole antenna). Therefore, since higher frequencies have shorter wavelengths, high-frequency antennas are typically smaller than low-frequency antennas. The large size of practically efficient low-frequency antennas is a significant hurdle that near-field electromagnetic ranging systems cannot overcome without decreasing gain. Applying fractal antennas to NFC requires complex adaptive controls.
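The size penalty can be made concrete by comparing idealised quarter-wavelength monopole heights at a few frequencies (free-space wavelengths, ignoring electrical loading or other miniaturization techniques):

```python
c = 299_792_458.0  # speed of light, m/s

def quarter_wave_monopole_m(frequency_hz):
    """Idealised quarter-wavelength monopole height in metres."""
    return c / frequency_hz / 4.0

for f in (1e6, 13.56e6, 2.45e9):
    # roughly 75 m at 1 MHz versus about 3 cm at 2.45 GHz
    print(f"{f / 1e6:8.2f} MHz -> ~{quarter_wave_monopole_m(f):8.3f} m")
```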
Applications
The low-frequency, multipath-resistant characteristics of NFER make it well suited for tracking in dense metallic locations, such as typical office and industrial environments. Low frequencies also readily diffract around the human body, which makes tracking people possible without the body blockage experienced by microwave systems like Ultra-wideband (UWB). Systems deployed in complicated indoor propagation environments reportedly achieve accuracy or better at ranges of or more. There is also an indication that multiple frequency implementations may yield increased accuracy.
See also
Near-field, a definition with the Hertz and Maxwell wave models
Near Field Communication, a short-range wireless technology
Radio Frequency Identification (RFID)
Real Time Locating Systems (RTLS)
Ultra-wideband (UWB)
References
External links
Capps, Charles. “Near Field or Far Field,” EDN, August 16, 2001, pp. 95-102.
Introduction to Near Field Electromagnetic Ranging
Technical Papers on Near Field Electromagnetic Ranging
Radio technology | Near-field electromagnetic ranging | [
"Technology",
"Engineering"
] | 896 | [
"Information and communications technology",
"Telecommunications engineering",
"Radio technology"
] |
10,674,858 | https://en.wikipedia.org/wiki/Polarity%20symbols | Polarity symbols are a notation for electrical polarity, found on devices that use direct current (DC) power, when this is or may be provided from an alternating current (AC) source via an AC adapter. The adapter typically supplies power to the device through a thin electrical cord which terminates in a coaxial power connector often referred to as a "barrel plug" (so-named because of its cylindrical shape). The polarity of the adapter cord and plug must match the polarity of the device, meaning that the positive contact of the plug must mate with the positive contact in the receptacle, and the negative plug contact must mate with the negative receptacle contact. Since there is no standardization of these plugs, a polarity symbol is typically printed on the case indicating which type of plug is needed.
The commonly used symbol denoting the polarity of a device or adapter consists of a black dot with a line leading to the right and a broken circle (like the letter "C") surrounding the dot and with a line leading to the left. At the ends of the lines leading right and left are found a plus sign (+), meaning positive, also sometimes referred to as "hot", and a minus sign (−), meaning negative, also sometimes referred to as "neutral".
The symbol connected to the dot (usually the symbol found to the right) denotes the polarity of the center/tip, whereas the symbol connected to the broken circle denotes the polarity of the barrel/ring. When a device or adapter is described simply as having "positive polarity" or "negative polarity", this denotes the polarity of the center/tip.
External links
Electric Current Symbols
Polarity Symbols.
Electrical engineering | Polarity symbols | [
"Engineering"
] | 361 | [
"Electrical engineering"
] |
10,675,917 | https://en.wikipedia.org/wiki/Crown%20gold | Crown gold is a 22 karat (kt) gold alloy used in the crown coin introduced in England in 1526 (by Henry VIII). In this alloy, the proportion of gold is 22 parts out of 24 (91.667% gold). Crown gold is appreciably less prone to wear than the softer 23 kt gold of earlier gold sovereigns — an important point for coins intended for everyday use in circulation.
Alloying metal
The alloying metal in England is traditionally restricted to copper. Copper is still used for the current British gold sovereign. An exception was the gold sovereign of 1887, when 1.25% silver, replacing the same weight of copper, was used to gain a better effigy of Queen Victoria for the Golden Jubilee of her reign. Elsewhere, both copper and silver have been used in varying proportions.
Circulating coins
In the United States until 1834, gold circulating coins were minted in 22 kt crown gold using about 6% silver as well as copper. From 1834, the fineness of U.S. coin gold was decreased from the 22 kt crown gold standard to 0.8992 fine (21.58 kt); and in 1837 to 0.900 fine (21.60 kt exactly). This 90% gold–copper alloy continued in the U.S. from 1837 until gold coins were removed from circulation in the U.S. in 1933.
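The karat equivalents quoted for these finenesses follow from the standard conversion (24 karats = pure gold); a quick check:

```python
def fineness_to_karat(fineness):
    """Convert decimal fineness (fraction of gold by mass) to karats (24 = pure)."""
    return fineness * 24

for f in (22 / 24, 0.8992, 0.900):
    print(f"fineness {f:.4f} -> {fineness_to_karat(f):.2f} kt")
    # 0.9167 -> 22.00 kt, 0.8992 -> 21.58 kt, 0.9000 -> 21.60 kt
```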
The South African Krugerrand, first produced in 1967, is produced in the traditional crown gold recipe of 22 kt gold, with the remainder copper, because it was originally intended to circulate as currency.
Bullion coins
Most current gold coinage is intended as bullion and not designed for circulation, so the requirement for a hard alloy is much less. Gold bullion coins are commonly 24 kt, 0.999, 0.9999, or even 0.99999 fine in the case of the Canadian Gold Maple Leaf. Some bullion coins have stayed with the traditional crown gold standard, including the British sovereign, the Krugerrand, and American Gold Eagles.
See also
The Great Debasement
References
External links
Metal Used in Coins and Medals by Tony Clayton
Gold
Coins
Precious metal alloys
Numismatics
History of British coinage
Coinage metals and alloys | Crown gold | [
"Chemistry"
] | 461 | [
"Precious metal alloys",
"Alloys",
"Coinage metals and alloys"
] |
10,675,964 | https://en.wikipedia.org/wiki/Ship%20stability | Ship stability is an area of naval architecture and ship design that deals with how a ship behaves at sea, both in still water and in waves, whether intact or damaged. Stability calculations focus on centers of gravity, centers of buoyancy, the metacenters of vessels, and on how these interact.
History
Ship stability, as it pertains to naval architecture, has been taken into account for hundreds of years. Historically, ship stability calculations relied on rule-of-thumb calculations, often tied to a specific system of measurement. Some of these very old equations continue to be used in naval architecture books today. However, the advent of calculus-based methods of determining stability, particularly Pierre Bouguer's introduction of the concept of the metacenter in the 1740s, allowed much more complex analysis.
Master shipbuilders of the past used a system of adaptive and variant design. Ships were often copied from one generation to the next with only minor changes; by replicating stable designs, serious problems were usually avoided. Ships today still use this process of adaptation and variation; however, computational fluid dynamics, ship model testing and a better overall understanding of fluid and ship motions has allowed much more analytical design.
Transverse and longitudinal waterproof bulkheads were introduced in ironclad designs between 1860 and the 1880s, anti-collision bulkheads having been made compulsory in British steam merchant ships prior to 1860. Before this, a hull breach in any part of a vessel could flood its entire length. Transverse bulkheads, while expensive, increase the likelihood of ship survival in the event of hull damage, by limiting flooding to the breached compartments they separate from undamaged ones. Longitudinal bulkheads have a similar purpose, but damaged stability effects must be taken into account to eliminate excessive heeling. Today, most ships have means to equalize water in sections port and starboard (cross flooding), which helps limit structural stresses and changes to the ship's heel and/or trim.
Add-on stability systems
Add-on stability systems are designed to reduce the effects of waves and wind gusts. They do not increase a vessel's stability in calm seas. The International Maritime Organization International Convention on Load Lines does not cite active stability systems as a method of ensuring stability. The hull must be stable without active systems.
Passive systems
Bilge keel
A bilge keel is a long, often V-shaped metal fin welded along the length of the ship at the turn of the bilge. Bilge keels are employed in pairs (one for each side of the ship). Rarely, a ship may have more than one bilge keel per side. Bilge keels increase hydrodynamic resistance when a vessel rolls, limiting the amount of roll.
Outriggers
Outriggers may be employed on vessels to reduce rolling, either by the force required to submerge buoyant floats or by hydrodynamic foils. In some cases, these outriggers are of sufficient size to classify the vessel as a trimaran; on other vessels, they may simply be referred to as stabilizers.
Antiroll tanks
Antiroll tanks are interior tanks fitted with baffles to slow the rate of water transfer from the tank's port side to its starboard side. It is designed so that a larger amount of water is trapped on the vessel's higher side. It is intended to have an effect counter to that of the free surface effect.
Paravanes
Paravanes may be employed by slow-moving vessels, such as fishing vessels, to reduce roll.
Active systems
Active stability systems, found on many vessels, require energy to be applied to the system in the form of pumps, hydraulic pistons, or electric actuators. They include stabilizer fins attached to the side of the vessel or tanks in which fluid is pumped around to counteract the vessel's motion.
Stabilizer fins
Active fin stabilizers reduce the roll a vessel experiences while underway or, more recently, while at rest. They extend beyond the vessel's hull below the waterline and alter their angle of attack depending on heel angle and the vessel's rate-of-roll, operating similarly to airplane ailerons. Cruise ships and yachts frequently use this type of stabilizing system.
When fins are not retractable, they constitute fixed appendages to the hull, possibly extending the beam or draft envelope and requiring attention for additional hull clearance.
While the typical "active fin" stabilizer effectively counteracts roll for ships underway, some modern active fin systems can reduce roll when vessels are not underway. Referred to as zero-speed, or Stabilization at Rest, these systems work by moving specially designed fins with sufficient acceleration and impulse timing to create effective roll-cancelling energy.
Rudder Roll Stabilisation
When a ship is underway, a fast rudder change will not only initiate a heading change but will also cause the ship to roll. For some ships, such as frigates, this effect is so large that it can be used by a control algorithm to steer the ship while simultaneously reducing its roll motions. Such a system is usually referred to as a "Rudder Roll Stabilisation System". Its effectiveness can be as good as that of stabiliser fins, although this depends on the ship speed (higher is better) and on various design aspects such as the position, size and quality of the rudder positioning system (which must respond as fast as a stabiliser fin), how quickly the ship responds to rudder motions with roll motions (quicker is better), and the rate of turn (slower is better). Despite the high cost of high-quality steering gear and of strengthening the ship's stern, this stabilisation option offers better economics than stabiliser fins: it requires fewer installations, is less vulnerable, and causes less drag. Moreover, the required high-quality components provide excellent steering properties during periods when roll reduction is not required, as well as a significant reduction of underwater noise. Known navy ships with this stabilisation solution are the F124 (Germany) and the M-fregat and LCF (both of the Dutch Navy).
Gyroscopic internal stabilizers
Gyroscopes were first used to control a ship's roll in the late 1920s and early 1930s for warships and then passenger liners. The most ambitious use of large gyros to control a ship's roll was on an Italian passenger liner, the SS Conte di Savoia, in which three large Sperry gyros were mounted in the forward part of the ship. While it proved successful in drastically reducing roll in the westbound trips, the system had to be disconnected on the eastbound leg for safety reasons. This was because with a following sea (and the deep slow rolls this generated) the vessel tended to 'hang' with the system turned on, and the inertia it generated made it harder for the vessel to right herself from heavy rolls.
Gyro stabilizers consist of a spinning flywheel and gyroscopic precession that imposes boat-righting torque on the hull structure.
The angular momentum of the gyro's flywheel is a measure of the extent to which the flywheel will continue to rotate about its axis unless acted upon by an external torque. The higher the angular momentum, the greater the resisting force of the gyro to external torque (in this case more ability to cancel boat roll).
A gyroscope has three axes: a spin axis, an input axis, and an output axis. The spin axis is the axis about which the flywheel is spinning and is vertical for a boat gyro. The input axis is the axis about which input torques are applied. For a boat, the principal input axis is the longitudinal axis of the boat since that is the axis around which the boat rolls. The principal output axis is the transverse (athwartship) axis about which the gyro rotates or precesses in reaction to an input.
When the boat rolls, the rotation acts as an input to the gyro, causing the gyro to generate rotation around its output axis such that the spin axis rotates to align itself with the input axis. This output rotation is called precession and, in the boat case, the gyro will rotate fore and aft about the output or gimbal axis.
Angular momentum is the measure of effectiveness for a gyro stabilizer, analogous to horsepower ratings on a diesel engine or kilowatts on a generator. In specifications for gyro stabilizers, the total angular momentum (moment of inertia multiplied by spin speed) is the key quantity. In modern designs, the output axis torque can be used to control the angle of the stabilizer fins (see above) to counteract the roll of the boat so that only a small gyroscope is needed. The idea of a gyro controlling a ship's fin stabilizers was first proposed in 1932 by a General Electric scientist, Dr Alexanderson. He proposed a gyro to control the current to the electric motors on the stabilizer fins, with the actuating instructions being generated by thyratron vacuum tubes.
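The available roll-cancelling torque scales as the flywheel's angular momentum times the precession rate (τ = L·Ω); the sketch below uses invented flywheel and gimbal figures purely to illustrate that relationship:

```python
import math

# Illustrative gyro stabilizer sizing: torque = angular momentum × precession rate.
flywheel_inertia = 300.0            # kg·m², flywheel moment of inertia (assumed)
spin_rpm = 5000.0                   # flywheel speed (assumed)
precession_rate = math.radians(30)  # 30°/s fore-aft gimbal rate, in rad/s (assumed)

spin_rate = spin_rpm * 2 * math.pi / 60          # rad/s
angular_momentum = flywheel_inertia * spin_rate  # kg·m²/s
righting_torque = angular_momentum * precession_rate

print(f"angular momentum ≈ {angular_momentum / 1e3:.0f} kN·m·s")
print(f"roll-cancelling torque ≈ {righting_torque / 1e3:.0f} kN·m")
```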
Calculated stability conditions
When a hull is designed, stability calculations are performed for the intact and damaged states of the vessel. Ships are usually designed to slightly exceed the stability requirements (below), as they are usually tested for this by a classification society.
Intact stability
Intact stability calculations are relatively straightforward and involve taking all the centers of mass of objects on the vessel, which are then combined to identify the center of gravity of the vessel and the center of buoyancy of the hull. Cargo arrangements and loadings, crane operations, and the design sea states are usually taken into account. The center of gravity may be well above the center of buoyancy, yet the ship remains stable. The ship is stable because as it begins to heel, one side of the hull begins to rise from the water and the other side begins to submerge. This causes the center of buoyancy to shift toward the side that is lower in the water. The job of the naval architect is to make sure that the center of buoyancy shifts outboard of the center of gravity as the ship heels. A line drawn vertically from the center of buoyancy in a slightly heeled condition will intersect the centerline at a point called the metacenter. As long as the metacenter is further above the keel than the center of gravity, the ship is stable in an upright condition.
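For a simple box-shaped barge, the quantities described above reduce to standard formulas (KB ≈ draft/2, BM = I/∇); the sketch below uses arbitrary dimensions and an assumed centre-of-gravity height only to show how the initial metacentric height GM is obtained:

```python
# Initial transverse stability of a box-shaped barge (illustrative numbers).
L, B, T = 60.0, 10.0, 3.0   # length, beam and draft in metres
KG = 3.5                    # assumed height of the centre of gravity above keel, m

KB = T / 2                  # centre of buoyancy of a box-shaped hull
I = L * B**3 / 12           # transverse second moment of the waterplane area
volume = L * B * T          # displaced volume
BM = I / volume             # metacentric radius
GM = KB + BM - KG           # initial metacentric height

print(f"KB = {KB:.2f} m, BM = {BM:.2f} m, GM = {GM:.2f} m "
      f"({'stable' if GM > 0 else 'unstable'} upright)")
```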
Intact stability for ships at sea is governed by the International Maritime Organization (IMO) standard the International Code on Intact Stability.
Damage stability (Stability in the damaged condition)
Damage stability calculations are much more complicated than intact stability. Software utilizing numerical methods are typically employed because the areas and volumes can quickly become tedious and long to compute using other methods.
The loss of stability from flooding may be due in part to the free surface effect. Water accumulating in the hull usually drains to the bilges, lowering the center of gravity and actually increasing the metacentric height. This assumes the ship remains stationary and upright. However, once the ship is inclined to any degree (a wave strikes it for example), the fluid in the bilge moves to the lower side. This results in a list.
Stability is also reduced in flooding when, for example, an empty tank is filled with seawater. The lost buoyancy of the tank results in that section of the ship lowering into the water slightly. This creates a list unless the tank is on the centerline of the vessel.
In stability calculations, when a tank is filled, its contents are assumed to be lost and replaced by seawater. If these contents are lighter than seawater (light oil, for example), then buoyancy is lost and the section lowers slightly in the water accordingly.
For merchant vessels, and increasingly for passenger vessels, the damage stability calculations are of a probabilistic nature. That is, instead of assessing the ship for one compartment failure, a situation where two or even up to three compartments are flooded will be assessed as well.
This is a concept in which the chance that a compartment is damaged is combined with the consequences for the ship, resulting in a damage stability index number that has to comply with certain regulations.
Required stability
In order to be acceptable to classification societies such as the Bureau Veritas, American Bureau of Shipping, Lloyd's Register of Ships, Korean Register of Shipping and Det Norske Veritas, the blueprints of the ship must be provided for independent review by the classification society. Calculations must also be provided which follow a structure outlined in the regulations for the country in which the ship intends to be flagged.
Within this framework, different countries establish requirements that must be met. For U.S.-flagged vessels, blueprints and stability calculations are checked against the U.S. Code of Federal Regulations and the International Convention for the Safety of Life at Sea (SOLAS). Ships are required to be stable in the conditions for which they are designed, in both undamaged and damaged states. The extent of damage that must be designed for is specified in the regulations. The assumed hole is calculated as a fraction of the length and breadth of the vessel, and is to be placed in the area of the ship where it would be most damaging to the vessel's stability.
In addition, United States Coast Guard rules apply to vessels operating in U.S. ports and in U.S. waters. Generally these Coast Guard rules concern a minimum metacentric height or a minimum righting moment. Because different countries may have different requirements for the minimum metacentric height, most ships are now fitted with stability computers that calculate this distance on the fly based on the cargo or crew loading. There are many commercially available computer programs used for this task.
Depending upon the class of vessel either a stability letter or stability booklet is required to be carried on board.
See also
References
External links
Title 46 U.S. Code of Federal Regulations
ABS Rules for Building and Classing Steel Vessels 2007
Overview of a few common Roll Attenuation Strategies
Shipbuilding | Ship stability | [
"Engineering"
] | 2,888 | [
"Shipbuilding",
"Marine engineering"
] |
10,676,383 | https://en.wikipedia.org/wiki/Wa%20%28unit%29 | Wa (also waa or wah) is a Thai unit of length, equal to two metres (2 m) or four sok. Wa as a verb means to outstretch one's arms to both sides, which relates to the fathom's distance between the fingertips of a man's outstretched arms. The 1833 Siamese-American Treaty of Amity and Commerce reads, "[The] Siamese fathom...being computed to contain 78 English or American inches, corresponding to 96 Siamese inches." The length then would have been equivalent to a modern 1.981 metres. Since conversion to the metric system in 1923, the length as derived from the metre is precisely two metres, but the unit is neither part of nor recognized by the modern International System of Units (SI).
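As a quick check of the figures above, the snippet below converts between the modern metric definition (2 m) and the 78-inch value quoted in the 1833 treaty; the helper function is purely illustrative.

```python
# Conversions for the Thai wa: 1 wa = 2 m = 4 sok under the 1923 metric definition,
# versus the 78 English/American inches quoted in the 1833 treaty.

INCH_IN_METRES = 0.0254

def wa_to_metres(wa, historical=False):
    """Convert wa to metres; historical=True uses the 78-inch treaty value."""
    one_wa = 78 * INCH_IN_METRES if historical else 2.0
    return wa * one_wa

print(wa_to_metres(1))                    # 2.0 m (modern definition)
print(round(wa_to_metres(1, True), 4))    # 1.9812 m (1833 treaty definition)
print(wa_to_metres(1) / 4, "m per sok")   # 0.5 m, since 1 wa = 4 sok
```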
Wa also occurs as a colloquialism for "square wa" (tarang wa), a unit of area.
As with many terms normally written in the Thai alphabet, romanization of Thai causes spelling variants such as waa and wah.
See also
Thai units of measurement
Orders of magnitude (area) for a comparison with other lengths
References
External links
Units of length
Human-based units of measurement
Thai units of measurement | Wa (unit) | [
"Mathematics"
] | 260 | [
"Quantity",
"Units of measurement",
"Units of length"
] |
10,677,559 | https://en.wikipedia.org/wiki/M520%20Goer | The M520 "Truck, Cargo, 8-ton, 4x4", nicknamed Goer, was a truck series that formerly served as the US Army's standard heavy tactical truck before its replacement by the Oshkosh HEMTT. The Caterpillar-made Goer stands out among trucks for being articulated, for being much wider than other trucks, and for lacking suspension on the wheels.
Some 1,300 of these trucks were built from 1972 to 1976. The majority were M520 Cargo Trucks. The tankers were designated M559 Fuel Servicing Tanker Truck, and the wreckers M553 Wrecker Truck. When fitted with its own crane, the cargo variant was designated M877 Cargo Truck with Material Handling Crane.
Overview
In the mid-1950s, the US military was looking for a new, extreme off-road, tactical truck series with substantially increased load-carrying capacity. According to a May 2006 article in Classic Military Vehicle magazine, the United States Armor Board began evaluating and testing commercially available, large, wheeled, articulated-steering, earth-moving equipment for potential tactical application in 1956–1957. This resulted in development contracts for 4x4 all-terrain vehicles of various weight classes being awarded to Clark Equipment, Le Tourneau-Westinghouse, and Caterpillar Tractor Company.
Clark provided a prototype based on their Model 75 log-skidder, powered by a Cummins six-cylinder diesel engine. Caterpillar's entries were in the eight-ton class and were designated XM520 (8-ton cargo truck), XM553 (wrecker), and XM559 (tanker). Le Tourneau-Westinghouse offered three variants in its weight class: the XM437 Cargo, XM438 Tanker, and XM554 Wrecker.
Without exception, the prototypes consisted of two segments: the front housed the engine and driver's compartment, while the rear served as the main transport unit. Steering was accomplished by articulating the whole front unit relative to the rear, as opposed to conventionally pivoting the front wheels. The large wheels with large, low-pressure tires were mounted without any suspension or steering mechanism, greatly simplifying the design. In order to keep the wheels on the ground on uneven terrain, the front and rear units could swivel not only around a vertical axis but also along the vehicle's longitudinal axis, allowing significant articulation. In low gear ranges, the Goer had four-wheel drive capability, but on-road it was purely front-wheel drive.
The Caterpillar design did well in testing, and in 1960, the company was awarded a multimillion-dollar contract for developing eight cargo trucks, delivered in 1961 and 1962, as well as two wreckers and two tankers in 1962. Another twenty-three units were ordered in 1963, then field-tested in West Germany in 1964 and in South Vietnam in 1966.
Production of official variants
Not until 1971 did Caterpillar receive a production contract, for 1,300 units: 812 M520 cargo vehicles, 371 M559 tankers and 117 M553 wreckers. Production began in 1972 and lasted through June 1976. When fitted with its own crane, the cargo variant would be designated M877. All variants except the wrecker existed both with and without a front winch, whereas all wreckers had winches both front and rear. Early units, with a Cat D333 engine, were multi-fuel, but later ones, with the D333C powerplant, were diesel only.
Field performance
Not only did the Goer offer extreme off-road ability, including 20° of longitudinal articulation and operation on 30° side-slopes, it was also fully amphibious, using the wheels for propulsion in the water. The rear cargo-bed tailgate and drop-side doors, which allowed rapid discharge of cargo, had watertight seals to preserve the unit's swimming capability. During the US involvement in the Vietnam War, the Goer developed a reputation for being able to go where other trucks could not, and it was one of the preferred resupply vehicles after the pre-production units' introduction in 1966. They achieved a 90% availability rate, even though spare parts for the Goer were not an official part of the US Army inventory until 1972.
Nevertheless, the vehicle's lack of suspension made it too bouncy on hardened surfaces, so most drivers shied away from its 31 mph (50 km/h) top speed. The method for keeping bounce to a minimum on hard roads was to gently sway the vehicle left and right at top speed. The bilge pumps were often abused as "super-squirters" on hard-road convoys: the hull would accumulate water, and bored drivers soon realized that havoc could be raised by turning on the high-volume pumps to douse passing and oncoming traffic. The oscillating cab was also dangerous, as entering or exiting the vehicle with the engine off could put pressure on the steering wheel, and when the engine was started the cab would turn without warning. In addition, its oversize dimensions proved generally awkward, so in the 1980s it was replaced by the Oshkosh Heavy Expanded Mobility Tactical Truck series, which combined good on-road behavior with adequate off-road performance. When the Goers were surplused, this was done under a demilitarization order similar to that applied to the M151 jeep: core components in the steering and driveline were destroyed before the remains of the vehicle were sold off. Consequently, only a very few vehicles remain in existence, in museums and private collections.
See also
G-numbers
M-numbers
Heavy Expanded Mobility Tactical Truck (replacement in U.S. Army service)
References
Reference codes
SNL G861 – Supply catalog standard nomenclature number (until late 1950s)
FSN / NSN 2320 — various Federal Stock Numbers / National Stock Numbers, per variant, in the 1960s and 1970s
TM 9-2320-233 technical manuals, dated 1972-1979
External links
M520 Goer at Olive-Drab
Detailed tabulated data from TM 9-2320-233-20 technical manual
Wheeled amphibious vehicles
Articulated vehicles
Military trucks of the United States
Off-road vehicles
Caterpillar Inc. vehicles
Military vehicles introduced in the 1970s | M520 Goer | [
"Engineering"
] | 1,305 | [
"Engineering vehicles",
"Caterpillar Inc. vehicles"
] |
10,677,607 | https://en.wikipedia.org/wiki/Micron%20Memory%20Japan | Micron Memory Japan, K.K. is a Japanese subsidiary of Micron Technology. It was formerly known as Elpida Memory, Inc., a company established in 1999 that developed, designed, manufactured and sold dynamic random-access memory (DRAM) products. It was also a semiconductor foundry. With headquarters in Yaesu, Chūō, Tokyo, Japan, it was initially formed under the name NEC Hitachi Memory in 1999 by the merger of the Hitachi and NEC DRAM businesses. In the following year it took on the name Elpida. In 2003, Elpida took over the Mitsubishi DRAM business. In 2004, it listed its shares in the first section of the Tokyo Stock Exchange. In 2012, those shares were delisted as a result of its bankruptcy. In 2013, Elpida was acquired by Micron Technology. On February 28, 2014, Elpida changed its name to Micron Memory Japan and Elpida Akita changed its name to Micron Akita, Inc.
History
Elpida Memory was founded in 1999 as a merger of NEC's and Hitachi's DRAM operations and began development operations for DRAM products in 2000. Both companies also spun off their other semiconductor operations into Renesas.
In 2001, the company began construction of its 300mm wafer fabrication plant. Later that year, it began sales operations in domestic markets.
In 2002, armed with the Sherman Antitrust Act, the United States Department of Justice began a probe into the activities of dynamic random access memory (DRAM) manufacturers. US computer makers, including Dell and Gateway, claimed that inflated DRAM pricing was causing lost profits and hindering their effectiveness in the marketplace. To date, five manufacturers have pleaded guilty to their involvement in an international price-fixing conspiracy including Hynix, Infineon, Micron Technology, Samsung, and Elpida. Micron Technology was not fined for its involvement due to co-operation with investigators.
In 2003, the company took over Mitsubishi Electric Corporation's DRAM operations and employed Mitsubishi development engineers.
In 2004, Elpida Memory went public and was listed on the Tokyo Stock Exchange.
In 2006, the company established Akita Elpida to take on the development of advanced back-end technology processes.
In March 2006, Elpida reported consolidated sales of 241,500,000,000 Japanese yen. It employed 3196 people.
The company received 140 billion yen in financial aid and loans from the Japanese government and banks during the financial crisis in 2009.
On April 3, 2010, Elpida Memory sold ¥18.5 billion worth of shares to Kingston Technology.
On April 22, 2010, Elpida announced it had developed the world's first four-gigabit DDR3 SDRAM. Based on a 40 nm process, this DRAM was said to use about thirty percent less power than two 40 nm two-gigabit DDR3 SDRAMs. It was to operate at both the standard DDR3 1.5 V and at 1.35 V to further reduce power consumption.
In July 2011, Elpida announced that it planned to raise $987 million by selling shares and bonds. In August 2011, Elpida claimed to be the first memory maker to begin sampling 25 nm DRAMs.
On February 27, 2012, Elpida filed for bankruptcy.
With liabilities of 448 billion yen (US$5.5 billion), the company's bankruptcy was Japan's largest since Japan Airlines filed for bankruptcy in January 2010. The company suffered from both the strong yen and a sharp drop in DRAM prices, the result of stagnant demand for personal computers and of disruption to computer production caused by the flooding of HDD factories in Thailand. DRAM prices plunged to a record low in 2011 as the price of the benchmark DDR3 2-gigabit DRAM declined 85%. Elpida was the third largest DRAM maker, holding 18 percent of the market by revenue in 2011.
On March 28, 2012, Elpida was delisted from the Tokyo Stock Exchange. At the time, Elpida was one of the suppliers of SDRAM components for the A6 processor in the Apple iPhone 5.
In February 2013, the Tokyo court and Elpida's creditors approved an acquisition by Micron Technology.
The company became a fully owned subsidiary of Micron Technology on July 31, 2013.
Effective February 28, 2014, Elpida changed its name to Micron Memory Japan and Elpida Akita changed its name to Micron Akita, Inc.
In August 2017, an agreement was signed with Powertech Technology Inc. (PTI) for the acquisition of the majority stakes in Micron Akita, Inc. as well as in Tera Probe Inc. from Micron Technology Inc.
In September 2022, the Japanese government provided Micron Technology Inc. a subsidy of $320 million for the development of advanced memory chips at the Hiroshima plant. In May 2023, it was announced that Micron Technology would invest up to $3.7 billion for extreme ultraviolet (EUV) technology with support from the Japanese government. In October 2023, the government once again approved a $1.3 billion subsidy for the Hiroshima chip factory.
Products
DDR5 SDRAM
DDR4 SDRAM
DDR3 SDRAM
DDR2 SDRAM
Mobile RAM
GDDR7
GDDR5
XDR DRAM
Locations
Micron has two design centers, one manufacturing plant/technology development site, and two sales offices in Japan:
The Hiroshima Plant is key to Micron's efforts to develop low-power DRAM products essential to smartphones and other mobile devices. Once these products achieve yield and performance targets (optimal cost structure, quality and lower end-to-end product cycle time) in Hiroshima, the manufacturing process can then be transferred to other sites.
Micron's realignment of the Japanese operations included the following:
$2 billion investment in Hiroshima to enhance competitive capabilities
Acquisition of test and probe personnel in Hiroshima to bring these capabilities in-house
Sell-off of its test and assembly capabilities in Micron Akita to Powertech Technology, Inc. (PTI), a Taiwanese semiconductor assembly, packaging and testing company
Sell-off of its equity stake in Tera Probe to PTI
With these changes, Micron's DRAM test and assembly capabilities would be based in Hiroshima and Taiwan.
See also
Renesas Electronics
Numonyx
References
External links
Micron Technology
Computer companies of Japan
Computer hardware companies
Computer memory companies
Manufacturing companies based in Tokyo
Semiconductor companies of Japan
Companies formerly listed on the Tokyo Stock Exchange
Electronics companies established in 1999
Japanese companies established in 1999
Companies that have filed for bankruptcy in Japan
Japanese subsidiaries of foreign companies
2004 initial public offerings
2013 mergers and acquisitions | Micron Memory Japan | [
"Technology"
] | 1,367 | [
"Computer hardware companies",
"Computers"
] |
10,677,746 | https://en.wikipedia.org/wiki/Information%20exchange | Information exchange or information sharing means that people or other entities pass information from one to another, whether electronically or through other systems. The terms can refer either to bidirectional information transfer in telecommunications and computer science or to communication seen from a system-theoretic or information-theoretic point of view. Because "information" in this context invariably refers to (electronic) data that encodes and represents the information at hand, a broader treatment can be found under data exchange.
Information exchange has a long history in information technology. Traditional information sharing referred to one-to-one exchanges of data between a sender and receiver. Online information sharing gives businesses useful data on which to base future strategies. These information exchanges are implemented via dozens of open and proprietary protocols, message formats, and file formats. Electronic data interchange (EDI) is a successful implementation of commercial data exchange that began in the late 1970s and remains in use today.
Some controversy arises in discussions of regulations governing information exchange. Initiatives to standardize information sharing protocols include extensible markup language (XML), simple object access protocol (SOAP), and web services description language (WSDL).
From the point of view of a computer scientist, the four primary information sharing design patterns are sharing information one-to-one, one-to-many, many-to-many, and many-to-one. Technologies to meet all four of these design patterns are evolving and include blogs, wikis, really simple syndication, tagging, and chat.
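As a concrete illustration of the one-to-many pattern, the toy Python sketch below lets one publisher push an item to many subscribers; the class and method names are illustrative only and not part of NIEM or any of the standards mentioned here.

```python
# Toy illustration of the one-to-many information-sharing design pattern:
# one publisher, many receivers (roughly how a feed or broadcast works).

class Channel:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, item):
        for callback in self.subscribers:   # a single send fans out to every subscriber
            callback(item)

feed = Channel()
feed.subscribe(lambda item: print("reader A got:", item))
feed.subscribe(lambda item: print("reader B got:", item))
feed.publish("new blog post")
```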
One example of the United States government's attempt to implement one of these design patterns (one-to-one) is the National Information Exchange Model (NIEM). One-to-one exchange models fall short of supporting all of the design patterns required to fully implement data exploitation technology.
Advanced information sharing platforms provide controlled vocabularies, data harmonization, data stewardship policies and guidelines, standards for uniform data as they relate to privacy, security, and data quality.
Information Sharing, Intelligence Reform, and Terrorism Prevention Act
The term information sharing gained popularity as a result of the 9/11 Commission Hearings and its report of the United States government's lack of response to information known about the planned terrorist attack on the New York City World Trade Center prior to the event. The resulting commission report led to the enactment of several executive orders by President Bush that mandated agencies to implement policies to "share information" across organizational boundaries. In addition, an Information Sharing Environment Program Manager (PM-ISE) was appointed, tasked to implement the provisions of the Intelligence Reform and Terrorism Prevention Act of 2004. In making recommendation toward the creation of an "Information Sharing Environment" the 9/11 Commission based itself on the findings and recommendations made by the Markle Task Force on National Security in the Information Age.
See also
Channel (communications)
Communications protocol
Data mapping
Electronic data interchange
Fusion center
Information Exchange Gateway
Knowledge sharing
Semi-structured data
Data interchange standards
Health information exchange
Cyberinfrastructure
References
Information theory
Sharing | Information exchange | [
"Mathematics",
"Technology",
"Engineering"
] | 627 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
10,678,027 | https://en.wikipedia.org/wiki/Connection-oriented%20Ethernet | Connection-oriented Ethernet refers to the transformation of Ethernet, a connectionless communication system by design, into a connection-oriented system. The aim of connection-oriented Ethernet is to create a networking technology that combines the flexibility and cost-efficiency of Ethernet with the reliability of connection-oriented protocols. Connection-oriented Ethernet is used in commercial carrier grade networks.
Traditional carrier networks deliver services at very high availability. Packet-switched networks are different, as they offer services based on statistical multiplexing. Moreover, packet transport equipment, which makes up the machinery of data networking, leaves most of the carrier-grade qualities such as quality of service, routing, provisioning, and security, to be realized by packet processing. Addressing these needs in a cost-efficient way is a challenge for packet-based technologies.
The IP-MPLS approach aims at providing guaranteed services over the Internet Protocol using a multitude of networking protocols to create, maintain and handle packet data streams. While this approach solves the problem, it inevitably also creates a great deal of complexity.
This has resulted in the emergence of connection-oriented Ethernet, which encompasses a variety of methodologies for using Ethernet to provide the same functionality otherwise delivered by extensive IP protocols. The challenge of carrier Ethernet is to add carrier-grade functionality to Ethernet equipment without losing the cost-effectiveness and simplicity that make it attractive in the first place. To meet this challenge, common connection-oriented Ethernet solutions have chosen to rid themselves of the complex parts of packet transport to achieve stability and control. Key connection-oriented Ethernet technologies used to achieve this include mainly IEEE 802.1ah, Provider Backbone Transport and MPLS-TP, and formerly T-MPLS.
PBT and PBB
Provider Backbone Transport (PBT) is a connection-oriented switch operation scheme and network management architecture. PBT was invented by British Telecom (BT) and developed by Nortel (now Avaya). It defines methods to emulate connection-oriented networks by providing "nailed-up" trunks through a packet-switched network. Key data-plane differences from PBB include the static configuration of forwarding tables within Ethernet switches, the dropping of multicast packets, and the prevention of "flooding" of frames to unknown destination addresses. Configuration is performed by a centralized management server, as in SDH networks, though in the future a control plane may be added. PBT has been presented to IEEE 802, and a new project has been approved to standardize it under the name Provider Backbone Bridge Traffic Engineering (PBB-TE) (IEEE 802.1Qay), a modification to PBB.
Provider Backbone Bridges (PBB) is an Ethernet data-plane technology invented in 2004 by Nortel Networks (now Avaya). It is sometimes known as MAC-in-MAC because it involves encapsulating an Ethernet datagram inside another one with new source and destination addresses (termed B-SA and B-DA). IEEE 802 is standardizing the technology as IEEE 802.1ah, currently under development. PBB is the original data-plane chosen by British Telecom for their new PBT-based Ethernet transport.
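A rough way to picture MAC-in-MAC is as one complete customer frame carried, untouched, inside a backbone Ethernet header. The Python sketch below models only that nesting; the field names follow the B-DA/B-SA terminology used above, but the structure is a simplification and not a faithful IEEE 802.1ah frame layout.

```python
# Simplified picture of PBB "MAC-in-MAC": a backbone header (B-DA, B-SA, service
# instance identifier) wrapped around the original customer frame.
from dataclasses import dataclass

@dataclass
class CustomerFrame:
    c_da: str              # customer destination MAC
    c_sa: str              # customer source MAC
    payload: bytes

@dataclass
class BackboneFrame:
    b_da: str              # backbone destination MAC (B-DA)
    b_sa: str              # backbone source MAC (B-SA)
    i_sid: int             # service instance identifier
    inner: CustomerFrame   # the customer frame, carried unchanged

customer = CustomerFrame("00:11:22:33:44:55", "66:77:88:99:aa:bb", b"IP packet ...")
encapsulated = BackboneFrame("02:00:00:00:00:01", "02:00:00:00:00:02", 1001, customer)
print(encapsulated)
```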
PBB can support point-to-point, point-to-multipoint and multipoint-to-multipoint networks. PBT focuses on point-to-point connectivity, and may be capable of extension to point-to-multipoint, a key technology for advanced data applications such as IPTV. PBT avoids trying to address multipoint-to-multipoint networking, as in the opinion of some of its supporters guaranteed levels of service in multipoint-to-multipoint networks are impossible.
Additionally Ethernet is being reinforced with operations, administration, and maintenance (OAM) capabilities through the work of various standard bodies (IEEE 802.1ag, ITU-T Y.1731 and G.8021, IEEE 802.3ah).
PBT/PBB equipment leverages the economies of scale inherent in Ethernet, promising solutions about 30%–40% cheaper than T-MPLS equipment with identical features and capabilities, and thereby a better overall return on investment for PBT.
T-MPLS (Transport MPLS) / MPLS-TP (MPLS Transport Profile)
T-MPLS, as its name implies, is a derivative of MPLS that renounces all MPLS signaling features and, like PBT, uses a centralized control-plane to perform routing and traffic engineering. T-MPLS is currently being standardized only at ITU-T and enjoys strong vendor support but little carrier support.
As a native MPLS derivative, T-MPLS can be easily implemented over existing MPLS routers. However, T-MPLS has been stripped of the characteristics which originally made MPLS attractive to carriers (control-plane automation, signaling, and QoS) and therefore has yet to prove its benefits for the transport network. T-MPLS OAM, defined in ITU Y.1711, is different from MPLS OAM and lacks the powerful management tools that carriers typically expect. T-MPLS was abandoned by the ITU-T in favor of MPLS-TP in December 2008.
MPLS-TP or MPLS Transport Profile is a profile of MPLS developed in cooperation between ITU-T and IETF since 2008 as a connection-oriented packet-switched (CO-PS) extension. Based on the same architectural principles of layered networking that are used in longstanding transport network technologies like SDH, SONET, and OTN, MPLS-TP provides a reliable packet-based L2 technology that is comparable to circuit-based transport networking, and thus aligned with current organizational processes and large-scale work procedures similar to other packet transport technologies.
Achieving the promise of carrier-grade Ethernet
Services in the data network are typically classified into 2 major categories: Committed Information Rate (CIR) and Excess Information Rate (EIR). A CIR service guarantees its user a fixed amount of bandwidth, whereas an EIR service offers best-effort only transport. Both types of services share a single capacity-constrained infrastructure. Both are further defined by additional parameters.
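A common way to enforce a combined CIR/EIR profile is a two-rate meter that marks traffic green (within the committed rate), yellow (within the excess rate), or red (discarded). The sketch below is a deliberately simplified, single-interval version of that idea, not the exact algorithm of any particular standard.

```python
# Simplified two-rate meter over one measurement interval: traffic within the
# committed budget is "green", within the excess budget "yellow", otherwise "red".

class TwoRateMeter:
    def __init__(self, cir_bps, eir_bps, interval_s=1.0):
        self.committed_budget = cir_bps * interval_s / 8   # bytes allowed at CIR
        self.excess_budget = eir_bps * interval_s / 8      # extra bytes allowed at EIR

    def classify(self, packet_bytes):
        if packet_bytes <= self.committed_budget:
            self.committed_budget -= packet_bytes
            return "green"
        if packet_bytes <= self.excess_budget:
            self.excess_budget -= packet_bytes
            return "yellow"
        return "red"

meter = TwoRateMeter(cir_bps=8_000, eir_bps=4_000)   # 1000 B committed, 500 B excess per interval
for size in (400, 400, 400, 400, 400):
    print(size, "bytes ->", meter.classify(size))
```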
A carrier's return on investment is directly related to its ability to transport more service instances over a fixed capacity-constrained infrastructure, keeping Quality of Service high. It is further associated with its ability to offer a broad range of added-value services, such as IPTV, Voice, and VPN, whose requirements can widely vary and pose technical difficulties when sharing the same infrastructure.
With the above in mind, the carrier's objective is to offer a maximum amount of best-effort EIR services over its network while reliably serving its committed CIR services. To achieve this PBB/PBT and T-MPLS approaches largely under-provision network resources, in order to avoid a situation where a burst in best-effort traffic would jeopardize the ability to serve committed traffic, leading to costly penalties. An additional issue with best-effort access on data networks is fair allocation among clients. With PBB/PBT and T-MPLS, the amount of bandwidth available to a particular client greatly depends on the client's location and the prevailing traffic conditions. This limits the value customers attach to EIR services and undercuts carriers' opportunities to offer differentiated access to its excess capacity.
Performing traffic engineering in real-time is thus key to next-generation Ethernet transport. Additional qualities are required to make Ethernet a carrier-grade technology:
Rich OAM: capacity optimization, path calculation and configuration, iterative optimization.
Path protection: sub-50ms failover.
Support for next-generation services: efficient provisioning of point-to-multipoint and multipoint-to-multipoint services.
Multi-vendor support, the ability to support a variety of Ethernet switches in the core, is a desirable attribute as it allows carriers to use inexpensive switches to build their metro transport network. Vendors such as Tejas Networks, Ethos Networks, and Nortel offer solutions which meet the above requirements, yet preserve Ethernet's simplicity and flexibility.
See also
Ethernet in the first mile
Metro Ethernet
References
External links
Nortel's PBB/PBT page
Tejas Networks' page
IEEE 802.1Qay
Ethernet
MPLS networking
Network topology | Connection-oriented Ethernet | [
"Mathematics"
] | 1,678 | [
"Network topology",
"Topology"
] |
10,678,219 | https://en.wikipedia.org/wiki/Quinone-interacting%20membrane-bound%20oxidoreductase | Quinone-interacting membrane-bound oxidoreductase is a membrane-bound protein complex present in the electron transport chain of sulfate reducers (e.g. Desulfovibrio species) and some sulfur oxidizers.
It was first described by Pires et al. (2003).
References
Cellular respiration
Integral membrane proteins
Oxidoreductases | Quinone-interacting membrane-bound oxidoreductase | [
"Chemistry",
"Biology"
] | 78 | [
"Cellular respiration",
"Oxidoreductases",
"Metabolism",
"Biochemistry",
"Bioinorganic chemistry"
] |
10,678,543 | https://en.wikipedia.org/wiki/Phylogenetic%20nomenclature | Phylogenetic nomenclature is a method of nomenclature for taxa in biology that uses phylogenetic definitions for taxon names as explained below. This contrasts with the traditional method, by which taxon names are defined by a type, which can be a specimen or a taxon of lower rank, and a description in words. Phylogenetic nomenclature is regulated currently by the International Code of Phylogenetic Nomenclature (PhyloCode).
Definitions
Phylogenetic nomenclature associates names with clades, groups consisting of an ancestor and all its descendants. Such groups are said to be monophyletic. There are slightly different methods of specifying the ancestor, which are discussed below. Once the ancestor is specified, the meaning of the name is fixed: the ancestor and all organisms which are its descendants are included in the taxon named. Listing all these organisms (i.e. providing a full circumscription) requires the complete phylogenetic tree to be known. In practice, there are almost always one or more hypotheses as to the correct relationship. Different hypotheses result in different organisms being thought to be included in the named taxon, but application to the name in the context of various phylogenies generally remains unambiguous. Possible exceptions occur for apomorphy-based definitions, when optimization of the defining apomorphy is ambiguous.
Phylogenetic definitions of clade names
Phylogenetic nomenclature assigns names to clades, groups consisting solely of an ancestor and all its descendants. All that is needed to specify a clade, therefore, is to designate the ancestor. There are a number of methods of doing this. Commonly, the ancestor is indicated by its relation to two or more specifiers (species, specimens, or traits) that are mentioned explicitly. The diagram shows three common ways of doing this. For previously defined clades A, B, and C, the clade X can be defined as:
A node-based definition could read: "the last common ancestor of A and B, and all descendants of that ancestor". Thus, the entire line below the junction of A and B does not belong to the clade to which the name with this definition refers. A crown group is a type of node-based group where A and B are extant (living) taxa.
Example: The sauropod dinosaurs consist of the last common ancestor of Vulcanodon (A) and Apatosaurus (B) and all of that ancestor's descendants. This ancestor was the first sauropod. C could include other dinosaurs like Stegosaurus.
A branch-based definition, often termed a stem-based definition, could read: "the first ancestor of A which is not also an ancestor of C, and all descendants of that ancestor". Thus, the entire line below the junction of A and B (other than the bottommost point) does belong to the clade to which the name with this definition refers. A pan-group or total group is a type of branch-based group where A and C are extant (living) taxa.
Example (also a total group): The rodents consist of the first ancestor of the house mouse (A) that is not also an ancestor of the eastern cottontail rabbit (C) together with all descendants of that ancestor. Here, the ancestor of A (but not C) is the very first rodent. B is some other descendant of that first rodent, perhaps the red squirrel.
An apomorphy-based definition could read: "the first ancestor of A to possess trait M that is inherited by A, and all descendants of that ancestor". In the diagram, M evolves at the intersection of the horizontal line with the tree. Thus, the clade to which the name with this definition refers contains that part of the line below the last common ancestor of A and B which corresponds to ancestors possessing the apomorphy M. The lower part of the line is excluded. It is not required that B have trait M; it may have disappeared in the lineage leading to B.
Example: the tetrapods consist of the first ancestor of humans (A) from which humans inherited limbs with fingers or toes (M) and all descendants of that ancestor. These descendants include snakes (B), which do not have limbs.
Several other alternatives are provided in the PhyloCode, (see below) though there is no attempt to be exhaustive.
Phylogenetic nomenclature allows the use, not only of ancestral relations, but also of the property of being extant. One of the many methods of specifying the Neornithes (modern birds), for example, is:
The Neornithes consist of the last common ancestor of the extant members of the most inclusive clade containing the cockatoo Cacatua galerita but not the dinosaur Stegosaurus armatus, as well as all descendants of that ancestor.
Neornithes is a crown clade, a clade for which the last common ancestor of its extant members is also the last common ancestor of all its members.
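These definitions lend themselves to a simple computation on a rooted tree: the node-based clade is everything descending from the most recent common ancestor of specifiers A and B, while the branch-based clade is everything descending from the earliest ancestor of A that is not also an ancestor of C. The Python sketch below resolves both on an invented toy tree; the topology is deliberately simplified and the taxon names are used only to echo the examples above.

```python
# Resolving node-based and branch-based clade definitions on a toy rooted tree.
# The tree is given as child -> parent; topology simplified for illustration.
parent = {
    "Vulcanodon": "Sauropoda", "Apatosaurus": "Sauropoda",
    "Sauropoda": "Saurischia", "Theropoda": "Saurischia",
    "Saurischia": "Dinosauria", "Stegosaurus": "Dinosauria",
}

def ancestors(taxon):
    """Return taxon followed by its ancestors up to the root."""
    chain = [taxon]
    while chain[-1] in parent:
        chain.append(parent[chain[-1]])
    return chain

def descendants(node):
    """Return node plus everything descended from it."""
    out, changed = {node}, True
    while changed:
        changed = False
        for child, par in parent.items():
            if par in out and child not in out:
                out.add(child)
                changed = True
    return out

def node_based(a, b):
    """Last common ancestor of a and b, and all of its descendants."""
    anc_a = set(ancestors(a))
    mrca = next(x for x in ancestors(b) if x in anc_a)
    return descendants(mrca)

def branch_based(a, c):
    """First ancestor of a that is not an ancestor of c, and all of its descendants."""
    anc_c = set(ancestors(c))
    stem = next(x for x in reversed(ancestors(a)) if x not in anc_c)
    return descendants(stem)

print(sorted(node_based("Vulcanodon", "Apatosaurus")))    # the node-based sauropod clade
print(sorted(branch_based("Vulcanodon", "Stegosaurus")))  # all taxa closer to Vulcanodon than to Stegosaurus
```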
Node names
Crown node: Most recent common ancestor of the sampled species of the clade of interest.
Stem node: Most recent common ancestor of the clade of interest and its sibling clade.
Ancestry-based definitions of the names of paraphyletic and polyphyletic taxa
For the PhyloCode, only a clade can receive a "phylogenetic definition", and this restriction is observed in the present article. However, it is also possible to create definitions for the names of other groups that are phylogenetic in the sense that they use only ancestral relations based on species or specimens. For example, assuming Mammalia and Aves (birds) are defined in this manner, Amniotes could be defined as "the most recent common ancestor of Mammalia and Aves and all its descendants except Mammalia and Aves". This is an example of a paraphyletic group, a clade minus one or more subordinate clades. Names of polyphyletic groups, characterized by a trait that evolved convergently in two or more subgroups, can be defined similarly as the sum of multiple clades.
Ranks
Using the traditional nomenclature codes, such as the International Code of Zoological Nomenclature and the International Code of Nomenclature for algae, fungi, and plants, taxa that are not associated explicitly with a rank cannot be named formally, because the application of a name to a taxon is based on both a type and a rank. Thus for example the "family" Hominidae uses the genus Homo as its type; its rank (family) is indicated by the suffix -idae (see discussion below). The requirement for a rank is a major difference between traditional and phylogenetic nomenclature. It has several consequences: it limits the number of nested levels at which names can be applied; it causes the endings of names to change if a group has its rank changed, even if it has precisely the same members (i.e. the same circumscription); and it is logically inconsistent with all taxa being monophyletic.
The current codes have rules stating that names must have certain endings depending on the rank of the taxa to which they are applied. When a group has a different rank in different classifications, its name must have a different suffix. Ereshefsky (1997:512) gave an example. He noted that Simpson in 1963 and Wiley in 1981 agreed that the same group of genera, which included the genus Homo, should be placed together in a taxon. Simpson treated this taxon as a family, and so gave it the name "Hominidae": "Homin-" from "Homo" and "-idae" as the suffix for family using the zoological code. Wiley considered it to be at the rank of "tribe", and so gave it the name "Hominini", "-ini" being the suffix for tribe. Wiley's tribe Hominini formed only part of a family which he termed "Hominidae". Thus, using the zoological code, two groups with precisely the same circumscription were given different names (Simpson's Hominidae and Wiley's Hominini), and two groups with the same name had different circumscriptions (Simpson's Hominidae and Wiley's Hominidae).
Especially in recent decades (due to advances in phylogenetics), taxonomists have named many "nested" taxa (i.e. taxa which are contained inside other taxa). No system of nomenclature attempts to name every clade; this would be particularly difficult with traditional nomenclature since every named taxon must be given a lower rank than any named taxon in which it is nested, so the number of names that can be assigned in a nested set of taxa can be no greater than the number of generally recognized ranks. Gauthier et al. (1988) suggested that, if Reptilia is assigned its traditional rank of "class", then a phylogenetic classification has to assign the rank of genus to Aves. In such a classification, all ~12,000 known species of extant and extinct birds would then have to be incorporated into this genus.
Various solutions have been proposed while keeping the rank-based nomenclature codes. Patterson and Rosen (1977) suggested nine new ranks between family and superfamily in order to be able to classify a clade of herrings, and McKenna and Bell (1997) introduced a large array of new ranks in order to cope with the diversity of Mammalia; these have not been adopted widely. For botany, the Angiosperm Phylogeny Group, responsible for the currently most widely used classification of flowering plants, chose a different method. They retained the traditional ranks of family and order, considering them to be of value for teaching and studying relationships between taxa, but also introduced named clades without formal ranks.
For phylogenetic nomenclature, ranks have no bearing on the spelling of taxon names (see e.g. Gauthier (1994) and the PhyloCode). Ranks are, however, not altogether forbidden for phylogenetic nomenclature. They are merely decoupled from nomenclature: they do not influence which names can be used, which taxa are associated with which names, and which names can refer to nested taxa.
The principles of traditional rank-based nomenclature are incompatible logically with all taxa being strictly monophyletic. Every organism must belong to a genus, for example, so there would have to be a genus for every common ancestor of the mammals and the birds. For such a genus to be monophyletic, it would have to include both the class Mammalia and the class Aves. For rank-based nomenclature, however, classes must include genera, not the other way around.
Philosophy
The conflict between phylogenetic and traditional nomenclature represents differing opinions of the metaphysics and epistemology of taxa. For the advocates of phylogenetic nomenclature, a taxon is an individual entity, an entity that may gain and lose attributes as time passes. Just as a person does not become somebody else when his or her properties change through maturation, senility, or more radical changes like amnesia, the loss of a limb, or a change of sex, so a taxon remains the same entity whatever characteristics are gained or lost. Given the metaphysical claims regarding unobservable entities made by advocates of phylogenetic nomenclature, critics have referred to their method as origin essentialism.
For any individual, there has to be something that associates its temporal stages with each other by virtue of which it remains the same entity. For a person, the spatiotemporal continuity of the body provides the relevant conceptual continuity; from infancy to old age, the body traces a continuous path through the world and it is this continuity, rather than any characteristics of the individual, that associates the baby with the octogenarian. This is similar to the well-known philosophical problem of the Ship of Theseus. For a taxon, if characteristics are not relevant, then it can only be ancestral relations that associate the Devonian Rhyniognatha hirsti with the modern monarch butterfly as representatives, separated by 400 million years, of the taxon Insecta. The opposing opinion questions the premise of that syllogism, and argues, from an epistemological perspective, that members of taxa are only recognizable empirically on the basis of their observable characteristics, and hypotheses of common ancestry are results of theoretical systematics, not a priori premises. If there are no characteristics that allow scientists to recognize a fossil as belonging to a taxonomic group, then it is just an unclassifiable piece of rock.
If ancestry is sufficient for the continuity of a taxon, then all descendants of a taxon member will also be included in the taxon, so all bona fide taxa are monophyletic; the names of paraphyletic groups do not merit formal recognition. As "Pelycosauria" refers to a paraphyletic group that includes some Permian tetrapods but not their extant descendants, it cannot be admitted as a valid taxon name. Again, while not disagreeing with the notion that only monophyletic groups should be named, empiricist systematists counter this ancestry essentialism by pointing out that pelycosaurs are recognized as paraphyletic precisely because they exhibit a combination of synapomorphies and symplesiomorphies indicating that some of them are more closely related to mammals than they are to other pelycosaurs. The material existence of an assemblage of fossils and its status as a clade are not the same issue. Monophyletic groups are worthy of attention and naming because they share properties of interest (synapomorphies) that are the evidence that allows inference of common ancestry.
History
Phylogenetic nomenclature is a semantic extension of the general acceptance of the idea of branching during the course of evolution, represented in the diagrams of Jean-Baptiste Lamarck and later writers like Charles Darwin and Ernst Haeckel. In 1866, Haeckel for the first time constructed a single relational diagram of all life based on the existing classification of life accepted at the time. This classification was rank-based, but did not contain taxa that Haeckel considered polyphyletic. In it, Haeckel introduced the rank of phylum which carries a connotation of monophyly in its name (literally meaning "stem").
Ever since, it has been debated in which ways and to what extent the understanding of the phylogeny of life should be used as a basis for its classification, with opinions including "numerical taxonomy" (phenetics), "evolutionary taxonomy" (gradistics), and "phylogenetic systematics". From the 1960s onwards, rankless classifications were occasionally proposed, but in general the principles and common language of traditional nomenclature have been used by all three schools of thought.
Most of the basic tenets of phylogenetic nomenclature (lack of obligatory ranks, and something close to phylogenetic definitions) can, however, be traced to 1916, when Edwin Goodrich interpreted the name Sauropsida, defined 40 years earlier by Thomas Henry Huxley, to include the birds (Aves) as well as part of Reptilia, and invented the new name Theropsida to include the mammals as well as another part of Reptilia. As these taxa were separate from traditional zoological nomenclature, Goodrich did not emphasize ranks, but he clearly discussed the diagnostic features necessary to recognize and classify fossils belonging to the various groups. For example, in regard to the fifth metatarsal of the hind leg, he said "the facts support our view, for these early reptiles have normal metatarsals like their Amphibian ancestors. It is clear, then, that we have here a valuable corroborative character to help us to decide whether a given species belongs to the Theropsidan or the Sauropsidan line of evolution." Goodrich concluded his paper: "The possession of these characters shows that all living Reptilia belong to the Sauropsidan group, while the structure of the foot enables us to determine the affinities of many incompletely known fossil genera, and to conclude that only certain extinct orders can belong to the Theropsidan branch." Goodrich opined that the name Reptilia should be abandoned once the phylogeny of the reptiles was better known.
The principle that only clades should be named formally became popular among some researchers during the second half of the 20th century. It spread together with the methods for discovering clades (cladistics) and is an integral part of phylogenetic systematics (see above). At the same time, it became apparent that the obligatory ranks that are part of the traditional systems of nomenclature produced problems. Some authors suggested abandoning them altogether, starting with Willi Hennig's abandonment of his earlier proposal to define ranks as geological age classes.
The first use of phylogenetic nomenclature in a publication can be dated to 1986. Theoretical papers outlining the principles of phylogenetic nomenclature, as well as further publications containing applications of phylogenetic nomenclature (mostly to vertebrates), soon followed (see Literature section).
In an attempt to avoid a schism among the systematics community, "Gauthier suggested to two members of the ICZN to apply formal taxonomic names ruled by the zoological code only to clades (at least for supraspecific taxa) and to abandon Linnean ranks, but these two members promptly rejected these ideas". The premise of names in traditional nomenclature is based, ultimately, on type specimens, and the circumscription of groups is considered a taxonomic choice made by the systematists working on particular groups, rather than a nomenclatural decision made based on a priori rules of the Codes on Nomenclature. The desire to subsume taxonomic circumscriptions within nomenclatural definitions caused Kevin de Queiroz and the botanist Philip Cantino to start drafting their own code of nomenclature, the PhyloCode, to regulate phylogenetic nomenclature.
Controversy
Willi Hennig's pioneering work provoked a controversy about the relative merits of phylogenetic nomenclature versus Linnaean taxonomy, or the related method of evolutionary taxonomy, which has continued to the present. Some of the controversies with which the cladists were engaged had been happening since the 19th century. While Hennig insisted that different classification schemes were useful for different purposes, he gave primacy to his own, claiming that the categories of his system had "individuality and reality" in contrast to the "timeless abstractions" of classifications based on overall similarity.
Formal classifications based on cladistic reasoning are said to emphasize ancestry at the expense of descriptive characteristics. Nonetheless, most taxonomists presently avoid paraphyletic groups whenever they think it is possible within Linnaean taxonomy; polyphyletic taxa have long been unfashionable. Many cladists claim that the traditional Codes of Zoological and Botanical Nomenclature are fully compatible with cladistic methods, and that there is no need to reinvent a system of names that has functioned well for 250 years, but others argue that this system is not as effective as it should be and that it is time to adopt nomenclatural principles that represent divergent evolution as a mechanism that explains much of the known biodiversity. In fact, calls to reform biological nomenclature were made even before phylogenetic nomenclature was developed.
The International Code of Phylogenetic Nomenclature
The ICPN, or PhyloCode, is a code of rules and recommendations for phylogenetic nomenclature.
The ICPN only regulates clade names. Names for species rely on the rules of the traditional codes of nomenclature.
The Principle of Priority (or "precedence") is claimed for names and for definitions within the ICPN. The starting point for priority was April 30, 2020.
Definitions for existing names, and new names along with their definitions, must be published in peer-reviewed works (on or after the starting date) and must be registered in an online database in order to be valid.
The number of supporters for widespread adoption of the PhyloCode is still small, and it is uncertain how widely it will be followed.
References
Sources
(reprinted 1979 and 1999)
Further reading
A few publications not cited in the references are cited here. An exhaustive list of publications about phylogenetic nomenclature can be found on the website of the International Society for Phylogenetic Nomenclature.
de Queiroz, Kevin (1992). Phylogenetic definitions and taxonomic philosophy. Biol. Philos. 7:295–313.
Gauthier, Jacques A., Arnold G. Kluge, and Timothy Rowe (1988). The early evolution of the Amniota. Pages 103–155 in Michael J. Benton (ed.): The Phylogeny and Classification of the Tetrapods, Volume 1: Amphibians, Reptiles, Birds. Syst. Ass. Spec. Vol. 35A. Clarendon Press, Oxford.
Gauthier, Jacques, David Cannatella, Kevin de Queiroz, Arnold G. Kluge, and Timothy Rowe (1989). Tetrapod phylogeny. Pages 337–353 in B. Fernholm, K. Bremer, and H. Jörnvall (eds.): The Hierarchy of Life. Elsevier Science B. V. (Biomedical Division), New York.
Laurin, Michel (2005). The advantages of phylogenetic nomenclature over Linnean nomenclature. Pages 67–97 in A. Minelli, G. Ortalli, and G. Sanga (eds): Animal Names. Instituto Veneto di Scienze, Lettere ed Arti; Venice.
Biological nomenclature
Phylogenetics | Phylogenetic nomenclature | [
"Biology"
] | 4,411 | [
"Bioinformatics",
"Phylogenetics",
"Biological nomenclature",
"Taxonomy (biology)"
] |
10,678,608 | https://en.wikipedia.org/wiki/Phototendering | Phototendering is the process by which organic fibres and textiles lose strength and flexibility due to exposure to sunlight. The ultraviolet component of the sun's spectrum affects fibres, causing chain degradation and, hence, loss of strength. Colour fade is a common problem in phototendering.
UV Degradation
The rate of deterioration is also affected by pigments and dyes present in the textiles. Pigments can also be affected, generally fading after UVA and UVB radiation exposure. Great care is needed to protect museum artefacts such as ancient textiles from the harmful effects of UV light, which can also be emitted by fluorescent lamps. Paintings such as watercolours need protection from sunlight to preserve the original colours.
Many synthetic polymers are also degraded by UV light, and polypropylene is especially susceptible. As a result, UV stabilisers are added to many thermoplastics. Carbon black is also effective in protecting products against UV degradation.
See also
Polymer degradation
Textiles
Ultraviolet
UV degradation
UV Stabilizers in plastics
External links
Survival of Old Textiles
Patent for Protection of Fibres from Phototendering
Mechanical failure | Phototendering | [
"Materials_science",
"Engineering"
] | 227 | [
"Mechanical failure",
"Materials science",
"Mechanical engineering"
] |
10,678,861 | https://en.wikipedia.org/wiki/Parhelic%20circle | A parhelic circle is a type of halo, an optical phenomenon appearing as a horizontal white line on the same altitude as the Sun, or occasionally the Moon. If complete, it stretches all around the sky, but more commonly it only appears in sections. If the halo occurs due to light from the Moon rather than the Sun, it is known as a paraselenic circle.
Even fractions of parhelic circles are less common than sun dogs and 22° halos. While parhelic circles are generally white in colour because they are produced by reflection, they can however show a bluish or greenish tone near the 120° parhelia and be reddish or deep violet along the fringes.
Parhelic circles form as beams of sunlight are reflected by vertical or almost vertical hexagonal ice crystals. The reflection can be either external (e.g. without the light passing through the crystal) which contributes to the parhelic circle near the Sun, or internal (one or more reflections inside the crystal) which creates much of the circle away from the Sun. Because an increasing number of reflections makes refraction asymmetric some colour separation occurs away from the Sun. Sun dogs are always aligned to the parhelic circle (but not always to the 22° halo).
The intensity distribution of the parhelic circle is largely dominated by 1-3-2 and 1-3-8-2 rays (cf. the nomenclature of W. Tape, i.e. 1 denotes the top hexagonal face, 2 the bottom face, and 3-8 enumerate the side faces in counter-clockwise fashion; a ray is notated by the sequence in which it encounters the prism faces). The former ray path is responsible for the blue spot halo, which occurs at an azimuth determined by the material's index of refraction (not the Bravais index of refraction for inclined rays). However, many more features give structure to the intensity pattern of the parhelic circle. Among the features of the parhelic circle are the Liljequist parhelia, the 90° parhelia (likely unobservable), the second-order 90° parhelia (unobservable), the 22° parhelia and more.
Artificial parhelic circles can be realized by experimental means using, for instance, spinning crystals.
See also
Upper and lower tangent arcs
Circumzenithal arc
Liljequist parhelion
References
External links
Atmospheric Optics – Ice Halos
Atmospheric optical phenomena | Parhelic circle | [
"Physics"
] | 523 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
10,679,358 | https://en.wikipedia.org/wiki/Prodromus%20Systematis%20Naturalis%20Regni%20Vegetabilis | Prodromus Systematis Naturalis Regni Vegetabilis (1824–1873), also known by its standard botanical abbreviation Prodr. (DC.), is a 17-volume treatise on botany initiated by Augustin Pyramus de Candolle. De Candolle intended it as a summary of all known seed plants, encompassing taxonomy, ecology, evolution and biogeography. He authored seven volumes between 1824 and 1839, but died in 1841. His son, Alphonse de Candolle, then took up the work, editing a further ten volumes, with contributions from a range of authors. Volume 17 was published in October 1873. The fourth and final part of the index came out in 1874. The Prodromus remained incomplete, dealing only with dicotyledons.
In the Prodromus, De Candolle further developed his concept of families. Note that this system was published well before there were internationally accepted rules for botanical nomenclature: here a family is indicated as "ordo", and the terminations of family names were not what they are now. Neither of these phenomena is a problem from a nomenclatural perspective; the present-day ICN provides for this. Within the dicotyledons ("classis prima DICOTYLEDONEÆ") the De Candolle system recognises the following taxa (pagination from the Prodromus, 17 parts):
System
Subclassis I. THALAMIFLORÆ [Part I]
ordo I. RANUNCULACEÆ (Page 1)
ordo II. DILLENIACEÆ (Page 67)
ordo III. MAGNOLIACEÆ (Page 77)
ordo IV. ANONACEÆ [sic] (Page 83)
ordo V. MENISPERMACEÆ (Page 95)
ordo VI. BERBERIDEÆ
ordo VII. PODOPHYLLACEÆ
ordo VIII. NYMPHÆACEÆ
ordo VIIIbis. SARRACENIACEÆ
ordo IX. PAPAVERACEÆ
ordo X. FUMARIACEÆ (Page 125)
ordo XIbis. RESEDACEÆ
ordo XI. CRUCIFERÆ
ordo XII. CAPPARIDEÆ
ordo XIII. FLACOURTIANEÆ
ordo XIV. BIXINEÆ
ordo XIVbis. LACISTEMACEÆ
ordo XV. CISTINEÆ
ordo XVI. VIOLARIEÆ
ordo XVII. DROSERACEÆ
ordo XVIII. POLYGALACEÆ
ordo XIX. TREMANDREÆ
ordo XX. PITTOSPOREÆ
ordo XXI. FRANKENIACEÆ
ordo XXII. CARYOPHYLLEÆ
ordo XXIII. LINEÆ
ordo XXIV. MALVACEÆ
ordo XXV. BOMBACEÆ [sic]
ordo XXVI. BYTTNERIACEÆ
ordo XXVII. TILIACEÆ
ordo XXVIII. ELÆOCARPEÆ
ordo XXIX. CHLENACEÆ
ordo XXIXbis. ANCISTROCLADEÆ
ordo XXIXter. DIPTEROCARPEÆ
ordo XXIXter.[sic] LOPHIRACEÆ
ordo XXX. TERNSTROEMIACEÆ
ordo XXXI. CAMELLIEÆ
ordo XXXII. OLACINEÆ
ordo XXXIII. AURANTIACEÆ
ordo XXXIV. HYPERICINEÆ
ordo XXXV. GUTTIFERÆ
ordo XXXVI. MARCGRAVIACEÆ
ordo XXXVII. HIPPOCRATEACEÆ
ordo XXXVIII. ERYTHROXYLEÆ
ordo XXXIX. MALPIGHIACEÆ
ordo XL. ACERINEÆ
ordo XLI. HIPPOCASTANEÆ
ordo XLII. RHIZOBOLEÆ
ordo XLIII. SAPINDACEÆ
ordo XLIV. MELIACEÆ
ordo XLV. AMPELIDEÆ
ordo XLVI. GERANIACEÆ
ordo XLVII. TROPÆOLEÆ
ordo XLVIII. BALSAMINEÆ
ordo XLIX. OXALIDEÆ
ordo L. ZYGOPHYLLEÆ
ordo LI. RUTACEÆ
ordo LII. SIMARUBEÆ [sic]
ordo LIII. OCHNACEÆ
ordo LIV. CORIARIEÆ (Page 739)
(Index to Part I p. 741)
Subclassis II. CALYCIFLORÆ [Parts II – VII]
ordo LV. CELASTRINEÆ [Part II],(Page 2)
ordo LVI. RHAMNEÆ
ordo LVII. BRUNIACEÆ
ordo LVIII. SAMYDEÆ
ordo LIX. HOMALINEÆ
ordo LX. CHAILLETIACEÆ
ordo LXI. AQUILARINEÆ
ordo LXII. TEREBINTHACEÆ
ordo LXIII. LEGUMINOSÆ
ordo LXIV. ROSACEÆ (Page 525)
ordo LXV. CALYCANTHEÆ [Part III], (Page 1)
ordo LXVbis. MONIMIACEÆ
ordo LXVI. GRANATEÆ
ordo LXVII. MEMECYLEÆ
ordo LXVIII. COMBRETACEÆ
ordo LXIX. VOCHYSIEÆ
ordo LXX RHIZOPHOREÆ
ordo LXXI. ONAGRARIEÆ
ordo LXXII. HALORAGEÆ
ordo LXXIII. CERATOPHYLLEÆ
ordo LXXIV. LYTHRARIEÆ
ordo LXXIVbis. CRYPTERONIACEÆ
ordo LXXV. TAMARISCINEÆ
ordo LXXVI. MELASTOMACEÆ
ordo LXXVII. ALANGIEÆ
ordo LXXVIII. PHILADELPEÆ
ordo LXXIX. MYRTACEÆ
ordo LXXX. CUCURBITACEÆ
ordo LXXXI. PASSIFLOREÆ
ordo LXXXII. LOASEÆ
ordo LXXXIII. TURNERACEÆ
ordo LXXXIV. FOUQUIERACEÆ
ordo LXXXV. PORTULACEÆ
ordo LXXXVI. PARONYCHIEÆ
ordo LXXXVII. CRASSULACEÆ
ordo LXXXVIII. FICOIDEÆ (Page 415)
ordo LXXXIX. CACTEÆ
ordo XC. GROSSULARIEÆ
ordo XCI. SAXIFRAGACEÆ [Part IV], (Page 1)
ordo XCII. UMBELLIFERÆ
ordo XCIII. ARALIACEÆ
ordo XCIV. HAMAMELIDEÆ
ordo XCV. CORNEÆ
ordo XCVbis. HELWINGIACEÆ
ordo XCVI. LORANTHACEÆ
ordo XCVII. CAPRIFOLIACEÆ
ordo XCVIII. RUBIACEÆ
ordo XCIX. VALERIANEÆ
ordo C. DIPSACEÆ (Page 643)
ordo CI. CALYCEREÆ [Part V], (Page 1)
ordo CII. COMPOSITÆ (Page 4); [Part VI], (Page 1); [Part VII], (Page 1)
ordo CIII. STYLIDIEÆ [Part VII]
ordo CIV. LOBELIACEÆ
ordo CV. CAMPANULACEÆ
ordo CVI. CYPHIACEÆ
ordo CVII. GOODENOVIEÆ
ordo CVIII. ROUSSÆACEÆ
ordo CIX. GESNERIACEÆ
ordo CX. SPHENOCLEACEÆ
ordo CXI. COLUMELLIACEÆ
ordo CXII. NAPOLEONEÆ
ordo CXIII. VACCINIEÆ
ordo CXIV. ERICACEÆ (Page 580) (Four tribes)
Arbuteae (Page 580)
Andromedae (Page 588)
Ericeae (Page 612)
Rhodoreae (Page 712) (Two subtribes)
Rhododendreae (712) (Nine genera)
Rhododendron (719) (Six sections)
Buramia (720)
Hymenanthes (721)
Eurhododendron (721)
Pogonanthum (725)
Chamaecistus (725)
Tsutsusi (726)
Kalmia (728)
Ledeae (729)
ordo CXV. EPACRIDEÆ (Page 734)
ordo CXVI. PYROLACEÆ
ordo CXVII. FRANCOACEÆ
ordo CXVIII. MONOTROPEÆ (Page 779)
Subclassis III. COROLLIFLORÆ [Parts VIII – XIII(1)]
ordo CXIX. LENTIBULARIEÆ (Page 1)
ordo CXX. PRIMULACEÆ
ordo CXXI. MYRSINEACEÆ
ordo CXXII. ÆGICERACEÆ
ordo CXXIII. THEOPHRASTACEÆ
ordo CXXIV. SAPOTACEÆ
ordo CXXV. EBENACEÆ
ordo CXXVI. STYRACACEÆ
ordo CXXVII. OLEACEÆ
ordo CXXVIIbis. SALVADORACEÆ
ordo CXXVIII. JASMINEÆ
ordo CXXIX. APOCYNACEÆ
ordo CXXX. ASCLEPIADEÆ (Page 460)
ordo CXXX[a?] LEONIACEÆ
ordo CXXXI. LOGANIACEÆ [Part IX], (Page 1)
ordo CXXXII. GENTIANACEÆ
ordo CXXXIII. BIGNONIACEÆ
ordo CXXXIV. SESAMEÆ
ordo CXXXV. CYRTANDRACEÆ
ordo CXXXVI. HYDROPHYLLACEÆ
ordo CXXXVII. POLEMONIACEÆ
ordo CXXXVII. [sic] CONVOLVULACEÆ
ordo CXXXVIII. ERICYBEÆ
ordo CXXXIX. BORRAGINEÆ [sic] (Page 466); [Part X], (Page 1)
ordo CXL. HYDROLEACEÆ
ordo CXLII. SCROPHULARIACEÆ (Page 186)
ordo CXLII(I).[sic] SOLANACEÆ [Part XIII (1)], (Pages 1 – 692) out of sequence
ordo CXLIV. OROBRANCHACEÆ [Part XI], (Page 1)
ordo CXLV. ACANTHACEÆ
ordo CXLVI. PHRYMACEÆ
ordo CXLVII VERBENACEÆ
ordo CXLVIII MYOPORACEÆ (Page 701)
ordo CXLIX SELAGINACEÆ [Part XII], (Page 1)
ordo CL. LABIATÆ
ordo CLI. STILBACEÆ
ordo CLII. GLOBULARIACEÆ
ordo CLIII. BRUNONIACEÆ
ordo CLIV. PLUMBAGINEÆ (Page 617)
ordo CLV.[?] PLANTAGINACEÆ [Part XIII], (Page 693)
Subclassis IV. MONOCHLAMYDEÆ [Parts XIII(2) – XVI]
ordo CLVI. PHYTOLACCACEÆ (Page 2)
ordo CLVII. SALSOLACEÆ
ordo CLVIII. BASELLACEÆ
ordo CLIX. AMARANTACEÆ [sic]
ordo CLX. NYCTAGINACEÆ (Page 425)
ordo CLXI. POLYGONACEÆ [Part XIV], (Pages 1 – 186)
ordo CLXII. LAURACEÆ [Part XIV], (Page 186); [Part XV(1)], (Pages 1 – 260) out of sequence
ordo CLXIII. MYRISTICACEÆ (Page 187)
ordo CLXIV. PROTEACEÆ (Page 209)
ordo CLXV. PENÆACEÆ
ordo CLXVI. GEISSOLOMACEÆ (Page 491)
ordo CLXVII. THYMELÆACEÆ
ordo CLXVIII. ELÆAGNACEÆ
ordo CLXIX. GRUBBIACEÆ
ordo CLXX. SANTALACEÆ (Page 619)
ordo CLXXI. HERNANDIACEÆ [Part XV(1)], (Page 1)
ordo CLXXII. BEGONIACEÆ
ordo CLXXIII. DATISCACEÆ
ordo CLXXIV. PAPAYACEÆ
ordo CLXXV. ARISTOLOCHIACEÆ
ordo CLXXVbis. NEPENTHACEÆ
ordo CLXXVI. STACKHOUSIACEÆ (Page 419)
[sic]
ordo CLXXVIII. EUPHORBIACEÆ [Part XV(2)], (Page 1)
ordo CLXXIX. DAPHNIPHYLLACEÆ [Part XVI(1)], (Page 1)
ordo CLXXX. BUXACEÆ
ordo CLXXXbis. BATIDACEÆ
ordo CLXXXI. EMPETRACEÆ
ordo CLXXXII. CANNABINEÆ
ordo CLXXXIII. ULMACEÆ
ordo CLXXXIIIbis. MORACEÆ
ordo CLXXXIV. ARTOCARPEÆ
ordo CLXXXV. URTICACEÆ
ordo CLXXXVI. PIPERACEÆ
[sic]
ordo CLXXXVIII. CHLORANTHACEÆ
ordo CLXXXIX. GARRYACEÆ (Page 486)
ordo CXC. CUPULIFERÆ [Part XVI(2)], (Page 1)
ordo CXCI. CORYLACEÆ (Page 124)
ordo CXCII. JUGLANDEÆ (Page 134)
ordo CXCIII. MYRICACEÆ (Page 147)
ordo CXCIV. PLATANACEÆ
ordo CXCV. BETULACEÆ (Page 161)
ordo CXCVI. SALICINEÆ (Page 190)
ordo CXCVII CASUARINEÆ (Page 332)
Other
Somewhat inconsistently the Prodromus also treats:
GYMNOSPERMÆ [Part XVI(2)], (Page 345)
ordo CXCVIII. GNETACEÆ (Page 347)
ordo CXCIX. CONIFERÆ (Page 361)
ordo CC. CYCADACEÆ (Pages 522 – 547)
incertæ sedis
ordo (dubiæ affin.) LENNOACEÆ
ordo (affin. dubiæ) PODOSTEMACEÆ
ordo num.? CYTINACEÆ
ordo incertae sedis BALANOPHORACEÆ
(Overall Index Part XVII Page 323)
See also
History of botany
References
Bibliography
Also available online on Botanicus at Prodromus and Gallica at Prodromus
1824 non-fiction books
Botany books
Florae (publication) | Prodromus Systematis Naturalis Regni Vegetabilis | [
"Biology"
] | 3,220 | [
"Flora",
"Florae (publication)"
] |
10,679,910 | https://en.wikipedia.org/wiki/Glomerella%20graminicola | Glomerella graminicola is an economically important crop parasite affecting both wheat and maize where it causes the plant disease Anthracnose Leaf Blight.
Host and symptoms
Glomerella graminicola is the teleomorph (sexual stage) of a fungus identified as Colletotrichum graminicola in its anamorphic (asexual) phase. It is the anamorphic phase that causes anthracnose in many cereal species. While the main host of this disease is maize, it can also affect other cereals and grasses, such as sorghum, ryegrass, bluegrass, barley, wheat, and some cultivars of fescue, where the production of fruiting bodies causes symptoms to appear in the host plant.
Corn anthracnose leaf blight is the most common stalk disease in maize and occurs most frequently in reduced-till or no-till fields.
Symptoms can vary depending on which part of the growing season the corn is in.
Early in the growing season, the main symptom is foliar leaf blight. This often appears as long and wide oval or spindle-shaped water-soaked lesions on the lower leaves of the plant. This tissue can become necrotic and has the potential to spread throughout the entire leaf, causing it to yellow and die. They are light brown in color, with margins that appear dark brown or purple. If this persists, black fruiting bodies will appear in the center of the lesion.
The mid-season symptoms appear several weeks after corn produces tassels, when there will be a top die-back if the infection has spread throughout many parts of the plant. In this dieback, the entire plant will become necrotic and die, beginning at the tassel and working its way down the entire stalk to the lowest leaves.
Late in the growing season, another major symptom of this disease appears: stalk rot. It can first be seen as a reflective black stripe on the internodes of the stalk, and can make the stalk soft, causing the plants to easily lodge in heavy precipitation or a wind event.
Morphology
Stromata
70-300 μm in diameter
Bear prominent, dark, septate spines (setae) up to 100 μm long.
Conidia
Developing at the base of the spines
Hyaline to pale yellow, unicellular, sickle-shaped, falcate to fusiform, tapered toward both ends
3-5 x 19-29 μm.
Phialides
Unicellular, hyaline, and cylindrical,
4-8 x 8-20 μm.
Growth on PDA
Growth on potato dextrose agar is:
Gray and feltlike
Conidia and appressoria are numerous when cultures are well aerated, and sclerotia sometimes occur.
Appressoria are diagnostic: they are tawny brown, irregular in outline, prominent, and terminal on thickened hyphae.
Disease cycle
In the spring, fruiting structures (acervuli) form from corn residue and produce banana-shaped spores (conidia) that are dispersed by wind blown raindrops and splashing. Conidial spores infect young plants through the epidermis or stomata. Anthracnose develops rapidly in cloudy, overcast conditions with high temperatures and humidity. In optimal environmental conditions, conidia can germinate in as little as 6–8 hours in 100% humidity. Initial necrotic spots or lesions can be seen within 72 hours after infection by conidia. Lower leaves that develop lesions provide conidial spores and cause secondary infections on the upper leaves and stalk. Vascular infections primarily occur from wounds caused by stalk-boring insects, such as the larvae of the European corn borer, allowing for conidia to infect and colonize the xylem. From this, anthracnose top die back (vascular wilt) or stalk rot can occur. In the fall, C. graminicola survives as a saprophyte on corn leaf residue. The pathogen can also overwinter on corn stalks as conidia in an extracellular secretion. The secretion prevents conidia from desiccating and protects them from unfavorable environmental conditions. Overwintering on corn residue serves as a vital source of primary inoculum for the leaf blight phase in the spring. The cycle will start all over again when susceptible corn seedlings emerge from the ground in the spring.
Environment
There are several conditions that favor the infection and persistence of anthracnose leaf blight. High temperatures combined with long periods of wet weather or high humidity are the most favorable conditions for its spread and survival. The pathogen also requires a specific temperature range to successfully infect the host plant. Prolonged periods of low sunlight due to overcast conditions, or a host already weakened by other diseases or pests, will likewise favor infection. In addition, two cultural practices favor the disease: continuous plantings of the same host without crop rotation, and no-till fields, both of which favor persistence of the pathogen between growing seasons.
Disease management
Since C. graminicola is found to survive on corn residue, specifically on the soil surface, one of the most effective methods of control is a one-year minimum of crop rotation to reduce anthracnose leaf blight. A study in 2009 showed more severe symptoms of leaf blight due to C. graminicola when grown on fields previously used for corn in comparison to fields previously used for soybean. There are cultural practices that can be taken to disrupt the primary inoculum phase and conidial spore infection of the host plant, and these include using hybrid cultivars resistant to the pathogen and keeping the host plants healthy and controlling other pests to keep them resilient to infection. While there are hybrids resistant to the leaf blight, these same hybrids are often not resistant to the stalk rot that occurs later in the growing season. There is also a cultural practice that disrupts the saprophytic stage of the pathogen, and this involves plowing the leftover corn residue deep into the soil and then using a one-year crop rotation away from the same host plant that was just used in that field. These methods move the saprophytic stage into the soil, where it is out-competed by other organisms, and does not survive. Biological control may also be possible, though the large-scale implementation of this method has not been studied. This is done by applying yeasts to the leaf surfaces that are showing symptoms of the leaf blight.
Importance
Corn anthracnose caused by C. graminicola is a disease present worldwide. This disease can affect all parts of the plant and can develop at any time during the growing season. It is typically seen in leaf blight or stalk rot form. Before the 1970s, anthracnose was not an issue in North America. In the early 1970s, the north-central and eastern U.S. were hit with severe epidemics. Within two years of C. graminicola's appearance in western Indiana, sweet corn production for canning companies was nearly wiped out, and production no longer exists there today.
Anthracnose stalk rot was seen in many U.S. corn fields in the 1980s and 1990s. A survey conducted in Illinois in 1982 and 1983 found that 34 to 46% of rotted corn stalks contained C. graminicola. Estimates of grain yield losses from anthracnose leaf blight and stalk rot range from zero to over 40%, depending on hybrid, environment, timing of infection, and other stresses.
Pathogenesis
Once conidia germinate on corn leaves, a germ tube differentiates and develops into an appressorium, which allows C. graminicola to penetrate epidermal cells. Germination and appressorium formation occur best within a particular temperature range; penetration occurs in a much narrower range. In order to penetrate the cell wall, the fungus first pumps melanin into the walls of the appressorium to create turgor pressure. The melanin allows water into the appressorium cell but nothing out. This builds up an enormous amount of turgor pressure, which the fungus then uses to push a hypha, called the penetration peg, through the corn cell wall. The penetration peg then grows and extends through the cell, extracting nutrients, and the host cell wall dies. Hyphae migrate from epidermal cells to mesophyll cells. As a defense response, the cells produce papillae to prevent cell entry, but this is typically unsuccessful. It is believed C. graminicola has a biotrophic phase because the plasma membrane of the epidermal cells is not immediately penetrated after invasion of the epidermal cell wall. Between 48–72 hours after infection, C. graminicola shifts from biotrophic growth to necrotrophy (lesions appear). This is when secondary hyphae invade cell walls and intercellular spaces.
References
External links
Index Fungorum
USDA ARS Fungal Database
The Vaillancourt Lab
Japanese Fungi on Plants No.36
Colletotrichum dot org
fungi.ensembl.org
Colletotrichum
Fungal plant pathogens and diseases
Cereal diseases
Maize diseases
Wheat diseases
Fungi described in 1952
Fungus species | Glomerella graminicola | [
"Biology"
] | 1,939 | [
"Fungi",
"Fungus species"
] |
10,679,980 | https://en.wikipedia.org/wiki/22%C2%B0%20halo | A 22° halo is an atmospheric optical phenomenon that consists of a halo with an apparent diameter of approximately 22° around the Sun or Moon. Around the Sun, it may also be called a sun halo. Around the Moon, it is also known as a moon ring, storm ring, or winter halo. It forms as sunlight or moonlight is refracted by millions of hexagonal ice crystals suspended in the atmosphere. Its radius, as viewed from Earth, is roughly the length of an outstretched hand at arm's length.
Formation
Even though it is one of the most common types of halo, the shape and orientation of the ice crystals responsible for the 22° halo are the topic of debate. Hexagonal, randomly oriented columns are usually put forward as the most likely candidate, but this explanation presents problems, such as the fact that the aerodynamic properties of such crystals lead them to be oriented horizontally rather than randomly. Alternative explanations include the involvement of clusters of bullet-shaped ice columns.
As light passes through the 60° apex angle of the hexagonal ice prisms, it is deflected twice, resulting in deviation angles ranging from 22° to 50°. Given the angle of incidence θ onto the hexagonal ice prism and the refractive index n inside the prism, the angle of deviation D can be derived from Snell's law:
D(θ) = θ − A + arcsin(n · sin(A − arcsin(sin θ / n))), where A = 60° is the apex angle.
For n = 1.309, the angle of minimum deviation is almost 22° (21.76°, when θ = 40.88°). More specifically, the angle of minimum deviation is 21.84° on average (n = 1.31); 21.54° for red light (n = 1.306) and 22.37° for blue light (n = 1.317). This wavelength-dependent variation in refraction causes the inner edge of the circle to be reddish while the outer edge is bluish.
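The following short Python sketch (an illustration, not part of the original text; the scan range of incidence angles is an arbitrary choice) evaluates the deviation formula above and picks out the minimum deviation for each refractive index quoted:

```python
import numpy as np

A = np.radians(60.0)  # apex angle of the hexagonal ice prism

def deviation(theta_i, n):
    """Total deviation for incidence angle theta_i (radians) and refractive index n."""
    theta_r1 = np.arcsin(np.sin(theta_i) / n)   # refraction at the entry face
    theta_r2 = A - theta_r1                     # internal angle at the exit face
    theta_t = np.arcsin(n * np.sin(theta_r2))   # refraction at the exit face
    return theta_i + theta_t - A

thetas = np.radians(np.linspace(25, 70, 20000))  # physically reasonable incidence angles
for label, n in [("average", 1.31), ("red", 1.306), ("blue", 1.317)]:
    d = deviation(thetas, n)
    i = np.argmin(d)
    print(f"{label}: minimum deviation {np.degrees(d[i]):.2f} deg "
          f"at incidence {np.degrees(thetas[i]):.2f} deg")
```

Running this reproduces the minimum deviations quoted above (about 21.8° on average, 21.5° for red and 22.4° for blue), which is why the inner edge of the halo sits near 22° from the Sun.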
The ice crystals in the clouds all deviate the light similarly, but only the ones from the specific ring at 22 degrees contribute to the effect for an observer at a set distance. As no light is refracted at angles smaller than 22°, the sky is darker inside the halo.
Another way to intuitively understand the formation of the 22° halo is to consider the following logic:
All rays from the Sun/Moon are incoming in a parallel manner towards the observer.
We can consider the specific case in which the source is directly overhead.
Hexagonal ice crystals can take on any orientation, but any rotation beyond 30° is redundant when analyzing the angles subtended by the emerging rays. This means that for all the incoming vertical rays, we only need to consider incident angles in the range 30° to 60° falling on one edge of the hexagonal crystal; these are the rays that will reach the observer.
For the above range of incident angles, we can find the angle of the outgoing ray with respect to the vertical—which in fact is the angle subtended at the eye of the observer.
Evaluating the outgoing ray angle over this range of crystal rotation angles shows that, for the majority of orientations, the average outgoing ray angle for red light hovers around 22° and is slightly higher for blue.
Another phenomenon resulting in a ring around the Sun or Moon—and therefore sometimes confused with the 22° halo—is the corona. Unlike the 22° halo, however, it is produced by water droplets instead of ice crystals and is much smaller and more colorful.
Weather relation
In folklore, moon rings are said to warn of approaching storms. Like other ice halos, 22° halos appear when the sky is covered by thin cirrus or cirrostratus clouds that often come a few days before a large storm front. However, the same clouds can also occur without any associated weather change, making a 22° halo unreliable as a sign of bad weather.
See also
46° halo
Circumzenithal arc
Circumhorizontal arc
Sun dog
Parhelic circle
Moon dog
Moonbow
References
Atmospheric optical phenomena | 22° halo | [
"Physics"
] | 831 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
10,680,005 | https://en.wikipedia.org/wiki/%CE%91-Cyano-4-hydroxycinnamic%20acid | α-Cyano-4-hydroxycinnamic acid, also written as alpha-cyano-4-hydroxycinnamic acid and abbreviated CHCA or HCCA, is a cinnamic acid derivative and is a member of the phenylpropanoid family. The carboxylate form is α-cyano-4-hydroxycinnamate.
Matrix-assisted laser desorption/ionization
α-Cyano-4-hydroxycinnamic acid is used as a matrix for peptides and nucleotides in MALDI mass spectrometry analyses.
See also
Sinapinic acid
References
Hydroxycinnamic acids
4-Hydroxyphenyl compounds
Phenylpropanoids
Conjugated nitriles
Vinylogous carboxylic acids | Α-Cyano-4-hydroxycinnamic acid | [
"Chemistry"
] | 169 | [
"Biomolecules by chemical classification",
"Phenylpropanoids"
] |
10,680,320 | https://en.wikipedia.org/wiki/Lithium%20superoxide | Lithium superoxide is an unstable inorganic salt with formula . A radical compound, it can be produced at low temperature in matrix isolation experiments, or in certain nonpolar, non-protic solvents. Lithium superoxide is also a transient species during the reduction of oxygen in a lithium–air galvanic cell, and serves as a main constraint on possible solvents for such a battery. For this reason, it has been investigated thoroughly using a variety of methods, both theoretical and spectroscopic.
Structure
Describing LiO2 as a molecule is something of a misnomer: the bonds between lithium and oxygen are highly ionic, with almost complete electron transfer. The force constant between the two oxygen atoms matches the constants measured for the superoxide anion (O2−) in other contexts. The bond length for the O-O bond was determined to be 1.34 Å. Using a simple crystal structure optimization, the Li-O bond was calculated to be approximately 2.10 Å.
There have been quite a few studies regarding the clusters formed by LiO2 molecules. The most common dimer has been found to be the cage isomer, followed by the singlet bipyramidal structure. Studies have also been done on the chair complex and the planar ring, but these two are less favorable, though not necessarily impossible.
Production and reactions
Lithium superoxide is extremely reactive because of the odd number of electrons present in the π* molecular orbital of the superoxide anion. Matrix isolation techniques can produce pure samples of the compound, but they are only stable at 15-40 K.
At higher (but still cryogenic) temperatures, lithium superoxide can be produced by ozonating lithium peroxide (Li2O2) in freon 12.
The resulting product is only stable up to −35 °C.
Alternatively, lithium electride dissolved in anhydrous ammonia will reduce oxygen gas to yield the same product.
Lithium superoxide is, however, only metastable in ammonia, gradually oxidizing the solvent to water and nitrogen gas.
Unlike other known decompositions of LiO2, this reaction bypasses lithium peroxide.
Occurrence
Like other superoxides, lithium superoxide is the product of a one-electron reduction of an oxygen molecule. It thus appears whenever oxygen is mixed with single-electron redox catalysts, such as p-benzoquinone.
In batteries
Lithium superoxide also appears at the cathode of a lithium-air galvanic cell during discharge, via the one-electron reduction of oxygen: Li+ + e− + O2 → LiO2.
This product typically then reacts further to form lithium peroxide, commonly written as the disproportionation 2 LiO2 → Li2O2 + O2.
The mechanism for this last reaction has not been confirmed, and developing a complete theory of the oxygen reduction process remains a theoretical challenge. Indeed, recent work suggests that LiO2 can be stabilized via a suitable cathode made of graphene with iridium nanoparticles.
A significant challenge when investigating these batteries is finding an ideal solvent in which to perform these reactions; current candidates are ether- and amide-based, but these compounds readily react with the superoxide and decompose. Nevertheless, lithium-air cells remain the focus of intense research, because of their large energy density—comparable to the internal combustion engine.
In the atmosphere
Lithium superoxide can also form for extended periods of time in low-density, high-energy environments, such as the upper atmosphere. The mesosphere contains a persistent layer of alkali metal cations ablated from meteors. For sodium and potassium, many of the ions bond to form particles of the corresponding superoxide. It is currently unclear whether lithium should react analogously.
See also
Lithium oxide
Lithium peroxide
References
Superoxides
Lithium salts | Lithium superoxide | [
"Chemistry"
] | 729 | [
"Lithium salts",
"Salts"
] |
10,680,427 | https://en.wikipedia.org/wiki/3-Hydroxypicolinic%20acid | 3-Hydroxy picolinic acid is a picolinic acid derivative and is a member of the pyridine family. It is used as a matrix for nucleotides in MALDI mass spectrometry analyses and the synthesis of favipiravir.
See also
Matrix-assisted laser desorption/ionization
Sinapinic acid
Picolinic acid
α-Cyano-4-hydroxycinnamic acid
References
Hydroxypyridines
Alpha hydroxy acids | 3-Hydroxypicolinic acid | [
"Chemistry"
] | 103 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
10,680,694 | https://en.wikipedia.org/wiki/120%C2%B0%20parhelion | A 120° parhelion (plural: 120° parhelia) is a relatively rare halo, an optical phenomenon occasionally appearing along with very bright sun dogs (also called parhelia) when ice crystal-saturated cirrus clouds fill the atmosphere. The 120° parhelia are named for appearing in pair on the parhelic circle ±120° from the sun.
When visible, 120° parhelia appear as white-bluish bright spots on the white parhelic circle and are the product of at least two interior reflections in the hexagonal ice crystals. Their whitish colour and relative faintness can make them difficult to observe, as they tend to blend into the surrounding clouds.
See also
Liljequist parhelion
Subhelic arc
References
External links
A photo of a 120° parhelion in the Czech Republic in October 2006
A photo by Joni Tornambe, 2006
A video from Russia featuring both 120° parhelia, as well as the classic sundogs and parhelic circle, 2007
A similar video from Kazakhstan with very bright 120° parhelia, 2007 Warning: loud audio!
Atmospheric optical phenomena
"Physics"
] | 254 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
10,680,827 | https://en.wikipedia.org/wiki/ANT%20%28network%29 | ANT (originates from Adaptive Network Topology) is a proprietary (but open access) multicast wireless sensor network technology designed and marketed by ANT Wireless (a division of Garmin Canada). It provides personal area networks (PANs), primarily for activity trackers. ANT was introduced by Dynastream Innovations in 2003, followed by the low-power standard ANT+ in 2004, before Dynastream was bought by Garmin in 2006.
ANT defines a wireless communications protocol stack that enables hardware operating in the 2.4GHz ISM band to communicate by establishing standard rules for co-existence, data representation, signalling, authentication, and error detection. It is conceptually similar to Bluetooth low energy, but is oriented towards use with sensors.
The ANT website lists almost 200 brands using ANT technology. Samsung and, to a lesser extent, Fujitsu, HTC, Kyocera, Nokia and Sharp added native support (without the use of a USB adapter) to their smartphones, with Samsung starting support with the Galaxy S4 and ending support with the Galaxy S20 line.
Overview
ANT-powered nodes are capable of acting as sources or sinks within a wireless sensor network concurrently. This means the nodes can act as transmitters, receivers, or transceivers to route traffic to other nodes. In addition, every node is capable of determining when to transmit based on the activity of its neighbors.
Technical information
ANT can be configured to spend long periods in a low-power sleep mode (consuming current on the order of microamperes), wake up briefly to communicate (when consumption rises to a peak of 22 milliamperes (at −5dB) during reception and 13.5 milliamperes (at −5 dB) during transmission) and return to sleep mode. Average current consumption for low message rates is less than 60 microamperes on some devices.
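A rough duty-cycle sketch of this behaviour (illustrative only: the peak receive current is taken from the figures above, while the sleep current, per-message airtime, and message rates are assumed placeholder values):

```python
SLEEP_CURRENT_A = 2e-6    # assumed sleep current, "on the order of microamperes" per the text
RX_CURRENT_A    = 22e-3   # peak receive current at -5 dB, from the text
AIRTIME_S       = 150e-6  # assumed active radio time per message

def average_current(message_rate_hz: float) -> float:
    """Estimate average current for a node that sleeps between messages."""
    duty = AIRTIME_S * message_rate_hz              # fraction of time the radio is active
    return SLEEP_CURRENT_A * (1 - duty) + RX_CURRENT_A * duty

for rate in (1, 4, 8):  # hypothetical per-node message rates in Hz
    print(f"{rate} Hz -> ~{average_current(rate) * 1e6:.1f} uA average")
```

Even this crude estimate lands in the tens of microamperes for low message rates, consistent with the sub-60 microampere figure quoted above.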
Each ANT channel consists of one or more transmitting nodes and one or more receiving nodes, depending on the network topology. Any node can transmit or receive, so the channels are bi-directional.
ANT accommodates three types of messaging: broadcast, acknowledged, and burst. Broadcast is a one-way communication from one node to another (or many). The receiving node(s) transmit no acknowledgment, but the receiving node may still send messages back to the transmitting node. This technique is suited to sensor applications and is the most economical method of operation.
Acknowledged messaging confirms receipt of data packets. The transmitter is informed of success or failure, although there are no retransmissions. This technique is suited to control applications.
ANT can also be used for burst messaging; this is a multi-message transmission technique using the full data bandwidth and running to completion. The receiving node acknowledges receipt and informs of corrupted packets that the transmitter then re-sends. The packets are sequence numbered for traceability. This technique is suited to data block transfer where the integrity of the data is paramount.
Comparison to other protocols
ANT was designed for low-bit-rate and low-power sensor networks, in a manner conceptually similar to (but not compatible with) Bluetooth Low Energy. This is in contrast with normal Bluetooth, which was designed for relatively high-bit-rate applications such as streaming sound for low-power headsets.
ANT uses adaptive isochronous transmission to allow many ANT devices to communicate concurrently without interference from one another, unlike Bluetooth LE, which supports an unlimited number of nodes through scatternets and broadcasting between devices.
Interference immunity
Bluetooth, Wi-Fi, and Zigbee employ direct-sequence spread spectrum (DSSS) and Frequency-hopping spread spectrum (FHSS) schemes respectively to maintain the integrity of the wireless link.
ANT uses an adaptive isochronous network technology to ensure coexistence with other ANT devices. This scheme provides the ability for each transmission to occur in an interference-free time slot within the defined frequency band. The radio transmits for less than 150 μs per message, allowing a single channel to be divided into hundreds of time slots. The ANT messaging period (the time between each node transmitting its data) determines how many time slots are available.
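The arithmetic behind the time-slot division can be sketched as follows (a simple upper-bound estimate, not from the ANT specification; only the sub-150 μs airtime comes from the text, and the message rates are hypothetical examples):

```python
SLOT_WIDTH_S = 150e-6  # upper bound on radio airtime per message, from the text

def max_slots(message_rate_hz: float) -> int:
    """Upper bound on non-overlapping transmission slots per messaging period."""
    messaging_period_s = 1.0 / message_rate_hz  # time between a node's transmissions
    return int(messaging_period_s // SLOT_WIDTH_S)

for rate in (0.5, 1, 4, 8):  # hypothetical per-node message rates in Hz
    print(f"{rate:>4} Hz messaging rate -> up to {max_slots(rate)} slots per channel")
```

In practice guard intervals and protocol overhead reduce this figure, but the lower the messaging rate, the more devices can share a channel without colliding.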
ANT+
ANT+, introduced in 2004 as "the first ultra low power wireless standard", is an interoperability function that can be added to the base ANT protocol. This standardization allows the networking of nearby ANT+ devices to facilitate the open collection and interpretation of sensor data. For example, ANT+ enabled fitness monitoring devices such as heart-rate monitors, pedometers, speed monitors, and weight scales can all work together to assemble and track performance metrics.
ANT+ is designed and maintained by the ANT+ Alliance, which is managed by ANT Wireless, a division of Dynastream Innovations, owned by Garmin. ANT+ is used in Garmin's line of fitness monitoring equipment. It is also used by Garmin's Chirp, a geocaching device, for logging and alerting nearby participants.
ANT+ devices require certification from the ANT+ Alliance to ensure compliance with standard device profiles. Each device profile has an icon which may be used to visually match interoperable devices sharing the same device profiles.
The ANT+ specification is publicly available. At DEF CON 2019, hacker Brad Dixon demonstrated a tool to modify ANT+ data transmitted through USB for cheating in virtual cycling.
See also
LoRa
Wi-Fi
Wireless USB
References
External links
Open-source communication tool (antpm)
Wireless sensor network | ANT (network) | [
"Technology"
] | 1,115 | [
"Wireless networking",
"Wireless sensor network"
] |
10,681,803 | https://en.wikipedia.org/wiki/Royal%20Society%20Wolfson%20Fellowship | The Royal Society Wolfson Fellowship, known as the Royal Society Wolfson Research Merit Award until 2020, is a 5 years fellowship awarded by the Royal Society since 2000. The scheme is described by the Royal Society as providing long-term flexible funding for senior career researchers recruited or retained to a UK university or research institution in fields identified as a strategic priority for the host department or organisation.
It is administered by the Royal Society and jointly funded by the Wolfson Foundation and the UK Office of Science and Technology, to provide universities "with additional financial support to attract key researchers to this country or to retain those who might seek to gain higher salaries elsewhere", thereby tackling the brain drain. Awards are given in four annual rounds, with up to seven awards per round.
Since 2021, the Royal Society Wolfson Fellowships program has expanded to include a Visiting Fellowship strand.
Recipients
Winners of this award (see Royal Society Wolfson Research Merit Award holders) have included:
Sue Black
Samuel L. Braunstein
Martin Bridson (2012)
Michael Bronstein (2018)
Peter Buneman
Michael Cant (2015)
José A. Carrillo (2012)
Ken Carslaw
Marianna Csörnyei
Candace Currie (2015)
Nicholas Dale (2015)
Roger Davies
René de Borst
Nora de Leeuw
Jonathan Essex
Ernesto Estrada
Wenfei Fan
Michael Farber (2004)
Andrea C. Ferrari
Philip A. Gale (2013)
Matthew Gaunt (2015)
Alain Goriely (2010)
Georg Gottlob
Andrew Granville (2015)
Peter Green
Ruth Gregory
Martin Hairer
Edwin Hancock
Mark Handley
Nicholas Higham
Simone Hochgreb (2003)
Saiful Islam (2013)
Brad Karp
Tara Keck
Rebecca Kilner (2015)
Daniela Kuhn (2015)
Ari Laptev
Tim Lenton
Malcolm Levitt
Stephan Lewandowsky
Leonid Libkin
Jon Lloyd (microbiologist) (2015)
Andy Mackenzie
Barbara Maher (2006-2012)
Vladimir Markovic
Robin May (2015)
Paul Milewski
E.J. Milner-Gulland
Tim Minshull (2015)
André Neves
Peter O'Hearn
William Lionheart (2015)
Fabrice Pierron
Alistair Pike
Gordon Plotkin
Adrian Podoleanu (2015)
Fernando Quevedo (2003)
David Richardson
Gareth Roberts (2015)
Alexander Ruban (2015)
Daniela Schmidt (2015)
Steven H. Simon
Nigel Smart
John Smillie (2015)
Stefan Söldner-Rembold (2013)
John Speakman
David Stephenson (2015)
Kate Storey (2015)
Andrew Taylor (2015)
Françoise Tisseur (2014)
Richard Thomas
Vlatko Vedral (2007)
Gabriella Vigliocco (2018)
Benjamin Willcox (2015)
Richard Winpenny (2009)
Philip J. Withers (2002)
Tim Wright (2015)
Ziheng Yang
Xin Yao (2012)
Nikolay I. Zheludev
Florian Markowetz (2017)
References
Grants (money)
Research awards
Funding bodies | Royal Society Wolfson Fellowship | [
"Technology"
] | 599 | [
"Science and technology awards",
"Research awards"
] |
10,681,813 | https://en.wikipedia.org/wiki/Soft%20laser%20desorption | Soft laser desorption (SLD) is laser desorption of large molecules that results in ionization without fragmentation. "Soft" in the context of ion formation means forming ions without breaking chemical bonds. "Hard" ionization is the formation of ions with the breaking of bonds and the formation of fragment ions.
Background
The term "soft laser desorption" has not been widely used by the mass spectrometry community, which in most cases uses matrix-assisted laser desorption/ionization (MALDI) to indicate soft laser desorption ionization that is aided by a separate matrix compound. The term soft laser desorption was used most notably by the Nobel Foundation in public information released in conjunction with the 2002 Nobel Prize in Chemistry. Koichi Tanaka was awarded 1/4 of the prize for his use of a mixture of cobalt nanoparticles and glycerol in what he called the “ultra fine metal plus liquid matrix method” of laser desorption ionization. With this approach, he was able to demonstrate the soft ionization of proteins. The MALDI technique was demonstrated (and the name coined) in 1985 by Michael Karas, Doris Bachmann, and Franz Hillenkamp, but ionization of proteins by MALDI was not reported until 1988, immediately after Tanaka's results were reported.
Some have argued that Karas and Hillenkamp were more deserving of the Nobel Prize than Tanaka because their crystalline matrix method is much more widely used than Tanaka's liquid matrix. Countering this argument is the fact that Tanaka was the first to use a 337 nm nitrogen laser while Karas and Hillenkamp were using a 266 nm Nd:YAG laser. The "modern" MALDI approach came into being several years after the first soft laser desorption of proteins was demonstrated.
The term soft laser desorption is now used to refer to MALDI as well as "matrix free" methods for laser desorption ionization with minimal fragmentation.
Variants
Graphite
The surface-assisted laser desorption/ionization (SALDI) approach uses a liquid plus graphite particle matrix. A colloidal graphite matrix has been called "GALDI" for colloidal graphite-assisted laser desorption/ionization.
Nanostructured surfaces
The desorption ionization on silicon (DIOS) approach is laser desorption/ionization of a sample deposited on a porous silicon surface. Nanostructure-initiator mass spectrometry (NIMS) is a variant of DIOS that uses "initiator" molecules trapped in the nanostructures. Although nanostructures are typically formed by etching, laser etching can also be used, for example as in laser-induced silicon microcolumn arrays (LISMA) for matrix-free mass spectrometry analysis.
Nanowires
Silicon nanowires were initially developed as a DIOS-MS application. This approach was later commercialized as nanowire-assisted laser desorption/ionization (NALDI), which uses a target consisting of nanowires made from metal oxides or nitrides. NALDI targets are available from Bruker Daltonics (although they are marketed as "nanostructured" rather than "nanowire" targets).
Surface-enhanced laser desorption/ionization (SELDI)
The surface-enhanced laser desorption/ionization (SELDI) variant is similar to MALDI, but uses a biochemical affinity target. The technique known as surface-enhanced neat desorption (SEND) is a related variant of MALDI in which the matrix is covalently linked to the target surface. The SELDI technology was commercialized by Ciphergen Biosystems in 1997 as the ProteinChip system. It is now produced and marketed by Bio-Rad Laboratories.
Other methods
The technique known as laser induced acoustic desorption (LIAD) is transmission geometry LDI with a metal film target.
References
External links
The Nobel Prize in Chemistry 2002 – Information for the Public
Ion source
Mass spectrometry | Soft laser desorption | [
"Physics",
"Chemistry"
] | 848 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Ion source",
"Mass spectrometry",
"Matter"
] |
10,682,125 | https://en.wikipedia.org/wiki/Tangent%20arc | Tangent arcs are a type of halo, an atmospheric optical phenomenon, which appears above and below the observed Sun or Moon, tangent to the 22° halo. To produce these arcs, rod-shaped hexagonal ice crystals need to have their long axis aligned horizontally.
Description
Upper arc
The shape of an upper tangent arc varies with the elevation of the Sun; while the Sun is low (less than 29–32°) it appears as an arc over the observed Sun forming a sharp angle. As the Sun is seen to rise above the Earth's horizon, the curved wings of the arc lower towards the 22° halo while gradually becoming longer. As the Sun rises over 29–32°, the upper tangent arc unites with the lower tangent arc to form the circumscribed halo.
Lower arc
The lower tangent arc is rarely observable, appearing under and tangent to a 22° halo centred on the Sun. Just like upper tangent arcs, the shape of a lower arc is dependent on the altitude of the Sun. As the Sun is observed slipping over Earth's horizon the lower tangent arc forms a sharp, wing-shaped angle below the Sun. As the Sun rises over Earth's horizon, the arc first folds upon itself and then takes the shape of a wide arc. As the Sun reaches 29-32° over the horizon, it finally begins to widen and merge with the upper tangent arc to form the circumscribed halo.
Since the lower tangent arc touches the 22° halo at a point 22° below the Sun, it lies below a ground observer's horizon unless the Sun itself is more than 22° above the horizon; most observations are therefore made from elevated points such as mountains or aircraft.
Origin
Both the upper and lower tangent arc form when hexagonal rod-shaped ice crystals in cirrus clouds have their long axis oriented horizontally. Each crystal can have its long axis oriented in a different horizontal direction, and can rotate around the long axis. Such a crystal configuration also produces other halos, including 22° halos and sun dogs; a predominant horizontal orientation is required to produce a crisp upper tangent arc. Like all colored halos, tangent arcs grade from red towards the Sun (i.e., downwards) to blue away from it, because red light is refracted less strongly than blue light.
See also
Circumhorizontal arc
Circumzenithal arc
Kern arc
Subsun
References
External links
www.paraselene.de - Tangent Arcs (including several HaloSim simulations.)
Atmospheric Optics - Alaska Lower Tangent Arc - Great photo by Ryan Skorecki.
Atmospheric Optics - Lower Tangent arc - A photo taken from an aeroplane.
Lower Tangent Arc Sep 25, 2005 - Photos of a lower tangent arc.
Atmospheric optical phenomena
"Physics"
] | 560 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
10,682,146 | https://en.wikipedia.org/wiki/Whipple%20Award | The Fred Whipple Award, established in 1989 by the Planetary Sciences Section of the American Geophysical Union, is presented to an individual who makes an outstanding contribution to the field of planetary science. The award was established to honor Fred Whipple. The Whipple Award includes an opportunity to present an invited lecture during the American Geophysical Union Fall Meeting.
Recipients
Source: AGU
See also
List of astronomy awards
List of geophysics awards
List of awards named after people
References
American Geophysical Union awards
Astronomy prizes
Awards established in 1989 | Whipple Award | [
"Astronomy",
"Technology"
] | 105 | [
"Science and technology awards",
"Astronomy prizes"
] |
10,682,387 | https://en.wikipedia.org/wiki/NGC%203593 | NGC 3593 is a lenticular galaxy located in the constellation Leo. It has a morphological classification of SA(s)0/a, which indicates it is a lenticular galaxy of the pure spiral type. Despite this, it has a large amount of hydrogen, both in its molecular () and atomic (H) form. It is a starburst galaxy, which means it is forming new stars at a high rate. This is occurring in a band of gas surrounding the central nucleus. There is a single arm, which spirals outward from this ring. It is frequently but not consistently identified as a member of the Leo Triplet group.
This galaxy is known to contain two counter-rotating populations of stars—that is, one set of stars is rotating in the opposite direction with respect to the other. One means for this to occur is by acquiring gas from an external source, which then undergoes star formation. An alternative is by a merger with a second galaxy. Neither scenario has been ruled out. The age of the lower mass, counter-rotating population is younger by about than the primary star population of the galaxy.
A dynamical study found that there is likely a supermassive black hole (SMBH) at the center of NGC 3593. The mass of the SMBH is between and solar masses.
References
External links
HubbleSite NewsCenter: Pictures and description on NGC 3593
Unbarred spiral galaxies
Leo (constellation)
3593
34257
06272
Virgo Supercluster | NGC 3593 | [
"Astronomy"
] | 311 | [
"Leo (constellation)",
"Constellations"
] |
10,684,384 | https://en.wikipedia.org/wiki/List%20of%20purification%20methods%20in%20chemistry | Purification in a chemical context is the physical separation of a chemical substance of interest from foreign or contaminating substances. Pure results of a successful purification process are termed isolate. The following list of chemical purification methods should not be considered exhaustive.
Affinity purification purifies proteins by retaining them on a column through their affinity to antibodies, enzymes, or receptors that have been immobilised on the column.
Filtration is a mechanical method to separate solids from liquids or gases by passing the feed stream through a porous sheet such as a cloth or membrane, which retains the solids and allows the liquid to pass through.
Centrifugation is a process that uses an electric motor to spin a vessel of fluid at high speed to make heavier components settle to the bottom of the vessel.
Evaporation removes volatile liquids from non-volatile solutes, which cannot be done through filtration due to the small size of the substances.
Liquid–liquid extraction removes an impurity or recovers a desired product by dissolving the crude material in a solvent in which other components of the feed material are soluble.
Crystallization separates a product from a liquid feed stream, often in extremely pure form, by cooling the feed stream or adding precipitants that lower the solubility of the desired product so that it forms crystals. The pure solid crystals are then separated from the remaining liquor by filtration or centrifugation.
Recrystallization: In analytical and synthetic chemistry work, purchased reagents of doubtful purity may be recrystallised, e.g. dissolved in a very pure solvent, and then crystallized, and the crystals recovered, in order to improve and/or verify their purity.
Trituration removes highly soluble impurities from usually solid insoluble material by rinsing it with an appropriate solvent.
Adsorption removes a soluble impurity from a feed stream by trapping it on the surface of a solid material, such as activated carbon, that forms strong non-covalent chemical bonds with the impurity.
Chromatography employs continuous adsorption and desorption on a packed bed of a solid to purify multiple components of a single feed stream. In a laboratory setting, mixture of dissolved materials are typically fed using a solvent into a column packed with an appropriate adsorbent, and due to different affinities for solvent (moving phase) versus adsorbent (stationary phase) the components in the original mixture pass through the column in the moving phase at different rates, which thus allows to selectively collect desired materials out of the initial mixture.
Smelting produces metals from raw ore, and involves adding chemicals to the ore and heating it up to the melting point of the metal.
Refining is used primarily in the petroleum industry, whereby crude oil is heated and separated into fractions according to the condensation points of its various components.
Distillation, widely used in petroleum refining and in purification of ethanol separates volatile liquids on the basis of their relative volatilities. There are several type of distillation: simple distillation, steam distillation etc.
Water purification combines a number of methods to produce potable or drinking water.
Downstream processing refers to purification of chemicals, pharmaceuticals and food ingredients produced by fermentation or synthesized by plant and animal tissues, for example antibiotics, citric acid, vitamin E, and insulin.
Fractionation refers to a purification strategy in which some relatively inefficient purification method is repeatedly applied to isolate the desired substance in progressively greater purity.
Electrolysis refers to the breakdown of substances using an electric current. This can remove impurities from a substance through which an electric current is passed.
Sublimation is the process by which a substance changes (usually on heating) from a solid to a gas (or from a gas to a solid) without passing through the liquid phase. In terms of purification, material is heated, often under vacuum, and the vapors of the material are then condensed back to a solid on a cooler surface. The process is thus in its essence similar to distillation; however, the material which condenses on the cooler surface then has to be removed mechanically, requiring different laboratory equipment.
Bioleaching is the extraction of metals from their ores through the use of living organisms.
Separation process
Plasma-chemical purification...
References
External links
www.zuiveringstechnieken.nl/purification-techniques Useful information about various purification techniques
Laboratory techniques
Purification methods in chemistry
Chemical processes | List of purification methods in chemistry | [
"Chemistry"
] | 915 | [
"Chemical process engineering",
"Chemical processes",
"nan"
] |
10,684,424 | https://en.wikipedia.org/wiki/Watertown%20Arsenal | The Watertown Arsenal was a major American arsenal located on the northern shore of the Charles River in Watertown, Massachusetts. The site is now registered on the ASCE's List of Historic Civil Engineering Landmarks and on the US National Register of Historic Places, and it is home to a park, restaurants, mixed use office space, and formerly served as the national headquarters for athenahealth.
History
The arsenal was established in 1816, on of land, by the United States Army for the receipt, storage, and issuance of ordnance. In this role, it replaced the earlier Charlestown Arsenal. The arsenal's earliest plan incorporated 12 buildings aligned along a north–south axis overlooking the river. Alexander Parris, later designer of Quincy Market, was architect. Buildings included a military store and arsenal, as well as shops and housing for officers and men. All were made of brick with slate roofs in the Federal style, and a high wall enclosed the compound. By 1819 all buildings were completed and occupied.
The arsenal's site, duties, and buildings grew gradually until the American Civil War, enlarging beyond the original quadrangle. During the war it greatly expanded to produce field and coastal gun carriages, and the war's impetus led to the quick construction of a large machine shop and smith shop built as contemporary factories, as well as a number of smaller buildings.
During the Civil War, a new commander's quarters was commissioned by then-Capt. Thomas J. Rodman, inventor of the Rodman gun. The lavish, , quarters would ultimately become one of the largest commander's quarters on any US military installation. This mansion is now on the National Register of Historic Places. The expense ($63,478.65) was considered wasteful and excessive and drew a stern rebuke from Congress, which then promoted Rodman to brigadier general and sent him to command Rock Island Arsenal on the frontier in Illinois, where he built an even larger commander's quarters.
Activities and new construction at the Watertown Arsenal continued to gradually expand until the early 1890s.
Activities changed decisively in 1892 when Congress authorized modernization to gun carriage manufacturing. At this point the arsenal became a manufacturing complex rather than storage depot. A number of major buildings were constructed, which over time began to reflect typical industrial facilities rather than the earlier arsenal styles. In 1897 an additional were purchased, and a hospital built.
Scientific management as designed by arsenal commander Charles Brewster Wheeler was implemented between 1908 and 1915. It was considered by the War Department as successful in saving money over the alternatives; but it was so hated by the work force that the Congress eventually overturned its use.
During World War I the arsenal nearly tripled in size. Building #311 was then reported to be one of the largest steel-frame structures in the United States, sized to accommodate both very large gun carriages and the equipment used to construct them. Railroad tracks ran throughout the arsenal complex. World War II brought an additional with existing industrial buildings, as the arsenal produced steel artillery pieces. In 1959–1960, a research nuclear reactor (Horace Hardy Lester Reactor) was constructed on site, for material research programs, and operated there until 1970.
In 1968 the Army ceased operations at the arsenal; were sold to the Watertown Redevelopment Authority, while the remaining were converted to the United States Army Materials and Mechanics Research Center, renamed the United States Materials Technology Laboratory in 1985. In 1995 all Army activity ceased and the remainder of the site was converted to civilian use.
The Armory site was formerly included on US EPA's National Priorities List of highly contaminated sites, more widely known as Superfund. The site was removed from the NPL in 2006.
See also
List of Historic Civil Engineering Landmarks
National Register of Historic Places listings in Middlesex County, Massachusetts
List of military installations in Massachusetts
References
Bibliography
. First published in 1960 by Harvard University Press. Republished in 1985 by Princeton University Press, with a new foreword by Merritt Roe Smith.
Earls, Alan R., 2007 Watertown Arsenal, (Images of America), Arcadia Publishing, Charleston, SC, United States ().
External links
Official Website of the former Watertown Arsenal Commander's Quarters
Historic American Engineering Record documentation, filed under Watertown, Middlesex County, MA:
Industrial buildings and structures on the National Register of Historic Places in Massachusetts
Historic districts on the National Register of Historic Places in Massachusetts
Buildings and structures in Watertown, Massachusetts
Armories on the National Register of Historic Places in Massachusetts
Historic American Engineering Record in Massachusetts
Historic Civil Engineering Landmarks
Massachusetts in the American Civil War
United States Army arsenals during World War II
Superfund sites in Massachusetts
National Register of Historic Places in Middlesex County, Massachusetts
1816 establishments in Massachusetts
Installations of the United States Army in Massachusetts
Former installations of the United States Army | Watertown Arsenal | [
"Engineering"
] | 961 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
10,684,991 | https://en.wikipedia.org/wiki/Astrotactin | Astrotactin-1, abbreviated ASTN1, is a glycoprotein expressed primarily in the central nervous system. ASTN1 and its counterpart ASTN2 are involved in regulation of adhesion during the radial migration of neurons in the developing CNS. Astrotactin is a neuronal adhesion molecule required for glial-guided migration of young postmitotic neuroblasts with expression is primarily located in the cells of cortical regions of the developing brain including; cerebrum, hippocampus, cerebellum, and olfactory bulb.
Biochemical properties
ASTN1 is produced through developmental gene-expression pathways via mRNA; the astrotactin-1 protein is encoded at Chromosome 1: 176,861,067-177,164,712 (Gencode gene ENSG00000152092.16), with a base-pair size of 303,646 and a total exon count of 23. The protein is composed of 1294 amino acids on average, with variants of 1302, 1228, and 1216 amino acids depending on an individual's genetic composition. Astrotactin-1 appears highly conserved, as mutations are quite rare, with 2 deletions and a single duplication recorded within a sample of 64,114 subjects. ASTN1 features a relatively short amino terminus coupled with a longer carboxyl terminus extending into the extracellular matrix, which may aid the protein's function in cell movement.
ASTN1 is considered a multi-pass membrane protein responsible for neuronal migrations in cortical regions of the brain, which are guided by a system of radial glial fibers. This process begins via gene signaling during fetal development and lasts until brain maturation, around the age of 26.
Neural action
Gord Fishell and Mary E. Hatten in 1991 used in vitro assays to determine the role astrotactin-1 plays in the neuronal migration of granule neurons. The mechanism of action appears to be that ASTN1 provides the neuronal receptor for migration along astroglial membranes, with its transmembrane regions contacting the glial membrane.
Using phase-contrast microscopy, the average speed and frequency of migration of individual cells could be used to quantify the movement of large populations of neurons along the glial fibers. One experiment showed that over a three-hour observation period, approximately eighty percent of the cells moved more than 15 μm along a single glial fiber. Additionally, migrating cells expressed a regular and characteristic form, creating a tight apposition with the astroglial fiber and extending a leading process in the direction of movement. During migration, saltatory contraction and extension of the neural soma along the glial fiber appeared to provide forward motion. The total distance moved by individual cells over a 5 h period was as much as 210 μm, with rapidly migrating cells generally moving those longer distances. It must be noted that this distance is not a total path length but only the net forward displacement. Neurons would often briefly reverse their migrating direction, reaching the end of a glial fiber before reversing, along with some pauses between the start and end of glial fibers.
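As a back-of-the-envelope check, the reported distances and observation times translate into net migration speeds as follows (simple arithmetic on the figures above; the speeds themselves are an illustration, not values quoted from the original study):

```python
# Values taken from the observations described above.
max_distance_um, max_hours = 210.0, 5.0          # fastest cells over a 5 h period
threshold_um, threshold_hours = 15.0, 3.0        # minimum displacement for ~80% of cells

print(f"fastest cells: ~{max_distance_um / max_hours:.0f} um/h net forward movement")
print(f"80% of cells:  > {threshold_um / threshold_hours:.0f} um/h net forward movement")
```

This gives roughly 42 μm/h for the fastest cells and a lower bound of about 5 μm/h for the bulk of the population.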
Further, the addition of anti-astrotactin antibodies caused migrating neurons to detach from the glial fibers within three hours of the antibody addition to said cultures. Conversely, while anti-astrotactin antibodies blocked glial-guided neuronal migration, neurons stayed bound to the astrotactin-1-supported radial glial fibers for the entire three-hour assay period.
Unlike other mechanisms of CNS neuron movement, which use the formation of an interstitial junction, the neural protein astrotactin contributes to the establishment of adhesion sites along the leading process, the formation of new interstitial junctions, and neural cytoskeletal organization as the neuron moves along the astrotactin glial guide.
Wnt signal transduction may play a part in alleviating the effect of ASTN1 on migration and invasion of neural cells. When ASTN1-silenced cells were treated to inhibit the Wnt signal transduction pathway, this significantly increased migration and invasion of the cells, whereas control treatment suppressed these effects. These results further verify the role of the Wnt signaling pathway in the effects of ASTN1 on migration and invasion.
Connection to diseases
Down-regulation of ASTN1 is linked to both central nervous system diseases and other tissue disorders such as various cancers. Possible connections between ASTN1 and neurodevelopmental and neurodegenerative diseases have also been proposed. While ASTN2 correlates strongly with various neurodevelopmental disorders, such as ASDs, ADHD, speech delays, general anxiety, and obsessive-compulsive disorder, ASTN1 appears to be less frequently mutated and less impactful in these disorders.
Hepatocellular carcinoma (HCC) has some connection to decreased ASTN1 levels. Within HCC tissues, ASTN1 is downregulated compared to controls, with lower levels of ASTN1 mRNA expression in tumor tissues. Further, lower levels of ASTN1 are associated with more advanced clinical stages of hepatocellular carcinoma. More generally, decreased levels of ASTN1 in tissues are associated with cancer, along with the presence of microscopic satellite nodules, vascular invasion, tumor size, encapsulation, and Tumor-Node-Metastasis stage.
ASTN1 downregulates several downstream genes of the Wnt/β-catenin signal transduction pathway, which is excessively activated in human liver cancer tissues. The results suggest that ASTN1 may function via the Wnt/β-catenin signal transduction pathway and could potentially serve as a tumor-suppressor gene in liver cancer.
Interestingly, while many genes are mutated in both early- and late-stage tumors, ASTN1 mutations are present in stage II–IV tumors but not in stage I tumors. This suggests that ASTN1 plays a role in tumor progression rather than tumor initiation in small cell lung cancers.
Potential connections between ASTN1 and Ritscher-Schinzel syndrome 1, an autosomal recessive disease featuring developmental malformations characterized by craniofacial abnormalities, heart defects, and malformations of the cerebellum, are among the more current avenues of study.
ASTN2 expression has been linked to higher survival rates for female lung cancer patients.
Intramembrane cleavage potential
The related proteins Astn1 and Astn2 in mouse cells have been observed to undergo intramembrane cleavage, as confirmed by cysteine-to-serine mutagenesis implicating one or more disulfide bonds within the second transmembrane region. The cleavage site of the ASTN1 protein remains undetermined, as no definitive site has yet been reported.
References
Glycoproteins | Astrotactin | [
"Chemistry"
] | 1,497 | [
"Glycoproteins",
"Glycobiology"
] |
10,685,654 | https://en.wikipedia.org/wiki/Second%20sound | In condensed matter physics, second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion. Its presence leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of entropy and temperature is similar to the propagation of pressure waves in air (sound). The phenomenon of second sound was first described by Lev Landau in 1941.
Description
Normal sound waves are fluctuations in the displacement and density of molecules in a substance;
second sound waves are fluctuations in the density of quasiparticle thermal excitations (rotons and phonons). Second sound can be observed in any system in which most phonon-phonon collisions conserve momentum, like superfluids and in some dielectric crystals when Umklapp scattering is small.
Contrary to molecules in a gas, quasiparticles are not necessarily conserved. Also, gas molecules in a box conserve momentum (except at the boundaries of the box), whereas quasiparticles do not always conserve momentum in the presence of impurities or Umklapp scattering. Umklapp phonon-phonon scattering exchanges momentum with the crystal lattice, so phonon momentum is not conserved, but Umklapp processes can be reduced at low temperatures.
Normal sound in gases is a consequence of the collision rate between molecules being large compared to the frequency of the sound wave. For second sound, the Umklapp rate 1/τU has to be small compared to the oscillation frequency ω so that energy and momentum are conserved, while, analogous to gases, the rate 1/τN of the momentum-conserving (normal) collisions has to be large compared to the frequency. This leaves a window
1/τN ≫ ω ≫ 1/τU
for sound-like behaviour, or second sound. The second sound thus behaves as oscillations of the local number of quasiparticles (or of the local energy carried by these particles). Contrary to normal sound, where energy is related to pressure and temperature, in a crystal the local energy density is purely a function of the temperature. In this sense, the second sound can also be considered as oscillations of the local temperature. Second sound is a wave-like phenomenon, which makes it very different from the usual heat diffusion.
In helium II
Second sound is observed in liquid helium at temperatures below the lambda point, 2.1768 K, where 4He becomes a superfluid known as helium II. Helium II has the highest thermal conductivity of any known material (several hundred times higher than copper). Second sound can be observed either as pulses or in a resonant cavity.
The speed of second sound is close to zero near the lambda point, increasing to approximately 20 m/s around 1.8 K, about ten times slower than normal sound waves.
At temperatures below 1 K, the speed of second sound in helium II increases as the temperature decreases.
Second sound is also observed in superfluid helium-3 below its lambda point 2.5 mK.
According to the two-fluid model, the speed of second sound c2 is given by
c2 = √( (ρs/ρn) · S²T / C )
where T is the temperature, S is the specific entropy, C is the specific heat, ρs is the superfluid density and ρn is the normal fluid density. As T → 0, c2 → c1/√3, where c1 is the ordinary (or first) sound speed.
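As a minimal numerical sketch of the two-fluid expression above (the temperature, entropy, specific heat and densities used here are illustrative placeholder values, not measured data for helium II):

```python
import math

def second_sound_speed(T, S, C, rho_s, rho_n):
    """Two-fluid estimate of the second sound speed:
    c2 = sqrt((rho_s / rho_n) * S**2 * T / C)."""
    return math.sqrt((rho_s / rho_n) * S**2 * T / C)

# Placeholder values chosen only to show the form of the calculation:
c2 = second_sound_speed(T=1.8, S=400.0, C=1500.0, rho_s=100.0, rho_n=45.0)
print(f"c2 ≈ {c2:.1f} m/s")
```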
In other media
Second sound has been observed in solid 4He and 3He,
and in some dielectric solids such as Bi in the temperature
range of 1.2 to 4.0 K with a velocity of 780 ± 50 m/s,
or solid sodium fluoride (NaF) around 10 to 20 K. In 2021 this effect was observed in a BKT superfluid, as well as in a germanium semiconductor.
In graphite
In 2019 it was reported that ordinary graphite exhibits second sound at 120 K. This feature was both predicted theoretically and observed experimentally, and
was by far the highest temperature at which second sound has been observed. However, this second sound is observed only at the microscale, because the wave dies out exponentially with
characteristic length 1-10 microns. Therefore, presumably graphite in the right temperature regime has extraordinarily high thermal conductivity but only for the purpose of transferring heat pulses distances of order 10 microns, and for pulses of duration on the order of 10 nanoseconds. For more "normal" heat-transfer, graphite's observed thermal conductivity is less than that of, e.g., copper. The theoretical models, however, predict longer absorption lengths would be seen in isotopically pure graphite, and perhaps over a wider temperature range, e.g. even at room temperature. (As of March 2019, that experiment has not yet been tried.)
Applications
Measuring the speed of second sound in 3He-4He mixtures can be
used as a thermometer in the range 0.01-0.7 K.
Oscillating superleak transducers (OST) use second sound to locate defects in superconducting accelerator cavities.
See also
Zero sound
Third sound
References
Bibliography
Sinyan Shen, Surface Second Sound in Superfluid Helium. PhD Dissertation (1973). http://adsabs.harvard.edu/abs/1973PhDT.......142S
V. Peshkov, "'Second Sound' in Helium II," J. Phys. (Moscow) 8, 381 (1944)
U. Piram, "Numerical investigation of second sound in liquid helium," Dipl.-Ing. Dissertation (1991). Retrieved on April 15, 2007.
Quantum mechanics
Thermodynamics
Superfluidity
Lev Landau | Second sound | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 1,151 | [
"Physical phenomena",
"Phase transitions",
"Theoretical physics",
"Phases of matter",
"Quantum mechanics",
"Superfluidity",
"Thermodynamics",
"Condensed matter physics",
"Exotic matter",
"Dynamical systems",
"Matter",
"Fluid dynamics"
] |
10,685,778 | https://en.wikipedia.org/wiki/Kumada%20coupling | In organic chemistry, the Kumada coupling is a type of cross coupling reaction, useful for generating carbon–carbon bonds by the reaction of a Grignard reagent and an organic halide. The procedure uses transition metal catalysts, typically nickel or palladium, to couple a combination of two alkyl, aryl or vinyl groups. The groups of Robert Corriu and Makoto Kumada reported the reaction independently in 1972.
The reaction is notable for being among the first reported catalytic cross-coupling methods. Despite the subsequent development of alternative reactions (Suzuki, Sonogashira, Stille, Hiyama, Negishi), the Kumada coupling continues to be employed in many synthetic applications, including the industrial-scale production of aliskiren, a hypertension medication, and polythiophenes, useful in organic electronic devices.
History
The first investigations into the catalytic coupling of Grignard reagents with organic halides date back to the 1941 study of cobalt catalysts by Morris S. Kharasch and E. K. Fields. In 1971, Tamura and Kochi elaborated on this work in a series of publications demonstrating the viability of catalysts based on silver, copper and iron. However, these early approaches produced poor yields due to substantial formation of homocoupling products, where two identical species are coupled.
These efforts culminated in 1972, when the Corriu and Kumada groups concurrently reported the use of nickel-containing catalysts. With the introduction of palladium catalysts in 1975 by the Murahashi group, the scope of the reaction was further broadened. Many additional coupling techniques have since been developed, culminating in the 2010 Nobel Prize in Chemistry, which recognized Ei-ichi Negishi, Akira Suzuki and Richard F. Heck for their contributions to the field.
Mechanism
Palladium catalysis
According to the widely accepted mechanism, the palladium-catalyzed Kumada coupling is understood to be analogous to palladium's role in other cross coupling reactions. The proposed catalytic cycle involves both palladium(0) and palladium(II) oxidation states. Initially, the electron-rich Pd(0) catalyst (1) inserts into the R–X bond of the organic halide. This oxidative addition forms an organo-Pd(II)-complex (2). Subsequent transmetalation with the Grignard reagent forms a hetero-organometallic complex (3). Before the next step, isomerization is necessary to bring the organic ligands next to each other into mutually cis positions. Finally, reductive elimination of (4) forms a carbon–carbon bond and releases the cross coupled product while regenerating the Pd(0) catalyst (1). For palladium catalysts, the frequently rate-determining oxidative addition occurs more slowly than with nickel catalyst systems.
Nickel catalysis
Current understanding of the mechanism for the nickel-catalyzed coupling is limited. Indeed, the reaction mechanism is believed to proceed differently under different reaction conditions and when using different nickel ligands. In general the mechanism can still be described as analogous to the palladium scheme (right). Under certain reaction conditions, however, the mechanism fails to explain all observations. Examination by Vicic and coworkers using tridentate terpyridine ligand identified intermediates of a Ni(II)-Ni(I)-Ni(III) catalytic cycle, suggesting a more complicated scheme. Additionally, with the addition of butadiene, the reaction is believed to involve a Ni(IV) intermediate.
Scope
Organic halides and pseudohalides
The Kumada coupling has been successfully demonstrated for a variety of aryl and vinyl halides. Pseudohalides can also be used in place of the halide reagent, and the coupling has been shown to be quite effective with tosylate and triflate species under a variety of conditions.
Despite broad success with aryl and vinyl couplings, the use of alkyl halides is less general due to several complicating factors. Having no π-electrons, alkyl halides require different oxidative addition mechanisms than aryl or vinyl groups, and these processes are currently poorly understood. Additionally, the presence of β-hydrogens makes alkyl halides susceptible to competitive elimination processes.
These issues have been circumvented by the presence of an activating group, such as the carbonyl in α-bromoketones, that drives the reaction forward. However, Kumada couplings have also been performed with non-activated alkyl chains, often through the use of additional catalysts or reagents. For instance, with the addition of 1,3-butadienes Kambe and coworkers demonstrated nickel catalyzed alkyl–alkyl couplings that would otherwise be unreactive.
Though poorly understood, the mechanism of this reaction is proposed to involve the formation of an octadienyl nickel complex. This catalyst is proposed to undergo transmetalation with a Grignard reagent first, prior to the reductive elimination of the halide, reducing the risk of β-hydride elimination. However, the presence of a Ni(IV) intermediate is contrary to mechanisms proposed for aryl or vinyl halide couplings.
Grignard reagent
Couplings involving aryl and vinyl Grignard reagents were reported in the original publications by Kumada and Corriu. Alkyl Grignard reagents can also be used without difficulty, as they do not suffer from β-hydride elimination processes. Although the Grignard reagent inherently has poor functional group tolerance, low-temperature syntheses have been prepared with highly functionalized aryl groups.
Catalysts
Kumada couplings can be performed with a variety of nickel(II) or palladium(II) catalysts. The structures of the catalytic precursors can be generally formulated as ML2X2, where L is a phosphine ligand. Common choices for L2 include bidentate diphosphine ligands such as dppe and dppp among others.
Work by Alois Fürstner and coworkers on iron-based catalysts has shown reasonable yields. The catalytic species in these reactions is proposed to be an "inorganic Grignard reagent" consisting of [Fe(MgX)2].
Reaction conditions
The reaction typically is carried out in tetrahydrofuran or diethyl ether as solvent. Such ethereal solvents are convenient because these are typical solvents for generating the Grignard reagent. Due to the high reactivity of the Grignard reagent, Kumada couplings have limited functional group tolerance which can be problematic in large syntheses. In particular, Grignard reagents are sensitive to protonolysis from even mildly acidic groups such as alcohols. They also add to carbonyls and other oxidative groups.
As in many coupling reactions, the transition metal palladium catalyst is often air-sensitive, requiring an inert argon or nitrogen reaction environment.
A sample synthetic preparation is available at the Organic Syntheses website.
Selectivity
Stereoselectivity
Both cis- and trans-olefin halides promote the overall retention of geometric configuration when coupled with alkyl Grignards. This observation is independent of other factors, including the choice of catalyst ligands and vinylic substituents.
Conversely, a Kumada coupling using vinylic Grignard reagents proceeds without stereospecificity to form a mixture of cis- and trans-alkenes. The degree of isomerization is dependent on a variety of factors including reagent ratios and the identity of the halide group. According to Kumada, this loss of stereochemistry is attributable to side-reactions between two equivalents of the allylic Grignard reagent.
Enantioselectivity
Asymmetric Kumada couplings can be effected through the use of chiral ligands. Using planar chiral ferrocene ligands, enantiomeric excesses (ee) upward of 95% have been observed in aryl couplings. More recently, Gregory Fu and co-workers have demonstrated enantioconvergent couplings of α-bromoketones using catalysts based on bis-oxazoline ligands, wherein the chiral catalyst converts a racemic mixture of starting material to one enantiomer of product with up to 95% ee. The latter reaction is also significant for involving a traditionally inaccessible alkyl halide coupling.
Chemoselectivity
Grignard reagents do not typically couple with chlorinated arenes. This low reactivity is the basis for chemoselectivity for nickel insertion into the C–Br bond of bromochlorobenzene using a NiCl2-based catalyst.
Applications
Synthesis of aliskiren
The Kumada coupling is suitable for large-scale, industrial processes, such as drug synthesis. The reaction is used to construct the carbon skeleton of aliskiren (trade name Tekturna), a treatment for hypertension.
Synthesis of polythiophenes
The Kumada coupling also shows promise in the synthesis of conjugated polymers such as polyalkylthiophenes (PATs), which have a variety of potential applications in organic solar cells and light-emitting diodes. In 1992, McCullough and Lowe developed the first synthesis of regioregular polyalkylthiophenes by utilizing the Kumada coupling scheme pictured below, which requires subzero temperatures.
Since this initial preparation, the synthesis has been improved to obtain higher yields and operate at room temperature.
See also
Heck reaction
Hiyama coupling
Suzuki reaction
Negishi coupling
Petasis reaction
Stille reaction
Sonogashira coupling
Murahashi coupling
Citations
Carbon-carbon bond forming reactions
Name reactions | Kumada coupling | [
"Chemistry"
] | 2,044 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
10,685,957 | https://en.wikipedia.org/wiki/Haplogroup%20S%20%28mtDNA%29 | In human genetics, Haplogroup S is a human mitochondrial DNA (mtDNA) haplogroup found only among Indigenous Australians. It is a descendant of macrohaplogroup N.
Origin
Haplogroup S mtDNA evolved within Australia between 64,000 and 40,000 years ago (51 kya).
Distribution
It is found in the Indigenous Australian population. Haplogroup S2 was found in the Willandra Lakes human remain WLH4, dated to the Late Holocene (3,000–500 years ago).
The following table lists relevant GenBank samples:
Subclades
Tree
This phylogenetic tree of haplogroup S subclades is based on the paper by Mannis van Oven and Manfred Kayser, Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation, and subsequent published research. The TMRCA for haplogroup S is between 49 and 51 kya according to Nano Nagle's publication Aboriginal Australian mitochondrial genome variation – an increased understanding of population antiquity and diversity, published in 2017.
S (64-40 kya) in Australia
S1 (53-32 kya) in Australia
S1a (44-29 kya) found in WA, NT, QLD and NSW
S1b (37-22 kya) found in NT, QLD and NSW
S1b1 (30-10 kya) found in NT and QLD
S1b1a (24-6 kya) found in QLD
S1b2 (17-3 kya) found in QLD
S1b3 (20-4 kya) found in QLD and NSW
S2 (44-22 kya) in Australia
S2a (38-18 kya) found in NT, QLD, NSW and TAS
S2a1 (31-12 kya) found in NSW, QLD and TAS
S2a1a (19-6 kya) found in NSW and QLD
S2a2 (38-11 kya) found in NT, QLD and NSW
S2b (42-18 kya) found in WA, NT, QLD and VIC
S2b1 (27-9 kya) found in NT, QLD and VIC
S2b2 (37-12 kya) found in WA, NT and QLD
S-T152C!
S3 (17-1 kya) found in NT
S4 found in NT
S5 found in WA
S6 found in NSW
See also
Genealogical DNA test
Genetic genealogy
Human mitochondrial genetics
Population genetics
References
External links
Ian Logan's Mitochondrial DNA Site: Haplogroup S
Mannis van Oven's Phylotree
S | Haplogroup S (mtDNA) | [
"Chemistry",
"Biology"
] | 552 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Bioinformatics",
"Bioinformatics stubs"
] |
10,685,970 | https://en.wikipedia.org/wiki/Haplogroup%20pre-JT | Haplogroup pre-JT is a human mitochondrial DNA haplogroup (mtDNA). It is also called R2'JT.
Origin
Haplogroup pre-JT is a descendant of the haplogroup R. It is characterised by the mutation T4216C. The pre-JT clade has two direct descendant lineages, haplogroup JT and haplogroup R2.
Distribution
According to YFull MTree, haplogroup R2'JT has allegedly been sequenced in at least three individuals, among whom one came from ancient Egypt and one from modern Denmark. However, Ian Logan mutationally interpreted the Denmark sample as being a member of T1a.
One carrier of haplogroup R2'JT was found in an in-depth study of "108 Scandinavian Neolithic individuals".
Subclades
Its major subclade is Haplogroup JT, which further divides into Haplogroup J and Haplogroup T. Its other subclade is Haplogroup R2, which has such branches as R2a, R2b, and R2c.
Tree
R2'JT
R2
JT
J
T
See also
Genealogical DNA test
Genetic genealogy
Human mitochondrial genetics
Population genetics
References
External links
Ian Logan's Mitochondrial DNA Site
JT | Haplogroup pre-JT | [
"Chemistry",
"Biology"
] | 280 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Bioinformatics",
"Bioinformatics stubs"
] |
10,686,210 | https://en.wikipedia.org/wiki/Jupiter%20mass | The Jupiter mass, also called Jovian mass, is the unit of mass equal to the total mass of the planet Jupiter. This value may refer to the mass of the planet alone, or the mass of the entire Jovian system to include the moons of Jupiter. Jupiter is by far the most massive planet in the Solar System. It is approximately 2.5 times as massive as all of the other planets in the Solar System combined.
Jupiter mass is a common unit of mass in astronomy that is used to indicate the masses of other similarly-sized objects, including the outer planets, extrasolar planets, and brown dwarfs, as this unit provides a convenient scale for comparison.
Current best estimates
The current best known value for the mass of Jupiter can be expressed as approximately 1.8982×10^27 kg,
which is about 1/1000 as massive as the Sun (the Sun is about 1047 Jupiter masses):
MJ ≈ 9.55×10^−4 M☉
Jupiter is 318 times as massive as Earth:
MJ ≈ 318 M⊕
Context and implications
Jupiter's mass is 2.5 times that of all the other planets in the Solar System combined—this is so massive that its barycenter with the Sun lies beyond the Sun's surface at 1.068 solar radii from the Sun's center.
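As a rough check of the barycenter figure above, a short sketch can be made using standard approximate values for Jupiter's orbital radius and the Sun's mass and radius (these inputs are assumptions of the example, not taken from this article):

```python
M_sun = 1.989e30   # solar mass, kg
M_jup = 1.898e27   # Jupiter mass, kg
a_jup = 7.785e11   # Jupiter's mean orbital radius, m (about 5.2 AU)
R_sun = 6.957e8    # nominal solar radius, m

# Distance of the Sun-Jupiter barycenter from the Sun's center, in solar radii
r_bary = a_jup * M_jup / (M_sun + M_jup)
print(r_bary / R_sun)   # roughly 1.07, i.e. just outside the Sun's surface
```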
Because the mass of Jupiter is so large compared to the other objects in the Solar System, the effects of its gravity must be included when calculating satellite trajectories and the precise orbits of other bodies in the Solar System, including the Moon and even Pluto.
Theoretical models indicate that if Jupiter had much more mass than it does at present, its atmosphere would collapse, and the planet would shrink. For small changes in mass, the radius would not change appreciably, but above about 500 Earth masses (1.6 Jupiter masses) the interior would become so much more compressed under the increased pressure that its volume would decrease despite the increasing amount of matter. As a result, Jupiter is thought to have about as large a diameter as a planet of its composition and evolutionary history can achieve. The process of further shrinkage with increasing mass would continue until appreciable stellar ignition was achieved, as in high-mass brown dwarfs having around 50 Jupiter masses. Jupiter would need to be about 80 times as massive to fuse hydrogen and become a star.
Gravitational constant
The mass of Jupiter is derived from the measured value called the Jovian mass parameter, which is denoted with GMJ. The mass of Jupiter is calculated by dividing GMJ by the constant G. For celestial bodies such as Jupiter, Earth and the Sun, the value of the GM product is known to many orders of magnitude more precisely than either factor independently. The limited precision available for G limits the uncertainty of the derived mass. For this reason, astronomers often prefer to refer to the gravitational parameter, rather than the explicit mass. The GM products are used when computing the ratio of Jupiter mass relative to other objects.
In 2015, the International Astronomical Union defined the nominal Jovian mass parameter to remain constant regardless of subsequent improvements in the measurement precision of MJ. This constant is defined as exactly
(GM)J = 1.2668653×10^17 m^3 s^−2
If the explicit mass of Jupiter is needed in SI units, it can be calculated by dividing GM by G, where G is the gravitational constant.
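A minimal sketch of that division, using the nominal parameter above together with the CODATA 2018 value of G (the choice of G here is an assumption of the example, since the result depends on which measured value is used):

```python
GM_J = 1.2668653e17   # nominal Jovian mass parameter, m^3 s^-2
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2 (CODATA 2018)

M_J = GM_J / G        # explicit mass of Jupiter in kg
print(f"M_J ≈ {M_J:.4e} kg")   # about 1.898e27 kg
```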
Mass composition
The majority of Jupiter's mass is hydrogen and helium. These two elements make up more than 87% of the total mass of Jupiter. The total mass of heavy elements other than hydrogen and helium in the planet is between 11 and 45 Earth masses. The bulk of the hydrogen on Jupiter is solid hydrogen. Evidence suggests that Jupiter contains a central dense core. If so, the mass of the core is predicted to be no larger than about 12 Earth masses. The exact mass of the core is uncertain due to the relatively poor knowledge of the behavior of solid hydrogen at very high pressures.
Relative mass
See also
Jupiter radius
Hot Jupiter
Orders of magnitude (mass)
Planetary mass
Solar mass
Notes
References
Units of mass
Planetary science
Units of measurement in astronomy | Jupiter mass | [
"Physics",
"Astronomy",
"Mathematics"
] | 770 | [
"Matter",
"Units of measurement",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement in astronomy",
"Planetary science",
"Astronomical sub-disciplines"
] |
13,169,712 | https://en.wikipedia.org/wiki/Embedment | Embedment is a phenomenon in mechanical engineering in which the surfaces between mechanical members of a loaded joint embed. It can lead to failure by fatigue as described below, and is of particular concern when considering the design of critical fastener joints.
Mechanism
The mechanism behind embedment is different from creep. When the loading of the joint varies (e.g. due to vibration or thermal expansion), the protruding points of the imperfect surfaces see local stress concentrations and yield until the stress concentration is relieved. Over time, surfaces can flatten by an appreciable amount, on the order of thousandths of an inch.
Consequences
In critical fastener joints, embedment can mean loss of preload. Flattening of a surface allows the strain of a screw to relax, which in turn correlates with a loss in tension and thus preload. In bolted joints with particularly short grip lengths, the loss of preload due to embedment can be especially significant, in some cases eliminating the preload entirely. Therefore, embedment can lead directly to loosening of a fastener joint and subsequent fatigue failure.
In bolted joints, most of the embedment occurs during torquing. Only embedment that occurs after installation can cause a loss of preload, and values of up to 0.0005 inches can be seen at each surface mate, as reported by SAE.
Prevention and solutions
Embedment can be prevented by designing mating surfaces of a joint to have high surface hardness and very smooth surface finish. Exceptionally hard and smooth surfaces will have less susceptibility to the mechanism that causes embedment.
In most cases, some degree of embedment is inevitable. That said, short grip lengths should be avoided. For two bolted joints identical in design and installation except that the second has a longer grip length, the first joint will be more likely to loosen and fail. Since both joints have the same loading, the surfaces will experience the same amount of embedment. However, the same embedment relaxes the bolt strain proportionally less over the longer grip length, so the loss in preload is minimized. For this reason, bolted joints should always be designed with careful consideration of the grip length.
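A minimal sketch of the grip-length effect, treating the bolt as a linear spring of stiffness k = EA/L and the clamped members as rigid, so that an embedment δ relaxes the preload by roughly ΔF = kδ; the bolt dimensions and embedment value below are illustrative assumptions, not data from any standard:

```python
def preload_loss(embedment_m, grip_length_m, bolt_area_m2, youngs_modulus_pa):
    """Approximate preload lost when the joint surfaces embed by embedment_m,
    modeling the bolt as a linear spring (k = E*A/L) and the clamped members
    as rigid. Returns the loss in newtons."""
    k_bolt = youngs_modulus_pa * bolt_area_m2 / grip_length_m
    return k_bolt * embedment_m

E = 200e9        # Young's modulus of steel, Pa
A = 3.66e-5      # tensile stress area of an M8 bolt, m^2 (illustrative)
delta = 12.7e-6  # 0.0005 in of embedment, m

for grip in (0.010, 0.050):  # 10 mm versus 50 mm grip length
    loss = preload_loss(delta, grip, A, E)
    print(f"grip {grip * 1000:.0f} mm: preload loss ≈ {loss / 1000:.1f} kN")
```

For the same embedment, the short-grip joint in this sketch loses several times more preload, which is the effect described above.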
If a short grip length can not be avoided, the use of conical spring washers (Belleville washers or disc springs) can also reduce the loss of bolt pre-load due to embedment.
See also
Stress relaxation
References
Comer, Dr. Jess. (2005); "Source of Fatigue Failures of Threaded Fasteners",
T. Jaglinski, et al. (2007); "Study of Bolt Load Loss in Bolted Aluminum Joints",
External links
SAE Fatigue Design and Evaluation Committee
Mechanical engineering
Reliability engineering
Fasteners
Materials degradation | Embedment | [
"Physics",
"Materials_science",
"Engineering"
] | 561 | [
"Systems engineering",
"Applied and interdisciplinary physics",
"Reliability engineering",
"Fasteners",
"Materials science",
"Construction",
"Mechanical engineering",
"Materials degradation"
] |
13,170,237 | https://en.wikipedia.org/wiki/Jure%20uxoris | Jure uxoris (a Latin phrase meaning "by right of (his) wife") describes a title of nobility used by a man because his wife holds the office or title suo jure ("in her own right"). Similarly, the husband of an heiress could become the legal possessor of her lands. For example, married women in England and Wales were legally incapable of owning real estate until the Married Women's Property Act 1882.
Middle Ages
During the feudal era, the husband's control over his wife's real property, including titles, was substantial. On marriage, the husband gained the right to possess his wife's land during the marriage, including any acquired after the marriage. Whilst he did not gain the formal legal title to the lands, he was able to spend the rents and profits of the land and sell his right, even if the wife protested.
The concept of jure uxoris was standard in the Middle Ages even for queens regnant. In the Kingdom of Jerusalem, Fulk V of Anjou, Guy of Lusignan, Conrad of Montferrat, Henry II of Champagne, and Aimery of Lusignan all became kings as a result of marriage. Another famous instance of jure uxoris occurring was in the case of Richard Neville, 16th Earl of Warwick, who gained said title via his marriage to Anne Beauchamp, 16th Countess of Warwick, herself a daughter of Richard Beauchamp, 13th Earl of Warwick.
Sigismund of Luxembourg married Queen Mary of Hungary and obtained the crown through her, retaining it after her death in 1395.
A man who held the title jure uxoris could retain it even after the death or divorce of his wife. When the marriage of Marie I of Boulogne and Matthew of Boulogne was annulled in 1170, Marie ceased to be countess, while Matthew I continued to reign until 1173. Likewise, upon the death of Maria, Queen of Sicily in 1401, her widower Martin I of Sicily continued to reign as King until his death in 1409. In some cases, the kingdom could pass to the husband's heirs, even when they were not an issue of the wife in question (e.g. Jogaila, who became king by marrying Jadwiga and passed on the kingdom to his children with Sophia of Halshany).
Kings jure uxoris in the medieval era include:
Philip I of Navarre, who was married to Joan I of Navarre
Frederick II, Holy Roman Emperor, who was named King of Jerusalem by virtue of his marriage to Isabella II of Jerusalem
Louis I of Naples, whose wife was Joanna I of Naples
Philip III of Navarre, who was married to Joan II of Navarre
John I of Castile, who was a claimant to the throne of Portugal by virtue of his marriage to Beatrice of Portugal
Guy of Lusignan, who ruled as King of Jerusalem by right of marriage to Sibylla of Jerusalem
Władysław II Jagiełło, who ruled as King of Poland by right of his marriage to Jadwiga of Poland
Renaissance
By the time of the Renaissance, laws and customs had changed in some countries: a woman sometimes remained monarch, with only part of her power transferred to her husband. This was usually the case when multiple kingdoms were consolidated, such as when Isabella and Ferdinand shared crowns.
The precedent of jure uxoris complicated the lives of Henry VIII's daughters, both of whom inherited the throne in their own right. The marriage of Mary I to King Philip in 1554 was seen as a political act, an attempt to bring England and Ireland under the influence of Catholic Spain. Parliament passed the Act for the Marriage of Queen Mary to Philip of Spain specifically to prevent Philip from seizing power on the basis of jure uxoris. As it turned out, the marriage produced no children, and Mary died in 1558, ending Philip's jure uxoris claims in England and Ireland as envisaged by the Act; she was succeeded by Elizabeth I, who never married.
In Navarre, Jeanne d'Albret had married Antoine of Navarre in 1548, and she became queen regnant at her father's death in 1555. Antoine was crowned co-ruler jure uxoris with Jeanne in August.
Partial transference of power
In Great Britain, husbands acted on their wives' behalf in the House of Lords, from which women were once barred. These offices were exercised jure uxoris.
When Lady Priscilla Bertie inherited the title Baroness Willoughby de Eresby in 1780, she also held the position of Lord Great Chamberlain. However, her husband, Sir Peter Burrell (later Lord Gwydyr), acted on her behalf in that office instead.
Conditions
In Portugal, a male consort could not become a king jure uxoris until the queen regnant had a child and royal heir. Although Queen Maria II married her second husband in 1836, Ferdinand of Saxe-Coburg-Gotha did not become King Ferdinand II until 1837, when their first child was born. Queen Maria's first husband, Auguste of Beauharnais, never became monarch, because he died before he could father an heir. The queen's child did not have to be born after her accession. For example, Queen Maria I already had children by her husband when she acceded, so he became King Peter III at the moment of his wife's accession.
In Spain, the husband of a titled woman is sometimes referred to socially by her title, although he is not technically entitled to it under the law. For example, Jaime de Marichalar was often referred to as the Duke of Lugo during his marriage to Infanta Elena, Duchess of Lugo. After their divorce, he ceased to use the title. His brother-in-law Iñaki Urdangarin was referred to as the Duke of Palma before corruption allegations prompted the King to take action. Since 12 June 2015, he is no longer referred to as the Duke of Palma de Mallorca, following the removal of that title from his wife, Infanta Cristina.
See also
Jure matris
List of Latin phrases
References
Latin legal terminology
Nobility
Genealogy
Inheritance
Jure uxoris emperors | Jure uxoris | [
"Biology"
] | 1,254 | [
"Phylogenetics",
"Genealogy"
] |
8,555,091 | https://en.wikipedia.org/wiki/Copper%28II%29%20azide | Copper(II) azide is a medium density explosive with the molecular formula .
Uses
Copper azide is very explosive and is too sensitive for any practical use unless handled in solution.
Preparation
Copper azide can be prepared by a metathesis reaction between water-soluble sources of Cu2+ and azide (N3−) ions (spectator ions omitted in the reaction below):
Cu2+ (aq) + 2 N3− (aq) → Cu(N3)2 (s)
It can be destroyed by concentrated nitric acid to form non-explosive products, these being nitrogen, nitrogen oxides and copper(II) nitrate.
References
Azides
Copper(II) compounds
Explosive chemicals | Copper(II) azide | [
"Chemistry"
] | 110 | [
"Explosive chemicals",
"Azides",
"Inorganic compounds",
"Inorganic compound stubs"
] |
8,555,460 | https://en.wikipedia.org/wiki/Boeing%20EC-135 | The Boeing EC-135 is a retired family of command and control aircraft derived from the Boeing C-135 Stratolifter. During the Cold War, the EC-135 was best known for being modified to perform the Looking Glass mission where one EC-135 was always airborne 24 hours a day to serve as flying command post for the Strategic Air Command in the event of nuclear war. Various other EC-135 aircraft sat on airborne and ground alert throughout the Cold War, with the last EC-135C being retired in 1998. The EC-135N variant served as the tracking aircraft for the Apollo program.
The Boeing E-6B Mercury "TACAMO" replaced the EC-135C.
Missions
Looking Glass
Officially known as "Operation Looking Glass", at least 11 EC-135C command post aircraft were provided to the Commander in Chief, Strategic Air Command (CINCSAC), and were based at various locations throughout the United States and worldwide. Operations began in 1961 with the 34th Air Refueling Squadron at Offutt Air Force Base (Nebraska), initially using EC-135As (converted from KC-135As) until the dedicated EC-135Cs entered service in 1964. Originally built as KC-135Bs, they were re-designated as EC-135Cs from 1 January 1965. Other Offutt-based units included the 38th Strategic Reconnaissance Squadron (1966–1970), the 2d Airborne Command and Control Squadron (1970–1994), and the 7th Airborne Command and Control Squadron (1994–1998). Other units operating the Looking Glass mission included the following:
913th Air Refueling Squadron at Barksdale Air Force Base (Louisiana) (1963–1970)
3rd Airborne Command & Control Squadron at Grissom Air Force Base (Indiana) (1970–1974)
4th Airborne Command & Control Squadron at Ellsworth Air Force Base, (South Dakota) (1970–1991)
99th Air Refueling Squadron, Westover Air Force Base (Massachusetts) (1963–1970)
Other EC-135 aircraft (including EC-135A, G, and L models) supporting the Looking Glass missions (communications relay and Minuteman airborne launch control centers) were flown by the 906th Air Refueling Squadron at Minot Air Force Base (North Dakota) (1963–1970), the 70th Air Refueling Squadron at Grissom AFB (1975–1993), and the 301st Air Refueling Squadron at Lockbourne Air Force Base (Ohio) (1963–1970). All aircraft have been retired or repurposed.
The United States nuclear strategy depends on its ability to command, control, and communicate with its nuclear forces under all conditions. An essential element of that ability is Looking Glass; its crew and staff ensure there is always an aircraft ready to direct bombers and missiles from the air should ground-based command centers be destroyed or rendered inoperable. Looking Glass is intended to guarantee that U.S. strategic forces will act only in the manner dictated by the President. It took the nickname "Looking Glass" because the mission mirrored ground-based command, control, and communications centers.
The Strategic Air Command (SAC) began the Looking Glass mission on February 3, 1961, and Looking Glass aircraft were continuously airborne 24 hours a day for over 29 years, accumulating more than 281,000 accident-free flying hours. On July 24, 1990, "The Glass" ceased continuous airborne alert, but remained on ground or airborne alert 24 hours a day. The EC-135A flew the Command Post mission until EC-135C were delivered starting in 1963. The aircraft were delivered to Offutt AFB and as well as one aircraft to each of the Stateside Numbered Air Force Headquarters – Second Air Force at Barksdale AFB (Louisiana); Eighth Air Force at Westover AFB (Massachusetts); and Fifteenth Air Force at March AFB (California). EC-135s flew all the missions except one, on March 4, 1980, when an E-4B was tested on an operational mission, flying a double sortie as the replacement aircraft could not launch due to weather. About a week after the flight, Washington deleted the funds for additional E-4 aircraft.
On June 1, 1992, SAC was inactivated and replaced by the United States Strategic Command, which now controls the Looking Glass. On October 1, 1998, the Navy's E-6 Mercury TACAMO replaced the USAF's EC-135C in the Looking Glass mission. The last active, former, Looking Glass was converted to a WC-135C Constant Phoenix, where it was retired in November 2020.
Notes
Ellsworth AFB maintained additional EC-135 aircraft on Satellite Alert at Minot AFB to monitor the North Dakota missile silos.
Airborne Launch Control Center
Airborne Launch Control Centers (ALCC—pronounced "Al-see") provided a survivable launch capability for the United States Air Force's LGM-30 Minuteman Intercontinental Ballistic Missile (ICBM) force by utilizing the Airborne Launch Control System (ALCS) on board that is operated by an airborne missileer crew. Historically, from 1967 to 1998, the ALCC mission was performed by United States Air Force Boeing EC-135 command post aircraft. This included EC-135A, EC-135C, EC-135G, and EC-135L aircraft.
In the late 1960s and early 1970s, ALCS crews belonged to the 44th Strategic Missile Wing (SMW) at Ellsworth AFB and the 91st SMW at Minot AFB. ALCS equipment was installed on various Boeing EC-135 variants to include the EC-135A, EC-135C, EC-135G, and for a short while on the EC-135L.
Starting in 1970, there were only two SAC squadrons that operated ALCS capable aircraft. This included the 2nd Airborne Command and Control Squadron (ACCS) operating EC-135C aircraft out of Offutt AFB and the 4th ACCS operating EC-135A, EC-135C, and EC-135G aircraft out of Ellsworth AFB . All three variants of these EC-135A/C/G aircraft had ALCS equipment installed on board.
The 4th ACCS was the workhorse of ALCS operations. Three dedicated Airborne Launch Control Centers (ALCC) were on ground alert around-the-clock providing ALCS coverage for five of the six Minuteman ICBM Wings. These dedicated ALCCs were mostly EC-135A aircraft but could also have been EC-135C or EC-135G aircraft depending on availability. ALCC No. 1 was on ground alert at Ellsworth AFB and during a wartime scenario would have taken off and orbited between the Minuteman Wings at Ellsworth AFB and F.E. Warren AFB (Wyoming) providing ALCS assistance if needed. ALCCs No. 2 and No. 3 were routinely on forward deployed ground alert at Minot AFB. During a wartime scenario, ALCC No. 3 would have orbited between the Minuteman ICBM Wings at Minot AFB and Grand Forks AFB, both in North Dakota, providing ALCS assistance if needed. ALCC No. 2 was dedicated to orbiting near the Minuteman ICBM Wing at Malmstrom AFB (Montana) providing ALCS assistance if needed. The 4th ACCS also maintained an EC-135C or EC-135G on ground alert at Ellsworth AFB as the West Auxiliary Airborne Command Post (WESTAUXCP) as a backup to SAC's "Looking Glass" Airborne Command Post (ABNCP) as well as a radio relay link between the Looking Glass and ALCCs when airborne. Although equipped with ALCS, the WESTAUXCP did not have a dedicated Minuteman ICBM wing to provide ALCS assistance to.
The 2nd ACCS was another major player in ALCS operations. The primary mission of the 2nd ACCS was to fly the SAC ABNCP "Looking Glass" aircraft in continuous airborne operations. However, due to its proximity in orbiting over the central United States, the airborne Looking Glass provided ALCS coverage for the Minuteman ICBM Wing located at Whiteman AFB (Missouri). Not only did Whiteman AFB have Minuteman II ICBMs, but it also had ERCS configured Minuteman missiles on alert. The 2nd ACCS also had an additional EC-135C on ground alert at Offutt AFB as the EASTAUXCP, providing backup to the airborne Looking Glass, radio relay capability, and a means for the Commander in Chief of SAC to escape an enemy nuclear attack. Although the EASTAUXCP was ALCS capable, it did not have a dedicated ALCS mission.
Silk Purse
Operation Silk Purse program provided four EC-135H command post aircraft to the Commander, U.S. European Command (USEUCOM), which were based at RAF Mildenhall in the United Kingdom. Flown by the 10th Airborne Command and Control Squadron 1970–91. Onboard secure/non-secure communications and avionics equipment was maintained by the 513th Avionics Maintenance Squadron and the 2147th Communications Squadron. Aircraft S/Ns 61–0282, 285, 286 and 291.
Scope Light
Operation Scope Light provided five EC-135C/HJ/P command post aircraft to the Commander in Chief, U.S. Atlantic Command (CINCLANT), which were based at Langley AFB (Virginia). Operated by the 6th Airborne Command and Control Squadron 1972–92.
Blue Eagle
Operation Blue Eagle provided five EC-135J/P command post aircraft to the Commander in Chief, U.S. Pacific Command (USCINCPAC), which were based at Hickam AFB (Hawaii). Operated by the 9th Airborne Command and Control Squadron 1969–92. Communications, secure/unsecure voice and teletype, handled by the 1957th Communications Group, Hickam AFB (1969–1992)
"Upkeep" was the call sign for the EC135 flying in southeast Asia during 1969 to 1971, based out of Hickam AFB. It was under the direction of PACAF of which 5th AF in Fuchu AS, Tokyo
Japan handled their voice communications both unsecure and secure.
<1956 Comm Gp USAF 1969 to 1971>
Blue Eagle Ground Stations were located at Hickam AFB, Yokota AB (Japan), Kadena AB (Okinawa), and Clark AB (Philippines). There may have been an additional Ground Station on Guam.
At Kadena AB, the 1962nd Communications Group hosted the Blue Eagle Ground Station. The call sign for the Kadena Blue Eagle Operation was “Settler”.
All Blue Eagle Ground Stations were contracted to the Philco Corporation and consisted of two trailer vans that could be pulled by a single tractor. One van was configured with a 15 kW diesel-powered generator and a diesel fuel tank; the other was outfitted with a 15-ton heavy-duty air conditioning unit, three motor-generators, three UHF/VHF FM transmitters and receivers, two multiplexers each providing up to 24 telephone lines, and a dedicated, individual telephone line to the aircraft.
The ground stations were self-sufficient in that they were configured in trailers so they could be relocated to safer positions in the event of a national emergency. The equipment installed in the vans was identical to the electronics on board the aircraft, which necessitated the motor-generators to provide conversion from 60 Hz to 400 Hz power.
Each equipment van had an omni-directional antenna mounted on the roof of the van and 3 additional portable antennas that were deployed on telephone poles. The antennas could be switched electro-mechanically from each transmitter/receiver pair. The vans at Kadena AB were never moved from their initial installation location.
Blue Eagle was formed in 1965 and started 24/7 operation in October 1965 and continued until disbanded in 1992.
Nightwatch
Operation Nightwatch provided three EC-135J command post aircraft to the President of the United States which were based at Andrews AFB (Maryland). All three aircraft were transferred to other ABNCP missions.
Nightwatch was initiated in the mid-1960s utilizing the three EC-135J aircraft, modified from KC-135Bs, as command post aircraft. The three Nightwatch aircraft were ready to fly the President and the National Command Authority (NCA) out of Washington in the event of a nuclear attack. The E-4 aircraft (a modified Boeing 747-200) came on line with the Nightwatch program in 1974 replacing the EC-135s on this mission.
USCENTCOM Support
The 310th Airlift Squadron, part of the 6th Air Mobility Wing at MacDill AFB (Florida), operated two NKC-135s that were reconfigured as EC-135Y aircraft from 1989 to 2003 as executive transport and command & control platforms to support the Commander, United States Central Command. These aircraft have since been replaced with three C-37A Gulfstream V aircraft.
Advanced Range Instrumentation Aircraft
The Advanced Range Instrumentation Aircraft are EC-135Bs, modified C-135B cargo aircraft and EC-18B (former American Airlines 707-320) passenger aircraft that provided tracking and telemetry information to support the US space program in the late 1960s and early 1970s.
During the early 1960s, NASA and the Department of Defense (DoD) needed a very mobile tracking and telemetry platform to support the Apollo space program and other unmanned space flight operations. In a joint project, NASA and the DoD contracted with the McDonnell Douglas and the Bendix Corporations to modify eight Boeing C-135 Stratolifter cargo aircraft into EC-135N Apollo / Range Instrumentation Aircraft (A/RIA). Equipped with a steerable seven-foot antenna dish in its distinctive "Droop Snoot" or "Snoopy Nose", the EC-135N A/RIA became operational in January 1968, and was often known as the "Jimmy Durante" of the Air Force. The Air Force Eastern Test Range (AFETR) at Patrick AFB, Florida, maintained and operated the A/RIA until the end of the Apollo program in 1972, when the USAF renamed it the Advanced Range Instrumentation Aircraft (ARIA).
Since Patrick AFB was located on the Atlantic Ocean, salt water and salt air-induced corrosion issues and associated aircraft maintenance challenges were problematic for the ARIA while based there. Transferred to the 4950th Test Wing at Wright-Patterson AFB, Ohio, in December 1975 as part of an overall consolidation of large test and evaluation aircraft, the ARIA fleet underwent numerous conversions, including a re-engining that changed the EC-135N to the EC-135E. In 1994, the ARIA fleet relocated again to Edwards AFB, California, as part of the 412th Test Wing. However, taskings for the ARIA dwindled because of high costs and improved satellite technology, and the USAF transferred the aircraft to other programs such as E-8 J-STARS.
Over its thirty-two year career, the ARIA supported the United States space program, gathered telemetry, verified international treaties, and supported cruise missile, ballistic missile defense tests, and the Space Shuttle. ARIA aircraft were equipped to collect data from the Sonobuoy Missile Impact Location System (SMILS) composed of a large sonobuoy field and a fixed bottom transponder. Specially equipped Navy P-3 aircraft were also equipped to collect data from this system which supported the Navy's fleet ballistic missile programs testing.
Variant summary
EC-135A – KC-135A modified for airborne national command post role. Later performed Airborne Launch Control Center mission with the Airborne Launch Control System.
EC-135B – C-135B modified with large nose for ARIA mission
EC-135C – re-designated KC-135B to EC-135C for airborne command post role, "Looking Glass"
EC-135E – re-engined EC-135N, "Advanced Range Instrumentation Aircraft" or "ARIA"
EC-135G – KC-135A modified for airborne national command post role. Later performed Airborne Launch Control Center mission with the Airborne Launch Control System.
EC-135H – KC-135A modified for airborne national command post role, "Silk Purse"
EC-135J – KC-135B modified for airborne national command post role, "Nightwatch"
EC-135K – KC-135A modified for deployment control duties, "Head Dancer"
EC-135L – KC-135A modified for radio relay and amplitude modulation dropout capability "Cover All"
EC-135N – ARIA aircraft with the so-called "droop snoot" radome housing a large parabolic telemetry gathering antenna.
EC-135J/P – KC-135A modified for airborne command post role, "Blue Eagle" and "Scope Light"
EC-135Y – NKC-135 reconfigured as C3 aircraft for Commander-in-Chief, United States Central Command
Accidents
On 13 June 1971, USAF EC-135N, (AF Serial Number 61-0331), of 4950th Test Wing, Space and Missile Systems Organization (SAMSO), Wright-Patterson AFB disappeared while en route from Pago Pago, American Samoa to Hickam AFB in Hawaii after monitoring a French atmospheric test conducted on the previous day. The aircraft disappeared about 70 miles south of Hawaii near Palmyra Island. Twelve military personnel and twelve civilians died. Cause of the mishap is unknown. Only small bits of wreckage were found.
On 14 September 1977, USAF EC-135K, (AF Serial Number 62-3536), crashed on takeoff from Kirtland Air Force Base, NM for a Higher-Headquarters Directed (HHD) mission. After a long crew duty period, the crew started its takeoff roll at a few minutes prior to midnight. The aircraft impacted the ground 8 km (5 miles) east of the departure base because it lacked sufficient power to either climb above or turn to avoid rapidly rising terrain in that area. All 20 occupants of this Tactical Air Command (TAC) operated aircraft were killed in the crash and subsequent fire at about 8,500 feet up the Manzano Mountain Range east of Albuquerque, NM.
On 2 January 1980, USAF EC-135P, (AF Serial Number 58-0007), was destroyed on the ground at Langley AFB when an electrical short occurred in the water injection tank heater wiring on the J-57-P/F-59W equipped aircraft. There were no injuries as the Tactical Air Command (TAC) aircraft was unoccupied at the time of the mishap.
On 6 May 1981, USAF EC-135N, (AF Serial Number 61-0328), crashed during a scheduled Advanced Range Instrumented Aircraft (ARIA) navigator and Primary Mission Electronic Equipment (PMEE) training mission from Wright-Patterson Air Force Base, OH. For an unexplained reason, the aircraft pitch trim was moved to the full nose-down position, which exceeded the ability of the autopilot to control, and the aircraft pitched over abruptly. The abrupt pitch over caused the generators to trip off line and the loss of AC electrical power prevented the pitch trim from being operated normally. The aircraft became uncontrollable and exploded at about 1,500 ft MSL. The crash occurred near Walkersville, MD at 10:50L. All seventeen crew members and four passengers on board the aircraft were killed.
On 29 May 1992, USAF EC-135J, (AF Serial Number 62-3584), landed long at Pope AFB (North Carolina) and overshot the runway. The undercarriage collapsed and the fuselage broke in two. Although none of the 14 occupants were seriously injured, the aircraft was written off as damaged beyond repair and the remains were removed to Davis-Monthan AFB (Arizona) for disposal.
On 2 September 1997, USAF EC-135C, (AF Serial Number 63-8053), was heavily damaged on landing at Pope AFB when the nose wheel collapsed. None of the 11 occupants was injured significantly, but the Air Combat Command (ACC) aircraft was 32 years and 10 months old at the time of the accident and was written off as damaged beyond repair.
Aircraft on display
60-0374 The Bird of Prey – EC-135E (originally built as a C-135A, later converted to EC-135N) on static display at the National Museum of the United States Air Force at Wright-Patterson Air Force Base in Dayton, Ohio. The aircraft is a former Advanced Range Instrumented Aircraft (ARIA) designated as an EC-135N model with J57-59 engines, and is displayed in the museum's outside Air Park; nose art remains. The aircraft was flown to the museum on November 3, 2000, by a flight crew from the Air Force Flight Test Center (AFFTC), and was delivered with full Prime Mission Electronic Equipment intact.
61-0262 Rollin' Thunder – EC-135A (originally built as a KC-135A) on static display at the South Dakota Air and Space Museum in Box Elder, South Dakota; nose art remains. It was last assigned to the 4th Airborne Command and Control Squadron (4th ACCS), 28th Bomb Wing at Ellsworth.
61-0269 Excaliber – EC-135L (originally built as a KC-135A) on static display at the Grissom Air Museum near Peru, Indiana. The aircraft was last assigned to the 305th Air Refueling Wing and retired in 1992, at the end of the Cold War. It was delivered to the Air Force on 8 December 1961. Assigned to Grissom AFB in 1970, the aircraft flew many missions during Operation Just Cause, Operation Desert Shield and Desert Storm. For the latter, it performed radio relay operations leading to the elimination of two Iraqi aircraft, over 60 tank kills, and 27 Scud missile strikes.
61-0287 – EC-135A Airborne Launch Control Center/radio relay link aircraft (originally built as a KC-135A) on static display at Zorinsky Memorial Air Park at Offutt Air Force Base in Bellevue, Nebraska.
61-0327 – on static display at the Museum of Aviation, Robins AFB
63-8049 – EC-135C (originally built as a KC-135B), on display at the Strategic Air Command & Aerospace Museum in Ashland, Nebraska.
63-8057 – EC-135J (originally built as a KC-135B) on static display at the Pima Air and Space Museum in Tucson, Arizona.
See also
Airborne Launch Control System
Airborne Launch Control Center
Post Attack Command and Control System
Operation Looking Glass
Emergency Rocket Communications System
References
Reference for the Variant Summary list: DoD 4120.14L, Model Designation of Military Aerospace Vehicles, May 12, 2004
External links
USSTRATCOM ABNCP Fact Sheet
EC-0135, Boeing
Military communications
United States nuclear command and control
Telemetry
C-135E, Boeing
1960s United States military reconnaissance aircraft
Quadjets
Historic American Engineering Record in Nebraska
Low-wing aircraft
Aircraft first flown in 1965
Aircraft with retractable tricycle landing gear | Boeing EC-135 | [
"Engineering"
] | 4,708 | [
"Military communications",
"Telecommunications engineering"
] |
8,556,260 | https://en.wikipedia.org/wiki/Thermal%20oxidation | In microfabrication, thermal oxidation is a way to produce a thin layer of oxide (usually silicon dioxide) on the surface of a wafer. The technique forces an oxidizing agent to diffuse into the wafer at high temperature and react with it. The rate of oxide growth is often predicted by the Deal–Grove model. Thermal oxidation may be applied to different materials, but most commonly involves the oxidation of silicon substrates to produce silicon dioxide.
The chemical reaction
Thermal oxidation of silicon is usually performed at a temperature between 800 and 1200 °C, resulting in so called High Temperature Oxide layer (HTO). It may use either water vapor (usually UHP steam) or molecular oxygen as the oxidant; it is consequently called either wet or dry oxidation. The reaction is one of the following:
The oxidizing ambient may also contain several percent of hydrochloric acid (HCl). The chlorine neutralizes metal ions that may occur in the oxide.
Thermal oxide incorporates silicon consumed from the substrate and oxygen supplied from the ambient. Thus, it grows both down into the wafer and up out of it. For every unit thickness of silicon consumed, 2.17 unit thicknesses of oxide will appear. If a bare silicon surface is oxidized, 46% of the oxide thickness will lie below the original surface, and 54% above it.
Deal-Grove model
According to the commonly used Deal-Grove model, the time t required to grow an oxide of thickness Xo, at a constant temperature, on a bare silicon surface, is:
t = Xo²/B + Xo/(B/A)
where the constants A and B relate to properties of the reaction and the oxide layer, respectively. This model has further been adapted to account for self-limiting oxidation processes, as used for the fabrication and morphological design of Si nanowires and other nanostructures.
If a wafer that already contains oxide is placed in an oxidizing ambient, this equation must be modified by adding a corrective term τ, the time that would have been required to grow the pre-existing oxide under current conditions. This term may be found using the equation for t above.
Solving the quadratic equation for Xo yields:
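The equation displays for this section are missing from this copy. In the standard Deal–Grove notation, with B the parabolic rate constant and B/A the linear rate constant, the relations referred to above take the form
\[
t = \frac{X_o^{2}}{B} + \frac{X_o}{B/A},
\qquad
X_o^{2} + A\,X_o = B\,(t+\tau),
\qquad
X_o = \frac{A}{2}\left(\sqrt{1 + \frac{4B\,(t+\tau)}{A^{2}}} - 1\right),
\]
where τ accounts for any pre-existing oxide; this is a sketch of the standard model rather than a reproduction of the original displays.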
Oxidation technology
Most thermal oxidation is performed in furnaces, at temperatures between 800 and 1200 °C. A single furnace accepts many wafers at the same time, in a specially designed quartz rack (called a "boat"). Historically, the boat entered the oxidation chamber from the side (this design is called "horizontal"), and held the wafers vertically, beside each other. However, many modern designs hold the wafers horizontally, above and below each other, and load them into the oxidation chamber from below.
Because vertical furnaces stand taller than horizontal furnaces, they may not fit into some microfabrication facilities. However, they help to prevent dust contamination: unlike horizontal furnaces, in which falling dust can contaminate any wafer, vertical furnaces use enclosed cabinets with air filtration systems to keep dust from reaching the wafers.
Vertical furnaces also eliminate an issue that plagued horizontal furnaces: non-uniformity of the grown oxide across the wafer. Horizontal furnaces typically have convection currents inside the tube, which cause the bottom of the tube to be slightly colder than the top. Because the wafers lie vertically in the tube, this convection and the temperature gradient that comes with it cause the top of each wafer to grow a thicker oxide than the bottom. Vertical furnaces solve this problem by holding the wafers horizontally and directing the gas flow from the top of the furnace to the bottom, which significantly damps thermal convection.
Vertical furnaces also allow the use of load locks to purge the wafers with nitrogen before oxidation to limit the growth of native oxide on the Si surface.
Oxide quality
Wet oxidation is preferred to dry oxidation for growing thick oxides, because of the higher growth rate. However, fast oxidation leaves more dangling bonds at the silicon interface, which produce quantum states for electrons and allow current to leak along the interface. (This is called a "dirty" interface.) Wet oxidation also yields a lower-density oxide, with lower dielectric strength.
The long time required to grow a thick oxide in dry oxidation makes this process impractical. Thick oxides are usually grown with a long wet oxidation bracketed by short dry ones (a dry-wet-dry cycle). The beginning and ending dry oxidations produce films of high-quality oxide at the outer and inner surfaces of the oxide layer, respectively.
Mobile metal ions can degrade performance of MOSFETs (sodium is of particular concern). However, chlorine can immobilize sodium by forming sodium chloride. Chlorine is often introduced by adding hydrogen chloride or trichloroethylene to the oxidizing medium. Its presence also increases the rate of oxidation.
Other notes
Thermal oxidation can be performed on selected areas of a wafer, and blocked on others. This process, first developed at Philips, is commonly referred to as the local oxidation of silicon (LOCOS) process. Areas which are not to be oxidized are covered with a film of silicon nitride, which blocks diffusion of oxygen and water vapor and itself oxidizes at a much slower rate. The nitride is removed after oxidation is complete. This process cannot produce sharp features, because lateral (parallel to the surface) diffusion of oxidant molecules under the nitride mask causes the oxide to protrude into the masked area.
Because impurities dissolve differently in silicon and oxide, a growing oxide will selectively take up or reject dopants. This redistribution is governed by the segregation coefficient, which determines how strongly the oxide absorbs or rejects the dopant, and the diffusivity.
The orientation of the silicon crystal affects oxidation. A <100> wafer (see Miller indices) oxidizes more slowly than a <111> wafer, but produces an electrically cleaner oxide interface.
Thermal oxidation of any variety produces a higher-quality oxide, with a much cleaner interface, than chemical vapor deposition of oxide resulting in low temperature oxide layer (reaction of TEOS at about 600 °C). However, the high temperatures required to produce High Temperature Oxide (HTO) restrict its usability. For instance, in MOSFET processes, thermal oxidation is never performed after the doping for the source and drain terminals is performed, because it would disturb the placement of the dopants.
References
Notes
Sources
External links
Online calculator including Deal–Grove and Massoud oxidation models, with pressure and doping effects at: http://www.lelandstanfordjunior.com/thermaloxide.html
Semiconductor technology
Nanomaterials
Materials
Microtechnology
MOSFETs
Nanoelectronics
Silicon | Thermal oxidation | [
"Physics",
"Materials_science",
"Engineering"
] | 1,380 | [
"Microtechnology",
"Materials science",
"Materials",
"Nanoelectronics",
"Nanotechnology",
"Semiconductor technology",
"Nanomaterials",
"Matter"
] |
8,556,497 | https://en.wikipedia.org/wiki/Contraposition | In logic and mathematics, contraposition, or transposition, refers to the inference of going from a conditional statement to its logically equivalent contrapositive, and an associated proof method known as proof by contrapositive. The contrapositive of a statement has its antecedent and consequent negated and swapped.
Conditional statement P → Q. In formulas: the contrapositive of P → Q is ¬Q → ¬P.
If P, Then Q. — If not Q, Then not P. "If it is raining, then I wear my coat" — "If I don't wear my coat, then it isn't raining."
The law of contraposition says that a conditional statement is true if, and only if, its contrapositive is true.
Contraposition (¬Q → ¬P) can be compared with three other operations:
Inversion (the inverse), "If it is not raining, then I don't wear my coat." Unlike the contrapositive, the inverse's truth value is not at all dependent on whether or not the original proposition was true, as evidenced here.
Conversion (the converse), "If I wear my coat, then it is raining." The converse is actually the contrapositive of the inverse, and so always has the same truth value as the inverse (which as stated earlier does not always share the same truth value as that of the original proposition).
Negation (the logical complement), "It is not the case that if it is raining then I wear my coat.", or equivalently, "Sometimes, when it is raining, I don't wear my coat. " If the negation is true, then the original proposition (and by extension the contrapositive) is false.
Note that if P → Q is true and one is given that Q is false (i.e., ¬Q), then it can logically be concluded that P must also be false (i.e., ¬P). This is often called the law of contrapositive, or the modus tollens rule of inference.
Intuitive explanation
In the Euler diagram shown, if something is in A, it must be in B as well. So we can interpret "all of A is in B" as:
A → B
It is also clear that anything that is not within B (the blue region) cannot be within A, either. This statement, which can be expressed as:
¬B → ¬A
is the contrapositive of the above statement. Therefore, one can say that
(A → B) ↔ (¬B → ¬A).
In practice, this equivalence can be used to make proving a statement easier. For example, if one wishes to prove that every girl in the United States (A) has brown hair (B), one can either try to directly prove A → B by checking that all girls in the United States do indeed have brown hair, or try to prove ¬B → ¬A by checking that all girls without brown hair are indeed all outside the US. In particular, if one were to find at least one girl without brown hair within the US, then one would have disproved ¬B → ¬A, and equivalently A → B.
In general, for any statement where A implies B, not B always implies not A. As a result, proving or disproving either one of these statements automatically proves or disproves the other, as they are logically equivalent to each other.
Formal definition
A proposition Q is implicated by a proposition P when the following relationship holds:
P → Q
This states that, "if P, then Q", or, "if Socrates is a man, then Socrates is human." In a conditional such as this, P is the antecedent, and Q is the consequent. One statement is the contrapositive of the other only when its antecedent is the negated consequent of the other, and vice versa. Thus a contrapositive generally takes the form of:
¬Q → ¬P
That is, "If not-Q, then not-P", or, more clearly, "If Q is not the case, then P is not the case." Using our example, this is rendered as "If Socrates is not human, then Socrates is not a man." This statement is said to be contraposed to the original and is logically equivalent to it. Due to their logical equivalence, stating one effectively states the other; when one is true, the other is also true, and when one is false, the other is also false.
Strictly speaking, a contraposition can only exist in two simple conditionals. However, a contraposition may also exist in two complex, universal conditionals, if they are similar. Thus, ∀x(Px → Qx), or "All Ps are Qs," is contraposed to ∀x(¬Qx → ¬Px), or "All non-Qs are non-Ps."
Sequent notation
The transposition rule may be expressed as a sequent:
(P → Q) ⊢ (¬Q → ¬P)
where ⊢ is a metalogical symbol meaning that (¬Q → ¬P) is a syntactic consequence of (P → Q) in some logical system; or as a rule of inference:
P → Q ∴ ¬Q → ¬P
where the rule is that wherever an instance of "P → Q" appears on a line of a proof, it can be replaced with "¬Q → ¬P"; or as the statement of a truth-functional tautology or theorem of propositional logic. The principle was stated as a theorem of propositional logic by Russell and Whitehead in Principia Mathematica as
(P → Q) → (¬Q → ¬P)
where P and Q are propositions expressed in some formal system.
Proofs
Simple proof by definition of a conditional
In first-order logic, the conditional P → Q is defined as:
¬P ∨ Q
which can be made equivalent to its contrapositive, as follows:
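The displayed equivalences are missing from this copy; a sketch of the usual chain, using only the definition above together with double negation and commutativity of disjunction, is:
\[
\begin{aligned}
\neg Q \to \neg P \;&\equiv\; \neg(\neg Q) \lor \neg P && \text{definition of the conditional} \\
&\equiv\; Q \lor \neg P && \text{double negation} \\
&\equiv\; \neg P \lor Q && \text{commutativity of } \lor \\
&\equiv\; P \to Q && \text{definition of the conditional}
\end{aligned}
\]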
Simple proof by contradiction
Let: A → B and ¬B.
It is given that, if A is true, then B is true, and it is also given that B is not true. We can then show that A must not be true by contradiction. For if A were true, then B would have to also be true (by Modus Ponens). However, it is given that B is not true, so we have a contradiction. Therefore, A is not true (assuming that we are dealing with bivalent statements that are either true or false):
We can apply the same process the other way round, starting with the assumptions that ¬B → ¬A and that A is true:
Here, we also know that B is either true or not true. If B is not true, then A is also not true. However, it is given that A is true, so the assumption that B is not true leads to a contradiction, which means that it is not the case that B is not true. Therefore, B must be true:
Combining the two proved statements together, we obtain the sought-after logical equivalence between a conditional and its contrapositive: (A → B) ≡ (¬B → ¬A).
More rigorous proof of the equivalence of contrapositives
Logical equivalence between two propositions means that they are true together or false together. To prove that contrapositives are logically equivalent, we need to understand when material implication is true or false.
This is only false when P is true and Q is false. Therefore, we can reduce this proposition to the statement "False when P and not-Q" (i.e. "True when it is not the case that P and not-Q"):
¬(P ∧ ¬Q)
The elements of a conjunction can be reversed with no effect (by commutativity):
¬(¬Q ∧ P)
We define R as equal to ¬Q, and S as equal to ¬P (from this, ¬S is equal to ¬¬P, which is equal to just P):
¬(R ∧ ¬S)
This reads "It is not the case that (R is true and S is false)", which is the definition of a material conditional. We can then make this substitution:
R → S
By reverting R and S back into ¬Q and ¬P, we then obtain the desired contrapositive:
¬Q → ¬P
In classical propositional calculus system
In Hilbert-style deductive systems for propositional logic, only one side of the transposition is taken as an axiom, and the other is a theorem. We describe a proof of this theorem in the system of three axioms proposed by Jan Łukasiewicz:
A1. p → (q → p)
A2. (p → (q → r)) → ((p → q) → (p → r))
A3. (¬p → ¬q) → (q → p)
(A3) already gives one of the directions of the transposition. The other side, (p → q) → (¬q → ¬p), is proven below, using the following lemmas proven here:
(DN1) - Double negation (one direction)
(DN2) - Double negation (another direction)
(HS1) - one form of Hypothetical syllogism
(HS2) - another form of Hypothetical syllogism.
We also use the method of the hypothetical syllogism metatheorem as a shorthand for several proof steps.
The proof is as follows:
(instance of the (DN2))
(instance of the (HS1)
(from (1) and (2) by modus ponens)
(instance of the (DN1))
(instance of the (HS2))
(from (4) and (5) by modus ponens)
(from (3) and (6) using the hypothetical syllogism metatheorem)
(instance of (A3))
(from (7) and (8) using the hypothetical syllogism metatheorem)
Comparisons
Examples
Take the statement "All red objects have color." This can be equivalently expressed as "If an object is red, then it has color."
The contrapositive is "If an object does not have color, then it is not red." This follows logically from our initial statement and, like it, it is evidently true.
The inverse is "If an object is not red, then it does not have color." An object which is blue is not red, and still has color. Therefore, in this case the inverse is false.
The converse is "If an object has color, then it is red." Objects can have other colors, so the converse of our statement is false.
The negation is "There exists a red object that does not have color." This statement is false because the initial statement which it negates is true.
In other words, the contrapositive is logically equivalent to a given conditional statement, though not sufficient for a biconditional.
Similarly, take the statement "All quadrilaterals have four sides," or equivalently expressed "If a polygon is a quadrilateral, then it has four sides."
The contrapositive is "If a polygon does not have four sides, then it is not a quadrilateral." This follows logically, and as a rule, contrapositives share the truth value of their conditional.
The inverse is "If a polygon is not a quadrilateral, then it does not have four sides." In this case, unlike the last example, the inverse of the statement is true.
The converse is "If a polygon has four sides, then it is a quadrilateral." Again, in this case, unlike the last example, the converse of the statement is true.
The negation is "There is at least one quadrilateral that does not have four sides." This statement is clearly false.
Since the statement and the converse are both true, it is called a biconditional, and can be expressed as "A polygon is a quadrilateral if, and only if, it has four sides." (The phrase if and only if is sometimes abbreviated as iff.) That is, having four sides is both necessary to be a quadrilateral, and alone sufficient to deem it a quadrilateral.
Truth
If a statement is true, then its contrapositive is true (and vice versa).
If a statement is false, then its contrapositive is false (and vice versa).
If a statement's inverse is true, then its converse is true (and vice versa).
If a statement's inverse is false, then its converse is false (and vice versa).
If a statement's negation is false, then the statement is true (and vice versa).
If a statement (or its contrapositive) and the inverse (or the converse) are both true or both false, then it is known as a logical biconditional.
Traditional logic
In traditional logic, contraposition is a form of immediate inference in which a proposition is inferred from another and where the former has for its subject the contradictory of the original logical proposition's predicate. In some cases, contraposition involves a change of the former's quality (i.e. affirmation or negation). For its symbolic expression in modern logic, see the rule of transposition. Contraposition also has philosophical application distinct from the other traditional inference processes of conversion and obversion where equivocation varies with different proposition types.
In traditional logic, the process of contraposition is a schema composed of several steps of inference involving categorical propositions and classes. A categorical proposition contains a subject and predicate where the existential impact of the copula implies the proposition as referring to a class with at least one member, in contrast to the conditional form of hypothetical or materially implicative propositions, which are compounds of other propositions, e.g. "If P, then Q" (P and Q are both propositions), and their existential impact is dependent upon further propositions where quantification existence is instantiated (existential instantiation), not on the hypothetical or materially implicative propositions themselves.
Full contraposition is the simultaneous interchange and negation of the subject and predicate, and is valid only for the type "A" and type "O" propositions of Aristotelian logic, while it is conditionally valid for "E" type propositions if a change in quantity from universal to particular is made (partial contraposition). Since the valid obverse is obtained for all the four types (A, E, I, and O types) of traditional propositions, yielding propositions with the contradictory of the original predicate, (full) contraposition is obtained by converting the obvert of the original proposition. For "E" statements, partial contraposition can be obtained by additionally making a change in quantity. Because nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it can be either the original subject, or its contradictory, resulting in two contrapositives which are the obverts of one another in the "A", "O", and "E" type propositions.
By example: from an original, 'A' type categorical proposition,
All residents are voters,
which presupposes that all classes have members and the existential import presumed in the form of categorical propositions, one can derive first by obversion the 'E' type proposition,
No residents are non-voters.
The contrapositive of the original proposition is then derived by conversion to another 'E' type proposition,
No non-voters are residents.
The process is completed by further obversion resulting in the 'A' type proposition that is the obverted contrapositive of the original proposition,
All non-voters are non-residents.
The schema of contraposition:
Notice that contraposition is a valid form of immediate inference only when applied to "A" and "O" propositions. It is not valid for "I" propositions, where the obverse is an "O" proposition which has no valid converse. The contraposition of the "E" proposition is valid only with limitations (per accidens). This is because the obverse of the "E" proposition is an "A" proposition which cannot be validly converted except by limitation, that is, contraposition plus a change in the quantity of the proposition from universal to particular.
Also, notice that contraposition is a method of inference which may require the use of other rules of inference. The contrapositive is the product of the method of contraposition, with different outcomes depending upon whether the contraposition is full, or partial. The successive applications of conversion and obversion within the process of contraposition may be given by a variety of names.
The process of the logical equivalence of a statement and its contrapositive as defined in traditional class logic is not one of the axioms of propositional logic. In traditional logic there is more than one contrapositive inferred from each original statement. In regard to the "A" proposition this is circumvented in the symbolism of modern logic by the rule of transposition, or the law of contraposition. In its technical usage within the field of philosophic logic, the term "contraposition" may be limited by logicians (e.g. Irving Copi, Susan Stebbing) to traditional logic and categorical propositions. In this sense the use of the term "contraposition" is usually referred to by "transposition" when applied to hypothetical propositions or material implications.
Form of transposition
In the inferred proposition, the consequent is the contradictory of the antecedent in the original proposition, and the antecedent of the inferred proposition is the contradictory of the consequent of the original proposition. The symbol for material implication signifies the proposition as a hypothetical, or the "if–then" form, e.g. "if P, then Q".
The biconditional statement of the rule of transposition (↔) refers to the relation between hypothetical (→) propositions, with each proposition including an antecedent and a consequent term. As a matter of logical inference, to transpose or convert the terms of one proposition requires the conversion of the terms of the propositions on both sides of the biconditional relationship, meaning that transposing or converting (P → Q) to (Q → P) requires that the other proposition, (¬Q → ¬P), be transposed or converted to (¬P → ¬Q). Otherwise, converting the terms of one proposition and not the other renders the rule invalid, violating the sufficient condition and necessary condition of the terms of the propositions, where the violation is that the changed proposition commits the fallacy of denying the antecedent or affirming the consequent by means of illicit conversion.
The truth of the rule of transposition is dependent upon the relations of sufficient condition and necessary condition in logic.
Sufficient condition
In the proposition "If P, then Q", the occurrence of P is sufficient reason for the occurrence of Q. P, as an individual or a class, materially implicates Q, but the relation of Q to P is such that the converse proposition "If Q, then P" does not necessarily have sufficient condition. The rule of inference for sufficient condition is modus ponens, which is an argument for conditional implication:
Premise (1): If P, then Q
Premise (2): P
Conclusion: Therefore, Q
Necessary condition
Since the converse of premise (1) is not valid, all that can be stated of the relationship of P and Q is that in the absence of Q, P does not occur, meaning that Q is the necessary condition for P. The rule of inference for necessary condition is modus tollens:
Premise (1): If P, then Q
Premise (2): not Q
Conclusion: Therefore, not P
Necessity and sufficiency example
An example traditionally used by logicians contrasting sufficient and necessary conditions is the statement "If there is fire, then oxygen is present". An oxygenated environment is necessary for fire or combustion, but simply because there is an oxygenated environment does not necessarily mean that fire or combustion is occurring. While one can infer that fire stipulates the presence of oxygen, from the presence of oxygen the converse "If there is oxygen present, then fire is present" cannot be inferred. All that can be inferred from the original proposition is that "If oxygen is not present, then there cannot be fire".
Relationship of propositions
The symbol for the biconditional ("↔") signifies the relationship between the propositions is both necessary and sufficient, and is verbalized as "if and only if", or, according to the example "If P, then Q 'if and only if' if not Q, then not P".
Necessary and sufficient conditions can be explained by analogy in terms of the concepts and the rules of immediate inference of traditional logic. In the categorical proposition "All S is P", the subject term S is said to be distributed, that is, all members of its class are exhausted in its expression. Conversely, the predicate term P cannot be said to be distributed, or exhausted in its expression because it is indeterminate whether every instance of a member of P as a class is also a member of S as a class. All that can be validly inferred is that "Some P are S". Thus, the type "A" proposition "All P is S" cannot be inferred by conversion from the original type "A" proposition "All S is P". All that can be inferred is the type "A" proposition "All non-P is non-S" (note that (P → Q) and (¬Q → ¬P) are both type "A" propositions). Grammatically, one cannot infer "all mortals are men" from "All men are mortal". A type "A" proposition can only be immediately inferred by conversion when both the subject and predicate are distributed, as in the inference "All bachelors are unmarried men" from "All unmarried men are bachelors".
Distinguished from transposition
While most authors use the terms for the same thing, some authors distinguish transposition from contraposition. In traditional logic the reasoning process of transposition as a rule of inference is applied to categorical propositions through contraposition and obversion, a series of immediate inferences where the rule of obversion is first applied to the original categorical proposition "All S is P"; yielding the obverse "No S is non-P". In the obversion of the original proposition to a type "E" proposition, both terms become distributed. The obverse is then converted, resulting in "No non-P is S", maintaining distribution of both terms. The "No non-P is S" is again obverted, resulting in the [contrapositive] "All non-P is non-S". Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory, and the predicate term of the resulting type "A" proposition is again undistributed. This results in two contrapositives, one where the predicate term is distributed, and another where the predicate term is undistributed.
Contraposition is a type of immediate inference in which from a given categorical proposition another categorical proposition is inferred which has as its subject the contradictory of the original predicate. Since nothing is said in the definition of contraposition with regard to the predicate of the inferred proposition, it is permissible that it could be the original subject or its contradictory. This is in contradistinction to the form of the propositions of transposition, which may be material implication, or a hypothetical statement. The difference is that in its application to categorical propositions the result of contraposition is two contrapositives, each being the obvert of the other, i.e. "No non-P is S" and "All non-P is non-S". The distinction between the two contrapositives is absorbed and eliminated in the principle of transposition, which presupposes the "mediate inferences" of contraposition and is also referred to as the "law of contraposition".
Proof by contrapositive
Because the contrapositive of a statement always has the same truth value (truth or falsity) as the statement itself, it can be a powerful tool for proving mathematical theorems (especially if the truth of the contrapositive is easier to establish than the truth of the statement itself). A proof by contrapositive is a direct proof of the contrapositive of a statement. However, indirect methods such as proof by contradiction can also be used with contraposition, as, for example, in the proof of the irrationality of the square root of 2. By the definition of a rational number, the statement can be made that "If √2 is rational, then it can be expressed as an irreducible fraction". This statement is true because it is a restatement of a definition. The contrapositive of this statement is "If √2 cannot be expressed as an irreducible fraction, then it is not rational". This contrapositive, like the original statement, is also true. Therefore, if it can be proven that √2 cannot be expressed as an irreducible fraction, then it must be the case that √2 is not a rational number. The latter can be proved by contradiction.
The previous example employed the contrapositive of a definition to prove a theorem. One can also prove a theorem by proving the contrapositive of the theorem's statement. To prove that if a positive integer N is a non-square number, its square root is irrational, we can equivalently prove its contrapositive, that if a positive integer N has a square root that is rational, then N is a square number. This can be shown by setting √N equal to the rational expression a/b, with a and b being positive integers with no common prime factor, and squaring to obtain N = a²/b², and noting that, since N is a positive integer and a/b is in lowest terms, b = 1, so that N = a², a square number.
In mathematics, proof by contrapositive, or proof by contraposition, is a rule of inference used in proofs, where one infers a conditional statement from its contrapositive. In other words, the conclusion "if A, then B" is inferred by constructing a proof of the claim "if not B, then not A" instead. More often than not, this approach is preferred if the contrapositive is easier to prove than the original conditional statement itself.
Logically, the validity of proof by contrapositive can be demonstrated by the use of the following truth table, where it is shown that p → q and ¬q → ¬p share the same truth values in all scenarios:
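The truth table itself is not reproduced in this copy; a short script of the following kind (a sketch using Python's Boolean operators for the material conditional) checks the claim exhaustively over all four assignments:

```python
# Verify that p -> q and (not q) -> (not p) agree on every truth assignment.
from itertools import product

def implies(a, b):
    return (not a) or b  # material conditional

for p, q in product([True, False], repeat=2):
    direct = implies(p, q)
    contrapositive = implies(not q, not p)
    print(p, q, direct, contrapositive)
    assert direct == contrapositive  # identical in all four rows
```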
Difference with proof by contradiction
Proof by contradiction: Assume (for contradiction) that ¬A is true. Use this assumption to prove a contradiction. It follows that ¬A is false, so A is true.
Proof by contrapositive: To prove A → B, prove its contrapositive statement, which is ¬B → ¬A.
Example
Let x be an integer.
To prove: If x² is even, then x is even.
Although a direct proof can be given, we choose to prove this statement by contraposition. The contrapositive of the above statement is: If x is not even, then x² is not even. This latter statement can be proven as follows: suppose that x is not even, then x is odd. The product of two odd numbers is odd, hence x² = x · x is odd. Thus x² is not even.
Having proved the contrapositive, we can then infer that the original statement is true.
In nonclassical logics
Intuitionistic logic
In intuitionistic logic, the statement P → Q cannot be proven to be equivalent to ¬Q → ¬P. We can prove that P → Q implies ¬Q → ¬P (see below) without additional assumptions, but the reverse implication, from ¬Q → ¬P to P → Q, requires knowing ¬¬Q → Q (double-negation elimination), which follows from the law of the excluded middle or an equivalent axiom.
Assume (initial assumption)
Assume
From and , conclude
Discharge assumption; conclude
Turning into , conclude
Discharge assumption; conclude .
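The individual formulas in the proof steps above are missing from this copy. As a machine-checkable illustration of the provable direction, the following Lean 4 snippet (the theorem name is ours) proves (P → Q) → (¬Q → ¬P) constructively, without the law of the excluded middle:

```lean
-- In Lean, ¬Q unfolds to Q → False, so the proof simply composes the hypotheses:
-- from hp : P we get h hp : Q, and hnq turns that into False.
theorem contrapose_of_imp {P Q : Prop} (h : P → Q) : ¬Q → ¬P :=
  fun hnq hp => hnq (h hp)
```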
Subjective logic
Contraposition represents an instance of the subjective Bayes' theorem in subjective logic expressed as:
where denotes a pair of binomial conditional opinions given by source . The parameter denotes the base rate (aka. the prior probability) of . The pair of derivative inverted conditional opinions is denoted . The conditional opinion generalizes the logical statement , i.e. in addition to assigning TRUE or FALSE the source can assign any subjective opinion to the statement. The case where is an absolute TRUE opinion is equivalent to source saying that is TRUE, and the case where is an absolute FALSE opinion is equivalent to source saying that is FALSE. In the case when the conditional opinion is absolute TRUE the subjective Bayes' theorem operator of subjective logic produces an absolute FALSE derivative conditional opinion and thereby an absolute TRUE derivative conditional opinion which is equivalent to being TRUE. Hence, the subjective Bayes' theorem represents a generalization of both contraposition and Bayes' theorem.
In probability theory
Contraposition represents an instance of Bayes' theorem which in a specific form can be expressed as:
In the equation above the conditional probability generalizes the logical statement , i.e. in addition to assigning TRUE or FALSE we can also assign any probability to the statement. The term denotes the base rate (aka. the prior probability) of . Assume that is equivalent to being TRUE, and that is equivalent to being FALSE. It is then easy to see that when i.e. when is TRUE. This is because so that the fraction on the right-hand side of the equation above is equal to 1, and hence which is equivalent to being TRUE. Hence, Bayes' theorem represents a generalization of contraposition.
See also
Reductio ad absurdum
Logical equivalence
References
Sources
Audun Jøsang, 2016, Subjective Logic; A formalism for Reasoning Under Uncertainty Springer, Cham,
Blumberg, Albert E. "Logic, Modern". Encyclopedia of Philosophy, Vol.5, Macmillan, 1973.
Brody, Bobuch A. "Glossary of Logical Terms". Encyclopedia of Philosophy. Vol. 5-6, p. 61. Macmillan, 1973.
Copi, Irving. Introduction to Logic. MacMillan, 1953.
Copi, Irving. Symbolic Logic. MacMillan, 1979, fifth edition.
Prior, A.N. "Logic, Traditional". Encyclopedia of Philosophy, Vol.5, Macmillan, 1973.
Stebbing, Susan. A Modern Introduction to Logic. Cromwell Company, 1931.
External links
Improper Transposition (Fallacy Files)
Mathematical logic
Theorems in propositional logic | Contraposition | [
"Mathematics"
] | 6,196 | [
"Mathematical logic",
"Theorems in propositional logic",
"Theorems in the foundations of mathematics"
] |
8,556,691 | https://en.wikipedia.org/wiki/Alarm%20filtering | Alarm filtering, in the context of IT network management, is the method by which an alarm system reports the origin of a system failure, rather than a list of systems failed.
Example
Depending on the way a network is set up, the failure of one device (be it software or hardware) may cause another to fail. In this situation, a non-filtering alarm system will report both the original failure and the other device that failed. With alarm filtering, the alarm system is able to report the original failure with more priority than the subsequent failure, allowing a technician or repairman to concentrate on the cause of the issue, rather than wasting time trying to repair the wrong device.
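A minimal sketch of this idea is shown below; the device names and the hard-coded dependency map are illustrative only, since real network-management systems typically derive dependencies from topology discovery:

```python
# Map each device to the upstream device it depends on (None = no dependency).
DEPENDS_ON = {
    "core-router": None,
    "edge-switch": "core-router",
    "web-server": "edge-switch",
}

def filter_alarms(failed_devices):
    """Split raised alarms into root causes and suppressed (secondary) alarms."""
    failed = set(failed_devices)
    root_causes, secondary = [], []
    for device in failed_devices:
        upstream = DEPENDS_ON.get(device)
        # Walk up the dependency chain; if any ancestor also failed,
        # this alarm is a consequence rather than a root cause.
        is_secondary = False
        while upstream is not None:
            if upstream in failed:
                is_secondary = True
                break
            upstream = DEPENDS_ON.get(upstream)
        (secondary if is_secondary else root_causes).append(device)
    return root_causes, secondary

roots, suppressed = filter_alarms(["web-server", "edge-switch", "core-router"])
print("Report first:", roots)       # ['core-router']
print("Suppressed:", suppressed)    # ['web-server', 'edge-switch']
```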
References
Network management | Alarm filtering | [
"Technology",
"Engineering"
] | 140 | [
"Computing stubs",
"Computer networks engineering",
"Network management",
"Computer network stubs"
] |
8,556,886 | https://en.wikipedia.org/wiki/Albarello | An albarello (a name of Italian descent meaning "cell"; plural: albarelli), also known as a "majolica drug jar" because the type of tin glaze used is known as majolica (also spelled maiolica), is a cylindrical maiolica earthenware jar used for many purposes, most commonly for drug storage as a medicinal jar. The jar was also used for other purposes such as storing dried fruit, herbs, balms, and apothecaries' ointments and dry drugs.
Function
People usually stored their albarelli in buildings with medical purposes, like pharmacies, hospitals, and doctors' offices. Such jars served both functional and decorative purposes in traditional apothecaries and pharmacies, and represented status and wealth. The jars were generally sealed with a piece of parchment or leather tied with a piece of cord. Hospitals often used albarelli to hold products such as ointments, balms, and different remedies for patients. Albarelli were also utilized in ways other than its originally intended purpose: they were used to hold perfume, they could function as a form of décor, and as vases to display flowers.
Etymology
According to the Oxford English Dictionary, the Italian word albarello means "a decorated ceramic pharmacy jar of a cylindrical shape with a slight constriction halfway up" (first used in 1344 as alberello, and then later, in the late 15th century, as albarello), but the term's exact origin and etymology remain a topic of debate. Some scholars argue that it derives either from the post-classical Latin word albarus, meaning "white poplar", or the Classical Latin word albus, meaning white. However, it is important to note that the poplar tree itself is not being compared to the jar, as the Italian usage of the word albarello for a white poplar tree came long after the naming of the albarello jar. The issue has been further muddled because some have claimed that these jars were originally manufactured in wood, even though there is no physical evidence of this in recovered materials and surviving jars (Angelico Prati, Vocabolario etimologico italiano, Garzanti, 1951). Another possible origin of the term lies in the related Classical Latin word albaris ("whitish") and the Classical Latin term albarius, referencing its relation to a whitewashing vessel. It is important to note that both albaris and albarius are only recorded as adjectives "of or relating to stucco" and therefore have no recorded usage in relation to pottery or pottery glazes.
Importation from China and the Middle East
The origin of this type of pharmacy jar has its roots in the Middle East during the time of the Islamic conquests. The term majolica specifically refers to a type of tin glaze that originated in the Near East along Islamic trade routes, showcasing the strong influence of Islamic material culture. The characteristic shape of the albarello also has its roots in the East. Additionally, Chinese porcelain and its manufacturing played a significant role in influencing the development and spread of albarelli across Europe. The influence of the Islamic empire, coupled with the manufacturing of Chinese porcelain, made for mass manufacture and subsequent exportation of albarelli to several cultures, including those in Italy, China, and Spain, which in turn re-purposed them for differing needs.
Spanning out of the East from the Islamic empires and China, albarelli were first introduced to Europe in Syria and Spain and then brought to Italy by Muslim Arab traders during the height of the Italian Renaissance, where their shape and purpose were adopted. By importing majolica from Spain and Syria, Italian artists began producing versions of their own that differed from the traditional Islamic albarelli, with the addition of handles to heighten the functionality of the jar and the introduction of new designs including "a trofei" (with trophies), "a foglie" (leafy designs), and "a frutti" (decorated with fruits). While the styles in both Spain and Italy kept developing, a clear influence could still be traced back to traditional Chinese and Islamic ceramic designs despite the changes in style. Eventually, Syrian manufacture of the jars led to them being described in Italian as "porcellana domaschina" (damascene porcelain), to indicate that the blue-and-white lusterware was made in Damascus and was not the authentic Chinese porcelain that had previously been imported into Europe.
Characteristics
Albarelli are known for having a cylindrical shape without handles and a thin neck to make them easy to handle and move. Albarelli are usually made out of majolica, which has helped historians decipher the history between the albarelli, due to it being such a resistant and strong material. In later forms after it spread throughout Europe, their design included the addition of handles that were adopted by the Italians after the 15th century. On average, albarelli were recorded to be 7-8 inches in height. Instead of having a fitted lid to cover the opening at the top, people would use a fitted paper to lay on top to seal it.
Geographic styles
Spanish Renaissance
Albarelli were introduced to Spain from the East and were adopted quickly. Spanish examples are known to have the same cylindrical shape, with a narrower opening and an indented base where artists usually signed their names. Albarelli were also labelled through the symbols on the outside and through parchment paper placed in the vase.
Italian Renaissance
The earliest Italian examples were produced in Florence in the 15th century. Albarelli were made in Italy from the first half of the 15th century through to the late 18th century and beyond. Italian-based albarelli are commonly measured to be 7-8 inches in height. Italian albarelli adopted their shape from lustreware from Islamic Spain. Unlike English albarelli, Italian-based pots had flat edges on the rims to account for the placement of a paper cover that functioned as a lid. Much Oriental inspiration went into making these jars, seen in the blue and white colors on the majority of the pots, also referred to as alla porcellana, meaning in the form of imported Chinese porcelain, as well as in the designs "a trofei" (with trophies), "a foglie" (leafy designs), and "a frutti" (decorated with fruits). Further designs range from floral motifs against a white background to more elaborate designs such as portraits of a cherub or priest, and can include a label describing the contents of the jar. Specific styles of decoration are now associated with various Italian locations, including Florence, Venice, Gerace and Palermo in Sicily.
Gallery
Middle Eastern and Islamic Styles
Spanish Styles
Italian Styles
See also
Blue albarellos of the Esteve Pharmacy
Islamic world contributions to Medieval Europe
References
History of ceramics
Italian pottery
Ceramic art
Pharmacy
Ceramics of medieval Europe
Pottery shapes | Albarello | [
"Chemistry"
] | 1,408 | [
"Pharmacology",
"Pharmacy"
] |
8,557,344 | https://en.wikipedia.org/wiki/Drude%20particle | Drude particles are model oscillators used to simulate the effects of electronic polarizability in the context of a classical molecular mechanics force field. They are inspired by the Drude model of mobile electrons and are used in the computational study of proteins, nucleic acids, and other biomolecules.
Classical Drude oscillator
Most force fields in current practice represent individual atoms as point particles interacting according to the laws of Newtonian mechanics. To each atom, a single electric charge is assigned that doesn't change during the course of the simulation. However, such models cannot have induced dipoles or other electronic effects due to a changing local environment.
Classical Drude particles are massless virtual sites carrying a partial electric charge, attached to individual atoms via a harmonic spring. The spring constant and relative partial charges on the atom and associated Drude particle determine its response to the local electrostatic field, serving as a proxy for the changing distribution of the electronic charge of the atom or molecule. However, this response is limited to a changing dipole moment. This response is not enough to model interactions in environments with large field gradients, which interact with higher order moments.
Efficiency of simulation
The major computational cost of simulating classical Drude oscillators is the calculation of the local electrostatic field and the repositioning of the Drude particle at each step. Traditionally, this repositioning is done self consistently. This cost can be reduced by assigning a small mass to each Drude particle, applying a Lagrangian transformation and evolving the simulation in the generalised coordinates. This method of simulation has been used to create water models incorporating classical Drude oscillators.
Quantum Drude oscillator
Since the response of a classical Drude oscillator is limited, it is not enough to model interactions in heterogeneous media with large field gradients, where higher order electronic responses have significant contributions to the interaction energy. A quantum Drude oscillator (QDO) is a natural extension to the classical Drude oscillator. Instead of a classical point particle serving as a proxy for the charge distribution, a QDO uses a quantum harmonic oscillator, in the form of a pseudoelectron connected to an oppositely charged pseudonucleus by a harmonic spring.
A QDO has three free parameters: the spring's frequency , the pseudoelectron's charge and the system's reduced mass . The ground state of a QDO is a gaussian of width . Adding an external field perturbs the ground state of a QDO, which allows us to calculate its polarizability. To second order, the change in energy relative to the ground state is given by the following series:
where the polarizabilities are
Furthermore, since QDOs are quantum mechanical objects, their electrons can correlate, giving rise to dispersion forces between them. The second order change in energy corresponding to such an interaction is:
with the first three dispersion coefficients being (in the case of identical QDOs):
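The formula displays in this section are missing from this copy. As a numerical illustration, the sketch below evaluates only the standard dipole-level results for a QDO, namely the polarizability α₁ = q²/(μω²) and, for two identical oscillators, C₆ = (3/4)ħωα₁²; the higher-order coefficients mentioned in the text are omitted, and the parameter values are illustrative (atomic units):

```python
# Dipole polarizability and C6 dispersion coefficient of a quantum Drude oscillator.
hbar = 1.0  # atomic units

def qdo_dipole_polarizability(q, mu, omega):
    return q**2 / (mu * omega**2)

def qdo_c6_identical(q, mu, omega):
    # London-type result for two identical oscillators.
    alpha1 = qdo_dipole_polarizability(q, mu, omega)
    return 0.75 * hbar * omega * alpha1**2

# Hypothetical QDO parameters (charge, reduced mass, frequency), not fitted to any atom.
q, mu, omega = 1.0, 1.0, 0.5
print(qdo_dipole_polarizability(q, mu, omega))  # 4.0
print(qdo_c6_identical(q, mu, omega))           # 6.0
```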
Since the response coefficients of QDOs depend on three parameters only, they are all related. Thus, these response coefficients can combine into four dimensionless constants, all equal to unity:
The QDO representation of atoms is the basis of the many-body dispersion model, which is a popular way to account for dispersion (van der Waals) forces in molecular dynamics simulations. This representation allows describing processes such as biological ion transport, the permeation of small drug molecules across hydrophobic cell membranes, and the behavior of proteins in solution.
References
Computational chemistry
Oscillation | Drude particle | [
"Physics",
"Chemistry"
] | 733 | [
"Theoretical chemistry",
"Computational chemistry",
"Mechanics",
"Oscillation"
] |
8,557,559 | https://en.wikipedia.org/wiki/C19H28O2 | The molecular formula C19H28O2 (molar mass: 288.42 g/mol, exact mass: 288.20893) may refer to:
Androstanedione
1-Androsterone
Benorterone, an antiandrogen
Dehydroandrosterone
Dehydroepiandrosterone, a neurosteroid
4-Dehydroepiandrosterone
Epitestosterone
Etiocholanedione
Hexahydrocannabivarin
Methylestrenolone
11β-Methyl-19-nortestosterone
Prasterone
Testosterone
1-Testosterone, an anabolic steroid
Trestolone | C19H28O2 | [
"Chemistry"
] | 155 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
8,557,763 | https://en.wikipedia.org/wiki/Albert%20Sauveur | Albert Sauveur (; 21 June 1863 – 26 January 1939) was a Belgian-born American metallurgist. He founded the first metallographic laboratory in a university.
Sauveur was born in Leuven, Belgium. He studied at the Athénée Royal in Brussels, then the School of Mines, Liège and graduated at the Massachusetts Institute of Technology in 1889. He remained in the United States thereafter, becoming a Professor of Metallurgy in 1905.
After several years working in industry, where he pioneered the use of microscopes to study the internal structure of steel, Sauveur joined Harvard University as an Instructor in Metallurgy, becoming Professor of Metallurgy in 1905. From 1924 to 1939, he held the Gordon McKay Professorship of Mining and Metallurgy at the university. Since 1939, ASM International has bestowed the Albert Sauveur Achievement Award for achievements in materials science and engineering.
He was a member of the US National Academy of Sciences, the American Academy of Arts and Sciences, the American Philosophical Society, the Iron and Steel Institute of Great Britain, and the Iron and Steel Institute of America. He was awarded the Elliott Cresson Medal in 1913 and the Franklin Medal in 1939, both from The Franklin Institute, as well as the Bessemer Gold Medal of the British Iron and Steel Institute in 1924.
He died in Boston, Massachusetts, in 1939, survived by his wife and two daughters.
Works
"Microstructure of Steel" presented to the American Institute of Mining Engineers at the Engineering Congress of Chicago (1893).
"Microstructure of Steel and the Current Theories of Hardening" Transactions of The American Institute of Mining Engineers (1896)
"Metallographist" a quarterly publication (1898-1905)
"Metallography and Heat Treatment of Iron and Steel" (1912)
"Metallurgical Dialogue" copyrighted in 1935 by Albert Sauveur and published by the American Society for Metals.
"Metallurgical Reminiscences" copyrighted in 1937 by the American Institute of Mining and Metallurgical Engineers.
Both works were later reproduced in "Metallurgical Reminiscences And Dialogue", copyrighted in 1981 by the American Society for Metals (Library of Congress Catalog Card No. 81-70044).
References
1863 births
1939 deaths
Scientists from Leuven
American metallurgists
Belgian emigrants to the United States
Massachusetts Institute of Technology alumni
Harvard University faculty
Bessemer Gold Medal
Members of the United States National Academy of Sciences
Fellows of the American Academy of Arts and Sciences
Members of the American Philosophical Society
Recipients of Franklin Medal | Albert Sauveur | [
"Chemistry"
] | 532 | [
"Bessemer Gold Medal",
"Chemical engineering awards"
] |
8,557,925 | https://en.wikipedia.org/wiki/Paracone | A paracone is a 1960s atmospheric reentry or spaceflight mission abort concept using an inflatable ballistic cone.
A notable feature of the paracone concept is that it facilitates an abort throughout the entire flight profile.
Gallery
See also
Space Shuttle abort modes, which do not include use of the paracone concept.
Escape pod
Escape crew capsule
MOOSE
References
Space technology
Proposed spacecraft | Paracone | [
"Astronomy"
] | 82 | [
"Outer space",
"Space technology",
"Astronomy stubs",
"Spacecraft stubs"
] |
8,558,131 | https://en.wikipedia.org/wiki/Benzene%20%28data%20page%29 | This page provides supplementary chemical data on benzene.
Material Safety Data Sheet
Handling this chemical may require notable safety precautions. It is highly recommended to obtain the Material Safety Data Sheet (MSDS) for this chemical from a reliable source such as SIRI, and to follow its directions. An MSDS for benzene is available from AMOCO.
Structure and properties
Thermodynamic properties
Vapor pressure of liquid
Table data obtained from CRC Handbook of Chemistry and Physics 44th ed. Note: (s) notation indicates equilibrium temperature of vapor over solid, otherwise value is equilibrium temperature of vapor over liquid.
Distillation data
Spectral data
Safety data
Material Safety Data Sheet for benzene:
References
Chemical data pages
Data page
Chemical data pages cleanup | Benzene (data page) | [
"Chemistry"
] | 149 | [
"Chemical data pages",
"nan"
] |
8,558,140 | https://en.wikipedia.org/wiki/Private%20speech | Private speech is speech spoken to oneself. It can be done for communication, self-guidance, and behavioral self-regulation. Children have been observed engaging in private speech between ages two and seven. Although audible, private speech is neither intended for, nor directed at others. Private speech was first studied by Lev Vygotsky and Jean Piaget. In the past 30 years, private speech has received renewed attention from researchers. Researchers have noted a positive correlation between children's use of private speech and their task performance and achievement, a fact also noted previously by Vygotsky. It is when children begin school that their use of private speech decreases and "goes underground".
History and theory
Private speech is typically observed in children from about two to seven years old. Private speech or "self-talk" is observed speech spoken to oneself for communication, self-guidance, and self-regulation of behaviour. Private speech is often thought to enhance developing early literacy skills and help increase a child's task performance, success, and achievement. Numerous sources trace the first theories of private speech back to two early well-known developmental psychologists, Vygotsky and Piaget. Both of these psychologists mainly studied private speech in young children, but they had differing perspectives and terminology.
In 1923, Piaget published The Language and Thought of the Child. In this book he recorded his observations of children talking to themselves in classrooms and termed it as "egocentric speech", the earliest concept of private speech. For Piaget, egocentric speech was a sign of cognitive immaturity. He thought egocentric speech would later develop into a fully mature and effective speech after a child gains a fair amount of cognitive and communicative skills.
In Thought and Language, Vygotsky argued that egocentric speech was a part of normal development of communication, self-guidance, self-regulation of behaviour, planning, pacing, and monitoring skills. Vygotsky explains that private speech stems from a child's social interactions as a toddler, then reaches a peak during preschool or kindergarten when children talk aloud to themselves. Private speech serves as "the social/cultural tool or symbol system of language, first used for interpersonal communication but later employed by the child overtly for intrapersonal communication and self-guidance." Private speech decreases during late elementary school years as children transition to using inner speech.
Vygotsky's theory of private speech has been considered significant by more recent developmental psychologists and has served as a basis for research for over 75 years. Berk, Winsler, Diaz, Montero, Neal, Amaya-Williams, and Wertsch are amongst some of the current well-known developmental psychologists and researchers who have been specializing in the field of private speech. Although the concept dates back to the 1930s, private speech is still an emerging field in psychology with a vast amount of research opportunities.
Benefits and uses for children
Evidence has supported Vygotsky's theory that private speech provides many developmental benefits for children. Above all, private speech aids children in different types of self-guidance and self-regulation. More specific uses and benefits of private speech are listed below.
Behavioral self-regulation and emotion regulation
Young children's behaviors are strongly influenced by the environment. For instance, the presence of an interesting new toy in the preschool classroom is likely to draw a child's attention and influence his or her play. Private speech helps children to verbally guide their own behavior and attention by helping them to detach themselves from stimuli in their environment. As private speech is very important for children to engage in at early ages, this speech should not be interrupted or limited by parental control. For example, a child may use private speech to direct themselves away from the distracting toy and toward the activity that the teacher told the child to do. Thus, private speech helps children to be less strongly influenced by their immediate environment and instead to self-control their behavior.
The relationship between private speech and behavioral self-regulation is further demonstrated by research showing that children use more private speech when asked to do more difficult tasks or when asked to do tasks without the help of a teacher or parent. In other words, in circumstances when more behavioral self-regulation is required of a young child, the child is more likely to use private speech. Private speech has also been linked to three-year-olds' ability to engage in task-related goals when explicitly taught to use private speech as a strategy for this purpose.
Young children also use private speech to help them regulate their emotions. One way that children regulate their emotions and comfort themselves through private speech is by mimicking their parents' comforting speech. For instance, a child may help themself calm down for sleep by repeating nighttime phrases that their parents have said to them previously to calm down. Young children who are better at controlling their emotions have also shown an increase in the amount of private speech they use.
Memory, motivation, communication, and creativity
Private speech is used by children spontaneously and is a learned strategy to enhance memory. Private speech is used as a repetitive strategy, to enhance working memory by maintaining information to be remembered. For instance, a child might repeat a rule or story to themselves in order to remember it. Children also use private speech to aid their ability to suppress certain responses or information, and instead use other, less common responses or information, a process known as inhibitory control.
By allowing children to express goals, opinions, feelings, and self-thoughts, private speech increases their motivation. For instance, a child may talk themselves through a challenging task. This type of motivating private speech is associated with self-efficacy. Moreover, children have been observed using motivational private speech especially during difficult tasks, and using motivational private speech is related to improved outcomes on the task.
Some researchers have hypothesized that private speech helps young children to master speech communication, by immersing themselves in speech more than they could with others. In doing so, children gain insight about their own communication abilities, practice communication, and build effective speech and communication skills.
Children often use private speech during creative and imaginative play. For instance, children often talk to themselves when playing imaginative and pretend games. Private speech is related to more creative play: the more frequently children engage in private speech, the more creative, flexible, and original thought they display.
Research
Current research is becoming focused on use of private speech in the early childhood classroom setting and teachers' practices and attitudes regarding children's private speech. Many studies have shown that preschool aged children engage in a considerable amount of overt private speech in their early childhood classrooms. Specifically, researchers have found that children use more self-talk when they are busy with a goal-directed task activity (e.g., completing a puzzle). It was also found that preschool aged children were least likely to use private speech in the presence of a teacher.
Many methodological advancements and tools can now be used to better examine the role of speech in self-regulation. With these advancements, there will be more research in the future on children's awareness of inner and private speech. There is also a possibility that researchers will perform additional work on the early precursors of self-talk, early childhood interventions and better understanding the role language has on the formation of inner and private speech.
In adults
In a self-reported questionnaire, young adults reported high levels of private speech, particularly when engaged in tasks with cognitive, mnemonic, and attentional components. This suggests that private speech may be retained to some extent into adulthood, serving a similar purpose as it does in children. Adults who stutter are much less likely to do so during private speech.
See also
Autocommunication
Idioglossia – language invented and spoken by only one person or very few people
Inner speech
Intrapersonal communication
References
Developmental psychology | Private speech | [
"Biology"
] | 1,594 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
8,558,317 | https://en.wikipedia.org/wiki/GSM%20procedures | GSM procedures are sets of steps performed by the GSM network and devices on it in order for the network to function. GSM (Global System for Mobile Communications) is a set of standards for cell phone networks established by the European Telecommunications Standards Institute and first used in 1991. Its procedures refer to the steps a GSM network takes to communicate with cell phones and other mobile devices on the network.
IMSI attach refers to the procedure used when a mobile device or mobile station joins a GSM network when it turns on and IMSI detach refers to the procedure used to leave or disconnect from a network when the device is turned off.
IMSI attach
In a GSM network, when a Mobile Station (MS) is switched ON, the International Mobile Subscriber Identity (IMSI) attach procedure is executed. This procedure is required for the Mobile Switching Center (MSC) and Visitor Location Register (VLR) to register the MS in the network. If the MS has changed Location area (LA) while it was powered off, then the IMSI attach procedure will lead to a Location update.
When the MS is switched on, it searches for a mobile network to connect to. Once the MS identifies its desired network, it sends a message to the network to indicate that it has entered into an idle state. The Visitor Location Register (VLR) checks its database to determine whether there is an existing record of the particular subscriber.
If no record is found, the VLR communicates with the subscriber's Home Location Register (HLR) and obtains a copy of the subscription information. The obtained information is stored in the database of the VLR. Then an acknowledge message is sent to the MS.
Steps for IMSI attach procedure are as follows:
The MS will send a Channel Request message to the BSS (base station subsystem) on the RACH (random access channel).
The BSS responds on the AGCH (access grant channel) with an Immediate Assignment message and assigns an SDCCH to the MS.
The MS immediately switches to the assigned SDCCH (stand-alone dedicated control channel) and sends a Location Update Request to the BSS. The MS will send either an IMSI or a TMSI (Temporary Mobile Subscriber Identity) to the BSS.
The BSS will acknowledge the message. This acknowledgement only tells the MS that the BTS has received the message, it does not indicate the location update has been processed.
The BSS forwards the Location Update Request to the MSC/VLR.
The MSC/VLR forwards the IMSI to the HLR and requests verification of the IMSI as well as Authentication Triplets (RAND, Kc, SRES).
The HLR will forward the IMSI to the Authentication Center (AuC) and request authentication triplets.
The AuC generates the triplets and sends them along with the IMSI, back to the HLR.
The HLR validates the IMSI by ensuring it is allowed on the network and is allowed subscriber services. It then forwards the IMSI and Triplets to the MSC/VLR.
The MSC/VLR stores the SRES and the Kc and forwards the RAND to the BSS and orders the BSS to authenticate the MS.
The BSS sends the MS an Authentication Request message. The only parameter sent in the message is the RAND.
The MS uses the RAND to calculate the SRES and sends the SRES back to the BSS on the SDCCH in an Authentication Response. The BSS forwards the SRES up to the MSC/VLR.
The MSC/VLR compares the SRES generated by the AuC with the SRES generated by the MS. If they match, then authentication is completed successfully.
The MSC/VLR forwards the Kc for the MS to the BSS. The Kc is NOT sent across the Air Interface to the MS. The BSS stores the Kc and forwards the Set Cipher Mode command to the MS. The CIPH_MOD_CMD only tells the MS which encryption to use (A5/X), no other information is included.
The MS immediately switches to cipher mode using the A5 encryption algorithm. All transmissions are now enciphered. It sends a Ciphering Mode Complete message to the BSS.
The MSC/VLR sends a Location Updating Accept message to the BSS. It also generates a new TMSI for the MS. TMSI assignment is a function of the VLR. The BSS will either send the TMSI in the LOC_UPD_ACC message or it will send a separate TMSI Reallocation Command message. In both cases, since the Air Interface is now in cipher mode, the TMSI is not compromised.
The MS sends a TMSI Reallocation Complete message up to the MSC/VLR.
The BSS instructs the MS to go into idle mode by sending it a Channel Release message. The BSS then unassigns the SDCCH.
The MSC/VLR sends an Update Location message to the HLR. The HLR records which MSC/VLR the MS is currently in, so it knows which MSC to point to when it is queried for the location of the MS.
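The authentication part of this exchange (steps 6–13) is essentially a challenge–response comparison: the AuC and the SIM each derive SRES from the shared subscriber key Ki and the challenge RAND, and the MSC/VLR checks that the two values match. The sketch below illustrates only that comparison; the HMAC-based derivation is a stand-in for the operator-specific A3/A8 algorithms (such as COMP128), and the key sizes and variable names are illustrative assumptions rather than values from the GSM specifications.

```python
import hmac, hashlib, os

def derive_sres_and_kc(ki: bytes, rand: bytes) -> tuple:
    """Stand-in for the SIM/AuC A3/A8 step: derive SRES (4 bytes) and Kc (8 bytes)
    from the subscriber key Ki and the challenge RAND. Real GSM networks use
    operator-specific algorithms such as COMP128, not HMAC-SHA256."""
    digest = hmac.new(ki, rand, hashlib.sha256).digest()
    return digest[:4], digest[4:12]

# AuC side: generate an authentication triplet (RAND, SRES, Kc) for the subscriber.
ki = os.urandom(16)                      # secret shared by the SIM and the AuC
rand = os.urandom(16)                    # 128-bit random challenge
sres_auc, kc = derive_sres_and_kc(ki, rand)

# MS side: the SIM receives only RAND and computes its own SRES (and Kc).
sres_ms, _ = derive_sres_and_kc(ki, rand)

# MSC/VLR side: compare the two SRES values, as in step 13 above.
print("authentication successful" if hmac.compare_digest(sres_auc, sres_ms) else "authentication failed")
```

Note that, as in the real procedure, Ki never leaves the SIM or the AuC; only RAND and SRES cross the air interface, and Kc is delivered to the BSS separately for ciphering.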
IMSI detach
IMSI detach is the process of detaching a MS from the mobile network to which it was connected. The IMSI detach procedure informs the network that the Mobile Station is switched off or is unreachable.
At power-down the MS requests a signaling channel.
Once assigned, the MS sends an IMSI detach message to the VLR.
When the VLR receives the IMSI detach-message, the corresponding IMSI is marked as detached by setting the IMSI detach flag. The HLR is not informed of this and the VLR does not acknowledge the MS about the IMSI detach.
If the radio link quality is poor when IMSI detach occurs, the VLR may not properly receive the IMSI-detach request. Since an acknowledgment message is not sent to the MS, it does not make further attempts to send IMSI detach messages. Therefore, the GSM network considers the MS to be still attached.
Implicit IMSI detach
The GSM air-interface, designated Um, transmits network-specific information on specific broadcast channels. This information includes whether the periodic location update is enabled. If enabled, then the MS must send location update requests at time intervals specified by the network. If the MS is switched off, having not properly completed the IMSI detach procedure, the network will consider the MS as switched off or unreachable if no location update is made. In this situation the VLR performs an implicit IMSI detach.
Location update
This procedure is used to update the recorded location of the Mobile Station in the network, for example when the MS moves into a new location area or when a periodic location update is required.
Cancel location
When a mobile station registers in a new VLR, the subscriber's data is deleted from the previous VLR in a cancel location procedure. The HLR initiates the procedure when it receives an 'update location' message from a VLR other than the one in which the MS was located at the time when its location information was last updated in the HLR database. The cancel location procedure can also be initiated with MML commands, with those, for example, that are used for changing the area, or deleting the MS from the HLR.
References
GSM standard
Mobile telecommunications standards
3GPP standards | GSM procedures | [
"Technology"
] | 1,555 | [
"Mobile telecommunications",
"Mobile telecommunications standards"
] |
8,558,371 | https://en.wikipedia.org/wiki/Alclad | Alclad is a corrosion-resistant aluminium sheet formed from high-purity aluminium surface layers metallurgically bonded (rolled onto) to high-strength aluminium alloy core material. It has a melting point of about . Alclad is a trademark of Alcoa but the term is also used generically.
Since the late 1920s, Alclad has been produced as an aviation-grade material, being first used by the sector in the construction of the ZMC-2 airship. The material has significantly more resistance to corrosion than most aluminium-based alloys, for only a modest increase in weight, making Alclad attractive for building various elements of aircraft, such as the fuselage, structural members, skin, and cowling. Accordingly, it became a relatively popular material for aircraft manufacturing.
Details
The material was described in NACA-TN-259 of August 1927, as "a new corrosion resistant aluminium product which is markedly superior to the present strong alloys. Its use should result in greatly increased life of a structural part. Alclad is a heat-treated aluminium, copper, manganese, magnesium alloy that has the corrosion resistance of pure metal at the surface and the strength of the strong alloy underneath. Of particular importance is the thorough character of the union between the alloy and the pure aluminium. Preliminary results of salt spray tests (24 weeks of exposure) show changes in tensile strength and elongation of Alclad 17ST, when any occurred, to be so small as to be well within the limits of experimental error." In applications involving aircraft construction, Alclad has proven to have increased resistance to corrosion at the expense of increased weight when compared to sheet aluminium.
As pure aluminium possesses a relatively greater resistance to corrosion over the majority of aluminium alloys, it was soon recognised that a thin coating of pure aluminium over the exterior surface of those alloys would take advantage of the superior qualities of both materials. Thus, a key advantage of Alclad over most aluminium alloys is its high corrosion resistance. However, considerable care must be taken while working on an Alclad-covered exterior surface, such as while cleaning the skin of an aircraft, to avoid scarring the surface to expose the vulnerable alloy underneath and prematurely age those elements.
Due to its relatively shiny natural finish, it is often considered to be cosmetically pleasing when used for external elements, particularly during restoration efforts. It has been observed that some fabrication techniques, such as welding, are not suitable when used in conjunction with Alclad. Mild cleaners with a neutral pH value and finer abrasives are recommended for cleaning and polishing Alclad surfaces. It is common for waterproof wax and other inhibitive coverings to be applied to further reduce corrosion. In the twenty-first century, research and evaluation was underway into new coatings and application techniques.
History
Alclad sheeting has become a widely used material within the aviation industry for the construction of aircraft due to its favourable qualities, such as a high fatigue resistance and its strength. During the first half of the twentieth century, substantial studies were conducted into the corrosion qualities of various lightweight aluminium alloys for aviation purposes. The first aircraft to be constructed from Alclad was the all-metal US Navy airship ZMC-2, which was constructed in 1927 at Naval Air Station Grosse Ile. Prior to this, aluminium had been used on the pioneering zeppelins constructed by Ferdinand Zeppelin.
Alclad has been most commonly present in certain elements of an aircraft, including the fuselage, structural members, skin, and cowls. The aluminium alloy that Alclad is derived from has become one of the most commonly used of all aluminium-based alloys. While unclad aluminium has also continued to be extensively used on modern aircraft, which has a lower weight than Alclad, it is more prone to corrosion; the alternating use of the two materials is often defined by the specific components or elements that are composed of them. In aviation-grade Alclad, the thickness of the outer cladding layer typically varies between 1% and 15% of the total thickness.
See also
Kynal-Core, similar aluminium-clad alloys produced by ICI
Duralumin, an aviation-related, copper-content aluminium alloy patented by its inventor Alfred Wilm by 1906
References
Citations
Bibliography
External links
Aluminium Alloys via aircraftmaterials.com
Corrosion and Inspection of General Aviation Aircraft via caa.co.uk
Aluminium
Aluminium alloys
Corrosion prevention
Aerospace materials
Bimetal | Alclad | [
"Chemistry",
"Materials_science",
"Engineering"
] | 904 | [
"Corrosion prevention",
"Aerospace materials",
"Metallurgy",
"Corrosion",
"Bimetal",
"Aluminium alloys",
"Alloys",
"Aerospace engineering"
] |
8,559,342 | https://en.wikipedia.org/wiki/Deal%E2%80%93Grove%20model | The Deal–Grove model mathematically describes the growth of an oxide layer on the surface of a material. In particular, it is used to predict and interpret thermal oxidation of silicon in semiconductor device fabrication. The model was first published in 1965 by Bruce Deal and Andrew Grove of Fairchild Semiconductor, building on Mohamed M. Atalla's work on silicon surface passivation by thermal oxidation at Bell Labs in the late 1950s. This served as a step in the development of CMOS devices and the fabrication of integrated circuits.
Physical assumptions
The model assumes that the oxidation reaction occurs at the interface between the oxide layer and the substrate material, rather than between the oxide and the ambient gas. Thus, it considers three phenomena that the oxidizing species undergoes, in this order:
It diffuses from the bulk of the ambient gas to the surface.
It diffuses through the existing oxide layer to the oxide-substrate interface.
It reacts with the substrate.
The model assumes that each of these stages proceeds at a rate proportional to the oxidant's concentration. In the first step, this means Henry's law; in the second, Fick's law of diffusion; in the third, a first-order reaction with respect to the oxidant. It also assumes steady state conditions, i.e. that transient effects do not appear.
Results
Given these assumptions, the flux of oxidant through each of the three phases can be expressed in terms of concentrations, material properties, and temperature: F1 = h(C* − Co) for transport from the ambient gas to the oxide surface, F2 = D(Co − Ci)/x for diffusion through an oxide of thickness x, and F3 = k·Ci for the reaction at the oxide–substrate interface. Here C* is the equilibrium concentration of the oxidant in the oxide, Co and Ci are its concentrations at the outer oxide surface and at the oxide–substrate interface, h is the gas-phase transport coefficient, D is the diffusivity of the oxidant in the oxide, and k is the interface reaction rate constant.
By setting the three fluxes equal to each other the following relations can be derived: Ci = C*/(1 + k/h + kx/D) and Co = C*(1 + kx/D)/(1 + k/h + kx/D).
Assuming a diffusion-controlled growth, i.e. that diffusion through the existing oxide determines the growth rate, and substituting Ci and Co in terms of C* from the above two relations into the F3 and F2 equations respectively, one obtains the steady-state flux F = k·C*/(1 + k/h + kx/D).
If N is the concentration of the oxidant inside a unit volume of the oxide, then the oxide growth rate can be written in the form of a differential equation. The solution to this equation gives the oxide thickness at any time t.
The thickness x then satisfies the quadratic relation x² + A·x = B·(t + τ), where the constants A and B encapsulate the properties of the reaction and the oxide layer respectively, and the time offset τ = (xi² + A·xi)/B accounts for the initial layer of oxide of thickness xi that was present at the surface. These constants are given as A = 2D(1/k + 1/h) and B = 2D·C*/N,
where C* = H·pg, with H being the gas solubility parameter of the Henry's law and pg the partial pressure of the diffusing gas.
Solving the quadratic equation for x yields: x(t) = (A/2)·(√(1 + 4B(t + τ)/A²) − 1).
Taking the short and long time limits of the above equation reveals two main modes of operation. The first mode, where the growth is linear, occurs initially when (t + τ) is small compared with A²/(4B). The second mode gives a quadratic growth and occurs when the oxide thickens as the oxidation time increases.
The quantities B and B/A are often called the quadratic and linear reaction rate constants. They depend exponentially on temperature, like this: B = B0·exp(−EA/(kB·T)) and B/A = (B/A)0·exp(−EA/(kB·T)),
where EA is the activation energy and kB is the Boltzmann constant in eV/K. EA differs from one equation to the other. The following table lists the values of the four parameters for single-crystal silicon under conditions typically used in industry (low doping, atmospheric pressure). The linear rate constant depends on the orientation of the crystal (usually indicated by the Miller indices of the crystal plane facing the surface). The table gives values for (100) and (111) silicon.
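For a sense of how the thickness relation x² + A·x = B·(t + τ) is used in practice, the sketch below evaluates x(t) together with its linear and parabolic limits. The numerical constants are illustrative placeholders chosen only to make the two regimes visible; they are not the tabulated industrial values referred to in the text.

```python
import math

def oxide_thickness(t: float, A: float, B: float, tau: float = 0.0) -> float:
    """Deal-Grove thickness from x^2 + A*x = B*(t + tau); units here are micrometres and hours."""
    return 0.5 * A * (math.sqrt(1.0 + 4.0 * B * (t + tau) / A**2) - 1.0)

# Illustrative placeholder constants (NOT values from the article's table).
A, B, TAU = 0.2, 0.1, 0.05   # um, um^2/h, h

for t in (0.05, 0.5, 5.0, 50.0):
    x = oxide_thickness(t, A, B, TAU)
    linear = (B / A) * (t + TAU)      # short-time limit: x ~ (B/A)(t + tau)
    parabolic = math.sqrt(B * t)      # long-time limit:  x ~ sqrt(B t)
    print(f"t = {t:6.2f} h   x = {x:6.3f} um   linear ~ {linear:6.3f}   parabolic ~ {parabolic:6.3f}")
```

The printout shows the crossover: at short times the exact thickness tracks the linear estimate, while at long times it approaches the parabolic one.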
Validity for silicon
The Deal–Grove model works very well for single-crystal silicon under most conditions. However, experimental data shows that very thin oxides (less than about 25 nanometres) grow much more quickly in O2 than the model predicts. In silicon nanostructures (e.g., silicon nanowires) this rapid growth is generally followed by diminishing oxidation kinetics in a process known as self-limiting oxidation, necessitating a modification of the Deal–Grove model.
If the oxide grown in a particular oxidation step greatly exceeds 25 nm, a simple adjustment accounts for the aberrant growth rate. The model yields accurate results for thick oxides if, instead of assuming zero initial thickness (or any initial thickness less than 25 nm), we assume that 25 nm of oxide exists before oxidation begins. However, for oxides near to or thinner than this threshold, more sophisticated models must be used.
In the 1980s, it became obvious that an update to the Deal-Grove model was necessary to model the aforementioned thin oxides (self-limiting cases). One such approach that more accurately models thin oxides is the Massoud model from 1985 [2]. The Massoud model is analytical and based on parallel oxidation mechanisms. It changes the parameters of the Deal-Grove model to better model the initial oxide growth with the addition of rate-enhancement terms.
The Deal-Grove model also fails for polycrystalline silicon ("poly-silicon"). First, the random orientation of the crystal grains makes it difficult to choose a value for the linear rate constant. Second, oxidant molecules diffuse rapidly along grain boundaries, so that poly-silicon oxidizes more rapidly than single-crystal silicon.
Dopant atoms strain the silicon lattice, and make it easier for silicon atoms to bond with incoming oxygen. This effect may be neglected in many cases, but heavily doped silicon oxidizes significantly faster. The pressure of the ambient gas also affects oxidation rate.
References
Bibliography
External links
Online Calculator including pressure, doping, and thin oxide effects
Semiconductor device fabrication
Chemical engineering
Nanomaterials
Nanoelectronics | Deal–Grove model | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,089 | [
"Microtechnology",
"Chemical engineering",
"Semiconductor device fabrication",
"Nanoelectronics",
"nan",
"Nanotechnology",
"Nanomaterials"
] |
8,559,385 | https://en.wikipedia.org/wiki/Saccharomyces%20bayanus | Saccharomyces bayanus is a yeast of the genus Saccharomyces, and is used in winemaking and cider fermentation, and to make distilled beverages. Saccharomyces bayanus, like Saccharomyces pastorianus, is now accepted to be the result of multiple hybridisation events between three pure species, Saccharomyces uvarum, Saccharomyces cerevisiae and Saccharomyces eubayanus. Notably, most commercial yeast cultures sold as pure S. bayanus for wine making, e.g. Lalvin EC-1118 strain, have been found to contain S. cerevisiae cultures instead.
S. bayanus is used intensively in comparative genomics studies. Based on a computation-based experimental design system, Caudy et al. generated a rich resource for expression profiles for S. bayanus, which has been used in several comparative studies in yeast systems, including expression patterns and nucleosome profiles.
See also
Yeast in winemaking
References
External links
Saccharomyces bayanus at ENTREZ genome project
bayanus
Yeasts
Yeasts used in brewing
Fungi described in 1895
Fungus species | Saccharomyces bayanus | [
"Biology"
] | 249 | [
"Yeasts",
"Fungi",
"Fungus species"
] |
8,559,471 | https://en.wikipedia.org/wiki/Capital%20One%20Tower%20%28Louisiana%29 | The Capital One Tower was a skyscraper located in Lake Charles, Louisiana, USA. It was the tallest building in the city and a dominant feature of the downtown skyline.
The building was designed by Lloyd Jones Brewer Associates of Houston and was constructed by Miner-Dederick of Houston and F. Miller and Sons of Lake Charles at a cost of $40 million. Construction began in March 1981 with groundbreaking in January 1982. The building was opened in 1983 and stood 22 stories tall. The four-acre lot included an attached parking garage.
The building was originally called the CM Tower, for Calcasieu Marine National Bank, then renamed the Hibernia tower after Hibernia National Bank purchased Calcasieu Marine. It was then changed to Capital One Tower after Capital One Bank acquired Hibernia in 2005.
Badly damaged by Hurricane Laura in August 2020, the tower stood empty until it was demolished in September 2024.
Facilities
At 22 floors, the tower dominated the Lake Charles skyline, with the City Club restaurant on the top floor offering panoramic views. It housed many professional offices.
Floor navigation
Two escalators in the atrium (one up and one down) served as access between floor 1 (ground floor) and floor 2 near the bank area.
Eight passenger elevators provided access to the upper floors and were grouped in two banks of four, with each bank having two cars facing each other linked by a hall. The west bank served floors 2, 15-21, and the eastmost two cars also served floor 1. Access to these elevators from floor 1 was inside the north entrance behind the security desk. The east bank of elevators served floors 2–14. A stairway behind each elevator bank also provided access to all floors, as did a freight elevator.
A walkway bridge connected floor 2 of the tower to floor 3 of the parking garage. Two passenger elevators and two stairways in the parking garage served floors 1-5 of the garage.
Hurricane damage
The tower was built with "hurricane-proof" glass, but in 2005 it sustained heavy damage to its atrium and several upper floors during Hurricane Rita, which made landfall as a Category 3 hurricane. In 2007, the facility underwent major repairs and security upgrades such as the addition of ballistic protection to the exterior glass. The reconstruction of the lower floors and the renovations of the atrium, tower facade, and major tenant spaces were designed and project-managed by Vincent-Shows Architects of Sulphur.
The tower was significantly damaged again in August 2020 by Category 4 Hurricane Laura. The building lost the majority of its windows, and the south-facing side was destroyed. In October 2020, Hurricane Delta further damaged the boarded-up building. The tower was left vacant and in 2023, the city gave Hertz Investment Group, who had acquired the building in June 2007, a deadline of November 2024 to either repair or demolish it. Hertz reached a settlement with their insurers and offered the building for sale, but it did not find a buyer.
Demolition
In early 2024, the city announced that Capital One Tower would be demolished by "late August or early September". Mayor Nic Hunter said of the demolition that "It's time to move on". The building was imploded on September 7, 2024.
References
Skyscrapers in Louisiana
Buildings and structures in Lake Charles, Louisiana
Bank buildings in Louisiana
Capital One
Skyscraper office buildings in Louisiana
Office buildings completed in 1982
Buildings and structures demolished in 2024
Buildings and structures demolished by controlled implosion
Demolished buildings and structures in Louisiana
1982 establishments in Louisiana
2020 disestablishments in Louisiana
Former skyscrapers | Capital One Tower (Louisiana) | [
"Engineering"
] | 731 | [
"Buildings and structures demolished by controlled implosion",
"Architecture"
] |
8,559,544 | https://en.wikipedia.org/wiki/Flexible%20shaft | A flexible shaft, often referred to as a flex shaft, is a device for transmitting rotary motion between two objects which are not fixed relative to one another. It consists of a rotating wire rope or coil which is flexible but has some torsional stiffness. It may or may not have a covering, which also bends but does not rotate. It may transmit considerable power, or only motion, with negligible power.
Flexible shafts are commonly used in plumber's snakes. They are popular accessories for handheld rotary tools, and integral parts of rotary tools with a remote motor, which are called "flexible shaft tools". They are used to transmit power to some sheep shears. They are also sold to connect panel knobs to remote potentiometers or other variable electronic components. Flexible shaft tools are used frequently in the dental and jewelry industry, as well as other industrial applications.
See also
Driveshaft
John K. Stewart
Bowden cable
References
Mechanical power transmission
Tools | Flexible shaft | [
"Physics",
"Engineering"
] | 198 | [
"Mechanical engineering stubs",
"Mechanical power transmission",
"Mechanics",
"Mechanical engineering"
] |
8,559,560 | https://en.wikipedia.org/wiki/Algebraic%20fraction | In algebra, an algebraic fraction is a fraction whose numerator and denominator are algebraic expressions. Two examples of algebraic fractions are 3x/(x² + 2x − 3) and √(x + 2)/(x² − 3). Algebraic fractions are subject to the same laws as arithmetic fractions.
A rational fraction is an algebraic fraction whose numerator and denominator are both polynomials. Thus 3x/(x² + 2x − 3) is a rational fraction, but √(x + 2)/(x² − 3) is not, because the numerator contains a square root function.
Terminology
In the algebraic fraction a/b, the dividend a is called the numerator and the divisor b is called the denominator. The numerator and denominator are called the terms of the algebraic fraction.
A complex fraction is a fraction whose numerator or denominator, or both, contains a fraction. A simple fraction contains no fraction either in its numerator or its denominator. A fraction is in lowest terms if the only factor common to the numerator and the denominator is 1.
An expression which is not in fractional form is an integral expression. An integral expression can always be written in fractional form by giving it the denominator 1. A mixed expression is the algebraic sum of one or more integral expressions and one or more fractional terms.
Rational fractions
If the expressions a and b are polynomials, the algebraic fraction a/b is called a rational algebraic fraction or simply rational fraction. Rational fractions are also known as rational expressions. A rational fraction is called proper if the degree of the numerator is less than the degree of the denominator, and improper otherwise. For example, the rational fraction 2x/(x² − 1) is proper, and the rational fraction x²/(x − 1) is improper. Any improper rational fraction can be expressed as the sum of a polynomial (possibly constant) and a proper rational fraction. For the improper fraction in this example one has x²/(x − 1) = (x + 1) + 1/(x − 1),
where the second term is a proper rational fraction. The sum of two proper rational fractions is a proper rational fraction as well. The reverse process of expressing a proper rational fraction as the sum of two or more fractions is called resolving it into partial fractions. For example, 2x/(x² − 1) = 1/(x − 1) + 1/(x + 1).
Here, the two terms on the right are called partial fractions.
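Decompositions like this can be checked with a computer algebra system; the following is a minimal sketch assuming SymPy is available (its apart and together functions perform the decomposition and its inverse).

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')

expr = 2*x / (x**2 - 1)
parts = apart(expr, x)      # -> 1/(x + 1) + 1/(x - 1), the partial fractions
print(parts)

# Recombining the partial fractions returns the original proper rational fraction.
assert simplify(together(parts) - expr) == 0
```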
Irrational fractions
An irrational fraction is one that contains the variable under a fractional exponent. An example of an irrational fraction is 1/(x^(1/2) + x^(1/3)).
The process of transforming an irrational fraction to a rational fraction is known as rationalization. Every irrational fraction in which the radicals are monomials may be rationalized by finding the least common multiple of the indices of the roots, and substituting the variable for another variable with the least common multiple as exponent. In the example given, the least common multiple is 6, hence we can substitute x = z⁶ to obtain 1/(z³ + z²), a rational fraction in the new variable z.
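The same substitution can be verified symbolically; this is a minimal sketch assuming SymPy, using the illustrative fraction from the example above.

```python
from sympy import symbols, Rational, simplify

x, z = symbols('x z', positive=True)

irrational = 1 / (x**Rational(1, 2) + x**Rational(1, 3))

# Substitute x = z**6 (6 is the least common multiple of the root indices 2 and 3).
rationalized = simplify(irrational.subs(x, z**6))
print(rationalized)   # equivalent to 1/(z**3 + z**2), a rational fraction in z

assert simplify(rationalized - 1/(z**3 + z**2)) == 0
```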
See also
Partial fraction decomposition
References
Elementary algebra
Fractions (mathematics)
"Mathematics"
] | 556 | [
"Fractions (mathematics)",
"Mathematical objects",
"Elementary algebra",
"Elementary mathematics",
"Arithmetic",
"Numbers",
"Algebra"
] |
8,559,743 | https://en.wikipedia.org/wiki/Multi-layer%20insulation | Multi-layer insulation (MLI) is thermal insulation composed of multiple layers of thin sheets and is often used on spacecraft and cryogenics. Also referred to as superinsulation, MLI is one of the main items of the spacecraft thermal design, primarily intended to reduce heat loss by thermal radiation. In its basic form, it does not appreciably insulate against other thermal losses such as heat conduction or convection. It is therefore commonly used on satellites and other applications in vacuum where conduction and convection are much less significant and radiation dominates. MLI gives many satellites and other space probes the appearance of being covered with gold foil which is the effect of the amber-coloured Kapton layer deposited over the silver Aluminized mylar.
For non-spacecraft applications, MLI works only as part of a vacuum insulation system. For use in cryogenics, wrapped MLI can be installed inside the annulus of vacuum jacketed pipes. MLI may also be combined with advanced vacuum insulation for use in high temperature applications.
Function and design
The principle behind MLI is radiation balance. To see why it works, start with a concrete example: imagine a square meter of a surface in outer space, held at a fixed temperature of 300 K, with an emissivity of 1, facing away from the sun or other heat sources. From the Stefan–Boltzmann law, this surface will radiate 460 W. Now imagine placing a thin (but opaque) layer away from the plate, also with an emissivity of 1. This new layer will cool until it is radiating 230 W from each side, at which point everything is in balance. The new layer receives 460 W from the original plate. 230 W is radiated back to the original plate, and 230 W to space. The original surface still radiates 460 W, but gets 230 W back from the new layers, for a net loss of 230 W. So overall, the radiation losses from the surface have been reduced by half by adding the additional layer.
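A minimal numerical sketch of this radiation-balance argument is given below; it assumes ideal (emissivity 1) layers radiating to empty space and simply generalises the halving seen with one added layer to N layers, where the net loss becomes 1/(N + 1) of the bare-surface emission.

```python
# Radiation-balance sketch for idealised MLI (emissivity-1 layers, cold space on the far side).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_flux(temp_k: float) -> float:
    """Power radiated per square metre by a surface of emissivity 1 (Stefan-Boltzmann law)."""
    return SIGMA * temp_k ** 4

def net_loss(temp_k: float, n_layers: int) -> float:
    """Net radiative loss per square metre with n ideal floating layers between the
    surface and space: each added layer further divides the loss, giving 1/(n + 1)."""
    return emitted_flux(temp_k) / (n_layers + 1)

temperature = 300.0   # K, the surface temperature used in the example above
print(f"bare surface : {emitted_flux(temperature):6.1f} W/m^2")   # ~460 W/m^2
for n in (1, 10, 40):
    print(f"{n:2d} layer(s)   : {net_loss(temperature, n):6.1f} W/m^2")
```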
More layers can be added to reduce the loss further. The blanket can be further improved by making the outside surfaces highly reflective to thermal radiation, which reduces both absorption and emission. The performance of a layer stack can be quantified in terms of its overall heat transfer coefficient U, which defines the radiative heat flow rate Q between two parallel surfaces with a temperature difference ΔT and area A as Q = U·A·ΔT.
Theoretically, the heat transfer coefficient between two layers with emissivities ε1 and ε2, at absolute temperatures T1 and T2 under vacuum, is U = σ(T1² + T2²)(T1 + T2)/(1/ε1 + 1/ε2 − 1),
where σ ≈ 5.67 × 10−8 Wm−2K−4 is the Stefan-Boltzmann Constant. If the temperature difference is not too large (|T1 − T2| ≪ T), then a stack of N layers, all with the same emissivity ε on both sides, will have an overall heat transfer coefficient U ≈ 4σT³/((N + 1)(2/ε − 1)),
where T is the average temperature of the layers. Clearly, increasing the number of layers and decreasing the emissivity both lower the heat transfer coefficient, which is equivalent to a higher insulation value. In space, where the apparent outside temperature could be 3 K (cosmic background radiation), the exact U value is different.
The layers of MLI can be arbitrarily close to each other, as long as they are not in thermal contact. The separation space only needs to be minute, which is the function of the extremely thin scrim or polyester 'bridal veil' as shown in the photo. To reduce weight and blanket thickness, the internal layers are made very thin, but they must be opaque to thermal radiation. Since they don't need much structural strength, these internal layers are usually made of very thin plastic, about thick, such as Mylar or Kapton, coated on one or both sides with a thin layer of metal, typically silver or aluminium. For compactness, the layers are spaced as close to each other as possible, though without touching, since there should be little or no thermal conduction between the layers. A typical insulation blanket has 40 or more layers. The layers may be embossed or crinkled, so they only touch at a few points, or held apart by a thin cloth mesh, or scrim, which can be seen in the picture above. The outer layers must be stronger, and are often thicker and stronger plastic, reinforced with a stronger scrim material such as fiberglass.
In satellite applications, the MLI will be full of air at launch time. As the rocket ascends, this air must be able to escape without damaging the blanket. This may require holes or perforations in the layers, even though this reduces their effectiveness.
In cryogenics, MLI is the most effective kind of insulation. Therefore, it is commonly used in liquefied gas tanks (e.g. for LNG and other cryogenic liquids), cryostats, cryogenic pipelines and superconducting devices. Additionally it is valued for its compact size and weight. A blanket composed of 40 layers of MLI has thickness of about and weight of approximately .
Methods tend to vary between manufacturers with some MLI blankets being constructed primarily using sewing technology. The layers are cut, stacked on top of each other, and sewn together at the edges.
Other more recent methods include the use of Computer-aided design and Computer-aided manufacturing technology to weld a precise outline of the final blanket shape using Ultrasonic welding onto a "pack" (the final set of layers before the external "skin" is added by hand.)
Seams and gaps in the insulation are responsible for most of the heat leakage through MLI blankets. A new method is being developed to use polyetheretherketone (PEEK) tag pins (similar to plastic hooks used to attach price tags to garments) to fix the film layers in place instead of sewing to improve the thermal performance.
Additional properties
Spacecraft also may use MLI as a first line of defence against dust impacts. This normally means spacing it a cm or so away from the surface it is insulating. Also, one or more of the layers may be replaced by a mechanically strong material, such as beta cloth.
In most applications the insulating layers must be grounded, so they cannot build up a charge and arc, causing radio interference. Since the normal construction results in electrical as well as thermal insulation, these applications may include aluminium spacers as opposed to cloth scrim at the points where the blankets are sewn together.
Using similar materials, Single-layer Insulation and Dual-layer insulation (SLI and DLI respectively) are also commonplace on spacecraft.
Alternative Sewing Technologies
The seams remain a problematic area where compromises are usually made. The conventional sewing methods cause compressions along stitch lines in multilayer insulation blankets. Hassan Saeed developed a new technology called Spacer Stitching during his research work at ITM, TU Dresden. The patented technology can avoid compressions along stitch lines in multilayer insulation assemblies.
See also
Liquid hydrogen tank car, on which a form of multi-layer insulation is applied
Thermal Control Subsystem
References
External links
Satellite Thermal Control Handbook, ed. David Gilmore. . In particular, Chapter 5, Insulation, by Martin Donabedian and David Gilmore.
Tutorial on temperature control of spacecraft by JPL
Typical specialist article on tests of Cassini's MLI
Multi-layer Insulation (MLI) Applications
Multi layer insulation material guidelines-NASA publication from 1999 https://ntrs.nasa.gov/citations/19990047691
MLI types and properties
Heat conduction
Thermal protection
Insulators | Multi-layer insulation | [
"Physics",
"Chemistry"
] | 1,527 | [
"Heat conduction",
"Thermodynamics"
] |
8,559,750 | https://en.wikipedia.org/wiki/Hin%20recombinase | Hin recombinase is a 21kD protein composed of 198 amino acids that is found in the bacteria Salmonella. Hin belongs to the serine recombinase family (B2) of DNA invertases in which it relies on the active site serine to initiate DNA cleavage and recombination. The related protein, gamma-delta resolvase shares high similarity to Hin, of which much structural work has been done, including structures bound to DNA and reaction intermediates. Hin functions to invert a 900 base pair (bp) DNA segment within the salmonella genome that contains a promoter for downstream flagellar genes, fljA and fljB. Inversion of the intervening DNA alternates the direction of the promoter and thereby alternates expression of the flagellar genes. This is advantageous to the bacterium as a means of escape from the host immune response.
Hin functions by binding to two 26 bp imperfect inverted repeat sequences as a homodimer. These hin binding sites flank the invertible segment, which not only contains the hin gene itself, but also an enhancer element to which the bacterial Fis protein binds with nanomolar affinity. Four molecules of Fis bind to this site as homodimers and are required for the recombination reaction to proceed.
The initial reaction requires binding of Hin and Fis to their respective DNA sequences and assemble into a higher-order nucleoprotein complex with branched plectonemic supercoils with the aid of the DNA bending protein HU. At this point, it is believed that the Fis protein modulates subtle contacts to activate the reaction, possibly through direct interactions with the Hin protein. Activation of the 4 catalytic serine residues within the Hin tetramer make a 2-bp double stranded DNA break and forms a covalent reaction intermediate. The DNA cleavage event also requires the divalent metal cation magnesium. A large conformational change reveals a large hydrophobic interface that allows for subunit rotation which may be driven by superhelical torsion within the protein-DNA complex. After this 180° rotation, Hin returns to its native conformation and re-ligates the cleaved DNA, without the aid of high energy cofactors and without the loss of any DNA.
References
Genetics techniques
Enzymes | Hin recombinase | [
"Engineering",
"Biology"
] | 478 | [
"Genetics techniques",
"Genetic engineering"
] |
8,559,787 | https://en.wikipedia.org/wiki/Nucleogenic | A nucleogenic isotope, or nuclide, is one that is produced by a natural terrestrial nuclear reaction, other than a reaction beginning with cosmic rays (the latter nuclides by convention are called by the different term cosmogenic). The nuclear reaction that produces nucleogenic nuclides is usually interaction with an alpha particle or the capture of fission or thermal neutrons. Some nucleogenic isotopes are stable and others are radioactive.
Example
An example of a nucleogenic nuclide is neon-21 produced from neon-20 that absorbs a thermal neutron (though some neon-21 is also primordial). Other nucleogenic reactions that produce heavy neon isotopes are (n,α) reactions (fast neutron capture followed by alpha emission) starting with magnesium-24 and magnesium-25, which yield neon-21 and neon-22, respectively. The source of the neutrons in these reactions is often secondary neutrons produced by alpha radiation from natural uranium and thorium in rock.
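Written out in standard nuclear notation (a worked check of the bookkeeping, not text reproduced from the article), these reactions are:

```latex
^{20}_{10}\mathrm{Ne} + \mathrm{n} \;\rightarrow\; ^{21}_{10}\mathrm{Ne} + \gamma
\qquad
^{24}_{12}\mathrm{Mg} + \mathrm{n} \;\rightarrow\; ^{21}_{10}\mathrm{Ne} + \alpha
\qquad
^{25}_{12}\mathrm{Mg} + \mathrm{n} \;\rightarrow\; ^{22}_{10}\mathrm{Ne} + \alpha
% Mass numbers balance (20 + 1 = 21, 24 + 1 = 21 + 4, 25 + 1 = 22 + 4), as do the charges (10, 12 = 10 + 2).
```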
Types
Because nucleogenic isotopes have been produced later than the birth of the solar system (and the nucleosynthetic events that preceded it), nucleogenic isotopes, by definition, are not primordial nuclides. However, nucleogenic isotopes should not be confused with much more common radiogenic nuclides that are also younger than primordial nuclides, but which arise as simple daughter isotopes from radioactive decay. Nucleogenic isotopes, as noted, are the result of a more complicated nuclear reaction, although such reactions may begin with a radioactive decay event.
Alpha particles that produce nucleogenic reactions come from natural alpha particle emitters in uranium and thorium decay chains. Neutrons to produce nucleogenic nuclides may be produced by a number of processes, but due to the short half-life of free neutrons, all of these reactions occur on Earth. Among the most common is the production of neutrons by cosmic ray spallation of elements near the surface of the Earth. Alpha particles emitted in some radioactive decays also produce neutrons by knocking them out of neutron-rich isotopes, as in the reaction of alpha particles with oxygen-18. Neutrons are also produced by neutron emission (a form of radioactive decay in some neutron-rich nuclides) and spontaneous fission of fissile isotopes on Earth (particularly uranium-235).
Nucleogenesis
Nucleogenesis (also known as nucleosynthesis) as a general phenomenon is a process usually associated with production of nuclides in the Big Bang or in stars, by nuclear reactions there. Some of these neutron reactions (such as the r-process and s-process) involve absorption by atomic nuclei of high-temperature (high energy) neutrons from the star. These processes produce most of the chemical elements in the universe heavier than zirconium (element 40), because nuclear fusion processes become increasingly inefficient and unlikely for elements heavier than this. By convention, such heavier elements produced in normal elemental abundance, are not referred to as "nucleogenic". Instead, this term is reserved for nuclides (isotopes) made on Earth from natural nuclear reactions.
Also, the term "nucleogenic" by convention excludes artificially produced radionuclides, for example tritium, many of which are produced in large amounts by a similar artificial processes, but using the copious neutron flux produced by conventional nuclear reactors.
References
Nuclear physics
Radiation
Isotopes
Metrology
Radioactivity | Nucleogenic | [
"Physics",
"Chemistry"
] | 709 | [
"Nuclear fission",
"Isotopes",
"Astrophysics",
"Nucleosynthesis",
"Nuclear physics",
"Nuclear fusion",
"Radioactivity"
] |
8,560,057 | https://en.wikipedia.org/wiki/Fox%20Fur%20Nebula | The Fox Fur Nebula is a nebula (a formation of gas and dust) located in the constellation of Monoceros (the Unicorn) not far off the right arm of Orion and included in the NGC 2264 Region. In the Sharpless catalog it is number 273.
The nebula is a small section of a much larger complex, generally known as the Christmas Tree cluster. The Cone Nebula is also a part of this same cloud.
The red regions of this nebula are caused by hydrogen gas that has been stimulated to emit its own light by the copious ultraviolet radiation coming from the hot, blue stars of the cluster. The blue areas shine by a different process: they are mainly dust clouds that reflect the bluish light of the same stars.
Its popular name arises because the nebula looks like the head of a stole made from the fur of a red fox.
References
External links
The Fox Fur Nebula
APOD: 2015 December 30 - The Fox Fur Nebula
Best of AOP: The Fox Fur Nebula
APOD: 14 March 2005 - The Fox Fur Nebula
Cone and Fox Fur Nebula Region
Fox Fur Nebula Photo
NGC 2264
H II regions
Monoceros
Sharpless objects | Fox Fur Nebula | [
"Astronomy"
] | 244 | [
"Monoceros",
"Constellations"
] |
8,560,647 | https://en.wikipedia.org/wiki/Sentry%202020 | Sentry 2020 is a commercial software program for transparent disk encryption for PCs and PDAs using Microsoft Windows or Windows Mobile. It was developed by Softwinter, Inc. All information stored is encrypted and decrypted each time an application performs a read/write operation on a Sentry volume.
See also
LibreCrypt, an alternative system which also works on both PC and PDAs
Disk encryption
Disk encryption software
Comparison of disk encryption software
References
Cryptographic software
Windows security software
Disk encryption
Cross-platform software | Sentry 2020 | [
"Mathematics"
] | 107 | [
"Cryptographic software",
"Mathematical software"
] |
8,560,736 | https://en.wikipedia.org/wiki/Triosephosphate%20isomerase%20deficiency | Triosephosphate isomerase deficiency is a rare autosomal recessive metabolic disorder which was initially described in 1965.
It is a unique glycolytic enzymopathy that is characterized by chronic haemolytic anaemia, cardiomyopathy, susceptibility to infections, severe neurological dysfunction, and, in most cases, death in early childhood. The disease is exceptionally rare with fewer than 100 patients diagnosed worldwide.
Genetics
Thirteen different mutations in the respective gene, which is located at chromosome 12p13 and encodes the ubiquitous housekeeping enzyme triosephosphate isomerase (TPI), have been discovered so far. TPI is a crucial enzyme of glycolysis and catalyzes the interconversion of dihydroxyacetone phosphate and glyceraldehyde-3-phosphate. A marked decrease in TPI activity and an accumulation of dihydroxyacetone phosphate have been detected in erythrocyte extracts of homozygous (two identical mutant alleles) and compound heterozygous (two different mutant alleles) TPI deficiency patients. Heterozygous individuals are clinically unaffected, even if their residual TPI activity is reduced. Recent work suggests that not a direct inactivation, but an alteration in TPI dimerization might underlie the pathology. This might explain why the disease is rare, but inactive TPI alleles have been detected at higher frequency implicating a heterozygote advantage of inactive TPI alleles.
The most common mutation causing TPI deficiency is TPI Glu104Asp. All carriers of the mutation are descendants of a common ancestor, a person that lived in what is today France or England more than 1000 years ago.
Diagnosis
Treatment
See also
List of hematologic conditions
References
External links
Hereditary hemolytic anemias
Autosomal recessive disorders
Inborn errors of carbohydrate metabolism
Rare diseases | Triosephosphate isomerase deficiency | [
"Chemistry"
] | 403 | [
"Inborn errors of carbohydrate metabolism",
"Carbohydrate metabolism"
] |
8,560,978 | https://en.wikipedia.org/wiki/Triad%20%28anatomy%29 | In the histology of skeletal muscle, a triad is the structure formed by a T tubule with a sarcoplasmic reticulum (SR) known as the terminal cisterna on either side. Each skeletal muscle fiber has many thousands of triads, visible in muscle fibers that have been sectioned longitudinally. (This property holds because T tubules run perpendicular to the longitudinal axis of the muscle fiber.) In mammals, triads are typically located at the A-I junction; that is, the junction between the A and I bands of the sarcomere, which is the smallest unit of a muscle fiber.
Triads form the anatomical basis of excitation-contraction coupling, whereby a stimulus excites the muscle and causes it to contract. A stimulus, in the form of positively charged current, is transmitted from the neuromuscular junction down the length of the T tubules, activating dihydropyridine receptors (DHPRs). Their activation causes 1) a negligible influx of calcium and 2) a mechanical interaction with calcium-conducting ryanodine receptors (RyRs) on the adjacent SR membrane. Activation of RyRs causes the release of calcium from the SR, which subsequently initiates a cascade of events leading to muscle contraction. Contraction follows as calcium binds to troponin, unmasking the binding sites on the actin myofilament that are covered by the troponin-tropomyosin complex and allowing the myosin cross-bridges to connect with the actin.
See also
Diad, a homologous structure in cardiac muscle
References
Histology
Muscular system | Triad (anatomy) | [
"Chemistry"
] | 343 | [
"Histology",
"Microscopy"
] |
8,561,045 | https://en.wikipedia.org/wiki/Bucket-brigade%20device | A bucket brigade or bucket-brigade device (BBD) is a discrete-time analogue delay line, developed in 1969 by F. Sangster and K. Teer of the Philips Research Labs in the Netherlands. It consists of a series of capacitance sections C0 to Cn. The stored analogue signal is moved along the line of capacitors, one step at each clock cycle. The name comes from analogy with the term bucket brigade, used for a line of people passing buckets of water.
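In operation the line behaves like a clocked shift register for analogue samples: on each clock tick every capacitor passes its stored voltage to the next stage, so a line of n stages delays the signal by n clock periods. A minimal behavioural sketch of this is given below (plain Python; the 512-stage length is chosen only to echo the chips mentioned further on).

```python
from collections import deque

class BucketBrigade:
    """Behavioural model of a BBD: one sample advances one stage per clock tick,
    so the total delay is n_stages / clock_rate seconds."""

    def __init__(self, n_stages: int):
        self.stages = deque([0.0] * n_stages, maxlen=n_stages)

    def tick(self, v_in: float) -> float:
        v_out = self.stages[-1]         # voltage reaching the final capacitor
        self.stages.appendleft(v_in)    # new sample enters the first capacitor
        return v_out

bbd = BucketBrigade(n_stages=512)
# The first 512 output samples are the line's initial (zero) charge; an input sample
# only emerges after it has been handed down the full chain of stages.
outputs = [bbd.tick(v) for v in range(600)]
print(outputs[510:515])   # [0.0, 0.0, 0, 1, 2] - the input appears after 512 ticks
```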
In most signal processing applications, bucket brigades have been replaced by devices that use digital signal processing, manipulating samples in digital form. Bucket brigades still see use in specialty applications, such as guitar effects.
A well-known integrated circuit device from around 1976, the Reticon SAD-1024, implemented two 512-stage analog delay lines in a 16-pin DIP. It allowed clock frequencies ranging from 1.5 kHz to more than 1.5 MHz. The SAD-512 was a single delay line version. The Philips Semiconductors TDA1022 similarly offered a 512-stage delay line but with a clock rate range of 5–500 kHz. Other common BBD chips include the Panasonic MN3002, MN3005, MN3007, MN3204 and MN3205, with the primary differences being the available delay time. Some examples of effects units utilizing Panasonic BBDs are the Boss CE-1 Chorus Ensemble and the Yamaha E1010.
In 2009, the guitar effects pedal manufacturer Visual Sound recommissioned production of the Panasonic-designed MN3102 and MN3207 BBD chip.
Despite being analog in their representation of individual signal voltage samples, these devices are discrete in the time domain and thus are limited by the Nyquist–Shannon sampling theorem; both the input and output signals are generally low-pass filtered. The input must be low-pass filtered to avoid aliasing effects, while the output is low-pass filtered for reconstruction. (A low-pass is used as an approximation to the Whittaker–Shannon interpolation formula.)
The concept of the bucket-brigade device led to the charge-coupled device (CCD) developed by Bell Labs for use in digital cameras. The idea of using capacitors to retain a voltage state has older origins and separately led to dynamic random-access memory, where the charges are not propagated, but refreshed, in place.
See also
Switched capacitor
References
Theuwissen, A. (1995). Solid-State Imaging with Charge-Coupled Devices.
Analog circuits | Bucket-brigade device | [
"Engineering"
] | 535 | [
"Analog circuits",
"Electronic engineering"
] |
8,561,047 | https://en.wikipedia.org/wiki/Wingsail | A wingsail, twin-skin sail or double skin sail is a variable-camber aerodynamic structure that is fitted to a marine vessel in place of conventional sails. Wingsails are analogous to airplane wings, except that they are designed to provide lift on either side to accommodate being on either tack. Whereas wings adjust camber with flaps, wingsails adjust camber with a flexible or jointed structure (for hard wingsails). Wingsails are typically mounted on an unstayed spar—often made of carbon fiber for lightness and strength. The geometry of wingsails provides more lift, and a better lift-to-drag ratio, than traditional sails. Wingsails are more complex and expensive than conventional sails.
Introduction
Wingsails are of two basic constructions that create an airfoil, "soft" and "hard", both mounted on an unstayed rotating mast. Whereas hard wingsails are rigid structures that are stowed only upon removal from the boat, soft wingsails can be furled or stowed on board.
L. Francis Herreshoff pioneered a precursor rig that had jib and main, each with a two-ply sail with leading edges attached to a rotating spar. The C Class Catamaran class has been experimenting and refining wingsails in a racing context since the 60s. Englishman, John Walker, explored the use of wingsails in cargo ships and developed the first practical application for sailing yachts in the 1990s. Wingsails have been applied to small vessels, like the Optimist dinghy and Laser, to cruising yachts, and most notably to high-performance multihull racing sailboats, like USA-17. The smallest craft have a unitary wing that is manually stepped. Cruising rigs have a soft rig that can be lowered, when not in use. High-performance rigs are often assembled of rigid components and must be stepped (installed) and unstepped by shore-side equipment.
Camber adjustment
Wingsails change camber (the asymmetry between the top and the bottom surfaces of the aerofoil), depending on tack and wind speed. A wingsail becomes more efficient with greater curvature on the downwind side. Since the windward side changes with each tack, so must sail curvature change. This happens passively on a conventional sail, as it fills in with wind on each tack. On a wingsail, a change in camber requires a mechanism. Wingsails also change camber to adjust for windspeed. On an aircraft, flaps increase the camber or curvature of the wing, raising the maximum lift coefficient—the lift a wing can generate—at lower air speeds (speed of the air passing over it). A wingsail has the same need for camber adjustment, as windspeed changes—a straighter camber curvature as windspeed increases, more curved as it decreases.
Mechanisms for camber adjustment are similar for soft and hard wingsails. Each employs independent leading and trailing airfoil segments that are adjusted independently for camber. More sophisticated rigs allow for variable adjustment of camber with height above the water to account for increased windspeed.
Comparison with conventional sailing rigs
The presence of rigging supporting the mast of a conventional fore-and-aft rig limits sail geometry to shapes that are less efficient than the narrow chord of the wingsail. However, conventional sails are simple to adjust for windspeed by reefing, whereas wingsails typically have a fixed surface area. Conventional sails can be furled easily; some flexible wingsails can be dropped when not in use; rigid wingsails must be removed when exposure to wind is undesirable.
Points of sail
Nielsen summarised the efficiencies of wingsails, compared with conventional sails, for different points of sail, as follows:
Close-hauled: At 30° apparent wind, the wingsail has a 10-degree angle of attack and more lift, compared to the conventional sail plan and its angles of attack of 15° for the jib and 20° for the mainsail.
Beam reach: At 90° apparent wind, the wingsail, positioned across the boat, functions efficiently as a wing, providing forward lift, whereas the jib of the conventional sail plan suffers from being difficult to shape as a wing (the main sail is still relatively efficient).
Broad reach: At 135° apparent wind, the wingsail may be eased in such a manner that it still functions efficiently as a wing, whereas the jib and main sail no longer provide lift—instead they present themselves perpendicular to the wind and provide force from drag only.
References
External links
Marine propulsion
Sailboat components
Sailing rigs and rigging
Wind-powered vehicles | Wingsail | [
"Engineering"
] | 966 | [
"Marine propulsion",
"Marine engineering"
] |
8,561,592 | https://en.wikipedia.org/wiki/Geometry%20of%20interaction | In proof theory, the Geometry of Interaction (GoI) was introduced by Jean-Yves Girard shortly after his work on linear logic. In linear logic, proofs can be seen as various kinds of networks as opposed to the flat tree structures of sequent calculus. To distinguish the real proof nets from all the possible networks, Girard devised a criterion involving trips in the network. Trips can in fact be seen as some kind of operator acting on the proof. Drawing from this observation, Girard described directly this operator from the proof and has given a formula, the so-called execution formula, encoding the process of cut elimination at the level of operators. Subsequent constructions by Girard proposed variants in which proofs are represented as flows, or operators in von Neumann algebras. Those models were later generalised by Seiller's Interaction Graphs models.
One of the first significant applications of GoI was a better analysis of Lamping's algorithm for optimal reduction for the lambda calculus. GoI had a strong influence on game semantics for linear logic and PCF.
Beyond the dynamic interpretation of proofs, geometry of interaction constructions provide models of linear logic, or fragments thereof. This aspect has been extensively studied by Seiller under the name of linear realisability, a version of realizability accounting for linearity.
GoI has been applied to deep compiler optimisation for lambda calculi. A bounded version of GoI dubbed the Geometry of Synthesis has been used to compile higher-order programming languages directly into static circuits.
References
Further reading
GoI tutorial given at Siena 07 by Laurent Regnier, in the Linear Logic workshop,
Proof theory
Philosophical logic
Logic in computer science
Semantics
Linear logic | Geometry of interaction | [
"Mathematics"
] | 346 | [
"Mathematical logic",
"Logic in computer science",
"Proof theory"
] |