15,988,913
https://en.wikipedia.org/wiki/Stripping%20%28chemistry%29
Stripping is a physical separation process in which one or more components are removed from a liquid stream by a vapor stream. In industrial applications the liquid and vapor streams can have co-current or countercurrent flows. Stripping is usually carried out in either a packed or trayed column.

Theory

Stripping works on the basis of mass transfer. The idea is to make conditions favorable for the component, A, in the liquid phase to transfer to the vapor phase. This involves a gas–liquid interface that A must cross. The total amount of A that has moved across this boundary can be defined as the flux of A, NA.

Equipment

Stripping is mainly conducted in trayed towers (plate columns) and packed columns, and less often in spray towers, bubble columns, and centrifugal contactors. Trayed towers consist of a vertical column with liquid flowing in at the top and out at the bottom. The vapor phase enters at the bottom of the column and exits at the top. Inside the column are trays or plates, which force the liquid to flow back and forth horizontally while the vapor bubbles up through holes in the trays. The purpose of these trays is to increase the contact area between the liquid and vapor phases. Packed columns are similar to trayed columns in that the liquid and vapor flows enter and exit in the same manner. The difference is that packed towers contain no trays; instead, packing is used to increase the contact area between the liquid and vapor phases. Many different types of packing are in use, each with its own advantages and disadvantages.

Variables

The variables and design considerations for strippers are many. Among them are the entering conditions, the degree of recovery of the solute needed, the choice of the stripping agent and its flow, the operating conditions, the number of stages, the heat effects, and the type and size of the equipment.
The degree of recovery is often determined by environmental regulations, such as those for volatile organic compounds like chloroform. Steam, air, inert gases, and hydrocarbon gases are frequently used as stripping agents, chosen on the basis of solubility, stability, degree of corrosiveness, cost, and availability. Because stripping agents are gases, operation near the highest temperature and lowest pressure that will maintain the components without vaporizing the liquid feed stream is desired, as this minimizes the required agent flow. As with all other variables, minimizing cost while achieving efficient separation is the ultimate goal. The size of the equipment, and particularly its height and diameter, is important in determining the possibility of flow channeling, which would reduce the contact area between the liquid and vapor streams. If flow channeling is suspected, a redistribution plate is often necessary to, as the name indicates, redistribute the liquid flow evenly and reestablish a higher contact area. As mentioned previously, strippers can be trayed or packed. Packed columns, particularly with random packing, are usually favored for smaller columns with a diameter of less than 2 feet and a packed height of not more than 20 feet. Packed columns can also be advantageous for corrosive fluids, high-foaming fluids, high fluid velocities, and when a particularly low pressure drop is desired. Trayed strippers are advantageous because of their ease of design and scale-up. Structured packing can be used much like trays, even though it may be made of the same material as dumped (random) packing. Using structured packing is a common way to increase separation capacity or to replace damaged trays. Trayed strippers can have sieve, valve, or bubble cap trays, while packed strippers can have either structured or random packing.
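The trade-off between stripping agent flow and stage count described above is often estimated with the Kremser shortcut, a standard separations result (not taken from this article): with stripping factor S = K·V/L (K the vapor–liquid equilibrium ratio, V and L the vapor and liquid molar flows), the fraction of solute removed in N equilibrium stages is (S^(N+1) − S)/(S^(N+1) − 1). A minimal sketch:

```python
def fraction_stripped(S: float, N: int) -> float:
    """Kremser shortcut: fraction of solute removed in N equilibrium
    stages for stripping factor S = K*V/L. Assumes a dilute solute and
    a constant equilibrium ratio K."""
    if abs(S - 1.0) < 1e-12:
        return N / (N + 1)            # limiting case S -> 1
    return (S**(N + 1) - S) / (S**(N + 1) - 1)

# e.g. doubling the vapor rate (S: 1 -> 2) lifts a 3-stage column's
# recovery from 75% to about 93%
recovery = fraction_stripped(2.0, 3)
```

The shortcut shows why both the number of stages and the agent flow appear in the design variable list: recovery rises steeply with either, but agent flow carries an operating cost while stages carry a capital cost.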
Trays and packing are used to increase the contact area over which mass transfer can occur, as mass transfer theory dictates. Packing can vary in material, surface area, flow area, and associated pressure drop. Older-generation packing includes ceramic Raschig rings and Berl saddles. More common packing materials are metal and plastic Pall rings, metal and plastic Białecki rings, and ceramic Intalox saddles. Each packing material of this newer generation improves the surface area, the flow area, and/or the associated pressure drop across the packing. Also important is the ability of the packing material not to stack on top of itself; such stacking drastically reduces the surface area of the material. Recent work on lattice packing designs aims to further improve these characteristics. During operation, monitoring the pressure drop across the column can help determine the performance of the stripper. A change in pressure drop over a significant period of time can indicate that the packing needs to be replaced or cleaned.

Typical applications

Stripping is commonly used in industrial applications to remove harmful contaminants from waste streams. One example is the removal of TBT and PAH contaminants from harbor soils. The soils are dredged from the bottom of contaminated harbors, mixed with water to make a slurry, and then stripped with steam. The cleaned soil and the contaminant-rich steam mixture are then separated. This process is able to decontaminate soils almost completely. Steam is also frequently used as a stripping agent for water treatment. Volatile organic compounds are partially soluble in water and, because of environmental considerations and regulations, must be removed from groundwater, surface water, and wastewater. These compounds can be present because of industrial, agricultural, and commercial activity.
See also

Steam stripping (a similar concept, more specialized to refinery operations)
Continuous distillation
Distillation
Distillation Design
Fractionating column
Packed bed
Steam distillation
Theoretical plate
Stripping Enhanced Distillation

References

Separation processes
Stripping (chemistry)
[ "Chemistry" ]
1,166
[ "Separation processes" ]
15,989,048
https://en.wikipedia.org/wiki/Spacecraft%20Systems%20and%20Controls%20Lab
The Space Systems and Controls Lab (SSCL) is a laboratory based at Iowa State University (ISU) in Ames, Iowa. The SSCL focuses on space systems and supports a large number of independent projects. Within its department, the SSCL also has access to an AABL Wind and Gust Tunnel, an Anechoic Chamber, an Icing Tunnel, a Neutral Buoyancy Tank, a Rotational Diamond Anvil Cell, and a Tornado Simulator.

History

In 2007, the lab was renamed the Space Systems and Controls Lab as new leadership took over and to reflect some of the changes the lab had undergone. The SSCL continues to focus on space systems and has expanded into several new areas, while retaining a strong emphasis on student involvement in both projects and the leadership of the lab. Currently, the lab has four core projects, two active research projects, several capstone projects, and well over 50 students from Electrical, Aerospace, and Mechanical Engineering, as well as students from outside the College of Engineering. The lab is managed by Matthew Nelson, a staff member within the Aerospace Engineering department and the Director of Engineering and Operations for the lab. Funding for the lab is provided by the Aerospace Engineering Department, research grants, and private donations.

Projects

The SSCL has several core projects that are ongoing from year to year. In addition, the SSCL has hosted numerous capstone and independent projects led by students in the lab.

HABET

The longest-running project at the SSCL is the High Altitude Balloon Experiments in Technology (HABET) program. This program has enabled students to design, build, and fly spacecraft to the edge of our atmosphere and back to Earth. The HABET team has flown many experiments, including microgravity experiments, worms, collection of atmospheric data, and high-quality images and videos.
The HABET team has flown over 130 flights, holds an altitude record of 121,793 feet (ASL), has flown payloads of up to 50 lbs, and has continually developed new techniques and hardware for high-altitude balloons.

IJEMS

The ISAT project was never fully funded. In September 1994, an opportunity to fly an experiment aboard the space shuttle was presented. One of the original experiments for the ISAT project was incorporated into a design to be flown aboard the space shuttle in a project called the Iowa Joint Experiment in Microgravity Solidification (IJEMS). The project involved many institutions, including Iowa State University (ISU), the University of Iowa, the Ames Laboratory, the Institute for Physical Research and Technology, Rockwell International, and Space Industries Incorporated. In September 1995, the project was successfully flown on board STS-69. IJEMS had the following attributes:

Microprocessor: 33 MHz 486SLC
Storage media: 3 MB flash memory, FAT-formatted, for executable and data storage
Operating system: DOS
Thermocouples: 32
Solid-state relays: 24
Programming language: C++
Hosted experiments: 4
Available power: 20 A @ 28 V
Pre-flight acceleration testing: 9 g
Smart Can pressure regulation: 1/2 atm

References

External links
Space Systems and Controls Lab
ISU Course Catalog - Aerospace Engineering (AER E)

Iowa State University 1992 establishments in Iowa Aerospace organizations Space Systems and Controls Lab Space exploration
Spacecraft Systems and Controls Lab
[ "Astronomy" ]
672
[ "Space exploration", "Outer space" ]
15,989,478
https://en.wikipedia.org/wiki/Amateur%20radio%20propagation%20beacon
An amateur radio propagation beacon is a radio beacon whose purpose is the investigation of the propagation of radio signals. Most radio propagation beacons use amateur radio frequencies. They can be found on LF, MF, HF, VHF, UHF, and microwave frequencies. Microwave beacons are also used as signal sources to test and calibrate antennas and receivers. The International Amateur Radio Union (IARU) and its member societies coordinate beacons established by radio amateurs.

Transmission characteristics

Most beacons operate in continuous wave (A1A) and transmit their identification (call sign and location). Some of them send long dashes to facilitate signal strength measurement. A small number of beacons transmit Morse code by frequency-shift keying (F1A). A few beacons transmit signals in digital modulation modes, such as radioteletype (F1B) and PSK31 (G1B).

Legality

In the US, unattended beacons on frequencies lower than the 10-meter band (~28 MHz) are not legal.

2200-meter beacons

Amateur experiments in the 2200-meter band (135.7–137.8 kHz) often involve operating temporary beacons.

1750-meter beacons

In the United States and Canada, unlicensed experimenters ("LowFERs") establish low-power beacons on radio frequencies between 160 kHz and 190 kHz.

160-meter beacons

The International Amateur Radio Union Region 2 (North and South America) bandplan for the 160-meter band reserves the range 1999 kHz to 2000 kHz for propagation beacons.

10-meter beacons

Most high frequency radio propagation beacons are found in the 10-meter band (28 MHz), where they are good indicators of sporadic E ionospheric propagation. IARU bandplans allocate specific 28 MHz frequencies to radio propagation beacons.

6-meter beacons

Due to unpredictable and intermittent long-distance propagation, usually achieved by a combination of ionospheric conditions, beacons are very important in providing early warning of 6-meter band (50 MHz) openings.
Beacons traditionally operate in the lower part of the band, in the range 50.000 MHz to 50.080 MHz. IARU Region 1 is encouraging individual beacons to move to 50.4 MHz to 50.5 MHz. In the United States, the Federal Communications Commission (FCC) only permits unattended 6-meter beacon stations to operate between 50.060 and 50.080 MHz. Amateur beacons at 50 MHz have also been used as signal sources for academic propagation research.

4-meter beacons

Several countries in ITU Region 1 have access to frequencies in the 70 MHz region, called the 4-meter band. The band shares many propagation characteristics with 6 meters. The preferred location for beacons is 70.000–70.090 MHz; however, in countries where this segment is not allocated to amateur radio, beacons may operate elsewhere in the band.

United States

Brian Justin, WA1ZMS, of Forest, Virginia, applied to the FCC in January 2010 for an experimental license to operate a propagation beacon on 4 meters. It was approved, and at 1200 UTC on Monday, May 3, 2010, the beacon went operational under the callsign WE9XFT. The beacon sits on Apple Orchard Mountain (4200 feet above sea level), a mountain along the Blue Ridge Parkway in Maidenhead grid square FM07fm, near Bedford, Virginia. Because there is no amateur band at 70 MHz in the United States, the beacon runs 24 hours a day under a non-amateur experimental license. Justin told the ARRL that he had no plans to introduce the 4-meter band to the United States, despite the fact that numerous European governments grant amateurs rights on the band. He said, "This beacon is solely for radio scientific usage as an E-skip detecting device." On 70.005 MHz, WE9XFT transmits 3 kW ERP toward Europe. At the same location, Justin runs a 144 MHz remote-controlled transmitter, WA1ZMS. It is GPS-locked and uses two stacked 5-element Yagis beaming at 60 degrees, with a 500 W transmitter running at 7 kW ERP. Both signals are audible in the United States and Europe.
VHF/UHF beacons

Beacons on 144 MHz and higher frequencies are mainly used to identify tropospheric radio propagation openings. It is not uncommon for VHF and UHF beacons to use directional antennas. Frequencies set aside for beacons on the VHF and UHF bands vary widely in different ITU regions and countries. The beacon sub-bands in the United Kingdom also reflect IARU Region 1 recommendations.

SHF/microwave beacons

In addition to identifying propagation, microwave beacons are also used as signal sources to test and calibrate antennas and receivers. SHF beacons are not as common as beacons on the lower bands, and beacons above the 3-centimeter band (10 GHz) are unusual.

Beacon projects

Most radio propagation beacons are operated by individual radio amateurs or amateur radio societies and clubs. As a result, there are frequent additions to and deletions from the lists of beacons. There are, however, a few major projects coordinated by organizations such as the International Amateur Radio Union (IARU).

IARU Beacon Project

The International Beacon Project (IBP), which is coordinated by the Northern California DX Foundation and the International Amateur Radio Union, consists of 18 high frequency propagation beacons worldwide, which transmit in turns on 14.100 MHz, 18.110 MHz, 21.150 MHz, 24.930 MHz, and 28.200 MHz.

DARC Beacon Project

The Deutscher Amateur-Radio-Club sponsors two beacons which transmit from Scheggerott, near Kiel. These beacons are DRA5 on 5195 kHz and DK0WCY on 10144 kHz. In addition to identification and location, every 10 minutes these beacons transmit solar and geomagnetic bulletins. Transmissions are in Morse code for aural reception, as well as RTTY and PSK31. DK0WCY also operates a limited-service beacon on 3579 kHz at 0720–0900 and 1600–1900 local time.

RSGB 5 MHz Beacon Project

The Radio Society of Great Britain operates a radio propagation beacon, GB3ORK, on 5290 kHz, transmitting every 15 minutes commencing at 2 minutes past the hour.
It is located in the Orkney Islands.

The GB3RAL VHF Beacon Cluster

GB3RAL, which is located at the Rutherford Appleton Laboratory, transmits continuously on a number of low-band and mid-band VHF frequencies (40050, 50050, 60050, and 70050 kHz) as well as on 28215 kHz in the 10-meter amateur band.

Weak Signal Propagation Reporter Network (WSPR)

A large-scale beacon project is underway using the WSPR transmission scheme included with the WSJT software suite. The loosely coordinated beacon transmitters and receivers, collectively known as the WSPRnet, report the real-time propagation characteristics of a number of frequency bands and geographical locations via the Internet. The WSPRnet website provides detailed propagation report databases and real-time graphical maps of propagation paths.

Synchronized Beacon Project

The Synchronized Beacon Project (SBP) is an effort to deploy coordinated beacon transmitters on 50 MHz using a one-minute transmitting sequence of PI4, CW, and unmodulated carrier. Since modern beacon transmitters are multi-mode and frequency-agile, beacons that normally transmit in other time-multiplexed modes such as WSPR can take part in the SBP when not transmitting in their primary mode. Beacons alternating between frequencies on the same band should sign CALL/S when transmitting on the SBP frequency to ensure unique entries in band-specific propagation report databases.

See also

Ionosonde
Electric beacon
OZ7IGY, the world's oldest beacon

Notes and references

Further reading

IARU/NDXF International Beacon Project

Beacons Amateur radio
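The "transmit in turns" scheme of the IBP described above can be sketched as a time-multiplexed rotation in which each of the 18 beacons transmits for 10 seconds per band, stepping to the next higher band each slot, so the full cycle repeats every 3 minutes. The placeholder call signs and exact slot ordering below are illustrative assumptions, not the official schedule:

```python
# Sketch of an IBP-style rotation (assumed details: 10 s slots, 180 s cycle,
# each beacon stepping upward in frequency one band per slot).
BEACONS = [f"beacon-{i:02d}" for i in range(18)]   # placeholder call signs
BANDS_MHZ = [14.100, 18.110, 21.150, 24.930, 28.200]

def active_beacon(band_mhz: float, seconds_into_cycle: int) -> str:
    """Which beacon is transmitting on a band at a moment in the cycle."""
    band_index = BANDS_MHZ.index(band_mhz)
    slot = (seconds_into_cycle % 180) // 10        # current 10-second slot
    return BEACONS[(slot - band_index) % len(BEACONS)]
```

Under this scheme, the beacon that opens the cycle on 14.100 MHz reappears on 28.200 MHz forty seconds later, which is what lets a single monitoring receiver sample all 18 paths on all five bands within three minutes.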
Amateur radio propagation beacon
[ "Physics" ]
1,632
[ "Physical phenomena", "Spectrum (physical sciences)", "Radio frequency propagation", "Electromagnetic spectrum", "Waves" ]
15,990,144
https://en.wikipedia.org/wiki/Libin%20Cardiovascular%20Institute
The Libin Cardiovascular Institute is an entity of Alberta Health Services and the University of Calgary. It connects all cardiovascular research, education, and patient care in Southern Alberta, serving a population of about two million. Its more than 1,500 members include physicians, clinicians and other health professionals, researchers, and trainees. The Libin Cardiovascular Institute was made possible through the donation of founding donors Mona and Alvin Libin. On March 6, 2003, the Alvin and Mona Libin Foundation presented $15 million to Alberta Health Services and the University of Calgary to form the Libin Cardiovascular Institute, then the largest one-time donation to the organizations. The institute was formally created on January 27, 2004. The Foundation renewed its commitment to the Institute in May 2022 with a $7.5 million donation.

Research

Research within the Libin Cardiovascular Institute extends from basic biomedical and clinical research to health outcomes and care delivery research. Notable successes include:

A global change in the treatment of arrhythmia as a result of trials led by D. George Wyse
The APPROACH database and Heart Alert
An innovative STEMI protocol resulting in a mean time to percutaneous coronary intervention of a commendable 62 minutes
The Stephenson Cardiovascular MR Centre, ranking first internationally among CMR centres as measured by research impact factor points, and first in North America among CMR centres as measured by volume of patient studies; a recent publication documented, for the first time, the imaging of salvaged heart muscle as a result of a post-MI intervention
A White Paper on myocarditis, with lead author Dr. Matthias Friedrich of the Stephenson CMR Centre, which is to date the only White Paper ever published by the Journal of the American College of Cardiology
The highest 30-day myocardial infarction survival rate in Canada, according to the Canadian Institute for Health Information

Education

Programs under the jurisdiction of the Libin Cardiovascular Institute include Cardiology and Cardiovascular Surgery, in addition to contributions to other medical programs as well as graduate studies in the sciences. The LCI also offers fellowships and/or advanced training in interventional cardiology, electrophysiology, amyloidosis, heart function, and cardiac MRI.

Sites

The Libin Cardiovascular Institute is a wide-ranging program of cardiovascular integration which houses a growing list of scientists, clinicians, and researchers from various sites working together to advance the cardiovascular health of Albertans.

Health Research Innovation Centre (HRIC) contains a new hub for the Libin Cardiovascular Institute's basic scientists. The space, co-located on the same campus as the Foothills Medical Centre, opened in June 2009. Elements of different University of Calgary institutes occupy the various floors and areas of the building so as to encourage research integration.
Teaching Research & Wellness Building (TRW), opened in Q3 2009, houses scientists focused on translational research. Directly connected to the primary area of HRIC, its spaces have been constructed to encourage interaction between basic and clinical researchers.
South Health Campus, a $1.5B project completed in 2013, offers a full suite of services relating to cardiovascular health.
Rockyview General Hospital
Peter Lougheed Centre
Alberta Children's Hospital

Notable people

Eldon Smith OC, FRCPC - Officer of the Order of Canada, penultimate Editor-in-Chief of the Canadian Journal of Cardiology, chair of the steering committee responsible for developing a new Heart Health Strategy to fight heart disease in Canada.
D.
George Wyse MD, PhD, FRCPC - Professor Emeritus, University of Calgary
Alvin Libin LLD - Officer of the Order of Canada, Member of the Alberta Order of Excellence, Chair of the Libin Foundation
Dr. Todd Anderson, MD, FRCPC - former director of the Libin Cardiovascular Institute and dean of the Cumming School of Medicine at the University of Calgary
Dr. Paul Fedak, MD, PhD - director of the Libin Cardiovascular Institute, cardiac surgeon, translational scientist, and senior medical leader at the University of Calgary

Libin/AHFMR Prize in Cardiovascular Research

The Alberta Heritage Foundation for Medical Research (AHFMR) Prize for Excellence in Cardiovascular Research was established in honour of Mr. Alvin Libin for his many contributions to the AHFMR (now Alberta Innovates). This $25,000 prize is awarded to an outstanding international researcher whose work has had a major impact on the understanding, prevention, recognition, or treatment of cardiovascular disease. Past winners include:

2019 Robert Califf
2018 Christine Seidman
2016 Eric N. Olson
2012 Eric Topol
2010 A. John Camm
2008 Valentín Fuster
2006 James T. Willerson, The Texas Heart Institute
2004 Eugene Braunwald

References

Sources and external links

Libin Cardiovascular Institute - official website

Magnetic resonance imaging Medical and health organizations based in Alberta Cardiac electrophysiology Heart disease organizations University of Calgary
Libin Cardiovascular Institute
[ "Chemistry" ]
974
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
15,990,232
https://en.wikipedia.org/wiki/Thermal%20management%20of%20high-power%20LEDs
High power light-emitting diodes (LEDs) can use 350 milliwatts or more in a single LED. Most of the electricity in an LED becomes heat rather than light – about 70% heat and 30% light. If this heat is not removed, the LEDs run at high temperatures, which not only lowers their efficiency but also makes the LED less reliable and shortens its lifespan. Thermal management of high power LEDs is therefore a crucial area of research and development. Keeping both the junction temperature and the phosphor particle temperature low is required to guarantee the desired LED lifetime. Thermal management is a universal problem tied to power density, which rises both at higher powers and in smaller devices. Many lighting applications seek to combine a high light flux with an extremely small light-emitting substrate, making LED thermal management particularly acute.

Heat transfer procedure

In order to maintain a low junction temperature and keep an LED performing well, every method of removing heat from LEDs should be considered. Conduction, convection, and radiation are the three means of heat transfer. Typically, LEDs are encapsulated in a transparent polyurethane-based resin, which is a poor thermal conductor. Nearly all heat produced is conducted through the back side of the chip. Heat is generated at the p–n junction by electrical energy that was not converted to useful light, and is conducted to the outside ambient along a long path: from junction to solder point, solder point to board, and board to heat sink and then to the atmosphere. A typical LED side view and its thermal model are shown in the figures. The junction temperature will be lower if the thermal impedance is smaller and, likewise, if the ambient temperature is lower. To maximize the useful ambient temperature range for a given power dissipation, the total thermal resistance from junction to ambient must be minimized.
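The junction-to-ambient path just described can be modeled as a chain of thermal resistances in series, giving T_j = T_a + P · ΣR_th. The sketch below uses illustrative resistance values (assumptions, not vendor data) to estimate the junction temperature:

```python
# Series thermal-resistance model of the junction-to-ambient path.
# All resistance values below are illustrative assumptions.
def junction_temp(p_diss_w: float, t_ambient_c: float, r_th_c_per_w) -> float:
    """T_j = T_a + P * sum(R_th) for a series junction-to-ambient path."""
    return t_ambient_c + p_diss_w * sum(r_th_c_per_w)

path_c_per_w = [
    8.0,   # R_JC, junction to solder point (assumed mid-range value)
    1.5,   # thermal interface material
    2.0,   # MCPCB spreading
    6.0,   # heat sink to ambient
]
t_j = junction_temp(3.0, 25.0, path_c_per_w)   # 3 W LED at 25 degC ambient
```

With these assumed values the junction sits at 77.5 °C; because the resistances simply add, halving any single link in the chain lowers T_j in direct proportion to that link's share of the total.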
The values for the thermal resistance vary widely depending on the material or component supplier. For example, RJC ranges from 2.6 °C/W to 18 °C/W, depending on the LED manufacturer. The thermal interface material's (TIM) thermal resistance will also vary depending on the type of material selected. Common TIMs are epoxy, thermal grease, pressure-sensitive adhesive, and solder. Power LEDs are often mounted on metal-core printed circuit boards (MCPCBs), which in turn are attached to a heat sink. Heat conducted through the MCPCB and heat sink is dissipated by convection and radiation. In the package design, the surface flatness and quality of each component, the applied mounting pressure, the contact area, and the type of interface material and its thickness are all important parameters for thermal resistance design.

Passive thermal designs

Some considerations in passive thermal design for high power LED operation include:

Adhesive

Adhesive is a thermally conductive interface layer commonly used to bond the LED to the board, and the board to the heat sink, to further optimize thermal performance. Current commercial adhesives are limited by relatively low thermal conductivity, ~1 W/(m·K).

Heat sink

Heat sinks provide a path for heat from the LED source to the outside medium. Heat sinks can dissipate power in three ways:
conduction - heat transfer from one solid to another
convection - heat transfer from a solid to a moving fluid, which for most LED applications will be air
radiation - heat transfer between two bodies of different surface temperatures through thermal radiation

Other heat sink design factors include:
Material – The thermal conductivity of the material that the heat sink is made from directly affects its dissipation efficiency through conduction. Normally this is aluminum, although copper may be used with advantage for flat-sheet heat sinks.
New materials include thermoplastics, used when heat dissipation requirements are lower than normal or when a complex shape would benefit from injection molding, and natural graphite solutions, which offer better thermal transfer than copper at a lower weight than aluminum, plus the ability to be formed into complex two-dimensional shapes. Graphite is considered an exotic cooling solution and comes at a higher production cost. Heat pipes may also be added to aluminum or copper heat sinks to reduce spreading resistance.
Shape – Thermal transfer takes place at the surface of the heat sink, so heat sinks should be designed to have a large surface area. This goal can be reached by using a large number of fine fins or by increasing the size of the heat sink itself. Although a larger surface area leads to better cooling performance, there must be sufficient space between the fins to generate a considerable temperature difference between the fin and the surrounding air. When the fins stand too close together, the air between them can reach almost the same temperature as the fins, so that thermal transmission no longer occurs. Therefore, more fins do not necessarily lead to better cooling performance.
Surface finish – Thermal radiation from heat sinks is a function of surface finish, especially at higher temperatures. A painted surface has greater emissivity than a bright, unpainted one. The effect is most remarkable with flat-plate heat sinks, where about one-third of the heat is dissipated by radiation. Moreover, a perfectly flat contact area allows the use of a thinner layer of thermal compound, which reduces the thermal resistance between the heat sink and the LED source. On the other hand, anodizing or etching also decreases the thermal resistance.
Mounting method – Heat-sink mountings with screws or springs are often better than regular clips, thermally conductive glue, or sticky tape.
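The surface-area point above can be made concrete with the usual convective-resistance approximation R = 1/(h·A). The sketch below estimates the fin area needed for a given heat load and sink-to-ambient rise; the convection coefficient h is an assumed typical still-air value, not a measured one:

```python
# Rough natural-convection sizing: R_conv = 1 / (h * A), so the area needed
# to shed P watts at a sink-to-ambient rise dT is A = P / (h * dT).
# h = 10 W/(m^2 K) is an assumed typical value for still air.
def required_area_m2(p_diss_w: float, t_sink_c: float, t_ambient_c: float,
                     h_w_m2k: float = 10.0) -> float:
    return p_diss_w / (h_w_m2k * (t_sink_c - t_ambient_c))

area = required_area_m2(5.0, 60.0, 25.0)   # 5 W LED, 35 degC rise
```

For this example the sink needs on the order of 0.014 m² (about 140 cm²) of effective surface, which is why fine fins are used to pack that area into a small volume; the caveat about fin spacing still applies, since crowded fins lower the effective h.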
For heat transfer between LED sources of over 15 watts and LED coolers, it is recommended to use a highly thermally conductive interface material (TIM), which will create a thermal resistance across the interface of lower than 0.2 K/W. Currently, the most common solution is a phase-change material, which is applied in the form of a solid pad at room temperature but changes to a thick, gelatinous fluid once it rises above 45 °C.

Heat pipes and vapor chambers

Heat pipes and vapor chambers are passive and have effective thermal conductivities ranging from 10,000 to 100,000 W/(m·K). They can provide the following benefits in LED thermal management:
Transporting heat to a remote heat sink with minimal temperature drop
Isothermalizing a natural convection heat sink, increasing its efficiency and reducing its size; in one case, adding five heat pipes reduced the heat sink mass by 34%, from 4.4 kg to 2.9 kg
Efficiently transforming the high heat flux directly under an LED into a lower heat flux that can be removed more easily

PCB - printed circuit board

MCPCB – Metal-core PCBs are boards that incorporate a metal base material as a heat spreader, as an integral part of the circuit board. The metal core usually consists of an aluminum or copper alloy. Furthermore, MCPCBs can incorporate a dielectric polymer layer with high thermal conductivity to reduce thermal resistance.
Separation – Separating the LED drive circuitry from the LED board prevents the heat generated by the driver from raising the LED junction temperature.

Thick-film materials system

Additive process – Thick film is a selective additive deposition process that uses material only where it is needed. A more direct connection to the aluminum heat sink is provided, so no thermal interface material is needed for circuit building. The heat-spreading layers and thermal footprint are reduced. Processing steps are reduced, along with the number and amount of materials consumed.
Insulated aluminum materials system – Increases thermal connectivity and provides high dielectric breakdown strength. Materials can be fired at less than 600 °C. Circuits are built directly onto aluminum substrates, eliminating the need for thermal interface materials. Through improved thermal connectivity, the junction temperature of the LED can be decreased by up to 10 °C. This allows the designer either to decrease the number of LEDs needed on a board by increasing the power to each LED, or to decrease the size of the substrate to manage dimensional restrictions. It has also been shown that decreasing the junction temperature of the LED dramatically improves the LED's lifetime.

Package type

Flip chip – The concept is similar to the flip-chip package configuration widely used in the silicon integrated circuit industry. Briefly, the LED die is assembled face down on the sub-mount, usually silicon or ceramic, which acts as the heat spreader and supporting substrate. The flip-chip joint can be eutectic, high-lead, or lead-free solder, or gold stud bumps. The primary light output comes from the back side of the LED chip, and there is usually a built-in reflective layer between the light emitter and the solder joints to reflect upward the light that is emitted downward. Several companies have adopted flip-chip packages for their high-power LEDs, achieving about a 60% reduction in the thermal resistance of the LED while maintaining its thermal reliability.

LED filament

The LED filament style of lamp combines many relatively low-power LEDs on a transparent glass substrate, coated with phosphor and then encapsulated in silicone. The lamp bulb is filled with an inert gas, which convects heat away from the extended array of LEDs to the envelope of the bulb. This design avoids the need for a large heat sink.
Active thermal designs

Approaches that use active thermal designs to manage heat during high-power LED operation include the following.

Thermoelectric (TE) device

Thermoelectric devices are a promising candidate for thermal management of high-power LEDs owing to their small size and fast response. A TE device made of two ceramic plates can be integrated into a high-power LED and adjust its temperature through heat conduction while providing electrical insulation. Since ceramic TE devices tend to have a coefficient of thermal expansion mismatch with the silicon substrate of the LED, silicon-based TE devices have been developed to replace traditional ceramic TE devices. Silicon's higher thermal conductivity (149 W/(m·K), compared with 30 W/(m·K) for aluminum oxide) also gives silicon-based TE devices better cooling performance than traditional ceramic TE devices.

The cooling effect of thermoelectric materials depends on the Peltier effect. When an external current is applied to a circuit composed of n-type and p-type thermoelectric units, the current drives carriers in the thermoelectric units from one side to the other, and heat flows along with the carriers. Since the direction of heat transfer depends on the applied current, thermoelectric materials can function as a cooler when the current drives carriers from the heated side to the other side.

A typical silicon-based TE device has a sandwich structure: thermoelectric materials are sandwiched between two substrates made of high-thermal-conductivity materials, with n-type and p-type thermoelectric units connected sequentially in series as the middle layer. When a high-power LED generates heat, the heat first transfers through the top substrate to the thermoelectric units.
With an applied external current, the heat is then forced to flow to the bottom substrate through the thermoelectric units so that the temperature of the high-power LED can be kept stable.

Liquid cooling system

Cooling systems using liquids such as liquid metals and water also actively manage a high-power LED's temperature. Liquid cooling systems are made up of a driving pump, a cold plate, and a fan-cooled radiator. The heat generated by a high-power LED first transfers to the liquid through a cold plate. The liquid, driven by a pump, then circulates in the system to absorb the heat. Lastly, a fan-cooled radiator cools the heated fluid for the next circulation. The circulation of the liquid manages the temperature of the high-power LED.

See also
LED lamp – solid state lighting (SSL)
Thermal resistance in electronics
Thermal management (electronics)
Active cooling
Synthetic jet

References

External links
Thermal Management of Cree® XLamp® LEDs
LED Thermal Management
Thermal management of Osram Soleriq COB LED modules

Light-emitting diodes
Optical diodes
Semiconductor technology
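The net heat a TE module pumps from its cold side can be sketched with the standard Peltier balance: the Peltier term minus half the Joule heating minus conduction back across the module. All parameter values below are illustrative assumptions, not figures from the text:

```python
def peltier_cooling_power(seebeck_v_per_k, current_a, t_cold_k,
                          resistance_ohm, k_w_per_k, dt_k):
    """Net heat pumped from the cold side of a TE module:
    Q_c = S*I*T_c - 0.5*I^2*R - K*dT."""
    return (seebeck_v_per_k * current_a * t_cold_k
            - 0.5 * current_a ** 2 * resistance_ohm
            - k_w_per_k * dt_k)

# Assumed module: S = 0.05 V/K, I = 3 A, cold side at 300 K,
# R = 2 ohm, thermal conductance K = 0.5 W/K, 10 K across the module.
q = peltier_cooling_power(0.05, 3.0, 300.0, 2.0, 0.5, 10.0)
print(round(q, 6))  # 45 - 9 - 5 = 31.0 W
```

Raising the current increases the Peltier term linearly but the Joule term quadratically, which is why every TE module has an optimum drive current.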
Thermal management of high-power LEDs
[ "Materials_science" ]
2,490
[ "Semiconductor technology", "Microtechnology" ]
15,990,281
https://en.wikipedia.org/wiki/Post-and-plank
The method of building wooden buildings with a traditional timber frame with horizontal plank or log infill has many names, the most common of which are pièce sur pièce (French; also used to describe log building), corner post construction, post-and-plank, Ständerbohlenbau (German) and skiftesverk (Swedish). This traditional building method is believed to be the predecessor of half-timber construction, widely known by its German name Fachwerkbau, which has wall infill of wattle and daub, brick, or stone. This carpentry was used from parts of Scandinavia to Switzerland to western Russia. Though relatively rare now, two types are found in a number of regions in North America: more common are walls with planks or timbers which slide in a groove in the posts, and less common is a type where horizontal logs are tenoned into individual mortises in the posts. This method is not the same as the plank-frame buildings in North America with vertical plank walls.

Other names

French: Pièce sur pièce poteaux et pièce coulissante (piece on piece sliding in a groove), pièce sur pièce en coulisse, poteaux et pièce coulissante, pièces sur pièces, poiteau cannale, poteaux sur soles
German (including southern Germany, Switzerland, and Austria): Blockständerbau (log frame construction), Ständerblockbau (frame log construction), Ständerbohlenbau (post plank construction), Bohlenständerbau (plank post construction), and sometimes Bohlenwand
Polish: sumikowo-łątkowa (planks - sumiki, sumikami, palcami; post - łątki)
English: Section plank wall, corner-post log construction, corner posting technique, corner posting, post cornering, vertical-post log construction, post and log, post and panel, Red River frame, Hudson's Bay style, Hudson's Bay corners, Rocky Mountain frame, Manitoba frame, "Métis" style, "French" style, slotted post construction, grooved post, post and fill, panel construction, section panel, and running mortise and tenon (or tongue).
Danish: bulhus (bole house, meaning plank house)
Italian: a ritti e panconi

European history

"The support of horizontal timbers by corner posts is an old form of construction in Europe. It was apparently carried across much of the continent from Silesia by the Lausitz urnfield culture in the late Bronze Age." The Lausitz culture is also known as the Lusatian culture, and within their territory is an archaeological site and archaeological open-air museum at Biskupin, Poland, where remnants of such structures were found and reconstructed. The structures found dated from 747 to 722 B.C. and are similar in concept to pièce sur pièce construction. This historic carpentry is known in southern Sweden (skiftesverk), particularly Gotland, where it is also known as bulhus; Germany; Poland, including Silesia; Bohemia (Czech Republic); Hungary; Lithuania; Switzerland; and Austria. In 2018, an oak well structure assembled in a post-and-plank method was unearthed in the Czech Republic, near Ostrov, Pardubice Region, during motorway construction. The wood was well preserved, as it was submerged in water. Its age was established by the dendrochronological method of tree-ring dating: the oak trees used to build the well were felled in 5256/55 BC, and started growing in 5481 BC, during the Early Neolithic period, more than 7,000 years ago. "The shape of the individual structural elements and tool marks preserved on their surface confirm sophisticated carpentry skills," researchers note.

North American history

Some researchers believe this building method was introduced to the United States by Alpine-Alemannic Germans or Swiss, and to Canada by French fur trappers working for the Hudson's Bay Company. Others, who have studied the development of house building in New France, believe that the method was developed endemically in Canada as a local adaptation of the half-timbered house, spreading from Québec to the Pacific through the Hudson's Bay Company.
The Hudson's Bay Company adopted this style for most of its outposts all the way to the Pacific coast. Some examples of surviving houses of this structural type are the circa 1809 Cray House in Stevensville, Maryland, the 1832 Jacob Highbarger House in Maryland, and the George Diehl Homestead.

Red River frame was a popular name for the post-and-plank construction technique used in the Red River Colony in the 19th century. The building style was characterized by a dressed timber structure with a horizontal log infill. The spaces between the logs were filled or 'chinked' with clay and straw. The exterior would either be whitewashed with a limestone/water plaster mixture or, in later years, covered by board siding. This style was popular because it could use smaller trees for logs; the longest trees needed were for the vertical logs. The Farm Manager's House at Lower Fort Garry, the William Brown House at the Historical Museum of St James—Assiniboia, the historical fur warehouse at Fort St. James National Historic Site of Canada, and Riel House in Winnipeg, Manitoba are excellent examples of Red River frame construction.

In southeastern Pennsylvania, numerous log houses feature corner post construction. In many cases, these houses feature diagonal bracing that resembles the half-timbered architecture of Europe. In Lancaster County, Pennsylvania, it is estimated that about a quarter of log houses are corner post construction.

See also
Slab hut – Australian English for vertical plank wall construction
Plank house – Native American plank buildings

References

External links
Detailed study of corner post construction in Pennsylvania, U.S.A. with bibliography
Paper including information on Hungarian barns, some of which are corner post construction
The Hoyle House also has diagonal bracing

House styles
Timber framing
Vernacular architecture
Structural system
Post-and-plank
[ "Technology", "Engineering" ]
1,230
[ "Structural system", "Structural engineering", "Timber framing", "Building engineering" ]
15,991,162
https://en.wikipedia.org/wiki/In%20vitro%20compartmentalization
In vitro compartmentalization (IVC) is an emulsion-based technology that generates cell-like compartments in vitro. These compartments are designed such that each contains no more than one gene. When the gene is transcribed and/or translated, its products (RNAs and/or proteins) become 'trapped' with the encoding gene inside the compartment. By coupling the genotype (DNA) and phenotype (RNA, protein), compartmentalization allows the selection and evolution of phenotype.

History

The in vitro compartmentalization method was first developed by Dan Tawfik and Andrew Griffiths. Based on the idea that Darwinian evolution relies on the linkage of genotype to phenotype, Tawfik and Griffiths designed aqueous compartments of water-in-oil (w/o) emulsions to mimic cellular compartments that can link genotype and phenotype. Emulsions of cell-like compartments were formed by adding an in vitro transcription/translation reaction mixture to stirred mineral oil containing surfactants. The mean droplet diameter was measured to be 2.6 μm by laser diffraction. As a proof of concept, Tawfik and Griffiths designed a selection experiment using a pool of DNA sequences, including the gene encoding HaeIII DNA methyltransferase (M.HaeIII) in the presence of a 10⁷-fold excess of genes encoding a different enzyme, folA. The 3′ end of each DNA sequence was purposely designed to contain a HaeIII recognition site which, in the presence of expressed methyltransferase, would be methylated and thus resistant to restriction enzyme digestion. By selecting for DNA sequences that survived the endonuclease digestion, Tawfik and Griffiths found that the M.HaeIII genes were enriched by at least 1000-fold over the folA genes within the first round of selection.

Method

Emulsion technology

Water-in-oil (w/o) emulsions are created by mixing aqueous and oil phases with the help of surfactants.
A typical IVC emulsion is formed by first generating an oil-surfactant mixture by stirring, and then gradually adding the aqueous phase to the oil-surfactant mixture. For stable emulsion formation, a mixture of high-HLB (hydrophile-lipophile balance) and low-HLB surfactants is needed. Some combinations of surfactants used to generate the oil-surfactant mixture are mineral oil / 0.5% Tween 80 / 4.5% Span 80 / sodium deoxycholate and a more heat-stable version, light mineral oil / 0.4% Tween 80 / 4.5% Span 80 / 0.05% Triton X-100. The aqueous phase containing transcription and/or translation components is slowly added to the oil surfactants, and the formation of the w/o emulsion is facilitated by homogenizing, stirring or using a hand-extruding device. The emulsion quality can be determined by light microscopy and/or dynamic light scattering techniques. The emulsion is quite diverse, and greater homogenization speeds help to produce smaller droplets with a narrower size distribution. However, homogenization speed has to be controlled, since speeds over 13,500 r.p.m. tend to result in a significant loss of enzyme activity at the level of transcription. The most widely used emulsion formulation gives droplets with a mean diameter of 2-3 μm and an average volume of ~5 femtoliters, or 10¹⁰ aqueous droplets per ml of emulsion. The ratio of genes to droplets is designed such that, statistically, most of the droplets contain no more than a single gene.

In vitro transcription/translation

IVC enables the miniaturization of large-scale techniques that can now be done on the micro scale, including coupled in vitro transcription and translation (IVTT) experiments. Streamlining and integrating transcription and translation allows for fast and highly controllable experimental designs. IVTT can be done both in bulk emulsions and in microdroplets by utilizing droplet-based microfluidics. Microdroplets, droplets on the scale of picoliters to femtoliters, have been successfully used as single-DNA-molecule vessels.
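The "no more than a single gene per droplet" design follows from Poisson statistics of random encapsulation: for a mean occupancy λ (genes per droplet), the probability of a droplet holding k genes is e^(-λ) λ^k / k!. A short sketch, with λ = 0.3 as an illustrative value not taken from the text:

```python
import math

def droplet_occupancy(mean_genes_per_droplet):
    """Poisson probabilities that a droplet holds 0, exactly 1, or >1 genes."""
    lam = mean_genes_per_droplet
    p0 = math.exp(-lam)            # empty droplet
    p1 = lam * math.exp(-lam)      # exactly one gene
    return p0, p1, 1.0 - p0 - p1   # remainder: two or more genes

p0, p1, p_multi = droplet_occupancy(0.3)
print(round(p0, 3), round(p1, 3), round(p_multi, 3))
```

At λ = 0.3 only about 4% of droplets carry more than one gene, at the cost of ~74% of droplets being empty; lowering λ further trades throughput for cleaner genotype-phenotype linkage.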
This droplet technology allows high-throughput analysis with many different selection pressures in a single experimental setup. IVTT in microdroplets is preferred when overexpression of a desired protein would be toxic to a host cell, minimizing the utility of its transcription and translation machinery. IVC has used bacterial cell, wheat germ and rabbit reticulocyte lysate (RRL) extracts for transcription and translation. It is also possible to use a bacterial reconstituted translation system such as PURE, in which translation components are individually purified and later combined. When expressing eukaryotic or complex proteins, it is desirable to use eukaryotic translation systems such as wheat germ extract or the superior alternative, RRL extract. In order to use RRL for transcription and translation, the traditional emulsion formulation cannot be used, as it abolishes translation. Instead, a novel emulsion formulation, 4% Abil EM90 / light mineral oil, was developed and demonstrated to be functional in expressing luciferase and human telomerase.

Breaking emulsion and coupling of genotype and phenotype

Once transcription and/or translation has completed in the droplets, the emulsion is broken by successive steps of removing mineral oil and surfactants to allow for subsequent selection. At this stage, it is crucial to have a method to 'track' each gene product to its encoding gene, as they become free-floating in a heterogeneous population of molecules. There are three major approaches to trace each phenotype back to its genotype. The first method is to attach to each DNA molecule a biotin group and an additional coding sequence for streptavidin (STABLE display). All the newly formed proteins/peptides will be in fusion with streptavidin molecules and bind to their biotinylated coding sequence.
An improved version attached two biotin molecules to the ends of a DNA molecule to increase the avidity between the DNA molecule and streptavidin-fused peptides, and used a low-GC-content synthetic streptavidin gene to increase efficiency and specificity during PCR amplification. The second method is to covalently link DNA and protein. Two strategies have been demonstrated. The first is to form M.HaeIII fusion proteins. Each expressed protein/polypeptide will be in fusion with a HaeIII DNA methyltransferase domain, which is able to bind covalently to DNA fragments containing the sequence 5′-GGC*-3′, where C* is 5-fluoro-2′-deoxycytidine. The second strategy is to use a monomeric mutant of the VirD2 enzyme. When a protein/peptide is expressed in fusion with the Agrobacterium protein VirD2, it will bind to its DNA coding sequence that has a single-stranded overhang comprising VirD2 T-border recognition sequences. The third method is to link phenotype and genotype via beads. The beads used will be coated with streptavidin to allow for the binding of biotinylated DNA; in addition, the beads will also display the cognate binding partner of the affinity tag that will be expressed in fusion with the protein/peptide.

Selection

Depending on the phenotype to be selected, different selection strategies will be used. Selection strategies can be divided into three major categories: selection for binding, selection for catalysis and selection for regulation. The phenotype to be selected can range from RNA to peptide to protein. In selecting for binding, the most commonly evolved phenotypes are peptides/proteins that have selective affinity to a specific antibody or DNA molecule. An example is the selection of proteins that have affinity to zinc finger DNA by Sepp et al. In selecting for catalytic proteins/RNAs, new variants with novel or improved enzymatic properties are usually isolated. For example, new ribozyme variants with trans-ligase activity were selected and exhibited multiple turnovers.
By selecting for regulation, inhibitors of DNA nucleases can be selected, such as protein inhibitors of the Colicin E7 DNase.

Advantages

Compared to other in vitro display technologies, IVC has two major advantages. The first advantage is its ability to control reactions within the droplets. Hydrophobic and hydrophilic components can be delivered to each droplet in a step-wise fashion without compromising the chemical integrity of the droplet, and thus, by controlling what is added and when, the reaction in each droplet is controlled. In addition, depending on the nature of the reaction to be carried out, the pH of each droplet can also be changed. More recently, photocaged substrates were used, and their participation in a reaction was regulated by photo-activation. The second advantage is that IVC allows the selection of catalytic molecules. As an example, Griffiths et al. were able to select for phosphotriesterase variants with higher kcat by detecting product formation and amount using an anti-product antibody and flow cytometry, respectively.

Related technologies
CIS display
Phage display
Bacterial display
Yeast display
Ribosome display
mRNA display

References

Biotechnology
In vitro compartmentalization
[ "Biology" ]
1,919
[ "nan", "Biotechnology" ]
15,991,282
https://en.wikipedia.org/wiki/Missions%20H%C3%A9liographiques
Missions Héliographiques was a 19th-century project to photograph landmarks and monuments around France so that they could be restored. The project was established by Prosper Mérimée, France's Inspector General of Historical Monuments and author of Carmen, in 1851. The intent was to supplement Monument historique, a program Mérimée started in 1837 to classify, protect and restore French landmarks. Mérimée hired Edouard Baldus, Hippolyte Bayard, Gustave Le Gray, Henri Le Secq and Auguste Mestral to carry out the photography, with the aim that architect Eugène Viollet-le-Duc could eventually restore the monuments. Although the daguerreotype originated in France, Mérimée preferred the calotype, which offered more detailed textures. Mestral and Le Gray photographed areas southwest of Paris, Le Secq the north and east. Bayard, who chose to work with glass negatives instead of paper, went west to Brittany and Normandy. Baldus covered the south and east, including the Palace of Fontainebleau. While several of the images are classic examples of early photography, the overall results did not meet requirements, often portraying the decaying buildings artistically and obscuring their need for restoration.

References

Photography in France
Photographic collections
Architectural history
Missions Héliographiques
[ "Engineering" ]
259
[ "Architectural history", "Architecture" ]
15,991,422
https://en.wikipedia.org/wiki/Embryonic%20hemoglobin
The human embryonic haemoglobins were discovered in 1961. These include Hb Gower 1, consisting of 2 zeta chains and 2 epsilon chains, and Hb Gower 2, which consists of 2 alpha chains and 2 epsilon chains, the zeta and epsilon chains being the embryonic haemoglobin chains. Embryonic hemoglobin is a tetramer produced in the blood islands of the embryonic yolk sac during the mesoblastic stage (from the 3rd week of pregnancy until 3 months). The protein is commonly referred to as hemoglobin ε. Chromosomal abnormalities can lead to a delay in switching from embryonic hemoglobin.

Hemoglobin Gower 1

Hemoglobin Gower 1 (also referred to as ζ₂ε₂ or HbE Gower-1) is a form of hemoglobin existing only during embryonic life, and is the primary embryonic hemoglobin. It is composed of two zeta chains and two epsilon chains, and is relatively unstable, breaking down easily.

Hemoglobin Gower 2

Hemoglobin Gower 2 (also referred to as α₂ε₂ or HbE Gower-2) is a form of hemoglobin existing at low levels during embryonic and fetal life. It is composed of two alpha chains and two epsilon chains, and is somewhat unstable, though not as much as hemoglobin Gower 1. Due to its relative stability compared to hemoglobin Gower 1 and hemoglobin S, it has been proposed as a subject for reactivation in the adult in cases of severe β thalassemia and hemoglobinopathies in subjects for which the reactivation of hemoglobin F is contraindicated due to toxicity concerns.

Hemoglobin Portland I

Hemoglobin Portland I (also referred to as ζ₂γ₂ or HbE Portland-1) is a form of hemoglobin existing at low levels during embryonic and fetal life, composed of two zeta chains and two gamma chains.

Hemoglobin Portland II

Hemoglobin Portland II (also referred to as ζ₂β₂ or HbE Portland-2) is a form of hemoglobin existing at low levels during embryonic and fetal life, composed of two zeta chains and two beta chains. It is quite unstable, more so than even hemoglobin Gower 1, and breaks down very rapidly under stress.
Despite this, it has been proposed as a candidate for reactivation in cases of severe α thalassemia or hemoglobinopathies afflicting the alpha chain.

References

Hemoglobins
Embryonic hemoglobin
[ "Chemistry" ]
558
[ "Biochemistry stubs", "Protein stubs" ]
15,991,827
https://en.wikipedia.org/wiki/Biotone
A biotone is a biogeographical region characterized not by distinctive biota but rather by a distinctive transition from one set of biota to another. Biotones often contain the limits of distribution of the biota of neighbouring regions. They are especially useful in marine biogeography, where the movement of water may result in substantial overlap in the floral and faunal components of adjacent regions. In such cases, the region of overlap is considered a biotone. A simple example would be mid-latitude waters where tropical and temperate waters mix. This region is a biotone characterized by the transition between tropical and temperate waters, and it would contain both tropical and temperate biota. Tropical biota that do not extend into temperate areas would be at the limit of their range in this biotone, and vice versa. The co-occurrence of biota that are normally distinct can result in unusual ecological relationships.

References

Biogeography
Biotone
[ "Biology" ]
185
[ "Biogeography" ]
15,991,830
https://en.wikipedia.org/wiki/St%C3%B8rmer%20number
In mathematics, a Størmer number or arc-cotangent irreducible number is a positive integer n for which the greatest prime factor of n² + 1 is greater than or equal to 2n. They are named after Carl Størmer.

Sequence

The first few Størmer numbers are:
1, 2, 4, 5, 6, 9, 10, 11, 12, 14, 15, 16, 19, 20, ...

Density

John Todd proved that this sequence is neither finite nor cofinite. More precisely, the natural density of the Størmer numbers lies between 0.5324 and 0.905. It has been conjectured that their natural density is the natural logarithm of 2, approximately 0.693, but this remains unproven. Because the Størmer numbers have positive density, the Størmer numbers form a large set.

Application

The Størmer numbers arise in connection with the problem of representing the Gregory numbers (arctangents of rational numbers) as sums of Gregory numbers for integers (arctangents of unit fractions). A Gregory number may be decomposed by repeatedly multiplying the Gaussian integer x + i by numbers of the form n ± i, in order to cancel prime factors p from the imaginary part; here n is chosen to be a Størmer number such that n² + 1 is divisible by p.

References

Eponymous numbers in mathematics
Integer sequences
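The defining test (greatest prime factor of n² + 1 at least 2n) is easy to sketch directly with trial-division factoring:

```python
def greatest_prime_factor(n):
    """Largest prime factor of n, by trial division."""
    p, largest = 2, 1
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return n if n > 1 else largest  # leftover n > 1 is itself prime

def is_stormer(n):
    """n is a Størmer number iff gpf(n^2 + 1) >= 2n."""
    return greatest_prime_factor(n * n + 1) >= 2 * n

print([n for n in range(1, 21) if is_stormer(n)])
# [1, 2, 4, 5, 6, 9, 10, 11, 12, 14, 15, 16, 19, 20]
```

For example, 3 fails because 3² + 1 = 10 = 2 · 5 and 5 < 6, while 5 passes because 26 = 2 · 13 and 13 ≥ 10.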
Størmer number
[ "Mathematics" ]
258
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
15,992,030
https://en.wikipedia.org/wiki/Gregory%20number
In mathematics, a Gregory number, named after James Gregory, is a real number of the form

G(x) = arctan(1/x),

where x is any rational number greater than or equal to 1. Considering the power series expansion for arctangent, we have

G(x) = sum over i ≥ 0 of (-1)^i / ((2i + 1) x^(2i+1)).

Setting x = 1 gives the well-known Leibniz formula for pi. Thus, in particular, π/4 = arctan(1) is a Gregory number.

Properties

See also
Størmer number

References

Sets of real numbers
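The series above, G(x) = arctan(1/x) = Σ (-1)^i / ((2i+1) x^(2i+1)), can be sketched directly as a partial sum; term counts are illustrative:

```python
import math

def gregory_number(x, terms=100):
    """Partial sum of arctan(1/x) = sum_{i>=0} (-1)^i / ((2i+1) * x^(2i+1))."""
    return sum((-1) ** i / ((2 * i + 1) * x ** (2 * i + 1))
               for i in range(terms))

# For x > 1 the series converges geometrically; a few dozen terms suffice.
print(abs(gregory_number(2, 60) - math.atan(0.5)) < 1e-12)  # True

# For x = 1 (the Leibniz series for pi/4) convergence is very slow:
# the error after N terms is bounded by the first omitted term, 1/(2N+1).
print(abs(gregory_number(1, 200000) - math.pi / 4) < 1e-5)  # True
```

The contrast between the two calls illustrates why the Leibniz series, though elegant, is useless for computing π to many digits.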
Gregory number
[ "Mathematics" ]
85
[ "Number theory stubs", "Number theory" ]
15,993,881
https://en.wikipedia.org/wiki/1000%20Genomes%20Project
The 1000 Genomes Project (1KGP), which took place from January 2008 to 2015, was an international research effort to establish the most detailed catalogue of human genetic variation at the time. Scientists planned to sequence the genomes of at least one thousand anonymous healthy participants from a number of different ethnic groups within the following three years, using advances in newly developed technologies. In 2010, the project finished its pilot phase, which was described in detail in a publication in the journal Nature. In 2012, the sequencing of 1092 genomes was announced in a Nature publication. In 2015, two papers in Nature reported the results and completion of the project, along with opportunities for future research. Many rare variations, restricted to closely related groups, were identified, and eight structural-variation classes were analyzed. The project united multidisciplinary research teams from institutes around the world, including China, Italy, Japan, Kenya, Nigeria, Peru, the United Kingdom, and the United States, contributing to the sequence dataset and to a refined human genome map freely accessible through public databases to the scientific community and the general public alike. The International Genome Sample Resource was created to host and expand on the data set after the project's end.

Background

Since the completion of the Human Genome Project, advances in human population genetics and comparative genomics have enabled further insight into genetic diversity. The understanding of structural variations (insertions/deletions (indels), copy number variations (CNV), retroelements), single-nucleotide polymorphisms (SNPs), and natural selection was being solidified, and the diversity of human genetic variation, such as indels, was being uncovered.

Natural selection

The project also aimed to provide evidence that can be used to explore the impact of natural selection on population differences.
Patterns of DNA polymorphisms can be used to reliably detect signatures of selection and may help to identify genes that might underlie variation in disease resistance or drug metabolism. Such insights could improve understanding of phenotypic variation, genetic disorders, Mendelian inheritance, and their effects on the survival and/or reproduction of different human populations.

Project description

Goals

The 1000 Genomes Project was designed to bridge the gap in knowledge between rare genetic variants that have a severe effect predominantly on simple traits (e.g. cystic fibrosis, Huntington disease) and common genetic variants that have a mild effect and are implicated in complex traits (e.g. cognition, diabetes, heart disease). The primary goal of this project was to create a complete and detailed catalogue of human genetic variations, which can be used for association studies relating genetic variation to disease. The consortium aimed to discover >95% of the variants (e.g. SNPs, CNVs, indels) with minor allele frequencies as low as 1% across the genome and 0.1-0.5% in gene regions, as well as to estimate the population frequencies, haplotype backgrounds and linkage disequilibrium patterns of variant alleles. Secondary goals included the support of better SNP and probe selection for genotyping platforms in future studies and the improvement of the human reference sequence. The completed database was expected to be a useful tool for studying regions under selection, variation in multiple populations, and the underlying processes of mutation and recombination.

Outline

The human genome consists of approximately 3 billion DNA base pairs and is estimated to carry around 20,000 protein-coding genes. In designing the study, the consortium needed to address several critical issues regarding the project metrics, such as technology challenges, data quality standards and sequence coverage.
Over the course of the next three years, scientists at the Sanger Institute, BGI Shenzhen and the National Human Genome Research Institute's Large-Scale Sequencing Network planned to sequence a minimum of 1,000 human genomes. Due to the large amount of sequence data that was required, the option of recruiting additional participants was kept open. Almost 10 billion bases were to be sequenced per day over the two-year production phase, equating to more than two human genomes every 24 hours. The intended sequence dataset was to comprise 6 trillion DNA bases, 60-fold more sequence data than had been published in DNA databases at the time. To determine the final design of the full project, three pilot studies were to be carried out within the first year of the project. The first pilot intended to genotype 180 people from 3 major geographic groups at low coverage (2×). For the second pilot study, the genomes of two nuclear families (both parents and an adult child) were to be sequenced with deep coverage (20× per genome). The third pilot study involved sequencing the coding regions (exons) of 1,000 genes in 1,000 people with deep coverage (20×). It was estimated that the project would likely cost more than $500 million if standard DNA sequencing technologies were used. Several newer technologies (e.g. Solexa, 454, SOLiD) were to be applied, lowering the expected costs to between $30 million and $50 million. The major support was provided by the Wellcome Trust Sanger Institute in Hinxton, England; the Beijing Genomics Institute, Shenzhen (BGI Shenzhen), China; and the NHGRI, part of the National Institutes of Health (NIH). In keeping with the Fort Lauderdale principles, all genome sequence data (including variant calls) was made freely available as the project progressed and can be downloaded via FTP from the 1000 Genomes Project webpage.
Human genome samples

Based on the overall goals for the project, the samples will be chosen to provide power in populations where association studies for common diseases are being carried out. Furthermore, the samples do not need to have medical or phenotype information since the proposed catalogue will be a basic resource on human variation. For the pilot studies, human genome samples from the HapMap collection will be sequenced. It will be useful to focus on samples that have additional data available (such as ENCODE sequence, genome-wide genotypes, fosmid-end sequence, structural variation assays, and gene expression) to be able to compare the results with those from other projects. Complying with extensive ethical procedures, the 1000 Genomes Project will then use samples from volunteer donors. The following populations will be included in the study: Yoruba in Ibadan (YRI), Nigeria; Japanese in Tokyo (JPT); Chinese in Beijing (CHB); Utah residents with ancestry from northern and western Europe (CEU); Luhya in Webuye, Kenya (LWK); Maasai in Kinyawa, Kenya (MKK); Toscani in Italy (TSI); Peruvians in Lima, Peru (PEL); Gujarati Indians in Houston (GIH); Chinese in metropolitan Denver (CHD); people of Mexican ancestry in Los Angeles (MXL); and people of African ancestry in the southwestern United States (ASW).

* Population that was collected in diaspora

Community meeting

Data generated by the 1000 Genomes Project is widely used by the genetics community, making the first 1000 Genomes Project paper one of the most cited papers in biology. To support this user community, the project held a community analysis meeting in July 2012 that included talks highlighting key project discoveries, their impact on population genetics and human disease studies, and summaries of other large-scale sequencing studies.
Project findings Pilot phase The pilot phase consisted of three projects: low-coverage whole-genome sequencing of 179 individuals from 4 populations; high-coverage sequencing of 2 trios (mother-father-child); and exon-targeted sequencing of 697 individuals from 7 populations. It was found that, on average, each person carries around 250–300 loss-of-function variants in annotated genes and 50–100 variants previously implicated in inherited disorders. Based on the two trios, it was estimated that the rate of de novo germline mutation is approximately 10⁻⁸ per base per generation. See also Human Genome Project HapMap Project Personal genomics Population groups in biomedicine 1000 Plant Genomes Project List of biological databases References External links 1000 Genomes - A Deep Catalog of Human Genetic Variation - official web page International HapMap Project - official web page Human Genome Project Information Human genome projects Population genetics organizations Single-nucleotide polymorphisms Genome projects Genomics Bioinformatics
1000 Genomes Project
[ "Chemistry", "Engineering", "Biology" ]
1,712
[ "Biological engineering", "Single-nucleotide polymorphisms", "Bioinformatics", "Biodiversity", "Molecular biology", "Genome projects", "Human genome projects" ]
15,994,159
https://en.wikipedia.org/wiki/Underwater%20acoustic%20communication
Underwater acoustic communication is a technique of sending and receiving messages in water. There are several ways of employing such communication, but the most common is by using hydrophones. Underwater communication is difficult due to factors such as multi-path propagation, time variations of the channel, small available bandwidth and strong signal attenuation, especially over long ranges. Compared to terrestrial communication, underwater communication has low data rates because it uses acoustic waves instead of electromagnetic waves. At the beginning of the 20th century some ships communicated by underwater bells, and also used the system for navigation. Submarine signals were at the time competitive with primitive maritime radionavigation. The later Fessenden oscillator allowed communication with submarines. Types of modulation used for underwater acoustic communications In general, the modulation methods developed for radio communications can be adapted for underwater acoustic communications (UAC). However, some modulation schemes are better suited than others to the unique underwater acoustic communication channel. Some of the modulation methods used for UAC are as follows: Frequency-shift keying (FSK) Phase-shift keying (PSK) Frequency-hopping spread spectrum (FHSS) Direct-sequence spread spectrum (DSSS) Frequency and pulse-position modulation (FPPM and PPM) Multiple frequency-shift keying (MFSK) Orthogonal frequency-division multiplexing (OFDM) Continuous phase modulation (CPM) The following is a discussion of the different types of modulation and their utility for UAC. Frequency-shift keying FSK is the earliest form of modulation used for acoustic modems. FSK usually employs two distinct frequencies to modulate data; for example, frequency F1 to indicate bit 0 and frequency F2 to indicate bit 1. Hence a binary string can be transmitted by alternating these two frequencies depending on whether each bit is a 0 or a 1.
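The two-tone scheme just described can be sketched end to end: modulate bits as tone bursts, then recover them by correlating each symbol against both candidate tones (a digital stand-in for a matched-filter receiver). The frequencies, sample rate and symbol length below are illustrative choices, not values from any real acoustic modem:

```python
import numpy as np

# Illustrative binary FSK: F1 carries bit 0, F2 carries bit 1.
# All rates/frequencies are arbitrary example values.
FS = 48_000            # sample rate (Hz)
F1, F2 = 6_000, 9_000  # tone for bit 0, tone for bit 1
SYMBOL = 480           # samples per bit (10 ms)

t = np.arange(SYMBOL) / FS

def modulate(bits):
    """Concatenate one tone burst per bit: F1 for 0, F2 for 1."""
    return np.concatenate([np.sin(2*np.pi*(F2 if b else F1)*t) for b in bits])

def demodulate(signal):
    """Matched-filter style receiver: correlate each symbol with both tones."""
    bits = []
    for i in range(0, len(signal), SYMBOL):
        chunk = signal[i:i+SYMBOL]
        e1 = abs(np.dot(chunk, np.sin(2*np.pi*F1*t)))
        e2 = abs(np.dot(chunk, np.sin(2*np.pi*F2*t)))
        bits.append(1 if e2 > e1 else 0)
    return bits

tx = [1, 0, 1, 1, 0, 0, 1]
noise = 0.2 * np.random.default_rng(0).normal(size=len(tx)*SYMBOL)
rx = demodulate(modulate(tx) + noise)
print(rx)  # → [1, 0, 1, 1, 0, 0, 1]
```

Because each tone completes an integer number of cycles per symbol, the two templates are orthogonal over the symbol window, so the larger correlation reliably identifies the transmitted tone even with added noise.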
The receiver can be as simple as analogue matched filters for the two frequencies and a level detector to decide whether a 1 or 0 was received. This is a relatively easy form of modulation and was therefore used in the earliest acoustic modems, although more sophisticated demodulators using digital signal processors (DSPs) are used today. The biggest challenge FSK faces in the UAC is multi-path reflections. With multi-path (particularly severe in UAC), several strong reflections can be present at the receiving hydrophone and confuse the threshold detectors, thus largely limiting the use of this type of UAC to vertical channels. Adaptive equalization methods have been tried with limited success. Adaptive equalization tries to model the highly reflective UAC channel and subtract its effects from the received signal. Success has been limited by the rapidly varying conditions and the difficulty of adapting in time. Phase-shift keying Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing (modulating) the phase of a reference signal (the carrier wave). Data are encoded by varying the sine and cosine (in-phase and quadrature) inputs to the carrier at precise times. PSK is widely used for wireless LANs, RFID and Bluetooth communication. Orthogonal frequency-division multiplexing Orthogonal frequency-division multiplexing (OFDM) is a digital multi-carrier modulation scheme. OFDM conveys data on several parallel data channels by incorporating closely spaced orthogonal sub-carrier signals. OFDM is a favorable communication scheme in underwater acoustic communications thanks to its resilience against frequency-selective channels with long delay spreads. Continuous phase modulation Continuous phase modulation (CPM) is a modulation technique in which the phase of the carrier signal varies continuously over time, avoiding abrupt changes between successive symbols.
This smooth phase trajectory reduces spectral side lobes. Reducing spectral side lobes increases the spectral efficiency of CPM and enables it to transmit data within a narrower bandwidth. Notable variants of CPM include minimum-shift keying (MSK) and Gaussian minimum-shift keying (GMSK), which uses a Gaussian filter to smooth out phase shifts. Since the underwater environment is highly scattering, it can cause multipath propagation and signal degradation. CPM's continuous-phase feature mitigates these effects and helps maintain signal integrity. In addition, its high spectral efficiency helps make optimal use of the limited bandwidth available underwater. Use of vector sensors Compared to a scalar pressure sensor, such as a hydrophone, which measures the scalar acoustic field component, a vector sensor measures vector field components such as acoustic particle velocities. Vector sensors can be categorized into inertial and gradient sensors. Vector sensors have been widely researched over the past few decades, and many vector sensor signal processing algorithms have been designed. Underwater vector sensor applications have focused on sonar and target detection. They have also been proposed for use as underwater multi-channel communication receivers and equalizers. Other researchers have used arrays of scalar sensors as multi-channel equalizers and receivers. Applications Underwater telephone The underwater telephone, also known as UQC, AN/WQC-2, or Gertrude, was used by the U.S. Navy from 1945, after different realizations had been demonstrated at sea in Kiel, Germany, in 1935. The terms UQC and AN/WQC-2 follow the nomenclature of the Joint Electronics Type Designation System. The type designation "UQC" stands for General Utility (multi use), Sonar and Underwater Sound, and Communications (Receiving/Transmitting, two way). The "W" in WQC stands for Water Surface and Underwater combined.
The underwater telephone is used on all crewed submersibles and many Naval surface ships in operation. Voice or an audio tone (morse code) communicated through the UQC are heterodyned to a high pitch for acoustic transmission through water. JANUS In April 2017, NATO's Centre for Maritime Research and Experimentation announced the approval of JANUS, a standardized protocol to transmit digital information underwater using acoustic sound (like modems and fax machines do over telephone lines). Documented in STANAG 4748, it uses 900 Hz to 60 kHz frequencies at distances of up to . It is available for use with military and civilian, NATO and non-NATO devices; it was named after the Roman god of gateways, openings, etc. The JANUS specification (ANEP-87) provides for a flexible plug-in-based payload scheme. A baseline JANUS packet consists of 64 bits to which further arbitrary data (Cargo) can be appended. This enables multiple different applications such as Emergency location, Underwater AIS (Automatic Identification System), and Chat. An example of an Emergency Position and Status message is the following JSON representation:{ "ClassUserID": 0, "ApplicationType": 3, "Nationality": "PT", "Latitude": "38.386547", "Longitude": "-9.055858", "Depth": "16", "Speed": "1.400000", "Heading": "0.000000", "O2": "17.799999", "CO2": "5.000000", "CO": "76.000000", "H2": "3.500000", "Pressure": "45.000000", "Temperature": "21.000000", "Survivors": "43", "MobilityFlag": "1", "ForwardingCapability": "1", "TxRxFlag": "0", "ScheduleFlag": "0" } This Emergency Position and Status Message (Class ID 0 Application 3 Plug-in) message shows a Portuguese submarine at 38.386547 latitude -9.055858 longitude at a depth of 16 meters. It is moving north at 1.4 meters per second, and has 43 survivors on board and shows the environmental conditions. Underwater messaging Commercial hardware products have been designed to enable two-way underwater messaging between scuba divers. 
These support sending pre-defined messages, selected from a list on a dive computer, using acoustic communication. Research efforts have also explored the use of smartphones in waterproof cases for underwater communication, using acoustic modem hardware as phone attachments as well as using a software app without any additional hardware. The Android software app AquaApp, from the University of Washington, uses the microphones and speakers on existing smartphones and smartwatches to enable underwater acoustic communication. It has been tested to send digital messages using smartphones between divers at distances of up to 100 m. See also Acoustic communication in aquatic animals Acoustic communication in fish Telecommunications References External links DSPComm – underwater acoustic modem manufacturer uWAVE - the smallest underwater acoustic modem NetSim UWAN - underwater acoustic network simulation Telecommunications techniques Acoustics
Underwater acoustic communication
[ "Physics" ]
1,809
[ "Classical mechanics", "Acoustics" ]
15,994,978
https://en.wikipedia.org/wiki/HotHardware
HotHardware is an online publication about computer hardware, consumer electronics and related technologies, mobile computing and PC gaming. It regularly features coverage of new products and technologies from vendors including Intel, Dell, AMD, and NVIDIA. "Daily Hardware Round-ups" also offer reviews and news submitted by other technology-related sites. Content is organized by category and is searchable through a content management system (CMS), with a blog-style comments section for registered users, and a web forum with integrated comments section, topic tagging/filing and a content rating system. Forum members can also take part in contests to win hardware that has been featured on the site. References External links Computing websites Internet properties established in 1999 1999 establishments in the United States
HotHardware
[ "Technology" ]
154
[ "Computing websites" ]
15,995,078
https://en.wikipedia.org/wiki/FMRI%20adaptation
Functional magnetic resonance imaging adaptation (FMRIa) is a method of functional magnetic resonance imaging that measures how the brain's response changes with prolonged or repeated exposure to a stimulus. If Stimulus 1 (S1) excites a certain neuronal population, repeated exposure to S1 will result in subsequently attenuated responses. This adaptation may be due to neural fatigue or coupled hemodynamic processes. However, when S1 is followed by a unique stimulus, S2, the response amplitudes should not be attenuated, as a fresh sub-population of neurons is excited. This technique allows researchers to determine whether the same or distinct neuronal groups are involved in processing two stimuli. Usage This technique has been used successfully in examination of the visual system, particularly orientation, motion, and face recognition. See also Adaptive system Functional magnetic resonance imaging Neural adaptation References Magnetic resonance imaging
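The comparison logic of an adaptation paradigm can be made concrete with a toy simulation: repeated S1 presentations attenuate the response of the adapted population, while a novel S2 recruits a fresh population at full amplitude. The decay factor and amplitudes below are arbitrary illustrative numbers, not empirical BOLD parameters:

```python
# Toy model of fMRI adaptation: repeated presentation of S1 attenuates the
# response of its neuronal population, while a novel stimulus S2 recruits a
# fresh population and responds at full amplitude.  All numbers are arbitrary.
def response(times_seen, full_amplitude=1.0, decay=0.6):
    """Response amplitude for a stimulus already seen `times_seen` times."""
    return full_amplitude * (decay ** times_seen)

trial_sequence = ["S1", "S1", "S1", "S2"]
seen = {}
amplitudes = []
for stim in trial_sequence:
    amplitudes.append(response(seen.get(stim, 0)))
    seen[stim] = seen.get(stim, 0) + 1

print(amplitudes)  # S1 responses shrink across repeats; novel S2 rebounds to 1.0
```

If two stimuli engaged the same neuronal population, S2 would instead inherit the attenuation accumulated by S1, which is exactly the contrast the method exploits.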
FMRI adaptation
[ "Chemistry" ]
180
[ "Nuclear magnetic resonance stubs", "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
15,995,094
https://en.wikipedia.org/wiki/Bhutanese%20art
Bhutanese art (Dzongkha: འབྲུག་པའི་སྒྱུ་རྩལ) is similar to Tibetan art. Both are based upon Vajrayana Buddhism and its pantheon of teachers and divine beings. The major orders of Buddhism in Bhutan are the Drukpa Lineage and the Nyingma. The former is a branch of the Kagyu school and is known for paintings documenting the lineage of Buddhist masters and the 70 Je Khenpos (leaders of the Bhutanese monastic establishment). The Nyingma school is known for images of Padmasambhava ("Guru Rinpoche"), who is credited with introducing Buddhism into Bhutan in the 7th century. According to legend, Padmasambhava hid sacred treasures for future Buddhist masters, especially Pema Lingpa, to find. Tertöns (treasure finders) are also frequent subjects of Nyingma art. Each divine being is assigned special shapes, colors, and/or identifying objects, such as the lotus, conch-shell, thunderbolt, and begging bowl. All sacred images are made to exact specifications that have remained remarkably unchanged for centuries. Bhutanese art is particularly rich in bronzes of different kinds that are collectively known by the name Kham-so (made in Kham), even though they are made in Bhutan, because the technique of making them was originally imported from that region of Tibet. Wall paintings and sculptures in these regions are formulated on the principal ageless ideals of Buddhist art forms. Even though their emphasis on detail is derived from Tibetan models, their origins can be discerned easily, despite the profusely embroidered garments and glittering ornaments with which these figures are lavishly covered. In the grotesque world of demons, the artists apparently had greater freedom of action than when modeling images of divine beings.
The arts and crafts of Bhutan, which represent the exclusive "spirit and identity of the Himalayan kingdom", are defined as the art of Zorig Chosum, which means the "thirteen arts and crafts of Bhutan"; the thirteen crafts include carpentry, painting, paper making, blacksmithing, weaving and sculpting, among others. The Institute of Zorig Chosum in Thimphu is the premier institution of traditional arts and crafts, set up by the Government of Bhutan with the sole objective of preserving the rich culture and tradition of Bhutan and training students in all traditional art forms; there is another similar institution in eastern Bhutan, at Trashi Yangtse. Bhutanese rural life is also displayed in the Folk Heritage Museum in Thimphu. There is also a Voluntary Artists Studio in Thimphu to encourage and promote these art forms among the youth of Thimphu. The thirteen arts and crafts of Bhutan and the institutions established in Thimphu to promote them are described below. Traditional Bhutanese arts In Bhutan, the traditional arts are known as zorig chusum (zo = the ability to make; rig = science or craft; chusum = thirteen). These practices have been gradually developed through the centuries, often passed down within families with long-standing relations to a particular craft, and represent hundreds of years of knowledge and skill handed down through generations. The great 15th-century tertön Pema Lingpa is traditionally credited with introducing the arts into Bhutan. In 1680, Ngawang Namgyal, the Zhabdrung Rinpoche, ordered the establishment of the school for instruction in the thirteen traditional arts. Although the skills existed much earlier, it is believed that the zorig chusum was first formally categorized during the rule of Gyalse Tenzin Rabgye (1680–1694), the 4th Druk Desi (secular ruler). The thirteen traditional arts are: Dezo - Paper making: Handmade paper made mainly from the Daphne plant and gum from a creeper root.
Dozo - Stonework: Stone arts used in the construction of stone pools and the outer walls of dzongs, gompas, stupas and some other buildings. Garzo - Blacksmithing: The manufacture of iron goods, such as farm tools, knives, swords, and utensils. Jinzo - Clay arts: The making of religious statues and ritual objects, pottery, and the construction of buildings using mortar, plaster, and rammed earth. Lhazo - Painting: From the images on thangkas, wall paintings, and statues to the decorations on furniture and window-frames. Lugzo - Bronze casting: Production of bronze roof-crests, statues, bells, and ritual instruments, in addition to jewelry and household items, using sand casting and lost-wax casting. Larger statues are made by repoussé. Parzo - Wood, slate, and stone carving: In wood, slate or stone, for making such items as printing blocks for religious texts, masks, furniture, altars, and the slate images adorning many shrines and altars. Shagzo - Woodturning: Making a variety of bowls, plates, cups, and other containers. Shingzo - Woodworking: Employed in the construction of dzongs and gompas. Thagzo - Weaving: The production of some of the most intricately woven fabrics produced in Asia. Trözo - Silver- and gold-smithing: Working in gold, silver, and copper to make jewelry, ritual objects, and utilitarian household items. Tshazo - Cane and bamboo work: The production of such varied items as bows and arrows, baskets, drinks containers, utensils, musical instruments, fences, and mats. Tshemazo - Needlework: Working with needle and thread to make clothes, boots, or the most intricate of appliqué thangkas. Characteristics of Bhutanese arts Articles for everyday use are still fashioned today as they were centuries ago. Traditional artisanship is handed down from generation to generation. Bhutan's artisans are skilled workers in metals, wood and slate carving, and clay sculpture. Artifacts made of wood include bowls and dishes, some lined with silver.
Elegant yet strong woven bamboo baskets, mats, hats, and quivers find both functional and decorative usage. Handmade paper is prepared from tree bark by a process passed down through the ages. Each region has its specialties: raw silk comes from eastern Bhutan, brocade from Lhuntshi (Kurtoe), woolen goods from Bumthang, bamboo wares from Kheng, woodwork from Tashi Yangtse, gold and silver work from Thimphu, and yak-hair products from the north or the Black Mountains. Most Bhutanese art objects are produced for the use of the Bhutanese themselves. Except for goldsmiths, silversmiths, and painters, artisans are peasants who produce these articles and fabrics in their spare time, with the surplus production being sold. Most products, particularly fabrics, are relatively expensive. In the highest qualities, every step of production is performed by hand, from dyeing hanks of thread or hacking down bamboo in the forest, to weaving or braiding the final product. The time spent in producing handicrafts is considerable and can involve as much as two years for some woven textiles. At the same time, many modern innovations are also used for less expensive items, especially modern dyes and yarns; Bhutan must be one of the few places where hand-woven polyester garments can be bought. Products Textiles Bhutanese textiles are a unique art form inspired by nature, made in the form of clothing, crafts and different types of pots in an eye-catching blend of colour, texture, pattern and composition. This art form is witnessed all over Bhutan, and in Thimphu in the daily life of its people. It is also a significant cultural exchange garment that is gifted to mark occasions of birth and death, auspicious functions such as weddings and professional achievements, and in greeting dignitaries. Each region has its own special designs of textiles, either made of vegetable-dyed wool known as yathra or pure silk called kishuthara.
It is the women, belonging to a small community, who weave these textiles as a household handicrafts heritage. Paintings Most Bhutanese art, including ‘Painting in Bhutanese art’, known as lhazo, is invariably religion centric. These are made by artists without inscribing their names on them. The paintings encompass various types including the traditional thangkas, which are scroll paintings made in “highly stylised and strict geometric proportions” of Buddhist iconography that are made with mineral paints. Most houses in Bhutan have religious and other symbolic motifs painted inside their houses and also on the external walls. Sculptures The art of making religious sculptures is unique in Bhutan and hence very popular in the Himalayan region. The basic material used for making the sculptures is clay, which is known as jinzob. The clay statues of Buddhist religious icons, made by well-known artists of Bhutan, embellish various monasteries in Bhutan. This art form of sculpture is taught to students by professional artists at the Institute of Zorig Chosum in Thimphu. Paper making Handmade paper known as deysho is in popular usage in Bhutan and it is durable and insect resistant. The basic material used is the bark of the Daphne plant. This paper is used for printing religious texts; traditional books are printed on this paper. It is also used for packaging gifts. Apart from handmade paper, paper factories in Bhutan also produce ornamental art paper with designs of flower petals, and leaves, and other materials. For use on special occasions, vegetable dyed paper is also made. Wood carving Wood carving known as Parzo is a specialised and ancient art form, which is significantly blended with modern buildings in the resurgent Bhutan. Carved wood blocks are used for printing religious prayer flags that are seen all over Bhutan in front of monasteries, on hill ridges and other religious places. Carving is also done on slate and stone. 
The wood that is used for carving is seasoned for at least one year prior to carving. Sword making The art of sword making falls under the tradition of garzo (blacksmithing), an art form that is used to make all metal implements such as swords, knives, chains, darts and so forth. Ceremonial swords are made and gifted to people who are honoured for their achievements. These swords are to be sported by men on all special occasions. Children wear a traditional short knife known as the dudzom. Terton Pema Lingpa, a religious treasure hunter from central Bhutan, was the most famous sword maker in Bhutan. Boot making It is not uncommon to see Bhutan's traditional boots made of cloth. The cloth is hand-stitched, embroidered and appliquéd with Bhutanese motifs. They are worn on ceremonial occasions (mandatory); the colours used on the boot denote the rank and status of the person wearing it. In the pecking order, ministers wear orange, senior officials wear red and the common people wear white boots. This art form has been revived at the Institute of Zorig Chosum in Thimphu. Women also wear boots, but of shorter length, reaching just above the ankle. Bamboo craft Bamboo craft made with cane and bamboo is known as tshazo. It is made in many rural communities in many regions of Bhutan. A few special items of this art form are the belo and the bangchung, popularly known as the Bhutanese "Tupperware" basket, made in various sizes. Baskets of varying sizes are used in the homes and for travel on horseback, and as flasks for the local drink called arra. Bow and arrow making To meet the growing demand for bows and arrows used in the national sport of archery, bamboo bows and arrows are made by craftsmen using specific types of bamboo and mountain reeds. The bamboo used is selected during particular seasons, shaped to size and skilfully made into the bow and arrow. Thimphu has the Changlimithang Stadium & Archery Ground, where archery is a special sport.
Jewellery Intricate jewellery with motifs, made of silver and gold, is much sought after by the women of Bhutan. The traditional jewellery made in Bhutan includes heavy bracelets, komas (fasteners attached to the kira, the traditional dress of Bhutanese women), loop earrings set with turquoise, and necklaces inlaid with gemstones such as antique turquoise, coral beads and the zhi stone. The zhi stone is considered a prized possession as it is said to have "protective powers"; this stone has black and white spiral designs called "eyes". The zhi is also said to be an agate made into beads. Institutions National Institute of Zorig Chusum The National Institute of Zorig Chusum is the centre for Bhutanese art education. Painting is the main theme of the institute, which provides 4–6 years of training in Bhutanese traditional art forms. The curricula cover a comprehensive course of drawing, painting, wood carving, embroidery, and carving of statues. Images of the Buddha are a popular subject of painting here. Handicrafts emporiums There is a large government-run emporium close to the National Institute of Zorig Chusum, which deals in exquisite handicrafts, traditional arts and jewellery; gho and kira, the national dress of Bhutanese men and women, are available in this emporium. The town has many other privately owned emporiums which deal in thangkas, paintings, masks, brassware, antique jewellery, painted lama tables known as choektse, drums, Tibetan violins and so forth; Zangma Handicrafts Emporium, in particular, sells handicrafts made at the Institute of Zorig Chosum. Folk Heritage Museum The Folk Heritage Museum in Kawajangsa, Thimphu, is built on the lines of a traditional Bhutanese farmhouse with more-than-100-year-old vintage furniture. It is built as a three-storied structure with rammed-mud walls and wooden doors, windows and a roof covered with slates. It reveals much about Bhutanese rural life.
Voluntary Artists Studio Located in an innocuous building, the Voluntary Artists Studio's objective is to encourage traditional and contemporary art forms among the youth of Thimphu who are keen to imbibe these art forms. The artworks of these young artists are also available for sale in the 'Art Shop Gallery' of the studio. National Textile Museum The National Textile Museum in Thimphu displays various Bhutanese textiles that are extensive and rich in traditional culture. It also exhibits colourful and rare kiras and ghos (traditional Bhutanese dress, kira for women and gho for men). Exhibitions The Honolulu Museum of Art spent several years developing and curating The Dragon's Gift: The Sacred Arts of Bhutan exhibition. After its February–May 2008 showing in Honolulu, the exhibition was to travel in 2008 and 2009 to locations around the world, including the Rubin Museum of Art (New York City), the Asian Art Museum (San Francisco), the Guimet Museum (Paris), the Museum of East Asian Art (Cologne, Germany), and the Museum Rietberg Zürich (Switzerland). Selected examples of Bhutanese art See also Phallus paintings in Bhutan Buddhism in Bhutan Dzong architecture Music of Bhutan Vajrayana Buddhism Eastern art history References Bartholomew, Terese Tse, "The Art of Bhutan", Orientations, Vol. 39, No. 1, Jan./Feb. 2008, 38-44. Bartholomew, Terese Tse, John Johnston and Stephen Little, The Dragon's Gift, the Sacred Arts of Bhutan, Chicago, Serindia Publications, 2008. Johnston, John, "The Buddhist Art of Bhutan", Arts of Asia, Vol. 38, No. 6, Nov./Dec. 2008, 58-68. Mehra, Girish N., Bhutan, Land of the Peaceful Dragon, Delhi, Vikas Publishing House, 1974. Singh, Madanjeet, Himalayan Art: wall-painting and sculpture in Ladakh, Lahaul and Spiti, the Siwalik Ranges, Nepal, Sikkim, and Bhutan, New York, Macmillan, 1971. External links Art and the youth of Bhutan Manuel Valencia, contemporary artist with clear Bhutanese inspiration Textile arts Religious objects Art by country Buddhism in Bhutan
Bhutanese art
[ "Physics" ]
3,325
[ "Religious objects", "Physical objects", "Matter" ]
15,995,191
https://en.wikipedia.org/wiki/Nanoradio
A nanoradio (also called a carbon nanotube radio) is a nanoscale device that acts as a radio transmitter and receiver by using carbon nanotubes. One of the first nanoradios was constructed in 2007 by researchers under Alex Zettl at the University of California, Berkeley, where they successfully transmitted an audio signal. Due to their small size, nanoradios have several possible applications, such as radio function in the bloodstream. History The first observation of a nanoradio can be credited to the Japanese physicist Sumio Iijima, who in 1991 saw "a luminous discharge of electricity" coming from a carbon nanotube on a graphite electrode. On October 31, 2007, a team of researchers under Alex Zettl at the University of California, Berkeley created one of the first nanoradios. Their experiment consisted of placing a multilayered nanotube on a silicon electrode and connecting it to a counter electrode through a wire and a DC battery. Both the electrode and nanotube were put in a vacuum of about 10⁻⁷ Torr. They then placed the apparatus into a high-resolution transmission electron microscope to document the movement of the nanotube. They observed the nanoradio vibrating as it transmitted the song "Layla" by Eric Clapton. After some minor adjustments, the team was able to transmit and receive signals a couple of meters across the laboratory; however, the initial audio receptions from the radio were scratchy, which Zettl believed was due to the lack of a better vacuum. Properties The small size, roughly 10 nanometers wide and hundreds of nanometers long, and composition of nanoradios provide several distinct properties. The small size of nanoradios enables electrons to pass through without much friction, making nanoradios efficient conductors. Nanoradios can also have different numbers of walls; they can be double-walled, triple-walled or multi-walled. Aside from the different wall counts, nanoradios can also take different shapes, such as bent, straight or toroidal.
Common among all nanoradios is how relatively strong they are. This resistance can be attributed to the strength of the bonds between carbon atoms. Function The fundamental parts of a radio are the antenna, tuner, demodulator and amplifier. Carbon nanotubes are special in that they can function as all of these parts without the need for extra circuitry. Antenna The nanoradio is small enough for electromagnetic signals to mechanically vibrate it. The nanoradio essentially acts as an antenna by vibrating at the same frequency as the incoming electromagnetic waves; this is in contrast with traditional radio antennas, which are generally stationary. The nanotube can vibrate at high frequencies, from "thousands to millions of times per second." Tuner The nanoradio can also function as a tuner: extending or reducing the length of the nanotube changes the resonance frequency at which it vibrates, enabling the radio to tune into specific frequencies. The length of the nanotube can be extended by pulling the tip with a positive electrode and can be shortened by removing atoms from the tip. Consequently, changing the length is permanent and cannot be reversed; however, varying the electric field can also shift the frequency to which the nanoradio responds, without being permanent. Amplifier Thanks to its microscopic size and needle-like shape, the nanoradio functions naturally as an amplifier. The nanoradio exhibits field emission, in which a small voltage emits a flow of electrons; due to this, a small electromagnetic wave produces a large flow of electrons, amplifying the signal. Demodulator Demodulation is essentially the separation of the information signal from the carrier wave. When the nanoradio vibrates in sync with the carrier wave, it responds only to the information signal and ignores the carrier wave; and so the nanoradio can act as a demodulator without the need for extra circuitry.
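The length-frequency relationship behind the tuning mechanism can be illustrated with the standard cantilever-beam formula, under which the fundamental resonance scales as 1/L². Treating the nanotube as a uniform hollow cylinder with a Young's modulus of ~1 TPa is a textbook-style simplification with assumed material numbers, not a result from the Berkeley experiment:

```python
import math

# Fundamental flexural resonance of a cantilevered nanotube, modeled as a
# uniform hollow cylinder (Euler-Bernoulli beam).  Material values are rough
# assumptions for illustration only.
E = 1.0e12               # Young's modulus (Pa), ~1 TPa is often quoted for CNTs
RHO = 1.3e3              # density (kg/m^3), assumed
R_OUT, R_IN = 5e-9, 4e-9 # outer/inner radii (m) for a ~10 nm wide tube

A = math.pi * (R_OUT**2 - R_IN**2)      # cross-sectional area
I = math.pi / 4 * (R_OUT**4 - R_IN**4)  # second moment of area

def resonant_freq(length):
    """First cantilever mode: f = (1.875^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A))."""
    return (1.875**2 / (2 * math.pi * length**2)) * math.sqrt(E * I / (RHO * A))

f_long = resonant_freq(500e-9)   # a "long" 500 nm tube
f_short = resonant_freq(250e-9)  # the same tube after trimming atoms off the tip
print(f"{f_long/1e6:.0f} MHz -> {f_short/1e6:.0f} MHz after halving the length")
```

Because f ∝ 1/L², halving the tube's length quadruples its resonant frequency, which is why irreversibly shortening the tube retunes the radio upward.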
Medical applications Currently, chemotherapy uses chemicals that harm not only cancerous cells but also healthy ones, since they are put into the bloodstream. Nanoradios could be used to prevent damage to healthy cells by remotely signalling the radio to release drugs that specifically target cancerous cells. Nanoradios could also be injected into individual cells to release certain chemicals, enabling repair of specific cells, or be used to monitor the insulin levels of diabetes patients and use that information to release a drug or chemical. Complications The implanting of nanoradios in the body is now feasible with manipulation of directed energy. The nanoradio radiates about 4.5 × 10⁻²⁷ W of electromagnetic power; however, much of this power is lost when passing through the body. The energy input can be increased, but doing so would generate considerable heat in the body, which can pose a safety risk. References Nanoelectronics Radio technology Radio electronics
Nanoradio
[ "Materials_science", "Technology", "Engineering" ]
1,004
[ "Information and communications technology", "Radio electronics", "Telecommunications engineering", "Radio technology", "Nanoelectronics", "Nanotechnology" ]
15,995,267
https://en.wikipedia.org/wiki/Xylitol%20pentanitrate
Xylitol pentanitrate (XPN) is a nitrate ester explosive first synthesized in 1891 by Gabriel Bertrand. Law enforcement has taken an interest in XPN, along with erythritol tetranitrate (ETN) and pentaerythritol tetranitrate (PETN), due to their ease of synthesis, which makes them accessible to amateur chemists and terrorists. Properties At room temperature XPN exists as a white crystalline solid. When heated to 163 °C, liquid xylitol pentanitrate begins to crackle and produce a dark vapour. When decomposed, a gram of XPN produces 200 mL of gas, which makes it a high-performance explosive. Rotter impact analysis of XPN found a figure of insensitiveness of 25 (RDX = 80). XPN displayed a sensitivity to static discharge similar to that of ETN and PETN. Synthesis Xylitol pentanitrate is formed by nitration of xylitol. Nowadays, fuming nitric acid and glacial acetic acid are often used, but Bertrand originally employed a cheaper nitrating agent, the mixture of nitric and sulfuric acids (he called it mélange nitrosulfurique; the common English name is "mixed acid"). Complete oxidation Much like ETN, XPN has a positive oxygen balance, which means the carbon and hydrogen in the molecule can be fully oxidized without another oxidizing agent being added: 4 C5H7(ONO2)5 → 20 CO2 + 14 H2O + 10 N2 + 3 O2. The decomposition of four molecules of XPN thus releases three molecules of O2. The free oxygen molecules can be used to oxidize an added metal dust or a negative-oxygen-balance explosive such as TNT. See also Nitroglycerine Xylitan trinitrate – used as plasticizer in propellants similarly to nitroglycerine Mannitol hexanitrate References External links Explosive chemicals Nitrate esters Sugar alcohol explosives
Xylitol pentanitrate
[ "Chemistry" ]
401
[ "Explosive chemicals" ]
15,996,046
https://en.wikipedia.org/wiki/Bisphenol%20S
Bisphenol S (BPS, dioxydiphenylsulfone) is an organic compound with the formula (HOC6H4)2SO2. It has two phenol functional groups on either side of a sulfonyl group. It is commonly used in curing fast-drying epoxy resin adhesives. It is classified as a bisphenol, and a close molecular analog of bisphenol A (BPA). BPS differs from BPA in possessing a sulfone group (SO2) as the central linker of the molecule instead of the dimethylmethylene group (C(CH3)2) found in bisphenol A. History German chemist Ludwig Glutz (1844–1873) first prepared the compound from phenol and hot sulfuric acid in 1867, designating it oxysulphobenzide. It was used starting in 1869 as a dye. BPS received its modern name in the late 1950s and is currently common in everyday consumer products. BPS is an analog of BPA that has replaced BPA in a variety of applications, being present in thermal receipt paper, plastics, and indoor dust. After health concerns associated with bisphenol A grew in 2012, BPS began to be used as a replacement. Use BPS is used in curing fast-drying epoxy glues and as a corrosion inhibitor. It is also commonly used as a reactant in polymer reactions. BPS has become increasingly common as a building block in polyethersulfone and some epoxies, following public awareness that BPA has estrogen-mimicking properties and the widespread belief that enough of it remains in products to be dangerous. However, BPS may have estrogenic effects comparable to those of BPA. BPS is now used in a variety of common consumer products. In some cases, BPS is used where the legal prohibition on BPA allows products (esp. plastic containers) containing BPS to be labelled "BPA free". BPS also has the advantage of being more stable to heat and light than BPA. To comply with restrictions and regulations on BPA due to its confirmed toxicity, manufacturers are gradually replacing BPA with other related compounds, mainly bisphenol S, as substitutes in industrial applications.
BPS is also used as an anticorrosive agent in epoxy glues. Chemically, BPS is used as a reagent in polymer reactions. BPS has also been reported to occur in canned foodstuffs. In a 2015 study analyzing BPS in a variety of paper products worldwide, BPS was found in 100% of tickets, mailing envelopes, airplane boarding passes, and airplane luggage tags. In this study, very high concentrations of BPS were detected in thermal receipt samples collected from cities in the United States, Japan, Korea, and Vietnam. The BPS concentrations were large but varied greatly, from a few tens of nanograms per gram to several milligrams per gram. Nevertheless, concentrations of BPS used in thermal paper are usually lower than those of BPA. Finally, BPS can enter the human body through dermal absorption from handling banknotes. Health effects Cardiac effects Although no direct link has been established between BPS and cardiac disease, BPS is thought to operate by a similar mechanism to BPA and could cause cardiac toxicity. In animal studies, BPS has been shown to hinder recovery from myocardial infarction, induce cardiac arrhythmias and cause cardiac developmental deformities. Rats exposed to high doses of BPS were reported to have an increased risk of atherosclerosis (a significant risk factor in cardiac disease) due to BPS inducing the synthesis of cholesterol in peripheral tissues. Neurobehavioral effects BPS has the potential to affect a wide range of neurological functions. A recent study showed that exposure to BPS during pregnancy may disrupt thyroid hormone levels. These are important in foetal neurodevelopment, and prenatal exposure to BPS has been linked to impaired psychomotor development in children. In a study using human embryonic stem cells, BPS was shown to cause a reduction in the length of neurites in neuron-like cells. This disruption could lead to neurobehavioral problems such as autism spectrum disorder (ASD).
The mechanism of the neurological impact of BPS is thought to be related to its oestrogenic effect, which can interfere with the levels and action of thyroid hormone. Thyroid hormone is essential for normal development of the nervous system; it regulates migration and differentiation of neural cells, synaptogenesis and myelination. Effects on obesity It has been proposed that BPS has the potential to affect body weight, and several studies have found a correlation between exposure to bisphenols and increased body weight. This is thought to be due to an accumulation of lipids in adipocytes, i.e. a build-up of fat in fat cells. It has also been suggested that BPS leads to the formation of new adipocytes, as exposure to it increases the expression of related markers. A correlation between exposure to BPS before birth and being overweight has been found in mice, although this was only found when they were also fed a high-fat diet. The pathway through which BPS acts on cells to increase body weight is suggested to be different from the pathway through which BPA acts, even though they have very similar chemical structures. Only one study has demonstrated a decrease in body weight after BPS exposure, and the affected mice quickly regained the weight they had lost. Other metabolic effects BPS levels in the human body can be measured in the urine. In one study of children, there was a significant correlation between urinary levels of BPS and insulin resistance, abnormal kidney function and abnormal vascular function. It has been suggested that there is a link between gestational diabetes mellitus and urinary BPS; exposure to BPS may therefore be a risk factor for developing the condition. Effects on skeletal development The effect of long-term exposure to BPS is an enrichment of osteoclast differentiation and enhanced development of the embryonic skeletal system.
Effects on early development BPS, like BPA, can cross the placenta in sheep and alter the endocrine functionality of the placenta. It does this by reducing the maternal serum concentration of trophoblastic proteins. BPS shows almost identical effects on the placenta as BPA, with both compounds altering almost identical sets of genes. Fetal exposure to BPS through the placenta, during a critical period, can have negative effects on the developmental programming of the fetus. BPS exposure in the zebrafish model affected development of the hypothalamus and resulted in hyperactive behaviour. Studies in the mouse model have shown that exposure to BPS significantly reduced the secretion of testosterone within the mouse fetal testes, with exposure to BPS in female mice also causing a significant fall in egg number, whilst also negatively affecting the quality of oocytes. Zebrafish and humans share 70% of the same genes that are expressed during development; therefore, they are a useful model organism to understand the effects of BPS. Studies in the zebrafish model have shown that parental exposure to BPS causes disrupted thyroid hormone levels in both the parental generation and the F1 generation. Additionally, there is evidence to suggest that embryos with high levels of BPS exhibit teratogenic effects in vital organs such as the heart and liver. BPS also inhibits the expression of genes within the liver used for metabolism, leading to increased liver stress throughout the zebrafish's life. Adult zebrafish that are exposed to low levels of BPS during development display hyperactivity due to an exponential increase in neural activity within the hypothalamus. The mechanism of BPS's effect on thyroid hormone levels after human exposure is not clear.
Effects on reproductive health The endocrine-disrupting nature of BPS has encouraged investigations into its affinity for estrogen receptors, showing BPS to be a weak agonist, similar in potency to BPA, which it has come to substitute. Select studies show BPS to be capable of mimicking estradiol, and sometimes being more effective. The estrogenic activity of BPS has been demonstrated through in vivo rodent studies, inducing uterine growth across a range of dosages. BPS has been shown both to damage DNA and to disrupt signalling pathways necessary for cell function, cell cycle regulation, and neuroendocrine-induced behaviours important for reproduction. Androgenic and antiandrogenic activity have also been confirmed, with BPS disrupting the function of androgen receptors. Studies on zebrafish have shown decreased egg quality, reduced sperm count, an increased frequency of embryo abnormalities, as well as changes in the mass of gonads, suggesting that BPS is a reproductive toxin for both sexes. The use of bisphenol A in the manufacturing of household products has been reduced due to its effects as an endocrine disruptor, with research suggesting a disposition to greater deleterious effects in women as compared to men. Research has suggested that BPA and its analogues (BPS, BPF, etc.) have sex-dependent effects on development. Male zebrafish exposed to BPS showed a significant increase in estrogen levels and a decrease in testosterone levels; the decrease in testosterone caused by BPS was found to be 200 times greater than that caused by BPA. BPS exposure also increases mRNA transcription of the aromatase and GnRH genes while decreasing levels of follicle-stimulating hormone and luteinizing hormone.
Bisphenol-S concentrations within populations Higher BPS concentrations have been linked to individuals within certain socio-economic classes, placing those individuals at greater risk of possible deleterious effects. Individuals with an annual income of less than $20,000 were found to have the highest concentrations of bisphenol, and individuals with an annual income of $75,000 or more the lowest, suggesting an inverse relationship between bodily concentrations of BPS and income. Black women had the highest concentrations of BPS, with levels 93% higher than those of white women. Environmental considerations Recent work suggests that, like BPA, BPS also has endocrine-disrupting properties. What makes BPS and BPA endocrine disruptors is the presence of the hydroxy group on the benzene ring. This phenol moiety allows BPA and BPS to mimic estradiol. In a study of human urine, BPS was found in 81% of the samples tested. This percentage is comparable to BPA, which was found in 95% of urine samples. Another study, done on thermal receipt paper, shows that 88% of human exposure to BPS is through receipts. The recycling of thermal paper can introduce BPS into the cycle of paper production and cause BPS contamination of other types of paper products. A recent study showed the presence of BPS in more than 70% of household waste paper samples, potentially indicating the spread of BPS contamination through paper recycling. BPS is more resistant to environmental degradation than BPA; although not classed as persistent, it cannot be characterised as readily biodegradable. Regulation In the US it is difficult for consumers to determine whether a product contains BPS due to limited labelling regulations. In January 2023 the European Chemicals Agency added bisphenol S to the candidate list for substance of very high concern designation, while it is investigated for reproductive toxicity and endocrine disruption.
Synthesis Bisphenol S is prepared by the reaction of two equivalents of phenol with one equivalent of sulfuric acid or oleum. This reaction can also produce 2,4'-sulfonyldiphenol, a common isomeric by-product of such electrophilic aromatic substitution reactions. See also Bisphenol A Tetramethyl Bisphenol F References Bisphenols Endocrine disruptors Estrogens Nonsteroidal antiandrogens Benzosulfones
Bisphenol S
[ "Chemistry" ]
2,515
[ "Endocrine disruptors" ]
15,996,507
https://en.wikipedia.org/wiki/Banjo%20fitting
A banjo fitting consists of a hose connecting bolt (also called an internally relieved bolt) and a spherical union for fluid transfer; see DIN 7643. It is typically used to connect a fluid line to a rigid, internally threaded hydraulic component. The bolt is assembled through the center of the union, usually with face seals on either side of the union, to create a fluid path between the external ports on the union and bolt. A flexible hose or a rigid pipe may be connected to the union port. The main advantage of the fitting is in high-pressure applications (i.e. more than 50 bar). The name stems from the shape of the fitting, having a large circular section connected to a thinner pipe, generally similar to the shape of a banjo. Compared to pipe fittings that are themselves threaded, banjo fittings have the advantage that they do not have to be rotated relative to the host fitting. This avoids damage that can be caused by twisting the hose during installation. It also allows the pipe exit direction to be adjusted relative to the fitting, then the bolt tightened independently. Common applications Banjo fittings are commonly found in automotive fuel, motor oil and hydraulic systems (e.g. brakes and clutch). General applications include: Hydraulic power systems Power steering fluid Variable valve timing systems Brake caliper connectors Turbocharger oil feeds Fuel filter connectors Carburetor connectors Hydraulic clutch systems Fuel dosing for SCR systems References Hydraulics Threaded fasteners
Banjo fitting
[ "Physics", "Chemistry" ]
297
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
15,997,225
https://en.wikipedia.org/wiki/List%20of%20watershed%20topics
This list embraces topographical watersheds and drainage basins and other topics focused on them. Terms – different uses The source of a river or stream is the furthest place from its estuary or confluence with another river, and is alternatively known as a "watershed" and/or "headwaters" in some countries. The confluence is the meeting of two rivers or streams, and may sometimes be known as "headwaters". A drainage basin is an area of land where all surface water converges to a single point at a lower elevation. In North America, "watershed" is used for this sense, while elsewhere terms like "catchment" or "drainage area" are used. A drainage divide is the line that separates neighboring drainage basins. In English-speaking countries outside of North America, this is normally known as a "watershed". Drainage divides Drainage divides category The European Watershed, the line dividing the drainage basins of the major rivers of Europe The Drainage divides, the lines dividing drainage basins in North America (i.e.: Great Basin Divide and Great Basin hydrologic region) Watersheds and drainage basins Basins category Drainage basins category List of drainage basins by area: A worldwide list of watersheds Drainage basins of the Atlantic Ocean category Drainage basins of the Arctic Ocean category Watersheds - basins by country Watersheds of Canada category Watersheds of the United States category Drainage systems of Australia category Examples Lake Erie Watershed (Pennsylvania) Guadalupe Watershed Hudson River Watershed Humber Watershed Turtle Creek Watershed Watershed and drainage basin organisations and institutions Management organisations Watershed management, the management of drainage basins Watershed Protection and Flood Prevention Act, a United States law controlling drainage and water storage Watershed district (Minnesota), one of a number of government entities in the US state of Minnesota which monitor and regulate the use of water in drainage basins Watershed 
district (Russia), one of twenty groups of water bodies listed in the Water Code of the Russian Federation Examples Council for Watershed Health Great Swamp Watershed Association Minnehaha Creek Watershed District Santa Fe Watershed Association Walla Walla Basin Watershed Council Yukon River Inter-Tribal Watershed Council Study institutions University of South Carolina Upstate: Center for Watershed Ecology Other uses Watershed College, Zimbabwe Watershed Media Centre, Bristol, England Stony Brook Millstone Watershed Arboretum "River of Words"; watersheds as subject poetry competition - List of Watershed topics Watershed Water and the environment
List of watershed topics
[ "Environmental_science" ]
467
[ "Hydrology", "Drainage basins" ]
15,997,468
https://en.wikipedia.org/wiki/Polyacrylic%20acid
Poly(acrylic acid) (PAA; trade name Carbomer) is a polymer with the formula (CH2−CHCO2H)n. It is a derivative of acrylic acid (CH2=CHCO2H). In addition to the homopolymers, a variety of copolymers and crosslinked polymers, and partially deprotonated derivatives thereof, are known and of commercial value. In a water solution at neutral pH, PAA is an anionic polymer, i.e., many of the side chains of PAA lose their protons and acquire a negative charge. Partially or wholly deprotonated PAAs are polyelectrolytes, with the ability to absorb and retain water and swell to many times their original volume. These acid–base and water-attracting properties are the basis of many applications. Synthesis PAA, like any acrylate polymer, is usually synthesized through a process known as free-radical polymerization, though graft polymerization may also be used. Free-radical polymerization involves the conversion of monomers, in this case acrylic acid (CH2=CHCO2H), into a polymer chain through the action of free radicals. The process typically follows these steps: Initiation: Free radicals are generated by initiators such as potassium persulfate (K2S2O8) or azobisisobutyronitrile (AIBN). These radicals are highly reactive and can start the polymerization process by reacting with the monomer units. Propagation: Once the radical reacts with a monomer, it creates a new radical at the end of the growing chain. This new radical can react with additional monomer units, allowing the chain to grow. Termination: The reaction continues until two radicals recombine, or a radical is transferred to another molecule, terminating the growth of the polymer chain. Chain transfer and inhibition: Other reactions can also occur, such as chain transfer (where the radical is transferred to a different molecule, creating a new radical) or inhibition (where impurities stop the growth of the chain). Production The global market was estimated to be worth $3.4 billion in 2022.
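The initiation–propagation–termination cycle described above can be caricatured with a toy stochastic simulation of chain growth. This is purely illustrative: real kinetics depend on rate constants and concentrations, and the propagation probability below is an arbitrary assumed value, not a measured one.

```python
import random

def grow_chain(p_propagate, rng):
    """One radical chain: each step either adds a monomer (propagation)
    or the chain stops growing (termination or chain transfer)."""
    length = 1  # the monomer unit attacked by the initiator radical
    while rng.random() < p_propagate:
        length += 1  # propagation: radical end adds another monomer
    return length

rng = random.Random(42)   # fixed seed for reproducibility
p = 0.99                  # assumed probability of propagation per step
chains = [grow_chain(p, rng) for _ in range(5000)]

# Number-average chain length; for this toy model it should be
# roughly 1 / (1 - p) = 100 monomer units.
dp_n = sum(chains) / len(chains)
print(f"mean chain length: {dp_n:.0f} monomer units")
```

The sketch shows why a high propagation-to-termination ratio is what produces long polymer chains.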
Structure and derivatives Polyacrylic acid is a weak anionic polyelectrolyte, whose degree of ionisation is dependent on solution pH. In its non-ionised form at low pHs, PAA may associate with various non-ionic polymers (such as polyethylene oxide, poly-N-vinyl pyrrolidone, polyacrylamide, and some cellulose ethers) and form hydrogen-bonded interpolymer complexes. In aqueous solutions PAA can also form polycomplexes with oppositely charged polymers (such as chitosan), surfactants, and drug molecules (for example, streptomycin). Physical properties Dry PAAs are sold as white, fluffy powders. Derivatives In the dry powder form of sodium polyacrylate, the positively charged sodium ions are bound to the polyacrylate; however, in aqueous solutions the sodium ions can dissociate. The presence of sodium cations allows the polymer to absorb a high amount of water. Applications Absorbent PAA is widely used in dispersants. Its molecular weight has a significant impact on the rheological properties and dispersion capacity, and hence on applications. The dominant application for PAA is as a superabsorbent. About 25% of PAA is used for detergents and dispersants. Polyacrylic acid and its derivatives (particularly sodium polyacrylate) are used in disposable diapers. Acrylic acid is also the main component of superabsorbent polymers (SAPs), which are cross-linked polyacrylates that can absorb and retain more than 100 times their own weight in liquid. The US Food and Drug Administration authorised the use of SAPs in packaging with indirect food contact. Cleaning Detergents often contain copolymers of acrylic acid that assist in sequestering dirt. Cross-linked polyacrylic acid has also been used in the production of household products, including floor cleaners. PAA may inactivate the antiseptic chlorhexidine gluconate. Biocompatible materials The neutralized polyacrylic acid gels are suitable biocompatible matrices for medical applications such as gels for skin care products.
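The pH dependence of the degree of ionisation mentioned above can be sketched with the monoprotic Henderson–Hasselbalch approximation. Treating each carboxylic acid group independently is a simplification (neighbouring charges shift the effective pKa as the chain ionises), and the pKa of about 4.5 is an assumed, typical literature-style value for PAA used only for illustration.

```python
def fraction_ionised(ph, pka=4.5):
    """Henderson-Hasselbalch: fraction of -COOH groups deprotonated
    at a given pH, treating each acid group as independent."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

for ph in (3.0, 4.5, 7.0):
    print(f"pH {ph}: {100 * fraction_ionised(ph):.1f}% ionised")
```

At pH well below the pKa the chain is mostly protonated (the regime where PAA hydrogen-bonds with non-ionic polymers), while at neutral pH it is almost fully deprotonated, consistent with PAA behaving as an anionic polyelectrolyte in water.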
PAA films can be deposited on orthopaedic implants to protect them from corrosion. Crosslinked hydrogels of PAA and gelatin have also been used as medical glue. Paints and cosmetics Other applications involve paints and cosmetics. They stabilize suspended solids in liquids, prevent emulsions from separating, and control the consistency and flow of cosmetics. Carbomer codes (910, 934, 940, 941, and 934P) are an indication of molecular weight and the specific components of the polymer. For many applications PAAs are used in the form of alkali metal or ammonium salts, e.g. sodium polyacrylate. Emerging applications Hydrogels derived from PAA have attracted much study for use as bandages and aids for wound healing. Drilling fluid and metal quenching A few reports describe the use of PAA (as so-called alkaline polyacrylates) as a deflocculant in the oil drilling industry. It has also been reported to be used for metal quenching in metalworking (see Sodium polyacrylate). References Acrylate polymers Cosmetics chemicals Polyelectrolytes Polymers
Polyacrylic acid
[ "Chemistry", "Materials_science" ]
1,170
[ "Polymers", "Polymer chemistry" ]
16,000,580
https://en.wikipedia.org/wiki/Simulation%20Model%20Portability
The Simulation Model Portability is a standard for simulation models developed by ESA together with various stakeholders in the European space industry. The first version, also known as the SMI standard, was implemented in SIMSAT and EuroSim, simulator infrastructures in use at various ESA locations. The second version, also known as the Simulation Modelling Platform (SMP2), is currently at version 1.2. It is implemented in several space simulators such as Basiles (CNES) and SimTG (AIRBUS D&S). The ECSS has now taken the lead for further iterations of the SMP2 standard. At first it was published as Technical Memorandum ECSS-E-TM-40-07, and in 2020 it became the ECSS standard ECSS-E-ST-40-07. The first implementations have already been released by several stakeholders in the European space industry. Simulation models have already been exchanged, using the new ECSS SMP, between the SIMULUS (ESA) and SimTG (AIRBUS D&S) simulators. An update of the ECSS SMP E-40-07 is currently ongoing to specify the exchanged artefacts, called ECSS SMP Level 2, used to instantiate, configure, and schedule a model execution. It is a collection of XSD files that define the model exchange format. References External links Simulation Model Portability SMP implementation in Eurosim Presentation at the 6th NASA/ESA Workshop on product exchange 25 January 2011: ECSS-E-TM-40-07 "Simulation modelling platform" made available European Space Agency
Simulation Model Portability
[ "Astronomy" ]
325
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
16,000,973
https://en.wikipedia.org/wiki/E-40-07
E-40-07 is the code for the ECSS standard titled Simulation Model Portability. This standard builds upon the SMP2 standard version 1.2, released on October 28, 2005 by ESA. The first issue of the E-40-07 standard was published on 2020-03-02 and is available at the ecss.nl website References Space standards
E-40-07
[ "Astronomy" ]
76
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
7,161,512
https://en.wikipedia.org/wiki/List%20of%20cryptographic%20file%20systems
This is a list of filesystems with support for filesystem-level encryption. Not to be confused with full-disk encryption. General-purpose filesystems with encryption AdvFS on Digital Tru64 UNIX Novell Storage Services on Novell NetWare and Linux NTFS with Encrypting File System (EFS) for Microsoft Windows ZFS since Pool Version 30 Ext4, added in Linux kernel 4.1 in June 2015 F2FS, added in Linux kernel 4.2 UBIFS, added in Linux kernel 4.10 CephFS, added in Linux kernel 6.6 bcachefs (experimental), added in Linux kernel 6.7 APFS, macOS High Sierra (10.13) and later. Cryptographic filesystems FUSE-based file systems Integrated into the Linux kernel eCryptfs Rubberhose filesystem (discontinued) StegFS (discontinued) Integrated into other UNIXes geli on FreeBSD EFS (Encrypted File System) on AIX See also Comparison of disk encryption software References Computing-related lists Disk encryption File systems
List of cryptographic file systems
[ "Technology" ]
232
[ "Computing-related lists", "Cryptography lists and comparisons" ]
7,162,263
https://en.wikipedia.org/wiki/System-specific%20impulse
System-specific impulse, Issp, is a measure that describes the performance of jet propulsion systems. A reference number is introduced, defined as the total impulse, Itot, delivered by the system, divided by the system mass, mPS: Issp = Itot / mPS. Because of the resulting dimension, delivered impulse per kilogram of system mass mPS, this number is called 'system-specific impulse'. In SI units, impulse is measured in newton-seconds (N·s) and Issp in N·s/kg. The Issp allows a more accurate determination of the propulsive performance of jet propulsion systems than the commonly used specific impulse, Isp, which only takes into account the propellant and the thrust engine performance characteristics. Therefore, the Issp permits an objective and comparative performance evaluation of systems of different designs and with different propellants. The Issp can be derived directly from actual jet propulsion systems by determining the total impulse delivered by the mass of contained propellant, divided by the known total (wet) mass of the propulsion system. This allows a quantitative comparison of, for example, built systems. In addition, the Issp can be derived analytically, for example for spacecraft propulsion systems, in order to facilitate a preliminary selection of systems (chemical, electrical) for spacecraft missions of given impulse and velocity-increment requirements. A more detailed presentation of derived mathematical formulas for Issp and their applications for spacecraft propulsion is given in the cited references. In 2019, Koppel and others used Issp as a criterion in the selection of electric thrusters. See also Specific Impulse References Spacecraft propulsion Physical quantities Classical mechanics
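The definition above is easy to apply numerically. In the sketch below the total impulse is taken as Itot = mp · Isp · g0, and all masses and Isp values are invented round numbers for illustration only, not data for any real propulsion system.

```python
G0 = 9.80665  # standard gravity, m/s^2

def system_specific_impulse(isp_s, propellant_kg, dry_mass_kg):
    """Issp = Itot / mPS, with Itot = mp * Isp * g0 and mPS the wet
    system mass (propellant + dry hardware), in N*s per kg."""
    i_tot = propellant_kg * isp_s * G0
    return i_tot / (propellant_kg + dry_mass_kg)

# Two hypothetical 100 kg (wet) systems:
chemical = system_specific_impulse(isp_s=230, propellant_kg=80, dry_mass_kg=20)
electric = system_specific_impulse(isp_s=1500, propellant_kg=10, dry_mass_kg=90)

print(f"chemical: {chemical:.0f} N*s/kg, electric: {electric:.0f} N*s/kg")
```

In this invented case the electric thruster's far higher Isp is offset by its heavy dry mass (which would include the power subsystem), leaving its Issp below the chemical system's: exactly the system-level trade-off that Issp, unlike Isp alone, is meant to expose.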
System-specific impulse
[ "Physics", "Mathematics" ]
328
[ "Physical phenomena", "Physical quantities", "Quantity", "Classical mechanics", "Mechanics", "Physical properties" ]
7,162,550
https://en.wikipedia.org/wiki/Pieter%20Winsemius
Pieter Winsemius (born 7 March 1942) is a retired Dutch politician of the People's Party for Freedom and Democracy (VVD) and businessman. Winsemius worked as a researcher at Leiden University from February 1966 until October 1970 and as a management consultant at McKinsey & Company from October 1970 until November 1982. After the election of 1982 Winsemius was appointed as Minister of Housing, Spatial Planning and the Environment in the Cabinet Lubbers I, taking office on 4 November 1982. After the election of 1986 Winsemius was not given a ministerial post in the new cabinet. The Cabinet Lubbers I was replaced by the Cabinet Lubbers II on 14 July 1986. Winsemius semi-retired from active politics, returned to the private and public sectors, and occupied numerous seats as a corporate director and nonprofit director on several boards of directors and supervisory boards (World Wide Fund for Nature, Vereniging Natuurmonumenten, Stichting Max Havelaar, European Centre for Nature Conservation, Stichting Pensioenfonds ABP and the Energy Research Centre), and served on several state commissions and councils on behalf of the government (Organisation for Scientific Research, National Insurance Bank, Staatsbosbeheer, Meteorological Institute and the Scientific Council for Government Policy). Winsemius also returned as a senior management consultant at McKinsey & Company from August 1986 until October 1992 and served as a distinguished professor of Environmental Management at Tilburg University from 1 October 1999 until 1 September 2012. Winsemius was appointed again as Minister of Housing, Spatial Planning and the Environment in the caretaker Cabinet Balkenende III following the resignation of Sybilla Dekker, taking office on 26 September 2006. The Cabinet Balkenende III was replaced by the Cabinet Balkenende IV on 22 February 2007.
Following the end of his active political career, Winsemius again returned to the private and public sectors, resuming his previous positions (Vereniging Natuurmonumenten, Energy Research Centre, Stichting Pensioenfonds ABP and the Scientific Council for Government Policy) and working as an advocate, lobbyist and activist on conservation, environmentalism, sustainable development and climate change issues. Political career Winsemius is the son of economist Albert Winsemius. Trained as a physicist, and active as a partner in the business consultancy firm McKinsey, he was Minister of Housing, Spatial Planning and the Environment (VROM) in the First Lubbers cabinet, on behalf of the VVD. As a young minister, he brought environmental laws into effect, including the rules for environmental impact assessments. After his ministerial period, he became chairman of the Vereniging Natuurmonumenten. On 22 September 2006, he again became minister of VROM, temporarily succeeding Sybilla Dekker during the Third Balkenende cabinet, until a completely new government was formed on 22 February 2007 and he was succeeded by Jacqueline Cramer. From 2003 until 21 November 2012, he was a member of the Scientific Council for Government Policy, for which he was awarded the grade of Commander of the Order of Orange-Nassau upon his retirement. Winsemius has written books about management and social issues, including Speel nooit een uitwedstrijd (lit. 'never play an away game') (1988), in which he compared managing to professional soccer. During the 1980s, Winsemius was co-host of the television show Aktua in bedrijf. Academic career Since October 1999, Winsemius has held a professorship in Management of Sustainable Development at Tilburg University. On 7 March 2007, he was elected the most influential sustainable Dutchman in the De Duurzame 100 investigation by the newspaper Trouw and broadcasting group LLink. Decorations References External links Official Prof.Dr. P.
(Pieter) Winsemius Parlement & Politiek 1942 births Living people Commanders of the Order of Orange-Nassau Dutch academic administrators Dutch business writers Dutch climate activists Dutch conservationists Dutch corporate directors Dutch education writers Dutch nonprofit directors Dutch nonprofit executives Dutch lobbyists Dutch management consultants Dutch science writers 20th-century Dutch physicists Dutch public administration scholars Dutch publishers (people) Environmental social scientists Environmental writers Hybrid electric vehicle advocates Knights of the Order of the Netherlands Lion Leiden University alumni McKinsey & Company people Members of the Royal Netherlands Academy of Arts and Sciences Members of the Scientific Council for Government Policy Ministers of housing and spatial planning of the Netherlands People from Voorburg People's Party for Freedom and Democracy politicians Sustainability advocates Academic staff of Tilburg University 20th-century Dutch businesspeople 20th-century Dutch civil servants 20th-century Dutch educators 20th-century Dutch male writers 20th-century Dutch politicians 21st-century Dutch businesspeople 21st-century Dutch civil servants 21st-century Dutch educators 21st-century Dutch male writers 21st-century Dutch politicians
Pieter Winsemius
[ "Environmental_science" ]
1,008
[ "Environmental social scientists", "Environmental social science" ]
7,162,656
https://en.wikipedia.org/wiki/Stellar%20triangulation
Stellar triangulation is a method of geodesy and of its subdiscipline space geodesy used to measure Earth's geometric shape. Stars were first used for this purpose by the Finnish astronomer Yrjö Väisälä in 1959, who made astrometric photographs of the night sky at two stations together with a lighted balloon probe between them. Even this first step showed the potential of the method: Väisälä obtained the azimuth between Helsinki and Turku (a distance of 150 km) with an accuracy of 1″. The method was soon successfully tested with ballistic rockets and with some special satellites. Adequate computer programs were written for the astrometric reduction of the photographic plates, the intersection of the "observation planes" containing the stations and the targets, and the least-squares adjustment of stellar-terrestrial networks with redundancy. The advantages of stellar triangulation were the ability to span great distances (terrestrial observations are restricted to approximately 30 km, and even in high mountains to 60 km) and its independence from the Earth's gravity field. The results are azimuths between the stations in the stellar (inertial) reference frame, even where no direct line of sight exists. In 1960, the first suitable space probe was launched: Project Echo, a 30 m diameter balloon satellite. With it, the whole of Western Europe could be linked together geodetically with accuracies 2–10 times better than by classical triangulation. During the late 1960s, a global project was begun by H.H. Schmid (Switzerland) to connect 45 stations across the continents, with distances of 3000–5000 km. It was finished in 1974 with the precise reduction of some 3000 stellar plates and the network adjustment of 46 stations (2 additional ones in Germany and the Pacific, but without the areas of Russia and China). The mean accuracy was between ±5 m (Europe, USA) and 7–10 m (Africa, Antarctica), depending on weather and infrastructure conditions. 
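The geometry behind the "observation planes" can be sketched numerically. The example below is a simplified illustration (not Väisälä's or Schmid's actual reduction software): two stations simultaneously photograph a target at two epochs; each epoch's pair of star-calibrated directions spans a plane containing both stations and the target, and the intersection of two such planes yields the inter-station direction without any direct line of sight. All coordinates and names are invented for the example.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def interstation_direction(dirs_a, dirs_b):
    # Each simultaneous pair of directions (from A and from B to the same
    # target position) spans the plane containing A, B and the target.
    normals = [np.cross(da, db) for da, db in zip(dirs_a, dirs_b)]
    # Both planes contain the line A-B, so their intersection gives its direction.
    return unit(np.cross(normals[0], normals[1]))

# Synthetic example (coordinates in km, invented for illustration):
A = np.array([0.0, 0.0, 0.0])
B = np.array([100.0, 0.0, 0.0])
targets = [np.array([40.0, 10.0, 30.0]), np.array([60.0, -20.0, 28.0])]

dirs_a = [unit(t - A) for t in targets]  # star-calibrated directions from A
dirs_b = [unit(t - B) for t in targets]  # star-calibrated directions from B

d = interstation_direction(dirs_a, dirs_b)
print(d)  # parallel to B - A, i.e. (1, 0, 0) up to sign
```

In a real network there are many plates and redundant planes, which are combined in the least-squares adjustment mentioned above rather than intersected pairwise.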
Combined with Doppler measurements (such as from Transit) the global accuracy reached 3 m. This was more than 20 times better than before, because until 1974 the gravity field could not be determined to better than about 100 meters between distant continents. The use of stars as a reference system was extended in the 1970s and early 1980s to continental networks, but then laser and electronic distance measurements reached accuracies better than 2 m and could be carried out automatically. Nowadays similar techniques are carried out by interferometry with very distant radio quasars (VLBI) instead of optical satellite and star observations. The geodetic connection of radio telescopes is now possible to mm–cm precision, as published periodically by the community. This global project group was founded in 2000 by Harald Schuh (Munich/TU Vienna) together with some dozen research projects worldwide, and is now a permanent service of the International Union of Geodesy and Geophysics (IUGG) and the International Earth Rotation and Reference Systems Service (IERS). The photographic observations as performed in 1959–1985 are no longer used because of their expense, but they led to a revival of electro-optical techniques such as the CCD. See also Figure of the Earth Fundamental station Triangulation Trilateration Satellite geodesy Satellite geodesy#Optical triangulation PAGEOS satellite Satellite laser ranging (SLR) Stellar parallax for distances to stars References A. Berroth, W. Hofmann: Kosmische Geodäsie (Cosmic Geodesy) (356 p.), G. Braun, Karlsruhe 1960 Karl Ledersteger: "Astronomische und Physikalische Geodäsie (Erdmessung)", Handbuch der Vermessungskunde, Wilhelm Jordan, Otto Eggert and Max Kneissl ed., Volume V (870 p., espec. §§ 2, 5, 13), J.B. Metzler, Stuttgart 1968. Hellmut Schmid: Das Weltnetz der Satellitentriangulation. Wiss. Mitteilungen ETH Zurich and Journal of Geophysical Research, 1974. 
Klaus Schnädelbach et al.: Western European Satellite Triangulation Programme (WEST), 2nd Experimental Computation. Mitteilungen Geodät.Inst. Graz 11/1, Graz 1972 Nothnagel, Schlüter, Seeger: Die Geschichte der geodätischen VLBI in Deutschland, Bonn 2000. External links NOAA's geodesy photo library Geodesy Astrometry
Stellar triangulation
[ "Astronomy", "Mathematics" ]
941
[ "Applied mathematics", "Geodesy", "Astrometry", "Astronomical sub-disciplines" ]
7,162,744
https://en.wikipedia.org/wiki/Society%20for%20Imaging%20Science%20and%20Technology
The Society for Imaging Science and Technology (IS&T) is a professional society (a type of research and education organization) in the field of photography. Founded in 1947 as the Society of Photographic Scientists and Engineers (SPSE), it is headquartered in Springfield, Virginia. In 2018 it had about 850 members worldwide, and 5,000 participants in its various technical and industry-related programs. IS&T is perhaps best known for its technical conferences and courses on various aspects of imaging science and technology, including digital imaging, digital printing, color imaging, photofinishing, archiving, and digital fabrication. The society publishes The Journal of Imaging Science and Technology and, in collaboration with SPIE, The Journal of Electronic Imaging. In 2018, IS&T introduced the open access Journal of Perceptual Imaging. See also Medical imaging International Commission for Optics Optical Society of America SPIE References External links Scientific organizations established in 1947 Engineering societies Optics institutions 1947 establishments in the United States Organizations based in Virginia
Society for Imaging Science and Technology
[ "Engineering" ]
202
[ "Engineering societies" ]
7,162,957
https://en.wikipedia.org/wiki/RaceCam
RaceCam is a video camera system used primarily in motor racing, which uses a network of car-mounted cameras, microwave radio transmitters, and relays from helicopters to send live images from inside a race car to both pit crews and television audiences. History Although a vehicle-mounted 16mm motion picture camera was used as early as 1973, the technology was first developed in the late 1970s by the Seven Network in Australia, who introduced it for the 1979 Hardie-Ferodo 1000 endurance race at Mount Panorama in Bathurst, New South Wales, with Sydney-based driver Peter Williamson able to give commentary from his Toyota Celica. RaceCam in Australia was unique in that the drivers were often wired for sound and able to converse with the television commentary team during races, with top touring car drivers such as Dick Johnson, Allan Grice, Peter Brock and later Glenn Seton, Jim Richards, Mark Skaife, Wayne Gardner and Channel Seven's own commentator turned racer Neil Crompton all becoming regular users of the system. RaceCam (with drivers doing their own commentary) became a staple of Seven's Australian Touring Car Championship and Bathurst 1000 broadcasts during the 1980s and 1990s. American audiences were first introduced to RaceCam at NASCAR's 1981 Daytona 500 on the CBS network with Terry Labonte's Buick Regal, and later at the 1983 Indianapolis 500, when ABC acquired the rights to use a streamlined version of the technology for its coverage of the race. The first Indy-winning car with a RaceCam was that of Rick Mears in 1991. Over the years, the camera location varied from "over-the-shoulder" in 1983, to rear-mounted (looking backwards) in 1988, nosecone-mounted in 1994, and rollbar/above-mounted in 1997. Later, the above-mounted cameras were improved to be able to rotate 360°. 
Other camera views have included the rear wing (just above the rear tyre), the gearbox, the driver's helmet ("Visor cam"), a "footcam" looking at the driver's feet (to illustrate the heel-and-toe shifting process in road racing), and a view from the sidepod. Additional mounting locations inside the cockpit gave a face view of the driver, but usually little or no view of the track. The "CrewCam" was another view, mounted on a pit crew member's hat or helmet, showing the point of view of a pit crew member performing his duties on pit road. In the same time-frame, CBS and ESPN began using on-board cameras from different developers during NASCAR telecasts. The large, boxy interior of the NASCAR stock cars allowed modified, nearly regular-sized video cameras to be mounted in the cockpit. CBS used a remote-controlled, 360° rotating camera, and 1984 Daytona 500 winner Cale Yarborough carried one to victory. While RaceCam units had become commonplace in NASCAR, unlike in Australian touring car racing, the drivers generally refused to be wired to talk to the television commentators while driving, saying that it was too distracting. In a NASCAR first, at the 1988 Goodyear NASCAR 500 held at the Calder Park Thunderdome in Melbourne, Australia (which was also the first NASCAR race held outside of North America), Australian drivers Dick Johnson and Allan Grice talked to the Channel 7 commentators during the race. Johnson, who had been using RaceCam since 1982, also created a first for American NASCAR viewers when he was able to talk to the ESPN commentators during the 1989 Banquet Frozen Foods 300 at Sears Point Raceway. When Johnson's car went off on oil during the race, he was famously caught dropping the F-bomb just before riding up a bank. Typically in NASCAR, any conversations with drivers are done before the race, after the race, or during safety car periods so as not to interfere with normal driver-to-crew communications. 
From the mid-to-late 1990s, mid-race conversations between drivers and commentators fell out of favour in Australia, with sporadic in-race interviews held during Safety Car periods. Over the years, RaceCam has been refined and led to further developments. Besides the natural upgrades for high-definition television, the "Bumpercam" uses a camera mounted on the car's bumper. The "Roofcam" is a camera mounted on a car's roof, which gives a broader view and a more authentic perspective of the driver's sightlines. Both systems are popular with NASCAR viewers. "Clearview" is another system, which removes grit and dust from the lens. Formula One also incorporates similar technology, with each car featuring a distinctive streamlined "camera pod" mounted above the airbox, giving video from a perspective similar to the driver's point of view, while also allowing a rearward-facing view for cars trailing behind. FIA regulations mandate that a total of five cameras (or dummy camera housings) must be mounted on the car, in a choice of several predetermined positions. In IndyCar, all cars in the field are equipped with multiple "camera pod" housing units – one above the roll bar, one embedded within the front nosecone, one in the aeroscreen and, in previous seasons, one on the rear wing and one inside one of the rear-view mirrors – regardless of whether they are actually carrying cameras in those locations. This rule ensures that cars carrying cameras will not have an aerodynamic disadvantage (or advantage) compared to cars not carrying cameras. In addition, camera-less cars carry equivalent ballast in place of the cameras, to ensure all cars have equal weight characteristics. Driver's Eye In 2019, the FIA Formula E Championship developed a miniature camera titled "Driver's Eye", designed to fit within the padding of a driver's helmet. 
Evolving out of FIA safety regulations disallowing professional drivers from mounting GoPros or CamBoxes on their helmets during race weekends, the first trial was held at the 2019 Diriyah ePrix with Felipe Massa as the test subject. American motorsport apparel company Racing Force Group acquired the rights to the product and it has since been used in Formula One, NASCAR and the Supercars Championship. References Australian inventions Auto racing equipment Auto racing mass media Cameras IndyCar Series on television NASCAR on television Seven Network Sports television technology
RaceCam
[ "Technology" ]
1,270
[ "Recording devices", "Cameras" ]
7,163,283
https://en.wikipedia.org/wiki/Rippled%20glass
Rippled glass refers to textured glass with marked surface waves. Louis Comfort Tiffany made use of such textured glass to represent, for example, water or leaf veins. The texture is created during the glass sheet-forming process. A sheet is formed from molten glass with a roller that spins on itself, while travelling forward. Normally the roller spins at the same speed as its own forward motion, and the resulting sheet has a smooth surface. In the manufacture of rippled glass, the roller spins faster than its own forward motion. The rippled effect is retained as the glass cools. In order to cut rippled glass, the sheet may be scored on the smoother side with a carbide glass cutter and broken at the score line with breaker-grozier pliers. See also Architectural glass Beveled glass Came glasswork Cathedral glass Crown glass (window) Drapery glass Fracture glass Fracture-streamer glass Ring mottle glass Stained glass Streamer glass References Glass types Glass art Glass architecture
Rippled glass
[ "Materials_science", "Engineering" ]
200
[ "Glass architecture", "Glass engineering and science" ]
7,163,297
https://en.wikipedia.org/wiki/Oxygen-17
Oxygen-17 (17O) is a low-abundance, natural, stable isotope of oxygen (0.0373% in seawater; approximately twice as abundant as deuterium). As the only stable isotope of oxygen possessing a nuclear spin (+5/2), and with the favorable characteristic of field-independent relaxation in liquid water, 17O enables NMR studies of oxidative metabolic pathways through compounds containing 17O (e.g. H217O water produced metabolically by oxidative phosphorylation in mitochondria) at high magnetic fields. Water used as nuclear reactor coolant is subjected to intense neutron flux. Natural water starts out with 373 ppm of 17O; heavy water starts out incidentally enriched to about 550 ppm of oxygen-17. The neutron flux slowly converts 16O in the cooling water to 17O by neutron capture, increasing its concentration. The flux also slowly converts 17O (which has a much greater cross section) in the cooling water to carbon-14, an undesirable product that can escape to the environment: 17O (n,α) → 14C. Some tritium removal facilities make a point of replacing the oxygen of the water with natural oxygen (mostly 16O) to give the added benefit of reducing 14C production. History The isotope was first hypothesized, and subsequently imaged, by Patrick Blackett in Rutherford's lab in 1925: it was a product of the first man-made transmutation, of 14N by 4He2+, conducted by Frederick Soddy and Ernest Rutherford in 1917–1919. Its natural abundance in Earth's atmosphere was later detected in 1929 by Giauque and Johnson in absorption spectra. References Environmental isotopes Isotopes of oxygen
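The two capture reactions described above form a simple chain, 16O → 17O → 14C, whose evolution under a constant neutron flux can be sketched with first-order rate equations. The flux and cross-section values below are rough illustrative assumptions (thermal-spectrum order of magnitude), not reactor design data.

```python
import math

# Illustrative (assumed) thermal values - not design data:
phi     = 1.0e14    # neutron flux, n/cm^2/s
sigma16 = 0.19e-27  # 16O(n,gamma)17O cross section, cm^2 (~0.19 mb, assumed)
sigma17 = 235e-27   # 17O(n,alpha)14C cross section, cm^2 (~235 mb, assumed)

lam1 = phi * sigma16  # conversion rate 16O -> 17O, 1/s
lam2 = phi * sigma17  # burn-up rate 17O -> 14C, 1/s

def chain(n16_0, n17_0, t):
    """Closed-form (Bateman) solution of the two-step chain at time t [s]."""
    n16 = n16_0 * math.exp(-lam1 * t)
    n17 = (n17_0 * math.exp(-lam2 * t)
           + n16_0 * lam1 / (lam2 - lam1)
           * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))
    n14 = n16_0 + n17_0 - n16 - n17  # every O atom removed ends up as 14C
    return n16, n17, n14

# Natural water: 373 ppm 17O, per the article; ten years of irradiation.
t = 10 * 365.25 * 24 * 3600
n16, n17, n14 = chain(1.0 - 373e-6, 373e-6, t)
print(f"17O: {n17 * 1e6:.0f} ppm, 14C produced: {n14 * 1e6:.2f} ppm of O atoms")
```

With these assumed numbers the 17O fraction creeps up slightly while a few ppm of 14C accumulate, matching the qualitative behaviour described in the article.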
Oxygen-17
[ "Chemistry" ]
360
[ "Isotopes of oxygen", "Environmental isotopes", "Isotopes" ]
7,163,587
https://en.wikipedia.org/wiki/Anchor%20bolt
Anchor bolts are used to connect structural and non-structural elements to concrete. The connection can be made by a variety of different components: anchor bolts (also named fasteners), steel plates, or stiffeners. Anchor bolts transfer different types of load: tension forces and shear forces. A connection between structural elements can be represented by steel columns attached to a reinforced concrete foundation. A common case of a non-structural element attached to a structural one is the connection between a facade system and a reinforced concrete wall. Types Cast-in-place The simplest – and strongest – form of anchor bolt is cast-in-place, with its embedded end consisting of a standard hexagonal head bolt and washer, a 90° bend, or some sort of forged or welded flange (see also stud welding). The last are used in concrete-steel composite structures as shear connectors. Other uses include anchoring machines to poured concrete floors and buildings to their concrete foundations. Various typically disposable aids, mainly of plastic, are produced to secure and align cast-in-place anchors prior to concrete placement. Moreover, their position must also be coordinated with the reinforcement layout. Different types of cast-in-place anchors can be distinguished: Lifting inserts: used for lifting operations of plain or prestressed RC beams. The insert can be a threaded rod. See also bolt (climbing). Anchor channels: used in precast concrete connections. The channel can be a hot-rolled or a cold-formed steel shape in which a T-shaped screw is placed in order to transfer the load to the base material. Headed studs: consist of a steel plate with headed studs welded on (see also threaded rod). Threaded sleeves: consist of a tube with an internal thread which is anchored back into the concrete. For all types of cast-in-place anchors, the load-transfer mechanism is mechanical interlock, i.e. 
the embedded part of the anchor transfers the applied load (axial or shear) via bearing pressure at the contact zone. At failure conditions, the bearing pressure can be more than 10 times the concrete compressive strength if a pure tension force is transferred. Cast-in-place anchors are also utilized in masonry applications, placed in wet mortar joints during the laying of brick and cast blocks (CMUs). Post-installed Post-installed anchors can be installed in any position of hardened concrete after a drilling operation. A distinction is made according to their principle of operation. Mechanical expansion anchors The force-transfer mechanism is based on friction and mechanical interlock guaranteed by expansion forces. They can be further divided into two categories: torque controlled: the anchor is inserted into the hole and secured by applying a specified torque to the bolt head or nut with a torque wrench. A particular sub-category of this anchor is called wedge type. As shown in the figure, tightening the bolt results in a wedge being driven up against a sleeve, which expands it and causes it to compress against the material it is being fastened to. displacement controlled: usually consist of an expansion sleeve and a conical expansion plug, whereby the sleeve is internally threaded to accept a threaded element. Undercut anchors The force-transfer mechanism is based on mechanical interlock. A special drilling operation creates a contact surface between the anchor head and the hole's wall where bearing stresses are exchanged. Bonded anchors Bonded anchors are also referred to as adhesive anchors or chemical anchors. The anchoring material is an adhesive (also called mortar) usually consisting of epoxy, polyester, or vinylester resins. In bonded anchors, the force-transfer mechanism is based on bond stresses provided by binding organic materials. 
Both ribbed bars and threaded rods can be used, and a change in the local bond mechanism can be observed experimentally. In ribbed bars the resistance is mainly due to the shear behavior of the concrete between the ribs, whereas for threaded rods friction prevails (see also anchorage in reinforced concrete). The performance of these anchor types in terms of load-bearing capacity, especially under tension loads, is strictly related to the cleaning condition of the hole. Experimental results showed that the reduction in capacity can be up to 60%. The same applies to the moisture condition of the concrete: for wet concrete the reduction is 20% when using polyester resin. Other issues are high-temperature behavior and creep response. Screw anchors The force-transfer mechanism of the screw anchor is based on concentrated pressure exchanged between the screw and the concrete through the pitches. Plastic anchors Their force-transfer mechanism is similar to that of mechanical expansion anchors. A torque moment is applied to a screw which is inserted in a plastic sleeve. As the torque is applied, the screw expands the plastic sleeve against the sides of the hole, acting as an expansion force. Tapcon screws Tapcon screws, a popular anchor whose name stands for self-tapping (self-threading) concrete screw, require a pre-drilled hole – made using a Tapcon drill bit – and are then screwed into the hole using a standard hex or Phillips bit. Larger-diameter screws are referred to as LDTs. These screws are often blue, white, or stainless, and are also available in versions for marine or high-stress applications. Powder-actuated anchors These act by transferring the forces via mechanical interlock. This fastening technology is used in steel-to-steel connections, for instance to connect cold-formed profiles. A fastener is driven into the base material with a powder-actuated gun; the driving energy is usually provided by firing a combustible propellant in powder form. 
The fastener's insertion provokes plastic deformation of the base material, which accommodates the fastener's head where the force transfer takes place. Mechanical behavior Modes of failure in tension Anchors can fail in different ways when loaded in tension: Steel failure: the weak part of the connection is the rod. The failure corresponds to the tensile break-out of the steel, as in a tensile test. In this case, the concrete base material may be undamaged. Pull-out: the anchor is pulled out of the drilled hole, partially damaging the surrounding concrete. When the concrete is damaged, the failure is also referred to as pull-through. Concrete cone: after reaching the load-bearing capacity, a cone shape is formed. The failure is governed by crack growth in the concrete. This kind of failure is typical in pull-out tests. Splitting failure: the failure is characterized by a splitting crack which divides the base material into two parts. This kind of failure occurs when the dimensions of the concrete component are limited or the anchor is installed close to an edge. Blow-out failure: the failure is characterized by the lateral spalling of concrete near the anchor's head. This kind of failure occurs for anchors (prevalently cast-in-place) installed near the edge of the concrete element. In design verification under the ultimate limit state, codes prescribe verifying all the possible failure mechanisms. Modes of failure in shear Anchors can fail in different ways when loaded in shear: Steel failure: the rod reaches its yielding capacity, and rupture occurs after the development of large deformations. Concrete edge: a semi-conical fracture surface develops, originating from the point of bearing up to the free surface. This type of failure occurs for an anchor in the proximity of the edge of the concrete member. Pry-out: a semi-conical fracture surface characterizes the failure. 
The pry-out mechanism for cast-in anchors usually occurs with very short, stocky studs. The studs are typically so short and stiff that under a direct shear load they bend, simultaneously causing crushing in front of the stud and a crater of concrete behind it. In design verification under the ultimate limit state, codes prescribe verifying all the possible failure mechanisms. Combined tension/shear When tension and shear loads are applied to an anchor simultaneously, failure occurs earlier (at a lower load-bearing capacity) than in the uncoupled cases. In current design codes, a linear interaction domain is assumed. Group of anchors In order to increase the load-carrying capacity, anchors are assembled in groups; this also allows a bending-moment-resisting connection to be arranged. For tension and shear loads, the mechanical behavior is markedly influenced by (i) the spacing between the anchors and (ii) possible differences in the applied forces. Service load behavior Under service loads (tension and shear), the anchor's displacement must be limited. The anchor performance (load-carrying capacity and characteristic displacements) under different loading conditions is assessed experimentally, after which an official document is produced by a technical assessment body. In the design phase, the displacement occurring under the characteristic actions should be no larger than the admissible displacement reported in the technical document. Seismic load behavior Under seismic loads, an anchor may simultaneously be (i) installed in a crack and (ii) subjected to inertia loads proportional to the mass and acceleration of the attached element (secondary structure) relative to the base material (primary structure). The load conditions in this case can be summarized as follows: Pulsating axial load: force aligned with the anchor's axis, positive in the pull-out condition and zero in the pushing-in condition. 
Reverse shear load (also named “alternate shear”): force perpendicular to the anchor's axis, positive or negative depending on an arbitrary sign convention. Cyclic crack (also named “crack movement”): the RC primary structure undergoes a severe damage condition (i.e. cracking), and the most unfavorable case for anchor performance is when the crack plane contains the anchor's axis and the anchor is loaded by a positive axial force (constant during crack cycles). Exceptional load behavior Exceptional loads differ from ordinary static loads in their rise time. High displacement rates are involved in impact loading. Regarding steel-to-concrete connections, some examples are the collision of vehicles with barriers connected to a concrete base, and explosions. Apart from these extraordinary loads, structural connections are subjected to seismic actions, which rigorously have to be treated via a dynamic approach. For instance, a seismic pull-out action on an anchor can have a rise time of 0.03 seconds. By contrast, in a quasi-static test, 100 seconds may be assumed as the time interval to reach the peak load. Regarding the concrete base failure mode: concrete cone failure loads increase at elevated loading rates with respect to the static case. Designs See also Well nut References Structural connectors Threaded fasteners Wall anchors
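As a rough illustration of the ultimate-limit-state checks described above, the sketch below evaluates a steel capacity, a simplified CCD-type concrete-cone capacity (N = k·√fc·hef^1.5), and the linear tension/shear interaction domain. The capacity formulas, the factor k, the 0.6 shear factor, and all numbers are simplified assumptions for illustration only; real verifications must follow the applicable code (e.g. EN 1992-4 or ACI 318 Chapter 17) with its partial safety factors.

```python
import math

def steel_tension_capacity(a_s_mm2, f_u_mpa):
    """Steel rupture capacity in N (no partial factors - illustration only)."""
    return a_s_mm2 * f_u_mpa

def concrete_cone_capacity(h_ef_mm, f_c_mpa, k=10.1):
    """Simplified CCD-type concrete-cone capacity in N:
    N = k * sqrt(f_c) * h_ef^1.5, with k an assumed illustrative factor."""
    return k * math.sqrt(f_c_mpa) * h_ef_mm ** 1.5

def linear_interaction_ok(n_ed, v_ed, n_rd, v_rd):
    """Linear tension/shear interaction domain: N/NRd + V/VRd <= 1."""
    return n_ed / n_rd + v_ed / v_rd <= 1.0

# Illustrative single anchor: M16 rod (As ~ 157 mm^2), fu = 800 MPa,
# effective embedment 100 mm in concrete with f_c = 25 MPa.
n_rd = min(steel_tension_capacity(157, 800),   # steel failure
           concrete_cone_capacity(100, 25))    # concrete cone failure
v_rd = 0.6 * steel_tension_capacity(157, 800)  # assumed steel shear capacity

# Applied design loads: 30 kN tension, 20 kN shear.
print(linear_interaction_ok(n_ed=30e3, v_ed=20e3, n_rd=n_rd, v_rd=v_rd))
```

Note how the governing tension resistance is the minimum over the failure modes (here the concrete cone, at roughly 50 kN, governs over steel rupture at about 126 kN), mirroring the code requirement to verify all possible failure mechanisms.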
Anchor bolt
[ "Engineering" ]
2,148
[ "Structural engineering", "Structural connectors" ]
7,163,828
https://en.wikipedia.org/wiki/Copper%20sheathing
Copper sheathing is a method for protecting the hull of a wooden vessel from attack by shipworm, barnacles and other marine growth through the use of copper plates affixed to the surface of the hull, below the waterline. It was pioneered and developed by the Royal Navy during the 18th century. In antiquity, the ancient Chinese used copper plates and the ancient Greeks used lead plates to protect the underwater hull. Development Deterioration of the hull of a wooden ship was a significant problem during the Age of Sail. Ships' hulls were under continuous attack by shipworm, barnacles and other marine growth, all of which had some adverse effect on the ship, be it structural, in the case of the worm, or affecting speed and handling in the case of the weeds. The most common methods of dealing with these problems were through the use of wood, and sometimes lead, sheathing. Expendable wood sheathing effectively provided a non-structural skin to the hull for the worm to attack, and could be easily replaced in dry dock at regular intervals. However, weed grew rapidly and slowed ships. Lead sheathing, while more effective than wood in mitigating these problems, reacted badly with the iron bolts of the ships. Even older than the sheathing methods were the various graving and paying techniques. There were three main substances used: white stuff, which was a mixture of whale oil, rosin and brimstone; black stuff, a mixture of tar and pitch; and brown stuff, which was simply brimstone added to black stuff. It was common practice to coat the hull with the selected substance, then cover that with a thin outer layer of wooden planking. The use of copper sheathing was first suggested by Charles Perry in 1708, though it was rejected by the Navy Board on grounds of high cost and perceived maintenance difficulties. The first experiments with copper sheathing were made in the late 1750s: the bottoms and sides of several ships' keels and false keels were sheathed with copper plates. 
In 1761, the experiment was expanded, and the 32-gun frigate HMS Alarm was ordered to have her entire bottom coppered, in response to the terrible condition in which she had returned from service in the West Indies. HMS Alarm was chosen because, in 1761, a letter had been sent regarding the ship's condition, saying that the worms of those waters had taken a significant toll on the ship's wooden hull. Before the copper plates were applied, the hull was covered with "soft stuff", which was simply hair, yarn and brown paper. The copper performed very well, both in protecting the hull from worm invasion and in preventing weed growth, for, when in contact with water, the copper produced a poisonous film, composed mainly of copper oxychloride, that deterred these marine organisms. Furthermore, as this film was slightly soluble, it gradually washed away, leaving no way in which marine life could attach itself to the ship. However, it was soon discovered by the Admiralty that the copper bolts used to hold the plates to the hull had reacted with the iron bolts used in the construction of the ship, rendering many bolts nearly useless. In 1766, because of the poor condition of the iron bolts, Alarm's copper was removed. After this experiment, and deterred by the unanticipated and not-yet-understood galvanic reaction between the copper and iron, lead sheathing was tried again, though it was found to be unsuitable to the task, as the plates tended to fall from the hull alarmingly quickly. By 1764, a second vessel, HMS Dolphin, had been sheathed in copper, specifically to prepare her for a voyage of discovery in tropical waters. Dolphin's hull was inspected in 1768 after the ship had twice circumnavigated the world; there was significant corrosion of the hull's iron components, which had to be replaced. In 1769 another attempt was made at coppering a ship's hull, this time on a new ship that had been constructed using bolts made from a copper alloy. 
The results were far more favourable this time, but still the problems with the bolting remained. The onset and intensification from 1773 of the war with America took the focus off the bolting issue, which had to be resolved before a full-scale coppering programme was possible. By the 1780s the technology had spread to India. The ruler of Mysore, Tipu Sultan, ordered that all his navy vessels receive copper sheathing after observing the benefits in French and East India Company ships. Widespread implementation With the American war in full swing, the Royal Navy set about coppering the bottoms of the entire fleet in 1778. This would not have happened but for the war. It also came about because in 1778 a Mr. Fisher, a Liverpool shipbuilder (who did a brisk trade with West Africa), sent a letter to the Navy Board. In it he recommended "copper sheathing" as a solution to the problems of shipworm in warm tropical waters and the effect on speed of tendrils of seaweed latching onto hulls. The letter itself does not survive and is obliquely referred to in other official correspondence held by the National Maritime Museum; it may have contained, or been coincidental with, a critical new technical breakthrough of protecting the iron bolting by applying thick paper between the copper plates and the hull. This had recently been trialled successfully (probably) on . This breakthrough was to be what would win over the Admiralty. Fisher's letter was seen by the new Navy Board Controller Charles Middleton, who at the time had the major problem of supplying over 100 ships for the American Revolutionary War (1775–1783), which was compounded that same year (1778) by French opportunism in declaring war on Britain to support the American rebels. This effectively turned what was a local civil war into a global conflict. Spain followed in 1779 and the Netherlands in 1780, and so Britain had to face its three greatest rivals. 
Middleton took the view that Britain was "outnumbered at every station", and the Navy was required to "extricate us from present danger". He understood that coppering allowed the navy to stay at sea for much longer without the need for cleaning and repairs to the underwater hull, making it a very attractive, if expensive, proposition. He had to expand the Navy, but there was no time to add to the fleet and limited resources were available. It could take five years and 2,000 trees to build a warship. He could, however, refurbish the existing fleet; he grasped Fisher's solution and on 21 January 1779 wrote to the Admiralty. He also petitioned King George III directly on this "matter of the gravest importance" for the necessary funding. He took with him a model of a ship showing a coppered bottom to illustrate the method. The King backed him for what was an expensive process for an untested technology. Each ship required on average 15 tonnes of copper, applied as around 300 plates. All the copper was supplied by British mines (Britain being the only country in the world at that time that could do so), the largest mine being Parys Mountain in Anglesey, north Wales. The Parys mine had recently begun large-scale production and had glutted the British market with cheap copper; however, the 14 tons of metal required to copper a 74-gun third-rate ship of the line still cost £1,500, compared to £262 for wood. The benefits of increased speed and time at sea were deemed to justify the costs involved. Middleton, in May 1779, placed orders at the Portsmouth Docks for coppering all ships up to and including 32 guns when next they entered dry dock. In July, this order was expanded to include ships of 44 guns and fewer, in total 51 ships within a year. It was then decided that the entire fleet should be coppered, due to the difficulties in maintaining a mixed fleet of coppered and non-coppered ships.
By 1781, 82 ships of the line had been coppered, along with fourteen 50-gun ships, 115 frigates, and 182 unrated vessels. All this was too late to avert the loss of the American colonies, however; meanwhile the French were threatening the lucrative sugar trade in the Caribbean, reckoned at the time as being of more importance to British interests than the 13 colonies. The sugar trade was paying for the costs of the American Revolutionary War and the Anglo-French War (1778–1783). The Royal Navy's newly coppered ships, as yet untested, were used successfully by Rodney in defeating the French at the Battle of the Saintes off Dominica in 1782. By the time the war ended in 1783, problems with the hull bolting were once more becoming apparent. Finally, a suitable alloy for the hull bolts was found, that of copper and zinc. At great cost the Admiralty decided in 1786 to go ahead with the re-bolting of every ship in the navy, thus finally eliminating the bolt corrosion problem. This process lasted several years, after which no significant changes to the coppering system were required and metal plating remained a standard method of protecting a ship's underwater hull until the advent of modern anti-fouling paint. In the 19th century, pure copper was partially superseded by Muntz metal, an alloy of 60% copper, 40% zinc and a trace of iron. Muntz metal had the advantage of being somewhat cheaper than copper. Civilian use With its widespread adoption by the Royal Navy, some shipping owners employed the method on their merchant vessels. A single coppered vessel was recorded on the register of Lloyd's of London in 1777, a slaver sloop Hawke, 140 tons. This particular vessel impressed the Admiralty when it was inspected by Sandwich in 1775 at Sheerness after a 5-year voyage to India. By 1786, 275 vessels (around 3% of the merchant fleet) were coppered. By 1816, this had risen to 18% of British merchant ships. Copper sheets were exported to India for use on ships built there. 
In the late 18th and early 19th century, around 30% of Indian ships were coppered. Merchant ship owners were attracted by the savings made possible by copper sheathing, despite the initial outlay. As the coppering was expensive, only the better owners tended to invest in the method, and as a result the use of copper sheathing tended to indicate a well-found and maintained ship, which led to Lloyd's of London charging lower insurance premiums, as the vessels were better risks. From this stems the phrase "copper-bottomed" as an indication of quality. Coppering was more commonly used on merchant ships sailing in warm waters. Ships sailing in colder, northern waters often continued to use replaceable, wooden sheathing planks. Wood-boring organisms were less of a problem for these vessels and they were often routinely careened – an operation that could cause considerable damage to expensive coppering. Coppering was widely used on slave ships. After the Abolition of the Slave Trade Act became British law in 1807, the slave trade became illegal so slavers valued fast ships that were more likely to evade patrolling Royal Navy vessels intent on capturing them. Humphry Davy's experiments with copper sheathing In the late 18th to early 19th century, Sir Humphry Davy performed many experiments to determine how to lessen the corrosion that the seawater caused on unprotected copper sheathing. To this end he had various thicknesses of copper submerged on the shore and then measured how much the sea water had degraded each one. Sheets of different metals remained in the seawater for four months and then were examined. Two harbour ships were also used in this test, one with an additional zinc band, the other with an iron one. 
Davy observed that, while the zinc and iron themselves became covered in carbonate that allowed weeds, plant life and insects to attach themselves to the metal, the copper sheets that were connected to cast iron or zinc parts were free of any attached life forms or discoloration. Unprotected copper would quickly go from a reddish color to a greenish color of corrosion. When the other metals were mixed with copper in ratios from 1:40 to 1:150, there was no visible sign of corrosion and minimal weight loss. When the ratio was changed to 1:200 and 1:400, there was significant corrosion and weight loss. Davy therefore advocated cast iron for protecting copper, since it was the cheapest to manufacture, and in his observations malleable iron and zinc seemed to wear down faster. Other uses The term copper-bottomed continues to be used to describe a venture, plan or investment that is safe and is certain to be successful. The related copper-fastened (and verb form copperfasten) is used similarly, though with the nuance of "secured, unambiguous", rather than "trustworthy, reliable". See also Careening Hull (watercraft) Nelson Chequer Citations General and cited references External links National Pollutant Inventory - Copper and compounds fact sheet Copper Shipbuilding Sailing ship components
Copper sheathing
[ "Engineering" ]
2,640
[ "Shipbuilding", "Marine engineering" ]
7,164,750
https://en.wikipedia.org/wiki/Dimethyl%20carbonate
Dimethyl carbonate (DMC) is an organic compound with the formula OC(OCH3)2. It is a colourless, flammable liquid. It is classified as a carbonate ester. This compound has found use as a methylating agent and as a co-solvent in lithium-ion batteries. Notably, dimethyl carbonate is a weak methylating agent, and is not considered a carcinogen. Instead, dimethyl carbonate is often considered to be a green reagent, and it is exempt from the restrictions placed on most volatile organic compounds (VOCs) in the United States. Production World production in 1997 was estimated at 1000 barrels a day. Production of dimethyl carbonate worldwide is limited to Asia, the Middle East, and Europe. Dimethyl carbonate is traditionally prepared by the reaction of phosgene and methanol. Methyl chloroformate is produced as an intermediate:

COCl2 + CH3OH → CH3OCOCl + HCl
CH3OCOCl + CH3OH → CH3OCO2CH3 + HCl

This synthesis route has been largely replaced by oxidative carbonylation. In this process, carbon monoxide and an oxidizer provide the equivalent of CO2+:

CO + 1/2 O2 + 2 CH3OH → (CH3O)2CO + H2O

It can also be produced industrially by transesterification of ethylene carbonate or propylene carbonate with methanol, which also affords ethylene glycol or propylene glycol, respectively. This route is complicated by the methanol–DMC azeotrope, which requires azeotropic distillation or other techniques. Reactions and potential applications Methylating agent Dimethyl carbonate methylates anilines, carboxylic acids, and phenols, albeit usually slowly. Sometimes these reactions require the use of an autoclave. Dimethyl carbonate's main benefit over other methylating reagents such as iodomethane and dimethyl sulfate is its low toxicity. Additionally, it is biodegradable. Unfortunately, it is a relatively weak methylating agent compared to these traditional reagents. Solvent In the US, dimethyl carbonate was exempted under the definition of volatile organic compounds (VOCs) by the U.S.
EPA in 2009. Due to its classification as VOC-exempt, dimethyl carbonate has grown in popularity and applications as a replacement for methyl ethyl ketone (MEK) and parachlorobenzotrifluoride, as well as tert-butyl acetate until it too was exempted. Dimethyl carbonate has an ester- or alcohol-like odor, which is more favorable to users than most hydrocarbon solvents it replaces. Dimethyl carbonate has an evaporation rate of 3.22 (butyl acetate = 1.0), which is slightly slower than MEK (3.8) and ethyl acetate (4.1), and faster than toluene (2.0) and isopropanol (1.7). Dimethyl carbonate has a solubility profile similar to common glycol ethers, meaning dimethyl carbonate can dissolve most common coating resins except perhaps rubber-based resins. Its Hildebrand solubility parameter is 20.3 MPa^1/2 and its Hansen solubility parameters are: dispersion = 15.5, polar = 3.9, H bonding = 9.7. Dimethyl carbonate is partially soluble in water up to 13%; however, it is hydrolyzed in water-based systems over time to methanol and CO2 unless properly buffered. Dimethyl carbonate can freeze at the same temperatures as water, but it can be thawed out with no loss of properties to itself or to coatings based on dimethyl carbonate. Intermediate in polycarbonate synthesis A large captive use of dimethyl carbonate is for the production of diphenyl carbonate through transesterification with phenol. Diphenyl carbonate is a widely used raw material for the synthesis of bisphenol-A-polycarbonate in a melt polycondensation process, the resulting product being recyclable by reversing the process and transesterifying the polycarbonate with phenol to yield diphenyl carbonate and bisphenol A. Alternative fuel additive There is also interest in using this compound as a fuel oxygenate additive. In lithium-ion and lithium-metal batteries Similar to ethylene carbonate, dimethyl carbonate forms an electronically-insulating Li+-conducting film at negative electrode potentials.
However, the film in dry DMC solutions is not as effective in passivating the negative electrode as the film in wet solutions. For this reason dimethyl carbonate is rarely used in lithium batteries without a co-solvent. Safety DMC is a flammable liquid with a flash point of 17 °C (63 °F), which limits its use in consumer and indoor applications. DMC is still safer than acetone, methyl acetate and methyl ethyl ketone from a flammability point of view. The National Center for Sustainable Transportation recommends limiting exposure by inhalation to less than 100 ppm over an 8-hour work day, which is similar to that of a number of common industrial solvents (toluene, methyl ethyl ketone). Workers should wear protective organic vapor respirators when using DMC indoors or in other conditions where concentrations exceed the REL. DMC is metabolized by the body to methanol and carbon dioxide, so accidental ingestion should be treated in the same manner as methanol poisoning. See also Dimethyl dicarbonate References Methylating agents Carbonate esters Methyl esters
Dimethyl carbonate
[ "Chemistry" ]
1,202
[ "Methylation", "Methylating agents" ]
7,165,122
https://en.wikipedia.org/wiki/False%20keel
The false keel was a timber, forming part of the hull of a wooden sailing ship. Typically thick for a 74-gun ship in the 19th century, the false keel was constructed in several pieces, which were scarfed together, and attached to the underside of the keel by iron staples. The false keel was intended to protect the main keel from damage, and also protect the heads of the bolts holding the main keel together. The false keel could easily be replaced when it became damaged. See also Worm shoe References Sailing ship components Shipbuilding
False keel
[ "Engineering" ]
108
[ "Shipbuilding", "Marine engineering" ]
7,166,138
https://en.wikipedia.org/wiki/Perth%20Art%20Gallery
Perth Art Gallery is the principal art gallery and exhibition space in the city of Perth, Scotland. It is located partly in the Marshall Monument, named in memory of Thomas Hay Marshall, a former provost of Perth. The building was formerly known as Perth Museum and Art Gallery, ceasing to be so in anticipation of the new Perth Museum opening within Perth City Hall. History The museum's location was formerly the site of a late 12th-century motte and bailey castle, built in 1160 to protect the Tay crossing. A great flood in 1209 washed the castle away. The King, William I, was staying in it at the time and had to escape with his wife and entourage by boat to Scone. The Marshall Monument was designed by David Morison and sculpted by John Cochrane and Brothers. Construction began in 1822, and it was opened as a library and museum by the Literary and Antiquarian Society of Perth in 1824. It is one of the United Kingdom's oldest purpose-built museums, and in 1915 it was gifted to the city by the Society on the condition that it continued to be used only as a library or museum. Extension After large donations of money and paintings were bequeathed to the museum, an extension was planned for the building. In 1930 an architecture competition took place and was judged by Sir James John Burnett, a Scottish architect. A Perth firm, Smart, Stewart & Mitchell, won and the extension was begun with the laying of the foundation stone by lord provost Thomas Dempster on 2 December 1932. Work continued between 1933 and 1935, and it was opened on 10 August 1935 by the Duke and Duchess of York, the future King George VI and Queen Elizabeth. This extension housed the donated paintings as well as the Natural History collections of the Perthshire Society of Natural Science, which had previously been held at its museum at 62–72 Tay Street. It was made a category B listed building in May 1965.
The museum's collection The museum collection includes the South Corston fragment of the Strathmore meteorite and the mummy of a woman named Ta-Kr-Hb. See also List of listed buildings in Perth, Scotland References External links Culture Perth and Kinross: Perth Museum and Art Gallery Local museums in Scotland Natural history museums in Scotland Decorative arts museums in Scotland Glass museums and galleries Category B listed buildings in Perth and Kinross Listed buildings in Perth, Scotland Art museums and galleries in Perth, Scotland
Perth Art Gallery
[ "Materials_science", "Engineering" ]
492
[ "Glass engineering and science", "Glass museums and galleries" ]
7,166,412
https://en.wikipedia.org/wiki/Global%20Area%20Reference%20System
The Global Area Reference System (GARS) is a standardized geospatial reference system developed by the National Geospatial-Intelligence Agency (NGA) for use across the United States Department of Defense. Under the Chairman of the Joint Chiefs of Staff Instruction CJCSI 3900.01C dated 30 June 2007, GARS was adopted for use by the US DoD as "the “area-centric” counterpart to the “point-centric” MGRS". It uses the WGS 1984 datum and is based on lines of longitude (LONG) and latitude (LAT). It is intended to provide an integrated common frame of reference for joint force situational awareness to facilitate air-to-ground coordination, deconfliction, integration, and synchronization. This area reference system provides a common language between the components and simplifies communications. GARS is primarily designed as a battlespace management tool and not to be used for navigation or targeting. Design GARS divides the surface of the earth into 30-minute by 30-minute cells. Each cell is identified by a five-character designation. (ex. 006AG) The first three characters designate a 30-minute wide longitudinal band. Beginning with the 180-degree meridian and proceeding eastward, the bands are numbered from 001 to 720, so that 180°E to 179°30′W is band 001; 179°30′W to 179°00′W is band 002; and so on. The fourth and fifth characters designate a 30-minute wide latitudinal band. Beginning at the south pole and proceeding northward, the bands are lettered from AA to QZ (omitting I and O), so that 90°00′S to 89°30′S is band AA; 89°30′S to 89°00′S is band AB; and so on. Each 30-minute cell is divided into four 15-minute by 15-minute quadrants. The quadrants are numbered sequentially, from west to east, starting with the northernmost band. Specifically, the northwest quadrant is “1”; the northeast quadrant is “2”; the southwest quadrant is “3”; the southeast quadrant is “4”. Each quadrant is identified by a six-character designation. (ex.
006AG3) The first five characters comprise the 30-minute cell designation. The sixth character is the quadrant number. Each 15-minute quadrant is divided into nine 5-minute by 5-minute areas. The areas are numbered sequentially, from west to east, starting with the northernmost band. The graphical representation of a 15-minute quadrant with numbered 5-minute by 5-minute areas resembles a telephone keypad. Each 5-minute by 5-minute area, or keypad “key”, is identified by a seven-character designation. The first six characters comprise the 15-minute quadrant designation. The seventh character is the keypad “key” number. (ex. 006AG39) See also Military Grid Reference System List of geodesic-geocoding systems External links NGA's description Chairman of the Joint Chiefs of Staff Instruction CJCSI 3900.01C dated 30 June 2007 Geographic coordinate systems Military cartography
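The band, quadrant, and keypad rules described above can be sketched as a short Python function. This is an illustrative implementation of the published scheme, not NGA code; the function name and internal structure are my own. It works in integer units of 5 minutes (1/12 of a degree), so every 30′, 15′ and 5′ boundary is exact:

```python
import math

# Latitude band letters: A-Z omitting I and O (24 letters).
_LETTERS = "ABCDEFGHJKLMNPQRSTUVWXYZ"

def gars(lat: float, lon: float) -> str:
    """Return the 7-character GARS designation for a WGS 84 lat/lon."""
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("coordinate out of range")
    # Whole 5-minute units east of 180 degrees and north of the south
    # pole, clamped so lon = 180 / lat = 90 fall in the last band.
    x = min(int(math.floor((lon + 180.0) * 12)), 12 * 360 - 1)
    y = min(int(math.floor((lat + 90.0) * 12)), 12 * 180 - 1)
    # 30-minute cell: six 5-minute units per band.
    band = x // 6 + 1                    # 001..720, west to east
    row = y // 6                         # 0..359, south to north
    cell = f"{band:03d}{_LETTERS[row // 24]}{_LETTERS[row % 24]}"
    # 15-minute quadrant: 1 = NW, 2 = NE, 3 = SW, 4 = SE.
    east, north = (x % 6) // 3, (y % 6) // 3
    quadrant = 1 + east + (0 if north else 2)
    # 5-minute keypad "key": 1..9, west to east from the northern row.
    key = (2 - y % 3) * 3 + (x % 3) + 1
    return f"{cell}{quadrant}{key}"

print(gars(-86.95, -177.30))  # → 006AG39 (the area used as the example above)
```

Decoding is the mirror image: the three digits and two letters recover the 30-minute cell, and the quadrant and keypad digits narrow it to 15′ and then 5′.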
Global Area Reference System
[ "Mathematics" ]
660
[ "Geographic coordinate systems", "Coordinate systems" ]
7,167,202
https://en.wikipedia.org/wiki/Exponential-Golomb%20coding
An exponential-Golomb code (or just Exp-Golomb code) is a type of universal code. To encode any nonnegative integer x using the exp-Golomb code:

Write down x+1 in binary.
Count the bits written, subtract one, and write that number of starting zero bits preceding the previous bit string.

The first few values of the code are:

0 ⇒ 1 ⇒ 1
1 ⇒ 10 ⇒ 010
2 ⇒ 11 ⇒ 011
3 ⇒ 100 ⇒ 00100
4 ⇒ 101 ⇒ 00101
5 ⇒ 110 ⇒ 00110
6 ⇒ 111 ⇒ 00111
7 ⇒ 1000 ⇒ 0001000
8 ⇒ 1001 ⇒ 0001001
...

In the above examples, consider the case 3. For 3, x+1 = 3+1 = 4. 4 in binary is '100'. '100' has 3 bits, and 3−1 = 2. Hence two zeros are added before '100', giving '00100'. Similarly, for 8, 8+1 in binary is '1001'. '1001' has 4 bits, and 4−1 = 3. Hence three zeros are added before '1001', giving '0001001'. This is identical to the Elias gamma code of x+1, allowing it to encode 0. Extension to negative numbers Exp-Golomb coding is used in the H.264/MPEG-4 AVC and H.265 High Efficiency Video Coding video compression standards, in which there is also a variation for the coding of signed numbers by assigning the value 0 to the binary codeword '0' and assigning subsequent codewords to input values of increasing magnitude (and alternating sign, if the field can contain a negative number):

0 ⇒ 0 ⇒ 1 ⇒ 1
1 ⇒ 1 ⇒ 10 ⇒ 010
−1 ⇒ 2 ⇒ 11 ⇒ 011
2 ⇒ 3 ⇒ 100 ⇒ 00100
−2 ⇒ 4 ⇒ 101 ⇒ 00101
3 ⇒ 5 ⇒ 110 ⇒ 00110
−3 ⇒ 6 ⇒ 111 ⇒ 00111
4 ⇒ 7 ⇒ 1000 ⇒ 0001000
−4 ⇒ 8 ⇒ 1001 ⇒ 0001001
...

In other words, a non-positive integer x≤0 is mapped to an even integer −2x, while a positive integer x>0 is mapped to an odd integer 2x−1. Exp-Golomb coding is also used in the Dirac video codec. Generalization to order k To encode larger numbers in fewer bits (at the expense of using more bits to encode smaller numbers), this can be generalized using a nonnegative integer parameter k.
To encode a nonnegative integer x in an order-k exp-Golomb code:

Encode ⌊x/2^k⌋ using the order-0 exp-Golomb code described above, then
Encode x mod 2^k in binary using k bits.

An equivalent way of expressing this is:

Encode x+2^k−1 using the order-0 exp-Golomb code (i.e. encode x+2^k using the Elias gamma code), then
Delete k leading zero bits from the encoding result.

See also Elias gamma (γ) coding Elias delta (δ) coding Elias omega (ω) coding Universal code References Entropy coding Numeral systems Data compression
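The order-0 rule, the H.264-style signed mapping, and the order-k generalization above can all be sketched in a few lines of Python. This is a minimal illustration of the scheme as described, with function names of my own choosing:

```python
def exp_golomb(x: int, k: int = 0) -> str:
    """Order-k exp-Golomb code of a nonnegative integer, as a bit string.

    Order 0: write x+1 in binary and prepend one fewer leading zeros
    than it has bits.  Order k: encode floor(x / 2**k) with the order-0
    code, then append the k-bit remainder x mod 2**k.
    """
    if x < 0 or k < 0:
        raise ValueError("x and k must be nonnegative")
    q, r = divmod(x, 1 << k)
    body = bin(q + 1)[2:]                  # q+1 in binary
    prefix = "0" * (len(body) - 1)         # one fewer zeros than bits
    return prefix + body + (format(r, f"0{k}b") if k else "")

def signed_exp_golomb(v: int) -> str:
    """Signed variant used in H.264/H.265: 0->0, 1->1, -1->2, 2->3, -2->4, ..."""
    return exp_golomb(2 * v - 1 if v > 0 else -2 * v)

print(exp_golomb(3), exp_golomb(8), signed_exp_golomb(-2))
# → 00100 0001001 00101
```

Decoding reverses the steps: count the leading zeros n, read the next n+1 bits as q+1, then read k further bits as the remainder.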
Exponential-Golomb coding
[ "Mathematics" ]
685
[ "Numeral systems", "Mathematical objects", "Numbers" ]
7,167,651
https://en.wikipedia.org/wiki/Pneumocystis%20jirovecii
Pneumocystis jirovecii (previously P. carinii) is a yeast-like fungus of the genus Pneumocystis. The causative organism of Pneumocystis pneumonia, it is an important human pathogen, particularly among immunocompromised hosts. Prior to its discovery as a human-specific pathogen, P. jirovecii was known as P. carinii. Lifecycle The complete lifecycles of any of the species of Pneumocystis are not known, but presumably all resemble the others in the genus. The terminology follows zoological terms, rather than mycological terms, reflecting the initial misdetermination as a protozoan parasite. It is an extracellular fungus. All stages are found in lungs and because they cannot be cultured ex vivo, direct observation of living Pneumocystis is difficult. The trophozoite stage is thought to be equivalent to the so-called vegetative state of other species (such as Schizosaccharomyces pombe), which like Pneumocystis, belong to the Taphrinomycotina branch of the fungal kingdom. The trophozoite stage is single-celled and appears amoeboid (multilobed) and closely associated with host cells. Globular cysts eventually form that have a thicker wall. Within these ascus-like cysts, eight spores form, which are released through rupture of the cyst wall. The cysts often collapse, forming crescent-shaped bodies visible in stained tissue. Whether meiosis takes place within the cysts, or what the genetic status is of the various cell types, is not known for certain. Homothallism The lifecycle of P. jirovecii is thought to include both asexual and sexual phases. Asexual multiplication of haploid cells likely occurs by binary fission. The mode of sexual reproduction appears to be primary homothallism, a form of self-fertilization. The sexual phase takes place in the host's lungs. This phase is presumed to involve formation of a diploid zygote, followed by meiosis, and then production of an ascus containing the products of meiosis, eight haploid ascospores. 
The ascospores may be disseminated by airborne transmission to new hosts. Medical relevance Pneumocystis pneumonia is an important disease of immunocompromised humans, particularly patients with HIV, but also patients with an immune system that is severely suppressed for other reasons, for example, following a bone marrow transplant. In humans with a normal immune system, it is an extremely common silent infection. The organism is identified by methenamine silver stain of lung tissue; type I and type II pneumocytes over-replicate and damage the alveolar epithelium, causing death by asphyxiation. Fluid leaks into alveoli, producing an exudate seen as a honeycomb/cotton-candy appearance on hematoxylin and eosin-stained slides. The drug of choice is trimethoprim/sulfamethoxazole, pentamidine, or dapsone. In HIV patients, most cases occur when the CD4 count is below 200 cells per microliter. Nomenclature At first, the name Pneumocystis carinii was applied to the organisms found in both rats and humans, as the parasite was not yet known to be host-specific. In 1976, the name "Pneumocystis jiroveci" was proposed for the first time, to distinguish the organism found in humans from variants of Pneumocystis in other animals. The organism was named thus in honor of Czech parasitologist Otto Jírovec, who described Pneumocystis pneumonia in humans in 1952. After DNA analysis showed significant differences in the human variant, the proposal was made again in 1999 and has come into common use. The name was spelled according to the International Code of Zoological Nomenclature, since the organism was believed to be a protozoan. After it became clear that it was a fungus, the name was changed to Pneumocystis jirovecii, according to the International Code of Nomenclature for algae, fungi, and plants (ICNafp), which requires such names to be spelled with double i (ii). Both spellings are commonly used, but according to the ICNafp, P. jirovecii is correct.
A change in the ICNafp now recognizes the validity of the 1976 publication, making the 1999 proposal redundant, and cites Pneumocystis and P. jiroveci as examples of the change in ICN Article 45, Ex 7. The name P. jiroveci is typified (both lectotypified and epitypified) by samples from human autopsies dating from the 1960s. The term PCP, which was widely used by practitioners and patients, has been retained for convenience, with the rationale that it now stands for the more general Pneumocystis pneumonia rather than Pneumocystis carinii pneumonia. The name P. carinii is incorrect for the human variant, but still describes the species found in rats, and that name is typified by an isolate from rats. Pneumocystis genome Pneumocystis species cannot be grown in culture, so the availability of the human disease-causing agent, P. jirovecii, is limited. Hence, investigation of the whole genome of a Pneumocystis is largely based upon true P. carinii available from experimental rats, which can be maintained with infections. Genetic material of other species, such as P. jirovecii, can be compared to the genome of P. carinii. The genome of P. jirovecii has been sequenced from a bronchoalveolar lavage sample. The genome is small, low in G+C content, and lacks most amino-acid biosynthesis enzymes. History The earliest report of this genus appears to have been that of Carlos Chagas in 1909, who discovered it in experimental animals, but confused it with part of the lifecycle of Trypanosoma cruzi (causal agent of Chagas disease) and later called both organisms Schizotrypanum cruzi, a form of trypanosome infecting humans. The rediscovery of Pneumocystis cysts was reported by Antonio Carini in 1910, also in Brazil. The genus was again discovered in 1912 by Delanoë and Delanoë, this time at the Pasteur Institute in Paris, who found it in rats and proposed the genus and species name Pneumocystis carinii after Carini. 
Pneumocystis was redescribed as a human pathogen in 1942 by two Dutch investigators, van der Meer and Brug, who found it in three new cases: a 3-month-old infant with congenital heart disease and in two of 104 autopsy cases – a 4-month-old infant and a 21-year-old adult. There being only one described species in the genus, they considered the human parasite to be P. carinii. Nine years later (1951), Dr. Josef Vanek at Charles University in Prague, Czechoslovakia, showed in a study of lung sections from 16 children that the organism labelled "P. carinii" was the causative agent of pneumonia in these children. The following year, Czech scientist Otto Jírovec reported "P. carinii" as the cause of interstitial pneumonia in neonates. Following the realization that Pneumocystis from humans could not infect experimental animals such as rats, and that the rat form of Pneumocystis differed physiologically and had different antigenic properties, Frenkel was the first to recognize the human pathogen as a distinct species. He named it "Pneumocystis jiroveci" (corrected to P. jirovecii - see nomenclature above). Controversy existed over the relabeling of P. carinii in humans as P. jirovecii, which is why both names still appear in publications. However, only the name P. jirovecii is used exclusively for the human pathogen, whereas the name P. carinii has had a broader application to many species. Frenkel and those before him believed that all Pneumocystis were protozoans, but soon afterwards evidence began accumulating that Pneumocystis was a fungal genus. Recent studies show it to be an unusual, in some ways a primitive genus of Ascomycota, related to a group of yeasts. Every tested primate, including humans, appears to have its own type of Pneumocystis that is incapable of cross-infecting other host species and has co-evolved with each species. Currently, only five species have been formally named: P. jirovecii from humans, P. carinii as originally named from rats, P. 
murina from mice, P. wakefieldiae also from rats, and P. oryctolagi from rabbits. Historical and even recent reports of P. carinii from humans are based upon older classifications (still used by many, or those still debating the recognition of distinct species in the genus Pneumocystis) which does not mean that the true P. carinii from rats actually infects humans. In an intermediate classification system, the various taxa in different mammals have been called formae speciales or forms. For example, the human "form" was called Pneumocystis carinii f. [or f. sp.] hominis, while the original rat infecting form was called Pneumocystis carinii f. [or f. sp.] carinii. This terminology is still used by some researchers. The species of Pneumocystis originally seen by Chagas have not yet been named as distinct species. Many other undescribed species presumably exist and those that have been detected in many mammals are only known from molecular sample detection from lung tissue or fluids, rather than by direct physical observation. Currently, they are cryptic taxa. References External links Ascomycota Parasitic fungi Fungal pathogens of humans Fungus species
Pneumocystis jirovecii
[ "Biology" ]
2,176
[ "Fungi", "Fungus species" ]
7,167,911
https://en.wikipedia.org/wiki/Pelletizing
Pelletizing is the process of compressing or molding a material into the shape of a pellet. A wide range of different materials are pelletized, including chemicals, iron ore, animal compound feed, plastics, waste materials, and more. The process is considered an excellent option for the storage and transport of said materials. The technology is widely used in the powder metallurgy engineering and medicine industries. Pelletizing of iron ore Edward W. Davis of the University of Minnesota is credited with devising the process of pelletizing iron ore. Pelletizing iron ore is undertaken due to the excellent physical and metallurgical properties of iron ore pellets. Iron ore pellets are spheres to be used as raw material for blast furnaces. They typically contain 64–72% Fe and various additional materials adjusting the chemical composition and the metallurgic properties of the pellets. Typically limestone, dolomite and olivine are added, and bentonite is used as binder. The process of pelletizing combines mixing of the raw material, forming the pellet, and a thermal treatment baking the soft raw pellet to hard spheres. The raw material is rolled into a ball, then fired in a kiln or in a travelling grate to sinter the particles into a hard sphere. The configuration of iron ore pellets as packed spheres in the blast furnace allows air to flow between the pellets, decreasing the resistance to the air that flows up through the layers of material during the smelting. The configuration of iron ore powder in a blast furnace is more tightly packed and restricts the air flow. This is the reason that iron ore is preferred in the form of pellets rather than in the form of finer particles. The quality of the iron ore pellets depends on different factors, which include feed particle size, amount of water used, disc rotating speed, inclination angle of the disc bottom, residence time in the disc, as well as the quality and quantity of the binder(s) used.
Preparation of raw materials Additional materials are added to the iron ore (pellet feed) to meet the requirements of the final pellets. This is done by placing the mixture in the pelletizer, which can hold different types of ores and additives, and mixing to adjust the chemical composition and the metallurgical properties of the pellets. In general, the following stages are included in this period of processing: concentration/separation, homogenization of the substance ratios, milling, classification, thickening, homogenization of the pulp and filtering. Formation of the raw pellets The formation of raw iron ore pellets, also known as pelletizing, has the objective of producing pellets in an appropriate band of sizes and with mechanical properties that withstand the stresses of transfer, transport, and use. For example, waste materials are ground before being heated and introduced into a press for compression. Both mechanical force and thermal processes are used to produce the correct pellet properties. From an equipment point of view there are two alternatives for industrial production of iron ore pellets: the drum and the pelletizing disk. Thermal processing In order to confer high mechanical resistance and appropriate metallurgical characteristics on the pellets, they are subjected to thermal processing, which involves stages of drying, preheating, firing, after-firing and cooling. The duration of each stage and the temperature that the pellets are subjected to have a strong influence on the final product quality. Pharmaceutical industry In the field of medicine, pelletization refers to the agglomeration process that converts fine powders or granules into more or less spherical pellets. The use of the technology has increased because it allows for controlled-release dosage forms, which also leads to more uniform absorption with less mucosal irritation within the gastrointestinal tract.
There are different pelletization processes applied in the pharmaceutical industry, and these typically vary according to the bonding forces. Some examples include balling, compression, and spray congealing. Balling is similar to the wet (or green) pelletization used in the iron ore industry. Pelletizing of animal feeds Pelletizing of animal feeds can produce pellets of various sizes, from small pellets for shrimp feeds, through poultry feeds, up to larger pellets for stock feeds. The pelletizing of stock feed is done with pellet mill machinery in a feed mill. Preparation of raw ingredients Feed ingredients are normally first hammered to reduce the particle size of the ingredients. The ingredients are then batched, combined and mixed thoroughly by a feed mixer. Once the feed has been prepared to this stage, it is ready to be pelletized. Formation of the feed pellets Pelletizing is done in a pellet mill, where the feed is normally conditioned and thermally treated in the fitted conditioners of the mill. The feed is then pushed through the holes of a die and exits the pellet mill as pelleted feed. Pelletizing of wood Wood pellets, made by compressing sawdust or other ground woody materials, are used in a variety of energy and non-energy applications. In the energy sector, wood pellets are often used to replace coal, with power plants such as Drax in England replacing most of their coal use with wood pellets. As sustainably harvested wood does not lead to a long-term increase in atmospheric carbon dioxide levels, wood fuels are considered to be a low-carbon form of energy. Wood pellets are also used for domestic and commercial heating, either in automated boilers or pellet stoves. Compared to other fuels made from wood, pellets have the advantages of higher energy density, simpler handling (as pellets flow similarly to grain), and low moisture content.
Concerns have been raised about the short-term carbon balance of wood pellet production, particularly if it drives the harvesting of old or mature forests that would otherwise not be logged. Areas of concern include the inland rainforests of British Columbia. These claims are contested by the pellet and forest industries. After-pelleting processes After pelleting, the pellets are cooled to bring the temperature of the feed down. Other post-pelleting steps include post-pelleting conditioning, sorting via a screen, and coating if required. See also Blast furnace Iron ore Nurdle (bead) Pellet (disambiguation) Pellet mill Prill References Iron Livestock Metallurgical processes
Pelletizing
[ "Chemistry", "Materials_science" ]
1,349
[ "Metallurgical processes", "Metallurgy" ]
7,168,569
https://en.wikipedia.org/wiki/Scanning%20acoustic%20microscope
A scanning acoustic microscope (SAM) is a device which uses focused sound to investigate, measure, or image an object (a process called scanning acoustic tomography). It is commonly used in failure analysis and non-destructive evaluation. It also has applications in biological and medical research. The semiconductor industry has found the SAM useful in detecting voids, cracks, and delaminations within microelectronic packages. History The first scanning acoustic microscope (SAM), with a 50 MHz ultrasonic lens, was developed in 1974 by R. A. Lemons and C. F. Quate at the Microwave Laboratory of Stanford University. A few years later, in 1980, the first high-resolution (with a frequency up to 500 MHz) through-transmission SAM was built by R. Gr. Maev and his students at his Laboratory of Biophysical Introscopy of the Russian Academy of Sciences. The first commercial SAM, the ELSAM, with a broad frequency range from 100 MHz up to 1.8 GHz, was built at Ernst Leitz GmbH by the group led by Martin Hoppe and his consultants Abdullah Atalar (Stanford University), Roman Maev (Russian Academy of Sciences) and Andrew Briggs (Oxford University). Since then, many improvements to such systems have been made to enhance resolution and accuracy. Most of them are described in detail in the monograph Advances in Acoustic Microscopy (ed. Andrew Briggs, Oxford University Press, 1992) and in the monograph by Roman Maev, Acoustic Microscopy: Fundamentals and Applications (Wiley-VCH, 291 pages, August 2008). C-SAM versus other techniques There are many methods for failure analysis of damage in microelectronic packages, including laser decapsulation, wet etch decapsulation, optical microscopy, SEM microscopy, and X-ray. The problem with most of these methods is that they are destructive, which means damage may be introduced during sample preparation itself.
Also, most of these destructive methods need time-consuming and complicated sample preparation, so in most cases it is important to study damage with a non-destructive technique. Unlike other non-destructive techniques such as X-ray imaging, C-SAM is highly sensitive to the elastic properties of the materials the sound travels through. For example, C-SAM can detect delaminations and air gaps at sub-micron thicknesses, so it is particularly useful for the inspection of small, complex devices. Physics principle The technique makes use of the high penetration depth of acoustic waves to image the internal structure of the specimen. In scanning acoustic microscopy, either reflected or transmitted acoustic waves are processed to analyze the internal features. When the acoustic wave propagates through the sample it may be scattered, absorbed or reflected at media interfaces. Thus, the technique registers the echo generated by the acoustic impedance (Z) contrast between two materials. Scanning acoustic microscopy works by directing focused sound from a transducer at a small point on a target object. Sound hitting the object is either scattered, absorbed, reflected (scattered at 180°) or transmitted (scattered at 0°). It is possible to detect the scattered pulses travelling in a particular direction. A detected pulse informs of the presence of a boundary or object. The "time of flight" of the pulse is defined as the time taken for it to be emitted by an acoustic source, scattered by an object and received by the detector, which is usually coincident with the source. Given the speed of sound through the medium, the time of flight can be used to determine the distance of the inhomogeneity from the source. Based on the measurement, a value is assigned to the location investigated. The transducer (or object) is moved slightly and then insonified again. This process is repeated in a systematic pattern until the entire region of interest has been investigated.
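The time-of-flight-to-depth conversion described above can be sketched in a few lines. This is an illustrative sketch, not part of any SAM vendor's software; the speed of sound used for water is an approximate textbook value.

```python
def depth_from_tof(tof_seconds: float, speed_m_per_s: float) -> float:
    """Estimate the depth of a reflector from a pulse-echo time of flight.

    The pulse travels from the transducer to the inhomogeneity and back
    to the (coincident) detector, so the one-way distance is half the
    round-trip distance.
    """
    return speed_m_per_s * tof_seconds / 2.0

# Example: a 100 ns round trip in water (speed of sound roughly 1480 m/s)
depth_m = depth_from_tof(100e-9, 1480.0)
print(f"{depth_m * 1e6:.1f} micrometres")  # 74.0 micrometres
```

With a measured time of flight and a known (or assumed) sound speed for the coupling medium and sample, this is essentially how a single A-scan echo is turned into a depth estimate.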
Often the values for each point are assembled into an image of the object. The contrast seen in the image is based either on the object's geometry or material composition. The resolution of the image is limited either by the physical scanning resolution or by the width of the sound beam (which in turn is determined by the frequency of the sound). Methodology Different types of analysis modes are available in high-definition SAM. The main three modes are A-scans, B-scans, and C-scans. Each one provides different information about the integrity of the sample's structure. The A-scan is the amplitude of the echo signal over time of flight (ToF). The transducer is mounted on the z-axis of the SAM. It can be focused on a specific target layer located in a hard-to-access area by changing the z-position with respect to the sample under test, which is mechanically fixed. The B-scan provides a vertical cross section of the sample with visualization of the depth information, which is very useful for damage detection in the cross section. The C-scan is a commonly used scanning mode which gives 2D images (slices) of a target layer at a specific depth in the sample; multiple equidistant layers are feasible through the X-scan mode. Pulse-reflection method 2D or 3D images of the internal structure become available by means of the pulse-reflection method, in which the impedance mismatch between two materials leads to a reflection of the ultrasonic beam. Phase inversion of the reflected signal can allow for discrimination of delaminations (acoustic impedance almost zero) from inclusions and particles, but not from air bubbles, which show the same impedance behavior as delaminations. The higher the impedance mismatch at the interface, the higher the intensity of the reflected signal (more brightness in the 2D image), which is measured by the echo amplitude.
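The relationship between impedance mismatch and echo strength follows from the standard normal-incidence pressure reflection coefficient, R = (Z2 − Z1) / (Z2 + Z1). The sketch below uses approximate, illustrative impedance values (in MRayl) that are not taken from the article itself.

```python
def reflection_coefficient(z1: float, z2: float) -> float:
    """Normal-incidence pressure reflection coefficient at the boundary
    between a medium of acoustic impedance z1 (incident side) and z2."""
    return (z2 - z1) / (z2 + z1)

# Approximate acoustic impedances in MRayl (illustrative values only)
water, silicon, air = 1.5, 19.8, 0.0

print(reflection_coefficient(water, silicon))  # ~0.86: strong echo at a water/Si interface
print(reflection_coefficient(silicon, air))    # -1.0: total reflection, inverted phase
```

A large |R| appears as a bright feature in the 2D image, and the sign change at a solid/air boundary is the phase inversion mentioned above for flagging delaminations.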
In the case of an interface with air (Z = 0), total reflection of the ultrasonic wave occurs; therefore, SAM is highly sensitive to any entrapped air in the sample under test. In order to enhance the insertion of the acoustic wave into the specimen, both the acoustic transducer and the sample are immersed in a coupling medium, typically water, to avoid the high reflection at air interfaces. In the pulse-wave mode, a lens with good on-axis focusing properties is used to focus the ultrasonic waves onto a spot on the specimen and to receive the reflected waves back from the spot, typically in less than 100 ns. The acoustic beam can be focused to a sufficiently small spot at depths of up to 2–3 mm to resolve typical interlaminar cracks and other critical crack geometries. The received echoes are analysed and stored for each point to build up an image of the entire scanned area. The reflected signal is monitored and sent to a synchronous display to develop a complete image, as in a scanning electron microscope. Applications - Fast production control - Standards: IPC-A-610, MIL-STD-883, J-STD-035, ESA, etc. - Parts sorting - Inspection of solder pads, flip-chip, underfill, die-attach - Sealing joints - Brazed and welded joints - Qualification and fast selection of glues and adhesives, comparative analyses of aging, etc. - Inclusions, heterogeneities, porosities and cracks in materials Medicine and biology SAM can provide data on the elasticity of cells and tissues, which can give useful information on the physical forces holding structures in a particular shape and the mechanics of structures such as the cytoskeleton. These studies are particularly valuable in investigating processes such as cell motility.
Some work has also been performed to assess the penetration depth of particles injected into skin using needle-free injection. Another promising direction, initiated by several groups, is the design and construction of portable hand-held SAMs for subsurface diagnostics of soft and hard tissues; this direction is currently being commercialized for clinical and cosmetology practice. See also Acoustic microscopy References Acoustics Microscopes American inventions
Scanning acoustic microscope
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,639
[ "Classical mechanics", "Measuring instruments", "Acoustics", "Microscopes", "Microscopy" ]
7,168,628
https://en.wikipedia.org/wiki/Network%20Centric%20Operations%20Industry%20Consortium
The Network Centric Operations Industry Consortium (NCOIC) is an international not-for-profit organization, chartered in the United States, whose goal is to facilitate the adoption of cross-domain interoperability standards. Formed in September 2004, the organization is composed of more than 50 members and advisors representing business, government organizations and academic institutions in 12 countries. Network-centric operations (NCO) is the application of the fundamental tenets of network-centric warfare to aspects of national security, especially industry support for the missions of both the United States Department of Defense and the Department of Homeland Security (DHS). NCOIC does not only subscribe to the military use of this theory, but also works to apply NCO and interoperability across nations and industries, including emergency response, health care, aerospace, information technology, cyber security and cloud computing, energy, and financial services. NCOIC's technical teams have developed resources to further the use of network-centric systems and interoperability in both the public and private sectors. These resources (processes, tools, frameworks, patterns, principles and databases) are available free of charge on the NCOIC website. They are aimed at helping an organization lower engineering costs, speed program implementation, increase capability and reduce risk. The consortium also provides training and services such as interoperability demonstrations, acquisition strategies, evaluations and verification. NCOIC focuses on four interdependent areas in identifying solutions that will enable cross-domain interoperability: business, culture, governance and technical. The interaction, influence and impact of factors such as financial objectives, business goals, laws and regulations, and cultural considerations are all taken into account when planning and/or implementing technology change.
Key Technical Resources Systems, Capabilities, Operations, Programs, & Enterprises (SCOPE) Model • The SCOPE interoperability assessment model is designed to characterize interoperability-relevant aspects or capabilities of a system, or set of systems over a network, in terms of a set of dimensions and values along those dimensions. NCOIC Interoperability Framework (NIF) • The NIF is a development framework that helps system architects and system engineers embed interoperability elements throughout the life cycle of programs, beginning with requirements. Whenever possible, those resources are based upon standards. Net Centric Patterns • NCOIC Net Centric Patterns contain prescriptive recommendations on approaches and standards in specific interoperability domains. Network Centric Analysis Tool (NCAT) • NCAT is a collaborative, web-enabled, questionnaire-based tool developed to help NCOIC teams and member companies increase the likelihood, and reduce the time and effort, of developing interoperable systems consistent with customers' policies and guidelines, reference models and architectures. It is also available in an Excel format. NCOIC QuadTrangle • The QuadTrangle™, developed by the Network Centric Operations Industry Consortium, shows the four interdependent areas that must be considered when developing a reliable and trusted interoperable environment: business, culture, governance and technical. References Information technology organizations Business organizations based in the United States Net-centric Organizations established in 2004
Network Centric Operations Industry Consortium
[ "Technology" ]
632
[ "Information technology", "Information technology organizations" ]
7,168,931
https://en.wikipedia.org/wiki/Use%20of%20force%20continuum
A use of force continuum is a standard that provides law enforcement officers and civilians with guidelines as to how much force may be used against a resisting or compliant subject in a given situation. In some ways, it is similar to the U.S. military's escalation of force (EOF). The purpose of these models is to clarify, both for law enforcement officers and civilians, the complex subject of use of force. They are often central parts of law enforcement agencies' use of force policies. Various criminal justice agencies have developed different models of the continuum, and there is no universal or standard model. Generally, each agency will have its own use of force policy. Some agencies may separate some of the hand-to-hand uses of force; for example, take-downs and pressure point techniques may be one step before actual strikes and kicks. Also, for some agencies the use of aerosol pepper spray and electronic control devices (e.g., TASER) may fall into the same category as take-downs, or as actual strikes. The first examples of a use of force continuum were developed in the 1980s and early 1990s. Early models were depicted in various formats, including graphs, semicircular "gauges", and linear progressions. Most often the models are presented in "stair step" fashion, with each level of force matched by a corresponding level of subject resistance, although it is generally noted that an officer need not progress through each level before reaching the final level of force. These progressions rest on the premise that officers should escalate and de-escalate their level of force in response to the subject's actions. Although the use of force continuum is used primarily as a training tool for law enforcement officers, it is also valuable in proceedings involving civilians, such as criminal trials or hearings by police review boards.
In particular, a graphical representation of a use of force continuum is useful to a jury when deciding whether an officer's use of force was reasonable. Example model While the specific progression of force varies considerably among different agencies and jurisdictions (especially in the wide gap between empty hand control and deadly force), one example of a general use of force continuum model cited in a U.S. government publication on use of force is shown below. Officer presence – the professionalism, uniform, and utility belt of the law enforcement officer and the marked vessel or vehicle the officer arrives in. The visual presence of authority is normally enough for a subject to comply with an officer's lawful demands. Depending on the totality of the circumstances, a call/situation may require additional officers, or on-scene officers may request assistance in order to gain better control of the situation and ensure a safer environment for all involved. For example, depending on how many people are at the scene with the officer, a larger presence may be required. However, if 10 officers arrive at a scene with only a single suspect, the public may perceive the situation as an excessive use of officer presence within the use of force continuum. Verbal commands/cooperative controls – clear and understandable verbal direction by an officer aimed at the subject. In some cases, it is necessary for the officer to include a consequence with the verbal direction so that the subject understands what will happen if the subject refuses to comply with the officer's direction. The verbal command and the consequence must be legal and not considered excessive according to the continuum. For example, an officer could not order a disabled person in a wheelchair to stand up or be sprayed with oleoresin capsicum (OC) pepper spray.
Soft control techniques (e.g., pressure point control tactics, PPCT) – a level of force that has a low probability of causing soft connective tissue damage or bone fractures. This would include joint manipulation techniques, applying pressure to pressure points, and normal application of handcuffs. Hard control/aggressive response techniques – an amount of force that has a probability of causing soft connective tissue damage, bone fractures, or irritation of the skin, eyes, and mucous membranes. This would include kicks, punches, stuns and the use of aerosol sprays such as oleoresin capsicum (OC) pepper spray. Some models split these techniques between empty hand, soft control and intermediate weapon techniques but only include 5 levels of the continuum. Intermediate weapons – an amount of force that would have a high probability of causing soft connective tissue damage or bone fractures (e.g. expandable baton, baton, pepper spray, Taser, beanbag rounds, rubber fin-stabilized ammunition, Mace (spray), police dogs, etc.). Intermediate weapon techniques are designed to impact muscles, arms and legs; intentionally using an intermediate weapon on the head, neck, groin, kneecaps, or spine would be classified as deadly or lethal force. Lethal force/deadly force – force with a high probability of causing death or serious bodily injury. Serious bodily injury includes unconsciousness, protracted or obvious physical disfigurement, or protracted loss of or impairment to the function of a bodily member, organ, or mental faculty. A firearm is the most widely recognized lethal or deadly force weapon; however, an automobile or a weapon of opportunity could also constitute deadly force. The U.S. Navy teaches a six-step model: officer presence, verbal commands, soft controls, hard controls, intermediate weapons, and lethal force.
Hard controls include the use of tools such as handcuffs, while soft controls equate to the empty-hand techniques above, describing techniques where the officer may engage a resisting detainee. When escalating, voluntary submission to cuffs is a viable way to prevent the need for empty-hand submission techniques, which place the officer and the detainee at physical risk. When de-escalating, hard controls (i.e. cuffs and isolation in the rear seat of a cruiser) give officers a reasonable and achievable goal after an altercation with a detainee during which higher levels of force may have been required. Subject classifications In all use of force continuum models, the actions of the subject are classified so that the officer can quickly determine what level of force is authorized and may be necessary to apprehend or compel compliance from the individual. Listed below are examples of how subjects are classified. Passive compliant – a person who recognizes the authority of the officer's presence and follows the verbal commands of the officer. Passive resistor – a person who refuses to follow the verbal commands of the officer but does not resist attempts by officers to take positive physical control over them. Active resistor – a person who does not follow verbal commands and resists attempts by the officer to take positive physical control over them, but does not try to inflict harm on the officer. Active aggressor – a person who does not follow verbal commands, resists attempts by the officer to take positive physical control over them, and attempts to cause harm to the officer or others. Generally, passive subjects and active resistors fall under levels 1–3 of the use of force continuum, while active aggressors fall under levels 4–6. Officers are trained to apply the proper measure of force within the continuum based on the actions and classification of the subject. Reasonableness standard The United States Supreme Court, in the case of Graham v.
Connor (1989), ruled that excessive use of force claims must be evaluated under the "objectively reasonable" standard of the Fourth Amendment. Therefore, the "reasonableness" of a use of force incident must be judged from the perspective of a reasonable officer on the scene, and with the understanding that police officers are often forced to make split-second decisions about the amount of force necessary in a particular situation. Broadly speaking, the use of force by an officer becomes necessary and is permitted under specific circumstances, such as in self-defense or in defense of another individual or group. However, there is no all-encompassing consensus about when an officer would always need to use force, nor is there any agreed-upon method that can efficiently measure or predict the specific types of force actions that one would deem reasonable before the time comes. The International Association of Chiefs of Police has described use of force as the "amount of effort required by police to compel compliance by an unwilling subject". When force is observed Garner and Maxwell (1996) found that when force was necessary, in 80 percent of the encounters police opted to use weaponless force such as grabbing or shoving. Alpert and Dunham (1999) show that police use of force is reactionary, initiated by suspects resisting arrest. Force is more likely to be employed if the suspect is disrespectful, intoxicated, and/or wielding a weapon. Research has also found that special division officers are more likely to use deadly force on suspects. Studies examining gender influences on the use of force are still inconclusive. Some findings suggest that male suspects are more likely to have force used against them, whereas others show insignificant differences. However, research examining male-female patrol teams shows that these pairings are less likely to use force compared to male-male pairings.
Conclusions suggest that female officers may be more effective at defusing tense situations. See also Friedrich Glasl's model of conflict escalation Pain compliance Peelian principles Police brutality Reasonable force Footnotes References Law of War, Rules of Engagement, and Escalation of Force Guide, Marine Corps Center for Lessons Learned. 31 August 2007. External links Law Enforcement Police Integrity - United States Department of Justice Violence Law enforcement terminology
Use of force continuum
[ "Biology" ]
1,983
[ "Behavior", "Aggression", "Human behavior", "Violence" ]
7,169,703
https://en.wikipedia.org/wiki/Krogh%27s%20principle
Krogh's principle states that "for such a large number of problems there will be some animal of choice, or a few such animals, on which it can be most conveniently studied." This concept is central to those disciplines of biology that rely on the comparative method, such as neuroethology, comparative physiology, and more recently functional genomics. History Krogh's principle is named after the Danish physiologist August Krogh, winner of the Nobel Prize in Physiology or Medicine for his contributions to understanding the anatomy and physiology of the capillary system, who described it in The American Journal of Physiology in 1929. However, the principle was first elucidated nearly 60 years earlier, in almost the same words, by Claude Bernard, the French instigator of experimental medicine, on page 27 of his 1865 "Introduction à l'étude de la médecine expérimentale" (Introduction to the Study of Experimental Medicine). Krogh set out the principle in his 1929 treatise on the then-current status of physiology. "Krogh's principle" was not used as a formal term until 1975, when the biochemist Hans Adolf Krebs (who first described the citric acid cycle) referred to it. More recently, at the International Society for Neuroethology meeting in Nyborg, Denmark in 2004, Krogh's principle was cited as a central principle by the group at their 7th Congress. Krogh's principle has also been receiving attention in the area of functional genomics, where there has been increasing pressure and desire to expand genomics research to a wider variety of organisms beyond the traditional scope of the field. Philosophy and applications A concept central to Krogh's principle is evolutionary adaptation. Evolutionary theory maintains that organisms are suited to particular niches, some of which are highly specialized for solving particular biological problems. These adaptations are typically exploited by biologists in several ways: Methodology: (e.g.
Taq polymerase and PCR): The need to manipulate biological systems in the laboratory has driven the use of organismal specializations. One example of Krogh's principle presents itself in the heavily used polymerase chain reaction (PCR), a method which relies on repeatedly exposing DNA to high heat to amplify particular sequences of interest. The DNA polymerases of many organisms would denature at high temperatures; to solve this problem, Chien and colleagues turned to Thermus aquaticus, a bacterium native to hot springs. Thermus aquaticus has a polymerase that is heat-stable at the temperatures necessary for PCR. Biochemically modified Taq polymerase, as it is usually called, is now routinely used in PCR applications. Overcoming technical limitations: (e.g. large neurons in Mollusca): Two Nobel Prize–winning bodies of study were facilitated by using ideas central to Krogh's principle to overcome technical limitations in nervous system physiology. The ionic basis of the action potential was elucidated in the squid giant axon in 1952 by Hodgkin and Huxley, developers of the original voltage clamp device and co-recipients of the 1963 Nobel Prize in Physiology or Medicine. The voltage clamp is now a central piece of technology in modern neurophysiology, but it was only possible to develop it using the wide diameter of the squid giant axon. Another marine mollusc, the opisthobranch Aplysia, possesses a relatively small number of large nerve cells that are easily identified and mapped from individual to individual. Aplysia was selected for these reasons for the study of the cellular and molecular basis of learning and memory, which led to Eric Kandel's receipt of the Nobel Prize in 2000. Understanding more complex/subtle systems (e.g. barn owls and sound localization): Beyond overcoming technical limitations, Krogh's principle has particularly important implications in the light of convergent evolution and homology.
Either because of evolutionary history, or because of particular constraints on a given niche, there are not infinite solutions to all biological problems. Instead, organisms utilize similar neural algorithms, behaviors, or even structures to accomplish similar tasks. If one's goal is to understand how the nervous system might localize objects using sound, one may take the approach of using an auditory 'specialist' such as the barn owl, studied by Mark Konishi, Eric Knudsen and their colleagues. A nocturnal predator by nature, the barn owl relies heavily on precise information about the time of arrival of sound at each of its ears. The information gleaned from this approach has contributed heavily to our understanding of how the brain maps sensory space, and how nervous systems encode timing information. See also August Krogh Comparative physiology Evolutionary physiology Krogh length Neuroethology Further reading Bennett AF (2003). Experimental evolution and the Krogh Principle: generating biological novelty for functional and genetic analyses. Physiological and Biochemical Zoology 76:1-11. Burggren WW (1999/2000). Developmental physiology, animal models, and the August Krogh principle. Zoology 102:148-156. Chien A, Edgar DB, Trela JM (1976). "Deoxyribonucleic acid polymerase from the extreme thermophile Thermus aquaticus". J. Bacteriol. 127:1550-1557. Crawford DL (2001). "Functional genomics does not have to be limited to a few select organisms". Genome Biology 2(1):interactions1001.1-1001.2. Krebs HA (1975). The August Krogh principle: "For many problems there is an animal on which it can be most conveniently studied." Journal of Experimental Zoology 194:221-226. Krogh A (1929). The progress of physiology. American Journal of Physiology 90:243-251. "Krogh's principle for a new era." (2003) [Editorial] Nature Genetics 34(4) pp. 345–346. Miller G (2004). Behavioral Neuroscience Uncaged. Science 306(5695):432-434. Biology experiments Neuroethology Biology theories
Krogh's principle
[ "Biology" ]
1,291
[ "Ethology", "Behavior", "Neuroethology", "Biology theories" ]
7,170,399
https://en.wikipedia.org/wiki/Autosomal%20dominant%20nocturnal%20frontal%20lobe%20epilepsy
Autosomal dominant nocturnal frontal lobe epilepsy (ADNFLE) is an epileptic disorder that causes frequent violent seizures during sleep. These seizures often involve complex motor movements, such as hand clenching, arm raising/lowering, and knee bending. Vocalizations such as shouting, moaning, or crying are also common. ADNFLE is often misdiagnosed as nightmares. Attacks often occur in clusters and typically first manifest in childhood. There are four known loci for ADNFLE, three with known causative genes. These genes, CHRNA4, CHRNB2, and CHRNA2, encode various nicotinic acetylcholine receptor α and β subunits. Signs and symptoms Causes While not well understood, it is believed that malfunction in thalamocortical loops plays a vital role in ADNFLE. The reasons for this belief are threefold. Firstly, thalamocortical loops are important in sleep and the frontal cortex is the origin of ADNFLE seizures. Secondly, both the thalamus and cortex receive cholinergic inputs, and the three known causative genes for ADNFLE all encode acetylcholine receptor subunits. Thirdly, K-complexes are almost invariably present at the start of seizures. Mechanism CHRNA4 The first mutation associated with ADNFLE is a serine to phenylalanine transition at position 248 (S248F), located in the second transmembrane spanning region of the gene encoding a nicotinic acetylcholine receptor α4 subunit. Using the numbering based on the human CHRNA4 protein, this mutation is called S280F. Receptors containing this mutant subunit are functional, but desensitize at a much faster pace compared to wild-type only receptors. These mutant containing receptors also recover from desensitization at a much slower rate than wild-type only receptors. These mutant receptors also have a lower single-channel conductance than wild-type receptors and a lower affinity for acetylcholine. Also importantly, this mutation, along with the others in CHRNA4, produces receptors less sensitive to calcium. 
The second discovered ADNFLE mutation was also in CHRNA4. This mutation, L259_I260insL, is caused by the insertion of three nucleotides (GCT) between a stretch of leucine amino acids and an isoleucine. As with the S248F mutation, the L259_I260insL mutation is located in the second transmembrane spanning region. Electrophysiological experiments have shown that this mutant is tenfold more sensitive to acetylcholine than wild-type. Calcium permeability, however, is notably decreased in mutant compared to wild-type containing receptors. Furthermore, this mutant shows slowed desensitization compared to both wild-type and S248F mutant receptors. Also located in the second transmembrane spanning region, the S252L mutation has also been associated with ADNFLE. This mutant displays increased affinity for acetylcholine and faster desensitization compared to wild-type receptors. The most recently discovered mutation in CHRNA4 associated with ADNFLE is T265M, again located in the second transmembrane spanning segment. This mutation has been little studied; all that is known is that it produces receptors with increased sensitivity to acetylcholine, and that it has low penetrance. 15q24 Some families have been shown not to have mutations in CHRNA4 and, furthermore, to show no linkage around it. Instead some of these families show strong linkage on chromosome 15 (15q24) near CHRNA3, CHRNA5, and CHRNB4. Causative genes in this area are still unknown. CHRNB2 Three mutations have been found in the gene CHRNB2, which encodes an acetylcholine receptor β2 subunit. Two of these mutations, V287L and V287M, occur at the same amino acid, again in the second transmembrane spanning region. The V287L mutation results in receptors that desensitize at a much slower rate compared to wild-type. The V287M mutant displays a higher affinity for acetylcholine when compared to wild-type receptors. As with the mutations in CHRNA4, these mutants lead to receptors less sensitive to calcium. 
The other known mutation in CHRNB2 is I312M, located in the third membrane-spanning region. Receptors containing these mutant subunits display much larger currents and a higher sensitivity to acetylcholine than wild-type receptors. CHRNA2 Recently, the I279N mutation has been discovered in the first transmembrane spanning segment of CHRNA2, which encodes a nicotinic acetylcholine receptor α2 subunit similar to the nAChR α4 encoded by CHRNA4. This mutant shows a higher sensitivity to acetylcholine and unchanged desensitization compared to wild-type. Diagnosis Management References Further reading GeneReviews/NCBI/NIH/UW entry on Autosomal Dominant Nocturnal Frontal Lobe Epilepsy External links Epilepsy types Sleep disorders Channelopathies Unsolved problems in neuroscience Frontal lobe Autosomal dominant disorders
Autosomal dominant nocturnal frontal lobe epilepsy
[ "Biology" ]
1,106
[ "Behavior", "Sleep", "Sleep disorders" ]
7,170,579
https://en.wikipedia.org/wiki/Thermophoresis
Thermophoresis (also thermomigration, thermodiffusion, the Soret effect, or the Ludwig–Soret effect) is a phenomenon observed in mixtures of mobile particles where the different particle types exhibit different responses to the force of a temperature gradient. This phenomenon tends to move light molecules to hot regions and heavy molecules to cold regions. The term thermophoresis most often applies to aerosol mixtures whose mean free path is comparable to its characteristic length scale, but may also commonly refer to the phenomenon in all phases of matter. The term Soret effect normally applies to liquid mixtures, which behave according to different, less well-understood mechanisms than gaseous mixtures. Thermophoresis may not apply to thermomigration in solids, especially multi-phase alloys. Thermophoretic force The phenomenon is observed at the scale of one millimeter or less. An example that may be observed by the naked eye with good lighting is when the hot rod of an electric heater is surrounded by tobacco smoke: the smoke goes away from the immediate vicinity of the hot rod. As the small particles of air nearest the hot rod are heated, they create a fast flow away from the rod, down the temperature gradient. While the kinetic energy of the particles is similar at the same temperature, lighter particles acquire higher velocity compared to the heavy ones. When they collide with the large, slower-moving particles of the tobacco smoke, they push the latter away from the rod. The force that has pushed the smoke particles away from the rod is an example of a thermophoretic force, as the mean free path of air at ambient conditions is 68 nm and the characteristic length scales are between 100–1000 nm. Thermodiffusion is labeled "positive" when particles move from a hot to cold region and "negative" when the reverse is true. 
Typically the heavier/larger species in a mixture exhibit positive thermophoretic behavior while the lighter/smaller species exhibit negative behavior. In addition to the sizes of the various types of particles and the steepness of the temperature gradient, the heat conductivity and heat absorption of the particles play a role. Recently, Braun and coworkers have suggested that the charge and entropy of the hydration shell of molecules play a major role for the thermophoresis of biomolecules in aqueous solutions. The quantitative description is given by the drift–diffusion equation ∂c/∂t = ∇ · (D ∇c + D_T c(1 − c) ∇T), where c is the particle concentration, D the diffusion coefficient, and D_T the thermodiffusion coefficient. The quotient S_T = D_T/D of the two coefficients is called the Soret coefficient. The thermophoresis factor has been calculated from molecular interaction potentials derived from known molecular models. Applications The thermophoretic force has a number of practical applications. The basis for applications is that, because different particle types move differently under the force of the temperature gradient, the particle types can be separated by that force after they have been mixed together, or prevented from mixing if they are already separated. Impurity ions may move from the cold side of a semiconductor wafer towards the hot side, since the higher temperature makes the transition structure required for atomic jumps more achievable. The diffusive flux may occur in either direction (either up or down the temperature gradient), dependent on the materials involved. Thermophoretic force has been used in commercial precipitators for applications similar to electrostatic precipitators. It is exploited in the manufacturing of optical fiber in vacuum deposition processes. It can be important as a transport mechanism in fouling. Thermophoresis has also been shown to have potential in facilitating drug discovery by allowing the detection of aptamer binding by comparison of the bound versus unbound motion of the target molecule. 
This approach has been termed microscale thermophoresis. Furthermore, thermophoresis has been demonstrated as a versatile technique for manipulating single biological macromolecules, such as genomic-length DNA, and HIV virus in micro- and nanochannels by means of light-induced local heating. Thermophoresis is one of the methods used to separate different polymer particles in field flow fractionation. History Thermophoresis in gas mixtures was first observed and reported by John Tyndall in 1870 and further understood by John Strutt (Baron Rayleigh) in 1882. Thermophoresis in liquid mixtures was first observed and reported by Carl Ludwig in 1856 and further understood by Charles Soret in 1879. James Clerk Maxwell wrote in 1873 concerning mixtures of different types of molecules (and this could include small particulates larger than molecules): "This process of diffusion... goes on in gases and liquids and even in some solids.... The dynamical theory also tells us what will happen if molecules of different masses are allowed to knock about together. The greater masses will go slower than the smaller ones, so that, on an average, every molecule, great or small, will have the same energy of motion. The proof of this dynamical theorem, in which I claim the priority, has recently been greatly developed and improved by Dr. Ludwig Boltzmann." It has been analyzed theoretically by Sydney Chapman. Thermophoresis at solids interfaces was numerically discovered by Schoen et al. in 2006 and was experimentally confirmed by Barreiro et al. Negative thermophoresis in fluids was first noticed in 1967 by Dwyer in a theoretical solution, and the name was coined by Sone. Negative thermophoresis at solids interfaces was first observed by Leng et al. in 2016. 
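In the dilute limit (c ≪ 1), balancing the thermodiffusive flux against ordinary diffusion at steady state gives an exponential concentration profile, c(x) = c0 · exp(−S_T · (T(x) − T0)), where S_T = D_T/D is the Soret coefficient described above. A minimal numeric sketch (the function name is our own, not from the literature):

```python
import math

def soret_profile(c0, S_T, T, T0):
    """Dilute-limit steady-state concentration at local temperature T:
    zero net flux D*dc/dx + D_T*c*dT/dx = 0 integrates to
    c = c0 * exp(-S_T * (T - T0)), with Soret coefficient S_T = D_T / D."""
    return c0 * math.exp(-S_T * (T - T0))
```

A positive Soret coefficient depletes particles on the hot side (positive thermodiffusion); a negative one accumulates them there.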
See also Deposition (aerosol physics) Dufour effect Maxwell–Stefan diffusion Microscale thermophoresis References External links A short introduction to thermophoresis, including helpful animated graphics, is at aerosols.wustl.edu Ternary mixtures HCl Alkali bromides Non-equilibrium thermodynamics Aerosols
Thermophoresis
[ "Chemistry", "Mathematics" ]
1,222
[ "Non-equilibrium thermodynamics", "Aerosols", "Colloids", "Dynamical systems" ]
7,170,877
https://en.wikipedia.org/wiki/Benzeneselenol
Benzeneselenol, also known as selenophenol, is the organoselenium compound with the chemical formula C6H5SeH, often abbreviated PhSeH. It is the selenium analog of phenol. This colourless, malodorous compound is a reagent in organic synthesis. Synthesis Benzeneselenol is prepared by the reaction of phenylmagnesium bromide and selenium: PhMgBr + Se → PhSeMgBr PhSeMgBr + HCl → PhSeH + MgBrCl Since benzeneselenol does not have a long shelf life, it is often generated in situ. A common method is by reduction of diphenyldiselenide. A further reason for this conversion is that often, it is the anion that is sought. Reactions More so than thiophenol, benzeneselenol is easily oxidized by air. The facility of this reaction reflects the weakness of the Se-H bond, whose bond dissociation energy is estimated to be between 67 and 74 kcal/mol. In contrast, the S-H BDE for thiophenol is near 80 kcal/mol. The product is diphenyl diselenide, as shown in this idealized equation: 4 PhSeH + O2 → 2 PhSeSePh + 2 H2O. The presence of the diselenide in benzeneselenol is indicated by a yellow coloration. The diselenide can be converted back to the selenol by reduction followed by acidification of the resulting PhSe−. PhSeH is acidic with a pKa of 5.9. Thus at neutral pH, it is mostly ionized: PhSeH ⇌ PhSe− + H+. It is approximately seven times more acidic than the related thiophenol. Both compounds dissolve in water upon the addition of base. The conjugate base is PhSe−, a potent nucleophile. History Benzeneselenol was first reported in 1888 by the reaction of benzene with selenium tetrachloride (SeCl4) in the presence of aluminium trichloride (AlCl3). Safety The compound is intensely malodorous and, like other organoselenium compounds, toxic. References Organoselenium compounds Selenols Phenyl compounds Foul-smelling chemicals Reagents for organic chemistry
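The claim that PhSeH (pKa 5.9) is mostly ionized at neutral pH follows from the Henderson–Hasselbalch relation; a quick check (hypothetical helper name):

```python
def fraction_ionized(pKa, pH):
    """Fraction of a weak acid present as its conjugate base:
    1 / (1 + 10**(pKa - pH)), from the Henderson-Hasselbalch equation."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# With pKa 5.9 at pH 7, roughly 93% of benzeneselenol is deprotonated.
```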
Benzeneselenol
[ "Chemistry" ]
466
[ "Reagents for organic chemistry" ]
7,171,253
https://en.wikipedia.org/wiki/Single-ended%20primary-inductor%20converter
The single-ended primary-inductor converter (SEPIC) is a type of DC/DC converter that allows the electrical potential (voltage) at its output to be greater than, less than, or equal to that at its input. The output of the SEPIC is controlled by the duty cycle of the electronic switch (S1). A SEPIC is essentially a boost converter followed by an inverted buck-boost converter. While similar to a traditional buck-boost converter, it has a few advantages. It has a non-inverted output (the output has the same electrical polarity as the input). Its use of a series capacitor to couple energy from the input to the output allows the circuit to respond more gracefully to a short-circuit output. And it is capable of true shutdown: when the switch S1 is turned off, the output (VO) drops to 0 V, following a fairly hefty transient dump of charge. SEPICs are useful in applications in which a battery voltage can be above and below that of the regulator's intended output. For example, a single lithium ion battery typically discharges from 4.2 volts to 3 volts; if other components require 3.3 volts, then the SEPIC would be effective. Circuit operation The schematic diagram for a basic SEPIC is shown in Figure 1. As with other switched mode power supplies (specifically DC-to-DC converters), the SEPIC exchanges energy between the capacitors and inductors in order to convert from one voltage to another. The amount of energy exchanged is controlled by switch S1, which is typically a transistor such as a MOSFET. MOSFETs offer much higher input impedance and lower voltage drop than bipolar junction transistors (BJTs), and do not require biasing resistors as MOSFET switching is controlled by differences in voltage rather than a current, as with BJTs. Continuous mode A SEPIC is said to be in continuous-conduction mode ("continuous mode") if the currents through inductors L1 and L2 never fall to zero during an operating cycle. 
During a SEPIC's steady-state operation, the average voltage across capacitor C1 (VC1) is equal to the input voltage (Vin). Because capacitor C1 blocks direct current (DC), the average current through it (IC1) is zero, making inductor L2 the only source of DC load current. Therefore, the average current through inductor L2 (IL2) is the same as the average load current and hence independent of the input voltage. Looking at average voltages, the following can be written: VIN = VL1 + VC1 + VL2. Because the average voltage of VC1 equals VIN, VL1 = −VL2. For this reason, the two inductors can be wound on the same core, which begins to resemble a flyback converter, the most basic of the transformer-isolated switched-mode power supply topologies. Since the voltages are the same in magnitude, their effects on the mutual inductance will be zero, assuming the polarity of the windings is correct. Also, since the voltages are the same in magnitude, the ripple currents from the two inductors will be equal in magnitude. The average currents can be summed as follows (average capacitor currents must be zero): When switch S1 is turned on, current IL1 increases and the current IL2 goes more negative. (Mathematically, it decreases due to arrow direction.) The energy to increase the current IL1 comes from the input source. Since S1 is a short circuit while closed, and the instantaneous voltage VL1 is approximately VIN, the voltage VL2 is approximately −VC1. Therefore, D1 is opened and the capacitor C1 supplies the energy to increase the magnitude of the current in IL2 and thus increase the energy stored in L2. The load current IL is supplied by C2. The easiest way to visualize this is to consider the bias voltages of the circuit in a DC state, then close S1. When switch S1 is turned off, the current IC1 becomes the same as the current IL1, since inductors do not allow instantaneous changes in current. The current IL2 will continue in the negative direction; in fact, it never reverses direction. 
It can be seen from the diagram that a negative IL2 will add to the current IL1 to increase the current delivered to the load. Using Kirchhoff's Current Law, it can be shown that ID1 = IC1 - IL2. It can then be concluded that, while S1 is off, power is delivered to the load from both L2 and L1. C1, however, is being charged by L1 during this off cycle (as is C2 by L1 and L2), and will in turn recharge L2 during the following on cycle. Because the potential (voltage) across capacitor C1 may reverse direction every cycle, a non-polarized capacitor should be used. However, a polarized tantalum or electrolytic capacitor may be used in some cases, because the potential (voltage) across capacitor C1 will not change unless the switch is closed long enough for a half cycle of resonance with inductor L2, and by this time the current in inductor L1 could be quite large. The capacitor CIN has no effect on the ideal circuit's analysis, but is required in actual regulator circuits to reduce the effects of parasitic inductance and internal resistance of the power supply. The boost/buck capabilities of the SEPIC are possible because of capacitor C1 and inductor L2. Inductor L1 and switch S1 create a standard boost converter, which generates a voltage (VS1) that is higher than VIN, whose magnitude is determined by the duty cycle of the switch S1. Since the average voltage across C1 is VIN, the output voltage (VO) is VS1 - VIN. If VS1 is less than double VIN, then the output voltage will be less than the input voltage. If VS1 is greater than double VIN, then the output voltage will be greater than the input voltage. Discontinuous mode A SEPIC is said to be in discontinuous-conduction mode or discontinuous mode if the current through either of inductors L1 or L2 is allowed to fall to zero during an operating cycle. Reliability and efficiency The voltage drop and switching time of diode D1 is critical to a SEPIC's reliability and efficiency. 
The diode's switching time needs to be extremely fast in order to not generate high voltage spikes across the inductors, which could cause damage to components. Fast conventional diodes or Schottky diodes may be used. The resistances in the inductors and the capacitors can also have large effects on the converter efficiency and output ripple. Inductors with lower series resistance allow less energy to be dissipated as heat, resulting in greater efficiency (a larger portion of the input power being transferred to the load). Capacitors with low equivalent series resistance (ESR) should also be used for C1 and C2 to minimize ripple and prevent heat build-up, especially in C1 where the current is changing direction frequently. Disadvantages Like the buck–boost converter, the SEPIC has a pulsating output current. The similar Ćuk converter does not have this disadvantage, but it can only have negative output polarity, unless the isolated Ćuk converter is used. Since the SEPIC converter transfers all its energy via the series capacitor, a capacitor with high capacitance and current handling capability is required. The fourth-order nature of the converter also makes the SEPIC converter difficult to control, making it only suitable for very slow varying applications. See also Switched-mode power supply (SMPS) DC to DC converter Buck converter Boost converter Buck-boost converter Flyback converter Ćuk converter References Maniktala, Sanjaya. Switching Power Supply Design & Optimization, McGraw-Hill, New York 2005 SEPIC Equations and Component Ratings, Maxim Integrated Products. Appnote 1051, 2005. TM SEPIC converter in PFC Pre-Regulator, STMicroelectronics. Application Note AN2435. This application note presents the basic equation of the SEPIC converter, in addition to a practical design example. High Frequency Power Converters, Intersil Corporation. Application Note AN9208, April 1994. 
This application note covers various power converter architectures, including the various conduction modes of SEPIC converters. DC-to-DC converters Voltage regulation
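Combining the relations in the continuous-mode discussion — the boost stage produces VS1 = VIN/(1 − D) and the output is VO = VS1 − VIN — yields the ideal conversion ratio VO = VIN · D/(1 − D). A small sketch of that arithmetic (our own helper name; an ideal lossless converter is assumed):

```python
def sepic_vout(vin, duty):
    """Ideal continuous-mode SEPIC output voltage, Vo = Vin * D / (1 - D).
    duty < 0.5 bucks the input; duty > 0.5 boosts it."""
    assert 0.0 <= duty < 1.0, "duty cycle must lie in [0, 1)"
    vs1 = vin / (1.0 - duty)   # boost-stage voltage VS1
    return vs1 - vin           # series capacitor C1 drops VIN on average
```

At D = 0.5 the ideal output equals the input, matching the observation that VS1 = 2·VIN marks the buck/boost crossover.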
Single-ended primary-inductor converter
[ "Physics" ]
1,822
[ "Voltage", "Physical quantities", "Voltage regulation" ]
9,312,326
https://en.wikipedia.org/wiki/Gabriel%20graph
In mathematics and computational geometry, the Gabriel graph of a set of points in the Euclidean plane expresses one notion of proximity or nearness of those points. Formally, it is the graph with vertex set in which any two distinct points and are adjacent precisely when the closed disc having as a diameter contains no other points. Another way of expressing the same adjacency criterion is that and should be the two closest given points to their midpoint, with no other given point being as close. Gabriel graphs naturally generalize to higher dimensions, with the empty disks replaced by empty closed balls. Gabriel graphs are named after K. Ruben Gabriel, who introduced them in a paper with Robert R. Sokal in 1969. Percolation For Gabriel graphs of infinite random point sets, the finite site percolation threshold gives the fraction of points needed to support connectivity: if a random subset of fewer vertices than the threshold is given, the remaining graph will almost surely have only finite connected components, while if the size of the random subset is more than the threshold, then the remaining graph will almost surely have an infinite component (as well as finite components). This threshold was proved to exist, and more precise values of both site and bond thresholds have been given by Norrenbrock. Related geometric graphs The Gabriel graph is a subgraph of the Delaunay triangulation. It can be found in linear time if the Delaunay triangulation is given. The Gabriel graph contains, as subgraphs, the Euclidean minimum spanning tree, the relative neighborhood graph, and the nearest neighbor graph. It is an instance of a beta-skeleton. Like beta-skeletons, and unlike Delaunay triangulations, it is not a geometric spanner: for some point sets, distances within the Gabriel graph can be much larger than the Euclidean distances between points. References Euclidean plane geometry Geometric graphs
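The closed-disc adjacency test can be checked directly: by Thales' theorem, a point r lies in the closed disc with diameter pq exactly when d(p,r)² + d(q,r)² ≤ d(p,q)². A naive O(n³) construction under that criterion (a sketch only, not the linear-time Delaunay-based method mentioned in the text):

```python
from itertools import combinations

def gabriel_edges(points):
    """Connect p and q iff no other input point lies in the closed disc
    having segment pq as its diameter (naive O(n^3) check)."""
    def d2(a, b):  # squared Euclidean distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return [(p, q) for p, q in combinations(points, 2)
            if all(d2(p, r) + d2(q, r) > d2(p, q)
                   for r in points if r != p and r != q)]
```

For the three points (0,0), (1,0), (0,1), the pair (1,0)–(0,1) is not an edge: (0,0) lies on the boundary of their diametral disc, and the closed disc includes its boundary.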
Gabriel graph
[ "Mathematics" ]
381
[ "Planes (geometry)", "Euclidean plane geometry" ]
9,312,350
https://en.wikipedia.org/wiki/Element%20distinctness%20problem
In computational complexity theory, the element distinctness problem or element uniqueness problem is the problem of determining whether all the elements of a list are distinct. It is a well studied problem in many different models of computation. The problem may be solved by sorting the list and then checking if there are any consecutive equal elements; it may also be solved in linear expected time by a randomized algorithm that inserts each item into a hash table and compares only those elements that are placed in the same hash table cell. Several lower bounds in computational complexity are proved by reducing the element distinctness problem to the problem in question, i.e., by demonstrating that the solution of the element uniqueness problem may be quickly found after solving the problem in question. Decision tree complexity The number of comparisons needed to solve the problem of size n, in a comparison-based model of computation such as a decision tree or algebraic decision tree, is Θ(n log n). Here, Θ invokes big theta notation, meaning that the problem can be solved in a number of comparisons proportional to n log n (a linearithmic function) and that all solutions require this many comparisons. In these models of computation, the input numbers may not be used to index the computer's memory (as in the hash table solution) but may only be accessed by computing and comparing simple algebraic functions of their values. For these models, an algorithm based on comparison sort solves the problem within a constant factor of the best possible number of comparisons. The same lower bound applies as well to the expected number of comparisons in the randomized algebraic decision tree model. 
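The two solutions described above — sorting followed by a scan for equal neighbors, and hashing — can be sketched side by side (illustrative function names):

```python
def distinct_by_sorting(items):
    """Theta(n log n) comparisons: sort, then check consecutive pairs."""
    s = sorted(items)
    return all(a != b for a, b in zip(s, s[1:]))

def distinct_by_hashing(items):
    """Expected linear time: stop at the first element already seen."""
    seen = set()
    for x in items:
        if x in seen:
            return False
        seen.add(x)
    return True
```

Both return True exactly when every element of the list is distinct.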
Real RAM Complexity If the elements in the problem are real numbers, the decision-tree lower bound extends to the real random-access machine model with an instruction set that includes addition, subtraction and multiplication of real numbers, as well as comparison and either division or remaindering ("floor"). It follows that the problem's complexity in this model is also Θ(n log n). This RAM model covers more algorithms than the algebraic decision-tree model, as it encompasses algorithms that use indexing into tables. However, in this model all program steps are counted, not just decisions. Turing Machine complexity A single-tape deterministic Turing machine can solve the problem, for n elements of bits each, in time , while on a nondeterministic machine the time complexity is . Quantum complexity Quantum algorithms can solve this problem faster, in Θ(n^(2/3)) queries. The optimal algorithm is by Andris Ambainis. Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended his work to obtain the lower bound for all functions. Generalization: Finding repeated elements Elements that occur more than times in a multiset of size may be found by a comparison-based algorithm, the Misra–Gries heavy hitters algorithm, in time . The element distinctness problem is a special case of this problem where k = n. This time is optimal under the decision tree model of computation. See also Collision problem References Polynomial-time problems
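The Misra–Gries heavy hitters algorithm mentioned above keeps at most k − 1 candidate counters; any element occurring more than n/k times in a stream of length n is guaranteed to survive as a candidate. A one-pass sketch (a second pass over the data would verify the exact counts):

```python
def misra_gries(stream, k):
    """Return candidates that may occur more than len(stream)/k times,
    using at most k - 1 counters."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k - 1:
            counters[x] = 1
        else:
            # decrement every counter; drop those that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return set(counters)
```

With k = 2 this reduces to the Boyer–Moore majority-vote scheme: a single counter tracking the one element that could appear more than n/2 times.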
Element distinctness problem
[ "Mathematics" ]
621
[ "Mathematical problems", "Computational problems", "Polynomial-time problems" ]
9,312,650
https://en.wikipedia.org/wiki/Crawl%20space
A crawl space or crawlspace is an unoccupied, unfinished, narrow space within a building, between the ground and the first (or ground) floor. The crawl space is so named because there is typically only enough room to crawl rather than stand; anything larger than about and beneath the ground floor would tend to be considered a basement. Uses A crawl space is often built when building a basement would be impractical. A crawl space can also substitute for a concrete slab foundation that would hinder building inspections. The crawl space's functions include providing access to repair plumbing, electrical wiring, and heating and cooling systems without the need for excavation. Building insulation can also be installed in a crawl space. The crawl space can provide a protective buffer between the damp ground and the wooden parts of a home and, with adequate sealing, help with radon mitigation. Crawl spaces are also sometimes used for storage of items such as canned goods that are not particularly susceptible to destruction by mildew or unstable temperatures. A crawl space foundation can be used to elevate the lowest floors of residential buildings located in Special Flood Hazard Areas above the Base Flood Elevation. The Federal Emergency Management Agency recommends that the floor of the crawlspace be at or above the lowest grade adjacent to the building. Disadvantages Crawl spaces are not usually an option in cold regions, such as the northern United States, where a full basement is needed to get the foundation below the frost line. Another downside of crawl spaces compared to basements is that they offer less protection against earthquakes, tornadoes, and hurricanes. Crawl spaces also tend to be more expensive than slab foundations. Problems with crawl spaces, such as leaks, may not be noticed as quickly as problems with basements, since people typically do not go into their crawl space as often. 
A crawl space also may not be as well-suited to a sloped lot as a basement. HVAC equipment in unconditioned crawl spaces tends not to operate as efficiently as it would in a conditioned space such as a basement. Designs Crawl spaces can be actively or passively vented, or closed. An advantage of a vented crawl space is that harmful gases such as radon or carbon monoxide (e.g. from gas furnaces or water heaters) can escape or be diluted before they can enter the living space. However, in regions with a humid climate, vents to the outside can also allow moist air to come in, which can then condense if temperatures (e.g. on cooler surfaces such as ductwork) drop below the dew point, creating a damp environment that is hospitable to indoor mold growth as well as infestations by rodents and insects, possibly including wood-damaging ones such as termites or carpenter ants. Even without condensation, relative humidity above 80% can support mold growth and rot wooden structural materials such as floor joists. Humidity in some sealed crawl spaces is controlled using a dehumidifier. Encapsulation is sometimes used to prevent the passage of air from the crawl space to the living environment, to save energy and improve indoor air quality, since air in the crawl space might otherwise tend to rise due to the stack effect. Encapsulation involves adding a vapor barrier to the floor, sealing off all openings to the outdoors, adding thermal insulation to the walls, and sealing off any remaining gaps and cracks (such as plumbing and wiring penetrations) between the crawl space and the floor of the home. A 2005 U.S. 
Department of Energy study of homes in the southeastern United States found that closed crawl spaces with sealed foundation wall vents, sealed polyethylene film liners and various insulation and drying strategies had significantly reduced space conditioning energy use compared to traditional wall-vented crawl spaces with perimeter wall vents and unsealed polyethylene film covering the ground surface. As a further encapsulation measure, crawl space access doors are sometimes located inside the home, or an airtight, insulated access door is built in the perimeter wall. A crawl space can be susceptible to flooding, a risk that is sometimes mitigated by such measures as using rain drainage such as rain gutters to conduct rainwater away from the house and sloping the earth away from the house. Crawl space wall materials may include, e.g., solid concrete or concrete masonry units. See also Cockloft, sometimes mistakenly called a "crawl space" References Construction Architectural elements
Crawl space
[ "Technology", "Engineering" ]
960
[ "Building engineering", "Construction", "Architectural elements", "Components", "Architecture" ]
9,313,361
https://en.wikipedia.org/wiki/NASBA%20%28molecular%20biology%29
Nucleic acid sequence-based amplification, commonly referred to as NASBA, is a method in molecular biology which is used to produce multiple copies of single stranded RNA. NASBA is a two-step process that takes RNA and anneals specially designed primers, then utilizes an enzyme cocktail to amplify it. Background Nucleic acid amplification is a technique used to produce several copies of a specific segment of RNA/DNA. Amplified RNA and DNA can be used for a variety of applications, such as genotyping, sequencing, and detection of bacteria or viruses. There are two different types of amplification, non-isothermal and isothermal. Non-isothermal amplification produces multiple copies of RNA/DNA through reiterative cycling between different temperatures. Isothermal amplification produces multiple copies of RNA/DNA at a constant reaction temperature. NASBA takes single stranded RNA, anneals primers to it at 65°C, and then amplifies it at 41°C to produce multiple copies of single stranded RNA. In order for successful amplification to occur, an enzyme cocktail containing avian myeloblastosis virus reverse transcriptase (AMV-RT), RNase H, and RNA polymerase is used. AMV-RT synthesizes a complementary DNA strand (cDNA) from the RNA template once the primer is annealed. RNase H then degrades the RNA template and the other primer binds to the cDNA to form double stranded DNA, which RNA polymerase uses to synthesize copies of RNA. One key aspect of NASBA is that the starting material and end product is always single stranded RNA. That being said, it can be used to amplify DNA, but the DNA must first be transcribed into RNA in order for successful amplification to occur. Loop-mediated isothermal amplification (LAMP) is another isothermal amplification technique. History NASBA was developed by J Compton in 1991, who defined it as "a primer-dependent technology that can be used for the continuous amplification of nucleic acids in a single mixture at one temperature". 
Immediately after the invention of NASBA, it was used for the rapid diagnosis and quantification of HIV-1 in patient sera. Although RNA can also be amplified by PCR using a reverse transcriptase (in order to synthesize a complementary DNA strand as a template), NASBA's main advantage is that it works under isothermal conditions – usually at a constant temperature of 41 °C or two different temperatures, depending on the primers and enzymes used. Even when two different temperatures are applied, it is still considered isothermal, because it does not cycle back and forth between those temperatures. NASBA can be used in medical diagnostics as an alternative to PCR that is quicker and more sensitive in some circumstances. Procedure Explained briefly, NASBA works as follows: The RNA template is added to the reaction mixture, and the first primer, with the T7 promoter region on its 5' end, attaches to its complementary site at the 3' end of the template. Reverse transcriptase synthesizes the opposite complementary DNA strand, extending the 3' end of the primer and moving upstream along the RNA template. RNAse H destroys the RNA template from the DNA-RNA compound (RNAse H only destroys RNA in RNA-DNA hybrids, but not single-stranded RNA). The second primer attaches to the 5' end of the (antisense) DNA strand. Reverse transcriptase again synthesizes another DNA strand from the attached primer, resulting in double stranded DNA. T7 RNA polymerase binds to the promoter region on the double strand. Since T7 RNA polymerase reads its template only in the 3' to 5' direction, the sense DNA strand is transcribed and an anti-sense RNA is produced. This is repeated, and the polymerase continuously produces complementary RNA strands of this template, which results in amplification. Now a cyclic phase can begin, similar to the previous steps. Here, however, the second primer first binds to the (-)RNA. The reverse transcriptase now produces a (+)cDNA/(-)RNA duplex. 
RNAse H again degrades the RNA, and the first primer binds to the now single stranded (+)cDNA. The reverse transcriptase now produces the complementary (-)DNA, creating a dsDNA duplex. As in the earlier T7 polymerase step, the polymerase binds to the promoter region to produce (-)RNA, and the cycle is complete. Clinical applications The NASBA technique has been used to develop rapid diagnostic tests for several pathogenic viruses with single-stranded RNA genomes, e.g. influenza A, zika virus, foot-and-mouth disease virus, severe acute respiratory syndrome (SARS)-associated coronavirus, human bocavirus (HBoV), and also parasites like Trypanosoma brucei. Recently, NASBA reactions with fluorescence, dipstick, and next generation sequencing readouts have been developed for COVID-19 diagnosis. See also Real-time polymerase chain reaction References Amplifiers NASBA
NASBA (molecular biology)
[ "Chemistry", "Technology", "Biology" ]
1,049
[ "Biochemistry", "Amplifiers", "Molecular biology" ]
9,313,819
https://en.wikipedia.org/wiki/Paul%20Tholey
Paul Tholey (14 March 1937 – 7 December 1998) was a German Gestalt psychologist, and a professor of psychology and sports science at the University of Frankfurt and the Technical University of Braunschweig. Tholey started the study of oneirology in an attempt to prove that dreams occur in color. Given the unreliability of dream memories and following the critical realism approach, he used lucid dreaming as an epistemological tool for investigating dreams, in a similar fashion to Stephen LaBerge. He devised the reflection technique for inducing lucid dreams, consisting in continuously suspecting waking life to be a dream, in the hope that such a habit would manifest itself during dreams. Tholey's research included the examination of the cognitive abilities of dreamers, as well as the cognitive abilities of dream figures. In the latter study, nine trained lucid dreamers were directed to set other dream figures arithmetic and verbal tasks during lucid dreaming (Cognitive abilities of dream figures in lucid dreams, 1983). Dream figures who agreed to perform the tasks proved more successful in verbal than in arithmetic tasks. Bibliography Techniques for inducing and manipulating lucid dreams. Perceptual and Motor Skills, 57, 1983, pp 79–90. Relation between dream content and eye movements tested by lucid dreams. Perceptual and Motor Skills, 56, 1983, pp 875–878. Cognitive abilities of dream figures in lucid dreams. Lucidity Letter, 71, 1983. Overview of the Development of Lucid Dream Research in Germany . Lecture at the VI. International Conference of the Association for the Study of Dreams in London 1989. First published in: Lucidity Letter, 8(2) (1989), pp 1–30. Conversation Between Stephen LaBerge and Tholey in July 1989. B. Holzinger (ed.). Lucidity, 10(1&2), 1991, pp 62–71. A complete bibliography of articles in German , some of which have been translated into French, English, Hungarian. Gestalttheorie von Sport, Klartraum und Bewusstsein. 
Ausgewählte Arbeiten, herausgegeben und eingeleitet von Gerhard Stemberger ("Gestalt theory of sports, lucid dreaming, and consciousness. Selected works, edited and introduced by Gerhard Stemberger"). Wien: Krammer 2018. Full text of Contents and Introduction (in German). External links Paul Tholey's bio and online articles Another biography of Paul Tholey Dream researchers Gestalt psychologists Sleep researchers Oneirologists Lucid dreams 1937 births 1998 deaths 20th-century German psychologists
Paul Tholey
[ "Biology" ]
550
[ "Sleep researchers", "Behavior", "Sleep" ]
9,314,644
https://en.wikipedia.org/wiki/Dynamic%20problem%20%28algorithms%29
Dynamic problems in computational complexity theory are problems stated in terms of changing input data. In its most general form, a problem in this category is usually stated as follows: Given a class of input objects, find efficient algorithms and data structures to answer a certain query about a set of input objects each time the input data is modified, i.e., objects are inserted or deleted. Problems in this class have the following measures of complexity: Space: the amount of memory space required to store the data structure; Initialization time: the time required for the initial construction of the data structure; Insertion time: the time required for the update of the data structure when one more input element is added; Deletion time: the time required for the update of the data structure when an input element is deleted; Query time: the time required to answer a query; and other operations specific to the problem in question. The overall set of computations for a dynamic problem is called a dynamic algorithm. Many algorithmic problems stated in terms of fixed input data (called static problems in this context and solved by static algorithms) have meaningful dynamic versions. Special cases Incremental algorithms, or online algorithms, are algorithms in which only additions of elements are allowed, possibly starting from empty/trivial input data. Decremental algorithms are algorithms in which only deletions of elements are allowed, starting with the initialization of a full data structure. If both additions and deletions are allowed, the algorithm is sometimes called fully dynamic. Examples Maximal element Static problem For a set of N numbers find the maximal one. The problem may be solved in O(N) time. Dynamic problem For an initial set of N numbers, dynamically maintain the maximal one when insertions and deletions are allowed. A well-known solution for this problem is using a self-balancing binary search tree. 
It takes space O(N), may be initially constructed in time O(N log N) and provides insertion, deletion and query times in O(log N). The priority queue maintenance problem It is a simplified version of this dynamic problem, where one requires to delete only the maximal element. This version may do with simpler data structures. Graphs Given a graph, maintain its parameters, such as connectivity, maximal degree, shortest paths, etc., when insertion and deletion of its edges are allowed. See also Dynamization Dynamic connectivity Kinetic data structure References Computational complexity theory
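The article's well-known solution is a self-balancing binary search tree; since Python's standard library provides none, the sketch below substitutes a max-heap with lazy deletion, which achieves the same O(N) initial construction and O(log N) amortized insertion, deletion, and query bounds. The class name and interface are illustrative, not taken from the article.

```python
import heapq

class DynamicMax:
    """Dynamically maintain the maximum of a multiset of numbers."""

    def __init__(self, items=()):
        # heapq is a min-heap, so store negated values to get a max-heap.
        self._heap = [-x for x in items]
        heapq.heapify(self._heap)          # O(N) initialization
        self._pending = {}                 # value -> number of lazy deletions

    def insert(self, x):                   # O(log N)
        heapq.heappush(self._heap, -x)

    def delete(self, x):                   # O(1) now, paid for during queries
        self._pending[x] = self._pending.get(x, 0) + 1

    def query_max(self):                   # O(log N) amortized
        while self._heap:
            top = -self._heap[0]
            if self._pending.get(top, 0):
                self._pending[top] -= 1    # discard a lazily deleted element
                heapq.heappop(self._heap)
            else:
                return top
        raise ValueError("maximum of an empty set")

s = DynamicMax([3, 1, 4, 1, 5])
print(s.query_max())   # 5
s.delete(5)
print(s.query_max())   # 4
s.insert(9)
print(s.query_max())   # 9
```

For the simplified priority-queue maintenance problem, where only the maximal element is ever deleted, the lazy-deletion bookkeeping is unnecessary and a plain heap suffices.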
Dynamic problem (algorithms)
[ "Technology" ]
486
[ "Computing stubs", "Computer science", "Computer science stubs" ]
9,314,943
https://en.wikipedia.org/wiki/Tinbergen%27s%20four%20questions
Tinbergen's four questions, named after 20th century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. These are also commonly referred to as levels of analysis. The schema suggests that an integrative understanding of behaviour must include both ultimate (evolutionary) explanations, in particular behavioural adaptive functions and phylogenetic history, and proximate explanations, in particular underlying physiological mechanisms and ontogenetic/developmental history. Four categories of questions and explanations When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem. Evolutionary (ultimate) explanations First question: Function (adaptation) Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive. The literature conceptualizes the relationship between function and evolution in two ways. 
On the one hand, function and evolution are often presented as separate and distinct explanations of behaviour. On the other hand, the common definition of adaptation is a central concept in evolution: a trait that was functional to the reproductive success of the organism and that is thus now present due to being selected for; that is, function and evolution are inseparable. However, a trait can have a current function that is adaptive without being an adaptation in this sense, if for instance the environment has changed. Imagine an environment in which having a small body suddenly conferred benefit on an organism when previously body size had had no effect on survival. A small body's function in the environment would then be adaptive, but it would not become an adaptation until enough generations had passed in which small bodies were advantageous to reproduction for small bodies to be selected for. Given this, it is best to understand that presently functional traits might not all have been produced by natural selection. The term "function" is preferable to "adaptation", because adaptation is often construed as implying that it was selected for due to past function. This corresponds to Aristotle's final cause. Second question: Phylogeny (evolution) Evolution captures both the history of an organism via its phylogeny, and the history of natural selection working on function to produce adaptations. There are several reasons why natural selection may fail to achieve optimal design (Mayr 2001:140–143; Buss et al. 1998). One entails random processes such as mutation and environmental events acting on small populations. Another entails the constraints resulting from early evolutionary development. Each organism harbors traits, both anatomical and behavioural, of previous phylogenetic stages, since many traits are retained as species evolve. 
Reconstructing the phylogeny of a species often makes it possible to understand the "uniqueness" of recent characteristics: Earlier phylogenetic stages and (pre-) conditions which persist often also determine the form of more modern characteristics. For instance, the vertebrate eye (including the human eye) has a blind spot, whereas octopus eyes do not. In those two lineages, the eye was originally constructed one way or the other. Once the vertebrate eye was constructed, there were no intermediate forms that were both adaptive and would have enabled it to evolve without a blind spot. It corresponds to Aristotle's formal cause. Proximate explanations Third question: Mechanism (causation) Some prominent classes of proximate causal mechanisms include: The brain: For example, Broca's area, a small section of the human brain, has a critical role in linguistic capability. Hormones: Chemicals used to communicate among cells of an individual organism. Testosterone, for instance, stimulates aggressive behaviour in a number of species. Pheromones: Chemicals used to communicate among members of the same species. Some species (e.g., dogs and some moths) use pheromones to attract mates. In examining living organisms, biologists are confronted with diverse levels of complexity (e.g. chemical, physiological, psychological, social). They therefore investigate causal and functional relations within and between these levels. A biochemist might examine, for instance, the influence of social and ecological conditions on the release of certain neurotransmitters and hormones, and the effects of such releases on behaviour, e.g. stress during birth has a tocolytic (contraction-suppressing) effect. However, awareness of neurotransmitters and the structure of neurons is not by itself enough to understand higher levels of neuroanatomic structure or behaviour: "The whole is more than the sum of its parts." All levels must be considered as being equally important: cf. 
transdisciplinarity, Nicolai Hartmann's "Laws about the Levels of Complexity." It corresponds to Aristotle's efficient cause. Fourth question: Ontogeny (development) Ontogeny is the process of development of an individual organism from the zygote through the embryo to the adult form. In the latter half of the twentieth century, social scientists debated whether human behaviour was the product of nature (genes) or nurture (environment in the developmental period, including culture). An example of interaction (as distinct from the sum of the components) involves familiarity from childhood. In a number of species, individuals prefer to associate with familiar individuals but prefer to mate with unfamiliar ones (Alcock 2001:85–89, Incest taboo, Incest). By inference, genes affecting living together interact with the environment differently from genes affecting mating behaviour. A simple example of interaction involves plants: Some plants grow toward the light (phototropism) and some away from gravity (gravitropism). Many forms of developmental learning have a critical period, for instance, for imprinting among geese and language acquisition among humans. In such cases, genes determine the timing of the environmental impact. A related concept is labeled "biased learning" (Alcock 2001:101–103) and "prepared learning" (Wilson, 1998:86–87). For instance, after eating food that subsequently made them sick, rats are predisposed to associate that food with smell, not sound (Alcock 2001:101–103). Many primate species learn to fear snakes with little experience (Wilson, 1998:86–87). See developmental biology and developmental psychology. It corresponds to Aristotle's material cause. Causal relationships The figure shows the causal relationships among the categories of explanations. The left-hand side represents the evolutionary explanations at the species level; the right-hand side represents the proximate explanations at the individual level. 
In the middle are those processes' end products—genes (i.e., genome) and behaviour, both of which can be analyzed at both levels. Evolution, which is determined by both function and phylogeny, results in the genes of a population. The genes of an individual interact with its developmental environment, resulting in mechanisms, such as a nervous system. A mechanism (which is also an end-product in its own right) interacts with the individual's immediate environment, resulting in its behaviour. Here we return to the population level. Over many generations, the success of the species' behaviour in its ancestral environment—or more technically, the environment of evolutionary adaptedness (EEA)—may result in evolution as measured by a change in its genes. In sum, there are two processes—one at the population level and one at the individual level—which are influenced by environments in three time periods. Examples Vision Four ways of explaining visual perception: Function: To find food and avoid danger. Phylogeny: The vertebrate eye initially developed with a blind spot, but the lack of adaptive intermediate forms prevented the loss of the blind spot. Mechanism: The lens of the eye focuses light on the retina. Development: Neurons need the stimulation of light to wire the eye to the brain (Moore, 2001:98–99). Westermarck effect Four ways of explaining the Westermarck effect, the lack of sexual interest in one's siblings (Wilson, 1998:189–196): Function: To discourage inbreeding, which decreases the number of viable offspring. Phylogeny: Found in a number of mammalian species, suggesting initial evolution tens of millions of years ago. Mechanism: Little is known about the neuromechanism. Ontogeny: Results from familiarity with another individual early in life, especially in the first 30 months for humans. The effect is manifested in nonrelatives raised together, for instance, in kibbutzim. 
Romantic love Four ways of explaining romantic love have been used to provide a comprehensive biological definition (Bode & Kushnick, 2021): Function: Mate choice, courtship, sex, pair-bonding. Phylogeny: Evolved by co-opting mother-infant bonding mechanisms sometime in the recent evolutionary history of humans. Mechanisms: Social, psychological mate choice, genetic, neurobiological, and endocrinological mechanisms cause romantic love. Ontogeny: Romantic love can first manifest in childhood, manifests with all its characteristics following puberty, but can manifest across the lifespan. Sleep Sleep has been described using Tinbergen's four questions as a framework (Bode & Kuula, 2021): Function: Energy restoration, metabolic regulation, thermoregulation, boosting immune system, detoxification, brain maturation, circuit reorganization, synaptic optimization, avoiding danger. Phylogeny: Sleep exists in invertebrates, lower vertebrates, and higher vertebrates. NREM and REM sleep exist in eutheria, marsupialiformes, and also evolved in birds. Mechanisms: Mechanisms regulate wakefulness, sleep onset, and sleep. Specific mechanisms involve neurotransmitters, genes, neural structures, and the circadian rhythm. Ontogeny: Sleep manifests differently in babies, infants, children, adolescents, adults, and older adults. Differences include the stages of sleep, sleep duration, and sex differences. Use of the four-question schema as "periodic table" Konrad Lorenz, Julian Huxley and Niko Tinbergen were familiar with both conceptual categories (i.e. the central questions of biological research: 1.–4. and the levels of inquiry: a.–g.); the tabulation was made by Gerhard Medicus. The tabulated schema is used as the central organizing device in many animal behaviour, ethology, behavioural ecology and evolutionary psychology textbooks (e.g., Alcock, 2001). 
One advantage of this organizational system, which might be called the "periodic table of life sciences," is that it highlights gaps in knowledge, analogous to the role played by the periodic table of elements in the early years of chemistry. This "biopsychosocial" framework clarifies and classifies the associations between the various levels of the natural and social sciences, and it helps to integrate the social and natural sciences into a "tree of knowledge" (see also Nicolai Hartmann's "Laws about the Levels of Complexity"). Especially for the social sciences, this model helps to provide an integrative, foundational model for interdisciplinary collaboration, teaching and research (see The Four Central Questions of Biological Research Using Ethology as an Example – PDF). References Sources Alcock, John (2001) Animal Behaviour: An Evolutionary Approach, Sinauer, 7th edition. Buss, David M., Martie G. Haselton, Todd K. Shackelford, et al. (1998) "Adaptations, Exaptations, and Spandrels," American Psychologist, 53:533–548. http://www.sscnet.ucla.edu/comm/haselton/webdocs/spandrels.html Buss, David M. (2004) Evolutionary Psychology: The New Science of the Mind, Pearson Education, 2nd edition. Cartwright, John (2000) Evolution and Human Behaviour, MIT Press. Krebs, John R., Davies N.B. (1993) An Introduction to Behavioural Ecology, Blackwell Publishing. Lorenz, Konrad (1937) Biologische Fragestellungen in der Tierpsychologie (i.e., Biological Questions in Animal Psychology). Zeitschrift für Tierpsychologie, 1: 24–32. Mayr, Ernst (2001) What Evolution Is, Basic Books. Medicus, Gerhard (2017) Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB. Nesse, Randolph M (2013) "Tinbergen's Four Questions, Organized," Trends in Ecology and Evolution, 28:681–682. Moore, David S. 
(2001) The Dependent Gene: The Fallacy of 'Nature vs. Nurture', Henry Holt. Pinker, Steven (1994) The Language Instinct: How the Mind Creates Language, Harper Perennial. Tinbergen, Niko (1963) "On Aims and Methods of Ethology," Zeitschrift für Tierpsychologie, 20: 410–433. Wilson, Edward O. (1998) Consilience: The Unity of Knowledge, Vintage Books. External links Diagrams The Four Areas of Biology pdf The Four Areas and Levels of Inquiry pdf Tinbergen's four questions within the "Fundamental Theory of Human Sciences" ppt Tinbergen's Four Questions, organized pdf Derivative works On aims and methods of cognitive ethology (pdf) by Jamieson and Bekoff. Behavioral ecology Ethology Evolutionary psychology Sociobiology
Tinbergen's four questions
[ "Biology" ]
2,997
[ "Behavior", "Behavioral ecology", "Behavioural sciences", "Sociobiology", "Ethology" ]
9,315,181
https://en.wikipedia.org/wiki/Amanita%20velosa
Amanita velosa, commonly known as the springtime amanita or bittersweet orange ringless amanita, is a species of agaric found in California, as well as southern Oregon and Baja California. Although a prized edible mushroom, it bears similarities to some deadly poisonous species. Description It is part of Amanita section Vaginatae, and like other species in this group, it is characterized by its lack of an annulus, its striate pileus margin, its thick universal veil remnants comprising the veil, volva, and pileus patches, its inamyloid spores, and its lack of characteristic Amanita toxins such as amatoxins and ibotenic acid. It is distinguished from other species in section Vaginatae by its lack of any kind of umbo on its pileus, its short pileus striae, and its distinct pale orange to pale salmon coloration when young. Its coloration can become more brownish with age, and entirely white specimens are occasionally seen as well. Like many other Amanita, the gills are white, but occasionally have a distinct pinkish or orangish tint. In older specimens, the odor can become pungent and fishy. The cap is 5–15 cm wide, convex then plane, with an orange-pink or salmon-like color; it usually has a white universal veil patch. The gills are adnexed to free, close and white (or pinkish with age). The stalk is 5–15 cm long, and 1–3 cm wide. The volva is white, saclike and sheathes the stalk base. The spores are white, smooth, elliptical, and inamyloid. Similar species The deadly A. ocreata and occasionally A. phalloides are found in the same habitat at the same time of year as A. velosa, and can often be found in close proximity. A. ocreata and A. phalloides have thin universal veil remnants, a sac-like volva, an annulus, a non-striate pileus margin, and a pileus that is a different color than A. velosa. These differences can fade as the fruiting body ages, making it important to collect only specimens that have all of their identifying characteristics intact. Distribution and habitat A. 
velosa is a late-season mushroom in its range of occurrence, being primarily found in the coastal regions of California, Oregon, and Baja California, from midwinter up until the end of the California rainy season. Its favored habitat is the ecotone between oak (particularly coast live oak) woodlands and open grassland, where it lives in an ectomycorrhizal relationship with young oak trees. The species is also reported to have been found in association with aspen and conifers in the Sierra Nevada, with one report of it being found growing with spruce in the eastern United States' Great Smoky Mountains National Park. Edibility It is considered to be an outstanding edible species with a distinctively sweet or nutty flavor, but great caution must be exercised to properly identify it due to its similarity to deadly species. References External links Amanita velosa Mushroom Observer: Amanita velosa Edible fungi Fungi of California Fungi of Oregon Fungi of Mexico Fungi described in 1895 Taxa named by Charles Horton Peck Fungi without expected TNC conservation status Fungus species Edible fungi of California
Amanita velosa
[ "Biology" ]
697
[ "Fungi", "Fungus species" ]
9,315,192
https://en.wikipedia.org/wiki/Fractal%20sequence
In mathematics, a fractal sequence is one that contains itself as a proper subsequence. An example is 1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, ... If the first occurrence of each n is deleted, the remaining sequence is identical to the original. The process can be repeated indefinitely, so the original sequence contains not just one copy of itself, but infinitely many. Definition The precise definition of fractal sequence depends on a preliminary definition: a sequence x = (xn) is an infinitive sequence if for every i, (F1) xn = i for infinitely many n. Let a(i,j) be the jth index n for which xn = i. An infinitive sequence x is a fractal sequence if two additional conditions hold: (F2) if i+1 = xn, then there exists m < n such that xm = i; (F3) if h < i then for every j there is exactly one k such that a(i,j) < a(h,k) < a(i,j+1). According to (F2), the first occurrence of each i > 1 in x must be preceded at least once by each of the numbers 1, 2, ..., i-1, and according to (F3), between consecutive occurrences of i in x, each h less than i occurs exactly once. Example Suppose θ is a positive irrational number. Let S(θ) be the set of numbers c + dθ, where c and d are positive integers, and let cn(θ) + θdn(θ) be the sequence obtained by arranging the numbers in S(θ) in increasing order. The sequence cn(θ) is the signature of θ, and it is a fractal sequence. For example, the signature of the golden ratio (i.e., θ = (1 + sqrt(5))/2) begins with 1, 2, 1, 3, 2, 4, 1, 3, 5, 2, 4, 1, 6, 3, 5, 2, 7, 4, 1, 6, 3, 8, 5, ... and the signature of 1/θ = θ - 1 begins with 1, 1, 2, 1, 2, 1, 3, 2, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 2, 4, 1, 3, 5, ... These are sequences and in the On-Line Encyclopedia of Integer Sequences, where further examples from a variety of number-theoretic and combinatorial settings are given. See also Thue-Morse Sequence External links On-Line Encyclopedia of Integer Sequences: References Fractals Integer sequences
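Both the signature construction and the self-containment property can be checked numerically. The sketch below is illustrative: the function names are not from the article, and the enumeration bound is a heuristic chosen large enough for a short prefix.

```python
import math

def signature(theta, count):
    # Sort the numbers c + d*theta (c, d positive integers) in increasing
    # order and read off the sequence of c's, per the definition above.
    limit = count + 2                      # heuristic bound on c and d
    items = sorted((c + d * theta, c)
                   for c in range(1, limit)
                   for d in range(1, limit))
    return [c for _, c in items[:count]]

def delete_first_occurrences(seq):
    # Remove the first occurrence of each value; for a fractal sequence
    # the remainder is again a prefix of the sequence itself.
    seen, rest = set(), []
    for x in seq:
        if x in seen:
            rest.append(x)
        else:
            seen.add(x)
    return rest

phi = (1 + math.sqrt(5)) / 2
sig = signature(phi, 23)
print(sig)                                       # matches the prefix in the text
trimmed = delete_first_occurrences(sig)
print(trimmed == signature(phi, len(trimmed)))   # True: contains itself
```

Deleting the first occurrence of each value from the length-23 prefix leaves a length-15 list that coincides with the first 15 terms of the signature, illustrating the self-similarity on a finite prefix.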
Fractal sequence
[ "Mathematics" ]
587
[ "Sequences and series", "Functions and mappings", "Integer sequences", "Mathematical structures", "Mathematical analysis", "Recreational mathematics", "Mathematical objects", "Fractals", "Combinatorics", "Mathematical relations", "Numbers", "Number theory" ]
9,315,206
https://en.wikipedia.org/wiki/Amanita%20ocreata
Amanita ocreata, commonly known as the death angel, destroying angel, angel of death or more precisely western North American destroying angel, is a deadly poisonous basidiomycete fungus, one of many in the genus Amanita. The large fruiting bodies (the mushrooms) generally appear in spring; the cap may be white or ochre and often develops a brownish centre, while the stipe, ring, gill and volva are all white. A. ocreata resembles several edible species commonly consumed by humans, increasing the risk of accidental poisoning. Mature fruiting bodies can be confused with the edible A. velosa (springtime amanita), A. lanei or Volvopluteus gloiocephalus, while immature specimens may be difficult to distinguish from edible Agaricus mushrooms or puffballs. The species occurs in the Pacific Northwest and California Floristic Provinces of North America, associating with oak trees. Similar in toxicity to the death cap (A. phalloides) and destroying angels of Europe (A. virosa) and eastern North America (A. bisporigera), it is a potentially deadly fungus responsible for several poisonings in California. Its principal toxic constituent, α-Amanitin, damages the liver and kidneys, often fatally, and has no known antidote, though silybin and N-acetylcysteine show promise. The initial symptoms are gastrointestinal and include abdominal pain, diarrhea and vomiting. These subside temporarily after 2–3 days, though ongoing damage to internal organs during this time is common; symptoms of jaundice, diarrhea, delirium, seizures, and coma may follow with death from liver failure 6–16 days post ingestion. Taxonomy Amanita ocreata was first described by American mycologist Charles Horton Peck in 1909 from material collected by Charles Fuller Baker in Claremont, California. The specific epithet is derived from the Latin ocrěātus 'wearing greaves' from ocrea 'greave', referring to its loose, baggy volva. Amanita bivolvata is a botanical synonym. 
The mushroom belongs to the same section (Phalloideae) and genus (Amanita) as several deadly poisonous fungi including the death cap (A. phalloides) and several all-white species of Amanita known as "destroying angels": A. bisporigera of eastern North America, and the European A. virosa. "Death angel" is used as an alternate common name. Description Amanita ocreata is generally stouter than the other fungi termed destroying angels. It first appears as a white egg-shaped object covered with a universal veil. As it grows, the mushroom breaks free, though there may rarely be ragged patches of veil left at the cap edges. The cap is initially hemispherical, before becoming more convex and flattening, sometimes irregularly. This may result in undulations in the cap, which may be between in diameter. The colour varies from white, through yellowish-white to shades of ochre, sometimes with a brownish centre. Occasionally parts of the fruiting bodies may have pinkish tones. The rest of the fungus below the cap is white. The crowded gills are free to narrowly adnate. The stipe ranges from in height and is about thick, bearing a thin white membranous ring until old age. The volva is thin, smooth and sac-like, although may be quite extensive and contain almost half the stipe. The spore print is white, and the subglobose to ovoid to subellipsoid, amyloid spores are 9–14 x 7–10 μm viewed under a microscope. There is typically no smell, though some fruiting bodies may have a slight odour, described as that of bleach or chlorine, dead fish or iodine. Like other destroying angels, the flesh stains yellow when treated with potassium hydroxide. Similar species This fungus resembles the edible mushrooms Agaricus arvensis and A. campestris, and the puffballs (Lycoperdon spp.) before the caps have opened and the gills have become visible, so those collecting immature fungi run the risk of confusing the varieties. 
It also resembles and grows in the same areas as the edible and prized Amanita velosa, which can be distinguished from A. ocreata by its lack of ring, striate cap margin and thick universal veil remnants on its cap. The edible Amanita calyptroderma lacks a ring and is more likely to have veil patches remaining on its cap, which is generally darker. Volvariella speciosa has pink spores and no ring or volva. Distribution and habitat Appearing from January to April, A. ocreata occurs later in the year than other amanitas except A. calyptroderma. It is found in mixed woodland on the Pacific coast of North America, from Washington south through California to Baja California in Mexico. It may feasibly occur on Vancouver Island in British Columbia, though this has never been confirmed. It forms ectomycorrhizal relationships and is found in association with coast live oak (Quercus agrifolia), as well as hazel (Corylus spp.). In Oregon and Washington, it may also be associated with the Garry oak (Quercus garryana). Toxicity A. ocreata is highly toxic, and has been responsible for mushroom poisonings in western North America, particularly in the spring. It contains highly toxic amatoxins, as well as phallotoxins, a feature shared with the closely related death cap (A. phalloides), half a cap of which can be enough to kill a human, and other species known as destroying angels. There is some evidence it may be the most toxic of all the North American species of section Phalloideae, as a higher proportion of people consuming it had organ damage and 40% perished. Dogs have also been known to consume this fungus in California with fatal results. Amatoxins consist of at least eight compounds with a similar structure, that of eight amino-acid rings; of those found in A. ocreata, α-amanitin is the most prevalent and, along with β-amanitin, is likely to be responsible for the toxic effects. 
The major toxic mechanism is the inhibition of RNA polymerase II, a vital enzyme in the synthesis of messenger RNA (mRNA), microRNA, and small nuclear RNA (snRNA). Without mRNA, essential protein synthesis and hence cell metabolism stop and the cell dies. The liver is the principal organ affected, as it is the first organ encountered after absorption from the gastrointestinal tract, though other organs, especially the kidneys, are susceptible to the toxins. The phallotoxins consist of at least seven compounds, all of which have seven similar peptide rings. Although they are highly toxic to liver cells, phallotoxins have since been found to contribute little to the destroying angel's toxicity, as they are not absorbed through the gut. Furthermore, one phallotoxin, phalloidin, is also found in the edible (and sought-after) blusher (Amanita rubescens). Signs and symptoms Signs and symptoms of poisoning by A. ocreata are initially gastrointestinal in nature and include colicky abdominal pain, with watery diarrhea and vomiting which may lead to dehydration and, in severe cases, hypotension, tachycardia, hypoglycemia, and acid-base disturbances. The initial symptoms resolve two to three days after ingestion of the fungus. A more serious deterioration signifying liver involvement may then occur: jaundice, diarrhea, delirium, seizures, and coma due to fulminant liver failure and attendant hepatic encephalopathy caused by the accumulation in the blood of substances normally removed by the liver. Kidney failure (either secondary to severe hepatitis or caused by direct toxic renal damage) and coagulopathy may appear during this stage. Life-threatening complications include increased intracranial pressure, intracranial hemorrhage, sepsis, pancreatitis, acute kidney injury, and cardiac arrest. Death generally occurs six to sixteen days after the poisoning. Treatment Consumption of A. ocreata is a medical emergency that requires hospitalization. 
There are four main categories of therapy for poisoning: preliminary medical care, supportive measures, specific treatments, and liver transplantation. Preliminary care consists of gastric decontamination with either activated carbon or gastric lavage. However, due to the delay between ingestion and the first symptoms of poisoning, it is commonplace for patients to arrive for treatment long after ingestion, potentially reducing the efficacy of these interventions. Supportive measures are directed towards treating the dehydration which results from fluid loss during the gastrointestinal phase of intoxication and correction of metabolic acidosis, hypoglycemia, electrolyte imbalances, and impaired coagulation. No definitive antidote for amatoxin poisoning is available, but some specific treatments such as intravenous penicillin G have been shown to improve survival. There is some evidence that intravenous silibinin, an extract from the blessed milk thistle (Silybum marianum), may be beneficial in reducing the effects of amatoxins, preventing their uptake by hepatocytes and thereby protecting undamaged hepatic tissue. In patients developing liver failure, a liver transplant is often the only option to prevent death. Liver transplants have become a well-established option in amatoxin poisoning. This is a complicated issue, however, as transplants themselves may have significant complications and mortality; patients require long-term immunosuppression to maintain the transplant. Evidence suggests that, although survival rates have improved with modern medical treatment, in patients with moderate to severe poisoning up to half of those who did recover suffered permanent liver damage. However, a follow-up study has shown that most survivors recover completely without any sequelae if treated within 36 hours of the mushroom's ingestion. 
See also List of Amanita species List of deadly fungi References Works cited External links Key to species of Amanita Section Phalloideae from North and Central America - Amanita studies website California Fungi—Amanita ocreata ocreata Fungi of North America Hepatotoxins Deadly fungi Poisonous fungi Fungi described in 1909 Taxa named by Charles Horton Peck Fungus species
Amanita ocreata
[ "Biology", "Environmental_science" ]
2,200
[ "Poisonous fungi", "Fungi", "Toxicology", "Fungus species" ]
9,315,395
https://en.wikipedia.org/wiki/Parametricity
In programming language theory, parametricity is an abstract uniformity property enjoyed by parametrically polymorphic functions, which captures the intuition that all instances of a polymorphic function act the same way. Idea Consider this example, based on a set X and the type T(X) = [X → X] of functions from X to itself. The higher-order function twiceX : T(X) → T(X) given by twiceX(f) = f ∘ f, is intuitively independent of the set X. The family of all such functions twiceX, parametrized by sets X, is called a "parametrically polymorphic function". We simply write twice for the entire family of these functions and write its type as ∀X. T(X) → T(X). The individual functions twiceX are called the components or instances of the polymorphic function. Notice that all the component functions twiceX act "the same way" because they are given by the same rule. Other families of functions obtained by picking one arbitrary function from each T(X) → T(X) would not have such uniformity. They are called "ad hoc polymorphic functions". Parametricity is the abstract property enjoyed by the uniformly acting families such as twice, which distinguishes them from ad hoc families. With an adequate formalization of parametricity, it is possible to prove that the parametrically polymorphic functions of type ∀X. T(X) → T(X) are in one-to-one correspondence with the natural numbers. The function corresponding to the natural number n is given by the rule f ↦ fⁿ, i.e., the polymorphic Church numeral for n. In contrast, the collection of all ad hoc families would be too large to be a set. History The parametricity theorem was originally stated by John C. Reynolds, who called it the abstraction theorem. In his paper "Theorems for free!", Philip Wadler described an application of parametricity to derive theorems about parametrically polymorphic functions based on their types. 
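The twice example above can be made concrete: one rule works at every type, and the parametric inhabitants of ∀X. T(X) → T(X) are exactly the iterates f ↦ fⁿ. A minimal Python sketch (Python stands in for a polymorphic language here; the helper names are ours, not from the literature):

```python
from typing import Callable, TypeVar

A = TypeVar("A")

def twice(f: Callable[[A], A]) -> Callable[[A], A]:
    """One rule for every type A: twice(f) = f ∘ f."""
    return lambda x: f(f(x))

# The same definition instantiates at int and at str:
assert twice(lambda n: n + 1)(0) == 2
assert twice(lambda s: s + "!")("hi") == "hi!!"

def church(n: int) -> Callable[[Callable[[A], A]], Callable[[A], A]]:
    """The Church numeral for n: the iterate f ↦ fⁿ."""
    def apply_n(f: Callable[[A], A]) -> Callable[[A], A]:
        def go(x: A) -> A:
            for _ in range(n):
                x = f(x)
            return x
        return go
    return apply_n

# twice is the Church numeral 2:
assert church(2)(lambda n: n + 1)(0) == twice(lambda n: n + 1)(0)
```

An ad hoc family, by contrast, could return the identity at int but f ∘ f at str; no single typed rule like the above produces such a family.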
Programming language implementation Parametricity is the basis for many program transformations implemented in compilers for the Haskell programming language. These transformations were traditionally thought to be correct in Haskell because of Haskell's non-strict semantics. Despite being a lazy programming language, Haskell does support certain primitive operations—such as the operator seq—that enable so-called "selective strictness", allowing the programmer to force the evaluation of certain expressions. In their paper "Free theorems in the presence of seq", Patricia Johann and Janis Voigtlaender showed that because of the presence of these operations, the general parametricity theorem does not hold for Haskell programs; thus, these transformations are unsound in general. Dependent types See also Parametric polymorphism Non-strict programming language References External links Wadler: Parametricity Programming language topics Type theory Polymorphism (computer science)
Parametricity
[ "Mathematics", "Engineering" ]
608
[ "Polymorphism (computer science)", "Mathematical structures", "Mathematical logic", "Mathematical objects", "Type theory", "Software engineering", "Programming language topics" ]
9,317,055
https://en.wikipedia.org/wiki/Programmed%20fuel%20injection
Programmed Fuel Injection, or PGMFI/PGM-FI, is the name given by Honda to a proprietary digital electronic multi-point injection system for internal combustion engines. It has been available since the early 1980s. This system has been used in motorcycles, automobiles, and outboard motors. History With its origins beginning with the CX500 and CX650 turbocharged motorcycles in 1982 and 1983, respectively, Honda's PGM-FI made its way into their automobiles in the early 1980s with the ER engine equipped City Turbo. The system gained popularity in the late 1980s in their Accord and Prelude models with A20A, A20A3 & A20A4 engines (Honda A engine), and its motorcycles later on. In 1998, Honda built its third motorcycle with multi-point injection: the VFR800FI. Operation The PGM-FI system relies on a piezoelectric sensor to measure intake manifold air pressure, then combines that data with the crankshaft rpm and other sensor readings to compute the air quantity, and interprets the data using performance maps. Fuel is injected intermittently into the inlet ports. The PGM-FI also has a trailing throttle fuel cutoff and a self-diagnosis system. References External links Honda Article on PGM-FI in motorcycles An explanation on how the system works in motorcycles Honda engines Motorcycle engines Fuel injection systems
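The Operation paragraph describes a speed-density strategy: estimate air mass from manifold pressure, rpm and temperature, then meter fuel to match. The sketch below illustrates the general idea with the ideal gas law; all constants and function names are our own illustrative assumptions, not Honda's actual algorithm or calibration data.

```python
# Illustrative speed-density calculation of the kind a MAP-based EFI
# system performs. Constants are generic, not Honda's.
R_AIR = 287.05      # J/(kg*K), specific gas constant for dry air
AFR_STOICH = 14.7   # stoichiometric air-fuel ratio for gasoline

def air_mass_per_cycle(map_kpa, intake_temp_k, cyl_vol_l, vol_eff):
    """Air mass (kg) drawn per intake stroke, via the ideal gas law
    m = p*V*VE / (R*T); VE is the volumetric efficiency looked up
    from an rpm/load performance map in a real system."""
    pressure_pa = map_kpa * 1000.0
    volume_m3 = cyl_vol_l / 1000.0
    return pressure_pa * volume_m3 * vol_eff / (R_AIR * intake_temp_k)

def fuel_mass_per_cycle(map_kpa, intake_temp_k, cyl_vol_l, vol_eff,
                        target_lambda=1.0):
    """Fuel mass (kg) to hit the target lambda (1.0 = stoichiometric)."""
    air = air_mass_per_cycle(map_kpa, intake_temp_k, cyl_vol_l, vol_eff)
    return air / (AFR_STOICH * target_lambda)
```

At 100 kPa, 293 K, a 0.125 L cylinder and a volumetric efficiency of 0.85, this yields roughly 0.13 g of air per intake stroke; a real ECU would further correct the fuel quantity for warm-up, acceleration enrichment and closed-loop feedback.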
Programmed fuel injection
[ "Technology" ]
280
[ "Motorcycle engines", "Engines" ]
9,318,403
https://en.wikipedia.org/wiki/Markov%20brothers%27%20inequality
In mathematics, the Markov brothers' inequality is an inequality, proved in the 1890s by brothers Andrey Markov and Vladimir Markov, two Russian mathematicians. This inequality bounds the maximum of the derivatives of a polynomial on an interval in terms of the maximum of the polynomial. For k = 1 it was proved by Andrey Markov, and for k = 2,3,... by his brother Vladimir Markov. The statement Let P be a polynomial of degree ≤ n. Then for all nonnegative integers k, max_{−1 ≤ x ≤ 1} |P^(k)(x)| ≤ (n²(n² − 1)(n² − 4)⋯(n² − (k − 1)²))/(1·3·5⋯(2k − 1)) · max_{−1 ≤ x ≤ 1} |P(x)|. This inequality is tight, as equality is attained for Chebyshev polynomials of the first kind. Related inequalities Bernstein's inequality (mathematical analysis) Remez inequality Applications Markov's inequality is used to obtain lower bounds in computational complexity theory via the so-called "Polynomial Method". References Theorems in analysis Inequalities
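The k = 1 case (max |P′| ≤ n² max |P| on [−1, 1], with equality for Chebyshev polynomials) can be checked numerically. A plain-Python sketch using the degree-4 Chebyshev polynomial T₄(x) = 8x⁴ − 8x² + 1 (the grid size and helper names are our own choices):

```python
# Check Markov's bound max|P'| <= n^2 * max|P| on [-1, 1] for T_4,
# the extremal polynomial for which the bound is attained.
def t4(x):
    return 8 * x**4 - 8 * x**2 + 1

def t4_prime(x):
    return 32 * x**3 - 16 * x

# Fine grid on [-1, 1], including both endpoints.
xs = [-1.0 + 2.0 * i / 100000 for i in range(100001)]
max_p = max(abs(t4(x)) for x in xs)
max_dp = max(abs(t4_prime(x)) for x in xs)

n = 4
assert abs(max_p - 1.0) < 1e-9          # |T_4| <= 1 on [-1, 1]
assert max_dp <= n**2 * max_p + 1e-9    # Markov's bound for k = 1
assert abs(max_dp - n**2) < 1e-6        # equality, attained at x = +/-1
```

The derivative attains its maximum 16 = n² at the endpoints x = ±1, matching the tightness claim above.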
Markov brothers' inequality
[ "Mathematics" ]
177
[ "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems" ]
9,318,685
https://en.wikipedia.org/wiki/Elementary%20proof
In mathematics, an elementary proof is a mathematical proof that only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. Historically, it was once thought that certain theorems, like the prime number theorem, could only be proved by invoking "higher" mathematical theorems or techniques. However, many of these results have subsequently been reproven using only elementary techniques. While there is generally no consensus as to what counts as elementary, the term is nevertheless a common part of the mathematical jargon. An elementary proof is not necessarily simple, in the sense of being easy to understand or trivial. In fact, some elementary proofs can be quite complicated, especially when a statement of notable importance is involved. Prime number theorem The distinction between elementary and non-elementary proofs has been considered especially important in regard to the prime number theorem. This theorem was first proved in 1896 by Jacques Hadamard and Charles Jean de la Vallée-Poussin using complex analysis. Many mathematicians then attempted to construct elementary proofs of the theorem, without success. G. H. Hardy expressed strong reservations; he considered that the essential "depth" of the result ruled out elementary proofs. However, in 1948, Atle Selberg produced new methods which led him and Paul Erdős to find elementary proofs of the prime number theorem. Friedman's conjecture Harvey Friedman conjectured, "Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in elementary arithmetic." The form of elementary arithmetic referred to in this conjecture can be formalized by a small set of axioms concerning integer arithmetic and mathematical induction. 
For instance, according to this conjecture, Fermat's Last Theorem should have an elementary proof; Wiles's proof of Fermat's Last Theorem is not elementary. However, there are other simple statements about arithmetic such as the existence of iterated exponential functions that cannot be proven in this theory. References Elementary mathematics Mathematical proofs
Elementary proof
[ "Mathematics" ]
440
[ "Elementary mathematics", "nan" ]
9,319,349
https://en.wikipedia.org/wiki/Carpentier%20joint
A carpentier joint is a hinge consisting of several thin metal strips of curved cross section, similar in structure to a retracting steel measuring tape or some retractable radio antennas. It has two configurations: closed and open. The defining property of the joint is that it is self-opening, does not need mechanical elements such as guides, and maintains a certain degree of stiffness when in the open configuration. The hinge is used in antenna deployment, solar arrays and sensor deployment in satellite applications. The hinge locks in the open condition. To fold, the spring strips are subjected to sufficient bending moment to pop the curved section of each strip flat in the centre, and flex along the length from the centre in an elastic curve to the desired angle, where the joint must be held in place until it is to be deployed. Release of the restraining force allows the elasticity of the material and the stored energy of bending to restore the strips to a straight configuration, at which point the sectional curve will pop back and lock the strip straight. Depending on the length of the spring strips and the width of the end-pieces, the hinge may be folded to angles on the order of 180°. The entire folding motion is provided by elastic deformation of the strips; there is no sliding contact surface. References Hardware (mechanical)
Carpentier joint
[ "Physics", "Technology", "Engineering" ]
267
[ "Physical systems", "Machines", "Hardware (mechanical)", "Construction" ]
9,320,596
https://en.wikipedia.org/wiki/Carleman%27s%20inequality
Carleman's inequality is an inequality in mathematics, named after Torsten Carleman, who proved it in 1923 and used it to prove the Denjoy–Carleman theorem on quasi-analytic classes. Statement Let a₁, a₂, a₃, ... be a sequence of non-negative real numbers; then ∑_{n=1}^∞ (a₁a₂⋯aₙ)^{1/n} ≤ e ∑_{n=1}^∞ aₙ. The constant e (Euler's number) in the inequality is optimal, that is, the inequality does not always hold if e is replaced by a smaller number. The inequality is strict (it holds with "<" instead of "≤") if some element in the sequence is non-zero. Integral version Carleman's inequality has an integral version, which states that ∫₀^∞ exp((1/x) ∫₀ˣ ln f(t) dt) dx ≤ e ∫₀^∞ f(x) dx for any f ≥ 0. Carleson's inequality A generalisation, due to Lennart Carleson, states the following: for any convex function g with g(0) = 0, and for any −1 < p < ∞, ∫₀^∞ x^p e^{−g(x)/x} dx ≤ e^{p+1} ∫₀^∞ x^p e^{−g′(x)} dx. Carleman's inequality follows from the case p = 0. Proof Direct proof An elementary proof is sketched below. From the inequality of arithmetic and geometric means applied to the numbers 1·a₁, 2·a₂, ..., n·aₙ, (a₁a₂⋯aₙ)^{1/n} = MG(1·a₁, ..., n·aₙ) · (n!)^{−1/n} ≤ MA(1·a₁, ..., n·aₙ) · (n!)^{−1/n}, where MG stands for geometric mean and MA for arithmetic mean. The Stirling-type inequality (n + 1)ⁿ e^{−n} ≤ n! implies (n!)^{−1/n} ≤ e/(n + 1) for all n ≥ 1. Therefore ∑_{n≥1} (a₁a₂⋯aₙ)^{1/n} ≤ ∑_{n≥1} (e/(n(n + 1))) ∑_{k=1}^n k·aₖ = e ∑_{k≥1} k·aₖ ∑_{n≥k} 1/(n(n + 1)) = e ∑_{k≥1} aₖ, whence proving the inequality. Moreover, the inequality of arithmetic and geometric means of non-negative numbers is known to be an equality if and only if all the numbers coincide, that is, in the present case, if and only if aₖ = C/k for some constant C ≥ 0. As a consequence, Carleman's inequality is never an equality for a convergent series, unless all aₙ vanish, just because the harmonic series is divergent. By Hardy's inequality One can also prove Carleman's inequality by starting with Hardy's inequality for the non-negative numbers a₁, a₂, … and p > 1, replacing each aₙ with aₙ^{1/p}, and letting p → ∞. Versions for specific sequences Christian Axler and Mehdi Hassani investigated Carleman's inequality for the specific cases of aₙ = pₙ, where pₙ is the nth prime number. They also investigated the case where aₙ = 1/pₙ. They found that in one of these cases the constant e in Carleman's inequality can be replaced by a smaller constant, while in the other e remained the best possible constant. Notes References External links Real analysis Inequalities
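Carleman's inequality is easy to sanity-check numerically on a truncated series; the sketch below compares both sides for a sample geometric sequence (the sequence choice and helper name are ours, purely illustrative).

```python
import math

# Numerically compare the two sides of Carleman's inequality
#   sum_n (a_1 a_2 ... a_n)^(1/n)  <=  e * sum_n a_n
# on a truncated sample sequence a_n = 1/2^n.
def carleman_sides(a):
    lhs, log_prod = 0.0, 0.0
    for n, an in enumerate(a, start=1):
        log_prod += math.log(an)        # running log of a_1 ... a_n
        lhs += math.exp(log_prod / n)   # geometric-mean term
    return lhs, math.e * sum(a)

a = [1.0 / 2**n for n in range(1, 40)]
lhs, rhs = carleman_sides(a)
assert lhs < rhs   # strict, since the terms are non-zero
```

For this sequence the left side sums to about 1.71 while the right side is close to e ≈ 2.718, consistent with the strict form of the inequality.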
Carleman's inequality
[ "Mathematics" ]
433
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
9,320,720
https://en.wikipedia.org/wiki/Conditional%20quantifier
In logic, a conditional quantifier is a kind of Lindström quantifier (or generalized quantifier) QA that, relative to a classical model A, satisfies some or all of the following conditions ("X" and "Y" range over arbitrary formulas in one free variable): (The implication arrow denotes material implication in the metalanguage.) The minimal conditional logic M is characterized by the first six properties, and stronger conditional logics include some of the other ones. For example, the quantifier ∀A, which can be viewed as set-theoretic inclusion, satisfies all of the above except [symmetry]. Clearly [symmetry] holds for ∃A while e.g. [contraposition] fails. A semantic interpretation of conditional quantifiers involves a relation between sets of subsets of a given structure, i.e. a relation between properties defined on the structure. Some of the details can be found in the article Lindström quantifier. Conditional quantifiers are meant to capture certain properties concerning conditional reasoning at an abstract level. Generally, they are intended to clarify the role of conditionals in a first-order language as they relate to other connectives, such as conjunction or disjunction. While they can cover nested conditionals, the greater the complexity of the formula, and in particular the deeper the conditional nesting, the less helpful they are as a methodological tool for understanding conditionals, at least in some sense. Compare this methodological strategy for conditionals with that of first-degree entailment logics. References Serge Lapierre. Conditionals and Quantifiers, in Quantifiers, Logic, and Language, Stanford University, pp. 237–253, 1995. Quantifier (logic)
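The claims about ∀A and ∃A can be checked exhaustively over a small finite structure. In the sketch below (our own encoding), set inclusion models ∀A and non-empty intersection models ∃A, and the symmetry and contraposition properties are tested over all pairs of subsets of a four-element domain:

```python
from itertools import chain, combinations

# Toy model of two conditional quantifiers over a finite domain:
# forall_q is set-theoretic inclusion (the quantifier called forall-A
# in the text); exists_q is non-empty intersection (exists-A).
DOMAIN = frozenset(range(4))

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

SUBSETS = powerset(DOMAIN)

def forall_q(x, y):          # "all X are Y": inclusion
    return x <= y

def exists_q(x, y):          # "some X is Y": overlapping extensions
    return bool(x & y)

def symmetric(q):            # Q(X, Y) implies Q(Y, X), for all X, Y
    return all(q(y, x) for x in SUBSETS for y in SUBSETS if q(x, y))

def contrapositive(q):       # Q(X, Y) implies Q(complement Y, complement X)
    return all(q(DOMAIN - y, DOMAIN - x)
               for x in SUBSETS for y in SUBSETS if q(x, y))

# Matches the claims in the text: inclusion satisfies contraposition but
# not symmetry; intersection satisfies symmetry but not contraposition.
assert contrapositive(forall_q) and not symmetric(forall_q)
assert symmetric(exists_q) and not contrapositive(exists_q)
```

A counterexample to contraposition for ∃A is X = Y = the whole domain: the extensions overlap, but their complements are empty and cannot.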
Conditional quantifier
[ "Mathematics" ]
377
[ "Basic concepts in set theory", "Predicate logic", "Quantifier (logic)", "Mathematical logic" ]
9,321,064
https://en.wikipedia.org/wiki/BeeBase
BeeBase was an online bioinformatics database that hosted data related to Apis mellifera, the European honey bee, along with some pathogens and other species. It was developed in collaboration with the Honey Bee Genome Sequencing Consortium. In 2020 it was archived and replaced by the Hymenoptera Genome Database. Data and services Biological data and services available on BeeBase included: DNA and protein sequence data official bee gene set (developed by and hosted at BeeBase) genome browser linkage maps server to search the honey bee genome using BLAST Services In Feb 2007, BeeBase consisted of a GBrowse-based genome viewer and a CMap-based comparative map viewer, both modules of the Generic Model Organism Database (GMOD) project. The genome viewer included tracks for known honey bee genes, predicted gene sets (Ensembl, NCBI, EMBL-Heidelberg), STS markers (Solignac and Hunt linkage maps), honey bee expressed sequence tags (ESTs), homologs in fruit fly, mosquito and other insects, and transposable elements. The honey bee comparative map viewer displayed linkage maps and the physical map (genome assembly), highlighting markers that are common among maps. Additionally, a QTL viewer and a gene expression database were planned. The genome sequence was to serve as a reference to link these diverse data types. BeeBase organized the community annotation of the bee genome in collaboration with the Baylor College of Medicine Human Genome Sequencing Center. Data The now archived site hosts the genome sequence for Apis mellifera along with those of the following related species: Bombus terrestris Bombus impatiens Two additional species were under analysis: Apis dorsata Apis florea See also Wormbase Flybase Xenbase References External links BeeBase Model organism databases Genome projects Beekeeping
BeeBase
[ "Biology" ]
374
[ "Model organism databases", "Model organisms", "Genome projects" ]
9,321,146
https://en.wikipedia.org/wiki/CRAL-TRIO%20domain
CRAL-TRIO domain is a protein structural domain that binds small lipophilic molecules. This domain is named after cellular retinaldehyde-binding protein (CRALBP) and the TRIO guanine exchange factor. The CRALBP protein carries 11-cis-retinol or 11-cis-retinaldehyde and modulates the interaction of retinoids with visual cycle enzymes. TRIO is involved in coordinating actin remodeling, which is necessary for cell migration and growth. Other members of the family are alpha-tocopherol transfer protein and phosphatidylinositol-transfer protein (Sec14). They transport their substrates (alpha-tocopherol and phosphatidylinositol or phosphatidylcholine, respectively) between different intracellular membranes. The family also includes a guanine nucleotide exchange factor that may function as an effector of the RAC1 small G-protein. The N-terminal domain of the yeast ECM25 protein has been identified as containing a lipid-binding CRAL-TRIO domain. Structure The Sec14 protein was the first CRAL-TRIO domain for which the structure was determined. The structure contains several alpha helices as well as a beta sheet composed of 6 strands. Strands 2, 3, 4 and 5 form a parallel beta sheet, with strands 1 and 6 being anti-parallel. The structure also revealed a hydrophobic binding pocket for lipid binding. Human proteins containing this domain C20orf121; MOSPD2; PTPN9; RLBP1; RLBP1L1; RLBP1L2; SEC14L1; SEC14L2; SEC14L3; SEC14L4; TTPA; NF1; References External links - Calculated spatial positions of CRAL-TRIO domains in membrane Peripheral membrane proteins Protein domains Water-soluble transporters
CRAL-TRIO domain
[ "Biology" ]
396
[ "Protein domains", "Protein classification" ]
9,321,714
https://en.wikipedia.org/wiki/Sterol%20carrier%20protein
Sterol carrier proteins (also known as nonspecific lipid transfer proteins) are a family of proteins that transfer steroids and probably also phospholipids and gangliosides between cellular membranes. These proteins are different from plant nonspecific lipid transfer proteins but structurally similar to small proteins of unknown function from Thermus thermophilus. This domain is involved in binding sterols. The human sterol carrier protein 2 (SCP2) is a basic protein that is believed to participate in the intracellular transport of cholesterol and various other lipids. Human proteins containing this domain HSD17B4; HSDL2; SCP2; STOML1; See also Steroidogenic acute regulatory protein and START domain References External links Sterol carrier proteins in SCOP SCP-2 sterol transfer family in Pfam Peripheral membrane proteins Protein domains Protein families Water-soluble transporters
Sterol carrier protein
[ "Biology" ]
194
[ "Protein families", "Protein domains", "Protein classification" ]
9,322,003
https://en.wikipedia.org/wiki/Forest%20steppe
A forest steppe is a temperate-climate ecotone and habitat type composed of grassland interspersed with areas of woodland or forest. Locations Forest steppe primarily occurs in a belt across northern Eurasia, from the eastern lowlands of Europe to eastern Siberia in northeast Asia. It forms transition ecoregions between the temperate grasslands and temperate broadleaf and mixed forests biomes. Much of Russia belongs to the forest steppe zone, which stretches from Central Russia across the Volga region, the Urals, Siberia, and the Russian Far East. In northern North America another example of the forest steppe ecotone is the aspen parkland in the central Prairie Provinces, northeastern British Columbia, North Dakota, and Minnesota. It is the transition ecoregion from the Great Plains prairie and steppe temperate grasslands in the south to the Taiga biome forests in the north. In Central Asia the forest steppe ecotone is found in ecoregions in the mountains of the Iranian Plateau, in Iran, Afghanistan, and Balochistan. Forest steppe ecoregions The East European forest steppe forms a transition between the Central European and Sarmatic mixed forests to the north and the Pontic–Caspian steppe to the south. It extends from Romania in the west to the Ural Mountains in the east. The Kazakh forest steppe lies east of the Urals, between the West Siberian broadleaf and mixed forests and the Kazakh steppe. Altai montane forest and forest steppe The Southern Siberian rainforest includes forest-steppe areas. Selenge-Orkhon forest steppe The Daurian forest steppe lies between the Trans-Baikal conifer forests and East Siberian Taiga to the north and the Mongolian-Manchurian grassland to the south. 
Zagros Mountains forest steppe Elburz Range forest steppe Kopet Dag woodlands and forest steppe Kuhrud-Kohbanan Mountains forest steppe Canadian Aspen forests and parklands—North Dakota, Minnesota, and Canada External links References Ecoregions of the United States Ecoregions Forests Grasslands of Canada Grasslands of Russia Grasslands of the United States Grasslands Montane forests Montane grasslands and shrublands Nearctic ecoregions Palearctic ecoregions Temperate broadleaf and mixed forests Temperate grasslands, savannas, and shrublands
Forest steppe
[ "Biology" ]
436
[ "Forests", "Grasslands", "Ecosystems" ]
9,322,242
https://en.wikipedia.org/wiki/Whitney%20conditions
In differential topology, a branch of mathematics, the Whitney conditions are conditions on a pair of submanifolds of a manifold introduced by Hassler Whitney in 1965. A stratification of a topological space is a finite filtration by closed subsets Fi , such that the difference between successive members Fi and F(i − 1) of the filtration is either empty or a smooth submanifold of dimension i. The connected components of the difference Fi − F(i − 1) are the strata of dimension i. A stratification is called a Whitney stratification if all pairs of strata satisfy the Whitney conditions A and B, as defined below. The Whitney conditions in Rn Let X and Y be two disjoint (locally closed) submanifolds of Rn, of dimensions i and j. X and Y satisfy Whitney's condition A if whenever a sequence of points x1, x2, … in X converges to a point y in Y, and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then T contains the tangent j-plane to Y at y. X and Y satisfy Whitney's condition B if for each sequence x1, x2, … of points in X and each sequence y1, y2, … of points in Y, both converging to the same point y in Y, such that the sequence of secant lines Lm between xm and ym converges to a line L as m tends to infinity, and the sequence of tangent i-planes Tm to X at the points xm converges to an i-plane T as m tends to infinity, then L is contained in T. John Mather first pointed out that Whitney's condition B implies Whitney's condition A in the notes of his lectures at Harvard in 1970, which have been widely distributed. He also defined the notion of Thom–Mather stratified space, and proved that every Whitney stratification is a Thom–Mather stratified space and hence is a topologically stratified space. Another approach to this fundamental result was given earlier by René Thom in 1969. 
David Trotman showed in his 1977 Warwick thesis that a stratification of a closed subset in a smooth manifold M satisfies Whitney's condition A if and only if the subspace of the space of smooth mappings from a smooth manifold N into M consisting of all those maps which are transverse to all of the strata of the stratification, is open (using the Whitney, or strong, topology). The subspace of mappings transverse to any countable family of submanifolds of M is always dense by Thom's transversality theorem. The density of the set of transverse mappings is often interpreted by saying that transversality is a 'generic' property for smooth mappings, while the openness is often interpreted by saying that the property is 'stable'. The reason that Whitney conditions have become so widely used is because of Whitney's 1965 theorem that every algebraic variety, or indeed analytic variety, admits a Whitney stratification, i.e. admits a partition into smooth submanifolds satisfying the Whitney conditions. More general singular spaces can be given Whitney stratifications, such as semialgebraic sets (due to René Thom) and subanalytic sets (due to Heisuke Hironaka). This has led to their use in engineering, control theory and robotics. In a thesis under the direction of Wieslaw Pawlucki at the Jagellonian University in Kraków, Poland, the Vietnamese mathematician Ta Lê Loi proved further that every definable set in an o-minimal structure can be given a Whitney stratification. See also Thom–Mather stratified space Topologically stratified space Thom's first isotopy lemma Stratified space References Mather, John Notes on topological stability, Harvard, 1970 (available on his webpage at Princeton University). Thom, René Ensembles et morphismes stratifiés, Bulletin of the American Mathematical Society Vol. 75, pp. 240–284), 1969. Trotman, David Stability of transversality to a stratification implies Whitney (a)-regularity, Inventiones Mathematicae 50(3), pp. 273–277, 1979. 
Trotman, David Comparing regularity conditions on stratifications, Singularities, Part 2 (Arcata, Calif., 1981), volume 40 of Proc. Sympos. Pure Math., pp. 575–586. American Mathematical Society, Providence, R.I., 1983. Whitney, Hassler Local properties of analytic varieties. Differential and Combinatorial Topology (A Symposium in Honor of Marston Morse) pp. 205–244 Princeton Univ. Press, Princeton, N. J., 1965. Whitney, Hassler, Tangents to an analytic variety, Annals of Mathematics 81, no. 3 (1965), pp. 496–549. Differential topology Singularity theory Stratifications
Whitney conditions
[ "Mathematics" ]
1,065
[ "Topology", "Differential topology", "Stratifications" ]
9,322,479
https://en.wikipedia.org/wiki/Phlegmacium%20ponderosum
Phlegmacium ponderosum, also known as the Ponderous Cortinarius, is a species of mushroom producing fungus in the family Cortinariaceae. It is very large and due to its thick stem it can be mistaken for Boletus edulis. Taxonomy It was described in 1939 by the American mycologist Alexander H. Smith who classified it as Cortinarius ponderosus. In 2022 the species was transferred from Cortinarius and reclassified as Phlegmacium ponderosum based on genomic data. Description This mushroom is one of the largest mushrooms in the family Cortinariaceae, with a convex cap that ranges from and becomes plane in age. It often has an olive metallic tinge, and the surface is viscid, often with small rusty brown scales. The margin is ocher and remains inrolled until the mushroom is fully mature. The flesh of the mushroom is yellow-white, thick and firm, with a mild to sour odor. The gills are rusty brown, adnate and slightly decurrent. The stalk is thick, 4–10 cm wide, and bulbous at the base. It has a slimy yellow universal veil, and the cortina leaves a rusty brown hairy area on the upper stalk. The spores are brown and elliptical. Its edibility is unknown, but it is not recommended due to its similarity to deadly poisonous species. Cortinarius infractus is a similar species that usually has a smaller cap. Etymology The specific epithet ponderosum (originally ponderosus) is named for the Pinus ponderosa trees which Smith observed the mushrooms growing under. Habitat and distribution Smith observed the mushrooms growing under Pinus ponderosa and Quercus (Oak) species near Cave City in Oregon and under Spruce trees near Crescent City, California. See also List of Cortinarius species References -- Cortinarius ponderosus Cortinarius ponderosus photo Cortinarius ponderosus info ponderosus Fungi of North America Fungi described in 1939 Fungus species
Phlegmacium ponderosum
[ "Biology" ]
421
[ "Fungi", "Fungus species" ]
4,123,943
https://en.wikipedia.org/wiki/Doebner%E2%80%93Miller%20reaction
The Doebner–Miller reaction is the organic reaction of an aniline with α,β-unsaturated carbonyl compounds to form quinolines. This reaction is also known as the Skraup–Doebner–Von Miller quinoline synthesis, and is named after the Czech chemist Zdenko Hans Skraup (1850–1910) and the Germans Oscar Döbner (Doebner) (1850–1907) and Wilhelm von Miller (1848–1899). When the α,β-unsaturated carbonyl compound is prepared in situ from two carbonyl compounds (via an aldol condensation), the reaction is known as the Beyer method for quinolines. The reaction is catalyzed by Lewis acids such as tin tetrachloride and scandium(III) triflate, and by Brønsted acids such as p-toluenesulfonic acid, perchloric acid, Amberlite and iodine. Reaction mechanism The reaction mechanism for this reaction and the related Skraup synthesis is a matter of debate. A 2006 study proposes a fragmentation–recombination mechanism based on carbon isotope scrambling experiments. In this study 4-isopropylaniline 1 is reacted with a 50:50 mixture of ordinary pulegone and the 13C-enriched isomer 2, and the reaction mechanism is outlined in scheme 2 with the labeled carbon identified by a red dot. The first step is a nucleophilic conjugate addition of the amine to the enone, giving the amino ketone 3 in a reversible reaction. This intermediate then fragments to the imine 4a and the saturated cyclohexanone 4b in a non-reversible reaction, and both fragments recombine in a condensation reaction to give the conjugated imine 5. In the next step 5 reacts with a second aniline molecule in a nucleophilic conjugate addition to give the imine 6, and subsequent electrophilic addition and proton transfer lead to 7. Elimination of one aniline molecule through 8 and rearomatization lead to the final product 9. Because α-amino protons are not available in this model compound, the reaction is not taken through to the fully fledged quinoline. The fragmentation to 4a and 4b is key to this mechanism because it explains the isotope scrambling results.
In the reaction only half of the pulegone reactant (2) is labeled, so on recombination a labeled imine fragment can react with either a labeled or an unlabeled ketone fragment; likewise, a labeled ketone fragment can react with a labeled or unlabeled imine fragment. The resulting product distribution is confirmed by mass spectrometry of the final product 9. See also Combes quinoline synthesis Doebner reaction Gould–Jacobs reaction Knorr quinoline synthesis Skraup synthesis
Doebner–Miller reaction
[ "Chemistry" ]
607
[ "Name reactions", "Condensation reactions", "Organic reactions" ]
4,124,045
https://en.wikipedia.org/wiki/Ceremonial%20use%20of%20lights
The ceremonial use of lights occurs in liturgies of various Christian Churches, as well as in Jewish, Zoroastrian, and Hindu rites and customs. Fire is used as an object of worship in many religions. Fire-worship still has its place in at least two of the great religions of the world. The Zoroastrians revere fire as the visible expression of Ahura Mazda, the eternal principle of light and righteousness; the Hindus worship it as divine and omniscient. One of the most popular festivals of Hinduism, Diwali (from the Sanskrit dīpāwali, meaning "row or series of lights"), symbolizes the spiritual "victory of light over darkness, good over evil, and knowledge over ignorance". According to the Talmud and Kabbalah, in the Holy of Holies of the Tabernacle there was a cloud of light (shekinah), and before it stood the candlestick with six branches, on each of which and on the central stem was a lamp eternally burning; while in the forecourt was an altar on which the sacred fire was never allowed to go out. Similarly the Jewish synagogues have each their eternal lamp. Ancient Greece and Rome The Greeks and Romans, too, had their sacred fire and their ceremonial lights. In Greece the Lampadedromia or Lampadephoria (torch-race) had its origin in religious ceremonies connected with the relighting of the sacred fire. Pausanias mentions the golden lamp made by Callimachus which burned night and day in the sanctuary of Athena Polias on the Acropolis, and tells of a statue of Hermes Agoraios in the market-place of Pharae in Achaea, before which lamps were lighted. Among the Romans lighted candles and lamps formed part of the cult of the domestic tutelary deities; on all festivals doors were garlanded and lamps lighted. In the cult of Isis lamps were lighted by day. In the ordinary temples were candelabra, e.g. that in the temple of Apollo Palatinus at Rome, originally taken by Alexander from Thebes, which was in the form of a tree from the branches of which lights hung like fruit.
The lamps in the pagan temples were not symbolical, but votive offerings to the gods. Torches and lamps were also carried in religious processions. Lamps for the dead The pagan custom of burying lamps with the dead was to provide the dead with the means of obtaining light in the next world; the lamps were for the most part unlighted. It was of Asiatic origin, traces of it having been observed in Phoenicia and in the Punic colonies, but not in Egypt or Greece. In Europe it was confined to the countries under the domination of Rome. Christianity Early Christian uses In Christianity, from the very first, fire and light are conceived as symbols, if not as visible manifestations, of the divine nature and the divine presence. Christ is the true Light, and at his transfiguration the fashion of his countenance was altered, and his raiment was white and glistering; when the Holy Ghost descended upon the apostles, there appeared unto them cloven tongues of fire, and it sat upon each of them; at the conversion of St Paul there shined round him a great light from heaven; while the glorified Christ is represented as standing in the midst of seven candlesticks ... his head and hairs white like wool, as white as snow; and his eyes as a flame of fire. Christians are children of Light at perpetual war with the powers of darkness. Light represents the purifying presence of God. There is no evidence of any ceremonial use of lights in Christian worship during its first two centuries. It is recorded, indeed, that on the occasion of St Paul's preaching at Alexandria Troas there were many lights in the upper chamber; but this was at night. And the most that can be hazarded is that a specially large number were lighted as a festive illumination, as in modern Church festivals. As to a purely ceremonial use, such early evidence as exists is all the other way. A single sentence of Tertullian sufficiently illuminates Christian practice during the 2nd century.
On days of rejoicing, he says, we do not shade our door-posts with laurels nor encroach upon the day-light with lamps. Lactantius, writing early in the 4th century, is even more sarcastic in his references to the heathen practice. They kindle lights, he says, as though to one who is in darkness. Can he be thought sane who offers the light of lamps and candles to the Author and Giver of all light? This is primarily an attack on votive lights, and does not necessarily exclude their ceremonial use in other ways. There is, indeed, evidence that they were so used before Lactantius wrote. The 34th canon of the Synod of Elvira (305), which was contemporary with him, forbade candles to be lighted in cemeteries during the daytime, which points to an established custom as well as to an objection to it; and in the Roman catacombs lamps have been found of the 2nd and 3rd centuries which seem to have been ceremonial or symbolical. Again, according to the Acts of St Cyprian (died 258), his body was borne to the grave, and Prudentius, in his hymn on the martyrdom of St Lawrence, says that in the time of St Laurentius, i.e. the middle of the 3rd century, candles stood in the churches of Rome on golden candelabra. The gift, mentioned by Anastasius, made by Constantine to the Vatican basilica, of a pharum of gold, garnished with 500 dolphins each holding a lamp, to burn before St Peter's tomb, points also to a custom well established before Christianity became the state religion. Whatever previous custom may have been (and for the earliest ages it is difficult to determine absolutely, because the Christians held their services at night), by the close of the 4th century the ceremonial use of lights had become firmly and universally established in the Church. This is clear, to pass by much other evidence, from the controversy of St Jerome with Vigilantius. Vigilantius, a presbyter of Barcelona, still occupied the position of Tertullian and Lactantius in this matter.
We see, he wrote, a rite peculiar to the pagans introduced into the churches on pretext of religion, and, while the sun is still shining, a mass of wax tapers lighted. ... A great honor to the blessed martyrs, whom they think to illustrate with contemptible little candles. Jerome, the most influential theologian of the day, took up the cudgels against Vigilantius, who, in spite of his fatherly admonition, had dared again to open his foul mouth and send forth a filthy stink against the relics of the holy martyrs. If candles are lit before their tombs, are these the ensigns of idolatry? In his treatise contra Vigilantium he answers the question with much common sense. There can be no harm if ignorant and simple people, or religious women, light candles in honor of the martyrs. We are not born, but reborn, Christians, and that which when done for idols was detestable is acceptable when done for the martyrs. As in the case of the woman with the precious box of ointment, it is not the gift that merits reward, but the faith that inspires it. As for lights in the churches, he adds that in all the churches of the East, whenever the gospel is to be read, lights are lit, though the sun be rising, not in order to disperse the darkness, but as a visible sign of gladness. Taken in connection with a statement which almost immediately precedes this, this seems to point to the fact that the ritual use of lights in the church services, so far as already established, arose from the same conservative habit as determined the development of liturgical vestments, i.e. the lights which had been necessary at the nocturnal meetings were retained, after the hours of service had been altered, and invested with a symbolical meaning. Already they were used at most of the conspicuous functions of the Church. Paulinus, bishop of Nola (died 431), describes the altar at the eucharist as crowned with crowded lights, and even mentions the eternal lamp.
For their use at baptisms we have, among much other evidence, that of Zeno of Verona for the West, and that of Gregory of Nazianzus for the East. Their use at funerals is illustrated by Eusebius's description of the burial of Constantine, and Jerome's account of that of Saint Paula. At ordinations they were used, as is shown by the 6th canon of the Council of Carthage (398), which prescribes what the acolyte is to hand to the newly ordained deacon. This symbolism was not pagan, i.e. the lamps were not placed in the graves as part of the furniture of the dead; in the Catacombs they are found only in the niches of the galleries and the arcosolia, nor can they have been votive in the sense popularized later. Middle Ages As to the blessing of candles, according to the Liber pontificalis, Pope Zosimus in 417 ordered these to be blessed, and the Gallican and Mozarabic rituals also provided for this ceremony. The Feast of the Purification of the Virgin, known as Candlemas because on this day the candles for the whole year are blessed, was established, according to some authorities, by Pope Gelasius I about 492. As to the question of altar lights, however, it must be borne in mind that these were not placed upon the altar, or on a retable behind it, until the 12th century. These were originally the candles carried by the deacons, according to the Ordo Romanus (i. 8; ii. 5; iii. 7) seven in number, which were set down either on the steps of the altar or, later, behind it. In certain of the Eastern Churches to this day there are no lights on the high altar; the lighted candles stand on a small altar beside it, and at various parts of the service are carried by the lectors or acolytes before the officiating priest or deacon.
The crowd of lights described by Paulinus as crowning the altar were either grouped round it or suspended in front of it; they are represented by the sanctuary lamps of the Latin Church and by the crown of lights suspended in front of the altar in the Greek. To trace the gradual elaboration of the symbolism and use of ceremonial lights in the Church, until its full development and systematization in the Middle Ages, would be impossible here. It must suffice to note a few stages in the development of the process. The burning of lights before the tombs of martyrs led naturally to their being burned also before relics, and lastly before images and pictures. This latter practice, hotly denounced as idolatry during the iconoclastic controversy, was finally established as orthodox by the Second General Council of Nicaea (787), which restored the use of images. A later development, however, by which certain lights themselves came to be regarded as objects of worship and to have other lights burned before them, was condemned as idolatrous by the Synod of Noyon in 1344. The passion for symbolism extracted ever new meanings out of the candles and their use. Early in the 6th century Magnus Felix Ennodius, bishop of Pavia, pointed out the threefold elements of a wax candle (Opusc. ix. and x.), each of which would make it an offering acceptable to God: the rush-wick is the product of pure water, the wax the offspring of virgin bees, and the flame is sent from heaven. Clearly, wax was a symbol of the Blessed Virgin and the holy humanity of Christ. The later Middle Ages developed the idea. Durandus, in his Rationale, interprets the wax as the body of Christ, the wick as his soul, the flame as his divine nature, and the consuming candle as symbolizing his passion and death. This may be the Paschal Candle only. In some codices the text runs differently; in the three variants of the notice of Zosimus given in Duchesne's edition (1886–1892), however, only a single term is used.
Nor does the text imply that he gave to the suburbicarian churches a privilege hitherto exercised by the metropolitan church; the word in question obviously refers to the headgear of the deacons, not to the candles. See also the Peregrinatio Sylviae (386), 86, &c., for the use of lights at Jerusalem, and Isidore of Seville for the usage in the West. That even in the 7th century the blessing of candles was by no means universal is proved by the 9th canon of the Council of Toledo (671): De benedicendo cereo et lucerna in privilegiis Paschae. This canon states that candles and lamps are not blessed in some churches, and that inquiries have been made as to why it is done. In reply, the council decides that it should be done to celebrate the mystery of Christ's resurrection. See Isidore of Seville, Conc., in Migne, Pat. lat. lxxxiv. 369. Eastern Christian usage In the Eastern Orthodox Church and those Eastern Catholic Churches which follow the Byzantine Rite, there is a large amount of ceremonial use of light. The most important usage is the reception of the Holy Fire at the Church of the Holy Sepulchre in Jerusalem on the afternoon of Holy Saturday. This flame is often taken by the faithful to locations all over the world. The temple When a new temple (church building) is consecrated, the bishop kindles a flame in the sanctuary which traditionally should burn perpetually from that time forward. This sanctuary lamp is usually an oil lamp located either on or above the Holy Table (altar). In addition, in the Eastern Orthodox Church there must be candles on the Holy Table during the celebration of the Divine Liturgy. In some places this takes the form of a pair of white candles; in others, it may be a pair of five-branch candlesticks. There is also traditionally a seven-branch candlestick on or behind the Holy Table, recalling the one mandated in the Old Testament Tabernacle and the Temple in Jerusalem.
Around the temple, there are a number of oil lamps burning in front of the icons, especially on the iconostasis. Additionally, the faithful will offer beeswax candles in candle stands in front of important icons. The faithful offer candles as they pray for both the living and the departed. It is customary during funerals and memorial services for everyone to stand holding lit candles. Often everyone will either extinguish their candles or put them in a candle stand at a certain point near the end of the memorial service to indicate that at some point, everyone will have to surrender their soul to God. Special moments The reading from the Gospel Book must always be accompanied by lighted candles, as a sign that Christ is the Light which enlightens all (). When the priest and deacon cense the temple, the deacon will walk with a lighted candle. During processions, and in some places during the liturgical entrances, either candles or lanterns are carried by altar servers. On certain feast days, the clergy, and sometimes all of the faithful, will stand holding candles for certain solemn moments during the service. This is especially so during Holy Week during the reading of the 12 Passion Gospels on Great Friday, and the Lamentations around the epitaphios on Great Saturday. Certain moments during the All Night Vigil will be accentuated by the lighting or extinguishing of lamps or candles. The Polyeleos is an important moment in the service when all of the lamps and candles in the church should be illuminated. Whenever the bishop celebrates the divine services, he will bless with a pair of candlesticks known as dikirion and trikirion, holding two and three candles, respectively. In the home The faithful will often keep a lamp burning perpetually in their icon corner. 
In the Russian Orthodox Church, it is customary to try to preserve the flame from the service of the 12 Passion Gospels and bring it home to bless the house: there is a custom of using the flame from this candle to mark a cross on the lintel of one's doorway before entering after the service, and of then using the flame to re-kindle the lamp in the icon corner. Paschal Vigil and Bright Week During the Paschal Vigil, after the Midnight Office, all of the candles and lamps in the temple are extinguished, with the exception of the sanctuary lamp behind the iconostasis, and all wait in silence and darkness. (In Orthodox churches, when possible, the Holy Fire arrives from the Holy Sepulchre during Holy Saturday afternoon and is used to light anew the flame in the sanctuary lamp.) At the stroke of midnight, the priest censes around the Holy Table and lights his candle from the sanctuary lamp. Then the Holy Doors are opened and all the people light their candles from the priest's candle. Then all the clergy and the people exit the church and go in procession three times around it, holding lighted candles and singing a hymn of the resurrection. During the Paschal Vigil, and throughout Bright Week, the priest will hold a special paschal candle (in the Greek tradition a single candle, in the Slavic tradition a triple candlestick) at the beginning of the service, whenever he censes, and at other special moments during the service. In the Slavic tradition, the deacon also carries a special paschal candle which he holds at the beginning, whenever he censes, and whenever he chants an ektenia (litany). Oriental Orthodox In the Ethiopian Orthodox Church, it is customary to light bonfires on the Feast of Timkat (Epiphany). Roman Catholic usage in the early 20th century In the Latin Church or Roman Catholic Church, the use of ceremonial lights falls under three heads.
(1) They may be symbolical of the light of God's presence, of Christ as Light of Light, or of the children of Light in conflict with the powers of darkness; they may even be no more than expressions of joy on the occasion of great festivals. (2) They may be votive, i.e. offered as an act of worship (latria) to God. (3) They are, in virtue of their benediction by the Church, sacramentalia, i.e. efficacious for the good of men's souls and bodies, and for the confusion of the powers of darkness. With one or more of these implications, they are employed in all the public functions of the Church. Dedication of a church At the consecration of a church twelve lights are placed around the walls at the twelve spots where these are anointed by the bishop with holy oil, and on every anniversary these are relighted; at the dedication of an altar tapers are lighted and censed at each place where the table is anointed (Pontificale Rom. p. ii. De eccl. dedicat. seu consecrat.). Mass At every liturgical service, and especially at Mass and at choir services, there must be at least two lighted tapers on the altar, as symbols of the presence of God at Mass and tributes of adoration. For the Mass the rule is that there are six lights at High Mass, four at missa cantata, and two at private masses. At a Pontifical High Mass (i.e. when the bishop celebrates) the lights are seven, because seven golden candlesticks surround the risen Saviour, the chief bishop of the Church (see Rev. i. 12). At most pontifical functions, moreover, the bishop as the representative of Christ is preceded by an acolyte with a burning candle on a candlestick or bugia. The Ceremoniale Episcoporum (i. 12) further orders that a burning lamp is to hang at all times before each altar, three in front of the high altar, and five before the reserved Sacrament, as symbols of the eternal Presence.
In practice, however, it is usual to have only one lamp lighted before the tabernacle in which the Host is reserved. The special symbol of the real presence of Christ is the Sanctus candle, which is lighted at the moment of consecration and kept burning until the communion. The same symbolism is intended by the lighted tapers which must accompany the Host whenever it is carried in procession, or to the sick and dying. As symbols of light and joy, a candle is held on each side of the deacon when reading the Gospel at Mass; and the same symbolism underlies the multiplication of lights on festivals, their number varying with the importance of the occasion. As to the number of these latter no rule is laid down. They differ from liturgical lights in that, whereas these must be tapers of pure beeswax or lamps fed with pure olive oil (except by special dispensation under certain circumstances), those used merely to add splendour to the celebration may be of any material; the only exception being that, in the decoration of the altar, gas-lights are forbidden. In general, the ceremonial use of lights in the Roman Catholic Church is conceived as a dramatic representation in fire of the life of Christ and of the whole scheme of salvation. On Easter Eve the new fire, symbol of the light of the newly risen Christ, is produced, and from this are kindled all the lights used throughout the Christian year until, in the gathering darkness (tenebrae) of the Passion, they are gradually extinguished. This quenching of the light of the world is symbolized at the service of Tenebrae in Holy Week by the placing on a stand before the altar of thirteen lighted tapers arranged pyramidally, the rest of the church being in darkness. The penitential psalms are sung, and at the end of each a candle is extinguished. When only the central one is left it is taken down and carried behind the altar, thus symbolizing the nocturnal darkness, so that our hearts may be illumined by invisible fire, &c.
(Missale Rom.). In the form for the blessing of candles extra diem Purificationis B. Mariae Virg. the virtue of the consecrated candles in discomfiting demons is specially brought out: that in whatever places they may be lighted, or placed, the princes of darkness may depart, and tremble, and may fly terror-stricken with all their ministers from those habitations, nor presume further to disquiet and molest those who serve thee, Almighty God (Rituale Rom.). Altar candlesticks consist of five parts: the foot, stem, knob in the centre, bowl to catch the drippings, and pricket (a sharp point on which the candle is fixed). It is permissible to use a long tube, pointed to imitate a candle, in which a small taper is forced to the top by a spring (Cong. Rit., 11th May 1878). Easter On Easter Eve new fire is made with a flint and steel, and blessed; from this three candles are lighted, the lumen Christi, and from these again the Paschal Candle. This is the symbol of the risen and victorious Christ, and burns at every solemn service until Ascension Day, when it is extinguished and removed after the reading of the Gospel at High Mass. This, of course, symbolizes the Ascension; but meanwhile the other lamps in the church have received their light from the Paschal Candle, and so symbolize throughout the year the continued presence of the light of Christ. Baptism At the consecration of the baptismal water the burning Paschal Candle is dipped into the font so that the power of the Holy Ghost may descend into it and make it an effective instrument of regeneration. This is the symbol of baptism as rebirth as children of Light. Lighted tapers are also placed in the hands of the newly baptized, or of their god-parents, with the admonition to preserve their baptism inviolate, so that they may go to meet the Lord when he comes to the wedding. Thus, too, as children of Light, candidates for ordination and novices about to take the vows carry lights.
when they come before the bishop; and the same idea underlies the custom of carrying lights at weddings, at the first communion, and by priests going to their first mass, though none of these are liturgically prescribed. Finally, lights are placed around the bodies of the dead and carried beside them to the grave, partly as symbols that they still live in the light of Christ, partly to frighten away the powers of darkness. Funeral During the funeral service, the Paschal Candle is placed, burning, near the coffin, as a reminder of the deceased's baptismal vows and hope of eternal life and salvation brought about by the death and resurrection of Jesus, and of faith in the resurrection of the dead. Excommunication Conversely, the extinction of lights is part of the ceremony of excommunication (Pontificale Rom. pars iii.). Regino, abbot of Prüm, describes the ceremony as it was carried out in his day, when its terrors were yet unabated (De eccles. disciplina, ii. 409). Twelve priests should stand about the bishop, holding in their hands lighted torches, which at the conclusion of the anathema or excommunication they should cast down and trample under foot. When the excommunication is removed, the symbol of reconciliation is the handing to the penitent of a burning taper. Lutheran usage In the Lutheran Churches ceremonial lights were retained, and in Evangelical Germany they have even survived most of the other medieval rites and ceremonies (e.g. the use of vestments) which were not abolished at the Reformation itself. The custom of placing lighted candles around the bodies of the dead is still practised by Lutherans. Anglican usage In the Church of England the practice has been less consistent. The first Book of Common Prayer directed two lights to be placed on the altar. This direction was omitted in the second Prayer-book; but the Ornaments Rubric of Queen Elizabeth's Prayer-book again made them obligatory.
The question of how far this did so is a much-disputed one, and is connected with the whole problem of the meaning and scope of the rubric. Uncertainty reigns with regard to the actual usage of the Church of England from the Reformation onwards. Lighted candles certainly continued to burn in Queen Elizabeth's chapel, to the scandal of Protestant zealots. They also seem to have been retained in certain cathedral and collegiate churches. There is, however, no mention of ceremonial candles in the detailed account of the services of the Church of England given by William Harrison (Description of England, 1570). They seem never to have been illegal under the Acts of Uniformity. The use of wax lights and tapers formed one of the indictments brought by Peter Smart, a Puritan prebendary of Durham, against Dr. Burgoyne, John Cosin and others for setting up superstitious ceremonies in the cathedral contrary to the Act of Uniformity. The indictments were dismissed in 1628 by Sir James Whitelocke, chief justice of Chester and a judge of the King's Bench, and in 1629 by Sir Henry Yelverton, a judge of Common Pleas and himself a strong Puritan. The use of ceremonial lights was among the indictments in the impeachment of Laud and other bishops by the House of Commons, but these were not based on the Act of Uniformity. From the Restoration onwards the use of ceremonial lights, though far from universal, was again usual in cathedrals and collegiate churches. It was not, however, until the Oxford Movement of the 19th century that their use was widely extended in parish churches. The growing custom met with some opposition; the law was appealed to, and in 1872 the Privy Council declared altar lights to be illegal (Martin v. Mackonochie). This judgment, founded, as was afterwards admitted, on insufficient knowledge, produced no effect.
In the absence of any authoritative negative pronouncement, churches returned to practically the whole ceremonial use of lights as practised in the Roman Catholic Church. The matter was again raised in the case of Read and others v. the Bishop of Lincoln, one of the counts of the indictment being that the bishop had, during the celebration of Holy Communion, allowed two candles to be alight on a shelf or retable behind the communion table when they were not necessary for giving light. The Archbishop of Canterbury, in whose court the case was heard (1889), decided that the mere presence of two candles on the table, burning during the service but lit before it began, was lawful under the first Prayer-Book of Edward VI. and had never been made unlawful. On the case being appealed to the Privy Council, this particular indictment was dismissed on the ground that the vicar, not the bishop, was responsible for the presence of the lights. The custom of placing lighted candles around the bodies of the dead, especially when lying in state, has never wholly died out in the Anglican communion. In the 18th century, moreover, it was still customary in England to accompany a funeral with lighted tapers. A contemporary illustration shows a funeral cortege preceded and accompanied by boys, each carrying four lighted candles in a branched candlestick. The usage in this respect in Anglo-Catholic churches is a revival of pre-Reformation ceremonial as is found in the Roman Catholic Church. Reformed usage As a result of the Reformation, the use of ceremonial lights was either greatly modified, or totally abolished in the Reformed Churches. Candles and lamps were only used to provide necessary illumination. Since the nineteenth century, many churches in the Reformed tradition, especially in the United States, commonly use two or more candles on the Communion Table, influenced by the liturgical movement. 
The use of the Advent wreath has gained near universal acceptance, even in churches traditionally hostile to ceremonial lights, such as the Church of Scotland. Usage in Hinduism In almost all Hindu homes, lamps are lit daily, sometimes before an altar. In some houses, oil lamps or candles are lit at dawn; in some they are lit at both dawn and dusk; and in a few, lamps are maintained continuously. A diya, or clay lamp, is frequently used in Hindu celebrations and forms an integral part of many social rites. It is a strong symbol of enlightenment, hope, and prosperity. Diwali is the festival of lights celebrated by followers of dharmic religions. In its traditional and simplest form, the diya is made from baked clay or terracotta and holds oil or ghee that is lit via a cotton wick. Traditional diyas have now evolved into a form wherein waxes are used as replacements for oils. Usage in Sikhism In Sikhism, lamps are lit on Diwali, the festival of lights, as well as every day by many followers. Buddhism Candles are a traditional part of Buddhist ritual observances. Along with incense and flowers, candles (or some other type of light source, such as butter lamps) are placed before Buddhist shrines or images of the Buddha as a show of respect. They may also be accompanied by offerings of food and drink. The light of the candles is described as representing the light of the Buddha's teachings, echoing the metaphor of light used in various Buddhist scriptures. See Loy Krathong and Ubon Ratchathani Candle Festival for examples of Buddhist festivals that make extensive use of candles. Christianity In Christianity the candle is commonly used in worship both for decoration and ambiance, and as a symbol that represents the light of God or, specifically, the light of Christ. The altar candle is often placed on the altar, usually in pairs. Candles are also carried in processions, especially to either side of the processional cross.
A votive candle or taper may be lit as an accompaniment to prayer. Candles are lit by worshippers in front of icons in Eastern Orthodox, Oriental Orthodox, Eastern Catholic and other churches. This is referred to as "offering a candle", because the candle is a symbol of the worshiper offering himself or herself to God (and proceeds from the sale of the candle are offerings by the faithful which go to help the church). Among the Eastern Orthodox, there are times when the entire congregation stands holding lit tapers, such as during the reading of the Matins Gospels on Good Friday, the Lamentations on Holy Saturday, funerals, Memorial services, etc. There are also special candles that are used by Orthodox clergy. A bishop will bless using dikirion and trikirion (candlesticks holding two and three candles, respectively). At Pascha (Easter) the priest holds a special Paschal trikirion, and the deacon holds a Paschal candle. The priest will also bless the faithful with a single candle during the Liturgy of the Presanctified Gifts (celebrated only during Great Lent). In the Roman Catholic Church a liturgical candle must be made of at least 51% beeswax; the remainder may be paraffin or some other substance. In the Orthodox Church, the tapers offered should be 100% beeswax, unless poverty makes this impossible. The stumps from burned candles can be saved and melted down to make new candles. In some Western churches, a special candle known as the Paschal candle specifically represents the Resurrected Christ and is lit only at Easter, funerals, and baptisms. In the Eastern Orthodox Church, during Bright Week (Easter Week) the priest holds a special Paschal trikirion (triple candlestick) and the deacon holds a large candle during all of the services at which they serve. In Sweden (and other Scandinavian countries), St. Lucia Day is celebrated on December 13 with the crowning of a young girl with a wreath of candles. 
In many Western churches, a group of candles arranged in a ring, known as an Advent wreath, are used in church services in the Sundays leading up to Christmas. In households in some Western European countries, a single candle marked with the days of December is gradually burned down, day by day, to mark the passing of the days of Advent; this is called an Advent candle. Judaism In Judaism, a pair of Shabbat candles are lit on Friday evening prior to the start of the weekly Sabbath celebration. On Saturday night, a special candle, usually braided and with several wicks, is lit for the Havdalah ritual marking the end of the Sabbath and the beginning of the new week. The eight-day holiday of Hanukkah, also known as the Festival of Lights, is celebrated by lighting a special Hanukkiyah each night to commemorate the rededication of the Temple in Jerusalem. A memorial candle is lit on the Yahrtzeit, or anniversary of the death of a loved one according to the Hebrew calendar. The candle burns for 24 hours. A memorial candle is also lit on Yom HaShoah, a day of remembrance for all those murdered in The Holocaust. A seven-day memorial candle is lit following the funeral of a spouse, parent, sibling or child. Candles are also lit prior to the onset of the Three Festivals (Sukkot, Passover and Shavuot), Rosh Hashana, and the eve of Yom Kippur. A candle is also used on the night before Passover in a symbolic search for chametz, or leavened bread, which is not eaten on Passover. Other traditions The candle is also used in celebrations of Kwanzaa, an African American holiday that runs from December 26 to January 1. A Kinara is used to hold candles in these celebrations. It holds seven candles; three red candles to represent African American struggles, one black candle to represent the African American people and three green candles to represent African American hopes. 
During satanic rituals, black candles are the only light source, except for one white candle on the altar. The dim lighting is used to create an air of mystique and the color of the candles has symbolic meaning. References Article Lucerna, by J. Toutain, in Daremberg and Saglio's Dictionnaire des Antiquités Grecques et Romaines (Paris, 1904) J. Marquardt, Römische Privatalterthumer (vol. v. of Wilhelm Adolf Becker, Handbuch der römische Alterthumer ii. 238–301) Article Cierges et lampes, in Joseph-Alexander Martigny, Dictionnaire des Antiquités Chrétiennes (Paris, 1865) Articles Lichter and Koimetarien (pp. 834 seq.) in Herzog-Hauck's Realencyklopedie (3rd ed., Leipzig, 1901) Article Licht in Wetzer and Welte's Kirchenlexikon (Freiburg-i.-B., 1882–1901), an exposition of the symbolism from the Catholic point of view, also Kerze and Lichter W. Smith and S. Cheetham, Dictionary of Christian Antiquities (London, 1875–1880), i. 939 seq. W. Mühlbauer, Geschichte und Bedeutung der Wachslichter bei den kirchlichen Funktionen (Augsburg, 1874) V. Thalhofer, Handbuch der Katholischen Liturgik (Freiburg-i.-B., 1887), i. 666 seq. Hierurgia Anglicana, edition by Vernon Staley (London, 1903) Notes Light sources Ritual Religious objects Sacramentals
Ceremonial use of lights
[ "Physics", "Biology" ]
7,853
[ "Religious objects", "Behavior", "Physical objects", "Ritual", "Human behavior", "Matter" ]
4,124,553
https://en.wikipedia.org/wiki/Bruno%20Grollo
Bruno Gordano Grollo (born 1942, Melbourne, Victoria) is an Australian businessman, property developer and former director of Grocon, noted for the controversy surrounding the Swanston Street wall collapse of 28 March 2013. Bruno is the son of Luigi Grollo, who founded Grocon, one of Australia’s largest construction companies, in 1948 after immigrating to Australia from Italy. Bruno retained a role in the company despite handing the titles of chief executive and chairman to his son, Daniel Grollo, in 1999. Following public disputes with Infrastructure NSW in 2020, Grocon announced that it and 86 of its subsidiaries had entered voluntary administration. Early life Bruno Grollo was born in Melbourne in 1942 and is the son of accountant Emma Girardi (1913-1986) and builder Luigi Arturo Grollo (1909-1994). His grandfather, Giovanni Grollo, was a farmer. Luigi Grollo emigrated to Australia at 18 years old, after an adolescence marked by war, drought, storms and the death of his mother at 52 years old. He said of the experience growing up in Italy, ‘The following year, 1928, I saw that things were still going bad there. There was another storm that carried off everything. It left only the soles of our feet! Here were some new debts to pay off.’ Luigi Grollo and his family left their hometown of Arcade, Treviso, Italy after it became a World War I battleground and was no longer habitable. At 18 years old, with his older brother sponsoring him, he boarded the passenger ship named the Principe d’Udine and arrived in Melbourne on 24 July 1928 to start a new life in Australia. His cousin Carlo Zanatta was awaiting his arrival but did not recognise Luigi as they had not been together since he was a young boy. Luigi said of Carlo, ‘He was a good man to me. Zanatta took me to a boarding house in Russell Street, Melbourne. There we stayed all one day and one night. 
The next morning we left for Healesville to go to work.’ In 1938, Luigi settled in Carlton and began concreting work, building his construction business, formerly known as L Grollo & Sons, on the weekends, while his wife, Emma, helped with bookkeeping and accounts. Luigi’s one-man company began with residential paths, gutters, fireplace foundations and swimming pools before rapidly expanding in the 1950s to become the Grollo Group, transitioning to constructing multiple high-rises in Melbourne. Bruno had a substantial role in his father’s company whilst growing up; he and his brother would help out, gaining trade experience whilst still at school. He had minimal formal education growing up, recalling his attendance as a ‘series of Catholic schools’ before beginning his career as a labourer. In 1958, at 15 years old, Bruno left school and began his career in construction when he joined his father’s company, which at the time had almost 130 employees. His brother, Rino Grollo, joined the company in 1965. In 1968, after suffering a heart attack, the patriarch and director of Grocon, Luigi Grollo, retired and left his sons, Bruno and Rino, as co-directors of his company Grocon. Following the stressful period after their mother’s death in 2001, Bruno and Rino divided the company and its assets into two. Bruno headed Grocon Constructions and multiple building assets and in 2003 made his two sons, Adam and Daniel, joint managing directors. Controversy Bruno Grollo has been involved in several media controversies concerning himself and his company, Grocon. On Trial In 1997, Bruno Grollo and his co-accused, John William Flanagan and Robert Charles Howard, were acquitted of conspiracy charges. They were accused of bribing a Federal Police officer, Superintendent Lloyd Farrell, and of conspiring to pervert the course of justice. 
The charges arose from an investigation by the taxation office, in which it was alleged that Grollo had failed to declare $59 million in the process of building the Rialto Towers. Recorded as one of the longest trials in Victorian history, running for 13 months, this investigation into the taxation affairs of the Grollo Group ended on 26 June 1997 with a not-guilty verdict on all charges for all three men: Grollo, Flanagan and Howard. Swanston Street collapse On 28 March 2013, during wind gusts of up to 102 kilometres per hour (63 mph), a wall at a Grocon building site collapsed on Swanston Street, Melbourne, killing three passing pedestrians: Bridget Jones, Alexander Jones and Marie-Faith Fiawoo. In this fatal incident, a promotional hoarding had been incorrectly fastened to a Grocon brick wall; in the resulting court case, Grocon Victoria Street Pty Ltd pleaded guilty to a charge of failing to ensure a safe workplace. Grollo stated about the incident, ‘I personally, along with all of the directors and employees of Grocon, reiterate our deep regret at the tragic and untimely loss’. The case, brought by WorkSafe Victoria and concluded in 2014, resulted in a conviction and a $250,000 fine for Grollo’s company, Grocon. Grocon Constructions As the new co-director of Grocon, Bruno and his company were involved in many of the projects that created Melbourne’s skyline. His projects included the Rialto Towers, the Hyatt Hotel and the Eureka Tower in 2006, which was one of the world’s highest residential towers at the time. Continuing the expansion into Sydney with the Governor Phillip Tower, the Macquarie Towers and 1 Bligh Street, the two brothers led Australia’s construction industry to new heights. Grollo Tower The Grollo Tower proposal was a $1.7 billion, 500m skyscraper for the Melbourne Docklands, proposed by Bruno as a gift to Melburnians in 1995, but also partially funded by the Victorian public. 
Bruno stated of the tower, 'It would be a golden building for a golden city for the golden times to come ... it has to put the city on the world map’. His ambitious ideals underlined many aspects of the company; Grollo stated he wanted, ‘To do something for Melbourne that did what the pyramids did for Egypt, or the Colosseum did for Rome, or the Opera House and Harbour Bridge did for Sydney'. The Grollo Tower, although it never came to fruition, would have been the tallest building in the world at that time. The proposal was reviewed again in 2003 for construction to begin in Dubai, commissioned by The Grollo Corporation and Emaar Properties, the largest development company in the United Arab Emirates. The $3 billion deal was proposed as an exact replica of the original Grollo Tower, but the project was ultimately cancelled and Bruno’s ambitious skyscraper was never built. Cyclone Tracy restoration On Christmas Day in 1974, Cyclone Tracy destroyed more than 70 percent of Darwin’s buildings, including 80 percent of its houses; this led to the Northern Territory Government signing a contract with the Grollo Group to help with restorations. Both Rino and Bruno were involved in the restoration of the cyclone-torn city, building 400 cyclone-proof houses with various designs for the government. This contract substantially grew their business and by the 1980s, Grocon had a total workforce of over 1,000 employees. Voluntary administration In 2020, following public disputes with Infrastructure NSW, Grocon announced that it and 86 of its subsidiaries had been placed in voluntary administration. Grocon’s difficulties became public in November 2020, when Daniel Grollo ran into trouble on the company’s latest projects in Barangaroo, Sydney, and inner-city Melbourne. In January 2018, Grocon was awarded construction rights for a project in Central Barangaroo, Sydney as a deal with Aqualand and Scentre Group. 
In 2019, during a court battle with Dexus over a $28 million lease claim, Grocon put two subsidiaries into voluntary administration. In 2020, during the COVID-19 pandemic, Grocon's only Melbourne-based project, a $111 million office development, stopped construction, with subcontractors, employees and creditors said to be owed more than $100 million. Grocon is suing the Government of New South Wales, claiming it lost $270 million during the sale of the Barangaroo Central project to Aqualand for $73 million in 2020; the case is due to be heard in the Supreme Court of New South Wales in 2022. Personal life Bruno Grollo married Dina Bettiol in 1965 and they had three children together: Daniel, Leanna and Adam. They were married for 26 years before Dina suffered a stroke which left her severely paralysed until her death, aged 58, in December 2001; Bruno keeps a room in her honour at his house. On 14 February 2004, Grollo remarried, to Pierina Biondo, at St Patrick's Cathedral, Melbourne. In 2014, he revealed in an interview with Melbourne journalist Ruth Ostrow his ongoing struggles with leukemia, melanoma and prostate cancer. Grollo stated, ‘My biggest goal now is staying alive. I’m trying to live long enough to see the success of gene, nano and stem-cell therapies which will keep us alive.’ He now employs a professional team within his Melbourne home in Thornbury, ‘Casa Del Matto’ (‘House of the Madman’ in English), to research products on the market and new science on anti-ageing and longevity. Grollo stated, ‘This is cutting-edge biology and those young and healthy enough to be around will be able to live indefinitely.’ Grollo takes up to 100 tablets per day, exercises regularly and every day hangs upside down on a machine with a backwards tilt to increase longevity. He has since retired from the construction industry, stating, ‘buildings are hard work, they’re stressful, they are draining. They’re hard to put up. I’d had enough. 
I got out.' Bruno has found a passion for meditation and Maharishi yoga and has since invested $3 million into a transcendental meditation college in Watsonia. He stated, ‘The Maharishi said consciousness is everything. It’s the closest thing to what God might be, your consciousness, mine, the dog, the cat, the flowers, the trees… transcendental meditation was the closest thing to euphoria and youth I’ve ever discovered.’ In 1991, Grollo was appointed an Officer of the Order of Australia for service to building and construction and to the community. Net worth In 2006, Grollo was listed in Forbes' top 40 richest people in Australia and New Zealand. Bruno Grollo and family were listed on the Financial Review Rich List 2018 with an assessed net worth of $702 million. Bruno Grollo and family did not appear on the 2019 Rich List, although Rino Grollo and his family were independently assessed with a net worth of $583 million. Bruno, Rino, and/or their father, Luigi (whilst living), are one of thirteen living Australians who have appeared on every Financial Review Rich List since it was first published in 1984.
{| class="wikitable"
! rowspan=2 | Year
! colspan=2 width=40% | Financial Review Rich List
! colspan=2 width=40% | Forbes
|-
! Rank
! Net worth bn
! Rank
! Net worth bn
|-
| 2006
| align="center" |
| align="right" |
| align="center" |
| align="right" |
|-
! colspan=5 style="background:#cccccc;" |
|-
| 2014
| align="center" |
| align="right" |
| align="center" |
| align="right" |
|-
| 2015
| align="center" |
| align="right" |
| align="center" | n/a
| align="right" | not listed
|-
| 2016
| align="center" |
| align="right" |
| align="center" | n/a
| align="right" | not listed
|-
| 2017
| align="center" |
| align="right" | 0.720
| align="center" | n/a
| align="right" | not listed
|-
| 2018
| align="center" | 113
| align="right" | 0.702
| align="center" |
| align="right" |
|-
| 2019
| align="center" | n/a
| align="right" | not listed
| align="center" | n/a
| align="right" | not listed
|}
Philanthropy Bruno and his brother, Rino, along with their wives, Dina Bettiol and Diana Ruzzene, became well known in the Melbourne community as generous philanthropists. They often donated to community groups, charities, educational organisations and sporting institutions. After their mother’s death in December 2001, they established The Emma Grollo Memorial Scholarship in her memory, funded by Bruno, Rino and the Grollo Group. The scholarship seeks to provide financial support to students studying Italian language or literature at the University of Melbourne. Bruno remembers his mother with these words, ‘My mother had a unique ability to keep us united. She managed to keep us united right up until the very end ... and sometimes this was not easy ... Of all her merits, this for me was the greatest.’ References External links Official website Grocon website Eureka Tower website 1942 births Australian businesspeople Australian people of Italian descent Living people Construction and civil engineering companies Cyclone Tracy Italian-Australian culture Transcendental Meditation Officers of the Order of Australia
Bruno Grollo
[ "Engineering" ]
2,896
[ "Construction and civil engineering companies", "Civil engineering organizations" ]
4,125,012
https://en.wikipedia.org/wiki/Demjanov%20rearrangement
The Demjanov rearrangement is the chemical reaction of primary amines with nitrous acid to give rearranged alcohols. It involves substitution by a hydroxyl group with a possible ring expansion. It is named after the Russian chemist Nikolai Jakovlevich Demjanov (Dem'anov, Demianov), who first reported it in 1903. Reaction mechanism The reaction begins with diazotization of the amine by nitrous acid. The diazonium group is a good leaving group, forming nitrogen gas when displaced from the organic structure. This displacement can occur via a rearrangement (path A), in which one of the sigma bonds adjacent to the diazo group migrates. This migration results in an expansion of the ring. The resulting carbocation is then attacked by a molecule of water. Alternately, the diazo group can be displaced directly by a molecule of water in an SN2 reaction (path B). Both routes lead to formation of an alcohol. Uses The Demjanov rearrangement is a method to produce a one-carbon ring enlargement in four-, five- or six-membered rings. The resulting five-, six-, and seven-membered rings can then be used in further synthetic reactions. It has been shown that the Demjanov reaction is susceptible to regioselectivity. One example is a study conducted by D. Fattori examining the regioselectivity of the Demjanov rearrangement in one-carbon enlargements of naked sugars. It showed that when an exo methylamine underwent Demjanov nitrous acid deamination, no ring enlargement was produced. However, when the endo methylamine was subjected to the same conditions, a mixture of rearranged alcohols was produced. Problems This rearrangement also leads to a substituted, but not expanded, byproduct. Thus it can be difficult to isolate the two products and acquire the desired yield. Also, stereoisomers are produced depending on the direction of addition of the water molecule, and other products may arise from further rearrangements. 
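The parent case of path A can be sketched in LaTeX/mhchem notation. This is an illustrative example only: cyclobutylmethylamine is assumed as the substrate (it is not the system from the study cited above), and C4H7 and C5H9 here denote the cyclobutyl and cyclopentyl rings.

```latex
% Illustrative Demjanov ring expansion (assumed substrate: cyclobutylmethylamine).
\documentclass{article}
\usepackage[version=4]{mhchem}
\begin{document}
% Diazotization of the primary amine by nitrous acid:
\[ \ce{C4H7CH2NH2 + HNO2 + H+ -> C4H7CH2N2+ + 2 H2O} \]
% Path A: loss of N2 with ring expansion to the cyclopentyl cation:
\[ \ce{C4H7CH2N2+ -> C5H9+ + N2} \]
% Capture of water gives the ring-expanded alcohol, cyclopentanol:
\[ \ce{C5H9+ + H2O -> C5H9OH + H+} \]
\end{document}
```

Path B, the direct SN2 displacement, would instead give the unexpanded cyclobutylmethanol, which is the substituted byproduct mentioned under Problems.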
Variations Tiffeneau-Demjanov rearrangement The Tiffeneau-Demjanov rearrangement (after Marc Tiffeneau and Nikolai Demjanov) is a variation of the Demjanov rearrangement, which involves both a ring expansion and the production of a ketone by using sodium nitrite and hydrogen cation. Using the Tiffeneau-Demjanov reaction is often advantageous as, while there are rearrangements possible in the products, the reactant always undergoes ring enlargement. As in the Demjanov rearrangement, products illustrate regioselectivity in the reaction. Migratory aptitudes of functional groups dictate rearrangement products. Use of diazomethane Diazomethane also produces ring enlargement, and its reaction is mechanistically similar to the Tiffeneau-Demjanov rearrangement. References (Review) Rearrangement reactions Name reactions
Demjanov rearrangement
[ "Chemistry" ]
632
[ "Name reactions", "Rearrangement reactions", "Organic reactions" ]
4,125,123
https://en.wikipedia.org/wiki/Rydberg%E2%80%93Ritz%20combination%20principle
The Rydberg–Ritz combination principle is an empirical rule proposed by Walther Ritz in 1908 to describe the relationship of the spectral lines for all atoms, as a generalization of an earlier rule by Johannes Rydberg for the hydrogen atom and the alkali metals. The principle states that the spectral lines of any element include frequencies that are either the sum or the difference of the frequencies of two other lines. Lines of the spectra of elements could be predicted from existing lines. Since the frequency of light is proportional to the wavenumber or reciprocal wavelength, the principle can also be expressed in terms of wavenumbers which are the sum or difference of wavenumbers of two other lines. Another related version is that the wavenumber or reciprocal wavelength of each spectral line can be written as the difference of two terms. The simplest example is the hydrogen atom, described by the Rydberg formula 1/λ = R(1/n₁² − 1/n₂²), where λ is the wavelength, R is the Rydberg constant, and n₁ and n₂ are positive integers such that n₁ < n₂. This is the difference of two terms of the form R/n². The exact Ritz combination formula was mathematically derived from this in the form ν̃ = ν̃∞ − N₀/(m + μ)², where ν̃ is the wavenumber, ν̃∞ is the limit of the series, N₀ is a universal constant (now known as R), m is the numeral (now known as n), and ν̃∞ and μ are constants fixed for a given series. Relation to quantum theory The combination principle is explained using quantum theory. Light consists of photons whose energy E is proportional to the frequency and wavenumber of the light: E = hν = hc/λ, where h is the Planck constant, c is the speed of light, and λ is the wavelength. A combination of frequencies or wavenumbers is then equivalent to a combination of energies. According to the quantum theory of the hydrogen atom proposed by Niels Bohr in 1913, an atom can have only certain energy levels. Absorption or emission of a particle of light or photon corresponds to a transition between two possible energy levels, and the photon energy equals the difference between their two energies. 
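Because each line is a difference of two terms R/n², term differences telescope, and the sum of the wavenumbers of the 1–2 and 2–3 hydrogen lines equals that of the 1–3 line. A short script can verify this; the value of the Rydberg constant is the standard one for hydrogen, while the function and variable names are only illustrative:

```python
# Numerical check of the Rydberg-Ritz combination principle for hydrogen.
R_H = 109677.58  # Rydberg constant for hydrogen, in cm^-1 (approximate)

def wavenumber(n1, n2):
    """Wavenumber (cm^-1) of the line for the transition n2 -> n1."""
    return R_H * (1.0 / n1 ** 2 - 1.0 / n2 ** 2)

# Each line is a difference of two terms R/n^2, so term differences telescope:
lyman_alpha = wavenumber(1, 2)   # 2 -> 1
balmer_alpha = wavenumber(2, 3)  # 3 -> 2
lyman_beta = wavenumber(1, 3)    # 3 -> 1

# Combination principle: the sum of two line wavenumbers is a third line.
assert abs(lyman_alpha + balmer_alpha - lyman_beta) < 1e-9
print(lyman_alpha, balmer_alpha, lyman_beta)
```

The same telescoping works for any chain of levels, which is why unknown lines could be predicted by combining known ones.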
On dividing by hc, the photon wavenumber equals the difference between two terms, each equal to an energy divided by hc, or an energy in wavenumber units (cm−1). Energy levels of atoms and molecules are today described by term symbols which indicate their quantum numbers. Also, a transition from an initial to a final energy level involves the same energy change whether it occurs in a single step or in two steps via an intermediate state. The energy of transition in a single step is the sum of the energies of transition in the two steps: E₁₃ = E₁₂ + E₂₃. The NIST database tables of lines of spectra contain observed lines and the lines calculated by use of the Ritz combination principle. History The spectral lines of hydrogen had been analyzed and found to have a mathematical relationship in the Balmer series. This was later extended to a general formula called the Rydberg formula, which could only be applied to hydrogen-like atoms. In 1908 Ritz derived a relationship that could be applied to all atoms, before the first quantum model of the atom appeared in 1913; his ideas were based on classical mechanics. This principle, the Rydberg–Ritz combination principle, is used today in identifying the transition lines of atoms. References External links Emission spectroscopy Old quantum theory
Rydberg–Ritz combination principle
[ "Physics", "Chemistry" ]
641
[ "Spectrum (physical sciences)", "Emission spectroscopy", "Quantum mechanics", "Old quantum theory", "Spectroscopy" ]
4,126,613
https://en.wikipedia.org/wiki/Pyrophyte
Pyrophytes are plants which have adapted to tolerate fire. Fire acts favourably for some species. "Passive pyrophytes" resist the effects of fire, particularly when it passes over quickly, and hence can out-compete less resistant plants, which are damaged. "Active pyrophytes" have a similar competing advantage to passive pyrophytes, but they also contain volatile oils and hence encourage the incidence of fires which are beneficial to them. "Pyrophile" plants are plants which require fire in order to complete their cycle of reproduction. Passive pyrophytes These resist fire with adaptations including thick bark, tissue with high moisture content, or underground storage structures. Examples include: Longleaf pine (Pinus palustris) Giant sequoia (Sequoiadendron giganteum) Coast redwood (Sequoia sempervirens) Cork oak (Quercus suber) Niaouli (Melaleuca quinquenervia), which is spreading in areas where bush fires are used as a means of clearing (e.g. New Caledonia). Venus fly trap (Dionaea muscipula) – this grows low to the ground in acid marshes in North Carolina, and resists fires passing over due to being close to the moist soil; fire suppression threatens the species in its natural environment. White asphodel (Asphodelus albus) For some species of pine, such as Aleppo pine (Pinus halepensis), European black pine (Pinus nigra) and lodgepole pine (Pinus contorta), the effects of fire can be antagonistic: if moderate, it helps pine cones burst open, disperses seed and clears the understorey; if intense, it destroys these resinous trees. Active pyrophytes Some trees and shrubs such as the Eucalyptus of Australia actually encourage the spread of fires by producing inflammable oils, and are dependent on their resistance to the fire which keeps other species of tree from invading their habitat. Pyrophile plants Other plants which need fire for their reproduction are called pyrophilic. 
Longleaf pine (Pinus palustris) is a pyrophile, depending on fire to clear the ground for seed germination. The passage of fire, by increasing temperature and releasing smoke, is necessary to break the seed dormancy of pyrophile plants such as Cistus and Byblis, an Australian passive carnivorous plant. Imperata cylindrica is a plant of Papua New Guinea. Even green, it ignites easily and causes fires on the hills. Evolution 99 million-year-old amber-preserved fossils of Phylica piloburmensis, belonging to the modern pyrophytic genus Phylica, show clear adaptations to fire, including pubescent, needle-like leaves, further affirmed by the presence of burned plant remains in other Burmese amber specimens. These indicate that frequent fires have exerted an evolutionary pressure on flowering plants ever since their origins in the Cretaceous, and that adaptation to fire has been present in the family Rhamnaceae for over 99 million years. See also Fire ecology Serotiny References Plant physiology
Pyrophyte
[ "Biology" ]
662
[ "Plant physiology", "Plants" ]
4,126,653
https://en.wikipedia.org/wiki/NGC%202915
NGC 2915 is a blue dwarf galaxy located 12 million light-years away in the southern constellation Chamaeleon, right on the edge of the Local Group. The optical galaxy corresponds to the core of a much larger spiral galaxy traced by radio observations of neutral hydrogen. The galaxy has a short central bar, much like the Milky Way, and very extended spiral arms. The central disk appears to be rotating in the opposite direction to the extended spiral arms. Why the spiral arms and the majority of the galaxy's disk remain neutral hydrogen, rather than having formed stars, is not well understood, but it is thought to be related to the galaxy's isolation: it has no nearby satellite galaxies and no nearby major galaxies to trigger star formation. Notes References External links 2915 26761 Dwarf irregular galaxies Chamaeleon
NGC 2915
[ "Astronomy" ]
170
[ "Chamaeleon", "Galaxy stubs", "Astronomy stubs", "Constellations" ]
4,126,880
https://en.wikipedia.org/wiki/Dichroic%20glass
Dichroic glass is glass which can display multiple different colors depending on lighting conditions. One dichroic material is a modern composite non-translucent glass that is produced by stacking layers of metal oxides which give the glass shifting colors depending on the angle of view, causing an array of colors to be displayed as an example of thin-film optics. The resulting glass is used for decorative purposes such as stained glass, jewelry and other forms of glass art. Glass sold under the commercial title of "dichroic" can also display three or more colors (trichroic) and even iridescence in some cases. The term dichroic is used more precisely when labelling interference filters for laboratory use. Another dichroic glass material first appeared in a few pieces of Roman glass from the 4th century and consists of a translucent glass containing colloidal gold and silver particles dispersed in the glass matrix in certain proportions so that the glass has the property of displaying a particular transmitted color and a completely different reflected color, as certain wavelengths of light either pass through or are reflected. In ancient dichroic glass, as seen in the most famous piece, the 4th-century Lycurgus cup in the British Museum, the glass has a green color when lit from in front in reflected light, and another, purple-ish red, when lit from inside or behind the cup so that the light passes through the glass. This is not due to alternating thin metal films but colloidal silver and gold particles dispersed throughout the glass, in an effect similar to that seen in gold ruby glass, though that has only one color whatever the lighting. Invention Modern dichroic glass is available as a result of materials research carried out by NASA and its contractors, who developed it for use in dichroic filters. However, color changing glass dates back to at least the 4th century AD, though only very few pieces, mostly fragments, survive. 
It was also made in the Renaissance in Venice and by imitators elsewhere; these pieces are also rare. Manufacture of modern dichroic glass Multiple ultra-thin layers of transparent oxides of metals such as titanium, chromium, aluminium, zirconium, or magnesium, or of silica, are vaporised by an electron beam in a vacuum chamber. The vapor then condenses on the surface of the glass in the form of a crystal structure. A protective layer of quartz crystal is sometimes added. Other variants of such physical vapor deposition (PVD) coatings are also possible. The finished glass can have as many as 30 to 50 layers of these materials, yet the thickness of the total coating is approximately 30 to 35 millionths of an inch (about 760 to 890 nm). The coating that is created is very similar to a gemstone and, by careful control of thickness, different colors may be obtained. The total light that hits the dichroic layer equals the wavelengths reflected plus the wavelengths passing through the dichroic layer. A plate of dichroic glass can be fused with other glass in multiple firings. Due to variations in the firing process, individual results can never be exactly predicted, so each piece of fused dichroic glass is unique. Over 45 colours of dichroic coatings are available to be placed on any glass substrate. Uses Optics Dichroic glass is used in various dichroic optical filters to select narrow bands of spectral colors, for example in fluorescence microscopy, LCD projectors, or 3D movies. Artists Dichroic glass is now available to artists through dichroic coating manufacturers. Glass artists often refer to dichroic glass as "dichro". Images can be formed by removing the dichroic coating from parts of the glass, creating everything from abstract patterns to letters, animals, or faces. The standard method for precision removal of the coating involves a laser. Dichroic glass is specifically designed to be hotworked but can also be used in its raw form. 
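The wavelength-selective reflection of such oxide stacks (reflected plus transmitted wavelengths summing to the incident light) can be modelled with a standard thin-film transfer-matrix calculation. The sketch below assumes illustrative refractive indices, layer count and design wavelength; it is not an actual commercial coating recipe:

```python
import cmath

def stack_reflectance(wavelength_nm, n_hi=2.4, n_lo=1.46, pairs=6,
                      n_in=1.0, n_sub=1.5, design_nm=550.0):
    """Normal-incidence reflectance of a quarter-wave high/low index stack.

    Each layer is a quarter wave thick at design_nm; indices are treated as
    wavelength-independent, which is a simplification.
    """
    # Quarter-wave physical thicknesses at the design wavelength
    layers = []
    for _ in range(pairs):
        layers.append((n_hi, design_nm / (4.0 * n_hi)))
        layers.append((n_lo, design_nm / (4.0 * n_lo)))

    # Characteristic (transfer) matrix of the whole stack
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0
    for n, d in layers:
        delta = 2.0 * cmath.pi * n * d / wavelength_nm  # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)

    # Amplitude reflection coefficient and reflectance
    b = m11 + m12 * n_sub
    c_total = m21 + m22 * n_sub
    r = (n_in * b - c_total) / (n_in * b + c_total)
    return abs(r) ** 2

print(stack_reflectance(550.0))   # near the design wavelength: close to 1
print(stack_reflectance(1100.0))  # far from it: much smaller
```

At the design wavelength the stack reflects almost all of the light, while well away from it most light is transmitted, which is the interference behaviour that produces the two complementary colours seen in reflection and transmission.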
Sculpted glass elements that have been shaped by extreme heat and then fused together may also be coated with dichroic afterwards to make them reflect an array of colors. Architecture The corporate headquarters of Amazon.com in Seattle, Washington, incorporates dichroic glass into the exterior of its high-rise building, reflecting light into various colors that depend on the time of day. The Museum at Prairiefire in Overland Park, Kansas, which opened in May 2014, is devoted primarily to natural history. It borrows displays from larger museums and hosts at least two major traveling exhibits per year. Its striking glass exterior was designed to reference the intentional prairie fires that were an integral part of farming life in Kansas. The glass is dichroic, which means that its color changes with the light of the day. The museum is itself a work of art. References External links Thesis – University of Neufchatel (Switzerland), PDF in English and French Optical materials Glass compositions Glass physics Thin-film optics es:Vidrio dicroico
Dichroic glass
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,034
[ "Glass engineering and science", "Glass chemistry", "Planes (geometry)", "Glass compositions", "Glass physics", "Materials", "Optical materials", "Condensed matter physics", "Thin-film optics", "Thin films", "Matter" ]
4,127,125
https://en.wikipedia.org/wiki/GHP%20formalism
The GHP formalism (or Geroch–Held–Penrose formalism), also known as the compacted spin-coefficient formalism, is a technique used in the mathematics of general relativity that involves singling out a pair of null directions at each point of spacetime. It is a rewriting of the Newman–Penrose formalism which respects the covariance of Lorentz transformations preserving two null directions. This is desirable for Petrov Type D spacetimes, including black holes in general relativity, where there is a preferred pair of degenerate principal null directions but no natural additional structure to fully fix a preferred Newman–Penrose (NP) frame. Covariance The GHP formalism notices that, given a spin-frame (o^A, ι^A) normalized by o_A ι^A = 1, the complex rescaling o^A → λ o^A, ι^A → λ^(−1) ι^A does not change the normalization. The magnitude of this transformation is a boost, and the phase tells one how much to rotate. A quantity of weight (p, q) is one that transforms like η → λ^p λ̄^q η. One then defines weighted derivative operators, conventionally written þ, þ′, ð and ð′, which take quantities of definite weight to quantities of definite weight. This simplifies many NP equations, and allows one to define scalars on 2-surfaces in a natural way. See also General relativity NP formalism References Mathematical methods in general relativity
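The rescaling and weight convention described above can be written out explicitly. The following summarizes the standard GHP conventions; the pair (p, q) for the weight follows common usage:

```latex
% Spin frame (o^A, \iota^A) normalized by o_A \iota^A = 1.
% The rescaling preserving the normalization:
o^A \;\longmapsto\; \lambda\, o^A, \qquad
\iota^A \;\longmapsto\; \lambda^{-1} \iota^A, \qquad
\lambda \in \mathbb{C}^{\times}.
% Writing \lambda = r e^{i\theta}: r is the boost, \theta the rotation.
% A scalar \eta of GHP weight (p, q) transforms as
\eta \;\longmapsto\; \lambda^{p}\,\bar{\lambda}^{q}\,\eta .
```

With this convention the NP spin coefficients that transform homogeneously acquire definite weights, which is what allows the equations to be "compacted".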
GHP formalism
[ "Physics" ]
247
[ "Relativity stubs", "Theory of relativity" ]
4,127,308
https://en.wikipedia.org/wiki/Center%20for%20Veterinary%20Medicine
The Center for Veterinary Medicine (CVM) is a branch of the U.S. Food and Drug Administration (FDA) that regulates the manufacture and distribution of food, food additives, and drugs that will be given to animals. These include animals from which human foods are derived, as well as food additives and drugs for pets or companion animals. CVM is responsible for regulating drugs, devices, and food additives given to, or used on, over one hundred million companion animals, plus millions of poultry, cattle, swine, and minor animal species. Minor animal species include animals other than cattle, swine, chickens, turkeys, horses, dogs, and cats. CVM monitors the safety of animal foods and medications. Much of the center's work focuses on animal medications used in food animals to ensure that significant drug residues are not present in the meat or other products from these animals. CVM does not regulate vaccines for animals; these are handled by the United States Department of Agriculture. History In 1953, a Veterinary Medical Branch of the FDA was created within the Bureau of Medicine. A separate Bureau of Veterinary Medicine (BVM) was established in 1965. At this time, the BVM included a Division of Veterinary Medical Review, Division of Veterinary New Drugs, and a Division of Veterinary Research. In 1970, the Division of Compliance and Division of Nutritional Sciences were added. The Bureau underwent reorganization in 1976 and in 1984, it was renamed the Center for Veterinary Medicine. Dr. Steven Solomon, DVM became the Director of the Center in 2017. He received his Doctor of Veterinary Medicine (DVM) degree from The Ohio State University and a Master's in Public Health from Johns Hopkins University. He succeeded Tracey Forfa, who had been acting director for a few months. The previous director was Dr. Bernadette Dunham; she served as Director from 2008 to 2016. 
Mission and vision The mission of the center is "protecting human and animal health" and the vision of the organization is "Excellence. Innovation. Leadership." The organization works across multiple disciplines to promote public health. Office structure The Center for Veterinary Medicine is divided into six key offices. The Office of the Director coordinates activities for the center and establishes policy in a wide variety of areas, including management, research, and compliance. It directs the planning, programming, budgeting, and administrative support for the center. The Office of the Director is also responsible for approving New Animal Drug Applications and Abbreviated New Animal Drug Applications, approving the use of animal food additives, and reviewing submitted New Animal Drug Applications for effects on human health. The Director serves as the spokesperson for the center's activities and is in contact with the public, industry, other government agencies, national organizations, and international organizations. The Office of Management provides customer service, guidance, and education on the activities of the center. Individuals in this office are in charge of managing strategic planning of the center's goals and priorities and serve as liaisons for specific facilities, programs, and services provided by the center. This office is also in charge of managing billing, information management and technology, talent development, and budget planning for the center. The Office of New Animal Drug Evaluation reviews information submitted by drug sponsors who are working to gain approval to manufacture and market animal drugs. This office determines if an animal drug should be approved and ensures that the new drug meets four pillars: the drug product must be safe for both animals and humans, must be effective for its intended use, must be a quality manufactured product, and must be properly labeled with how to safely use, store, and handle the drug. 
This office also ensures that these standards are maintained after the drug enters the marketplace. The office has eight divisions which each evaluate a different part of the drug review process. The Office of Surveillance and Compliance is in charge of regulating animal drugs and devices for their safety and effectiveness and also oversees animal food safety programs. Members of this office include veterinarians, animal scientists, toxicologists, consumer safety officers, and other scientists. The Office helps inspect products, analyze samples of products, and review products that may be imported into the United States. The Office conducts education and outreach about compliance, and helps monitor adverse events and identify safety issues with animal drugs, animal food, and animal devices. The Office works to prevent and address any animal food hazards. If any safety concerns are found, this Office can issue product safety alerts, packaging label changes, recalls, or can withdraw a product's approval. The Office of Research helps to develop new procedures for analyzing drugs, food additives, and contaminants. The Office works to investigate how drugs are absorbed, distributed, metabolized, and excreted, and how different drugs impact the immunology or physiology of animals. This office also helps develop screening tests for foodborne diseases and screens for drug residues in food products. The Office is involved in many scientific areas of research including veterinary medicine, animal science, biology, chemistry, microbiology, epidemiology, and pharmacology. The building that houses this Office is equipped with laboratories and animal facilities, and has specialized experimental equipment to conduct research. 
The Office of Minor Use and Minor Species is the smallest office within the Center and handles "minor use" drugs, which are those intended for use in horses, dogs, cats, cattle, pigs, turkeys, and chickens but for diseases that do not occur very frequently, only impact a small geographic area, or only affect a small number of animals each year. This Office also handles issues pertaining to "minor species", which include animals such as zoo animals, parrots, ferrets, guinea pigs, sheep, goats, and honeybees. This Office establishes and maintains the Index of Legally Marketed Unapproved New Animal Drugs for Minor Species. Outreach and education is also a significant part of this Office's activities. Sources References Veterinary medicine in the United States Food and Drug Administration National agencies for veterinary drug regulation Drug safety Experimental drugs United States federal health legislation Biotechnology products
Center for Veterinary Medicine
[ "Chemistry", "Biology" ]
1,230
[ "Biotechnology products", "Drug safety" ]
4,127,357
https://en.wikipedia.org/wiki/Hermitian%20symmetric%20space
In mathematics, a Hermitian symmetric space is a Hermitian manifold which at every point has an inversion symmetry preserving the Hermitian structure. First studied by Élie Cartan, they form a natural generalization of the notion of Riemannian symmetric space from real manifolds to complex manifolds. Every Hermitian symmetric space is a homogeneous space for its isometry group and has a unique decomposition as a product of irreducible spaces and a Euclidean space. The irreducible spaces arise in pairs as a non-compact space that, as Borel showed, can be embedded as an open subspace of its compact dual space. Harish-Chandra showed that each non-compact space can be realized as a bounded symmetric domain in a complex vector space. The simplest case involves the groups SU(2), SU(1,1) and their common complexification SL(2,C). In this case the non-compact space is the unit disk, a homogeneous space for SU(1,1). It is a bounded domain in the complex plane C. The one-point compactification of C, the Riemann sphere, is the dual space, a homogeneous space for SU(2) and SL(2,C). Irreducible compact Hermitian symmetric spaces are exactly the homogeneous spaces of simple compact Lie groups by maximal closed connected subgroups which contain a maximal torus and have center isomorphic to the circle group. There is a complete classification of irreducible spaces, with four classical series, studied by Cartan, and two exceptional cases; the classification can be deduced from Borel–de Siebenthal theory, which classifies closed connected subgroups containing a maximal torus. Hermitian symmetric spaces appear in the theory of Jordan triple systems, several complex variables, complex geometry, automorphic forms and group representations, in particular permitting the construction of the holomorphic discrete series representations of semisimple Lie groups. 
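The prototypical unit-disk example mentioned above is easy to verify numerically. The sketch below (plain numpy, with an arbitrarily chosen group element) checks that a matrix of the SU(1,1) shape acts on the Riemann sphere by Möbius transformations preserving the open unit disk and its boundary circle:

```python
import numpy as np

# An element of SU(1,1): g = [[a, b], [conj(b), conj(a)]] with |a|^2 - |b|^2 = 1.
b = 0.3 - 0.4j
a = np.sqrt(1 + abs(b) ** 2) * np.exp(0.7j)    # forces |a|^2 = 1 + |b|^2
g = np.array([[a, b], [np.conj(b), np.conj(a)]])
assert np.isclose(np.linalg.det(g), 1.0)       # g lies in SL(2,C)

def mobius(g, z):
    """Action on the Riemann sphere: z -> (az + b)/(cz + d)."""
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

# the open unit disk is preserved ...
for z in [0, 0.5, -0.3 + 0.6j, 0.9j]:
    assert abs(mobius(g, z)) < 1

# ... and so is the unit circle, the boundary orbit
assert np.isclose(abs(mobius(g, np.exp(0.4j))), 1.0)
```

The identity |az + b|² − |b̄z + ā|² = (|a|² − |b|²)(|z|² − 1) is what makes the three orbits (disk, circle, exterior) visible in this check.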
Hermitian symmetric spaces of compact type Definition Let H be a connected compact semisimple Lie group, σ an automorphism of H of order 2 and Hσ the fixed point subgroup of σ. Let K be a closed subgroup of H lying between Hσ and its identity component. The compact homogeneous space H / K is called a symmetric space of compact type. The Lie algebra 𝔥 of H admits a decomposition 𝔥 = 𝔨 ⊕ 𝔪, where 𝔨, the Lie algebra of K, is the +1 eigenspace of σ and 𝔪 the –1 eigenspace. If 𝔨 contains no simple summand of 𝔥, the pair (𝔥, σ) is called an orthogonal symmetric Lie algebra of compact type. Any inner product on 𝔥, invariant under the adjoint representation and σ, induces a Riemannian structure on H / K, with H acting by isometries. A canonical example is given by minus the Killing form. Under such an inner product, 𝔨 and 𝔪 are orthogonal. H / K is then a Riemannian symmetric space of compact type. The symmetric space H / K is called a Hermitian symmetric space if it has an almost complex structure preserving the Riemannian metric. This is equivalent to the existence of a linear map J with J2 = −I on 𝔪 which preserves the inner product and commutes with the action of K. Symmetry and center of isotropy subgroup If (𝔥, σ) is Hermitian, K has non-trivial center and the symmetry σ is inner, implemented by an element of the center of K. In fact J lies in 𝔨 and exp tJ forms a one-parameter group in the center of K. This follows because if A, B, C, D lie in 𝔪, then by the invariance of the inner product on 𝔥, ([A,B],[C,D]) = −(B,[A,[C,D]]). Replacing A and B by JA and JB, it follows that ([JA,JB],[C,D]) = ([A,B],[C,D]) and hence [JA,JB] = [A,B]. Define a linear map δ on 𝔥 by extending J to be 0 on 𝔨. The last relation shows that δ is a derivation of 𝔥. Since 𝔥 is semisimple, δ must be an inner derivation, so that δ = ad(T + A) with T in 𝔨 and A in 𝔪. Taking X in 𝔨, it follows that A = 0 and T lies in the center of 𝔨 and hence that K is non-semisimple. The symmetry σ is implemented by z = exp πT and the almost complex structure by exp π/2 T. The innerness of σ implies that K contains a maximal torus of H, so has maximal rank. 
On the other hand, the centralizer of the subgroup generated by the torus S of elements exp tT is connected, since if x is any element in K there is a maximal torus containing x and S, which lies in the centralizer. Moreover, the centralizer contains K since S is central in K, and is contained in K since z lies in S. So K is the centralizer of S and hence connected. In particular K contains the center of H. Irreducible decomposition The symmetric space or the pair (𝔥, σ) is said to be irreducible if the adjoint action of 𝔨 (or equivalently the identity component of Hσ or K) is irreducible on 𝔪. This is equivalent to the maximality of 𝔨 as a subalgebra. In fact there is a one-one correspondence between intermediate subalgebras 𝔥1 and K-invariant subspaces 𝔪1 of 𝔪 given by 𝔥1 = 𝔨 ⊕ 𝔪1. Any orthogonal symmetric algebra (𝔥, σ) of Hermitian type can be decomposed as an (orthogonal) direct sum of irreducible orthogonal symmetric algebras of Hermitian type. In fact 𝔥 can be written as a direct sum of simple algebras 𝔥i, each of which is left invariant by the automorphism σ and the complex structure J, since they are both inner. The eigenspace decomposition of each 𝔥i coincides with its intersections with 𝔨 and 𝔪. So the restriction of σ to each 𝔥i is irreducible. This decomposition of the orthogonal symmetric Lie algebra yields a direct product decomposition of the corresponding compact symmetric space H / K when H is simply connected. In this case the fixed point subgroup Hσ is automatically connected. For simply connected H, the symmetric space H / K is the direct product of Hi / Ki with Hi simply connected and simple. In the irreducible case, K is a maximal connected subgroup of H. Since K acts irreducibly on 𝔪 (regarded as a complex space for the complex structure defined by J), the center of K is a one-dimensional torus T, given by the operators exp tT. Since each H is simply connected and K connected, the quotient H/K is simply connected. 
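For the simplest case H = SU(2), the ±1 eigenspace decomposition of the Lie algebra under an inner involution, and the complex structure J coming from the center of the isotropy subgroup, can be checked by direct computation. This is an illustrative sketch; the normalization J = ad(T)/2 is chosen so that J² = −I on the −1 eigenspace for this particular generator T:

```python
import numpy as np

# Inner order-2 symmetry of su(2): sigma = Ad Z with Z = diag(i, -i).
Z = np.diag([1j, -1j])
sigma = lambda X: Z @ X @ np.linalg.inv(Z)

# A basis of su(2) (anti-Hermitian traceless matrices):
T  = np.array([[1j, 0], [0, -1j]])   # diagonal: Lie algebra of the torus K = T
E1 = np.array([[0, 1], [-1, 0]])     # off-diagonal: the -1 eigenspace m
E2 = np.array([[0, 1j], [1j, 0]])    # off-diagonal: the -1 eigenspace m

assert np.allclose(sigma(T), T)      # +1 eigenspace (k)
assert np.allclose(sigma(E1), -E1)   # -1 eigenspace (m)
assert np.allclose(sigma(E2), -E2)

# Complex structure on m from the center of K: J = ad(T)/2 satisfies J^2 = -I.
J = lambda X: (T @ X - X @ T) / 2
assert np.allclose(J(J(E1)), -E1)
assert np.allclose(J(J(E2)), -E2)
```

Here J(E1) = E2 and J(E2) = −E1, so J rotates the two-dimensional space m by a quarter turn, exactly the "almost complex structure implemented by an element of the center of K" described above.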
Complex structure If H / K is irreducible with K non-semisimple, the compact group H must be simple and K of maximal rank. From Borel-de Siebenthal theory, the involution σ is inner and K is the centralizer of its center, which is isomorphic to T. In particular K is connected. It follows that H / K is simply connected and there is a parabolic subgroup P in the complexification G of H such that H / K = G / P. In particular there is a complex structure on H / K and the action of H is holomorphic. Since any Hermitian symmetric space is a product of irreducible spaces, the same is true in general. At the Lie algebra level, there is a symmetric decomposition 𝔥 = 𝔨 ⊕ 𝔪, where 𝔪 is a real vector space with a complex structure J, whose complex dimension is given in the table. Correspondingly, there is a graded Lie algebra decomposition 𝔤 = 𝔪− ⊕ 𝔨C ⊕ 𝔪+ of the complexification 𝔤 of 𝔥, where 𝔪C = 𝔪+ ⊕ 𝔪− is the decomposition into the +i and −i eigenspaces of J. The Lie algebra of P is the semidirect product of 𝔨C and 𝔪−. The complex Lie algebras 𝔪± are Abelian. Indeed, if U and V lie in 𝔪±, then [U,V] = [JU,JV] = [±iU,±iV] = −[U,V], so the Lie bracket must vanish. The complex subspaces 𝔪± of 𝔪C are irreducible for the action of K, since J commutes with K so that each is isomorphic to 𝔪 with complex structure ±J. Equivalently the centre T of K acts on 𝔪+ by the identity representation and on 𝔪− by its conjugate. The realization of H/K as a generalized flag variety G/P is obtained by taking G as in the table (the complexification of H) and P to be the parabolic subgroup equal to the semidirect product of L, the complexification of K, with the complex Abelian subgroup exp 𝔪−. (In the language of algebraic groups, L is the Levi factor of P.) Classification Any Hermitian symmetric space of compact type is simply connected and can be written as a direct product of irreducible hermitian symmetric spaces Hi / Ki with Hi simple, Ki connected of maximal rank with center T. The irreducible ones are therefore exactly the non-semisimple cases classified by Borel–de Siebenthal theory. 
Accordingly, the irreducible compact Hermitian symmetric spaces H/K are classified as follows. In terms of the classification of compact Riemannian symmetric spaces, the Hermitian symmetric spaces are the four infinite series AIII, DIII, CI and BDI with p = 2 or q = 2, and two exceptional spaces, namely EIII and EVII. Classical examples The irreducible Hermitian symmetric spaces of compact type are all simply connected. The corresponding symmetry σ of the simply connected simple compact Lie group is inner, given by conjugation by the unique element S in Z(K) / Z(H) of period 2. For the classical groups, as in the table above, these symmetries are as follows: AIII: S = diag(−αIp, αIq) in S(U(p)×U(q)), where αp+q=(−1)p. DIII: S = iI in U(n) ⊂ SO(2n); this choice is equivalent to Jn. CI: S = iI in U(n) ⊂ Sp(n) = Sp(n,C) ∩ U(2n); this choice is equivalent to Jn. BDI: S = diag(Ip, −I2) in SO(p)×SO(2). The maximal parabolic subgroup P can be described explicitly in these classical cases. For AIII, P(p,q) in SL(p+q,C) is the stabilizer of a subspace of dimension p in Cp+q. The other groups arise as fixed points of involutions. Let J be the n × n matrix with 1's on the antidiagonal and 0's elsewhere and set A = [[0, J], [−J, 0]]. Then Sp(n,C) is the fixed point subgroup of the involution θ(g) = A (gt)−1 A−1 of SL(2n,C). SO(n,C) can be realised as the fixed points of ψ(g) = B (gt)−1 B−1 in SL(n,C) where B = J. These involutions leave invariant P(n,n) in the cases DIII and CI and P(p,2) in the case BDI. The corresponding parabolic subgroups P are obtained by taking the fixed points. The compact group H acts transitively on G / P, so that G / P = H / K. 
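The realization of SO(n,C) as the fixed points of the involution ψ(g) = B (gt)−1 B−1 with B = J (the antidiagonal matrix of 1's) can be checked numerically. A sketch, assuming scipy is available for the matrix exponential; the 0.2 scaling factor is only there to keep the numerics tame:

```python
import numpy as np
from scipy.linalg import expm

n = 4
J = np.fliplr(np.eye(n))        # antidiagonal 1's; note J = J^t and J @ J = I
rng = np.random.default_rng(1)

# An element of the Lie algebra so(n, J), i.e. X^t J + J X = 0:
# for any A, X = A - J A^t J satisfies the condition (since J^{-1} = J).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = 0.2 * (A - J @ A.T @ J)
assert np.allclose(X.T @ J + J @ X, 0)

g = expm(X)                     # exponentiate into the group
assert np.allclose(g.T @ J @ g, J)   # g preserves the bilinear form given by J

psi = lambda h: J @ np.linalg.inv(h.T) @ J   # the involution, using J^{-1} = J
assert np.allclose(psi(g), g)   # g is a fixed point of psi, as claimed
```

The chain of assertions mirrors the text: preserving the form gt J g = J is equivalent to (gt)−1 = J g J−1, i.e. to ψ(g) = g.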
Hermitian symmetric spaces of noncompact type Definition As with symmetric spaces in general, each compact Hermitian symmetric space H/K has a noncompact dual H*/K obtained by replacing H with the closed real Lie subgroup H* of the complex Lie group G with Lie algebra 𝔥* = 𝔨 ⊕ i𝔪. Borel embedding Whereas the natural map from H/K to G/P is an isomorphism, the natural map from H*/K to G/P is only an inclusion onto an open subset. This inclusion is called the Borel embedding after Armand Borel. In fact P ∩ H = K = P ∩ H*. The images of H and H* have the same dimension so are open. Since the image of H is compact, so closed, it follows that H/K = G/P. Cartan decomposition The polar decomposition in the complex linear group G implies the Cartan decomposition H* = K ⋅ exp i𝔪 in H*. Moreover, given a maximal Abelian subalgebra 𝔞 in 𝔪, A = exp 𝔞 is a toral subgroup such that σ(a) = a−1 on A; and any two such 𝔞's are conjugate by an element of K. A similar statement holds for i𝔞. Moreover if A* = exp i𝔞, then H* = KA*K. These results are special cases of the Cartan decomposition in any Riemannian symmetric space and its dual. The geodesics emanating from the origin in the homogeneous spaces can be identified with one parameter groups with generators in 𝔪 or i𝔪. Similar results hold in the compact case: H = K ⋅ exp 𝔪 and H = KAK. The properties of the totally geodesic subspace A can be shown directly. A is closed because the closure of A is a toral subgroup satisfying σ(a) = a−1, so its Lie algebra lies in 𝔪 and hence equals 𝔞 by maximality. A can be generated topologically by a single element exp X, so 𝔞 is the centralizer of X in 𝔪. In the K-orbit of any element of 𝔪 there is an element Y such that (X, Ad k Y) is minimized at k = 1. Setting k = exp tT with T in 𝔨, it follows that (X,[T,Y]) = 0 and hence [X,Y] = 0, so that Y must lie in 𝔞. Thus 𝔪 is the union of the conjugates of 𝔞. 
In particular some conjugate of X lies in any other choice of , which centralizes that conjugate; so by maximality the only possibilities are conjugates of . The decompositions can be proved directly by applying the slice theorem for compact transformation groups to the action of K on H / K. In fact the space H / K can be identified with a closed submanifold of H, and the Cartan decomposition follows by showing that M is the union of the kAk−1 for k in K. Since this union is the continuous image of K × A, it is compact and connected. So it suffices to show that the union is open in M and for this it is enough to show each a in A has an open neighbourhood in this union. Now by computing derivatives at 0, the union contains an open neighbourhood of 1. If a is central the union is invariant under multiplication by a, so contains an open neighbourhood of a. If a is not central, write a = b2 with b in A. Then τ = Ad b − Ad b−1 is a skew-adjoint operator on anticommuting with σ, which can be regarded as a Z2-grading operator σ on . By an Euler–Poincaré characteristic argument it follows that the superdimension of coincides with the superdimension of the kernel of τ. In other words, where and are the subspaces fixed by Ad a. Let the orthogonal complement of in be . Computing derivatives, it follows that Ad eX (a eY), where X lies in and Y in , is an open neighbourhood of a in the union. Here the terms a eY lie in the union by the argument for central a: indeed a is in the center of the identity component of the centralizer of a which is invariant under σ and contains A. The dimension of is called the rank of the Hermitian symmetric space. Strongly orthogonal roots In the case of Hermitian symmetric spaces, Harish-Chandra gave a canonical choice for . This choice of is determined by taking a maximal torus T of H in K with Lie algebra . Since the symmetry σ is implemented by an element of T lying in the centre of H, the root spaces in are left invariant by σ. 
It acts as the identity on those contained in and minus the identity on those in . The roots with root spaces in are called compact roots and those with root spaces in are called noncompact roots. (This terminology originates from the symmetric space of noncompact type.) If H is simple, the generator Z of the centre of K can be used to define a set of positive roots, according to the sign of α(Z). With this choice of roots and are the direct sum of the root spaces over positive and negative noncompact roots α. Root vectors Eα can be chosen so that lie in . The simple roots α1, ...., αn are the indecomposable positive roots. These can be numbered so that αi vanishes on the center of for i, whereas α1 does not. Thus α1 is the unique noncompact simple root and the other simple roots are compact. Any positive noncompact root then has the form β = α1 + c2 α2 + ⋅⋅⋅ + cn αn with non-negative coefficients ci. These coefficients lead to a lexicographic order on positive roots. The coefficient of α1 is always one because is irreducible for K so is spanned by vectors obtained by successively applying the lowering operators E–α for simple compact roots α. Two roots α and β are said to be strongly orthogonal if ±α ±β are not roots or zero, written α ≐ β. The highest positive root ψ1 is noncompact. Take ψ2 to be the highest noncompact positive root strongly orthogonal to ψ1 (for the lexicographic order). Then continue in this way taking ψi + 1 to be the highest noncompact positive root strongly orthogonal to ψ1, ..., ψi until the process terminates. The corresponding vectors lie in and commute by strong orthogonality. Their span is Harish-Chandra's canonical maximal Abelian subalgebra. (As Sugiura later showed, having fixed T, the set of strongly orthogonal roots is uniquely determined up to applying an element in the Weyl group of K.) Maximality can be checked by showing that if for all i, then cα = 0 for all positive noncompact roots α different from the ψj's. 
This follows by showing inductively that if cα ≠ 0, then α is strongly orthogonal to ψ1, ψ2, ... a contradiction. Indeed, the above relation shows ψi + α cannot be a root; and that if ψi – α is a root, then it would necessarily have the form β – ψi. If ψi – α were negative, then α would be a higher positive root than ψi, strongly orthogonal to the ψj with j < i, which is not possible; similarly if β – ψi were positive. Polysphere and polydisk theorem Harish-Chandra's canonical choice of 𝔞 leads to a polydisk and polysphere theorem in H*/K and H/K. This result reduces the geometry to products of the prototypic example involving SL(2,C), SU(1,1) and SU(2), namely the unit disk inside the Riemann sphere. In the case of H = SU(2) the symmetry σ is given by conjugation by the diagonal matrix with entries ±i, so that σ fixes the diagonal entries of a matrix in SU(2) and changes the sign of the off-diagonal entries. The fixed point subgroup is the maximal torus T, the diagonal matrices with entries e±it. SU(2) acts on the Riemann sphere transitively by Möbius transformations and T is the stabilizer of 0. SL(2,C), the complexification of SU(2), also acts by Möbius transformations and the stabilizer of 0 is the subgroup B of lower triangular matrices. The noncompact subgroup SU(1,1) acts with precisely three orbits: the open unit disk |z| < 1; the unit circle |z| = 1; and its exterior |z| > 1. Thus where B+ and TC denote the subgroups of upper triangular and diagonal matrices in SL(2,C). The middle term is the orbit of 0 under the upper unitriangular matrices. Now for each root ψi there is a homomorphism πi of SU(2) into H which is compatible with the symmetries. It extends uniquely to a homomorphism of SL(2,C) into G. The images of the Lie algebras for different ψi's commute since they are strongly orthogonal. Thus there is a homomorphism π of the direct product SU(2)r into H compatible with the symmetries. It extends to a homomorphism of SL(2,C)r into G. The kernel of π is contained in the center (±1)r of SU(2)r which is fixed pointwise by the symmetry. 
So the image of the center under π lies in K. Thus there is an embedding of the polysphere (SU(2)/T)r into H / K = G / P and the polysphere contains the polydisk (SU(1,1)/T)r. The polysphere and polydisk are the direct product of r copies of the Riemann sphere and the unit disk. By the Cartan decompositions in SU(2) and SU(1,1), the polysphere is the orbit of TrA in H / K and the polydisk is the orbit of TrA*, where Tr = π(Tr) ⊆ K. On the other hand, H = KAK and H* = K A* K. Hence every element in the compact Hermitian symmetric space H / K is in the K-orbit of a point in the polysphere; and every element in the image under the Borel embedding of the noncompact Hermitian symmetric space H* / K is in the K-orbit of a point in the polydisk. Harish-Chandra embedding H* / K, the Hermitian symmetric space of noncompact type, lies in the image of M+ = exp 𝔪+, a dense open subset of H / K biholomorphic to 𝔪+. The corresponding domain in 𝔪+ is bounded. This is the Harish-Chandra embedding named after Harish-Chandra. In fact Harish-Chandra showed the following properties of the space X = M+ ⋅ KC ⋅ M−, where M± = exp 𝔪±: (1) as a space, X is the direct product of the three factors; (2) X is open in G; (3) X is dense in G; (4) X contains H*; (5) the closure of H* / K in X / P is compact. In fact M± are complex Abelian groups normalised by KC. This implies P ∩ M+ = {1}. For if x = eX with X in 𝔪+ lies in P, it must normalize M− and hence 𝔪−. But if Y lies in 𝔪−, then Ad x (Y) must lie in 𝔪−, so that X commutes with 𝔪−. But if X commutes with every noncompact root space, it must be 0, so x = 1. It follows that the multiplication map μ on M+ × P is injective so (1) follows. Similarly the derivative of μ at (x,p) is injective, so (2) follows. For the special case H = SU(2), H* = SU(1,1) and G = SL(2,C) the remaining assertions are consequences of the identification with the Riemann sphere, C and unit disk. They can be applied to the groups defined for each root ψi. 
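For SL(2,C) the big-cell factorization underlying the Harish-Chandra embedding can be made completely explicit. The sketch below is a hand-rolled illustration (the helper name big_cell is mine, not standard terminology): whenever the bottom-right entry d of g is non-zero, g factors as an upper unitriangular matrix times a lower triangular one, and for g in SU(1,1) the unitriangular parameter is a point of the open unit disk:

```python
import numpy as np

# Big-cell (Gauss-type) factorization in SL(2,C): whenever d != 0,
#   [[a, b], [c, d]] = [[1, b/d], [0, 1]] @ [[1/d, 0], [c, d]]
# using ad - bc = 1; i.e. g = m_plus * p with m_plus in M+ (upper unitriangular)
# and p in P (lower triangular).
def big_cell(g):
    (a, b), (c, d) = g
    assert abs(d) > 1e-12, "g lies outside the big cell"
    m_plus = np.array([[1, b / d], [0, 1]])
    p = np.array([[1 / d, 0], [c, d]])
    return m_plus, p

# An SU(1,1) element: [[a, b], [conj(b), conj(a)]] with |a|^2 - |b|^2 = 1.
b = 0.6 - 0.2j
a = np.sqrt(1 + abs(b) ** 2) * np.exp(0.4j)
g = np.array([[a, b], [np.conj(b), np.conj(a)]])

m_plus, p = big_cell(g)
assert np.allclose(m_plus @ p, g)
z = m_plus[0, 1]                # coordinate of the coset gP in C
assert abs(z) < 1               # it lands in the bounded domain (unit disk)
```

Here z = b/ā satisfies |z| = |b|/|a| < 1, so the SU(1,1)-orbit of the base point really is carried into the unit disk, the rank-one instance of the bounded-domain realization.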
By the polysphere and polydisk theorem H*/K, X/P and H/K are the union of the K-translates of the polydisk, Cr and the polysphere. So H* lies in X, the closure of H*/K is compact in X/P, which is in turn dense in H/K. Note that (2) and (3) are also consequences of the fact that the image of X in G/P is that of the big cell B+B in the Gauss decomposition of G. Using results on the restricted root system of the symmetric spaces H/K and H*/K, Hermann showed that the image of H*/K in is a generalized unit disk. In fact it is the convex set of X for which the operator norm of ad Im X is less than one. Bounded symmetric domains A bounded domain Ω in a complex vector space is said to be a bounded symmetric domain if for every x in Ω, there is an involutive biholomorphism σx of Ω for which x is an isolated fixed point. The Harish-Chandra embedding exhibits every Hermitian symmetric space of noncompact type H* / K as a bounded symmetric domain. The biholomorphism group of H* / K is equal to its isometry group H*. Conversely every bounded symmetric domain arises in this way. Indeed, given a bounded symmetric domain Ω, the Bergman kernel defines a metric on Ω, the Bergman metric, for which every biholomorphism is an isometry. This realizes Ω as a Hermitian symmetric space of noncompact type. Classification The irreducible bounded symmetric domains are called Cartan domains and are classified as follows. Classical domains In the classical cases (I–IV), the noncompact group can be realized by 2 × 2 block matrices acting by generalized Möbius transformations The polydisk theorem takes the following concrete form in the classical cases: Type Ipq (p ≤ q): for every p × q matrix M there are unitary matrices such that UMV is diagonal. In fact this follows from the polar decomposition for p × p matrices. Type IIIn: for every complex symmetric n × n matrix M there is a unitary matrix U such that UMUt is diagonal. This is proved by a classical argument of Siegel. 
Take V unitary so that V*M*MV is diagonal. Then VtMV is symmetric and its real and imaginary parts commute. Since they are real symmetric matrices they can be simultaneously diagonalized by a real orthogonal matrix W. So UMUt is diagonal if U = WVt. Type IIn: for every complex skew symmetric n × n matrix M there is a unitary matrix such that UMUt is made up of diagonal blocks and one zero if n is odd. As in Siegel's argument, this can be reduced to the case where the real and imaginary parts of M commute. Any real skew-symmetric matrix can be reduced to the given canonical form by an orthogonal matrix and this can be done simultaneously for commuting matrices. Type IVn: by a transformation in SO(n) × SO(2) any vector can be transformed so that all but the first two coordinates are zero. Boundary components The noncompact group H* acts on the complex Hermitian symmetric space H/K = G/P with only finitely many orbits. The orbit structure is described in detail in . In particular the closure of the bounded domain H*/K has a unique closed orbit, which is the Shilov boundary of the domain. In general the orbits are unions of Hermitian symmetric spaces of lower dimension. The complex function theory of the domains, in particular the analogue of the Cauchy integral formulas, are described for the Cartan domains in . The closure of the bounded domain is the Baily–Borel compactification of H*/K. The boundary structure can be described using Cayley transforms. For each copy of SU(2) defined by one of the noncompact roots ψi, there is a Cayley transform ci which as a Möbius transformation maps the unit disk onto the upper half plane. Given a subset I of indices of the strongly orthogonal family ψ1, ..., ψr, the partial Cayley transform cI is defined as the product of the ci's with i in I in the product of the groups πi. Let G(I) be the centralizer of this product in G and H*(I) = H* ∩ G(I). 
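The disk-to-half-plane Cayley transform invoked above has the familiar one-variable form c(z) = i(1 + z)/(1 − z) (one standard normalization among several). A quick numerical check:

```python
import numpy as np

# Cayley transform as a Mobius map sending the open unit disk onto the
# upper half-plane: c(z) = i (1 + z) / (1 - z).
cayley = lambda z: 1j * (1 + z) / (1 - z)

# Interior points of the disk go to points with positive imaginary part.
for z in [0, 0.5, -0.7 + 0.2j, 0.9j]:
    assert cayley(z).imag > 0

# Boundary points z = e^{it} (with z != 1) go to the real axis,
# the Shilov boundary of the half-plane picture.
for t in [0.5, 1.0, 2.0, 3.0]:
    assert abs(cayley(np.exp(1j * t)).imag) < 1e-9
```

The single boundary point z = 1 is sent to infinity, which is why partial Cayley transforms are the right tool for moving boundary components of the bounded domain into a standard position.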
Since σ leaves H*(I) invariant, there is a corresponding Hermitian symmetric space MI = H*(I)/(H*(I) ∩ K) ⊂ H*/K = M. The boundary component for the subset I is the union of the K-translates of cI MI. When I is the set of all indices, MI is a single point and the boundary component is the Shilov boundary. Moreover, MI is in the closure of MJ if and only if I ⊇ J. Geometric properties Every Hermitian symmetric space is a Kähler manifold. They can be defined equivalently as Riemannian symmetric spaces with a parallel complex structure with respect to which the Riemannian metric is Hermitian. The complex structure is automatically preserved by the isometry group H of the metric, and so any Hermitian symmetric space M is a homogeneous complex manifold. Some examples are complex vector spaces and complex projective spaces, with their usual Hermitian metrics and Fubini–Study metrics, and the complex unit balls with suitable metrics so that they become complete and Riemannian symmetric. The compact Hermitian symmetric spaces are projective varieties, and admit a strictly larger Lie group G of biholomorphisms with respect to which they are homogeneous: in fact, they are generalized flag manifolds, i.e., G is semisimple and the stabilizer of a point is a parabolic subgroup P of G. Among (complex) generalized flag manifolds G/P, they are characterized as those for which the nilradical of the Lie algebra of P is abelian. Thus they are contained within the family of symmetric R-spaces which conversely comprises Hermitian symmetric spaces and their real forms. The non-compact Hermitian symmetric spaces can be realized as bounded domains in complex vector spaces. Jordan algebras Although the classical Hermitian symmetric spaces can be constructed by ad hoc methods, Jordan triple systems, or equivalently Jordan pairs, provide a uniform algebraic means of describing all the basic properties connected with a Hermitian symmetric space of compact type and its non-compact dual.
This theory is described in detail and summarized in the references. The development is in the reverse order from that using the structure theory of compact Lie groups. Its starting point is the Hermitian symmetric space of noncompact type realized as a bounded symmetric domain. It can be described in terms of a Jordan pair or hermitian Jordan triple system. This Jordan algebra structure can be used to reconstruct the dual Hermitian symmetric space of compact type, including in particular all the associated Lie algebras and Lie groups. The theory is easiest to describe when the irreducible compact Hermitian symmetric space is of tube type. In that case the space is determined by a simple real Lie algebra 𝔤 with negative definite Killing form. It must admit an action of SU(2) which only acts via the trivial and adjoint representations, both types occurring. Since 𝔤 is simple, this action is inner, so implemented by an inclusion of the Lie algebra of SU(2) in 𝔤. The complexification of 𝔤 decomposes as a direct sum of three eigenspaces for the diagonal matrices in SU(2). It is a three-graded complex Lie algebra, with the Weyl group element of SU(2) providing the involution. Each of the ±1 eigenspaces has the structure of a unital complex Jordan algebra explicitly arising as the complexification of a Euclidean Jordan algebra. It can be identified with the multiplicity space of the adjoint representation of SU(2) in 𝔤. The description of irreducible Hermitian symmetric spaces of tube type starts from a simple Euclidean Jordan algebra E. It admits Jordan frames, i.e. sets of orthogonal minimal idempotents e1, ..., em. Any two are related by an automorphism of E, so that the integer m is an invariant called the rank of E. Moreover, if A is the complexification of E, it has a unitary structure group. It is a subgroup of GL(A) preserving the natural complex inner product on A. Any element a in A has a polar decomposition a = u(∑ αi ei) with u in the unitary structure group and αi ≥ 0. The spectral norm is defined by ||a|| = sup αi.
The associated bounded symmetric domain is just the open unit ball D in A. There is a biholomorphism between D and the tube domain T = E + iC where C is the open self-dual convex cone of elements in E of the form u(∑ αi ei) with u an automorphism of E and αi > 0. This gives two descriptions of the Hermitian symmetric space of noncompact type. There is a natural way of using mutations of the Jordan algebra A to compactify the space A. The compactification X is a complex manifold and the finite-dimensional Lie algebra 𝔤 of holomorphic vector fields on X can be determined explicitly. One parameter groups of biholomorphisms can be defined such that the corresponding holomorphic vector fields span 𝔤. This includes the group of all complex Möbius transformations corresponding to matrices in SL(2,C). The subgroup SU(1,1) leaves invariant the unit ball and its closure. The subgroup SL(2,R) leaves invariant the tube domain and its closure. The usual Cayley transform and its inverse, mapping the unit disk in C to the upper half plane, establish analogous maps between D and T. The polydisk corresponds to the real and complex Jordan subalgebras generated by a fixed Jordan frame. It admits a transitive action of SU(2)m and this action extends to X. The group G generated by the one-parameter groups of biholomorphisms acts faithfully on X. The subgroup of G generated by the identity component K of the unitary structure group and the operators in SU(2)m defines a compact Lie group H which acts transitively on X. Thus H / K is the corresponding Hermitian symmetric space of compact type. The group G can be identified with the complexification of H. The subgroup H* leaving D invariant is a noncompact real form of G. It acts transitively on D so that H* / K is the dual Hermitian symmetric space of noncompact type. The inclusions D ⊂ A ⊂ X reproduce the Borel and Harish-Chandra embeddings.
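For the classical type I domains, where A is a matrix algebra, the spectral norm above is the largest singular value, so both the diagonalization in the polydisk theorem and membership in the open unit ball D can be checked numerically. A minimal sketch in Python with NumPy (the sample matrix is an arbitrary illustration, not taken from the text):

```python
import numpy as np

# Type I reduction: for a p x q matrix M the SVD M = W diag(s) Vh yields
# unitary matrices U = W* and V = Vh* with U M V diagonal; the spectral
# norm ||M|| is the largest singular value max(s).
M = np.array([[0.1 + 0.2j, 0.3, 0.0],
              [0.0, 0.2, 0.1j]])          # arbitrary 2 x 3 illustration

W, s, Vh = np.linalg.svd(M)
U, V = W.conj().T, Vh.conj().T
D = U @ M @ V                             # diagonal up to rounding

assert np.allclose(U @ U.conj().T, np.eye(2))   # U is unitary
assert np.allclose(np.diag(D), s)               # diagonal entries = singular values

def in_unit_ball(M):
    """Membership in the type I bounded domain: spectral norm ||M|| < 1."""
    return np.linalg.norm(M, 2) < 1.0

assert in_unit_ball(M)                    # all singular values here are below 1
assert not in_unit_ball(np.eye(2))        # a unitary matrix lies on the boundary
```

NumPy's `svd` returns the conjugate transpose Vh rather than V, hence the conjugate transposes above; the singular values s are returned in descending order.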
The classification of Hermitian symmetric spaces of tube type reduces to that of simple Euclidean Jordan algebras. These were classified in terms of Euclidean Hurwitz algebras, a special type of composition algebra. In general a Hermitian symmetric space gives rise to a 3-graded Lie algebra with a period 2 conjugate linear automorphism switching the parts of degree ±1 and preserving the degree 0 part. This gives rise to the structure of a Jordan pair or hermitian Jordan triple system, to which the theory of Jordan algebras has been extended. All irreducible Hermitian symmetric spaces can be constructed uniformly within this framework. The irreducible Hermitian symmetric spaces of non-tube type can be constructed from a simple Euclidean Jordan algebra together with a period 2 automorphism. The −1 eigenspace of the automorphism has the structure of a Jordan pair, which can be deduced from that of the larger Jordan algebra. In the non-tube type case corresponding to a Siegel domain of type II, there is no distinguished subgroup of real or complex Möbius transformations. For irreducible Hermitian symmetric spaces, tube type is characterized by the real dimension of the Shilov boundary being equal to the complex dimension of the domain. See also Invariant convex cone Notes References The standard book on Riemannian symmetric spaces. Chapter 8 contains a self-contained account of Hermitian symmetric spaces of compact type. This contains a detailed account of Hermitian symmetric spaces of noncompact type. Differential geometry Complex manifolds Riemannian geometry Lie groups Homogeneous spaces
Hermitian symmetric space
[ "Physics", "Mathematics" ]
7,349
[ "Lie groups", "Mathematical structures", "Group actions", "Homogeneous spaces", "Space (mathematics)", "Topological spaces", "Algebraic structures", "Geometry", "Symmetry" ]
4,127,524
https://en.wikipedia.org/wiki/CSIX
The Common Switch Interface (CSIX) is a physical interface specification between a traffic manager (network processor) and a switching fabric. It was developed by the Network Processing Forum to promote the development and deployment of highly scalable network switches and to permit hardware and software interoperability. References Data transmission
CSIX
[ "Technology" ]
58
[ "Computing stubs", "Computer network stubs" ]
4,127,667
https://en.wikipedia.org/wiki/Thiotepa
Thiotepa (INN), sold under the brand name Tepadina among others, is an anti-cancer medication. Thiotepa is an organophosphorus compound with the formula (C2H4N)3PS. Medical uses Thiotepa is indicated for use in combination with other chemotherapy agents to treat cancer. This can be with or without total body irradiation (TBI), as a conditioning treatment prior to allogeneic or autologous hematopoietic progenitor cell transplantation (HPCT) in hematological diseases in adults and children. These diseases include Hodgkin's disease and leukaemia. Thiotepa is also used with high-dose chemotherapy with HPCT support to treat certain solid tumors in adults and children. Thiotepa is used in the palliation of many neoplastic diseases. The best results are found in the treatment of adenocarcinoma of the breast, adenocarcinoma of the ovary, papillary thyroid cancer and bladder cancer. Thiotepa is used to control intracavitary effusions caused by serosal neoplastic deposits. Intravesical use Thiotepa is used as intravesical chemotherapy in bladder cancer. Side effects The main side effect of thiotepa is bone marrow suppression resulting in leukopenia, thrombocytopenia and anemia. History Thiotepa was developed by the American Cyanamid company in the early 1950s and reported to media outlets in 1953. In 1959, thiotepa was registered with the US Food and Drug Administration (FDA) as a drug therapy for several solid cancers. In January 2007, the European Medicines Agency (EMA) designated thiotepa as an orphan drug. In April 2007, the United States FDA designated thiotepa as a conditioning treatment for use prior to hematopoietic stem cell transplantation. In June 2024, the FDA approved a ready-to-dilute liquid formulation of thiotepa to treat breast and ovarian cancer. References Alkylating antineoplastic agents 1-Aziridinyl compounds Cancer treatments IARC Group 1 carcinogens Organophosphoric amides Orphan drugs Thiophosphoryl compounds
Thiotepa
[ "Chemistry" ]
480
[ "Functional groups", "Thiophosphoryl compounds" ]
4,127,669
https://en.wikipedia.org/wiki/Kookaburra%20%28rocket%29
Kookaburra is an Australian sounding rocket used for atmospheric research. It was designed to be relatively inexpensive. It took part in an international experiment in March 1970 with Britain and India to measure ozone levels and atmospheric temperature. The Kookaburra was launched 33 times in total before being retired in 1976. Technical data Apogee: 75 km Total Mass: 100 kg Core Diameter: 0.12 m Total Length: 3.40 m References Sounding rockets of Australia
Kookaburra (rocket)
[ "Astronomy" ]
94
[ "Rocketry stubs", "Astronomy stubs" ]
4,128,196
https://en.wikipedia.org/wiki/Urban%20prairie
Urban prairie (or urban grassland) is vacant urban land that has reverted to green space. The definition can vary across countries and disciplines, but at its broadest encompasses meadows, lawns, and gardens, as well as public and private parks, vacant land, remnants of rural landscapes, and areas along transportation corridors. If previously developed, structures occupying the urban lots have been demolished, leaving patchy areas of green space that are usually untended and unmanaged, forming an involuntary park. Spaces can also be intentionally created to facilitate amenities, such as green belts, community gardens and wildlife reserve habitats. Urban brownfields are contaminated grasslands that also fall under the urban grassland umbrella. Urban greenspaces are a larger category that include urban grasslands in addition to other spaces. Causes Urban prairies can result from several factors. They can either be land that was previously developed and has since been cleared, or remnants of the natural landscape. In the first case, the value of aging buildings may fall too low to provide financial incentives for their owners to maintain them, or properties may be seized by local government in response to unpaid property taxes. In many cases, cities demolish vacant structures because they pose health and safety threats (such as fire hazards) or may be used as locations for criminal activity. Areas may be cleared of buildings as part of a revitalization plan with the intention of redeveloping the land. In flood-prone areas, government agencies may purchase developed lots and then demolish the structures to improve drainage during floods. Neighborhoods near major industrial or environmental clean-up sites can be acquired and leveled to create a buffer zone and minimize the risks associated with pollution or industrial accidents. Additionally, residents of the city may fill up the unplanned empty space with urban parks or community gardens.
Governments and non-profit groups can also create community gardens and conservation areas to restore or reintroduce a wildlife habitat, help the environment, and educate people about the prairie. Detroit, Michigan is one particular city that has many urban prairies. Benefits Many studies show urbanization has been linked to a loss of biodiversity. Additionally, remaining urban landscapes are typically unable to support the complex food webs they previously hosted and become novel habitats home to highly adapted alien species, such as rats, cockroaches, and pigeons. As natural landscapes are replaced with urban ones, the ecosystem services of the area can be diminished. Due to this, green spaces and grasslands are even more vital in urban areas. These areas not only better human life by providing space for leisure activities and social interaction, but also provide direct health benefits such as reducing air pollution. They also provide homes for important pollinators such as wild bees. Despite the issues surrounding their cleanup and maintenance, even small urban grasslands can have a big effect ecologically. In Melbourne, Australia, at the Tunnerminnerwait and Maulboyheenner memorial site, just three years after being replanted with a variety of native species and receiving upkeep, the green space had increased the number and diversity of insects in the vicinity. See also Urban decay References External links City of Des Moines Urban Prairie Project Urban studies and planning terminology Prairie Prairies Urban decay Ecology Rewilding
Urban prairie
[ "Biology" ]
650
[ "Ecology" ]
4,128,358
https://en.wikipedia.org/wiki/Krull%27s%20theorem
In mathematics, and more specifically in ring theory, Krull's theorem, named after Wolfgang Krull, asserts that a nonzero ring has at least one maximal ideal. The theorem was proved in 1929 by Krull, who used transfinite induction. The theorem admits a simple proof using Zorn's lemma, and in fact is equivalent to Zorn's lemma, which in turn is equivalent to the axiom of choice. Variants For noncommutative rings, the analogues for maximal left ideals and maximal right ideals also hold. For pseudo-rings, the theorem holds for regular ideals. An apparently slightly stronger (but equivalent) result, which can be proved in a similar fashion, is as follows: Let R be a ring, and let I be a proper ideal of R. Then there is a maximal ideal of R containing I. The statement of the original theorem can be obtained by taking I to be the zero ideal (0). Conversely, applying the original theorem to R/I leads to this result. To prove the "stronger" result directly, consider the set S of all proper ideals of R containing I. The set S is nonempty since I ∈ S. Furthermore, for any chain T of S, the union of the ideals in T is an ideal J, and a union of ideals not containing 1 does not contain 1, so J ∈ S. By Zorn's lemma, S has a maximal element M. This M is a maximal ideal containing I. Notes References Ideals (ring theory)
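In the ring of integers Z the theorem is completely explicit: every maximal ideal is (p) for a prime p, and the maximal ideals containing a proper ideal (n) correspond to the prime divisors of n. A small illustrative sketch in Python (the helper name is ad hoc):

```python
def prime_divisors(n):
    """Primes p dividing n; each (p) is a maximal ideal of Z containing (n)."""
    n = abs(n)
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

# Maximal ideals of Z containing the proper ideal (12): (2) and (3).
assert prime_divisors(12) == [2, 3]
```

This mirrors the "stronger" form of the theorem: the proper ideal (12) is contained in the maximal ideals (2) and (3), and Zorn's lemma guarantees such a maximal ideal exists in any nonzero ring.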
Krull's theorem
[ "Mathematics" ]
321
[ "Theorems in algebra", "Mathematical theorems", "Mathematical problems", "Algebra" ]
4,128,491
https://en.wikipedia.org/wiki/Dot%20blot
A dot blot (or slot blot) is a technique in molecular biology used to detect proteins. It represents a simplification of the western blot method, with the exception that the proteins to be detected are not first separated by electrophoresis. Instead, the sample is applied directly on a membrane in a single spot, and the blotting procedure is performed. The technique offers significant savings in time, as chromatography or gel electrophoresis, and the complex blotting procedures for the gel are not required. However, it offers no information on the size of the target protein. Uses Performing a dot blot is similar in idea to performing a western blot, with the advantage of faster speed and lower cost. Dot blots are also performed to screen the binding capabilities of an antibody. Methodology A general dot blot protocol involves spotting 1–2 microliters of a sample onto a nitrocellulose or PVDF membrane and letting it air dry. Samples can be in the form of tissue culture supernatants, blood serum, cell extracts, or other preparations. After the protein samples are spotted onto the membrane, the membrane is placed in a plastic container and sequentially incubated in blocking buffer, antibody solutions, or rinsing buffer on a shaker. After antibody binding, the membrane is incubated with a chemiluminescent substrate and imaged. A vacuum-assisted dot blot apparatus has been used to facilitate the rinsing and incubating process by using vacuum to extract the solution from underneath the membrane, which is assembled in between several layers of plates to ensure a good seal between sample wells, hold waste solution, and deliver suction force. For chemiluminescence signal detection, the apparatus needs to be disassembled and the membrane needs to be taken out and wrapped in a transparent plastic film. See also ELISpot Western blot References Genetics techniques Molecular biology techniques
Dot blot
[ "Chemistry", "Engineering", "Biology" ]
398
[ "Genetics techniques", "Molecular biology techniques", "Genetic engineering", "Molecular biology" ]
4,128,619
https://en.wikipedia.org/wiki/Dichloralphenazone
Dichloralphenazone is a 1:2 mixture of antipyrine with chloral hydrate. In combination with paracetamol and isometheptene, it is the active ingredient of medications for migraine and tension headaches, including Epidrin and Midrin. Performance impairments are common with this drug and caution is advised, for example when driving motor vehicles. Additional uses of dichloralphenazone include sedation for the treatment of short-term insomnia, although there are probably better drug choices for the treatment of insomnia. See also Chloral betaine References External links Analgesics Hypnotics Sedatives Combination drugs GABAA receptor positive allosteric modulators
Dichloralphenazone
[ "Biology" ]
154
[ "Hypnotics", "Behavior", "Sleep" ]
4,128,669
https://en.wikipedia.org/wiki/Environmental%20isotopes
The environmental isotopes are a subset of isotopes, both stable and radioactive, which are the object of isotope geochemistry. They are primarily used as tracers to see how things move around within the ocean-atmosphere system, within terrestrial biomes, within the Earth's surface, and between these broad domains. Isotope geochemistry Chemical elements are defined by their number of protons, but the mass of the atom is determined by the number of protons and neutrons in the nucleus. Isotopes are atoms that are of a specific element, but have different numbers of neutrons and thus different mass numbers. The ratio between isotopes of an element varies slightly in the world, so in order to study isotopic ratio changes across the world, changes in isotope ratios are defined as deviations from a standard, multiplied by 1000. This unit is a "per mil". As a convention, the ratio is of the heavier isotope to the lighter isotope: δ = (Rsample/Rstandard − 1) × 1000 ‰. These variations in isotopes can occur through many types of fractionation. They are generally classified as mass independent fractionation and mass dependent fractionation. An example of a mass independent process is the fractionation of oxygen atoms in ozone. This is due to the kinetic isotope effect (KIE) and is caused by different isotope molecules reacting at different speeds. An example of a mass dependent process is the fractionation of water as it transitions from the liquid to gas phase. Water molecules with heavier isotopes (18O and 2H) tend to stay in the liquid phase as water molecules with lighter isotopes (16O and 1H) preferentially move to the gas phase. Of the different isotopes that exist, one common classification is distinguishing radioactive isotopes from stable isotopes. Radioactive isotopes are isotopes that will decay into a different isotope. For example, 3H (tritium) is a radioactive isotope of hydrogen. It decays into 3He with a half-life of ~12.3 years.
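The per mil convention just described amounts to a one-line formula. A sketch in Python (the 18O/16O ratios below are illustrative; the VSMOW standard ratio is approximately 0.0020052):

```python
def delta_per_mil(r_sample, r_standard):
    """Delta value in per mil: deviation of the (heavy/light) isotope ratio
    of a sample from a standard, multiplied by 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A sample with the standard's ratio sits at 0 per mil;
# one depleted in the heavy isotope by 1% sits at -10 per mil.
assert delta_per_mil(0.0020052, 0.0020052) == 0.0
assert abs(delta_per_mil(0.99 * 0.0020052, 0.0020052) + 10.0) < 1e-9
```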
By comparison, stable isotopes do not undergo radioactive decay, and their fixed proportions are measured against exponentially decaying proportions of radioactive isotopes to determine the age of a substance. Radioactive isotopes are generally more useful on shorter timescales, such as investigating modern circulation of the ocean using 14C, while stable isotopes are generally more useful on longer timescales, such as investigating differences in river flow with stable strontium isotopes. These isotopes are used as tracers to study various phenomena of interest. These tracers have a certain distribution spatially, and so scientists need to deconvolve the different processes that affect these tracer distributions. One way tracer distributions are set is by conservative mixing. In conservative mixing, the amount of the tracer is conserved. An example of this is mixing two water masses with different salinities. The salt from the saltier water mass moves to the less salty water mass, keeping the total amount of salinity constant. This way of mixing tracers is very important, giving a baseline of what value of a tracer one should expect. The value of a tracer at a point is expected to be an average value of the sources that flow into that region. Deviations from this are indicative of other processes. These can be called nonconservative mixing, where there are other processes that do not conserve the amount of tracer. An example of this is 𝛿14C. This mixes between water masses, but it also decays over time, reducing the amount of 14C in the region. Commonly used isotopes The most used environmental isotopes are: deuterium tritium carbon-13 carbon-14 nitrogen-15 oxygen-18 silicon-29 chlorine-36 isotopes of uranium isotopes of strontium Ocean circulation One topic that environmental isotopes are used to study is the circulation of the ocean.
Treating the ocean as a box is only useful in some studies; in-depth consideration of the oceans in general circulation models (GCMs) requires knowing how the ocean circulates. This leads to an understanding of how the oceans (along with the atmosphere) transfer heat from the tropics to the poles. This also helps deconvolve circulation effects from other phenomena that affect certain tracers such as radioactive and biological processes. Using rudimentary observation techniques, the circulation of the surface ocean can be determined. In the Atlantic basin, surface waters flow from the south towards the north in general, while also creating gyres in the northern and southern Atlantic. In the Pacific Ocean, the gyres still form, but there is comparatively very little large scale meridional (North-South) movement. For deep waters, there are two areas where density causes waters to sink into the deep ocean. These are in the North Atlantic and the Antarctic. The deep water masses formed are North Atlantic Deep Water (NADW) and Antarctic Bottom Water (AABW). Deep waters are mixtures of these two waters, and understanding how waters are composed of these two water masses can tell us about how water masses move around in the deep ocean. This can be investigated with environmental isotopes, including 14C. 14C is predominantly produced in the upper atmosphere and from nuclear testing, with no major sources or sinks in the ocean. This 14C from the atmosphere becomes oxidized into 14CO2, allowing it to enter the surface ocean through gas transfer. This is transferred into the deep ocean through NADW and AABW. In NADW, the 𝛿14C is approximately -60‰, and in AABW, the 𝛿14C is approximately -160‰. Thus, using conservative mixing of radiocarbon, the expected amount of radiocarbon in various locations can be determined using the percent compositions of NADW and AABW at that location. This can be determined using other tracers, such as phosphate star or salinity.
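The conservative-mixing baseline described above, together with the first-order decay law for radiocarbon (half-life of roughly 5730 years), can be sketched in a few lines of Python; the end-member values are the NADW and AABW figures quoted above:

```python
import math

T_HALF_C14 = 5730.0   # years, approximate half-life of radiocarbon

def expected_delta14c(f_nadw, delta_nadw=-60.0, delta_aabw=-160.0):
    """Conservative-mixing baseline: weighted average of end-member values
    for a parcel that is a fraction f_nadw NADW and (1 - f_nadw) AABW."""
    return f_nadw * delta_nadw + (1.0 - f_nadw) * delta_aabw

def decay_age(fraction_remaining, t_half=T_HALF_C14):
    """Time for N/N0 to fall to fraction_remaining under
    N(t) = N0 exp(-lambda t) with lambda = ln(2) / t_half."""
    lam = math.log(2.0) / t_half
    return -math.log(fraction_remaining) / lam

# An equal mixture of NADW (-60 per mil) and AABW (-160 per mil) carries
# -110 per mil; a measured value below that baseline reflects in-situ decay.
assert expected_delta14c(0.5) == -110.0
assert abs(decay_age(0.5) - T_HALF_C14) < 1e-6
```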
Deviations from this expected value are indicative of other processes that affect the delta ratio of radiocarbon, namely radioactive decay. This deviation can be converted to a time, giving the age of the water at that location. Doing this over the world's ocean can yield a circulation pattern of the ocean and the rate at which water flows through the deep ocean. Using this circulation in conjunction with the surface circulation allows scientists to understand the energy balance of the world. Warmer surface waters flow northward while colder deep waters flow southward, leading to net heat transfer towards the pole. Paleoclimate Isotopes are also used to study paleoclimate. This is the study of how climate was in the past, from hundreds of years ago to hundreds of thousands of years ago. The only records of these times that we have are buried in rocks, sediments, biological shells, stalagmites and stalactites, etc. The isotope ratios in these samples were affected by the temperature, salinity, circulation of the ocean, precipitation, etc. of the climate at the time, causing a measurable change from the standards for isotope measurements. This is how climate information is encoded in these geological formations. Some of the many isotopes useful for environmental science are discussed below. δ18O One useful isotope for reconstructing past climates is oxygen-18. It is another stable isotope of oxygen along with oxygen-16, and its incorporation into water and carbon dioxide/carbonate molecules is strongly temperature dependent. Higher temperature implies more incorporation of oxygen-18, and vice versa. Thus, the ratio of 18O/16O can tell something about temperature. For water, the isotope ratio standard is Vienna Standard Mean Ocean Water, and for carbonates, the standard is Pee Dee Belemnite. Using ice cores and sediment cores that record information about the water and shells from past times, this ratio can tell scientists about the temperature of those times.
This ratio is used with ice cores to determine the temperature at the spot in the ice core. Depth in an ice core is proportional to time, and it is "wiggle-matched" with other records to determine the true time of the ice at that depth. This can be done by comparing δ18O in calcium carbonate shells in sediment cores to these records to match large scale changes in the temperature of the Earth. Once the ice cores are matched to sediment cores, highly accurate dating methods such as U-series dating can be used to accurately determine the time of these events. There are some processes that mix water from different times into the same depth in the ice core, such as firn production and sloped landscape flows. Lisiecki and Raymo (2005) used measurements of δ18O in benthic foraminifera from 57 globally distributed deep sea sediment cores, taken as a proxy for the total global mass of glacial ice sheets, to reconstruct the climate for the past five million years. This record shows oscillations of 2-10 degrees Celsius over this time. Between 5 million and 1.2 million years ago, these oscillations had a period of 41,000 years (41 kyr), but about 1.2 million years ago the period switched to 100 kyr. These changes in global temperature match with changes in orbital parameters of the Earth's orbit around the Sun. These are called Milankovitch cycles, and these are related to eccentricity, obliquity (axial tilt), and precession of Earth around its axis. These correspond to cycles with periods of 100 kyr, 40 kyr, and 20 kyr. δ18O can also be used to investigate smaller scale climate phenomena. Koutavas et al. (2006) used δ18O of G. ruber foraminifera to study the El Niño–Southern Oscillation (ENSO) and its variability through the mid-Holocene. By isolating individual foram shells, Koutavas et al. were able to obtain a spread of δ18O values at a specific depth.
Because these forams live for approximately a month and the individual forams were from many different months, clumped together in a small depth range in the core, the variability of δ18O could be determined. In the eastern Pacific, where these cores were taken, the primary driver of this variability is ENSO, making this a record of ENSO variability over the core's time span. Koutavas et al. found that ENSO was much less variable in the mid Holocene (~6,000 years ago) than it is currently. Strontium isotopes Another set of environmental isotopes used in paleoclimate is strontium isotopes. Strontium-86 and strontium-87 are both stable isotopes of strontium, but strontium-87 is radiogenic, coming from the decay of rubidium-87. The ratio of these two isotopes depends on the concentration of rubidium-87 initially and the age of the sample, assuming that the background concentration of strontium-87 is known. This is useful because 87Rb is predominantly found in continental rocks. Particles from these rocks come into the ocean through weathering by rivers, meaning that this strontium isotope ratio is related to the weathering ion flux coming from rivers into the ocean. The background concentration in the ocean for 87Sr/86Sr is 0.709 ± 0.0012. Because the strontium ratio is recorded in sedimentary records, the oscillations of this ratio over time can be studied. These oscillations are related to the riverine input into the oceans or into the local basin. Richter and Turekian have done work on this, finding that over glacial-interglacial timescales (10^5 years), the 87Sr/86Sr ratio varies by 3×10^−5. Uranium and related isotopes Uranium has many radioactive isotopes that continue emitting particles down a decay chain. Uranium-235 is in one such chain, and decays into protactinium-231 and then into other products. Uranium-238 is in a separate chain, decaying into a series of elements, including thorium-230.
Both of these series end up forming lead, either lead-207 from uranium-235 or lead-206 from uranium-238. All of these decays are alpha or beta decays, meaning that they all follow first order rate equations of the form N(t) = N0 exp(−λt), where λ is the decay constant of the isotope in question, related to its half-life t½ by λ = ln 2/t½. This makes it simple to determine the age of a sample based on the various ratios of radioactive isotopes that exist. One way uranium isotopes are used is to date rocks from millions to billions of years ago. This is through uranium-lead dating. This technique uses zircon samples and measures the lead content in them. Zircon incorporates uranium and thorium atoms into its crystal structure, but strongly rejects lead. Thus, the only sources of lead in a zircon crystal are through decay of uranium and thorium. Both the uranium-235 and uranium-238 series decay into an isotope of lead. The half-life of converting 235U to 207Pb is 710 million years, and the half-life of converting 238U to 206Pb is 4.47 billion years. Because of high resolution mass spectrometry, both chains can be used to date rocks, giving complementary information about the rocks. The large difference in half-lives makes the technique robust over long time scales, from on the order of millions of years to on the order of billions of years. Another way uranium isotopes are used in environmental science is the ratio of 231Pa/230Th. These radiogenic isotopes have different uranium parents, but have very different reactivities in the ocean. The uranium profile in the ocean is constant because uranium has a very large residence time compared to the residence time of the ocean. The decay of uranium is thus also isotropic, but the daughter isotopes react differently. Thorium is readily scavenged by particles, leading to rapid removal from the ocean into sediments. By contrast, 231Pa is not as particle-reactive, feeling the circulation of the ocean in small amounts before settling into the sediment.
Thus, knowing the decay rates of both isotopes and the fractions of each uranium isotope, the expected ratio of 231Pa/230Th can be determined, with any deviation from this value being due to circulation. Circulation leads to a higher 231Pa/230Th ratio downstream and a lower ratio upstream, with the magnitude of the deviation being related to flow rate. This technique has been used to quantify the Atlantic Meridional Overturning Circulation (AMOC) during the Last Glacial Maximum (LGM) and during abrupt climate change events in Earth's past, such as Heinrich events and Dansgaard-Oeschger events. Neodymium Neodymium isotopes are also used to determine circulation in the ocean. All of the isotopes of neodymium are stable on the timescales of glacial-interglacial cycles, but 143Nd is a daughter of 147Sm, a radioactive isotope in the ocean. Samarium-147 has higher concentrations in mantle rocks than in crustal rocks, so areas that receive river inputs from mantle-derived rocks have higher concentrations of 147Sm and 143Nd. However, these differences are so small that the standard delta notation is too blunt to express them; a more finely scaled epsilon value is used to describe variations in this ratio of neodymium isotopes. It is defined as εNd = [(143Nd/144Nd)sample / (143Nd/144Nd)CHUR − 1] × 10^4, where CHUR is a chondritic reference value. The only major sources of this in the ocean are in the North Atlantic and in the deep Pacific Ocean. Because one of the end-members is set in the interior of the ocean, this technique has the potential to tell us complementary information about paleoclimate compared to all other ocean tracers that are only set in the surface ocean. References
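The uranium-lead age calculation described above follows directly from the first-order decay law: if all daughter lead comes from in-situ decay, then the daughter/parent ratio D/N = e^(λt) − 1, so t = ln(1 + D/N)/λ. A minimal sketch using the half-lives quoted in the text (the measured ratio below is a hypothetical example, not data from the article):

```python
import math

# Half-lives from the text, in years.
T_HALF_U238 = 4.47e9  # 238U -> 206Pb
T_HALF_U235 = 7.10e8  # 235U -> 207Pb

def decay_constant(t_half):
    """λ = ln(2) / t_half for first-order decay N(t) = N0 * exp(-λt)."""
    return math.log(2) / t_half

def age_from_ratio(daughter_parent_ratio, t_half):
    """Age from a radiogenic daughter/parent ratio D/N.

    Since D = N0 - N and N = N0 * exp(-λt), we get D/N = exp(λt) - 1,
    hence t = ln(1 + D/N) / λ.
    """
    lam = decay_constant(t_half)
    return math.log(1.0 + daughter_parent_ratio) / lam

# Hypothetical zircon with a measured 206Pb/238U ratio of 0.5:
t = age_from_ratio(0.5, T_HALF_U238)  # roughly 2.6 billion years
```

In practice both chains are dated independently; agreement between the 206Pb/238U and 207Pb/235U ages (concordance) is what makes the zircon method robust.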
Environmental isotopes
[ "Chemistry" ]
3,227
[ "Environmental isotopes", "Isotopes" ]
4,128,720
https://en.wikipedia.org/wiki/Cyclic%20module
In mathematics, more specifically in ring theory, a cyclic module or monogenous module is a module over a ring that is generated by one element. The concept is a generalization of the notion of a cyclic group, that is, an Abelian group (i.e. Z-module) that is generated by one element. Definition A left R-module M is called cyclic if M can be generated by a single element, i.e. M = Rx = {rx | r ∈ R} for some x in M. Similarly, a right R-module N is cyclic if N = yR for some y in N. Examples 2Z as a Z-module is a cyclic module. In fact, every cyclic group is a cyclic Z-module. Every simple R-module M is a cyclic module since the submodule generated by any non-zero element x of M is necessarily the whole module M. In general, a module is simple if and only if it is nonzero and is generated by each of its nonzero elements. If the ring R is considered as a left module over itself, then its cyclic submodules are exactly its left principal ideals as a ring. The same holds for R as a right R-module, mutatis mutandis. If R is F[x], the ring of polynomials over a field F, and V is an R-module which is also a finite-dimensional vector space over F, then the Jordan blocks of x acting on V are cyclic submodules. (The Jordan blocks are all isomorphic to F[x]/((x − λ)^n); there may also be other cyclic submodules with different annihilators; see below.) Properties Given a cyclic R-module M that is generated by x, there exists a canonical isomorphism between M and R/Ann_R(x), where Ann_R(x) denotes the annihilator of x in R. See also Finitely generated module References Module theory
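The canonical isomorphism stated in the Properties section can be checked in the simplest nontrivial case. The following display (a routine verification, not part of the original article) works it out for the Z-module Z/nZ generated by the class of 1, whose annihilator is nZ:

```latex
% Z/nZ is cyclic over Z, generated by x = 1 + nZ, and Ann_Z(x) = nZ.
% The canonical isomorphism M \cong R/Ann_R(x) then reads
\[
  \mathbb{Z}/n\mathbb{Z}
  \;=\; \mathbb{Z}\cdot(1 + n\mathbb{Z})
  \;\cong\; \mathbb{Z}/\operatorname{Ann}_{\mathbb{Z}}(1 + n\mathbb{Z})
  \;=\; \mathbb{Z}/n\mathbb{Z},
  \qquad r + n\mathbb{Z} \;\longmapsto\; r\cdot(1 + n\mathbb{Z}).
\]
```

The map sends a coset of the ring to the corresponding multiple of the generator, and it is well defined precisely because nZ annihilates the generator.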
Cyclic module
[ "Mathematics" ]
376
[ "Fields of abstract algebra", "Module theory" ]
4,128,733
https://en.wikipedia.org/wiki/Bafilomycin
The bafilomycins are a family of macrolide antibiotics produced from a variety of Streptomycetes. Their chemical structure is defined by a 16-membered lactone ring scaffold. Bafilomycins exhibit a wide range of biological activity, including anti-tumor, anti-parasitic, immunosuppressant and anti-fungal activity. The most commonly used bafilomycin is bafilomycin A1, a potent inhibitor of cellular autophagy. Bafilomycins have also been found to act as ionophores, transporting potassium ions (K+) across biological membranes and leading to mitochondrial damage and cell death. Bafilomycin A1 specifically targets the vacuolar-type H+-ATPase (V-ATPase) enzyme, a membrane-spanning proton pump that acidifies either the extracellular environment or intracellular organelles such as the lysosome of animal cells or the vacuole of plants and fungi. At higher micromolar concentrations, bafilomycin A1 also acts on P-type ATPases, which have a phosphorylated transitional state. Bafilomycin A1 serves as an important tool compound in many in vitro research applications; however, its clinical use is limited by a substantial toxicity profile. Discovery and history Bafilomycin A1, B1 and C1 were first isolated from Streptomyces griseus in 1983. During a screen seeking to identify microbial secondary metabolites whose activity mimicked that of two cardiac glycosides, bafilomycin C1 was identified as an inhibitor of P-ATPase with a Ki of 11 μM. Bafilomycin C1 was found to have activity against Caenorhabditis elegans, ticks, and tapeworms, in addition to stimulating the release of γ-aminobutyric acid (GABA) from rat synaptosomes. Independently, bafilomycin A1 and other derivatives were isolated from S. griseus and shown to have antibiotic activity against some yeast, Gram-positive bacteria and fungi. Bafilomycin A1 was also shown to have an anti-proliferative effect on concanavalin-A-stimulated T cells. However, its high toxicity has prevented use in clinical trials. 
Two years later, bafilomycins D and E were also isolated from S. griseus. In 2010, 9-hydroxy-bafilomycin D, 29-hydroxy-bafilomycin D and a number of other bafilomycins were identified from the endophytic microorganism Streptomyces sp. YIM56209. From 2004 to 2011, bafilomycins F-K were isolated from other Streptomyces sp. As one of the first identified and most commonly used, bafilomycin A1 is of particular importance, especially as its structure serves as the core of all other bafilomycins. With its large structure, bafilomycin has multiple chiral centers and functional groups, which makes modifying its structure difficult, a task that has been attempted in order to reduce the compound's associated toxicity. Target Within the cell, bafilomycin A1 specifically interacts with the proton pump V-ATPase. This large protein depends on adenosine triphosphate (ATP) hydrolysis to pump protons across a biological membrane. When bafilomycin and other inhibitors of V-ATPase, such as concanamycin, were first discovered in the 1980s, they were used to establish the presence of V-ATPase in specialized cell types and tissues, characterizing the proton pump's distribution. Structurally, V-ATPase consists of 13 distinct subunits that together make up the membrane-spanning Vo and cytosolic V1 domains of the enzyme. The V1 domain in the cytosol is made up of subunits A through H, whereas the Vo domain is made up of subunits a, d, e, c, and c". V-ATPase mechanism of action In order to move protons across the membrane, a proton first enters subunit a within the Vo domain through a cytoplasmic hemichannel. This allows conserved glutamic acid residues within the proteolipid ring of Vo subunits c and c" to become protonated. ATP is then hydrolyzed by the V1 domain of the enzyme, enabling both the rotation of the central stalk of the pump, made up of subunits D, F and d, and the rotation of the proteolipid ring. 
This rotation puts the protonated glutamic acid residues in contact with a luminal hemichannel located in subunit a. Within subunit a, arginine residues serve to stabilize the deprotonated form of glutamic acid and allow the release of their protons. This rotation and proton transfer brings the protons through the pump and across the membrane. Bafilomycin–V-ATPase interaction For more than ten years after bafilomycin was discovered as a V-ATPase inhibitor, the site of its interaction with V-ATPase was unclear. Early studies used the chromaffin granule V-ATPase to suggest that bafilomycin interacted with the Vo domain. Two further studies confirmed this hypothesis using V-ATPase from bovine clathrin-coated vesicles. They showed that application of bafilomycin inhibited proton flow through Vo and that this inhibition could be overcome by adding back the Vo domain to the coated vesicles. Further narrowing bafilomycin's interaction site, they found that specific addition of just Vo subunit a could restore function. This suggested bafilomycin interacted specifically with subunit a of V-ATPase; however, another study contradicted this finding. One group found that V-ATPase could be purified using a bafilomycin affinity chromatography column, and that addition of DCCD, an inhibitor of the Vo c subunit, drastically decreased bafilomycin's affinity for V-ATPase. This suggested that bafilomycin interacted more strongly with subunit c of the Vo domain. It was further found that amino acid changes within subunit a could also weaken the V-ATPase–bafilomycin interaction, indicating a minor role of subunit a in bafilomycin binding in addition to subunit c. An analysis of nine mutations that conferred resistance to bafilomycin showed all of them to change amino acids in the Vo c subunit. These data suggested that the bafilomycin binding site was on the outer surface of the Vo domain, at the interface between two c subunits. 
This binding site has recently been described in high resolution by two groups that used cryo-electron microscopy to obtain structures of the V-ATPase bound to bafilomycin. Overall, bafilomycin binds with nanomolar affinity to the Vo c subunit of the V-ATPase complex and inhibits proton translocation. Although the interaction between bafilomycin and V-ATPase is not covalent, its low dissociation constant of about 10 nM describes the strength of its interaction and can make the effects of bafilomycin difficult to reverse. V-ATPase localization and function V-ATPase is ubiquitous in mammalian cells and plays an important role in many cellular processes. It is localized to the trans-golgi network and the cellular organelles that are derived from it, including lysosomes, secretory vesicles and endosomes. V-ATPase can also be found within the plasma membrane. In mammals, location of the V-ATPase can be linked to the specific isoform of subunit a that the complex has. Isoforms a1 and a2 target V-ATPase intracellularly, to synaptic vesicles and endosomes respectively. Subunits a3 and a4, however, mediate V-ATPase localization to the plasma membrane in osteoclasts (a3) and renal intercalated cells (a4). When V-ATPase is located at the lysosomal membrane, it acidifies the lysosome as lumenal pH is lowered, enabling the activity of lysosomal hydrolases. When V-ATPase is located at the plasma membrane, proton extrusion through the pump causes the acidification of the extracellular space, which is utilized by specialized cells such as osteoclasts, epididymal clear cells, and renal epithelial intercalated cells. 
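The practical meaning of the ~10 nM dissociation constant quoted above can be illustrated with the standard single-site occupancy relation. This is a sketch that assumes simple reversible 1:1 binding at equilibrium, which is an idealization of the real bafilomycin–V-ATPase interaction:

```python
def fraction_bound(ligand_nM, kd_nM=10.0):
    """Equilibrium fractional occupancy for 1:1 binding:
    fraction = [L] / ([L] + Kd), with ligand and Kd in the same units."""
    return ligand_nM / (ligand_nM + kd_nM)

# At 100 nM bafilomycin (a concentration used in studies cited later
# in the article), roughly 91% of sites are occupied at equilibrium:
occupancy = fraction_bound(100.0)
```

By definition, occupancy is 50% when the ligand concentration equals Kd, which is why a 10 nM Kd implies near-saturating binding at the 100-400 nM doses typically used in vitro.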
Intracellular function As it promotes the acidification of lysosomes, endosomes, and secretory vesicles, V-ATPase contributes to processes including vesicular/protein trafficking, receptor recycling, endocytosis, protein degradation, autophagy, and cell signaling. With its role in lysosomal acidification, V-ATPase is also crucial in driving the transport of ions and small molecules into the cytoplasm, particularly calcium and amino acids. Additionally, its acidification of endosomes is critical in receptor endocytosis as low pH tends to drive ligand release as well as receptor cleavage which contributes to signaling events, such as through the release of the intracellular domain of Notch. Plasma membrane function When at the plasma membrane, V-ATPase function is critical in the acidification of the extracellular environment, which is seen with osteoclasts and epididymal clear cells. When present at the plasma membrane in renal epithelial intercalated cells, V-ATPase is important for acid secretion, which contributes to the acidification of urine. In response to reduced plasma pH, increased levels of V-ATPase are typically trafficked to the plasma membrane in these cells following phosphorylation of the pump by protein kinase A (PKA). V-ATPase in disease Clinically, dysfunction of V-ATPase has been correlated with several diseases in humans. Some of these diseases include male infertility, osteopetrosis, and renal acidosis. Additionally, V-ATPase can be found at the plasma membrane of some invasive cancer cells including breast, prostate and liver cancer, among others. In human lung cancer samples, V-ATPase expression was correlated with drug resistance. A large number of V-ATPase subunit mutations have also been identified in a number of cancers, including follicular lymphomas. Cellular action As the target of bafilomycin, V-ATPase is involved in many aspects of cellular function, so bafilomycin treatment greatly alters cellular processes. 
Inhibition of autophagy Bafilomycin A1 is best known for its use as an autophagy inhibitor. Autophagy is the process by which the cell degrades its own organelles and some proteins through the formation of autophagosomes. Autophagosomes then fuse with lysosomes, facilitating the degradation of engulfed cargo by lysosomal proteases. This process is critical in maintaining the cell's store of amino acids and other nutrients during times of nutrient deprivation or other metabolic stresses. Bafilomycin interferes with this process by inhibiting the acidification of the lysosome through its interaction with V-ATPase. Lack of lysosomal acidification prevents the activity of lysosomal proteases like cathepsins, so that engulfed cargo can no longer be degraded. Since V-ATPase is widely distributed within the cell, bafilomycin is only specific as an autophagy inhibitor for a short amount of time. Other effects are seen outside this short window, including interference in the trafficking of endosomes and proteasomal inhibition. In addition to blocking the acidification of the lysosome, bafilomycin has been reported to block the fusion of autophagosomes with lysosomes. This was initially found in a paper by Yamamoto et al., in which the authors used bafilomycin A1 to treat rat hepatoma H-4-II-E cells. By electron microscopy, they saw a blockage of autophagosome-lysosome fusion after using bafilomycin at a concentration of 100 nM for 1 hour. This has been confirmed by other studies, particularly two that found decreased colocalization of mitochondria and lysosomes by fluorescence microscopy following a 12-24 hour treatment with 100 or 400 nM bafilomycin. However, further studies have failed to see this inhibition of fusion with similar bafilomycin treatments. These contradictory results have been explained by time differences among treatments as well as use of different cell lines. 
The effect of bafilomycin on autophagosome-lysosome fusion is complex and time-dependent in each cell line. In neurons, an increase in the autophagosome marker LC3-II has been seen with bafilomycin treatment. This occurs as autophagosomes fail to fuse with lysosomes, which normally stimulates the degradation of LC3-II. Induction of apoptosis In PC12 cells, bafilomycin was found to induce apoptosis, or programmed cell death. Additionally, in some cell lines it has been found to disrupt the electrochemical gradient of the mitochondria and induce the release of cytochrome c, which is an initiator of apoptosis. Bafilomycin has also been shown to induce both inhibition of autophagy and subsequent induction of apoptosis in osteosarcoma cells as well as other cancer cell lines. K+ transport Bafilomycin acts as an ionophore, meaning it can transfer K+ ions across biological membranes. Typically, the mitochondrial inner membrane is not permeable to K+ and maintains a set electrochemical gradient. In excitable cells, mitochondria can contain a K+ channel that, when opened, can cause mitochondrial stress by inducing mitochondrial swelling, changing the electrochemical gradient, and stimulating respiration. Bafilomycin A1 treatment can induce mitochondrial swelling in the presence of K+ ions, stimulate the oxidation of pyridine nucleotides and uncouple oxidative phosphorylation. Increasing concentrations of bafilomycin were found to linearly increase the amount of K+ that traversed the mitochondrial membrane, confirming that it acts as an ionophore. Compared to other ionophores, however, bafilomycin has a low affinity for K+. Research applications Anti-tumorigenic In many cancers, it has been found that various subunits of V-ATPase are upregulated. Upregulation of these subunits appears to be correlated with increased tumor cell metastasis and reduced clinical outcome. 
Bafilomycin application has been shown to reduce cell growth in various cancer cell lines across multiple cancer types by induction of apoptosis. Additionally, in vitro, bafilomycin's anti-proliferative effect appears to be specific to cancer cells over normal cells, which is seen with selective inhibition of hepatoblastoma cell growth compared to healthy hepatocytes. The mechanism by which bafilomycin causes this cancer-specific anti-proliferative effect is multifactorial. In addition to the induction of caspase-dependent apoptosis through the mitochondrial pathway, bafilomycin also causes increased levels of reactive oxygen species and increased expression of HIF-1α. These effects suggest that inhibition of V-ATPase with bafilomycin can induce a cellular stress response, including autophagy and eventual apoptosis. These somewhat contradictory effects of V-ATPase inhibition in terms of inhibition or induction of apoptosis demonstrate that bafilomycin's function is critically dependent on cellular context, and can mediate either a pro-survival or pro-death phenotype. In vivo, bafilomycin reduced average tumor volume in MCF-7 and MDA-MB-231 xenograft mouse models by 50% and did not show toxic effects at a dose of 1 mg/kg. Additionally, when combined with sorafenib, bafilomycin also caused tumor regression in MDA-MB-231 xenograft mice. In a HepG2 orthotopic HCC xenograft model in nude mice, bafilomycin prevented tumor growth. V-ATPase dysregulation is thought to play a role in resistance to cancer therapies, as aberrant acidification of the extracellular environment can protonate chemotherapeutics, preventing their entry into the cell. It is unclear whether V-ATPase dysregulation is a direct cause of the associated poor clinical outcome or whether it primarily affects the response to treatment, although treatment with bafilomycin and cisplatin has shown a synergistic effect on cancer cell cytotoxicity. 
Anti-fungal Bafilomycins have been shown to inhibit plasma membrane ATPase (P-ATPase) as well as the ATP-binding cassette (ABC) transporters. These transporters are identified as good anti-fungal targets, as their inhibition renders organisms unable to cope with cation stress. When Cryptococcus neoformans was treated with bafilomycin, growth inhibition was observed. Bafilomycin has also been used in C. neoformans in conjunction with the calcineurin inhibitor FK506, displaying synergistic anti-fungal activity. Anti-parasitic Bafilomycin has been shown to be active against Plasmodium falciparum, the causative agent of malaria. Upon infection of red blood cells, P. falciparum exports a membrane network into the red blood cell cytoplasm and also inserts several of its own proteins into the host membrane, including its own V-ATPase. This proton pump has a role in maintaining the intracellular pH of the infected red blood cell and facilitating the uptake of small metabolites at equilibrium. Treatment of the parasitized red blood cell with bafilomycin prevents this extracellular acidification, causing a dip in the intracellular pH around the malarial parasite. Anti-viral Bafilomycin A1 and bafilomycin D have shown antiviral properties against SARS-CoV-2, the virus that causes COVID-19. Bafilomycin A1 has also demonstrated antiviral properties against the Zika virus. Immunosuppressant The inflammatory myopathy inclusion body myositis (IBM) is relatively common in patients over 50 years of age and involves overactivation of autophagic flux. In this condition, increased autophagy results in an increase in protein degradation and therefore an increase in the presentation of antigenic peptides in muscles. This can cause overactivation of immune cells. Treatment with bafilomycin can prevent the acidification of lysosomes and therefore autophagy, decreasing the number of antigenic peptides digested and displayed to the immune system. 
In lupus patients, the autophagy pathway has been found to be altered in both B and T cells. In particular, more autophagic vacuoles were seen in T cells, as well as increased LC3-II staining for autophagosomes, indicating increased autophagy. Increased autophagy can also be seen in naïve patient B cell subsets. Bafilomycin A1 treatment lowered the differentiation of plasmablasts and decreased their survival. Clearance of protein aggregates in neurodegenerative diseases Neurodegenerative diseases typically display elevated levels of protein aggregates within the cell that contribute to dysfunction of neurons and eventual neuronal death. As a method of protein degradation within the cell, autophagy can traffic these protein aggregates to be degraded in the lysosome. Although the exact role that continuous autophagy, or autophagic flux, plays in neuronal homeostasis and disease states is unclear, autophagic dysfunction has been observed in neurodegenerative diseases. Bafilomycin is commonly used to study this autophagic flux in neurons, among other cell types. To do this, neurons are first put into nutrient-rich conditions and then into nutrient-starved conditions to stimulate autophagy. Bafilomycin is co-administered in the condition of nutrient stress so that while autophagy is stimulated, bafilomycin blocks its final stage of autophagosome-lysosome fusion, resulting in the accumulation of autophagosomes. Levels of autophagy-related proteins associated with autophagosomes, such as LC3, can then be monitored to determine the level of autophagosome formation induced by nutrient deprivation. In vitro drug interactions Lysosomotropic drugs Some cationic drugs, such as chloroquine and sertraline, are known as lysosomotropic drugs. These drugs are weak bases that become protonated in the acidic environment of the lysosome. 
This traps the otherwise non-protonated compound within the lysosome, as protonation prevents its passage back across the lipid membrane of the organelle. This phenomenon is known as ion trapping. Trapping of the cationic compound also draws water into the lysosome through an osmotic effect, which can sometimes lead to the vacuolization seen in in vitro cultured cells. When one of these drugs is co-applied to cells with bafilomycin A1, the action of bafilomycin A1 prevents the acidification of the lysosome, therefore preventing the phenomenon of ion trapping in this compartment. As the lysosome cannot acidify, lysosomotropic drugs do not become protonated and subsequently trapped in the lysosome in the presence of bafilomycin. Additionally, when cells are preloaded with lysosomotropic drugs in vitro and then treated with bafilomycin, bafilomycin acts to release the cationic compound from its accumulation in the lysosome. Pretreating cells with bafilomycin before administration of a cationic drug can alter the kinetics of the cationic compound. In a rabbit contractility assay, bafilomycin was used to pre-treat isolated rabbit aorta. The lipophilic agent xylometazoline, an alpha-adrenoreceptor agonist, displayed an increased effect when administered after bafilomycin treatment. With bafilomycin, faster contraction and relaxation of the aorta was seen, as bafilomycin prevented the ion trapping of xylometazoline in the lysosome. Without pre-treatment with bafilomycin, the functional V-ATPase causes the lysosome to become a reservoir for xylometazoline, slowing its effect on contractility. Chloroquine As a lysosomotropic drug, chloroquine typically accumulates in lysosomes, disrupting their degradative function, inhibiting autophagy, and inducing apoptosis through Bax-dependent mechanisms. However, in cultured cerebellar granule neurons (CGNs), treatment with a low concentration of bafilomycin (1 nM) decreased chloroquine-induced apoptosis without affecting chloroquine's inhibition of autophagy. 
The exact mechanism of this protection is unknown, although it is hypothesized to lie downstream of autophagosome-lysosome fusion yet upstream of Bax induction of apoptosis. Chemotherapeutics Bafilomycin has been shown to potentiate the effect of taxol in decreasing the mitochondrial membrane potential (MMP) by depressing Bcl-xL's mitochondrial protective role. Additionally, within cisplatin-resistant cells, V-ATPase expression was found to be increased, and co-treatment of bafilomycin with cisplatin sensitized these cells to cisplatin-induced cytotoxicity. Bafilomycin has also been shown to increase the efficacy of EGFR inhibitors in anti-cancer applications. References Antibiotics Polyols Secondary alcohols Tertiary alcohols Lactones Conjugated dienes Macrolides Isopropyl compounds Enones
Bafilomycin
[ "Biology" ]
5,125
[ "Antibiotics", "Biocides", "Biotechnology products" ]
4,128,748
https://en.wikipedia.org/wiki/Oxygen-18
Oxygen-18 (18O, Ω) is a natural, stable isotope of oxygen and one of the environmental isotopes. 18O is an important precursor for the production of fluorodeoxyglucose (FDG) used in positron emission tomography (PET). Generally, in the radiopharmaceutical industry, enriched water (H218O) is bombarded with hydrogen ions in either a cyclotron or linear accelerator, producing fluorine-18. This is then synthesized into FDG and injected into a patient. It can also be used to make an extremely heavy version of water when combined with tritium (hydrogen-3): 3H218O (T218O). This compound has a density almost 30% greater than that of natural water. Accurate measurements of 18O rely on proper procedures of analysis, sample preparation and storage. Paleoclimatology In ice cores, mainly Arctic and Antarctic, the ratio of 18O to 16O (known as δ18O) can be used to determine the temperature of precipitation through time. Assuming that atmospheric circulation and elevation have not changed significantly over the poles, the temperature of ice formation can be calculated from the equilibrium fractionation between phases of water, which is known for different temperatures. Water molecules are also subject to Rayleigh fractionation as atmospheric water moves from the equator poleward, which results in progressive depletion of 18O, or lower δ18O values. In the 1950s, Harold Urey performed an experiment in which he mixed both normal water and water with oxygen-18 in a barrel, and then partially froze the barrel's contents. The 18O/16O ratio (δ18O) can also be used to determine paleothermometry in certain types of fossils. The fossils in question have to show progressive growth in the animal or plant that the fossil represents. The fossil material used is generally calcite or aragonite; however, oxygen isotope paleothermometry has also been done on phosphatic fossils using SHRIMP. For example, seasonal temperature variations may be determined from a single sea shell from a scallop. 
As the scallop grows, an extension is seen on the surface of the shell. Each growth band can be measured, and a calculation is used to determine the probable sea water temperature in comparison to each growth. The equation for this is of the form T = A + B(δc − δw), where T is temperature in Celsius, δc and δw are the δ18O of the shell carbonate and of the water respectively, and A and B are constants. For determination of ocean temperatures over geologic time, multiple fossils of the same species in different stratigraphic layers would be measured, and the difference between them would indicate long-term changes. Plant physiology In the study of plants' photorespiration, the labeling of the atmosphere by oxygen-18 allows for the measurement of oxygen uptake by the photorespiration pathway. Labeling with 18O2 gives the unidirectional flux of oxygen uptake, while there is a net photosynthetic evolution of oxygen. It was demonstrated that, under a preindustrial atmosphere, most plants reabsorb, by photorespiration, half of the oxygen produced by photosynthesis. Thus, the yield of photosynthesis was halved by the presence of oxygen in the atmosphere. 18F production Fluorine-18 is usually produced by irradiation of 18O-enriched water (H218O) with high-energy (about 18 MeV) protons prepared in a cyclotron or a linear accelerator, yielding an aqueous solution of 18F fluoride. This solution is then used for rapid synthesis of a labeled molecule, often with the fluorine atom replacing a hydroxyl group. The labeled molecules or radiopharmaceuticals have to be synthesized after the radiofluorine is prepared, as the high-energy proton radiation would destroy the molecules. Large amounts of oxygen-18 enriched water are used in positron emission tomography centers, for on-site production of 18F-labeled fludeoxyglucose (FDG). An example of the production cycle is a 90-minute irradiation of 2 milliliters of 18O-enriched water in a titanium cell, through a 25 μm thick window made of Havar (a cobalt alloy) foil, with a proton beam having an energy of 17.5 MeV and a beam current of 30 microamperes. 
The irradiated water has to be purified before another irradiation, to remove organic contaminants, traces of tritium produced by an 18O(p,t)16O reaction, and ions leached from the target cell and sputtered from the Havar foil. See also Willi Dansgaard – a paleoclimatologist Isotopes of oxygen Paleothermometry Pâté de Foie Gras (short story) Δ18O Global meteoric water line References Environmental isotopes Isotopes of oxygen
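The shell paleothermometry calculation described in the Paleoclimatology section can be sketched numerically. This assumes the simple linear form T = A + B(δc − δw) stated there; the coefficient values below are illustrative placeholders in the style of classic carbonate calibrations, not the constants from any particular study:

```python
def shell_temperature(delta_c, delta_w, a=16.5, b=-4.3):
    """Linear paleotemperature relation T = A + B * (δc - δw), in °C.

    delta_c: δ18O of the shell carbonate (per mil)
    delta_w: δ18O of the ambient water (per mil)
    a, b:    calibration constants; the defaults are illustrative only,
             since published calibrations differ between materials.
    """
    return a + b * (delta_c - delta_w)

# A growth band 1 per mil heavier than the water records a cooler season
# (isotopically heavier carbonate forms at lower temperature, so B < 0):
t_cool = shell_temperature(delta_c=1.0, delta_w=0.0)
t_warm = shell_temperature(delta_c=0.0, delta_w=0.0)
```

Applying this band by band along a shell's growth axis is what turns a single scallop into a seasonal temperature record, as the article describes.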
Oxygen-18
[ "Chemistry" ]
969
[ "Isotopes of oxygen", "Environmental isotopes", "Isotopes" ]
4,128,758
https://en.wikipedia.org/wiki/Brahmaputra%20Valley%20semi-evergreen%20forests
The Brahmaputra Valley semi-evergreen forests is a tropical moist broadleaf forest ecoregion of Northeastern India, southern Bhutan and adjacent Bangladesh. Location and description The ecoregion encompasses the alluvial plain of the upper Brahmaputra River as it moves westward through India's Assam state (with small parts of the ecoregion in the states of Arunachal Pradesh and Nagaland and also south Bhutan and northern Bangladesh). The valley lies between the Himalayas to the north and the Lushai Hills to the south. When the river floods during the June-September monsoon, it brings up to 300 cm of water onto the plain, carrying rich soils to create a fertile environment which has been extensively farmed for thousands of years. Other rivers that water the plains as well as the Brahmaputra include the Manas and the Subansiri. Flora/plants The extensive farming has meant that the original semi-evergreen forest now exists only in patches. Typical canopy trees include the evergreen Syzygium, Cinnamomum and Magnoliaceae along with deciduous Terminalia myriocarpa, Terminalia citrina, Terminalia tomentosa, Cinnamomum cassia, Durio zibethinus, Artocarpus heterophyllus, Ficus benghalensis, Gnetum gnemon, Mangifera indica, Toona ciliata, Toona sinensis, Cocos nucifera, Ginkgo biloba, Prunus serrulata, Camphora officinarum, Cathaya argyrophylla, Taiwania cryptomerioides, Cyathea spinulosa, Sassafras tzumu, Davidia involucrata, Metasequoia glyptostroboides, Glyptostrobus pensilis, Castanea mollissima, Quercus myrsinifolia, Quercus acuta, Quercus glauca, Machilus thunbergii, Tetracentron, Tsuga dumosa, Ulmus lanceifolia, Tectona grandis, Larix gmelinii, Larix sibirica, Larix × czekanowskii, Betula dahurica, Betula pendula, Pinus koraiensis, Pinus sibirica, Pinus sylvestris, Picea obovata, Abies sibirica, Quercus acutissima, Quercus mongolica, Prunus padus, Tilia amurensis, Salix babylonica, Acer palmatum, Acer campbellii, Populus tremula, Ulmus davidiana, Ulmus 
pumila, Pinus pumila, Haloxylon ammodendron, Elaeagnus angustifolia, Tamarix ramosissima, Pinus roxburghii, Pinus hwangshanensis, Juniperus tibetica, Olea europaea subsp. cuspidata, Shorea robusta, Taxus sumatrana, Juglans regia, Alnus nepalensis, Betula alnoides, Betula utilis, Larix griffithii, Picea brachytyla, Prunus sibirica, Tetrameles species. Understory trees and shrubs include the laurels Phoebe, Machilus, and Actinodaphne, Polyalthias, Aphanamixis, and cultivated Mesua ferrea and species of mahogany, cashews, nutmegs and magnolias, with bamboos such as Bambusa arundinaria and Melocanna bambusoides. Fauna/animals Despite the centuries of human clearance and exploitation, the forests and grasslands along the river remain a habitat for a variety of wildlife including tiger (Panthera tigris), clouded leopard (Neofelis nebulosa), capped langur (Semnopithecus pileatus), gaur (Bos gaurus), barasingha deer (Cervus duvaucelii), sloth bear (Melursus ursinus), wild water buffalo (Bubalus arnee), India's largest population of Asian elephants (Elephas maximus) and the world's largest population of Indian rhinoceros, while Asian black bears live in the higher slopes of the valley sides. Most of these mammals are threatened or endangered species. The Brahmaputra is a natural barrier to the migration of much wildlife, and many species, such as the pygmy hog, hispid hare, Malayan sun bear, pig-tailed macaque, golden langur, stump-tailed macaque, and western hoolock gibbon, live on one side of the river only. The area is a meeting point of species of Indian and Malayan origin. The endemic mammals of the valley are the pygmy hog and the hispid hare, both of which inhabit the grasslands of the riverbanks. The valley is home to rich bird life with 370 species, of which two are endemic, the Manipur bush quail (Perdicula manipurensis) and the marsh babbler (Pellorneum palustre), and one, the Bengal florican, is very rare. 
Woodland birds like the kalij pheasant, great hornbill, rufous-necked hornbill, brown hornbill, Oriental pied hornbill, grey hornbill, peacock-pheasant and tragopan are quite common. Threats and preservation This area has been densely populated for centuries and most of the valley has been, and still is, used for agriculture. Some blocks of natural habitat do remain, however, mainly in national parks, the largest of which are Manas, Dibru-Saikhowa and Kaziranga National Parks in India. In Bhutan, these areas are part of Royal Manas National Park. Protected areas In 1997, the World Wildlife Fund identified twelve protected areas in the ecoregion, with a combined area of approximately 2,560 km2, covering about 5% of the ecoregion's area.
Dehing Patkai Landscape, including Dehing Patkai National Park and Dehing Patkai Elephant Reserve
Mehao Wildlife Sanctuary, Arunachal Pradesh (190 km2; also includes portions of the Eastern Himalayan broadleaf forests and Himalayan subtropical pine forests)
Manas National Park, Assam (560 km2)
Bornadi Wildlife Sanctuary, Assam (90 km2)
Kaziranga National Park, Assam (320 km2)
Orang National Park, Assam (110 km2)
Laokhowa Wildlife Sanctuary, Assam (170 km2)
Pobitora Wildlife Sanctuary, Assam (80 km2)
Sonai Rupai Wildlife Sanctuary, Assam (160 km2)
Nameri National Park, Assam (90 km2)
Dibru-Saikhowa National Park, Assam (490 km2)
D'Ering Memorial Wildlife Sanctuary, Arunachal Pradesh (190 km2)
Pabha Wildlife Sanctuary, Assam (110 km2)
Vulture breeding The Rani Vulture Breeding Centre was established in 2008 inside the Brahmaputra Valley semi-evergreen forests at Rani in Kamrup district, with the help of the Jatayu Conservation Breeding Centre, Pinjore; it housed 90 vultures as of December 2018. An estimated 40 million vultures have died in the last 20 years. See also List of ecoregions in Bhutan List of ecoregions in India References External links Geographical ecoregion maps and basic info. 
Tropical and subtropical moist broadleaf forests Ecoregions of Bhutan Ecoregions of India Biota of Bhutan Biota of India Indomalayan ecoregions Environment of Assam
Brahmaputra Valley semi-evergreen forests
[ "Biology" ]
1,562
[ "Biota by country", "Biota of India", "Biota of Bhutan" ]
4,128,785
https://en.wikipedia.org/wiki/Fura-2-acetoxymethyl%20ester
Fura-2-acetoxymethyl ester, often abbreviated Fura-2AM, is a membrane-permeant derivative of the ratiometric calcium indicator Fura-2, used in biochemistry to measure cellular calcium concentrations by fluorescence. When added to cells, Fura-2AM crosses cell membranes, and once inside the cell the acetoxymethyl groups are removed by cellular esterases. Removal of the acetoxymethyl esters regenerates Fura-2, the pentacarboxylate calcium indicator. Measuring Ca2+-dependent fluorescence excited at both 340 nm and 380 nm allows calcium concentrations to be calculated from the 340/380 ratio. The use of the ratio automatically cancels out certain variables, such as local differences in Fura-2 concentration or cell thickness, that would otherwise lead to artifacts when attempting to image calcium concentrations in cells. References Biochemistry methods Cell culture reagents Cell imaging Fluorescent dyes Oxazoles Benzofuran ethers at the benzene ring Acetate esters Formals Glycol ethers Anilines
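The conversion from a 340/380 ratio to a concentration is usually done with the Grynkiewicz calibration equation, [Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R). That equation is a standard result from the fluorescence-imaging literature rather than something stated in the article above, and the calibration numbers used below (Rmin, Rmax, beta, and a Kd of 224 nM) are illustrative placeholders; in practice they come from an in-situ calibration.

```python
def ca_concentration_nM(R, R_min, R_max, beta, Kd_nM=224.0):
    """Estimate [Ca2+] in nM from a background-corrected 340/380 ratio R.

    R_min, R_max : ratios at zero and saturating Ca2+ (from calibration)
    beta         : 380 nm fluorescence at zero Ca2+ divided by that at
                   saturating Ca2+ (also from calibration)
    Kd_nM        : Fura-2/Ca2+ dissociation constant (~224 nM in vitro)
    """
    if not (R_min < R < R_max):
        raise ValueError("ratio must lie strictly between R_min and R_max")
    return Kd_nM * beta * (R - R_min) / (R_max - R)

# Hypothetical calibration values; a resting-level ratio of 0.77 then maps
# to a concentration of about 146 nM.
print(round(ca_concentration_nM(0.77, 0.3, 8.0, 10.0), 1))
```

Because only the ratio R enters the formula, uniform losses in dye loading or path length drop out, which is the cancellation of artifacts the article describes.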
Fura-2-acetoxymethyl ester
[ "Chemistry", "Biology" ]
225
[ "Biochemistry methods", "Formals", "Biotechnology stubs", "Functional groups", "Biochemistry stubs", "Cell culture reagents", "Microscopy", "Biochemistry", "Reagents for biochemistry", "Cell imaging" ]
4,128,810
https://en.wikipedia.org/wiki/Homeotopy
In algebraic topology, an area of mathematics, a homeotopy group of a topological space is a homotopy group of the group of self-homeomorphisms of that space. Definition The homotopy group functors π_n assign to each path-connected topological space X the group π_n(X) of homotopy classes of continuous maps S^n → X. Another construction on a space X is the group of all self-homeomorphisms of X, denoted Homeo(X). If X is a locally compact, locally connected Hausdorff space then a fundamental result of R. Arens says that Homeo(X) will in fact be a topological group under the compact-open topology. Under the above assumptions, the homeotopy groups of X are defined to be: HME_n(X) = π_n(Homeo(X)). Thus HME_0(X) = π_0(Homeo(X)) = MCG(X) is the mapping class group of X. In other words, the mapping class group is the set of connected components of Homeo(X), as specified by the functor π_0. Example According to the Dehn-Nielsen theorem, if X is a closed surface then HME_0(X) = Out(π_1(X)); i.e., the zeroth homotopy group of the automorphisms of a space is the same as the outer automorphism group of its fundamental group. References Algebraic topology Homeomorphisms
Homeotopy
[ "Mathematics" ]
226
[ "Homeomorphisms", "Algebraic topology", "Topology stubs", "Fields of abstract algebra", "Topology" ]
4,128,827
https://en.wikipedia.org/wiki/Dynamin
Dynamin is a GTPase protein responsible for endocytosis in the eukaryotic cell. Dynamin is part of the "dynamin superfamily", which includes classical dynamins, dynamin-like proteins, Mx proteins, OPA1, mitofusins, and GBPs. Members of the dynamin family are principally involved in the scission of newly formed vesicles from the membrane of one cellular compartment and their targeting to, and fusion with, another compartment, both at the cell surface (particularly caveolae internalization) as well as at the Golgi apparatus. Dynamin family members also play a role in many processes including division of organelles, cytokinesis and microbial pathogen resistance. Structure Dynamin itself is a 96 kDa enzyme, and was first identified when researchers were attempting to isolate new microtubule-based motors from bovine brain. Dynamin has been extensively studied in the context of clathrin-coated vesicle budding from the cell membrane. Beginning from the N-terminus, dynamin consists of a GTPase domain connected to a helical stalk domain via a flexible neck region containing a bundle signalling element and GTPase effector domain. At the opposite end of the stalk domain is a loop that links to a membrane-binding pleckstrin homology domain. The protein strand then loops back towards the GTPase domain and terminates with a proline-rich domain that binds to the Src homology 3 (SH3) domains of many proteins. Function During clathrin-mediated endocytosis, the cell membrane invaginates to form a budding vesicle. Dynamin binds to and assembles around the neck of the endocytic vesicle, forming a helical polymer arranged such that the GTPase domains dimerize in an asymmetric manner across helical rungs. The polymer constricts the underlying membrane upon GTP binding and hydrolysis via conformational changes emanating from the flexible neck region that alter the overall helical symmetry. 
Constriction around the vesicle neck leads to the formation of a hemi-fission membrane state that ultimately results in membrane scission. Constriction may be in part the result of the twisting activity of dynamin, which makes dynamin the only molecular motor known to have a twisting activity. Types In mammals, three different dynamin genes have been identified, with key sequence differences in their pleckstrin homology domains leading to differences in the recognition of lipid membranes: Dynamin I is expressed in neurons and neuroendocrine cells. Dynamin II is expressed in most cell types. Dynamin III is strongly expressed in the testis, but is also present in heart, brain, and lung tissue. Pharmacology Small-molecule inhibitors of dynamin activity have been developed, including Dynasore and photoswitchable derivatives (Dynazo) for spatiotemporal control of endocytosis with light (photopharmacology). Disease implications Mutations in Dynamin II have been found to cause dominant intermediate Charcot-Marie-Tooth disease. De novo dynamin mutations that cause epileptic encephalopathy have been suggested to disrupt vesicle scission during synaptic vesicle endocytosis. References External links Cellular processes EC 3.6.5
Dynamin
[ "Biology" ]
717
[ "Cellular processes" ]
4,128,860
https://en.wikipedia.org/wiki/Chlorine-36
Chlorine-36 (36Cl) is an isotope of chlorine. Chlorine has two stable isotopes and one naturally occurring radioactive isotope, the cosmogenic isotope 36Cl. Its half-life is 301,300 ± 1,500 years. 36Cl decays primarily (98%) by beta-minus decay to 36Ar, and the balance to 36S. Trace amounts of radioactive 36Cl exist in the environment, in a ratio of about to 1 with respect to the stable chlorine isotopes. This 36Cl/Cl ratio is sometimes abbreviated as R36Cl. This corresponds to a concentration of approximately . 36Cl is produced in the atmosphere by spallation of 36Ar by interactions with cosmic ray protons. In the top meter of the lithosphere, 36Cl is generated primarily by thermal neutron activation of 35Cl and spallation of 39K and 40Ca. In the subsurface environment, muon capture by 40Ca becomes more important. The production rates are about 4200 atoms 36Cl/yr/mole 39K and 3000 atoms 36Cl/yr/mole 40Ca, due to spallation in rocks at sea level. The half-life of this isotope makes it suitable for geologic dating in the range of 60,000 to 1 million years. Its properties make it useful as a proxy data source to characterize cosmic particle bombardment and solar activity of the past. Additionally, large amounts of 36Cl were produced by irradiation of seawater during atmospheric and underwater test detonations of nuclear weapons between 1952 and 1958. The residence time of 36Cl in the atmosphere is about 2 years. Thus, as an event marker of 1950s water in soil and ground water, 36Cl is also useful for dating waters less than 50 years before the present. 36Cl has seen use in other areas of the geological sciences, including dating ice and sediments. See also Isotopes of chlorine References Isotopes of chlorine Environmental isotopes Radionuclides used in radiometric dating
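As a sketch of how the half-life quoted above translates into ages, the generic radioactive-decay law can be applied to a measured drop in the 36Cl/Cl ratio. This is an illustrative calculation only, with hypothetical function names and inputs; real 36Cl dating must also correct for the in-situ production pathways described above.

```python
import math

# Half-life of 36Cl as quoted above (301,300 +/- 1,500 years).
HALF_LIFE_YR = 301_300
DECAY_CONST = math.log(2) / HALF_LIFE_YR  # decay constant, in yr^-1

def age_years(ratio_now, ratio_initial):
    """Age implied by pure decay of an initial 36Cl/Cl ratio down to the
    ratio measured today (ignoring ongoing in-situ production)."""
    if not 0 < ratio_now <= ratio_initial:
        raise ValueError("measured ratio must be positive and <= the initial ratio")
    return math.log(ratio_initial / ratio_now) / DECAY_CONST

# After one half-life the ratio halves, so this recovers ~301,300 years,
# comfortably inside the 60,000-to-1-million-year dating window quoted above.
print(round(age_years(0.5, 1.0)))
```

The usable dating window follows from the same law: after a few percent of a half-life the ratio change is barely measurable, and after several half-lives too little 36Cl remains, which brackets the roughly 60,000-to-1-million-year range given in the article.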
Chlorine-36
[ "Chemistry" ]
414
[ "Environmental isotopes", "Isotopes of chlorine", "Isotopes", "Radionuclides used in radiometric dating" ]