Dataset columns: id (int64, 39 to 79M), url (string, lengths 31–227), text (string, lengths 6–334k), source (string, lengths 1–150), categories (list, lengths 1–6), token_count (int64, 3 to 71.8k), subcategories (list, lengths 0–30).
16,204,605
https://en.wikipedia.org/wiki/Electric%20field%20NMR
Electric field NMR (EFNMR) spectroscopy is NMR spectroscopy in which additional information about the sample being probed is obtained from the effect of a strong, externally applied electric field on the NMR signal. See also NMR spectroscopy Stark effect References Nuclear magnetic resonance
Electric field NMR
[ "Physics", "Chemistry" ]
59
[ "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance stubs", "Nuclear physics" ]
16,206,083
https://en.wikipedia.org/wiki/Disjunction%20property%20of%20Wallman
In mathematics, especially in order theory, a partially ordered set with a unique minimal element 0 has the disjunction property of Wallman when for every pair (a, b) of elements of the poset, either b ≤ a or there exists an element c ≤ b such that c ≠ 0 and c has no nontrivial common predecessor with a. That is, in the latter case, the only x with x ≤ a and x ≤ c is x = 0. A version of this property for lattices was introduced by Henry Wallman, in a paper showing that the homology theory of a topological space could be defined in terms of its distributive lattice of closed sets. He observed that the inclusion order on the closed sets of a T1 space has the disjunction property. The generalization to partial orders was introduced later. References Order theory
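The definition is directly checkable on finite posets. Below is a minimal Python sketch (the function name and the divisor-poset example are illustrative, not from the article): for every pair (a, b) it tests whether b ≤ a or some nonzero c ≤ b shares only 0 as a common lower bound with a. In the divisibility order on the divisors of a squarefree number such as 30, "no nontrivial common predecessor" reduces to coprimality, and the property holds because that poset is a Boolean lattice.

```python
from itertools import product

def has_wallman_disjunction(elements, leq, zero):
    """Check the disjunction property of Wallman on a finite poset.

    elements: iterable of poset elements; leq(x, y) decides x <= y;
    zero is the unique minimal element 0.
    """
    elements = list(elements)

    def meets_trivially(a, c):
        # The only common lower bound of a and c is 0.
        return all(x == zero for x in elements if leq(x, a) and leq(x, c))

    for a, b in product(elements, repeat=2):
        if leq(b, a):
            continue
        # Otherwise some c <= b with c != 0 must meet a only in 0.
        if not any(c != zero and leq(c, b) and meets_trivially(a, c)
                   for c in elements):
            return False
    return True

# Divisors of 30 under divisibility form a Boolean lattice with minimal
# element 1, so the property holds and this prints True.
divisors = [1, 2, 3, 5, 6, 10, 15, 30]
print(has_wallman_disjunction(divisors, lambda x, y: y % x == 0, 1))
```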
Disjunction property of Wallman
[ "Mathematics" ]
177
[ "Algebra stubs", "Combinatorics", "Combinatorics stubs", "Order theory", "Algebra" ]
16,206,953
https://en.wikipedia.org/wiki/Diaper%20bag
A diaper bag or nappy bag is a storage bag with many pocket-like spaces that is big enough to carry everything needed by someone taking care of a baby on a typical short outing. These bags are not always designed expressly as diaper bags, as any well-pocketed bag sized between a child's school backpack and an adult camping backpack can be used for the purpose. Some are now being made with rigid handles and wheels so that someone can cart one around, allowing that person to hold the baby more firmly, complete more tasks (like opening a door, paying a cashier, or using a phone), and reduce lower back pain. My Child magazine suggests a brightly colored diaper bag is harder to lose and can help combat the "baby blues". Diaper bags are generally small enough to fit on or under a stroller or buggy. There have been fashion trends against large bags, as mothers learn to reduce the number of necessities carried. In the 21st century, there has been an "explosion of styles, colors, designs, and functions." Diaper bags have helped fashion designers make their mark. Since 2005, some premium designers such as Kate Spade, Coach and Ralph Lauren have launched expensive diaper bag designs. According to Fortune in 2006, "To gain an edge, smart manufacturers are doing whatever it takes to capture the attention (and aesthetics) of today's chic parents-to-be who are willing - sometimes even eager - to pay top dollar for products that seamlessly blend fashion and function." Luxury diaper bags continue to be sold as of 2021. Designers can be protective of their diaper bag designs and trademarks. In 1977, diaper bags were the cause of a New York court case between Macy's and Gucci. The court found that Macy's had infringed on Gucci's trademark by selling diaper bags with green and red bands and the wording "Gucchi Goo." Designs without bright colors or licensed characters can be high-fashion items associated with celebrity mothers. Companies also produce diaper bags with a more rugged look, as part of a growing sector of the baby-products market designed to appeal to men. These bags often feature darker colors, more streamlined designs, and materials such as leather or waxed canvas. As more and more fathers take an active role in caring for their children, these bags provide a practical and stylish option for dads on the go. See also Ju-Ju-Be (brand) Petunia Pickle Bottom References Babycare Bags Diapers Bags (fashion)
Diaper bag
[ "Biology" ]
536
[ "Diapers", "Excretion" ]
16,208,016
https://en.wikipedia.org/wiki/PSR%20J1951%2B1123
PSR J1951+1123 is a pulsar. It is notable for its exceptionally long rotation period of 5.09 seconds, one of the longest known. References External links PSR J1951+1123 Image PSR J1951+1123 Aquila (constellation) Pulsars
PSR J1951+1123
[ "Astronomy" ]
73
[ "Aquila (constellation)", "Constellations" ]
16,209,582
https://en.wikipedia.org/wiki/Warren%20truss
In structural engineering, a Warren truss or equilateral truss is a type of truss employing a weight-saving design based upon equilateral triangles. It is named after the British engineer James Warren, who patented it in 1848. Origins It was patented in 1848 by its designers James Warren and Willoughby Theobald Monzani. Truss The Warren truss consists of longitudinal members joined only by angled cross-members, forming alternately inverted equilateral triangle-shaped spaces along its length. This gives a pure truss: each individual strut, beam, or tie is subject only to tension or compression forces; there are no bending or torsional forces on them. Loads on the diagonals alternate between compression and tension (approaching the centre), with no vertical elements, while elements near the centre must support both tension and compression in response to live loads. This configuration combines strength with economy of materials and can therefore be relatively light. The girders being of equal length, it is ideal for use in prefabricated modular bridges. It is an improvement over the Neville truss, in which the elements form isosceles triangles. A variant of the Warren truss has additional vertical members within the triangles. These are used when the lengths of the upper horizontal members would otherwise become so long as to present a risk of buckling. These verticals do not carry a large proportion of the truss loads; they act mostly to stabilise the horizontal members against buckling. Bridges Architecture The Warren truss is a prominent structural feature in hundreds of hastily constructed aircraft hangars of the Second World War. In the early part of the war, the British and Canadian governments formed an agreement known as the British Commonwealth Air Training Plan, which used newly constructed airbases in Canada to train the aircrew needed to sustain emerging air forces. Hundreds of airfields, aprons, taxiways and ground installations were constructed all across Canada. Two characteristic features were a triangular runway layout and hangars built from virgin British Columbia timber with Warren truss configuration roofs. Many still remain in service. Aircraft Warren truss construction has also been used in airframe design and construction for a substantial number of aircraft designs. An early use was for the interplane wing struts on some biplanes. The Italian World War I Ansaldo SVA series of fast reconnaissance biplanes were among the fastest aircraft of their era, while the Handley Page H.P.42 was a successful airliner of the late 1920s and the Fiat CR.42 Falco fighter remained in service until World War II. The Warren truss is also sometimes used for fuselage frames, such as in the Piper J-3 Cub and Hawker Hurricane. Notes Explanatory notes Citations Trusses
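Because every member of an ideal pin-jointed Warren truss carries pure axial load, member forces follow from simple beam statics: the diagonals carry the panel shear (divided by sin 60° for equilateral geometry) and the chords carry the bending moment as a force couple across the truss depth. Here is a minimal Python sketch under those textbook assumptions; the function name and the 24 m, 10 kN/m example are hypothetical, not taken from the article.

```python
import math

def warren_member_forces(span, n_panels, udl):
    """Idealized pin-jointed equilateral Warren truss under a uniform load.

    Returns peak diagonal and chord axial forces from simple beam statics:
    diagonals carry the panel shear, chords carry the bending moment as a
    force couple over the truss depth. Units: metres, kN, kN/m.
    """
    panel = span / n_panels
    depth = panel * math.sqrt(3) / 2           # equilateral triangle height
    v_max = udl * span / 2                     # end shear of the simple span
    m_max = udl * span ** 2 / 8                # midspan bending moment
    diag = v_max / math.sin(math.radians(60))  # diagonal axial force (end panel)
    chord = m_max / depth                      # chord axial force (midspan)
    return diag, chord

# Hypothetical 24 m bridge truss, 8 panels, 10 kN/m distributed load.
diag, chord = warren_member_forces(span=24.0, n_panels=8, udl=10.0)
print(f"peak diagonal ~ {diag:.0f} kN, peak chord ~ {chord:.0f} kN")
```

The alternating sign of the panel shear along the span is what makes successive diagonals alternate between tension and compression, as described above.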
Warren truss
[ "Technology" ]
546
[ "Structural system", "Trusses" ]
16,210,493
https://en.wikipedia.org/wiki/Heat%20spreader
A heat spreader transfers energy as heat from a hotter source to a colder heat sink or heat exchanger. There are two thermodynamic types, passive and active. The most common sort of passive heat spreader is a plate or block of material having high thermal conductivity, such as copper, aluminum, or diamond. An active heat spreader speeds up heat transfer with the expenditure of energy as work supplied by an external source. A heat pipe uses fluids inside a sealed case. The fluids circulate either passively, by spontaneous convection triggered when a threshold temperature difference occurs, or actively, because of an impeller driven by an external source of work. Without sealed circulation, energy can be carried by transfer of fluid matter, for example externally supplied colder air, driven by an external source of work, from a hotter body to another external body, though this is not exactly heat transfer as defined in physics. Exemplifying the increase of entropy according to the second law of thermodynamics, a passive heat spreader disperses or "spreads out" heat, so that the heat exchanger(s) may be more fully utilized. This has the potential to increase the heat capacity of the total assembly, although the additional thermal junctions limit total thermal capacity. The high conduction properties of the spreader make it more effective as an air heat exchanger than the original (presumably smaller) source would be. The low heat conduction of air in convection is offset by the higher surface area of the spreader, and heat is transferred more effectively. A heat spreader is generally used when the heat source has a high heat-flux density (high heat flow per unit area) and, for whatever reason, heat cannot be conducted away effectively by the heat exchanger. For instance, this may be because it is air-cooled, giving it a lower heat transfer coefficient than if it were liquid-cooled. A high enough heat exchanger transfer coefficient is sufficient to avoid the need for a heat spreader. The use of a heat spreader is an important part of an economically optimal design for transferring heat from high to low heat flux media. Examples include: A copper-clad bottom on a steel or stainless steel stove-top cooking container Air-cooling integrated circuits such as a microprocessor Air-cooling a photovoltaic cell in a concentrated photovoltaics system Diamond has a very high thermal conductivity. Synthetic diamond is used as submounts for high-power integrated circuits and laser diodes. Composite materials can be used, such as the metal matrix composites (MMCs) copper–tungsten, AlSiC (silicon carbide in aluminium matrix), Dymalloy (diamond in copper-silver alloy matrix), and E-Material (beryllium oxide in beryllium matrix). Such materials are often used as substrates for chips, as their thermal expansion coefficient can be matched to ceramics and semiconductors. Research In May 2022, researchers at the University of Illinois at Urbana-Champaign and the University of California, Berkeley devised a new solution that could cool modern electronics more efficiently than other existing strategies. Their proposed method is based on the use of heat spreaders consisting of an electrically insulating layer of poly(2-chloro-p-xylylene) (Parylene C) and a coating of copper. This solution would also require less expensive materials.
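The flux-spreading idea can be made concrete with a lumped one-dimensional thermal-resistance sketch: conduction through the spreader plate in series with convection from its enlarged surface. The Python below is a rough illustration only, not a real spreading-resistance calculation (which is 2-D/3-D); the function name and all numbers are hypothetical.

```python
def junction_temperature(q_watts, chip_area, spreader_area, t_spreader,
                         k_spreader, h_air, t_ambient):
    """1-D series thermal-resistance sketch of a passive heat spreader.

    Heat conducts from a small high-flux source through the plate, then
    convects to air from a larger area. Areas in m^2, lengths in m,
    k in W/m-K, h in W/m^2-K, temperatures in degrees C.
    """
    r_cond = t_spreader / (k_spreader * chip_area)  # K/W through the plate
    r_conv = 1.0 / (h_air * spreader_area)          # K/W plate to air
    return t_ambient + q_watts * (r_cond + r_conv)

# Hypothetical numbers: 10 W source, 1 cm^2 die, 25 cm^2 copper spreader
# (k ~ 400 W/m-K), 3 mm thick, modest airflow with h ~ 50 W/m^2-K.
with_spreader = junction_temperature(10, 1e-4, 25e-4, 3e-3, 400, 50, 25)
no_spreader = junction_temperature(10, 1e-4, 1e-4, 3e-3, 400, 50, 25)
print(f"{with_spreader:.0f} C with spreader vs {no_spreader:.0f} C without")
```

Even this crude model shows the point: the same 10 W source reaches a workable temperature through the 25 cm² plate, while the 1 cm² die alone could never reject that heat to air directly.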
See also Computer module Heat pipe Heat sink Thermal conductivity of diamond Thermal grease Thermal interface material References Computer hardware cooling Heat transfer Residential heating appliances
Heat spreader
[ "Physics", "Chemistry" ]
729
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
16,211,452
https://en.wikipedia.org/wiki/LDMOS
LDMOS (laterally-diffused metal-oxide semiconductor) is a planar double-diffused MOSFET (metal–oxide–semiconductor field-effect transistor) used in amplifiers, including microwave power amplifiers, RF power amplifiers and audio power amplifiers. These transistors are often fabricated on p/p+ silicon epitaxial layers. The fabrication of LDMOS devices mostly involves various ion-implantation and subsequent annealing cycles. As an example, the drift region of this power MOSFET is fabricated using up to three ion implantation sequences in order to achieve the appropriate doping profile needed to withstand high electric fields. The silicon-based RF LDMOS (radio-frequency LDMOS) is the most widely used RF power amplifier in mobile networks, enabling the majority of the world's cellular voice and data traffic. LDMOS devices are widely used in RF power amplifiers for base stations, as the requirement is for high output power with a corresponding drain-to-source breakdown voltage, usually above 60 volts. Compared to other devices such as GaAs FETs, they show a lower maximum power-gain frequency. Manufacturers of LDMOS devices and foundries offering LDMOS technologies include Tower Semiconductor, TSMC, LFoundry, Samsung, GlobalFoundries, Vanguard International Semiconductor Corporation, STMicroelectronics, Infineon Technologies, RFMD, NXP Semiconductors (including the former Freescale Semiconductor), SMIC, MK Semiconductors, Polyfet and Ampleon. Applications Common applications of LDMOS technology include the following. Amplifiers — RF power amplifiers, audio power amplifiers, class AB Audio technology — loudspeakers, high-fidelity (hi-fi) equipment, public announcement (PA) systems Mobile devices — mobile phones Mobile networks — base stations and RF amplifiers Pulse applications Radio-frequency (RF) technology — RF engineering, RF power amplifiers Wireless technology — wireless networks and digital networks See also FET amplifier Power semiconductor device RF CMOS References External links Microwave Encyclopedia on LDMOS BCD process including customizable LDMOS Electronic design Transistor types MOSFETs
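The link between drift-region doping and the above-60-volt breakdown requirement can be illustrated with the textbook one-sided abrupt-junction avalanche bound. This is a first-order sketch only; the critical-field value and the doping levels below are representative assumptions for silicon, not figures from the article.

```python
EPS_SI = 11.7 * 8.854e-14   # silicon permittivity, F/cm
Q = 1.602e-19               # electron charge, C
E_CRIT = 4e5                # representative avalanche critical field for Si, V/cm

def abrupt_junction_bv(n_drift):
    """Avalanche-limited breakdown of a one-sided abrupt Si junction (V).

    V_br = eps * E_crit^2 / (2 * q * N_d): a textbook first-order bound
    on the voltage a uniformly doped drift region can support.
    """
    return EPS_SI * E_CRIT ** 2 / (2 * Q * n_drift)

for n_d in (1e15, 5e15, 1e16, 5e16):   # drift doping in cm^-3
    print(f"N_d = {n_d:.0e} cm^-3 -> BV ~ {abrupt_junction_bv(n_d):.0f} V")
```

Graded, multi-implant drift profiles of the kind described above exist precisely to soften this trade-off between breakdown voltage and on-resistance.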
LDMOS
[ "Engineering" ]
449
[ "Electronic design", "Electronic engineering", "Design" ]
16,211,513
https://en.wikipedia.org/wiki/Stebbins%E2%80%93Whitford%20effect
The Stebbins–Whitford effect refers to the excess reddening of the spectra of elliptical galaxies as shown by measurements published by Joel Stebbins and Albert Whitford in 1948. The spectra were shifted much more to the red than the Hubble redshift could account for. Furthermore, this excess reddening increased with the distance of the galaxies. The effect was only found for elliptical and not for spiral galaxies. One possible explanation was that younger galaxies contain more red giants than older galaxies. This kind of evolution could not exist according to the steady-state theory. Later analysis of the same data showed that the data was inadequate to establish the claimed effect. After further measurements and analysis, Whitford withdrew the claim in 1956. See also Scientific phenomena named after people References Extragalactic astronomy Astronomical spectroscopy Physical cosmology Elliptical galaxies
Stebbins–Whitford effect
[ "Physics", "Chemistry", "Astronomy" ]
172
[ "Astronomical sub-disciplines", "Spectrum (physical sciences)", "Theoretical physics", "Astrophysics", "Astronomical spectroscopy", "Extragalactic astronomy", "Spectroscopy", "Physical cosmology" ]
16,211,966
https://en.wikipedia.org/wiki/Turbojet%20development%20at%20the%20RAE
Between 1936 and 1940 Alan Arnold Griffith designed a series of turbine engines that were built under the direction of Hayne Constant at the Royal Aircraft Establishment (RAE). The designs were advanced for the era, typically featuring a "two-spool" layout with high- and low-pressure compressors that individually had more stages than contemporary engines. Although advanced, the engines were also difficult to build, and only the much simpler "Freda" design would ever see production, as the Metrovick F.2 and later the Armstrong Siddeley Sapphire. Much of the pioneering work would later be used in Rolls-Royce designs, starting with the hugely successful Rolls-Royce Avon. Early work In 1920 W.J. Stern of the Air Ministry Laboratory in South Kensington wrote a report in response to an Aeronautical Research Committee (ARC) request about the possibilities of developing a gas turbine engine to drive a propeller. His report was extremely negative: given the performance of existing turbocompressors, such an engine appeared to be mechanically inefficient. In addition to high weight and poor fuel efficiency, Stern was skeptical that there were materials available that would be suitable for use in the high-heat areas of the turbine. Griffith, who was at this point the senior scientific officer at the RAE at Farnborough, read Stern's report and responded with a request that the National Physical Laboratory should study the materials problem. Griffith, meanwhile, started studying the issues with compressor design. In 1926 he published An Aerodynamic Theory of Turbine Design, which noted that existing compressor designs used flat blades that were essentially "flying stalled" and that efficiency could be dramatically improved by shaping them aerodynamically. In October, Griffith presented the paper to a small group from the Air Ministry and the RAE. They unanimously supported starting a development project to study Griffith's compressor designs. Initial work started in 1927, and by 1929 this project had progressed to the point of building an extremely simple "engine" consisting of a single-stage compressor and turbine with a single row of stators in front of each. Designed solely to test the basic concept, the rig nevertheless demonstrated superb aerodynamic efficiencies as high as 91%. At the same time, the RAE team introduced the "cascade", consisting of multiple rows of compressor blades attached to flat plates. Unconvinced that the aerodynamics of a single blade in a wind tunnel would match the real-world performance of a multi-stage compressor, the team used the cascade to test various compressor layouts simply by moving the plates on a mounting plate inside the wind tunnel. This also allowed the angle of attack to be easily varied by rotating the plates with respect to the airflow. According to NASA, one of the reasons UK engine design remained ahead of the US into the 1950s was that cascade tests and theory were widely used in the UK, while generally ignored in the US. CR.1 During this period Griffith was promoted to principal scientific officer at the Air Ministry's South Kensington Laboratory. Here he returned to theoretical work and published a report in November 1929 that outlined the design and theoretical performance of a 500 hp turbine engine driving a propeller. Contrary to Stern's earlier report, Griffith demonstrated that if the existing testbed design could be scaled up successfully, it would have performance far superior to existing piston engines.
The engine outlined in the report was quite complex, consisting primarily of a fourteen-stage gas generator. In contrast to typical designs where the compressor and turbine are separate and connected by a shaft, in the CR.1 design there was a series of disks that each held a single compressor stage on the inner circumference and a turbine stage on the outer. Each was independently mounted to a non-rotating support shaft in the center, and could turn independently of the other stages. They were arranged to rotate in opposite directions. Air was taken in at the rear of the engine, passed through the compressor stages in the center, entered a novel rotating combustion chamber that also reversed the direction of the airflow, and then exited the burners across the turbine stages at the outside. A separate turbine was used to power the propeller, or in later designs, a multi-stage fan. In April 1930 Griffith proposed building a testbed version of his design, but the ARC concluded that it was simply too far beyond the current state of the art. In 1931 Griffith returned to the RAE. At some point during this period he was given Frank Whittle's engine design using centrifugal compressors and returned a negative response; after pointing out minor errors in the calculations, he stated that the centrifugal design was inefficient and its large frontal size would make it unsuitable for aircraft use. He also stated that Whittle's idea of using the hot exhaust directly for thrust was inefficient and would not match the performance of existing engines, in spite of Whittle concentrating on high-speed use, where it would be more effective (propellers suffer a dramatic drop in efficiency as they approach the speed of sound, Mach 1). Sometime later, Armstrong Siddeley built a single example of this "contra-flow turbo-compressor", which was quite compact. However, air leaking between the compressor and turbine areas was a significant problem: as much as 50% of the air leaked between the seals, compared to a predicted 4%. Other issues included the large differences in temperatures along a single rotor due to the turbine and compressor being a single unit. The concept was not used for further developments. Anne and Betty In 1936 the ARC, now under the direction of Henry Tizard, returned to the turbine engine concept after learning that Whittle was going ahead with his designs at his new company, Power Jets. Tizard convinced Hayne Constant to return to the RAE from Imperial College to assist with the development of Griffith's designs. They set about building a version of the inner portion of the Griffith engine, known as Anne, consisting of the hub and eight compressor stages without the outer turbine portions. On its first run a faulty seal allowed the oil to drain from the engine, and the blading was stripped off after only 30 seconds of running. In 1937, while Anne was being built, Griffith visited Jakob Ackeret of Brown Boveri, another turbine pioneer, and became convinced that the compressor/stator design was superior to his own contra-rotating "all compressor" concept. After it was damaged, Anne was rebuilt using the new layout and started running again in October 1939. It continued to be used in tests until it was destroyed in a German bombing raid by KG 54 on 13 August 1940, "Eagle Day". At this point there was some debate as to how to proceed after Anne. The team, which included Griffith, Constant, Taffy Howell and D.
Carter, studied a number of approaches to building a complete engine, as opposed to the compressor-only Anne. They decided that the only reasonable solution to low compressor efficiency was to use what would today be referred to as a "two-spool" design, with separate high- and low-pressure compressors. However, the team considered the concentric shafts needed for this layout to be too complex (although the reasons for this are not clear), and there was some consideration of using two completely separate compressor/turbine sections "side-by-side". Eventually they settled on building one of the two engines that would be used in such a layout, in order to study the mechanical problems. The resulting Betty design consisted of a nine-stage compressor attached through a coupling to a four-stage turbine. A considerable amount of design effort went into various devices to relieve mechanical stress due to thermal expansion. For instance, the compressor and turbine blading was attached to large hollow rotors, which they felt would expand and contract more like the outer engine casing than a series of solid disks as used in Anne. The ends of the turbine rotor were closed with double cones, which had enough flexibility to expand with the rotor while still remaining solidly attached to the power shaft. The compressor and turbine were attached to each other through another rotor, allowing the two sections to be easily separated. When attached, they were arranged "inside out", with the compressor intake near the center of the engine and its outlet at one end. Here it entered two long tubes containing the combustion chambers, piping the resulting hot air to the other end of the engine, where it entered the turbine. The turbine outlet was next to the compressor inlet. Finally, the turbine was water-cooled, as it was believed that even the latest high-temperature alloys like Hadfield's ERA/ATV would eventually deform under constant operation. Betty, also known as B.10, was first tested as separate compressor and turbine sections using steam to power them. In October 1940 they were run as a single complete engine for the first time. During testing it was decided that the water cooling was not needed; it was replaced by an air cooling system, and the turbine was allowed to run red hot at 675 °C. Experiments with Betty convinced the team that any sort of piping between sections led to unacceptable losses, so the "distributed engine" concept Betty was built to test would likely be inefficient. At the same time, it was decided that overall pressure ratios on the order of 5:1 would be sufficient for near-term engines, so the two-spool approach was abandoned for the time being. A dead-end During construction, Constant produced a new report, The internal combustion turbine as a prime mover for aircraft, RAE Note E.3546. By this point several high-temperature alloys had become available with creep strength up to 700 °C, and Constant demonstrated that using these materials in an engine would produce what would now be called a turboprop that would outperform existing piston engines except at very low altitudes. Further, continued improvements in these metals would allow improvements in compression ratios that would lead to it being completely superior to piston engines in all ways. The report also pointed out that such an engine would be considerably less complex than a piston engine of similar power, and therefore more reliable.
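The 5:1 overall pressure ratio judged sufficient for near-term engines implies only a modest duty per stage, which is part of why axial designs of the era needed many stages where Whittle's centrifugal compressor used one. A quick arithmetic sketch, assuming isentropic compression with γ = 1.4 and using Betty's nine stages (function names are illustrative):

```python
def per_stage_ratio(overall_pr, n_stages):
    """Geometric-mean pressure ratio each stage must supply."""
    return overall_pr ** (1.0 / n_stages)

def ideal_temp_ratio(overall_pr, gamma=1.4):
    """Isentropic temperature ratio across the whole compressor."""
    return overall_pr ** ((gamma - 1.0) / gamma)

# A Betty-like 9-stage compressor aiming at the 5:1 overall ratio the
# team judged sufficient for near-term engines.
pr = per_stage_ratio(5.0, 9)
tr = ideal_temp_ratio(5.0)
print(f"~{pr:.2f} per stage; ideal outlet/inlet temperature ratio {tr:.2f}")
```

Each stage needs a pressure ratio of only about 1.2, within reach of the subsonic blading of the period, at the cost of mechanical complexity from the long multi-stage rotor.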
Based on the work with Betty and Constant's report, the ARC gave the team the go-ahead to build a complete turboprop engine. The new D.11 Doris design consisted of an enlarged Betty-like 17-stage compressor/8-stage turbine section, and a mechanically separate 5-stage low-pressure turbine to drive the propeller. Designed to provide about 2,000 hp, construction of Doris started in 1940. By this point in time Whittle's centrifugal-compressor designs were fully operational, and plans were underway to start production of early models. The progress had been so swift that Whittle's argument that the centrifugal layout was mechanically superior to the axial designs appeared to be borne out. Adding to their problems, in June 1939 Griffith had left the team and started work at Rolls-Royce. At Rolls he returned to his earlier "contraflow" designs and eventually produced such a design in 1944, but the concept was abandoned as being too complex. So even while Doris was being built, Whittle's successes meant it was considered outdated, and work proceeded slowly. It was not until 1941 that the Doris compressor started running, and in testing it demonstrated a number of problems related to high-speed airflow that could not be tested in the earlier cascade wind tunnel system. A new high-speed version was constructed to test these issues, and new blading designed to address the problems was added later in 1941. The Doris concept was then abandoned. The F.2 Before construction started on Doris, the RAE team had already turned their attention to the problem of delivering a usable "pure-jet" engine as quickly as possible. The earlier designs had been built with the assumption that overall airflow should be kept as low as possible and that the energy would be extracted through a propeller. This was not appropriate for a pure jet, where the airflow also provides the thrust. A new 9-stage compressor section known as Freda was designed, increasing in size to just over 22 inches in diameter and providing 50 lb/s airflow and a compression ratio of about 4:1. Freda proved successful, and in December 1939 was fitted with a turbine section to become the first self-running axial turbojet in England, the F.1, providing 2,150 lbf. Attention immediately turned to a slightly larger design, the F.1A of 2,690 lbf. There were a number of detail changes, including the removal of water cooling for the turbine and various enlargements to increase the mass flow from the F.1's 38 lb/s to 47.5 lb/s, closer to the original Freda design concept. As attention turned to a production design, Constant started organizing industrial partners with the manufacturing capability to set up serial production. In July 1940 Metropolitan-Vickers (Metrovick) joined the effort, as they were a major steam turbine manufacturer and would be ideally suited to rapid scale-up. The F.1A was turned over to Metrovick in July 1940, and a production effort started as the F.2. Further work The RAE continued working on axial compressor design after the F.2 success. The original Freda compressor was later enlarged into Sarah with the addition of a further five low-pressure stages as part of a collaboration with Armstrong Siddeley, and eventually became the ASX. The team also worked with the British General Electric Company on a series of axial compressor designs for other uses, and there was some exploration of axial-compressor-based superchargers known as E.5.
By this point, however, the British industrial companies had taken over much of the research and development effort, and the RAE team was no longer vital to continued development. It was later folded into the nationalized Power Jets to form the National Gas Turbine Establishment. None of the RAE designs would go on to be a success on their own. The F.2 design was not put into production, although an enlarged version was very successful as the Armstrong Siddeley Sapphire. Griffith's complex designs at Rolls never worked properly and were abandoned, but he turned his attention to the simpler F.2-like AJ.65 design and produced the even more successful Rolls-Royce Avon, and later to the world's first turbofan, the Rolls-Royce Conway. References Bibliography Kay, Antony, Turbojet, History and Development 1930-1960, Vol 1, Great Britain and Germany, pp. 12–20, Crowood Press, 2007. Jet engines
Turbojet development at the RAE
[ "Technology" ]
2,977
[ "Jet engines", "Engines" ]
16,213,903
https://en.wikipedia.org/wiki/Prestik
Prestik is a rubber-like temporary adhesive that is marketed in South Africa and manufactured by Bostik. It is water resistant and can be used at temperatures from −30 °C to 100 °C. It can be used to secure things in place, such as pieces of paper on walls or fridge doors. It is similar to Blu Tack. External links Bostik Prestik data sheet Manufacturer's Website Adhesives
Prestik
[ "Physics" ]
93
[ "Materials stubs", "Materials", "Matter" ]
16,214,166
https://en.wikipedia.org/wiki/Cray%20EL90
The Cray EL90 series was an air-cooled vector processor supercomputer first sold by Cray Research in 1993. The EL90 series evolved from the Cray Y-MP EL minisupercomputer, and is compatible with Y-MP software, running the same UNICOS operating system. The range comprised three models: EL92, with up to two processors and 64 megawords (512 MB) of DRAM in a deskside chassis: dimensions 42×23.5×26 inches or 1050×600×670 mm (height × width × depth) and 380 lb/172 kg in weight. EL94, with up to four processors and 64 megawords (512 MB) of DRAM, in the same cabinet as the EL92. EL98, a revised Y-MP EL with up to eight processors and 256 megawords (2 GB) of DRAM in a Y-MP EL-style cabinet (62×50×32 inches or 2010×1270×810 mm, 1400 lb/635 kg in weight). The EL90 series Input/Output Subsystem (IOS) was based on the VMEbus and a Heurikon HK68 Motorola 68000-based processor board (or IOP). The IOP also provided the system's serial console. All EL90 models could be powered from regular mains power. The EL90 series was superseded by the Cray J90 series. References Fred Gannett's Cray FAQ Cray EL boot sequence Computer-related introductions in 1993 El90 Vector supercomputers
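The quoted memory capacities convert directly, since machines of the Y-MP lineage use 64-bit (8-byte) words. A one-liner confirms the figures given above (illustrative code, not from the source):

```python
WORD_BITS = 64  # Cray vector machines of this lineage address 64-bit words

def megawords_to_mb(mwords):
    """Convert Cray megawords to megabytes (8 bytes per 64-bit word)."""
    return mwords * (WORD_BITS // 8)

for model, mw in (("EL92/EL94", 64), ("EL98", 256)):
    print(f"{model}: {mw} megawords = {megawords_to_mb(mw)} MB")
```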
Cray EL90
[ "Technology" ]
335
[ "Computing stubs", "Computer hardware stubs" ]
16,215,763
https://en.wikipedia.org/wiki/Chromium%28IV%29%20chloride
Chromium(IV) chloride (CrCl4) is an unstable chromium compound. It is generated by combining chromium(III) chloride and chlorine gas at elevated temperatures, but reverts to those substances at room temperature. References Chromium–halogen compounds Chlorides Metal halides Chromium(IV) compounds
Chromium(IV) chloride
[ "Chemistry" ]
71
[ "Chlorides", "Inorganic compounds", "Inorganic compound stubs", "Salts", "Metal halides" ]
16,215,818
https://en.wikipedia.org/wiki/Fault%20gouge
Fault gouge is a type of fault rock best defined by its grain size. It is found as incohesive fault rock (rock that can be broken into its component granules at the present outcrop, aided only by fingers or a pen-knife), with less than 30% clasts greater than 2 mm in diameter. Fault gouge forms in near-surface fault zones with brittle deformation mechanisms. Several properties of fault gouge influence its strength, including composition, water content, thickness, temperature, and the strain-rate conditions of the fault. Formation Fault gouge forms from localization of strain within fault zones under brittle conditions near the Earth’s surface. The grinding and milling from the two sides of the fault moving along each other results in grain size reduction and fragmentation. First, a fault breccia will form with more fragmental material, and with continued grinding the rock will transition into a fault gouge with fewer and smaller fragments, enhancing fluid-rock interaction that alters some minerals and produces clay. Both the rate and manner of slip in a fault zone, as well as the available fluids, can determine the formation of different fault rock varieties. Role of pore fluids The formation of faults is determined by the stress conditions in the Earth’s crust. Pore fluid pressure in a rock can significantly reduce the stress needed to induce faulting by reducing the effective normal stress. Fault gouge formation can decrease the permeability of the rock through the creation of clay minerals, leading to higher pore fluid pressures in a localized zone and to slip localization within the gouge. Cataclastic deformation Cataclastic deformation is one of the main modes of fault gouge formation, as fault gouge is a common product of cataclasis at low pressure and temperature conditions. It is dependent on friction and is considered a brittle deformation mechanism. Specifically, cataclasis involves the granulation of grains due to both brittle fracture and rigid-body rotation, in which mineral grains rotate in agreement with the fault-plane shear sense. The intensity of cataclasis is exhibited by a decrease in median grain size. In addition, the development of fault gouge may be accompanied by a degradation in sorting. Classification Fault rocks may be classified in terms of their textures, although the divisions are often gradational. Under the classification scheme proposed by Sibson, fault gouge is defined as an incohesive fault rock with randomly oriented fabric and less than 30% visible fragments comprising the rock. An incohesive fault rock with more than 30% fragments is a fault breccia, and cohesive fault rocks belong either to the cataclasite series (non-foliated) or the mylonite series (foliated). This was later modified to include foliated cataclasite. This classification scheme was further simplified for ease of classification in the field. It defined fault gouge as having less than 30% clasts greater than 2 mm, found as incohesive fault rock at the present outcrop. Based on this classification scheme, fault breccias can undergo subdivision (as chaotic, mosaic, and crackle breccias). This subdivision allows fault breccias to be foliated or non-foliated, cohesive or incohesive, and to contain a fine-grained matrix, small clasts, and even crystalline cement in varying proportions.
Properties, friction and fault strength The fault strength of a gouge depends on its composition, water content, thickness, and temperature, and it can easily be affected by any changes in effective normal stress and slip rate. These parameters all have an effect on the coefficient of friction. Byerlee's law Byerlee's law describes the frictional strength of a rock. It is as follows: τ = μ·σn, where τ is the shear stress, μ the coefficient of friction, and σn the normal stress. Composition The composition will have an impact on the slip behavior of a fault. A high frictional strength is associated with a composition high in strong minerals such as quartz and feldspar. The composition and concentration of clay minerals will affect the fault behavior in the brittle crust. Gouges dominated by clay minerals (montmorillonite, illite, and chlorite) are consistently weaker. Those with a high concentration of montmorillonite are significantly weaker than those with a composition high in chlorite or illite. Permeability The composition also affects the permeability of a gouge, an important parameter controlling fault mechanics and frictional stability. The presence of water will reduce the frictional resistance between the grains of phyllosilicate minerals. Also, the permeability before shearing is usually higher than after the deformation. However, the influence of shearing varies depending on the composition. For example, with montmorillonite or illite, a sharp decrease is visible in post-shear permeability. However, with minerals such as chlorite, the higher permeability will be maintained even after shearing. Because chlorite crystals form at higher pressure and temperature, they are more likely to remain as larger aggregates in shear zones than the smaller montmorillonite or illite grains, which explains why the permeability is less affected. Fault gouges rich in chlorite and quartz keep their high permeability to a significant depth. On the other hand, fault gouges with a low permeability, such as gouges high in clay minerals, are more susceptible to developing high pore pressures because fluid flow is unable to diffuse. Gouge thickness Gouge thickness increases over time with the accumulation of slip events along a fault. A larger thickness of fault gouge is associated with higher degrees of pore fluid pressure. Temperature As mentioned earlier, the frictional resistance of a gouge can change as temperature varies. However, its effect differs based on the mineral composition. For example, in the case of quartz gouges, an increase in temperature will most likely decrease the coefficient of friction, while a decrease in temperature leads to an increase in the coefficient of friction. Examples Bonita Fault: Found in New Mexico, near Tucumcari, this normal fault is an example of quartz gouge. Its gouge is found in the Mesa Rica Sandstone, within 40 m of the fault contact. This fault also exhibits many subsidiary faults and shear fractures within its fault zone (60 m wide). Hurricane Fault: This fault is found in Pintura, Utah, with its gouge found in the Coconino Sandstone. This is another example of quartz gouge. Nojima Fault Gouge: This fault produced thin oscillating foliations of pseudotachylite and fine fault gouge from granite at a depth of 3 km. San Andreas Fault Gouge: Consists of two active shear zones: the southwest deforming zone and the central deforming zone.
At the San Andreas Fault Observatory at Depth (SAFOD), both zones are composed largely of serpentinite porphyroclasts and sedimentary rock within a magnesium-rich clay matrix. Saponite, corrensite, quartz, and feldspars compose the southwest deforming zone. Saponite, quartz, and calcite compose the central deforming zone. Muddy Mountain Thrust: This fault is located in the southeast of Nevada, USA, and represents tens of kilometers of transport at near-surface or surface conditions. The fault gouge contains less than 30% fragments of hanging-wall dolomite and footwall sandstone clasts within a yellow-stained aggregate matrix with granular to foliated texture. See also Fault breccia Fault (geology) References Rocks Tectonics
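Byerlee's law and the effective-stress role of pore fluids described above combine in a few lines of Python. The friction coefficients and stresses below are representative assumptions (roughly 0.6 for quartz-rich gouge in the Byerlee range, roughly 0.2 for montmorillonite-rich gouge), not measurements from the article.

```python
def shear_strength(mu, sigma_n, pore_pressure=0.0, cohesion=0.0):
    """Frictional shear strength of a fault gouge, Byerlee-style.

    tau = c + mu * (sigma_n - P_f): pore-fluid pressure P_f lowers the
    effective normal stress and hence the shear stress needed for slip.
    All stresses in MPa.
    """
    return cohesion + mu * (sigma_n - pore_pressure)

sigma_n = 100.0                                    # hypothetical normal stress, MPa
quartz = shear_strength(0.6, sigma_n)              # Byerlee-range friction
clay = shear_strength(0.2, sigma_n)                # montmorillonite-rich gouge
wet = shear_strength(0.6, sigma_n, pore_pressure=60.0)
print(f"quartz-rich: {quartz:.0f} MPa, clay-rich: {clay:.0f} MPa, "
      f"overpressured quartz-rich: {wet:.0f} MPa")
```

The comparison shows why low-permeability, clay-rich gouges that trap fluids are doubly weak: both the friction coefficient and the effective normal stress are reduced.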
Fault gouge
[ "Physics" ]
1,592
[ "Rocks", "Physical objects", "Matter" ]
16,216,013
https://en.wikipedia.org/wiki/IC%20405
IC 405 (also known as the Flaming Star Nebula, SH 2-229, or Caldwell 31) is an emission and reflection nebula in the constellation Auriga north of the celestial equator, surrounding the bluish, irregular variable star AE Aurigae. It shines at magnitude +6.0. Its celestial coordinates are RA dec . It is located near the emission nebula IC 410, the open clusters M38 and M36, and the K-class star Iota Aurigae. The nebula measures approximately 37.0' x 19.0', and lies about 1,500 light-years away from Earth. It is believed that the proper motion of the central star can be traced back to the Orion's Belt area. The nebula is about 5 light-years across. Gallery See also Auriga (Chinese astronomy) Caldwell catalogue Cosmic dust List of largest nebulae Notes Sources External links Diffuse nebulae Emission nebulae Reflection nebulae 0405 031b 229 Auriga
IC 405
[ "Astronomy" ]
205
[ "Auriga", "Constellations" ]
16,217,389
https://en.wikipedia.org/wiki/Flavoxanthin
Flavoxanthin is a natural xanthophyll pigment with a golden-yellow color found in small quantities in a variety of plants. As a food additive, it is used under the E number E161a as a food coloring, although it is not approved for use in the EU or USA. It is listed as food additive 161a in Australia and New Zealand, where it is approved for use as an ingredient in food products. References Carotenoids Food colorings Tetraterpenes Benzofurans Cyclohexenes
Flavoxanthin
[ "Chemistry", "Biology" ]
111
[ "Biomarkers", "Carotenoids" ]
16,217,406
https://en.wikipedia.org/wiki/Violaxanthin
Violaxanthin is a xanthophyll pigment with an orange color found in a variety of plants. Violaxanthin is the product of the epoxidation of zeaxanthin, where the oxygen atoms come from reactive oxygen species (ROS). Such ROS arise when a plant is subjected to solar radiation so intense that not all of the light can be absorbed by the chlorophyll. Food coloring Violaxanthin is used as a food coloring under the E number E161e and INS number 161e. The coloring is not approved for use in food in the EU or the United States, but is allowed in Australia and New Zealand. Violaxanthin cycle Violaxanthin is a participant in the violaxanthin cycle. References Carotenoids Epoxides Food colorings Tetraterpenes
Violaxanthin
[ "Biology" ]
176
[ "Biomarkers", "Carotenoids" ]
16,217,417
https://en.wikipedia.org/wiki/Rubixanthin
Rubixanthin, or natural yellow 27, is a natural xanthophyll pigment with a red-orange color found in rose hips. As a food additive, it is used under the E number E161d as a food coloring; it is not approved for use in the USA or EU but is approved in Australia and New Zealand, where it is listed as 161d. References Carotenoids Food colorings Tetraterpenes Cyclohexenes
Rubixanthin
[ "Chemistry", "Biology" ]
96
[ "Biomarkers", "Carotenoids", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
16,218,101
https://en.wikipedia.org/wiki/EQ%20Virginis
EQ Virginis is a single variable star in the equatorial constellation of Virgo. It has a baseline visual apparent magnitude of 9.36, but is a flare star that undergoes sporadic bursts of brightening. The star is located at a distance of 67 light-years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −23 km/s. It is a member of the IC 2391 moving group of stars, which is between 30 and 50 million years old. This is an orange-hued K-type main-sequence star with a stellar classification of K5Ve, where the 'e' suffix indicates emission lines in the spectrum. It is a young, rapidly rotating star with a mean magnetic field strength of . The star is classified as an eruptive variable of the UV Ceti type and a BY Draconis variable. It shows strong chromospheric activity, with extensive star spots that, on average, cover ~24% of the surface. The star displays strong X-ray emission. References K-type main-sequence stars Flare stars BY Draconis variables Virgo (constellation) Durchmusterung objects 0517 118100 066252 Virginis, EQ Emission-line stars
EQ Virginis
[ "Astronomy" ]
260
[ "Virgo (constellation)", "Constellations" ]
16,218,441
https://en.wikipedia.org/wiki/Fuselloviridae
Fuselloviridae is a family of viruses. Sulfolobus species, specifically shibatae, solfataricus, and islandicus, serve as natural hosts. There are two genera and nine species in the family. The Fuselloviridae are ubiquitous in high-temperature (≥70 °C), acidic (pH ≤4) hot springs around the world. Taxonomy The family contains the following genera and species: Alphafusellovirus Sulfolobus spindle-shaped virus 1 Sulfolobus spindle-shaped virus 2 Sulfolobus spindle-shaped virus 4 Sulfolobus spindle-shaped virus 5 Sulfolobus spindle-shaped virus 7 Sulfolobus spindle-shaped virus 8 Sulfolobus spindle-shaped virus 9 Betafusellovirus Acidianus spindle-shaped virus 1 Sulfolobus spindle-shaped virus 6 Structure Viruses in Fuselloviridae are enveloped, with lemon-shaped geometries. The diameter is around 60 nm, with a length of 100 nm. Genomes consist of double-stranded circular DNA, around 17.3 kb in length. Biochemical characterization of SSV1, a prototypical fusellovirus, showed that virions are composed of four virus-encoded structural proteins, VP1 to VP4, as well as one DNA-binding chromatin protein of cellular origin. The virion proteins VP1, VP3, and VP4 undergo posttranslational modification by glycosylation, seemingly at multiple sites. VP1 is also proteolytically processed. SSV1 virions contain glycerol dibiphytanyl glycerol tetraether (GDGT) lipids, which appear to be acquired by the virus in a selective manner from the host cytoplasmic membrane. Life cycle Viral replication is cytoplasmic. Entry into the host cell is achieved by adsorption into the host cell. DNA templated transcription is the method of transcription. Sulfolobus shibatae, S. solfataricus, and S. islandicus serve as the natural host. Fuselloviruses are released from the host without causing cell lysis by a budding mechanism, similar to that employed by enveloped eukaryotic viruses. References External links Viralzone: Fuselloviridae ICTV Archaeal viruses Virus families
Fuselloviridae
[ "Biology" ]
493
[ "Archaea", "Archaeal viruses" ]
16,218,569
https://en.wikipedia.org/wiki/Ross%20458
Ross 458, also referred to as DT Virginis, is a binary star system in the constellation of Virgo. It has an apparent visual magnitude of 9.79 and is located at a distance of 37.6 light-years from the Sun. Both of the stars are low-mass red dwarfs, with at least one of them being a flare star. This binary system has a circumbinary sub-stellar companion. This star was mentioned as a suspected variable by M. Petit in 1957. In 1960, O. J. Eggen classified it as a member of the Hyades moving group based on the system's space motion; it is now considered a likely member of the Carina Near Moving Group. Two flares were reported from this star in 1969 by N. I. Shakhovskaya, confirming it as a flare star. It was identified as an astrometric binary in 1994 by W. D. Heintz, who found a period of 14.5 years. The pair were resolved using adaptive optics in 1999. Early mass estimates placed the companion near the substellar limit, and it was initially proposed as a brown dwarf, but it is now considered a late-type red dwarf. The primary member, component A, is an M-type main-sequence star with a stellar classification of M0.5. It is a young, magnetically very active star with a high rate of rotation and strong Hα emission. The star experiences star spots that cover 10–15% of the surface. It is smaller and less massive than the Sun. The star is radiating just 4.4% of the luminosity of the Sun from its photosphere at an effective temperature of 3,484 K. Planetary system A distant sub-stellar companion to the binary star system was discovered in 2010 as part of a deep infrared sky survey. This is most likely a T8 spectral type brown dwarf with an estimated rotation period of . The object varies slightly in brightness, which may be due to patchy clouds. The companion lacks detectable oxygen in its atmosphere, implying its formation from a sequestered source or peculiar atmospheric chemistry. See also CM Draconis GU Piscium b HD 106906 b Kepler-16 Lists of exoplanets NN Serpentis QS Virginis WD 0806-661 References External links Simbad M-type main-sequence stars Flare stars Binary stars Planetary systems with one confirmed planet Virgo (constellation) 0494 063510 Virginis, DT
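The quoted luminosity and effective temperature of the primary fix its radius through the Stefan–Boltzmann law, L = 4πR²σT⁴. A short Python sketch using the values from the text above (the function name is illustrative; the solar temperature is the IAU nominal value):

```python
import math

T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def radius_solar(lum_solar, t_eff):
    """Stellar radius in solar radii from L = 4*pi*R^2*sigma*T^4,
    expressed as the ratio R/R_sun = sqrt(L/L_sun) * (T_sun/T)^2."""
    return math.sqrt(lum_solar) * (T_SUN / t_eff) ** 2

# Ross 458 A: 4.4% of the solar luminosity at T_eff = 3,484 K.
print(f"R ~ {radius_solar(0.044, 3484.0):.2f} R_sun")
```

The result, roughly 0.58 solar radii, is consistent with the article's statement that the star is smaller than the Sun and with typical M0.5V dwarfs.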
Ross 458
[ "Astronomy" ]
511
[ "Virgo (constellation)", "Constellations" ]
16,218,692
https://en.wikipedia.org/wiki/Guttaviridae
Guttaviridae is a family of viruses. Archaea serve as natural hosts. The family now contains a single genus with one species; a second genus and species were removed in 2021. The name is derived from the Latin gutta, meaning 'droplet'. Taxonomy The family currently contains one genus and species: Betaguttavirus Aeropyrum pernix ovoid virus 1 Genus Alphaguttavirus and species Sulfolobus newzealandicus droplet-shaped virus were removed in ICTV version 2021. Structure Viruses in the family Guttaviridae are enveloped. The diameter is around 70–95 nm, with a length of 110–185 nm. Genomes are circular, around 20 kb in length. The virions consist of a coat, a core, a nucleocapsid, and projecting fibers at the pointed end. The surface of the virion has a beehive-like ribbed pattern with protrusions that are densely covered by a 'beard' of long fibers at its pointed end. The genome is extremely heavily methylated. Life cycle DNA-templated transcription is the method of transcription. Archaea serve as the natural host. References External links ICTV Online Report Guttaviridae Viralzone: Guttaviridae Viralzone: Alphaguttavirus Viralzone: Betaguttavirus Archaeal viruses Virus families
Guttaviridae
[ "Biology" ]
275
[ "Archaea", "Archaeal viruses" ]
16,219,011
https://en.wikipedia.org/wiki/R%20Virginis
R Virginis is a Mira variable in the constellation Virgo. Located approximately distant, it varies between magnitudes 6.1 and 12.1 over a period of approximately 146 days. Its variable nature was discovered by Karl Ludwig Harding in 1809. References Mira variables Virgo (constellation) M-type giants Virginis, R 4808 109914 061667 Durchmusterung objects Emission-line stars
R Virginis
[ "Astronomy" ]
86
[ "Virgo (constellation)", "Constellations" ]
16,219,399
https://en.wikipedia.org/wiki/S%20Virginis
S Virginis is a Mira-type variable star in the constellation Virgo. Located approximately distant, it varies between magnitudes 6.3 and 13.2 over a period of approximately 375 days. References Mira variables Virgo (constellation) M-type giants Virginis, S 5101 117833 066100 Durchmusterung objects
S Virginis
[ "Astronomy" ]
72
[ "Virgo (constellation)", "Constellations" ]
16,219,820
https://en.wikipedia.org/wiki/Act%20to%20Prevent%20Pollution%20from%20Ships
The Act to Prevent Pollution from Ships (APPS, 33 U.S.C. §§1905–1915) is a United States law that implements the provisions of MARPOL 73/78 and the annexes of MARPOL to which the United States is a party. The most recent U.S. action concerning MARPOL occurred in April 2006, when the U.S. Senate approved Annex VI, which regulates air pollution (Treaty Doc. 108–7, Exec. Rept. 109-13). Following that approval, in March 2007, the House of Representatives approved legislation to implement the standards in Annex VI (H.R. 802), through regulations to be promulgated by the Environmental Protection Agency in consultation with the U.S. Coast Guard. APPS applies to all U.S.-flagged ships anywhere in the world, and to all foreign-flagged vessels operating in navigable waters of the United States or while at port under U.S. jurisdiction. The Coast Guard has primary responsibility to prescribe and enforce regulations necessary to implement APPS in these waters. The regulatory mechanism established in APPS to implement MARPOL is separate and distinct from the Clean Water Act and other federal environmental laws. The original legislation, H.R. 6665, was passed by the 96th United States Congress and signed by President Jimmy Carter on October 21, 1980. See also Clean Water Act Merchant Marine Act of 1920 Merchant Shipping (Pollution) Act 2006 Regulation of ship pollution in the United States References This article is based on a public domain Congressional Research Service report: Copeland, Claudia. "Cruise Ship Pollution: Background, Laws and Regulations, and Key Issues" (Order Code RL32450). Congressional Research Service (Updated February 6, 2008). External links The Act online 1980 in American law 96th United States Congress Pollution in the United States United States federal environmental legislation Environmental impact of shipping Ocean pollution
Act to Prevent Pollution from Ships
[ "Chemistry", "Environmental_science" ]
392
[ "Ocean pollution", "Water pollution" ]
16,220,091
https://en.wikipedia.org/wiki/Cruise%20ship%20pollution%20in%20the%20United%20States
Cruise ships carrying several thousand passengers and crew have been compared to “floating cities,” and the volume of wastes that they produce is comparably large, consisting of sewage; wastewater from sinks, showers, and galleys (graywater); hazardous wastes; solid waste; oily bilge water; ballast water; and air pollution. The waste streams generated by cruise ships are governed by a number of international protocols (especially MARPOL) and U.S. domestic laws (including the Clean Water Act and the Act to Prevent Pollution from Ships), regulations, and standards, but there is no single law or rule. Some cruise ship waste streams appear to be well regulated, such as solid wastes (garbage and plastics) and bilge water. But there is overlap in some areas, and there are gaps in others. In 2000, the U.S. Congress enacted legislation restricting cruise ship discharges in U.S. navigable waters within the state of Alaska. California and Maine have enacted state-specific laws concerning cruise ship pollution, and a few other states have entered into voluntary agreements with industry to address management of cruise ship discharges. Meanwhile, the cruise industry has voluntarily undertaken initiatives to improve pollution prevention, by adopting waste management guidelines and procedures and researching new technologies. Concerns about cruise ship pollution raise issues for Congress in three broad areas: adequacy of laws and regulations, research needs, and oversight and enforcement of existing requirements. Legislation to regulate cruise ship discharges of sewage, graywater, and bilge water nationally was introduced in the 109th Congress, but there was no further congressional action. This article describes the several types of waste streams that cruise ships may discharge and emit. It identifies the complex body of international and domestic laws that address pollution from cruise ships. It then describes federal and state legislative activity concerning cruise ships in Alaskan waters and activities in a few other states, as well as current industry initiatives to manage cruise ship pollution. Background More than 46,000 commercial vessels — tankers, bulk carriers, container ships, barges, and passenger ships — travel the oceans and other waters of the world, carrying cargo and passengers for commerce, transport, and recreation. Their activities are regulated and scrutinized in a number of respects by international protocols and U.S. domestic laws, including those designed to protect against discharges of pollutants that could harm marine resources, other parts of the ambient environment, and human health. However, there are overlaps of some requirements, gaps in other areas, geographic differences in jurisdiction based on differing definitions, and questions about the adequacy of enforcement. Public attention to the environmental impacts of the maritime industry has been especially focused on the cruise industry, in part because its ships are highly visible and in part because of the industry's desire to promote a positive image. It represents a relatively small fraction of the entire shipping industry worldwide. As of January 2008, passenger ships (which include cruise ships and ferries) composed about 12% of the world shipping fleet. The cruise industry is a significant and growing contributor to the U.S. economy, providing more than $32 billion in total benefits annually and generating more than 330,000 U.S. jobs, but also making the environmental impacts of its activities an issue to many.
Since 1980, the average annual growth rate in the number of cruise passengers worldwide has been 8.4%, and in 2005, cruises hosted an estimated 11.5 million passengers. Cruises are especially popular in the United States. In 2005, U.S. ports handled 8.6 million cruise embarkations (75% of global passengers), 6.3% more than in 2004. The worldwide cruise ship fleet consists of more than 230 ships, and the majority are foreign-flagged, with Liberia and Panama being the most popular flag countries. Foreign-flag cruise vessels owned by six companies account for nearly 95% of passenger ships operating in U.S. waters. Each year, the industry adds new ships to the total fleet, vessels that are bigger, more elaborate and luxurious, and that carry larger numbers of passengers and crew. Over the past two decades, the average ship size has been increasing at the rate of roughly every five years. The average ship entering the market from 2008 to 2011 will be more than long and will weigh more than 130,000 tons. To the cruise ship industry, a key issue is demonstrating to the public that cruising is safe and healthy for passengers and the tourist communities that are visited by their ships. Cruise ships carrying several thousand passengers and crew have been compared to “floating cities,” in part because the volume of wastes produced and requiring disposal is greater than that of many small cities on land. During a typical one-week voyage, a large cruise ship (with 3,000 passengers and crew) is estimated to generate of sewage; of graywater (wastewater from sinks, showers, and laundries); more than of hazardous wastes; 8 tons of solid waste; and of oily bilge water. Passengers can singlehandedly produce up to 7.7 pounds of waste in a single day aboard a cruise ship. Those wastes, if not properly treated and disposed of, can pose risks to human health, welfare, and the environment. Environmental advocates have raised concerns about the adequacy of existing laws for managing these wastes, and suggest that enforcement of existing laws is weak. A 2000 Government Accountability Office (GAO) report focused attention on problems of cruise vessel compliance with environmental requirements. GAO found that between 1993 and 1998, foreign-flag cruise ships were involved in 87 confirmed illegal discharge cases in U.S. waters. A few of the cases included multiple illegal discharge incidents occurring over the six-year period. GAO reviewed three major waste streams (solids, hazardous chemicals, and oily bilge water) and concluded that 83% of the cases involved discharges of oil or oil-based products, the volumes of which ranged from a few drops to hundreds of gallons. The balance of the cases involved discharges of plastic or garbage. GAO judged that 72% of the illegal discharges were accidental, 15% were intentional, and 13% could not be determined. The 87 cruise ship cases represented 4% of the 2,400 illegal discharge cases by foreign-flag ships (including tankers, cargo ships and other commercial vessels, as well as cruise ships) confirmed during the six years studied by GAO. Although cruise ships operating in U.S. waters have been involved in a relatively small number of pollution cases, GAO said, several have been widely publicized and have led to criminal prosecutions and multimillion-dollar fines. In 2000, a coalition of 53 environmental advocacy groups petitioned the Environmental Protection Agency (EPA) to take regulatory action to address pollution by cruise ships.
The petition called for an investigation of wastewater, oil, and solid waste discharges from cruise ships; it was amended in 2000 to request that EPA also examine air pollution from cruise ships. In response, EPA agreed to study cruise ship discharges and waste management approaches. As part of that effort, EPA issued a background document in 2000 with preliminary information and recommendations for further assessment through data collection and public information hearings. The agency released its final Cruise Ship Discharge Assessment Report in 2009. The report summarized findings of recent data collection activities, especially from cruise ships operating in Alaskan waters. Concurrently, litigation regarding the National Pollutant Discharge Elimination System (NPDES) permit program led to a 2008 decision by the Ninth Circuit Court, ruling that EPA could not exclude vessel discharges from NPDES requirements. Subsequently, EPA issued an initial Vessel General Permit (VGP) with an effective date of February 6, 2009. Cruise ship waste streams Cruise ships generate a number of waste streams that can result in discharges to the marine environment, including sewage, graywater, hazardous wastes, oily bilge water, ballast water, and solid waste. They also release pollutants to the air and water. These wastes, if not properly treated and disposed of, can be a significant source of pathogens, nutrients, and toxic substances with the potential to threaten human health and damage aquatic life. Cruise ships represent a small — although highly visible — portion of the entire international shipping industry, and the waste streams described here are not unique to cruise ships. However, particular types of wastes, such as sewage, graywater, and solid waste, may be of greater concern for cruise ships relative to other seagoing vessels, because of the large numbers of passengers and crew that cruise ships carry and the large volumes of wastes that they produce. Further, because cruise ships tend to concentrate their activities in specific coastal areas and visit the same ports repeatedly (especially Florida, California, New York City, Galveston, Seattle, and the waters of Alaska), their cumulative impact on a local scale could be significant, as can the impacts of individual large-volume releases (either accidental or intentional). International laws and regulations MARPOL 73/78 is one of the most important treaties regulating pollution from ships. Six Annexes of the Convention cover the various sources of pollution from ships and provide an overarching framework for international objectives. In the U.S., the convention is implemented through the Act to Prevent Pollution from Ships. Under the provisions of the convention, the United States can take direct enforcement action under U.S. laws against foreign-flagged ships when pollution discharge incidents occur within U.S. jurisdiction. When incidents occur outside U.S. jurisdiction or jurisdiction cannot be determined, the United States refers cases to flag states, in accordance with MARPOL. These procedures require substantial coordination between the Coast Guard, the State Department, and other flag states, and the response rate from flag states has been poor. Federal laws and regulations In the United States, several federal agencies have some jurisdiction over cruise ships in U.S. waters, but no one agency is responsible for or coordinates all of the relevant government functions. The U.S.
Coast Guard and EPA have principal regulatory and standard-setting responsibilities. Cruise ships that are 79 feet in length or greater are subject to the requirements of the EPA Vessel General Permit (VGP). The most recent VGP was published in 2013. EPA issued its Small Vessel General Permit (sVGP) for smaller cruise ships in 2014; however, this permit applied only to ballast water. In 2018 Congress repealed the sVGP under the Vessel Incidental Discharge Act (VIDA). As of 2021 small vessels are subject to the ballast water requirements of the VGP, Coast Guard regulations, and applicable state and local government requirements. On October 26, 2020, EPA published proposed VIDA implementation regulations. The Department of Justice prosecutes violations of federal laws. In addition, the Department of State represents the United States at meetings of the IMO and in international treaty negotiations and is responsible for pursuing foreign-flag violations. Other federal agencies have limited roles and responsibilities. For example, the National Oceanic and Atmospheric Administration (NOAA, Department of Commerce) works with the Coast Guard and EPA to report on the effects of marine debris. The Animal and Plant Health Inspection Service (APHIS) is responsible for ensuring quarantine inspection and disposal of food-contaminated garbage (these APHIS responsibilities are part of the Department of Homeland Security). In some cases, states and localities have responsibilities as well. Sewage Commercial vessels are required to obtain NPDES permits pursuant to Section 402 of the Clean Water Act. Section 312 of the Act prohibits the dumping of untreated or inadequately treated sewage from vessels into the navigable waters of the United States (defined in the act as within 3 miles of shore). Cruise ships are subject to this prohibition. Commercial and recreational vessels with installed toilets are required to have marine sanitation devices, which are designed to prevent the discharge of untreated sewage. Beyond 3 miles, raw sewage can be discharged. On some cruise ships, especially many of those that travel in Alaskan waters, sewage is treated using Advanced Wastewater Treatment (AWT) systems that generally provide improved screening, treatment, disinfection, and sludge processing as compared with traditional MSDs (marine sanitation devices). AWTs are believed to be very effective in removing pathogens, oxygen-demanding substances, suspended solids, oil and grease, and particulate metals from sewage, but only moderately effective in removing dissolved metals and nutrients (ammonia, nitrogen and phosphorus). States may also establish no-discharge zones (NDZs) for vessel sewage, under section 312. A state may completely prohibit the discharge of both treated and untreated sewage from all vessels with installed toilets into some or all waters over which it has jurisdiction (up to 3 miles from land). As of 2017, this designation has been used for 72 areas representing part or all of the waters of 26 states, including a number of inland states. Graywater Graywater discharges from large cruise ships are regulated by the 2013 VGP. Pursuant to a state law in Alaska, graywater must be treated prior to discharge into that state's waters. Solid waste Cruise ship discharges of solid waste are governed by two federal laws.
Title I of the Marine Protection, Research and Sanctuaries Act makes it illegal to transport garbage from the United States for the purpose of dumping it into ocean waters without a permit or to dump material from outside the U.S. into U.S. waters. Beyond U.S. waters, no MPRSA permit is required for a cruise ship to discharge solid waste. The routine discharge of effluent incidental to the propulsion of vessels is explicitly exempted from the definition of dumping in the MPRSA. The Act to Prevent Pollution from Ships prohibits the discharge of all garbage within 3 nautical miles of shore, certain types of garbage within 12 or 25 nautical miles offshore, and plastic anywhere. It applies to all vessels operating in U.S. navigable waters and the Exclusive Economic Zone (EEZ). Hazardous waste The Resource Conservation and Recovery Act (RCRA) is the primary federal law that governs hazardous waste management. The owner or operator of a cruise ship may be a generator and/or a transporter of hazardous waste, and thus subject to RCRA rules. Issues that the cruise ship industry may face relating to RCRA include ensuring that hazardous waste is identified at the point at which it is considered generated; ensuring that parties are properly identified as generators, storers, treaters, or disposers; and determining the applicability of RCRA requirements to each. Hazardous wastes generated on board cruise ships are stored onboard until they can be offloaded for recycling or disposal in accordance with RCRA. A range of activities on board cruise ships generate hazardous wastes and toxic substances that would ordinarily be presumed to be subject to RCRA. Cruise ships are potentially subject to RCRA requirements to the extent that chemicals used for operations such as ship maintenance and passenger services result in the generation of hazardous wastes. However, it is not entirely clear what regulations apply to the management and disposal of these wastes. RCRA rules that cover small-quantity generators (those that generate more than 100 kilograms but less than 1,000 kilograms of hazardous waste per month) are less stringent than those for large-quantity generators (generating more than 1,000 kilograms per month), and it is unclear whether cruise ships are classified as large or small generators of hazardous waste. Moreover, some cruise companies argue that they generate less than 100 kilograms per month and therefore should be classified in a third category, as “conditionally exempt small-quantity generators,” a categorization that allows for less rigorous requirements for notification, recordkeeping, and the like. In addition to RCRA, hazardous waste discharges from cruise ships are subject to Section 311 of the Clean Water Act, which prohibits the discharge of hazardous substances in harmful quantities into or upon the navigable waters of the United States, adjoining shorelines, or into or upon the waters of the contiguous zone. Bilge water Section 311 of the Clean Water Act, as amended by the Oil Pollution Act of 1990, applies to cruise ships and prohibits discharge of oil or hazardous substances in harmful quantities into or upon U.S. navigable waters, or into or upon the waters of the contiguous zone, or which may affect natural resources in the U.S. EEZ (extending 200 nautical miles offshore). Coast Guard regulations prohibit discharge of oil within 12 nautical miles from shore, unless passed through a 15-ppm oil water separator, and unless the discharge does not cause a visible sheen.
Beyond 12 nautical miles, oil or oily mixtures can be discharged while a vessel is proceeding en route and if the oil content without dilution is less than 100 ppm. Vessels are required to maintain an Oil Record Book to record disposal of oily residues and discharges overboard or disposal of bilge water. In addition to Section 311 requirements, the Act to Prevent Pollution from Ships (APPS) implements MARPOL Annex I concerning oil pollution. APPS applies to all U.S. flagged ships anywhere in the world and to all foreign flagged vessels operating in the navigable waters of the United States, or while at a port under U.S. jurisdiction. To implement APPS, the Coast Guard has promulgated regulations prohibiting the discharge of oil or oily mixtures into the sea within 12 nautical miles of the nearest land, except under limited conditions. However, because most cruise lines are foreign registered and because APPS only applies to foreign ships within U.S. navigable waters, the APPS regulations have limited applicability to cruise ship operations. In addition, most cruise lines have adopted policies that restrict discharges of machinery space waste within three miles (5 km) from shore. Cruise ship efforts to reduce pollution Following the 2000 GAO report, twelve cruise ship companies that had violated pollution regulations implemented new environmental strategies between 2003 and 2008. Several cruise lines within this group and others that had not been cited for violations sought to ameliorate their pollution of marine waters. Some of these companies reduced their waste production by switching from single-use plastic packaging to reusable containers for food prepared on board and cutlery used on the pool decks. Princess Cruises and Royal Caribbean Cruise Line acknowledged the waste problem, and adopted reusable items to work towards a resolution. Recycling programs have also been implemented, and, because of the use of compactors on ships, plastics and aluminum are condensed and made easily recyclable upon the ships' return to port. The United States alone gains upward of 18,000 pounds of recycled goods from these practices. Some cruise lines have also taken steps to reduce air pollution by using alternative, cleaner energy sources, which has positive implications for marine waters as well. Shorepower (cold ironing), battery power, fuel cells, and liquefied natural gas (LNG) are among sources that are being explored. Carnival Corporation is one example of a cruise line that is currently pursuing cleaner energy. In 2016 Carnival announced that it had agreed to partner with Shell and utilize LNG to power its next two new ships. Liquefied natural gas, a fossil fuel, is a much cleaner source of energy than oil and also much less expensive, to the benefit of cruise companies. Transitioning away from oil will help cruise lines combat their problems with bilge pollution. Cruise lines' efforts to reduce waste, recycle, and adopt cleaner energy sources will help alleviate their pollution of marine waters. Repeat offenders and imposed sanctions Some large cruise shipping lines have violated regulations by illegally bypassing the onboard oily water separator and discharging untreated oily wastewater. These violations by means of a so-called "magic pipe" have been prosecuted and resulted in large fines. Ship employees may be prosecuted and subject to imprisonment and fines.
In 2002, the Carnival Corporation pleaded guilty in United States District Court in Miami to criminal charges related to falsifying records of the oil-contaminated bilge water that six of its ships dumped into the sea from 1996 through 2001. The Carnival Corporation was ordered to pay $18 million in fines, perform community service, serve five years' probation, and submit to a court-supervised worldwide environmental-compliance program for each of its cruise ships. For dumping oily waste into the sea and attempting to cover it up, Princess Cruise Lines was fined $40 million in 2016. According to federal authorities, it was the "largest-ever criminal penalty" for intentional vessel pollution. Officials said that these practices began in 2005 and persisted until August 2013, when a newly hired engineer blew the whistle. As part of its plea agreement, ships of the parent company Carnival Corporation were subjected to a court-supervised environmental compliance plan for five years. For violating the terms of the 2016 probation, Carnival and its Princess line were ordered to pay an additional $20 million penalty in 2019. The new violations included discharging plastic into waters in the Bahamas, falsifying records and interfering with court supervision. See also Cruise ship pollution in Europe Environmental issues in the United States Marine Environmental Protection (U.S. Coast Guard) References This article is based on a public domain Congressional Research Service report: Copeland, Claudia. "Cruise Ship Pollution: Background, Laws and Regulations, and Key Issues" (Order Code RL32450). Congressional Research Service (Updated February 6, 2008). External links Friends of the Earth - Cruise Ship Report Card "Vessels, Marinas and Ports" - EPA regulatory information and studies "NPDES Program—Vessels: Incidental Discharge Permitting" - EPA VGP and sVGP information Water pollution in the United States Environmental impact of shipping Ocean pollution Cruise lines
Cruise ship pollution in the United States
[ "Chemistry", "Environmental_science" ]
4,365
[ "Ocean pollution", "Water pollution" ]
16,220,305
https://en.wikipedia.org/wiki/PSR%20B1828%E2%88%9211
PSR B1828-11 (also known as PSR B1828-10) is a pulsar approximately 10,000 light-years away in the constellation of Scutum. The star exhibits variations in the timing and shape of its pulses: this was at one stage interpreted as due to a possible planetary system in orbit around the pulsar, though the model required an anomalously large second period derivative of the pulse times. The planetary model was later discarded in favour of precession effects as the planets could not cause the observed shape variations of the pulses. While the generally accepted model is that the pulsar is a neutron star undergoing free precession, a model has been proposed that interprets the pulsar as a quark star undergoing forced precession due to an orbiting "quark planet". The entry for the pulsar on SIMBAD lists this hypothesis as being controversial. References Scutum (constellation)
PSR B1828−11
[ "Astronomy" ]
199
[ "Scutum (constellation)", "Constellations" ]
16,220,399
https://en.wikipedia.org/wiki/Citranaxanthin
Citranaxanthin is a carotenoid pigment used as a food additive under the E number E161i as a food coloring. There are natural sources of citranaxanthin, but it is generally prepared synthetically. It is used as an animal feed additive to impart a yellow color to chicken fat and egg yolks. References Carotenoids Food colorings Cyclohexenes
Citranaxanthin
[ "Chemistry", "Biology" ]
89
[ "Biomarkers", "Carotenoids", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
16,220,413
https://en.wikipedia.org/wiki/Rhodoxanthin
Rhodoxanthin is a xanthophyll pigment with a purple color that is found in small quantities in a variety of plants including Taxus baccata and Lonicera morrowii. It is also found in the feathers of some birds. As a food additive it is used under the E number E161f as a food coloring. It is not approved for use in the EU or US; however, it is approved in Australia and New Zealand (where it is listed under its INS number 161f). References Carotenoids Food colorings Tetraterpenes Diketones Cyclohexenes
Rhodoxanthin
[ "Chemistry", "Biology" ]
131
[ "Biomarkers", "Carotenoids", "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
16,220,926
https://en.wikipedia.org/wiki/Regulation%20of%20ship%20pollution%20in%20the%20United%20States
In the United States, several federal agencies and laws have some jurisdiction over pollution from ships in U.S. waters. States and local government agencies also have responsibilities for ship-related pollution in some situations. International laws and regulations MARPOL 73/78 (the "International Convention for the Prevention of Pollution From Ships") is one of the most important treaties regulating pollution from ships. Six Annexes of the Convention cover the various sources of pollution from ships and provide an overarching framework for international objectives. In the U.S., the Convention is implemented through the Act to Prevent Pollution from Ships (APPS). Under the provisions of the Convention, the United States can take direct enforcement action under U.S. laws against foreign-flagged ships when pollution discharge incidents occur within U.S. jurisdiction. When incidents occur outside U.S. jurisdiction or jurisdiction cannot be determined, the United States refers cases to flag states, in accordance with MARPOL. These procedures require substantial coordination between the Coast Guard, the State Department, and other flag states, and the response rate from flag states has been poor. Different regulations apply to vessels, depending on the individual state. Federal laws and regulations In the United States, several federal agencies have some jurisdiction over ships in U.S. waters, but no one agency is responsible for or coordinates all of the relevant government functions. The U.S. Coast Guard and Environmental Protection Agency (EPA) have principal regulatory and standard-setting responsibilities, and the Department of Justice prosecutes violations of federal laws. EPA and the Department of Defense (DOD) began jointly issuing Uniform National Discharge Standards ("UNDS") for armed forces vessels in 2017. In addition, the Department of State represents the United States at meetings of the IMO and in international treaty negotiations and is responsible for pursuing foreign-flag violations. Other federal agencies have limited roles and responsibilities. For example, the National Oceanic and Atmospheric Administration (NOAA, Department of Commerce) works with the Coast Guard and EPA to report on the effects of marine debris. The Animal and Plant Health Inspection Service (APHIS) is responsible for ensuring quarantine inspection and disposal of food-contaminated garbage. In some cases, states and localities have responsibilities as well. Vessel General Permit EPA issued its most recent Vessel General Permit (VGP), under the National Pollutant Discharge Elimination System (NPDES), a Clean Water Act (CWA) program, in 2013.
The permit applies to large commercial vessels (79 feet in length or greater) (except fishing vessels) and regulates 26 specific types of vessel discharges: Deck Washdown and Runoff and Above Water Line Hull Cleaning Bilge water/Oily Water Separator Effluent Ballast water Anti-fouling Hull Coatings/Hull Coating Leachate Aqueous film forming foam (AFFF) Boiler/Economizer Blowdown Cathodic protection Chain Locker Effluent Controllable Pitch Propeller and Thruster Hydraulic Fluid and other Oil Sea Interfaces including Lubrication Discharges from Paddle Wheel Propulsion, Stern Tubes, Thruster Bearings, Stabilizers, Rudder Bearings, Azimuth Thrusters, and Propulsion Pod Lubrication, and Wire Rope and Mechanical Equipment Subject to Immersion Distillation and Reverse Osmosis Brine Elevator Pit Effluent Firemain Systems Freshwater Layup Gas Turbine Washwater Graywater (except certain commercial vessels operating in the Great Lakes) Motor Gasoline and Compensating Discharge Non-Oily Machinery Wastewater Refrigeration and Air Condensate Discharge Seawater Cooling Overboard Discharge (Including Non-Contact Engine Cooling Water; Hydraulic System Cooling Water, Refrigeration Cooling Water) Seawater Piping Biofouling Prevention Boat Engine Wet Exhaust Sonar Dome Discharge Underwater Ship Husbandry Welldeck Discharges Graywater Mixed with Sewage from Vessels Exhaust Gas Scrubber Washwater Discharge. In 2016 EPA estimated that approximately 69,000 vessels, both domestic and foreign flagged, were covered by the VGP. EPA issued its Small Vessel General Permit (sVGP) for smaller commercial vessels in 2014; however, this permit applied only to ballast water. In 2018 Congress repealed the sVGP under the Vessel Incidental Discharge Act. As of 2023 small vessels are subject to the ballast water requirements of the VGP, Coast Guard regulations, and applicable state and local government requirements. Vessel Incidental Discharge Act The Vessel Incidental Discharge Act (VIDA), approved in 2018, requires EPA to develop new performance standards for vessel discharges, and generally requires that the new standards be at least as stringent as the 2013 VGP. On October 26, 2020, EPA published proposed VIDA implementation regulations. Until this proposal is finalized, the existing EPA discharge permits and Coast Guard regulations remain in effect. Sewage Commercial vessels discharging sewage, except fishing vessels, are subject to the VGP or sVGP requirements. Recreational vessels are exempt from the permit requirements, but vessel operators must implement Best Management Practices to control their discharges. Marine sanitation devices Section 312 of the CWA prohibits the dumping of untreated or inadequately treated sewage from vessels into the navigable waters of the United States (defined as within 3 miles of shore). It is implemented jointly by EPA and the Coast Guard. Under Section 312, commercial and recreational vessels with installed toilets are required to have marine sanitation devices (MSDs), which are designed to prevent the discharge of untreated sewage. EPA is responsible for developing performance standards for MSDs, and the Coast Guard is responsible for MSD design and operation regulations and for certifying MSD compliance with the EPA rules. MSDs are designed either to hold sewage for shore-based disposal or to treat sewage prior to discharge. The Coast Guard regulations cover three types of MSDs. Large vessels use either Type II or Type III MSDs.
In Type II MSDs, the waste is either chemically or biologically treated prior to discharge and must meet limits of no more than 200 fecal coliforms per 100 milliliters and no more than 150 milligrams per liter of suspended solids. Type III MSDs store wastes and do not treat them; the waste is pumped out later and treated in an onshore system or discharged outside U.S. waters. Type I MSDs use chemicals to disinfect the raw sewage prior to discharge and must meet a performance standard for fecal coliform bacteria of not greater than 1,000 per 100 milliliters and no visible floating solids. Type I MSDs are generally only found on recreational vessels or others under 65 feet in length. The regulations, which have not been revised since 1976, do not require ship operators to sample, monitor, or report on their effluent discharges. Critics point out a number of deficiencies with this regulatory structure as it affects large vessels. First, the MSD regulations only cover discharges of bacterial contaminants and suspended solids, while the NPDES permit program for other point sources typically regulates many more pollutants such as chemicals, pesticides, heavy metals, oil, and grease that may be released by large vessels as well as land-based sources. Second, sources subject to NPDES permits must comply with sampling, monitoring, recordkeeping, and reporting requirements, which do not exist in the MSD rules. In addition, the Coast Guard, responsible for inspecting vessels for compliance with the MSD rules, has been heavily criticized for poor enforcement of Section 312 requirements. In its 2000 report, the Government Accountability Office (GAO) said that Coast Guard inspectors "rarely have time during scheduled ship examinations to inspect sewage treatment equipment or filter systems to see if they are working properly and filtering out potentially harmful contaminants." GAO reported that a number of factors limit the ability of Coast Guard inspectors to detect violations of environmental law and rules, including the inspectors' focus on safety, the large size of some ships, limited time and staff for inspections, and the lack of an element of surprise concerning inspections. The Coast Guard carries out a wide range of responsibilities that encompass both homeland security (ports, waterways, and coastal security, defense readiness, drug and migrant interdiction) and non-homeland security (search and rescue, marine environmental protection, fisheries enforcement, aids to navigation). Since the September 11 terrorist attacks on the United States, the Coast Guard has focused more of its resources on homeland security activities. One likely result is that less of the Coast Guard's time and attention are available for vessel inspections for MSD or other environmental compliance. Annex IV of MARPOL was drafted to regulate sewage discharges from vessels. It has entered into force internationally and would apply to ships that are flagged in ratifying countries, but because the United States has not ratified Annex IV, it is not mandatory that ships follow it when in U.S. waters. However, its requirements are minimal, even compared with U.S. rules for MSDs. Annex IV requires that vessels be equipped with a certified sewage treatment system or holding tank, but it prescribes no specific performance standards. Within three miles (5 km) of shore, Annex IV requires that sewage discharges be treated by a certified MSD prior to discharge.
Between three and twelve miles from shore, sewage discharges must be treated by no less than maceration or chlorination; sewage discharges beyond twelve miles from shore are unrestricted. Vessels are permitted to meet alternative, less stringent requirements when they are in the jurisdiction of countries where less stringent requirements apply. In U.S. waters, vessels must comply with the regulations implementing Section 312 of the Clean Water Act. On some ships, especially many of those that travel in Alaskan waters, sewage is treated using Advanced Wastewater Treatment (AWT) systems that generally provide improved screening, treatment, disinfection, and sludge processing as compared with traditional Type II MSDs. AWTs are believed to be very effective in removing pathogens, oxygen-demanding substances, suspended solids, oil and grease, and particulate metals from sewage, but only moderately effective in removing dissolved metals and nutrients (ammonia, nitrogen and phosphorus). No-discharge zones Section 312 has another means of addressing sewage discharges, through establishment of no-discharge zones (NDZs) for vessel sewage. A state may completely prohibit the discharge of both treated and untreated sewage from all vessels with installed toilets into some or all waters over which it has jurisdiction (up to three miles from land). To create a no-discharge zone to protect waters from sewage discharges by vessels, the state must apply to EPA under one of three categories. (1) An NDZ based on the need for greater environmental protection, where the state demonstrates that adequate pumpout facilities for safe and sanitary removal and treatment of sewage from all vessels are reasonably available. As of 2017, this category of designation has been used for 72 areas representing part or all of the waters of 26 states, including a number of inland states. (2) An NDZ for special waters found to have a particular environmental importance (e.g., to protect environmentally sensitive areas such as shellfish beds or coral reefs); it is not necessary for the state to show pumpout availability. This category of designation has been used twice (state waters within the Florida Keys National Marine Sanctuary and the Boundary Waters Canoe Area of Minnesota). (3) An NDZ to prohibit the discharge of sewage into waters that are drinking water intake zones; it is not necessary for the state to show pumpout availability. This category of designation has been used to protect part of the Hudson River in New York. Solid waste Ship discharges of solid waste are governed by two laws. Title I of the Marine Protection, Research, and Sanctuaries Act (MPRSA) applies to cruise ships and other vessels and makes it illegal to transport garbage from the United States for the purpose of dumping it into ocean waters without a permit or to dump any material transported from a location outside the United States into U.S. territorial seas or the contiguous zone (within 12 miles from shore) or ocean waters. EPA is responsible for issuing permits that regulate the disposal of materials at sea (except for dredged material disposal, for which the U.S. Army Corps of Engineers is responsible). Beyond waters that are under U.S. jurisdiction, no MPRSA permit is required for a ship to discharge solid waste. The routine discharge of effluent incidental to the propulsion of vessels is explicitly exempted from the definition of dumping in the MPRSA. The Act to Prevent Pollution from Ships (APPS) and its regulations, which implement U.S.-ratified provisions of MARPOL, also apply to ships.
APPS prohibits the discharge of all garbage within 3 nautical miles of shore, certain types of garbage within 12 or 25 nautical miles offshore, and plastic anywhere. It applies to all vessels, whether seagoing or not, regardless of flag, operating in U.S. navigable waters and the Exclusive Economic Zone (EEZ). It is administered by the Coast Guard, which carries out inspection programs to ensure the adequacy of port facilities to receive offloaded solid waste. Hazardous waste The Resource Conservation and Recovery Act (RCRA) is the primary federal law that governs hazardous waste management through a "cradle-to-grave" program that controls hazardous waste from the point of generation until ultimate disposal. The act imposes management requirements on generators, transporters, and persons who treat or dispose of hazardous waste. Under this act, a waste is hazardous if it is ignitable, corrosive, reactive, or toxic, or appears on a list of about 100 industrial process waste streams and more than 500 discarded commercial products and chemicals. Treatment, storage, and disposal facilities are required to have permits and comply with operating standards and other EPA regulations. The owner or operator of a ship may be a generator and/or a transporter of hazardous waste, and thus subject to RCRA rules. Issues that the ship industry may face relating to RCRA include ensuring that hazardous waste is identified at the point at which it is considered generated; ensuring that parties are properly identified as generators, storers, treaters, or disposers; and determining the applicability of RCRA requirements to each. Hazardous waste generated on board ships is stored onboard until the wastes can be offloaded for recycling or disposal in accordance with RCRA. A range of activities on board ships generate hazardous wastes and toxic substances that would ordinarily be presumed to be subject to RCRA. Ships are potentially subject to RCRA requirements to the extent that chemicals used for operations such as ship maintenance and passenger services result in the generation of hazardous wastes. However, it is not entirely clear what regulations apply to the management and disposal of these wastes. RCRA rules that cover small-quantity generators (those that generate more than 100 kilograms but less than 1,000 kilograms of hazardous waste per month) are less stringent than those for large-quantity generators (generating more than 1,000 kilograms per month), and it is unclear whether ships are classified as large or small generators of hazardous waste. Moreover, some ship companies argue that they generate less than 100 kilograms per month and therefore should be classified in a third category, as "conditionally exempt small-quantity generators," a categorization that allows for less rigorous requirements for notification, recordkeeping, and the like. A release of hazardous substances by a vessel could also theoretically trigger coverage under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, also known as "Superfund"). In addition to RCRA, hazardous waste discharges from ships are subject to Section 311 of the Clean Water Act, which prohibits the discharge of hazardous substances in harmful quantities into or upon the navigable waters of the United States, adjoining shorelines, or into or upon the waters of the contiguous zone.
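As a rough illustration of the three RCRA generator categories just described, the following sketch encodes the monthly-quantity thresholds from the text; the function is hypothetical, not part of any regulation or EPA tool, and its boundary handling is simplified relative to the actual rules.

def rcra_generator_category(kg_per_month):
    # Thresholds as described in the text; exact boundary cases are
    # treated more precisely in the regulations themselves.
    if kg_per_month < 100:
        return "conditionally exempt small-quantity generator"
    if kg_per_month < 1000:
        return "small-quantity generator"
    return "large-quantity generator"

print(rcra_generator_category(80))    # conditionally exempt small-quantity generator
print(rcra_generator_category(500))   # small-quantity generator
print(rcra_generator_category(1500))  # large-quantity generator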
Bilge water Section 311 of the Clean Water Act, as amended by the Oil Pollution Act of 1990, applies to ships and prohibits discharge of oil or hazardous substances in harmful quantities into or upon U.S. navigable waters, or into or upon the waters of the contiguous zone, or which may affect natural resources in the U.S. EEZ (extending 200 nautical miles offshore). Coast Guard regulations prohibit discharge of oil within 12 nautical miles from shore, unless passed through a 15-ppm oil water separator, and unless the discharge does not cause a visible sheen. Beyond 12 nautical miles, oil or oily mixtures can be discharged while a vessel is proceeding en route and if the oil content without dilution is less than 100 ppm. Vessels are required to maintain an Oil Record Book to record disposal of oily residues and discharges overboard or disposal of bilge water. In addition to Section 311 requirements, APPS implements MARPOL Annex I concerning oil pollution. APPS applies to all U.S. flagged ships anywhere in the world and to all foreign flagged vessels operating in the navigable waters of the United States, or while at a port under U.S. jurisdiction. To implement APPS, the Coast Guard has promulgated regulations prohibiting the discharge of oil or oily mixtures into the sea within 12 nautical miles of the nearest land, except under limited conditions. However, because many ships are foreign registered and because APPS only applies to foreign ships within U.S. navigable waters, the APPS regulations have limited applicability to ship operations. Ballast water discharge limitations The VGP sets numeric ballast water discharge limits for large commercial vessels. The limits are expressed as the maximum acceptable concentration of living organisms per cubic meter of ballast water. The Coast Guard worked with EPA in developing the scientific basis and the regulatory requirements in the VGP. Uniform National Discharge Standards for armed forces vessels Congress amended the CWA in 1996 to require development of uniform national discharge standards ("UNDS") for military vessels. The standards are being developed jointly by EPA and DOD. Initial regulations were published in 1999 to identify and characterize a wide variety of discharge types from ships and boats. A final rule setting specific standards for 11 discharge types was published in 2017. A final rule covering 11 additional discharge categories was published in 2020. The majority of vessels covered belong to the U.S. Navy, but the regulations also cover vessels of the Coast Guard, Marine Corps, Army, Military Sealift Command, and Air Force, totaling over 7,000 vessels. Enforcement In April 2021 a ship engineer on the Zao Galaxy, an oil tanker, was convicted in the United States District Court for the Northern District of California for intentionally dumping oily bilge water in February 2019 and submitting false paperwork in an attempt to conceal the crime. The engineer may receive a substantial prison sentence and fine. The ship operator pleaded guilty to violating APPS and was fined $1.65 million US and ordered to "implement a comprehensive Environmental Compliance Plan." See also Cruise ship pollution in the United States Merchant Shipping (Pollution) Act 2006 Merchant Marine Act of 1920 References Copeland, Claudia (2008). "Cruise Ship Pollution: Background, Laws and Regulations, and Key Issues" Washington, D.C.: U.S. Congressional Research Service. Order Code RL32450. Updated 2008-02-06.
Pollution in the United States Maritime history of the United States Environmental impact of shipping United States admiralty law Ocean pollution Water pollution in the United States Ship pollution
Regulation of ship pollution in the United States
[ "Chemistry", "Environmental_science" ]
3,892
[ "Ocean pollution", "Water pollution" ]
7,318,436
https://en.wikipedia.org/wiki/Von%20Zeipel%20theorem
In astrophysics, the von Zeipel theorem states that the radiative flux $F$ in a uniformly rotating star is proportional to the local effective gravity $g_\text{eff}$. The theorem is named after Swedish astronomer Edvard Hugo von Zeipel. The theorem is: $F = -\frac{L(P)}{4\pi G M_*(P)}\, g_\text{eff}$, with $M_*(P) = M\left(1 - \frac{\Omega^2}{2\pi G \rho_m}\right)$, where the luminosity $L$ and mass $M$ are evaluated on a surface of constant pressure $P$, $\Omega$ is the angular velocity of rotation, and $\rho_m$ is the mean density. The effective temperature can then be found at a given colatitude $\theta$ from the local effective gravity: $T_\text{eff}(\theta) \propto g_\text{eff}(\theta)^{1/4}$. This relation ignores the effect of convection in the envelope, so it primarily applies to early-type stars. According to the theory of rotating stars, if the rotational velocity of a star depends only on the radius, it cannot simultaneously be in thermal and hydrostatic equilibrium. This is called the von Zeipel paradox. The paradox is resolved, however, if the rotational velocity also depends on height, or there is a meridional circulation. A similar situation may arise in accretion disks. References Stellar astronomy Equations of astronomy
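A minimal numerical sketch, in Python, of the gravity-darkening consequence of the theorem; the polar effective gravity and temperature below are assumed, illustrative values for a rapidly rotating early-type star, not data from any catalog.

def local_teff(g_eff, g_pole, t_pole):
    # Von Zeipel: T_eff varies as the quarter power of the local
    # effective gravity, normalized here to the stellar pole.
    return t_pole * (g_eff / g_pole) ** 0.25

g_pole = 1.0e4    # polar effective gravity (assumed, cgs units)
t_pole = 15000.0  # polar effective temperature in kelvin (assumed)

# Rotation lowers the effective gravity at the equator, so the
# equator comes out cooler and dimmer ("gravity darkening").
g_equator = 0.6 * g_pole
print(round(local_teff(g_equator, g_pole, t_pole)))  # about 13200 K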
Von Zeipel theorem
[ "Physics", "Astronomy" ]
192
[ "Concepts in astronomy", "Astronomy stubs", "Stellar astronomy stubs", "Equations of astronomy", "Astronomical sub-disciplines", "Stellar astronomy" ]
7,318,441
https://en.wikipedia.org/wiki/Serial%20interval
The serial interval in the epidemiology of communicable (infectious) diseases is the time between successive cases in a chain of transmission. The serial interval is generally estimated from the interval between clinical onsets (if observable), in which case it is the 'clinical onset serial interval'. It could in principle be estimated by the time interval between infection and subsequent transmission. If the typical time from the first person's clinical onset to when they infect another is TA, and the incubation period of a subsequent case is IB, then the clinical onset serial interval is TA + IB. More realistically, the calculation would use the observed frequency distribution of times from onset of a single primary case to that of its associated secondary cases. If the distribution of timing of transmission events during the infectious period is not skewed around its mean, then the average serial interval is calculated as the sum of the average latent period (from infection to infectiousness) and half the average infectious period. Serial intervals can vary widely, especially for lifelong diseases such as HIV infection, chickenpox, and herpes. The serial interval for SARS was 7 days. For the original strain of COVID-19, a 2020 review of the published literature shows its serial interval to be 4-8 days. Related but distinct quantities include the 'average transmission interval' (the sum of the average latent and infectious periods), the 'incubation period' between infection and disease onset, and the 'latent period' between infection and infectiousness. References Epidemiology
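A minimal sketch, in Python, of the averaging relationship described above; the latent and infectious periods are assumed, illustrative values, not estimates for any particular disease.

import random

mean_latent = 3.0      # assumed mean latent period, in days
mean_infectious = 4.0  # assumed mean infectious period, in days

# If transmission times are unskewed about the middle of the
# infectious period, the average serial interval is the mean latent
# period plus half the mean infectious period.
analytic = mean_latent + mean_infectious / 2

# Monte Carlo check: draw each transmission time uniformly over the
# infectious period (a distribution that is unskewed about its mean).
draws = [mean_latent + random.uniform(0, mean_infectious) for _ in range(100_000)]
simulated = sum(draws) / len(draws)
print(analytic, round(simulated, 2))  # both come out near 5.0 days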
Serial interval
[ "Environmental_science" ]
318
[ "Epidemiology", "Environmental social science" ]
7,319,263
https://en.wikipedia.org/wiki/Entropy%20%28energy%20dispersal%29
In thermodynamics, the interpretation of entropy as a measure of energy dispersal has been exercised against the background of the traditional view, introduced by Ludwig Boltzmann, of entropy as a quantitative measure of disorder. The energy dispersal approach avoids the ambiguous term 'disorder'. An early advocate of the energy dispersal conception was Edward A. Guggenheim in 1949, using the word 'spread'. In this alternative approach, entropy is a measure of energy dispersal or spread at a specific temperature. Changes in entropy can be quantitatively related to the distribution or the spreading out of the energy of a thermodynamic system, divided by its temperature. Some educators propose that the energy dispersal idea is easier to understand than the traditional approach. The concept has been used to facilitate teaching entropy to students beginning university chemistry and biology. Comparisons with traditional approach The term "entropy" has been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels. Such descriptions have tended to be used together with commonly used terms such as disorder and randomness, which are ambiguous, and whose everyday meaning is the opposite of what they are intended to mean in thermodynamics. Not only does this situation cause confusion, but it also hampers the teaching of thermodynamics. Students were being asked to grasp meanings directly contradicting their normal usage, with equilibrium being equated to "perfect internal disorder" and the mixing of milk in coffee from apparent chaos to uniformity being described as a transition from an ordered state into a disordered state. The description of entropy as the amount of "mixedupness" or "disorder," as well as the abstract nature of the statistical mechanics grounding this notion, can lead to confusion and considerable difficulty for those beginning the subject. Even though courses emphasised microstates and energy levels, most students could not get beyond simplistic notions of randomness or disorder. Many of those who learned by practising calculations did not understand well the intrinsic meanings of equations, and there was a need for qualitative explanations of thermodynamic relationships. Arieh Ben-Naim recommends abandonment of the word entropy, rejecting both the 'dispersal' and the 'disorder' interpretations; instead he proposes the notion of "missing information" about microstates as considered in statistical mechanics, which he regards as commonsensical. Description Increase of entropy in a thermodynamic process can be described in terms of "energy dispersal" and the "spreading of energy," while avoiding mention of "disorder" except when explaining misconceptions. All explanations of where and how energy is dispersing or spreading have been recast in terms of energy dispersal, so as to emphasise the underlying qualitative meaning. In this approach, the second law of thermodynamics is introduced as "Energy spontaneously disperses from being localized to becoming spread out if it is not hindered from doing so," often in the context of common experiences such as a rock falling, a hot frying pan cooling down, iron rusting, air leaving a punctured tyre and ice melting in a warm room. 
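As a minimal numerical sketch of the dispersal idea (an illustration with assumed values, not part of the original account), the punctured-tyre example can be made quantitative: when an ideal gas spreads isothermally into twice its original volume, the entropy increase can be computed both from classical thermodynamics and by counting microstates.

import math

R = 8.314            # gas constant, J/(mol*K)
k_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

n = 1.0        # moles of gas (assumed)
v_ratio = 2.0  # final volume / initial volume (assumed)

# Classical route: dS = nR ln(V2/V1) for an isothermal expansion.
dS_classical = n * R * math.log(v_ratio)

# Microstate route: each of the N molecules has v_ratio times as many
# positions available, so W2/W1 = v_ratio**N and dS = k_B ln(W2/W1).
N = n * N_A
dS_statistical = k_B * N * math.log(v_ratio)

print(round(dS_classical, 3), round(dS_statistical, 3))  # both about 5.763 J/K

The two routes agree because R equals k_B times N_A: in the dispersal picture, the larger volume simply gives the molecules' energy a greater number of accessible microstates to spread over.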
Entropy is then depicted as a sophisticated kind of "before and after" yardstick — measuring how much energy is spread out over time as a result of a process such as heating a system, or how widely spread out the energy is after something happens in comparison with its previous state, in a process such as gas expansion or fluids mixing (at a constant temperature). The equations are explored with reference to the common experiences, with emphasis that in chemistry the energy that entropy measures as dispersing is the internal energy of molecules. The statistical interpretation is related to quantum mechanics in describing the way that energy is distributed (quantized) amongst molecules on specific energy levels, with all the energy of the macrostate always in only one microstate at one instant. Entropy is described as measuring the energy dispersal for a system by the number of accessible microstates, the number of different arrangements of all its energy at the next instant. Thus, an increase in entropy means a greater number of microstates for the final state than for the initial state, and hence more possible arrangements of a system's total energy at any one instant. Here, the greater 'dispersal of the total energy of a system' means the existence of many possibilities. Continuous movement and molecular collisions visualised as being like bouncing balls blown by air as used in a lottery can then lead on to showing the possibilities of many Boltzmann distributions and continually changing "distribution of the instant", and on to the idea that when the system changes, dynamic molecules will have a greater number of accessible microstates. In this approach, all everyday spontaneous physical happenings and chemical reactions are depicted as involving some type of energy flows from being localized or concentrated to becoming spread out to a larger space, always to a state with a greater number of microstates. This approach provides a good basis for understanding the conventional approach, except in very complex cases where the qualitative relation of energy dispersal to entropy change can be so inextricably obscured that it is moot. Thus in situations such as the entropy of mixing when the two or more different substances being mixed are at the same temperature and pressure so there will be no net exchange of heat or work, the entropy increase will be due to the literal spreading out of the motional energy of each substance in the larger combined final volume. Each component's energetic molecules become more separated from one another than they would be in the pure state, when in the pure state they were colliding only with identical adjacent molecules, leading to an increase in its number of accessible microstates. Current adoption Variants of the energy dispersal approach have been adopted in number of undergraduate chemistry texts, mainly in the United States. One respected text states: The concept of the number of microstates makes quantitative the ill-defined qualitative concepts of 'disorder' and the 'dispersal' of matter and energy that are used widely to introduce the concept of entropy: a more 'disorderly' distribution of energy and matter corresponds to a greater number of micro-states associated with the same total energy. — Atkins & de Paula (2006) History The concept of 'dissipation of energy' was used in Lord Kelvin's 1852 article "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy." 
He distinguished between two types or "stores" of mechanical energy: "statical" and "dynamical." He discussed how these two types of energy can change from one form to the other during a thermodynamic transformation. When heat is created by any irreversible process (such as friction), or when heat is diffused by conduction, mechanical energy is dissipated, and it is impossible to restore the initial state. Using the word 'spread', Edward Armand Guggenheim was an early advocate of the energy dispersal concept. In the mid-1950s, with the development of quantum theory, researchers began speaking about entropy changes in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels, such as by the reactants and products of a chemical reaction. In 1984, the Oxford physical chemist Peter Atkins, in a book The Second Law, written for laypersons, presented a nonmathematical interpretation of what he called the "infinitely incomprehensible entropy" in simple terms, describing the Second Law of thermodynamics as "energy tends to disperse". His analogies included an imaginary intelligent being called "Boltzmann's Demon," who runs around reorganizing and dispersing energy, in order to show how the W in Boltzmann's entropy formula relates to energy dispersion. This dispersion is transmitted via atomic vibrations and collisions. Atkins wrote: "each atom carries kinetic energy, and the spreading of the atoms spreads the energy…the Boltzmann equation therefore captures the aspect of dispersal: the dispersal of the entities that are carrying the energy." In 1997, John Wrigglesworth described spatial particle distributions as represented by distributions of energy states. According to the second law of thermodynamics, isolated systems will tend to redistribute the energy of the system into a more probable arrangement or a maximum probability energy distribution, i.e. from that of being concentrated to that of being spread out. By virtue of the First law of thermodynamics, the total energy does not change; instead, the energy tends to disperse over the space to which it has access. In his 1999 Statistical Thermodynamics, M.C. Gupta defined entropy as a function that measures how energy disperses when a system changes from one state to another. Other authors defining entropy in a way that embodies energy dispersal are Cecie Starr and Andrew Scott. In a 1996 article, the physicist Harvey S. Leff set out what he called "the spreading and sharing of energy." Another physicist, Daniel F. Styer, published an article in 2000 showing that "entropy as disorder" was inadequate. In an article published in the Journal of Chemical Education in 2002, Frank L. Lambert argued that portraying entropy as "disorder" is confusing and should be abandoned. He has gone on to develop detailed resources for chemistry instructors, equating entropy increase with the spontaneous dispersal of energy, namely how much energy is spread out in a process, or how widely dispersed it becomes – at a specific temperature.
Texts using the energy dispersal approach Atkins, P. W., Physical Chemistry for the Life Sciences. Oxford University Press, ; W. H. Freeman, Benjamin Gal-Or, "Cosmology, Physics and Philosophy", Springer-Verlag, New York, 1981, 1983, 1987 Bell, J., et al., 2005. Chemistry: A General Chemistry Project of the American Chemical Society, 1st ed. W. H. Freeman, 820pp, Brady, J.E., and F. Senese, 2004. Chemistry, Matter and Its Changes, 4th ed. John Wiley, 1256pp, Brown, T. L., H. E. LeMay, and B. E. Bursten, 2006. Chemistry: The Central Science, 10th ed. Prentice Hall, 1248pp, Ebbing, D.D., and S. D. Gammon, 2005. General Chemistry, 8th ed. Houghton-Mifflin, 1200pp, Ebbing, Gammon, and Ragsdale. Essentials of General Chemistry, 2nd ed. Hill, Petrucci, McCreary and Perry. General Chemistry, 4th ed. Kotz, Treichel, and Weaver. Chemistry and Chemical Reactivity, 6th ed. Moog, Spencer, and Farrell. Thermodynamics, A Guided Inquiry. Moore, J. W., C. L. Stanistski, P. C. Jurs, 2005. Chemistry, The Molecular Science, 2nd ed. Thompson Learning. 1248pp, Olmsted and Williams, Chemistry, 4th ed. Petrucci, Harwood, and Herring. General Chemistry, 9th ed. Silberberg, M.S., 2006. Chemistry, The Molecular Nature of Matter and Change, 4th ed. McGraw-Hill, 1183pp, Suchocki, J., 2004. Conceptual Chemistry 2nd ed. Benjamin Cummings, 706pp, External links welcome to entropy site A large website by Frank L. Lambert with links to work on the energy dispersal approach to entropy. The Second Law of Thermodynamics (6) Thermodynamic entropy
Entropy (energy dispersal)
[ "Physics" ]
2,529
[ "Statistical mechanics", "Entropy", "Physical quantities", "Thermodynamic entropy" ]
7,319,461
https://en.wikipedia.org/wiki/Expression%20cloning
Expression cloning is a technique in DNA cloning that uses expression vectors to generate a library of clones, with each clone expressing one protein. This expression library is then screened for the property of interest and clones of interest are recovered for further analysis. An example would be using an expression library to isolate genes that could confer antibiotic resistance. Expression vectors Expression vectors are a specialized type of cloning vector in which the transcriptional and translational signals needed for the regulation of the gene of interest are included in the cloning vector. The transcriptional and translational signals may be synthetically created to make the expression of the gene of interest easier to regulate. Purpose Usually the ultimate aim of expression cloning is to produce large quantities of specific proteins. To this end, a bacterial expression clone may include a ribosome binding site (Shine-Dalgarno sequence) to enhance translation of the gene of interest's mRNA, a transcription termination sequence, or, in eukaryotes, specific sequences to promote the post-translational modification of the protein product. See also Molecular cell biology genetics gene expression Transcription (genetics) translation λ phage pBR322 References Genetics techniques Molecular genetics
Expression cloning
[ "Chemistry", "Engineering", "Biology" ]
240
[ "Genetics techniques", "Molecular genetics", "Genetic engineering", "Molecular biology" ]
7,319,655
https://en.wikipedia.org/wiki/Signaling%20peptide%20receptor
Signaling peptide receptor is a type of receptor which binds one or more signaling peptides or signaling proteins. An example is the tropomyosin receptor kinase B (TrkB), which is bound and activated by the neurotrophic protein brain-derived neurotrophic factor (BDNF). Another example is the μ-opioid receptor (MOR), which is bound and activated by the opioid peptide hormone β-endorphin. See also Neuropeptide receptor Neurotransmitter receptor References External links Peptides Receptors
Signaling peptide receptor
[ "Chemistry" ]
2,181
[ "Biomolecules by chemical classification", "Signal transduction", "Receptors", "Molecular biology", "Peptides" ]
7,319,988
https://en.wikipedia.org/wiki/Jubilee%20Clip
A Jubilee Clip is a genericised brand name for a worm drive hose clamp, a type of band clamp, consisting of a circular metal band or strip combined with a worm gear fixed to one end. It is designed to hold a soft, pliable hose onto a rigid circular pipe, or sometimes a solid spigot, of smaller diameter. Other names for the worm drive hose clamp include worm gear clip, worm drive clamp, or simply hose clip. In the United Kingdom, Ireland and some of the former British colonies, the Jubilee Clip dominated the market to the extent that worm drive hose clamps tend to be known almost exclusively by the Jubilee brand name. In Canada, these are sometimes called Tridon Clamps, after the Canadian manufacturer of such hose or gear clamps. History The Jubilee Clip brand was started by Commander Lumley Robinson of the British Royal Navy, who was granted the first patent for his device by the London Patent Office in 1921 while operating as a sole trader. It is now the subject of a registered trademark in many countries around the world. The design has been copied with many variations, and there are many other hose clips of a similar design. See also Marman clamp Cable tie References External links Jubilee Clips Vintage advertisements for Jubilee Clips Hoses Mechanical fasteners British inventions
Jubilee Clip
[ "Engineering" ]
264
[ "Mechanical fasteners", "Mechanical engineering" ]
7,320,111
https://en.wikipedia.org/wiki/RFQ%20beam%20cooler
A radio-frequency quadrupole (RFQ) beam cooler is a device for particle beam cooling, especially suited for ion beams. It lowers the temperature of a particle beam by reducing its energy dispersion and emittance, effectively increasing its brightness (brilliance). The prevalent mechanism for cooling in this case is buffer-gas cooling, whereby the beam loses energy from collisions with a light, neutral and inert gas (typically helium). The cooling must take place within a confining field in order to counteract the thermal diffusion that results from the ion-atom collisions. The quadrupole mass analyzer (a radio frequency quadrupole used as a mass filter) was invented by Wolfgang Paul in the late 1950s to early 1960s at the University of Bonn, Germany. Paul shared the 1989 Nobel Prize in Physics for his work. Samples for mass analysis are ionized, for example by laser (matrix-assisted laser desorption/ionization) or discharge (electrospray or inductively coupled plasma), and the resulting beam is sent through the RFQ and "filtered" by scanning the operating parameters (chiefly the RF amplitude). This gives a mass spectrum, or fingerprint, of the sample. Residual gas analyzers use this principle as well. Applications of ion cooling to nuclear physics Despite its long history, high-sensitivity, high-accuracy mass measurement of atomic nuclei remains an important area of research for many branches of physics. Not only do these measurements provide a better understanding of nuclear structures and nuclear forces, but they also offer insight into how matter behaves in some of nature's harshest environments. At facilities such as ISOLDE at CERN and TRIUMF in Vancouver, for instance, measurement techniques are now being extended to short-lived radionuclides that only occur naturally in the interior of exploding stars. Their short half-lives and very low production rates, even at the most powerful facilities, demand the highest possible sensitivity from such measurements. Penning traps, the central element in modern high-accuracy, high-sensitivity mass measurement installations, enable measurements with accuracies approaching 1 part in 10^11 on single ions. However, to achieve this, a Penning trap must have the ion to be measured delivered to it very precisely and with certainty that it is indeed the desired ion. This imposes severe requirements on the apparatus that must take the atomic nucleus out of the target in which it has been created, sort it from the myriad of other ions emitted from the target, and then direct it so that it can be captured in the measurement trap. Cooling these ion beams, particularly radioactive ion beams, has been shown to drastically improve the accuracy and sensitivity of mass measurements by reducing the phase space of the ion collections in question. Using a light neutral background gas, typically helium, charged particles originating from on-line mass separators undergo a number of soft collisions with the background gas molecules, resulting in fractional losses of the ions' kinetic energy and a reduction of the ion ensemble's overall energy. In order for this to be effective, however, the ions need to be contained using transverse radiofrequency quadrupole (RFQ) electric fields during the collisional cooling process (also known as buffer gas cooling).
These RFQ coolers operate on the same principles as quadrupole ion traps and have been shown to be particularly well suited for buffer gas cooling, given their capacity for total confinement of ions having a large dispersion of velocities, corresponding to kinetic energies up to tens of electron volts. A number of RFQ coolers have already been installed at research facilities around the world, and a list of their characteristics can be found below. List of facilities containing RFQ coolers See also Quadrupole mass analyzer References Bibliography External links LEBIT Project NSCL/MSU ISOLTRAP Experimental Setup TITAN: TRIUMF's Ion Trap for Atomic and Nuclear science TRIMP – Trapped Radioactive Isotopes: Micro-laboratories for fundamental Physics The SHIPTRAP Experiment The ISCOOL project Measuring instruments Mass spectrometry Accelerator physics
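As a rough, back-of-the-envelope illustration of the buffer-gas cooling described above, the following sketch models each soft ion–atom collision as removing a fixed fraction of the ion's kinetic energy. The initial energy, fractional loss per collision and collision rate are illustrative assumptions, not parameters of any particular cooler.

```python
# Toy model of buffer-gas cooling in an RFQ cooler: each soft ion-atom
# collision removes a small fraction of the ion's kinetic energy.
# The fractional loss per collision and the collision rate are
# illustrative assumptions, not measured values for any real device.

import math

E0_eV = 10.0          # assumed initial ion kinetic energy (eV)
E_target_eV = 0.025   # roughly room-temperature thermal energy (eV)
loss_fraction = 0.05  # assumed mean fractional energy loss per collision

# Number of collisions n needed so that E0 * (1 - f)**n <= E_target
n = math.ceil(math.log(E_target_eV / E0_eV) / math.log(1.0 - loss_fraction))
print(f"~{n} collisions to cool from {E0_eV} eV to {E_target_eV} eV")

collision_rate_hz = 1e6   # assumed ion-atom collision rate in the helium buffer
print(f"~{n / collision_rate_hz * 1e3:.2f} ms cooling time "
      f"at {collision_rate_hz:.0e} collisions/s")
```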
RFQ beam cooler
[ "Physics", "Chemistry", "Technology", "Engineering" ]
829
[ "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Measuring instruments", "Experimental physics", "Mass spectrometry", "Accelerator physics", "Matter" ]
7,320,339
https://en.wikipedia.org/wiki/Nitrogen%20rejection%20unit
A nitrogen rejection unit (NRU) selectively removes nitrogen from a gas. The name can be applied to any system that removes nitrogen from natural gas. For high flow-rate applications, typically above a threshold volume per day at standard pressure, cryogenic processing is the norm. This is a distillation process which utilizes the different volatilities of methane (boiling point of −161.6 °C) and nitrogen (boiling point of −195.69 °C) to achieve separation. In this process, a system of compression and distillation columns drastically reduces the temperature of the gas mixture to a point where methane is liquefied and the nitrogen is not. For smaller applications, a series of heat exchangers may be used as an alternative to distillation columns. For smaller volumes of gas, a system utilizing pressure swing adsorption (PSA) is a more typical method of separation. In PSA, methane and nitrogen can be separated by using an adsorbent with an aperture size very close to the molecular diameter of the larger species, in this case methane (3.8 angstroms). This means nitrogen is able to diffuse through the adsorbent, filling adsorption sites, whilst methane is not. This results in a purified natural gas stream that fits pipeline specifications. The adsorbent can then be regenerated, leaving a highly pure nitrogen stream. PSA is a flexible method for nitrogen rejection, being applied to both small and large flow rates. The operating conditions of various PSA units are quite variable. Depending on the vendor, a high degree of pretreatment of the gas stream (removal of water vapor and heavy hydrocarbons) may be necessary for the system to operate optimally and without damage to the adsorbent material. Moreover, the degree of hydrocarbon recovery (75% vs 95%) and the purities achieved can vary considerably. The economic viability of any PSA unit will be highly dependent on such factors. An estimated 25% of US natural gas reserves contain unacceptably large quantities of nitrogen. Nitrogen is inert and lowers the energy value per volume of natural gas. It also takes up capacity in pipelines that could be used for valuable methane. Pipeline specifications for nitrogen are extremely variable, though no more than 4% nitrogen is a typical specification. References External links G.I. Dynamics Cryogenic Nitrogen Rejection Technology California Energy Commission Glossary Molecular Gate Adsorption Technology Further reading Natural gas technology Industrial gases
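As a simple numerical illustration of why nitrogen content matters, the sketch below treats the gas as a binary methane/nitrogen mixture and shows how the volumetric heating value falls as the nitrogen fraction rises. The methane heating value used is an approximate, conditions-dependent literature figure, and the 4% limit is the typical pipeline specification mentioned above.

```python
# Sketch: how inert nitrogen dilutes the volumetric heating value of
# natural gas. Treats the gas as a simple CH4/N2 binary mixture; the
# methane heating value below is an approximate literature figure and
# depends on the reference conditions chosen.

HHV_METHANE_MJ_M3 = 37.7  # approximate higher heating value of pure methane

def heating_value(n2_fraction: float) -> float:
    """Volumetric heating value of a CH4/N2 mix (nitrogen contributes nothing)."""
    return HHV_METHANE_MJ_M3 * (1.0 - n2_fraction)

for x in (0.0, 0.04, 0.15, 0.25):
    hv = heating_value(x)
    spec = "meets" if x <= 0.04 else "fails"
    print(f"{x:>5.0%} N2: {hv:5.1f} MJ/m^3  ({spec} a 4% pipeline spec)")
```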
Nitrogen rejection unit
[ "Chemistry" ]
510
[ "Chemical process engineering", "Natural gas technology", "Industrial gases" ]
7,320,365
https://en.wikipedia.org/wiki/Sieve%20analysis
A sieve analysis (or gradation test) is a practice or procedure used in geology, civil engineering, and chemical engineering to assess the particle size distribution (also called gradation) of a granular material by allowing the material to pass through a series of sieves of progressively smaller mesh size and weighing the amount of material that is stopped by each sieve as a fraction of the whole mass. The size distribution is often of critical importance to the way the material performs in use. A sieve analysis can be performed on any type of non-organic or organic granular material, including sand, crushed rock, clay, granite, feldspar, coal, soil, a wide range of manufactured powders, grain and seeds, down to a minimum size depending on the exact method. Being such a simple technique of particle sizing, it is probably the most common. Procedure A gradation test is performed on a sample of aggregate in a laboratory. A typical sieve analysis uses a column of sieves with wire mesh screens of graded mesh size. A representative weighed sample is poured into the top sieve, which has the largest screen openings. Each lower sieve in the column has smaller openings than the one above. At the base is a pan, called the receiver. The column is typically placed in a mechanical shaker, which shakes the column, usually for a set period, to facilitate exposing all of the material to the screen openings so that particles small enough to fit through the holes can fall through to the next layer. After the shaking is complete, the material on each sieve is weighed. The mass of the sample on each sieve is then divided by the total mass to give a percentage retained on each sieve. The size of the average particle on each sieve is then analysed to get a cut-off point or specific size range, which is then captured on a screen. The results of this test are used to describe the properties of the aggregate and to see if it is appropriate for various civil engineering purposes such as selecting the appropriate aggregate for concrete mixes and asphalt mixes as well as sizing of water production well screens. The results of this test are provided in graphical form to identify the type of gradation of the aggregate. The complete procedure for this test is outlined in the American Society for Testing and Materials (ASTM) C 136 and the American Association of State Highway and Transportation Officials (AASHTO) T 27. A pan of a suitable size for the aggregate is placed underneath the nest of sieves to collect the aggregate that passes through the smallest. The entire nest is then agitated, and the material whose diameter is smaller than the mesh opening passes through the sieves. After the aggregate reaches the pan, the amount of material retained in each sieve is weighed. Preparation In order to perform the test, a sufficient sample of the aggregate must be obtained from the source. To prepare the sample, the aggregate should be mixed thoroughly and be reduced to a suitable size for testing. The total mass of the sample is also required. Results The results are presented in a graph of percent passing versus the sieve size. On the graph, the sieve size scale is logarithmic. To find the percent of aggregate passing through each sieve, first find the percent retained in each sieve. To do so, the following equation is used: %Retained = (W_Sieve / W_Total) × 100%, where W_Sieve is the mass of aggregate in the sieve and W_Total is the total mass of the aggregate. The next step is to find the cumulative percent of aggregate retained in each sieve.
To do so, add up the total amount of aggregate that is retained in each sieve and the amount in the previous sieves. The cumulative percent passing of the aggregate is found by subtracting the cumulative percent retained from 100%: %Cumulative Passing = 100% − %Cumulative Retained. The values are then plotted on a graph with cumulative percent passing on the y axis and logarithmic sieve size on the x axis. There are two versions of the %Passing equation: the .45 power formula, which is presented on a .45 power gradation chart, and the simpler %Passing formula, which is presented on a semi-log gradation chart. .45 power percent passing formula %Passing = P_i = (Sieve_i / Aggregate_max_size)^0.45 × 100% Where: Sieve_i – the opening size of the sieve in question (mm). Sieve_Largest – the largest diameter sieve used (mm). Aggregate_max_size – the largest piece of aggregate in the sample (mm). Percent passing formula %Passing = (W_Below / W_Total) × 100% Where: W_Below – the total mass of the aggregate within the sieves below the current sieve, not including the current sieve's aggregate. W_Total – the total mass of all of the aggregate in the sample. Methods There are different methods for carrying out sieve analyses, depending on the material to be measured. Throw-action Here a throwing motion acts on the sample. The vertical throwing motion is overlaid with a slight circular motion which results in distribution of the sample amount over the whole sieving surface. The particles are accelerated in the vertical direction (are thrown upwards). In the air they carry out free rotations and interact with the openings in the mesh of the sieve when they fall back. If the particles are smaller than the openings, they pass through the sieve. If they are larger, they are thrown upwards again. The rotating motion while suspended increases the probability that the particles present a different orientation to the mesh when they fall back again, and thus might eventually pass through the mesh. Modern sieve shakers work with an electro-magnetic drive which moves a spring-mass system and transfers the resulting oscillation to the sieve stack. Amplitude and sieving time are set digitally and are continuously observed by an integrated control unit. Therefore, sieving results are reproducible and precise (an important precondition for a significant analysis). Adjustment of parameters like amplitude and sieving time serves to optimize the sieving for different types of material. This method is the most common in the laboratory sector. Horizontal In a horizontal sieve shaker the sieve stack moves in horizontal circles in a plane. Horizontal sieve shakers are preferably used for needle-shaped, flat, long or fibrous samples, as their horizontal orientation means that only a few disoriented particles enter the mesh and the sieve is not blocked so quickly. The large sieving area enables the sieving of large amounts of sample, for example as encountered in the particle-size analysis of construction materials and aggregates. Tapping A horizontal circular motion overlies a vertical motion which is created by a tapping impulse. These motional processes are characteristic of hand sieving and produce a higher degree of sieving for denser particles (e.g. abrasives) than throw-action sieve shakers. Wet Most sieve analyses are carried out dry. But there are some applications which can only be carried out by wet sieving. This is the case when the sample which has to be analysed is, for example, a suspension which must not be dried, or when the sample is a very fine powder which tends to agglomerate (mostly < 45 μm) – in a dry sieving process this tendency would lead to a clogging of the sieve meshes and this would make a further sieving process impossible. A wet sieving process is set up like a dry process: the sieve stack is clamped onto the sieve shaker and the sample is placed on the top sieve. Above the top sieve a water-spray nozzle is placed which supports the sieving process additionally to the sieving motion. The rinsing is carried out until the liquid which is discharged through the receiver is clear. Sample residues on the sieves have to be dried and weighed. When it comes to wet sieving, it is very important not to change the sample volume (no swelling, dissolving or reaction with the liquid). Air jet Air jet sieving machines are ideally suited for very fine powders which tend to agglomerate and cannot be separated by vibrational sieving. The effectiveness of this sieving method is based on two components: a rotating slotted nozzle inside the sieving chamber and a powerful industrial vacuum cleaner which is connected to the chamber. The vacuum cleaner generates a vacuum inside the sieving chamber and sucks in fresh air through the slotted nozzle. When passing the narrow slit of the nozzle, the air stream is accelerated and blown against the sieve mesh, dispersing the particles. Above the mesh, the air jet is distributed over the complete sieve surface and is sucked in with low speed through the sieve mesh. Thus the finer particles are transported through the mesh openings into the vacuum cleaner. Types of gradation Dense gradation A dense gradation refers to a sample that contains approximately equal amounts of the various sizes of aggregate. With a dense gradation, most of the air voids between the larger particles are filled with smaller particles. A dense gradation will result in an even curve on the gradation graph. Narrow gradation Also known as uniform gradation, a narrow gradation is a sample that has aggregate of approximately the same size. The curve on the gradation graph is very steep, and occupies a small range of the aggregate. Gap gradation A gap gradation refers to a sample with very little aggregate in the medium size range. This results in only coarse and fine aggregate. The curve is horizontal in the medium size range on the gradation graph. Open gradation An open gradation refers to an aggregate sample with very few fine aggregate particles. This results in many air voids, because there are no fine particles to fill them. On the gradation graph, it appears as a curve that is horizontal in the small size range. Rich gradation A rich gradation refers to a sample of aggregate with a high proportion of particles of small sizes. Types of sieves Woven wire mesh sieves Woven wire mesh sieves are manufactured according to the technical requirements of ISO 3310-1. These sieves usually have nominal apertures ranging from 20 micrometers to 3.55 millimeters, with diameters ranging from 100 to 450 millimeters. Perforated plate sieves Perforated plate sieves conform to ISO 3310-2 and can have round or square nominal apertures ranging from 1 millimeter to 125 millimeters. The diameters of the sieves range from 200 to 450 millimeters. American standard sieves American standard sieves, also known as ASTM sieves, conform to the ASTM E11 standard. The nominal apertures of these sieves range from 20 micrometers to 200 millimeters; however, these sieves are available in only a limited number of diameter sizes.
Limitations of sieve analysis Sieve analysis has, in general, been used for decades to monitor material quality based on particle size. For coarse material, sizes that range down to #100 mesh (150 μm), a sieve analysis and particle size distribution is accurate and consistent. However, for material that is finer than 100 mesh, dry sieving can be significantly less accurate. This is because the mechanical energy required to make particles pass through an opening and the surface attraction effects between the particles themselves and between particles and the screen increase as the particle size decreases. Wet sieve analysis can be utilized where the material analyzed is not affected by the liquid – except to disperse it. Suspending the particles in a suitable liquid transports fine material through the sieve much more efficiently than shaking the dry material. Sieve analysis assumes that all particles will be round (spherical) or nearly so and will pass through the square openings when the particle diameter is less than the size of the square opening in the screen. For elongated and flat particles a sieve analysis will not yield reliable mass-based results, as the particle size reported will assume that the particles are spherical, where in fact an elongated particle might pass through the screen end-on, but would be prevented from doing so if it presented itself side-on. Properties Gradation affects many properties of an aggregate, including bulk density, physical stability and permeability. With careful selection of the gradation, it is possible to achieve high bulk density, high physical stability, and low permeability. This is important because in pavement design, a workable, stable mix with resistance to water is important. With an open gradation, the bulk density is relatively low, due to the lack of fine particles, the physical stability is moderate, and the permeability is quite high. With a rich gradation, the bulk density will also be low, the physical stability is low, and the permeability is also low. The gradation can be adjusted to achieve the desired properties for the particular engineering application. Engineering applications Gradation is usually specified for each engineering application it is used for. For example, foundations might only call for coarse aggregates, and therefore an open gradation is needed. Sieve analysis determines the particle size distribution of a given soil sample and hence helps in easy identification of a soil's mechanical properties. These mechanical properties determine whether a given soil can support the proposed engineering structure. It also helps determine what modifications can be applied to the soil and the best way to achieve maximum soil strength. See also Soil gradation Automated sieving using photoanalysis Optical granulometry References External links List of ASTM test methods for sieve analysis of various materials ASTM C136 / C136M - 14 Standard Test Method for Sieve Analysis of Fine and Coarse Aggregates ASTM B214 - 16 Standard Test Method for Sieve Analysis of Metal Powders Chemical engineering Soil Classification System, Unified Granulometric analyses Particle technology
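A minimal sketch of the bookkeeping described in the Results section above, covering percent retained, cumulative percent retained and cumulative percent passing, is given below; the sieve sizes and retained masses are made-up example data, not values from any standard.

```python
# Minimal sketch of the sieve-analysis calculations from the Results
# section: percent retained, cumulative percent retained, and cumulative
# percent passing per sieve. Sieve sizes and masses are made-up examples.

sieves_mm = [4.75, 2.36, 1.18, 0.600, 0.300, 0.150]  # largest opening first
retained_g = [9.0, 46.0, 142.0, 118.0, 89.0, 54.0]   # mass caught on each sieve
pan_g = 12.0                                          # mass reaching the receiver

total = sum(retained_g) + pan_g
cumulative = 0.0
print(f"{'sieve (mm)':>10} {'%retained':>10} {'%cum ret':>9} {'%passing':>9}")
for size, mass in zip(sieves_mm, retained_g):
    pct = 100.0 * mass / total        # %Retained = (W_Sieve / W_Total) x 100%
    cumulative += pct                 # running total of percent retained
    passing = 100.0 - cumulative      # %Passing = 100% - %Cumulative Retained
    print(f"{size:>10.3f} {pct:>10.1f} {cumulative:>9.1f} {passing:>9.1f}")
```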
Sieve analysis
[ "Chemistry", "Engineering" ]
2,830
[ "Particle technology", "Chemical engineering", "nan", "Environmental engineering" ]
7,320,527
https://en.wikipedia.org/wiki/Collision%20%28telecommunications%29
A collision is the situation that occurs when two or more demands are made simultaneously on equipment that can handle only one at any given instant. It may refer to: Collision domain, a physical network segment where data packets can "collide" Carrier-sense multiple access with collision avoidance (CSMA/CA), used for example with wireless LANs Carrier-sense multiple access with collision detection (CSMA/CD), used with Ethernet Late collision, a specific type of collision that should not occur on properly operating networks Local collision, a collision that occurs in the network interface rather than on the network itself See also Collision (disambiguation) Contention (telecommunications) References Telecommunications engineering
Collision (telecommunications)
[ "Engineering" ]
139
[ "Electrical engineering", "Telecommunications engineering" ]
7,320,592
https://en.wikipedia.org/wiki/Indoor%20tanning%20lotion
Indoor tanning lotions accelerate the tanning process by promoting the production of melanin. Increasing blood flow to the skin is a proposed mechanism, which may in turn stimulate production of melanin by melanocytes. Historically, indoor tanning lotions contained no sunscreen and offered no protection from the sun; however, many tanning lotions currently contain sunscreen. Unlike sunless tanning lotions, these are designed for use with an ultraviolet source such as a tanning bed or booth. Ingredients Some of the active ingredients found in common tanning lotions include melanin and L-tyrosine. Other commonly found ingredients include tea oil, copper (in many different chemical compounds), green tea extract and many other natural oils. Indoor tanning lotions are usually designed to use only ingredients that will not cause damage to, or build up on, acrylic surfaces. This is because all tanning beds use 100% acrylic in their protective shields. It is one reason people should not use outdoor tanning lotion in a tanning bed, as some common ingredients such as mineral oil (a common ingredient in cosmetics, including some baby oil brands) will damage the surface of the acrylics. Tingle is a standard description for indoor tanning lotions that contain ingredients which increase blood flow at the skin level, causing a "tingling" sensation. Bronzers Some lotions have a bronzing effect. There are three different types of bronzers: cosmetic, natural and DHA. DHA (dihydroxyacetone) is a higher level of bronzer that stays on the skin for about 4–5 days, depending on how much one exfoliates. Natural bronzers, made from plant extracts, stay on the skin for about 3–4 days. Cosmetic bronzers stain the skin the most; they stay on the skin for about 1–3 days and can be easily washed off in the shower. These bronzers work with the skin to provide a darker cosmetic color and take approximately 4–6 hours to develop full color. Having a base tan before using a bronzer produces a more natural-looking color. Natural bronzers use natural ingredients, such as caramel, riboflavin, etc. These ingredients provide a slight instant boost of color, but will wash off in the shower. Higher quality natural bronzer lotions will have certain organic ingredients or natural and exotic extracts that aid in the process of tanning (melanin production and oxidization). Moisturizing One of the primary purposes of using indoor tanning lotions is to moisturize the skin. This is because tanning (indoors or out) can dehydrate the skin, so additional moisturization is needed to compensate and leave the skin looking smooth and healthy. One of the most popular moisturizing elements in tanning lotions is hempseed oil, although other oils are also common. The primary moisturizing ingredients in tanning lotions are essentially the same as in regular hand lotions, although they tend to have less alcohol in them. Outdoor use Most indoor tanning lotions do not offer protection from the sun (they have no SPF) and are not intended for outdoor use. However, many tanning lotions now contain SPF. See also Indoor tanning References External links FDA site Body Lotion Manufacturer Further reading Tanning (beauty treatment) Cosmetic industry Skin care
Indoor tanning lotion
[ "Chemistry" ]
701
[ "Tanning (beauty treatment)", "Ultraviolet radiation" ]
7,320,978
https://en.wikipedia.org/wiki/Air%20sensitivity
Air sensitivity is a term used, particularly in chemistry, to denote the reactivity of chemical compounds with some constituent of air. Most often, reactions occur with atmospheric oxygen (O2) or water vapor (H2O), although reactions with the other constituents of air such as carbon monoxide (CO), carbon dioxide (CO2), and nitrogen (N2) are also possible. Method A variety of air-free techniques have been developed to handle air-sensitive compounds. Two main types of equipment are gloveboxes and Schlenk lines. Gloveboxes are sealed cabinets filled with an inert gas such as argon or nitrogen. Normal laboratory equipment can be set up in the glovebox and manipulated by the use of gloves that penetrate its walls. The atmosphere can be regulated to approximately atmospheric pressure and set to be pure nitrogen or another gas with which the chemicals will not react. Chemicals and equipment can be transferred in and out via an airlock. A Schlenk line is a vacuum and inert-gas dual manifold, specially developed for work with air-sensitive compounds, that allows glassware to be evacuated and refilled with inert gas. It is connected to a cold trap to prevent vapors from contaminating a rotary vane pump. The technique is a modification of the double-tipped needle technique. These methods allow working in a totally controlled and isolated environment. Air-sensitive compounds Air-sensitive compounds are substances that react with components of air. Almost all metals react with air to form a thin passivating layer of oxide, which is often imperceptible. Many bulk compounds react readily with air as well. The reactive components of air are O2, H2O, CO2, and N2. Very many compounds react with some or all of these species. Examples: O2: organolithium compounds and Grignard reagents H2O: anhydrous metal halides and acyl chlorides, as well as organolithium compounds and Grignard reagents CO2: strong bases such as sodium hydroxide, as well as organolithium compounds and Grignard reagents N2: lithium metal (but not organolithium compounds) Some semiconductors are air-sensitive. See also Hygroscopy Hydrophile Ultrahydrophobicity References Air-free techniques Chemical properties
Air sensitivity
[ "Chemistry", "Engineering" ]
473
[ "Vacuum systems", "Air-free techniques", "nan" ]
7,321,060
https://en.wikipedia.org/wiki/Interferometric%20synthetic-aperture%20radar
Interferometric synthetic aperture radar, abbreviated InSAR (or deprecated IfSAR), is a radar technique used in geodesy and remote sensing. This geodetic method uses two or more synthetic aperture radar (SAR) images to generate maps of surface deformation or digital elevation, using differences in the phase of the waves returning to the satellite or aircraft. The technique can potentially measure millimetre-scale changes in deformation over spans of days to years. It has applications for geophysical monitoring of natural hazards, for example earthquakes, volcanoes and landslides, and in structural engineering, in particular monitoring of subsidence and structural stability. Technique Synthetic aperture radar Synthetic aperture radar (SAR) is a form of radar in which sophisticated processing of radar data is used to produce a very narrow effective beam. It can be used to form images of relatively immobile targets; moving targets can be blurred or displaced in the formed images. SAR is a form of active remote sensing – the antenna transmits radiation that is reflected from the image area, as opposed to passive sensing, where the reflection is detected from ambient illumination. SAR image acquisition is therefore independent of natural illumination and images can be taken at night. Radar uses electromagnetic radiation at microwave frequencies; the atmospheric absorption at typical radar wavelengths is very low, meaning observations are not prevented by cloud cover. Phase SAR makes use of the amplitude and the absolute phase of the return signal data. In contrast, interferometry uses differential phase of the reflected radiation, either from multiple passes along the same trajectory and/or from multiple displaced phase centers (antennas) on a single pass. Since the outgoing wave is produced by the satellite, the phase is known, and can be compared to the phase of the return signal. The phase of the return wave depends on the distance to the ground, since the path length to the ground and back will consist of a number of whole wavelengths plus some fraction of a wavelength. This is observable as a phase difference or phase shift in the returning wave. The total distance to the satellite (i.e., the number of whole wavelengths) is known based on the time that it takes for the energy to make the round trip back to the satellite—but it is the extra fraction of a wavelength that is of particular interest and is measured to great accuracy. In practice, the phase of the return signal is affected by several factors, which together can make the absolute phase return in any SAR data collection essentially arbitrary, with no correlation from pixel to pixel. To get any useful information from the phase, some of these effects must be isolated and removed. Interferometry uses two images of the same area taken from the same position (or, for topographic applications, slightly different positions) and finds the difference in phase between them, producing an image known as an interferogram. This is measured in radians of phase difference and, because of the cyclic nature of phase, is recorded as repeating fringes that each represent a full cycle. Factors affecting phase The most important factor affecting the phase is the interaction with the ground surface. The phase of the wave may change on reflection, depending on the properties of the material. 
The reflected signal back from any one pixel is the summed contribution to the phase from many smaller 'targets' in that ground area, each with different dielectric properties and distances from the satellite, meaning the returned signal is arbitrary and completely uncorrelated with that from adjacent pixels. Importantly though, it is consistent – provided nothing on the ground changes, the contributions from each target should sum identically each time, and hence can be removed from the interferogram. Once the ground effects have been removed, the major signal present in the interferogram is a contribution from orbital effects. For interferometry to work, the satellites must be as close as possible to the same spatial position when the images are acquired. This means that images from two satellite platforms with different orbits cannot be compared, and for a given satellite, data from the same orbital track must be used. In practice the perpendicular distance between them, known as the baseline, is often known to within a few centimetres but can only be controlled on a scale of tens to hundreds of metres. This slight difference causes a regular difference in phase that changes smoothly across the interferogram and can be modelled and removed. The slight difference in satellite position also alters the distortion caused by topography, meaning an extra phase difference is introduced by a stereoscopic effect. The longer the baseline, the smaller the topographic height needed to produce a fringe of phase change – known as the altitude of ambiguity. This effect can be exploited to calculate the topographic height, and used to produce a digital elevation model (DEM). If the height of the topography is already known, the topographic phase contribution can be calculated and removed. This has traditionally been done in two ways. In the two-pass method, elevation data from an externally derived DEM is used in conjunction with the orbital information to calculate the phase contribution. In the three-pass method, two images acquired a short time apart are used to create an interferogram, which is assumed to have no deformation signal and therefore to represent the topographic contribution. This interferogram is then subtracted from a third image with a longer time separation to give the residual phase due to deformation. Once the ground, orbital and topographic contributions have been removed, the interferogram contains the deformation signal, along with any remaining noise (see Difficulties below). The signal measured in the interferogram represents the change in phase caused by an increase or decrease in distance from the ground pixel to the satellite; therefore, only the component of the ground motion parallel to the satellite's line-of-sight vector will cause a phase difference to be observed. For sensors like ERS with a small incidence angle this measures vertical motion well, but is insensitive to horizontal motion perpendicular to the line of sight (approximately north–south). It also means that vertical motion and components of horizontal motion parallel to the plane of the line of sight (approximately east–west) cannot be separately resolved. One fringe of phase difference is generated by a ground motion of half the radar wavelength, since this corresponds to a whole-wavelength increase in the two-way travel distance. Phase shifts are only resolvable relative to other points in the interferogram.
Absolute deformation can be inferred by assuming one area in the interferogram (for example, a point away from expected deformation sources) experienced no deformation, or by using a ground control (GPS or similar) to establish the absolute movement of a point. Difficulties A variety of factors govern the choice of images which can be used for interferometry. The simplest is data availability – radar instruments used for interferometry commonly don't operate continuously, acquiring data only when programmed to do so. For future requirements it may be possible to request acquisition of data, but for many areas of the world archived data may be sparse. Data availability is further constrained by baseline criteria. Availability of a suitable DEM may also be a factor for two-pass InSAR; commonly 90 m SRTM data may be available for many areas, but at high latitudes or in areas of poor coverage alternative datasets must be found. A fundamental requirement of the removal of the ground signal is that the sum of phase contributions from the individual targets within the pixel remains constant between the two images and is completely removed. However, there are several factors that can cause this criterion to fail. Firstly, the two images must be accurately co-registered to a sub-pixel level to ensure that the same ground targets are contributing to that pixel. There is also a geometric constraint on the maximum length of the baseline – the difference in viewing angles must not cause phase to change over the width of one pixel by more than a wavelength. The effects of topography also influence the condition, and baselines need to be shorter if terrain gradients are high. Where co-registration is poor or the maximum baseline is exceeded, the pixel phase will become incoherent – the phase becomes essentially random from pixel to pixel rather than varying smoothly, and the area appears noisy. This is also true for anything else that changes the contributions to the phase within each pixel, for example changes to the ground targets in each pixel caused by vegetation growth, landslides, agriculture or snow cover. Another source of error present in most interferograms is caused by the propagation of the waves through the atmosphere. If the wave travelled through a vacuum, it would theoretically be possible (subject to sufficient accuracy of timing) to use the two-way travel time of the wave in combination with the phase to calculate the exact distance to the ground. However, the velocity of the wave through the atmosphere is lower than the speed of light in a vacuum, and depends on air temperature, pressure and the partial pressure of water vapour. It is this unknown phase delay that prevents the integer number of wavelengths being calculated. If the atmosphere were horizontally homogeneous over the length scale of an interferogram and vertically over that of the topography, then the effect would simply be a constant phase difference between the two images which, since phase difference is measured relative to other points in the interferogram, would not contribute to the signal. However, the atmosphere is laterally heterogeneous on length scales both larger and smaller than typical deformation signals. This spurious signal can appear completely unrelated to the surface features of the image; however, in other cases the atmospheric phase delay is caused by vertical inhomogeneity at low altitudes, and this may result in fringes appearing to correspond with the topography.
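As a small worked example of the fringe–displacement relation described above (one 2π fringe corresponds to half the radar wavelength of line-of-sight motion), the following sketch converts unwrapped phase to line-of-sight displacement. The wavelength is that of a typical C-band sensor, and the sign convention is an assumption, since it varies between processing packages.

```python
import numpy as np

# Convert unwrapped interferometric phase to line-of-sight displacement,
# using the relation stated above: one 2*pi fringe corresponds to lambda/2
# of ground motion, because the two-way path changes by a full wavelength.
# The sign convention (toward vs. away from the satellite) is assumed here.

WAVELENGTH_M = 0.0556  # C-band (e.g. Sentinel-1) radar wavelength, ~5.6 cm

def phase_to_los_displacement(unwrapped_phase_rad: np.ndarray) -> np.ndarray:
    """LOS displacement in metres from unwrapped phase in radians."""
    return unwrapped_phase_rad * WAVELENGTH_M / (4.0 * np.pi)

phase = np.array([0.0, np.pi, 2.0 * np.pi])      # zero, half fringe, one fringe
print(phase_to_los_displacement(phase) * 1000)    # ~[0, 13.9, 27.8] mm; one fringe = lambda/2
```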
Persistent scatterer InSAR Persistent or permanent scatterer techniques are a relatively recent development from conventional InSAR, and rely on studying pixels which remain coherent over a sequence of interferograms. In 1999, researchers at Politecnico di Milano, Italy, developed a new multi-image approach in which one searches the stack of images for objects on the ground that provide consistent and stable radar reflections back to the satellite. These objects could be the size of a pixel or, more commonly, sub-pixel sized, and are present in every image in the stack. That specific implementation is patented. Some research centres and companies were inspired to develop variations of their own algorithms which would also overcome InSAR's limitations. In scientific literature, these techniques are collectively referred to as persistent scatterer interferometry or PSI techniques. The term persistent scatterer interferometry (PSI) was proposed by the European Space Agency (ESA) to define the second generation of radar interferometry techniques. This term is nowadays commonly accepted by the scientific and end-user communities. Commonly such techniques are most useful in urban areas with many permanent structures, for example the PSI studies of European geohazard sites undertaken by the Terrafirma project. The Terrafirma project provides a ground motion hazard information service, distributed throughout Europe via national geological surveys and institutions. The objective of this service is to help save lives, improve safety, and reduce economic loss through the use of state-of-the-art PSI information. Over the last 9 years this service has supplied information relating to urban subsidence and uplift, slope stability and landslides, seismic and volcanic deformation, coastlines and flood plains. Producing interferograms The processing chain used to produce interferograms varies according to the software used and the precise application, but will usually include some combination of the following steps. Two SAR images are required to produce an interferogram; these may be obtained pre-processed, or produced from raw data by the user prior to InSAR processing. The two images must first be co-registered, using a correlation procedure to find the offset and difference in geometry between the two amplitude images. One SAR image is then re-sampled to match the geometry of the other, meaning each pixel represents the same ground area in both images. The interferogram is then formed by cross-multiplication of each pixel in the two images, and the interferometric phase due to the curvature of the Earth is removed, a process referred to as flattening. For deformation applications, a DEM can be used in conjunction with the baseline data to simulate the contribution of the topography to the interferometric phase; this can then be removed from the interferogram. Once the basic interferogram has been produced, it is commonly filtered using an adaptive power-spectrum filter to amplify the phase signal. For most quantitative applications the consecutive fringes present in the interferogram will then have to be unwrapped, which involves interpolating over the 0 to 2π phase jumps to produce a continuous deformation field. At some point, before or after unwrapping, incoherent areas of the image may be masked out. The final processing stage involves geocoding the image, which resamples the interferogram from the acquisition geometry (related to the direction of the satellite path) into the desired geographic projection.
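A minimal sketch of the cross-multiplication step in the processing chain just described: given two co-registered complex (single-look complex) images, the interferogram is the pixel-wise product of one image with the complex conjugate of the other, and its angle is the interferometric phase. The arrays below are synthetic stand-ins, not real SAR data.

```python
import numpy as np

# Interferogram formation by cross-multiplication, as described above:
# ifg = s1 * conj(s2), whose pixel-wise angle is the interferometric phase.
# Synthetic data: both images share the same arbitrary per-pixel ground
# phase; the second acquisition adds a common deformation phase shift.

rng = np.random.default_rng(0)
shape = (4, 4)
amplitude = rng.rayleigh(size=shape)
ground_phase = rng.uniform(-np.pi, np.pi, size=shape)  # arbitrary per-pixel phase
deformation_phase = 0.5                                 # phase shift between acquisitions

s1 = amplitude * np.exp(1j * ground_phase)
s2 = amplitude * np.exp(1j * (ground_phase + deformation_phase))

interferogram = s1 * np.conj(s2)
phase = np.angle(interferogram)                 # arbitrary ground phase cancels...
print(np.allclose(phase, -deformation_phase))   # ...leaving only the common shift
```

Note how the arbitrary per-pixel ground phase cancels in the product, leaving only the phase difference between the two acquisitions: the same cancellation the article relies on when it says consistent ground contributions are removed from the interferogram.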
Hardware Spaceborne Early exploitation of satellite-based InSAR included use of Seasat data in the 1980s, but the potential of the technique was expanded in the 1990s, with the launch of ERS-1 (1991), JERS-1 (1992), RADARSAT-1 and ERS-2 (1995). These platforms provided the stable, well-defined orbits and short baselines necessary for InSAR. More recently, the 11-day NASA STS-99 mission in February 2000 used a SAR antenna mounted on the Space Shuttle to gather data for the Shuttle Radar Topography Mission (SRTM). In 2002 ESA launched the ASAR instrument, designed as a successor to ERS, aboard Envisat. While the majority of InSAR to date has utilized C-band sensors, recent missions such as ALOS PALSAR, TerraSAR-X and COSMO-SkyMed are expanding the available data in the L- and X-bands. Sentinel-1A and Sentinel-1B, both C-band sensors, were launched by ESA in 2014 and 2016, respectively. Together, they provide InSAR coverage on a global scale and on a six-day repeat cycle. Airborne Airborne InSAR data acquisition systems are built by companies such as the American Intermap, the German AeroSensing, and the Brazilian OrbiSat. Terrestrial or ground-based Terrestrial or ground-based SAR interferometry (TInSAR or GBInSAR) is a remote sensing technique for the displacement monitoring of slopes, rock scarps, volcanoes, landslides, buildings, infrastructure, etc. This technique is based on the same operational principles as satellite SAR interferometry, but the synthetic aperture of the radar (SAR) is obtained by an antenna moving on a rail instead of a satellite moving around an orbit. The SAR technique allows a 2D radar image of the investigated scenario to be achieved, with a high range resolution (along the instrumental line of sight) and cross-range resolution (along the scan direction). Two antennas respectively emit and receive microwave signals and, by calculating the phase difference between two measurements taken at two different times, it is possible to compute the displacement of all the pixels of the SAR image. The accuracy of the displacement measurement is of the same order of magnitude as the EM wavelength and depends also on the specific local and atmospheric conditions. Applications Tectonic InSAR can be used to measure tectonic deformation, for example ground movements due to earthquakes. It was first used for the 1992 Landers earthquake, but has since been utilised extensively for a wide variety of earthquakes all over the world. In particular the 1999 Izmit and 2003 Bam earthquakes were extensively studied. InSAR can also be used to monitor creep and strain accumulation on faults. Volcanic InSAR can be used in a variety of volcanic settings, including deformation associated with eruptions, inter-eruption strain caused by changes in magma distribution at depth, gravitational spreading of volcanic edifices, and volcano-tectonic deformation signals. Early work on volcanic InSAR included studies on Mount Etna and Kilauea, with many more volcanoes being studied as the field developed. The technique is now widely used for academic research into volcanic deformation, although its use as an operational monitoring technique for volcano observatories has been limited by issues such as orbital repeat times, lack of archived data, coherence and atmospheric errors. Recently InSAR has been used to study rifting processes in Ethiopia.
Subsidence Ground subsidence from a variety of causes has been successfully measured using InSAR, in particular subsidence caused by oil or water extraction from underground reservoirs, subsurface mining and the collapse of old mines. Thus, InSAR has become an indispensable tool for satisfactorily addressing many subsidence studies. Tomás et al. performed a cost analysis that identified the strongest points of InSAR techniques compared with other conventional techniques: (1) higher data acquisition frequency and spatial coverage; and (2) lower annual cost per measurement point and per square kilometre. Landslides Although the InSAR technique can present some limitations when applied to landslides, it can also be used for monitoring landscape features such as landslides. Tomás et al. conducted a bibliometric study on the trends in publications related to landslides and InSAR. They found that the publication trends follow a power model, indicating that despite its inception in the last century, InSAR is a growing topical issue and has become established as a valuable tool for studying landslides. Ice flow Glacial motion and deformation have been successfully measured using satellite interferometry. The technique allows remote, high-resolution measurement of changes in glacial structure, ice flow, and shifts in ice dynamics, all of which agree closely with ground observations. Infrastructure and building monitoring InSAR can also be used to monitor the stability of built structures. Very high resolution SAR data (such as that derived from the TerraSAR-X StripMap mode or COSMO-SkyMed HIMAGE mode) are especially suitable for this task. InSAR is used for monitoring highway and railway settlement, dike stability, forensic engineering and many other applications. DEM generation Interferograms can be used to produce digital elevation maps (DEMs) using the stereoscopic effect caused by slight differences in observation position between the two images. When using two images produced by the same sensor with a separation in time, it must be assumed that other phase contributions (for example from deformation or atmospheric effects) are minimal. In 1995 the two ERS satellites flew in tandem with a one-day separation for this purpose. A second approach is to use two antennas mounted some distance apart on the same platform, and acquire the images at the same time, which ensures no atmospheric or deformation signals are present. This approach was followed by NASA's SRTM mission aboard the Space Shuttle in 2000. InSAR-derived DEMs can be used for later two-pass deformation studies, or for use in other geophysical applications. Mapping and classification of active deformation areas Various procedures have been developed to semi-automatically identify clusters of active persistent scatterers, usually referred to as active deformation areas, and preliminarily associate them with different potential types of deformational processes (e.g., landslides, sinkholes, building settlements, land subsidence) across wide areas. See also Coherence (physics) Optical heterodyne detection Remote sensing ROI PAC References Further reading B. Kampes, Radar Interferometry – Persistent Scatterer Technique, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2006. External links InSAR, a tool for measuring Earth's surface deformation Matthew E. Pritchard USGS InSAR factsheet InSAR Principles, ESA publication, TM19, February 2007. Geophysical survey Geodesy Synthetic aperture radar Interferometry
Interferometric synthetic-aperture radar
[ "Mathematics" ]
4,039
[ "Applied mathematics", "Geodesy" ]
7,321,359
https://en.wikipedia.org/wiki/Fel%20d%201
Fel d 1 is a secretoglobin protein complex that, in cats, is encoded by the CH1 (chain 1/Fel d 1-A) and CH2 (chain 2/Fel d 1-B) genes. Among cats, Fel d 1 is produced largely in their saliva and by the sebaceous glands located in their skin. It is the primary allergen present on cats and kittens. The function of the protein for cats is unknown, but it causes an IgG or IgE reaction in sensitive humans (either as an allergic or asthmatic response). Variation in cats Kittens produce less Fel d 1 than adult cats. Female cats produce a lower level of Fel d 1 than (unneutered) males, while neutered males produce levels similar to those of females. Both intact and spayed females produce similar levels. Although females and neutered males produce Fel d 1 at lower levels, they still produce enough to cause allergic symptoms in sensitive individuals. Researchers have been investigating reports from cat owners that certain breeds of cats either do not produce Fel d 1 or are thought to do so at significantly lower levels than other breeds. For instance, individual cats from the naturally occurring Siberian breed, native to the Siberian region for which the breed is named, have been shown to have genetic variants that result in a lower production of Fel d 1. Another breed thought to have a possible genetic disposition not to produce this allergen, or to produce less of it, is the Balinese, an offshoot of the Siamese breed. Several other breeds are widely referenced as causing a diminished immune reaction in cat allergy sufferers, including the Sphynx, Russian Blue, Cornish Rex, Devon Rex, Siamese, Javanese, Oriental Shorthair, Burmese, and LaPerm. Fairly reliable tests for Fel d 1 protein production are available for individual cats, but research regarding entire breeds continues, hampered by the lack of a thoroughly accessible and accurate genetic test for production of the antigen. Structure The complete quaternary structure of Fel d 1 has been determined. The allergen is a tetrameric glycoprotein consisting of two disulfide-linked heterodimers of chains 1 and 2. Fel d 1 chains 1 and 2 share structural similarity with uteroglobin, a secretoglobin superfamily member; chain 2 is a glycoprotein with N-linked oligosaccharides. Both chains share an all alpha-helical structure. Presence in other species Proteins matching the InterPro family signature for Fel d 1 parts are widespread among Theria, a subclass of mammals within the Theriiformes (the sister taxon to Yinotheria). Theria includes the eutherians, which include the placental mammals, and the metatherians, which include the marsupials. More specifically, the InterPro profiles link the two components of Fel d 1 to the rodent androgen-binding protein (ABP; not to be confused with the human SHBG), a salivary pheromone. A homolog of the Fel d 1 protein is also present in the venom of the slow loris (Primate: Nycticebus). Slow lorises are among the few venomous mammals and are the only known venomous primates. They possess a dual-composite venom of saliva and brachial gland exudate (BGE). The BGE contains a protein resembling Fel d 1 which, as a constituent of the venom, may act on host species as an allergen. It possesses a communicative function. See also Allergy to cats Fel d 4 Slow loris References Further reading Mammalian proteins Felinology Glycoproteins
Fel d 1
[ "Chemistry" ]
776
[ "Glycoproteins", "Glycobiology" ]
7,321,377
https://en.wikipedia.org/wiki/Angle%20of%20loll
Angle of loll is the state of a ship that is unstable when upright (i.e. has a negative metacentric height) and therefore takes on an angle of heel to either port or starboard. When a vessel has negative metacentric height (GM), i.e. is in unstable equilibrium, any external force applied to the vessel will cause it to start heeling. As it heels, the moment of inertia of the vessel's waterplane (a plane intersecting the hull at the water's surface) increases, which increases the vessel's BM (the distance from the centre of buoyancy to the metacentre). Since there is relatively little change in KB (the distance from the keel to the centre of buoyancy), the KM (the distance from the keel to the metacentre) of the vessel increases. At some angle of heel (say 10°), KM will have increased sufficiently to equal KG (the distance from the keel to the centre of gravity), thus making the GM of the vessel equal to zero. When this occurs, the vessel reaches neutral equilibrium, and the angle of heel at which it happens is called the angle of loll. In other words, when an unstable vessel heels over towards a progressively increasing angle of heel, at a certain angle of heel the centre of buoyancy (B) may fall vertically below the centre of gravity (G). The angle of list should not be confused with the angle of loll. An angle of list is caused by unequal loading on either side of the centre line of the vessel. Although a vessel at the angle of loll does display features of stable equilibrium, this is a dangerous situation, and rapid remedial action is required to prevent the vessel from capsizing. It is often caused by the influence of a large free surface or the loss of stability due to damaged compartments. It is different from list in that the vessel is not induced to heel to one side or the other by the distribution of weight; it is merely incapable of maintaining a zero-heel attitude. See also Angle of list Capsizing Fluid statics Kayak roll Limit of Positive Stability Naval architecture Ship stability Turtling Weight distribution References Naval architecture Engineering concepts Ship measurements
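For a rough numerical estimate of the angle at which GM returns to zero, ship-stability texts give the wall-sided approximation tan(angle of loll) = sqrt(2·|GM|/BM), which holds only for hulls with near-vertical sides at moderate angles of heel. The sketch below applies it with illustrative GM and BM values.

```python
import math

# Rough estimate of the angle of loll using the "wall-sided" approximation
# from ship-stability texts: tan(loll) = sqrt(2 * |GM| / BM), valid only
# for near-vertical hull sides at moderate heel. Values are illustrative.

GM = -0.08  # initial metacentric height (m); negative = unstable upright
BM = 4.0    # metacentric radius, centre of buoyancy to metacentre (m)

angle_of_loll = math.degrees(math.atan(math.sqrt(2.0 * abs(GM) / BM)))
print(f"Estimated angle of loll: {angle_of_loll:.1f} degrees")  # ~11.3 degrees
```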
Angle of loll
[ "Engineering" ]
448
[ "Naval architecture", "nan", "Marine engineering" ]
7,321,584
https://en.wikipedia.org/wiki/Polyhydroxyethylmethacrylate
Poly(2-hydroxyethyl methacrylate) (pHEMA) is a polymer that forms a hydrogel in water. For example, PHEMA hydrogels for intraocular lens (IOL) materials have been synthesized by solution polymerization, using 2-hydroxyethyl methacrylate (HEMA) as the monomer, ammonium persulfate and sodium pyrosulfite (APS/SMBS) as the initiator, and triethylene glycol dimethacrylate (TEGDMA) as the cross-linking additive. The polymer was invented by Drahoslav Lim and Otto Wichterle for biological use. Together they succeeded in preparing a cross-linked gel which absorbed up to 40% water, exhibited suitable mechanical properties, and was transparent. They patented this material in 1953. Applications Contact lenses In 1959, this material was first used as an optical implant. Wichterle thought pHEMA might be a suitable material for a contact lens and gained his first patent for soft contact lenses. By late 1961, he succeeded in producing the first four pHEMA hydrogel contact lenses on a home-made apparatus. Copolymers of pHEMA are still widely used today. Poly-HEMA functions as a hydrogel because its pendant groups can rotate around the central carbon. In air, the non-polar methyl side turns outward, making the material brittle and easy to grind into the correct lens shape. In water, the polar hydroxyethyl side turns outward and the material becomes flexible. Pure pHEMA yields lenses that are too thick for sufficient oxygen to diffuse through, so all pHEMA-based contact lenses are manufactured with copolymers that make the gel thinner and increase its water of hydration. These copolymer hydrogel lenses are often suffixed "-filcon"; for example, Methafilcon is a copolymer of hydroxyethyl methacrylate and methyl methacrylate. Another copolymer hydrogel lens material, called Polymacon, is a copolymer of hydroxyethyl methacrylate and ethylene glycol dimethacrylate. Cell culture pHEMA is commonly used to coat cell culture flasks in order to prevent cell adhesion and induce spheroid formation, particularly in cancer research. Older alternatives to pHEMA include agar and agarose gels. References Plastics Acrylate polymers Czech inventions
Polyhydroxyethylmethacrylate
[ "Physics" ]
519
[ "Amorphous solids", "Unsolved problems in physics", "Plastics" ]
7,321,888
https://en.wikipedia.org/wiki/Dielectric%20thermal%20analysis
Dielectric thermal analysis (DETA), or dielectric analysis (DEA), is a materials science technique similar to dynamic mechanical analysis, except that an oscillating electric field is used instead of a mechanical force. For investigating the curing behavior of thermosetting resin systems, composite materials, adhesives and paints, dielectric analysis (DEA) can be used in accordance with ASTM E 2038 or E 2039. A great advantage of DEA is that it can be employed not only at laboratory scale but also in-process. Measuring principle In a typical test, the sample is placed in contact with two electrodes (the dielectric sensor) and a sinusoidal voltage (the excitation) is applied to one electrode. The resulting sinusoidal current (the response) is measured at the second electrode. The response signal is attenuated in amplitude and shifted in phase according to the mobility of the ions and the alignment of the dipoles. Dipoles in the material will attempt to align with the electric field, and ions (present as impurities) will move toward the electrode of opposite polarity. The dielectric properties, the permittivity ε' and the loss factor ε", are then calculated from the measured amplitude and phase change, as sketched below. References Materials science Scientific techniques
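A sketch of that last step (not from the original article): for an idealized parallel-plate sensor with known empty-cell capacitance C0, the measured current amplitude and phase give ε' and ε" directly, since I/V = jωC0(ε' − jε"). Real instruments calibrate away electrode and fringing effects that this toy model ignores.

```python
import math

def permittivity_from_response(v0, i0, phase_deg, freq_hz, c0):
    """Recover permittivity eps' and loss factor eps'' from a DEA measurement.

    v0        -- excitation voltage amplitude [V]
    i0        -- response current amplitude [A]
    phase_deg -- phase lead of the current over the voltage [degrees]
    freq_hz   -- excitation frequency [Hz]
    c0        -- empty-cell (geometric) capacitance of the sensor [F]

    Model: I/V = j*omega*c0*(eps' - j*eps''), i.e. an ideal plate capacitor
    completely filled with the sample.
    """
    omega = 2 * math.pi * freq_hz
    scale = i0 / (v0 * omega * c0)
    phi = math.radians(phase_deg)
    eps_store = scale * math.sin(phi)  # eps': capacitive (storage) part
    eps_loss = scale * math.cos(phi)   # eps'': dipolar/ionic loss part
    return eps_store, eps_loss

# A lossless dielectric would give a 90-degree phase lead and eps'' = 0;
# at 85 degrees a small loss appears (illustrative numbers):
print(permittivity_from_response(1.0, 1e-6, 85.0, 1000.0, 50e-12))
```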
Dielectric thermal analysis
[ "Physics", "Materials_science", "Engineering" ]
268
[ "Materials science stubs", "Applied and interdisciplinary physics", "Materials science", "nan" ]
7,322,992
https://en.wikipedia.org/wiki/Matterhorn%20%28ride%29
The Matterhorn or Flying Bobs, sometimes known by alternate names such as Musik Express or Terminator, is an amusement ride very similar to the Superbob, consisting of a number of cars attached to axles that swing in and out. The hill-and-valley shape of the ride causes a pronounced swinging motion: the faster the ride goes, the more dramatic the swinging motion. This ride is commonly seen at travelling funfairs. Most carnivals and parks require riders to be at least 42 inches tall. United States In the United States these rides are commonly known as "Flying Bobs". They can typically be found at carnivals, where another common name for them is the "Himalaya", but they also exist at amusement parks, such as the Flying Bobs at DelGrosso's Amusement Park, KonTiki at Six Flags New England and at Coney Island (Cincinnati), and Matterhorn at Cedar Point and Lake Winnepesaukah. The carnival rides are typically transported on two trucks: one for the ride itself, and the other for the swinging cars. All rides are essentially similar in concept but have varying designs. Cars typically move forward and backward at varying intervals during the ride. The Allan Herschell Company made the first "Flying Bobs" in the 1960s. Chance-Morgan currently manufactures a few versions, called the "Alpine Bobs" or "Thunder Bolt". Mack manufactures the Matterhorn, Feria Swing, and Petersburg Schlittenfahrt (Sleigh Ride). Another common manufacturer of the Matterhorn is Bertazzon. It is currently unknown which company manufactured the other version, called "Rip Curl", at Fun Spot in Orlando, Florida. United Kingdom The common analog of the Matterhorn is the Music Express. The main difference between the two rides is the Music Express's use of a track rather than axles. These versions can also be found in the United States, but have been discontinued by all manufacturers except Bertazzon. Rides manufacturers Bertazzon Chance Morgan Mack Rides Reverchon Industries See also Matterhorn External links Matterhorn History at the nfa. Database of Matterhorns travelling in the UK. Video featuring a Matterhorn. References Amusement rides
Matterhorn (ride)
[ "Physics", "Technology" ]
457
[ "Physical systems", "Machines", "Amusement rides" ]
7,324,284
https://en.wikipedia.org/wiki/Indexed%20language
Indexed languages are a class of formal languages introduced by Alfred Aho; they are described by indexed grammars and can be recognized by nested stack automata. Indexed languages are a proper subset of the context-sensitive languages. They qualify as an abstract family of languages (furthermore a full AFL) and hence satisfy many closure properties. However, they are not closed under intersection or complement. The class of indexed languages is a generalization of the context-free languages, and indexed grammars can describe many of the nonlocal constraints occurring in natural languages. Gerald Gazdar (1988) and Vijay-Shanker (1987) introduced a mildly context-sensitive grammar class now known as linear indexed grammars (LIG). Linear indexed grammars have additional restrictions relative to IG. LIGs are weakly equivalent to tree adjoining grammars (they generate the same language class). Examples The following languages are indexed, but are not context-free: These two languages are also indexed, but are not even mildly context sensitive under Gazdar's characterization: On the other hand, the following language is not indexed: (A classic example of an indexed language that is not context-free is {a^n b^n c^n : n ≥ 0}; a grammar for it is sketched below.) Properties Hopcroft and Ullman tend to consider indexed languages as a "natural" class, since they are generated by several formalisms, such as: Aho's indexed grammars Aho's one-way nested stack automata Fischer's macro grammars Greibach's automata with stacks of stacks Maibaum's algebraic characterization Hayashi generalized the pumping lemma to indexed grammars. Conversely, Gilman gives a "shrinking lemma" for indexed languages. See also Chomsky hierarchy References External links "NLP in Prolog" chapter on indexed grammars and languages Formal languages
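The example formulas did not survive in the text above, so here is a standard illustration, not necessarily the article's original choice: an indexed grammar for {a^n b^n c^n : n ≥ 0}, a language that is indexed but not context-free. Productions take the three permitted forms A → α (the index stack is copied to every nonterminal in α), A → B f (push index f), and A f → α (pop index f):

$$
\begin{aligned}
&S \to T\,g, \qquad T \to T\,f, \qquad T \to A\,B\,C,\\
&A\,f \to a\,A, \qquad B\,f \to b\,B, \qquad C\,f \to c\,C,\\
&A\,g \to \varepsilon, \qquad B\,g \to \varepsilon, \qquad C\,g \to \varepsilon.
\end{aligned}
$$

A derivation first builds a stack f^n g on T, then copies it to A, B, and C; each of these pops one f per terminal it emits and stops at g, so the three blocks necessarily have equal length. Enforcing equal counts across three blocks is exactly what context-free grammars cannot do.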
Indexed language
[ "Mathematics" ]
347
[ "Formal languages", "Mathematical logic" ]
7,324,297
https://en.wikipedia.org/wiki/Genome-based%20peptide%20fingerprint%20scanning
Genome-based peptide fingerprint scanning (GFS) is a system in bioinformatics analysis that attempts to identify the genomic origin (that is, what species they come from) of sample proteins by scanning their peptide-mass fingerprint against the theoretical translation and proteolytic digest of an entire genome. This method is an improvement over previous methods because it compares the peptide fingerprints to an entire genome instead of comparing them to an already annotated genome. This improvement has the potential to improve genome annotation and to identify proteins with incorrect or missing annotations. History and background GFS was designed by Michael C. Giddings (University of North Carolina, Chapel Hill) et al., and released in 2003. Giddings expanded the algorithms for GFS from earlier ideas. Two papers were published in 1993 explaining the techniques used to identify proteins in sequence databases. These methods determined the masses of peptides using mass spectrometry, and then used those masses to search protein databases to identify the proteins. In 1999 a more complex program called Mascot was released, integrating three types of protein/database searches: peptide molecular weights, tandem mass spectrometry from one or more peptides, and combined mass data with amino acid sequence. The drawback of this widely used program is that it is unable to detect alternative splice sites that are not currently annotated, and it is not usually able to find proteins that have not been annotated. Giddings built upon these sources to create GFS, which compares peptide mass data to entire genomes to identify the proteins. Giddings's system is able to find new annotations of genes that had not been found, such as undocumented genes and undocumented alternative splice sites. Research examples In 2012 research was published in which genes and proteins were found in a model organism that could not have been found without GFS, because they had not been previously annotated. The planarian Schmidtea mediterranea has been used in research for over 100 years. This planarian is capable of regenerating missing body parts and is therefore emerging as a potential model organism for stem cell research. Planarians are covered in mucus, which aids in locomotion, protects them from predation, and helps their immune system. The genome of Schmidtea mediterranea is sequenced but mostly un-annotated, making it a prime candidate for genome-based peptide fingerprint scanning. When the proteins were analyzed with GFS, 1,604 proteins were identified, most of which had not been annotated before they were found with GFS. The researchers were also able to find the mucous subproteome (all the genes associated with mucus production), and found that this subproteome is conserved in the sister species Schmidtea mansoni. The mucous subproteome is so conserved that 119 orthologs of planarian proteins are found in humans. Due to the similarity of these genes, the planarian can now be used as a model to study mucous protein function in humans. This is relevant for infections and diseases related to mucous aberrancies, such as cystic fibrosis, asthma, and other lung diseases. These genes could not have been found without GFS because they had not been previously annotated. In February 2013, proteogenomic mapping research was done with ENCODE to identify translated regions in the human genome.
They applied peptide fingerprint scanning and MASCOT to the protein data to find regions that may not have been previously annotated as translated in the human genome. This search against the whole genome revealed that approximately 4% of the unique peptides they found were outside of previously annotated regions. The whole-genome comparison also revealed 15% more hits than a protein database search (such as MASCOT) alone. GFS can be used as a complementary method for annotation, since it can find new genes or splice sites that have not been annotated before (a toy sketch of the underlying peptide-mass matching follows below). However, it is important to remember that the whole-genome approach used by GFS can be less sensitive than programs that look only at annotated regions. References External links Genome-based Peptide Fingerprint Scanning (GFS) Documentation Facebook link to "Genome-based Peptide Fingerprint Scanning" Explanation of MS/MS in relation to MASCOT Bioinformatics Genomics techniques
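A toy sketch of the peptide-mass-fingerprint matching step that GFS and MASCOT-style searches share. This is not the GFS algorithm itself, which digests six-frame translations of an entire genome; the sequence, tolerance, and "observed" masses below are invented, and the residue-mass table is truncated to the amino acids used.

```python
# Monoisotopic residue masses in daltons; one water is added per peptide.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111,
    "F": 147.06841, "E": 129.04259, "D": 115.02694, "T": 101.04768,
}
WATER = 18.01056

def tryptic_peptides(protein: str):
    """Cleave after K or R, except when the next residue is P (trypsin rule)."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])
    return peptides

def match_fingerprint(protein, observed_masses, tol=0.5):
    """Count observed peptide masses that match the theoretical digest."""
    theoretical = {sum(RESIDUE_MASS[a] for a in p) + WATER
                   for p in tryptic_peptides(protein)}
    return sum(any(abs(m - t) <= tol for t in theoretical)
               for m in observed_masses)

# The toy protein digests into GASPVK, LDER and FT; the two "observed"
# masses match GASPVK (557.32 Da) and LDER (531.27 Da), so this prints 2.
print(match_fingerprint("GASPVKLDERFT", [557.32, 531.27]))
```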
Genome-based peptide fingerprint scanning
[ "Chemistry", "Engineering", "Biology" ]
887
[ "Genetics techniques", "Genomics techniques", "Biological engineering", "Bioinformatics", "Molecular biology techniques" ]
7,324,959
https://en.wikipedia.org/wiki/Rook%27s%20graph
In graph theory, a rook's graph is an undirected graph that represents all legal moves of the rook chess piece on a chessboard. Each vertex of a rook's graph represents a square on a chessboard, and there is an edge between any two squares sharing a row (rank) or column (file), the squares that a rook can move between. These graphs can be constructed for chessboards of any rectangular shape. Although rook's graphs have only minor significance in chess lore, they are more important in the abstract mathematics of graphs through their alternative constructions: rook's graphs are the Cartesian product of two complete graphs, and are the line graphs of complete bipartite graphs. The square rook's graphs constitute the two-dimensional Hamming graphs. Rook's graphs are highly symmetric, having symmetries taking every vertex to every other vertex. In rook's graphs defined from square chessboards, more strongly, every two edges are symmetric, and every pair of vertices is symmetric to every other pair at the same distance in moves (making the graph distance-transitive). For rectangular chessboards whose width and height are relatively prime, the rook's graphs are circulant graphs. With one exception, the rook's graphs can be distinguished from all other graphs using only two properties: the numbers of triangles each edge belongs to, and the existence of a unique 4-cycle connecting each nonadjacent pair of vertices. Rook's graphs are perfect graphs. In other words, every subset of chessboard squares can be colored so that no two squares in a row or column have the same color, using a number of colors equal to the maximum number of squares from the subset in any single row or column (the clique number of the induced subgraph). This class of induced subgraphs is a key component of a decomposition of perfect graphs used to prove the strong perfect graph theorem, which characterizes all perfect graphs. The independence number and domination number of a rook's graph both equal the smaller of the chessboard's width and height. In terms of chess, the independence number is the maximum number of rooks that can be placed without attacking each other; the domination number is the minimum number needed to attack all unoccupied board squares. Rook's graphs are well-covered graphs, meaning that placing non-attacking rooks one at a time can never get stuck until a set of maximum size is reached. Definition and mathematical constructions An n × m rook's graph represents the moves of a rook on an n × m chessboard. Its vertices represent the squares of the chessboard, and may be given coordinates (x, y), where 1 ≤ x ≤ n and 1 ≤ y ≤ m. Two vertices with coordinates (x1, y1) and (x2, y2) are adjacent if and only if either x1 = x2 or y1 = y2. (If x1 = x2, the vertices share a file and are connected by a vertical rook move; if y1 = y2, they share a rank and are connected by a horizontal rook move.) The squares of a single rank or file are all directly connected to each other, so each rank and file forms a clique (a subset of vertices forming a complete graph). The whole rook's graph for an n × m chessboard can be formed from these two kinds of cliques, as the Cartesian product of complete graphs Kn □ Km. Because the rook's graph for a square chessboard is the Cartesian product of equal-size cliques, it is an example of a Hamming graph. Its dimension as a Hamming graph is two, and every two-dimensional Hamming graph is a rook's graph for a square chessboard. Square rook's graphs are also called "Latin square graphs"; applied to a Latin square, their edges describe pairs of squares that cannot contain the same value.
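A direct rendering of the coordinate definition above (a sketch, using zero-based coordinates for convenience):

```python
from itertools import product

def rook_graph(n: int, m: int):
    """Adjacency sets of the n-by-m rook's graph: vertices are board squares,
    and edges join two squares exactly when they share a file or a rank."""
    vertices = list(product(range(n), range(m)))
    adj = {v: set() for v in vertices}
    for (x1, y1), (x2, y2) in product(vertices, repeat=2):
        if (x1, y1) != (x2, y2) and (x1 == x2 or y1 == y2):
            adj[(x1, y1)].add((x2, y2))
    return adj

adj = rook_graph(4, 4)
degrees = {len(neighbours) for neighbours in adj.values()}
print(len(adj), degrees)  # 16 vertices, every degree equal to 4 + 4 - 2 = 6
```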
The Sudoku graphs are rook's graphs with some additional edges, connecting squares of a Sudoku puzzle that should have unequal values. Geometrically, the rook's graphs can be formed by sets of the vertices and edges (the skeletons) of a family of convex polytopes, the Cartesian products of pairs of neighborly polytopes. For instance, the 3-3 duoprism is a four-dimensional shape formed as the Cartesian product of two triangles, and has a rook's graph as its skeleton. Regularity and symmetry Strong regularity Moon and Moser observe that the rook's graph (or equivalently, as they describe it, the line graph of the complete bipartite graph Kn,m) has all of the following properties: It has nm vertices, one for each square of the chessboard. Each vertex is adjacent to n + m − 2 edges, connecting it to the n − 1 other squares on the same rank and the m − 1 other squares on the same file. The triangles within the rook's graph are formed by triples of squares within a single rank or file. When n ≠ m, the edges connecting squares on the same rank belong to n − 2 triangles each, while the remaining edges (the ones connecting squares on the same file) belong to m − 2 triangles each. When n = m, each edge belongs to n − 2 triangles. Every two nonadjacent vertices belong to a unique 4-vertex cycle, namely the only rectangle using the two vertices as corners. They show that, except in the case n = m = 4, these properties uniquely characterize the rook's graph. That is, the rook's graphs are the only graphs with these numbers of vertices, edges, and triangles per edge, and with a unique 4-cycle through each two non-adjacent vertices. When n = m, these conditions may be abbreviated by stating that an n × n rook's graph is a strongly regular graph with parameters srg(n², 2n − 2, n − 2, 2). These parameters describe the number of vertices, the number of edges per vertex, the number of triangles per edge, and the number of shared neighbors for two non-adjacent vertices, respectively. Conversely, every strongly regular graph with these parameters must be an n × n rook's graph, unless n = 4. When n = 4, there is another strongly regular graph, the Shrikhande graph, with the same parameters as the 4 × 4 rook's graph. The Shrikhande graph obeys the same properties listed by Moon and Moser. It can be distinguished from the rook's graph in that the neighborhood of each vertex in the Shrikhande graph is connected to form a 6-cycle. In contrast, in the rook's graph, the neighborhood of each vertex forms two triangles, one for its rank and another for its file, without any edges from one part of the neighborhood to the other. Another way of distinguishing the rook's graph from the Shrikhande graph uses clique cover numbers: the 4 × 4 rook's graph can be covered by four cliques (the four ranks or the four files of the chessboard) whereas six cliques are needed to cover the Shrikhande graph. Symmetry Rook's graphs are vertex-transitive, meaning that they have symmetries taking every vertex to every other vertex. This implies that every vertex has an equal number of edges: they are (n + m − 2)-regular. The rook's graphs are the only regular graphs formed from the moves of standard chess pieces in this way. When n ≠ m, the symmetries of the rook's graph are formed by independently permuting the rows and columns of the graph, so the automorphism group of the graph has n! m! elements. When n = m, the graph has additional symmetries that swap the rows and columns, so the number of automorphisms is 2 (n!)². Any two vertices in a rook's graph are either at distance one or two from each other, according to whether they are adjacent or nonadjacent respectively.
Any two nonadjacent vertices may be transformed into any other two nonadjacent vertices by a symmetry of the graph. When the rook's graph is not square, the pairs of adjacent vertices fall into two orbits of the symmetry group according to whether they are adjacent horizontally or vertically, but when the graph is square any two adjacent vertices may also be mapped into each other by a symmetry, and the graph is therefore distance-transitive. When n and m are relatively prime, the symmetry group of the rook's graph contains as a subgroup a cyclic group of order nm that acts by cyclically permuting the nm vertices; therefore, in this case, the rook's graph is a circulant graph. Square rook's graphs are connected-homogeneous, meaning that every isomorphism between two connected induced subgraphs can be extended to an automorphism of the whole graph. Other properties Perfection A rook's graph can also be viewed as the line graph of a complete bipartite graph Kn,m; that is, it has one vertex for each edge of Kn,m, and two vertices of the rook's graph are adjacent if and only if the corresponding edges of the complete bipartite graph share a common endpoint. In this view, an edge in the complete bipartite graph from the ith vertex on one side of the bipartition to the jth vertex on the other side corresponds to a chessboard square with coordinates (i, j). Any bipartite graph is a subgraph of a complete bipartite graph, and correspondingly any line graph of a bipartite graph is an induced subgraph of a rook's graph. The line graphs of bipartite graphs are perfect: in them, and in any of their induced subgraphs, the number of colors needed in any vertex coloring is the same as the number of vertices in the largest complete subgraph. Line graphs of bipartite graphs form an important family of perfect graphs: they are one of a small number of families used in the proof of the strong perfect graph theorem to show that every graph with no odd hole and no odd antihole is perfect. In particular, rook's graphs are themselves perfect. Because a rook's graph is perfect, the number of colors needed in any coloring of the graph is just the size of its largest clique. The cliques of a rook's graph are the subsets of a single row or a single column, and the largest of these have size max(n, m), so this is also the chromatic number of the graph. An n-coloring of an n × n rook's graph may be interpreted as a Latin square: it describes a way of filling the rows and columns of an n × n grid with n different values in such a way that the same value does not appear twice in any row or column. In the same way, a coloring of a rectangular rook's graph corresponds to a Latin rectangle. Although finding an optimal coloring of a rook's graph is straightforward, it is NP-complete to determine whether a partial coloring can be extended to a coloring of the whole graph (this problem is called precoloring extension). Equivalently, it is NP-complete to determine whether a partial Latin square can be completed to a full Latin square. Independence An independent set in a rook's graph is a set of vertices, no two of which belong to the same row or column of the graph; in chess terms, it corresponds to a placement of rooks no two of which attack each other. Perfect graphs may also be described as the graphs in which, in every induced subgraph, the size of the largest independent set is equal to the number of cliques in a partition of the graph's vertices into a minimum number of cliques.
In a rook's graph, the sets of rows or the sets of columns (whichever has fewer sets) form such an optimal partition. The size of the largest independent set in the graph is therefore min(n, m). Rook's graphs are well-covered graphs: every independent set in a rook's graph can be extended to a maximum independent set, and every maximal independent set in a rook's graph has the same size, min(n, m). Domination The domination number of a graph is the minimum cardinality among all dominating sets. On the rook's graph a set of vertices is a dominating set if and only if their corresponding squares either occupy, or are a rook's move away from, all squares on the board. For the n × m board the domination number is min(n, m). On the rook's graph a k-dominating set is a set of vertices whose corresponding squares attack all other squares (via a rook's move) at least k times. A k-tuple dominating set on the rook's graph is a set of vertices whose corresponding squares attack all other squares at least k times and are themselves attacked at least k − 1 times. The minimum cardinalities among all k-dominating and k-tuple dominating sets are the k-domination number and the k-tuple domination number, respectively. On the square board, closed formulas for the k-domination number are known for even k in a certain range, and similarly for the k-tuple domination number when k is odd and sufficiently small. Hamiltonicity Every rook's graph contains a Hamiltonian cycle. However, these cycles may involve moves between squares that are far apart within a single row or column of the chessboard. Instead, the study of "rook's tours", in the mathematics of chess, has generally concentrated on a special case of these Hamiltonian cycles where the rook is restricted to move only to adjacent squares. These single-step rook's tours only exist on boards with an even number of squares. They play a central role in the proof of Gomory's theorem that, if two squares of opposite colors are removed from a standard chessboard, the remaining squares can always be covered by dominoes. They are featured alongside knight's tours in the first work to discuss chess-piece tours, the 9th-century Sanskrit Kavyalankara of Rudrata. Spectrum The spectrum of a rook's graph (the eigenvalues of its adjacency matrix) consists of the four eigenvalues n + m − 2, n − 2, m − 2, and −2. Because these are all integers, rook's graphs are integral graphs. There are only three classes of graphs (and finitely many exceptional graphs) that can have four eigenvalues with one of the four being −2; one of the three classes is the class of rook's graphs. For most combinations of n and m, the rook's graph is spectrally unique: no other graph has the same spectrum. In particular this is true when n and m sum to at least 18 and avoid certain exceptional forms, as well as in various smaller cases. In other graphs The graphs for which the neighbors of each vertex induce a rook's graph have been called locally grid. Examples include the Johnson graphs J(n, k), in which the neighbors of each vertex form a k × (n − k) rook's graph. Other examples are known, and for some rook's graphs, a complete classification is known. For instance, there are two graphs whose vertex neighborhoods are all rook's graphs of a particular size: a Johnson graph, and the complement graph of a rook's graph.
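The optimal coloring discussed above has a one-line description; this sketch (not from the article) colors square (x, y) with (x + y) mod max(n, m), which uses max(n, m) colors and never repeats a color within a rank or file, so for n = m the color classes spell out a Latin square:

```python
def latin_coloring(n: int, m: int):
    """Proper colouring of the n-by-m rook's graph with max(n, m) colours."""
    k = max(n, m)
    return {(x, y): (x + y) % k for x in range(n) for y in range(m)}

colours = latin_coloring(4, 4)
# Sanity check: squares sharing a file or a rank never share a colour.
assert all(colours[a] != colours[b]
           for a in colours for b in colours
           if a != b and (a[0] == b[0] or a[1] == b[1]))
print(sorted(set(colours.values())))  # [0, 1, 2, 3]: max(4, 4) colours suffice
```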
See also Bishop's graph Chessboard complex, the independence complex of the rook's graph King's graph Knight's graph Lattice graph, the graph of horizontal and vertical adjacencies of squares on a chessboard Queen's graph References External links Mathematical chess problems Perfect graphs Parametric families of graphs Regular graphs Strongly regular graphs
Rook's graph
[ "Mathematics" ]
3,017
[ "Recreational mathematics", "Mathematical chess problems" ]
7,325,366
https://en.wikipedia.org/wiki/Timeline%20of%20telescope%20technology
The following timeline lists the significant events in the invention and development of the telescope. BC 2560 BC to 1 BC c.2560 BC–c.860 BC — Egyptian artisans polish rock crystal, semi-precious stones, and latterly glass to produce facsimile eyes for statuary and mummy cases. The intent appears to be to produce an optical illusion. 424 BC — Aristophanes' "lens" is a glass globe filled with water. (Seneca says that it can be used to read letters, no matter how small or dim.) 3rd century BC — Euclid is the first to study reflection and refraction using mathematical theorems, based on the principle that light travels in straight lines. AD 1 AD to 999 AD 2nd century AD — Ptolemy (in his work Optics) writes about the properties of light, including reflection, refraction, and colour. 984 — Ibn Sahl completes a treatise On Burning Mirrors and Lenses, describing plano-convex and biconvex lenses, and parabolic and ellipsoidal mirrors. 1000 AD to 1999 AD 1011–1021 — Ibn al-Haytham (also known as Alhacen or Alhazen) writes the Kitab al-Manazir (Book of Optics). 1230–1235 — Robert Grosseteste describes the use of 'optics' to "...make small things placed at a distance appear any size we want, so that it may be possible for us to read the smallest letters at incredible distances..." ("Haec namque pars Perspectivae perfecte cognita ostendit nobis modum, quo res longissime distantes faciamus apparere propinquissime positas et quo res magnas propinquas faciamus apparere brevissimas et quo res longe positas parvas faciamus apparere quantum volumus magnas, ita ut possible sit nobis ex incredibili distantia litteras minimas legere, aut arenam, aut granum, aut gramina, aut quaevis minuta numerare.") in his work De Iride. 1266 — Roger Bacon mentions the magnifying properties of transparent objects in his treatise Opus Majus. 1270 (approx) — Witelo writes Perspectiva ("Optics"), incorporating much of the Kitab al-Manazir. 1285–1300 — Spectacles are invented. 1570 — The writings of Thomas Digges describe how his father, English mathematician and surveyor Leonard Digges (1520–1559), made use of a "proportional Glass" to view distant objects and people. Some, such as the historian Colin Ronan, claim this describes a reflecting or refracting telescope built between 1540 and 1559, but its vague description and claimed performance make it dubious. 1586 — Giambattista della Porta writes "...to make glasses that can recognize a man several miles away". It is unclear whether he is describing a telescope or corrective glasses. 1608 — Hans Lippershey, a Dutch lensmaker, applies for a patent for a perspective glass "for seeing things far away as if they were nearby", the first recorded design for what will later be called a telescope. His application beats that of fellow Dutch instrument-maker Jacob Metius by a few weeks. A claim would be made 37 years later by the son of another Dutch spectacle-maker that his father, Zacharias Janssen, invented the telescope. 1609 — Galileo Galilei makes his own improved version of Lippershey's telescope, calling it a "perspicillum". 1611 — Greek mathematician Giovanni Demisiani coins the word "telescope" (from the Greek τῆλε, tele "far" and σκοπεῖν, skopein "to look or see"; τηλεσκόπος, teleskopos "far-seeing") for one of Galileo Galilei's instruments presented at a banquet at the Accademia dei Lincei. 1611 — Johannes Kepler describes the optics of lenses (see his books Astronomiae Pars Optica and Dioptrice), including a new kind of astronomical telescope with two convex lenses (the 'Keplerian' telescope).
1616 — Niccolò Zucchi claims that at this time he experimented with a concave bronze mirror, attempting to make a reflecting telescope. 1630 — Christoph Scheiner constructs a telescope to Kepler's design. 1650 — Christiaan Huygens produces his design for a compound eyepiece. 1663 — Scottish mathematician James Gregory designs a reflecting telescope with a paraboloid primary mirror and an ellipsoid secondary mirror. Construction techniques of the time could not make it, and a workable model was not produced until 10 years later, by Robert Hooke. The design is known as the 'Gregorian'. 1668 — Isaac Newton produces the first functioning reflecting telescope, using a spherical primary mirror and a flat diagonal secondary mirror. This design is termed the 'Newtonian'. 1672 — Laurent Cassegrain produces a design for a reflecting telescope using a paraboloid primary mirror and a hyperboloid secondary mirror. The design, named the 'Cassegrain', is still used in observatory telescopes today. 1674 — Robert Hooke produces a reflecting telescope based on the Gregorian design. 1684 — Christiaan Huygens publishes "Astroscopia Compendiaria", in which he describes the design of very long aerial telescopes. 1720 — John Hadley develops ways of aspherizing spherical mirrors to make very accurate parabolic mirrors and produces a much improved Gregorian telescope. 1721 — John Hadley experiments with the neglected Newtonian telescope design and demonstrates one with a 6-inch parabolic mirror to the Royal Society. 1730s — James Short succeeds in producing Gregorian telescopes to the true paraboloidal-primary and ellipsoidal-secondary design specifications. 1733 — Chester Moore Hall invents the achromatic lens. 1758 — John Dollond re-invents and patents the achromatic lens. 1783 — Jesse Ramsden invents his eponymous eyepiece. 1803 — The "Observatorio Astronómico Nacional de Colombia (OAN)" is inaugurated as the first observatory in the Americas, in Bogotá, Colombia. 1849 — Carl Kellner designs and manufactures the first achromatic eyepiece, announced in his paper "Das orthoskopische Ocular". 1857 — Léon Foucault improves reflecting telescopes when he introduces a process of depositing a layer of silver on glass telescope mirrors. 1860 — Georg Simon Plössl produces his eponymous eyepiece. 1880 — Ernst Abbe designs the first orthoscopic eyepiece (Kellner's was solely achromatic rather than orthoscopic, despite his description). 1897 — The largest practical refracting telescope, the Yerkes Observatory's 40-inch (101.6 cm) refractor, is built. 1900 — The largest refractor ever, the Great Paris Exhibition Telescope of 1900, with an objective of 49.2 inches (1.25 m) diameter, is temporarily exhibited at the Paris 1900 Exposition. 1910s — George Willis Ritchey and Henri Chrétien co-invent the Ritchey-Chrétien telescope, used in many, if not most, of the largest astronomical telescopes. 1930 — Bernhard Schmidt invents the Schmidt camera. 1932 — John Donovan Strong first "aluminizes" a telescope mirror, producing a much longer-lasting aluminium coating by thermal vacuum evaporation. 1944 — Dmitri Dmitrievich Maksutov invents the Maksutov telescope. 1967 — The first neutrino telescope opens, in Africa. 1970 — Uhuru, the first orbiting observatory devoted to X-ray astronomy, is launched. 1975 — BTA-6 is the first major telescope to use an altazimuth mount, which is mechanically simpler but requires computer control for accurate pointing.
1990 — The Hubble Space Telescope (HST) is launched into low Earth orbit. 2000 CE to 2025 CE 2003 — The Spitzer Space Telescope (SST), formerly the Space Infrared Telescope Facility (SIRTF), an infrared space observatory, is launched. It is the fourth and final mission of the NASA Great Observatories program. 2008 — Max Tegmark and Matias Zaldarriaga propose the Fast Fourier Transform Telescope. 2021 — The James Webb Space Telescope is launched by NASA. See also Catadioptric telescope Eyepiece History of telescopes List of largest optical telescopes historically NASA Reflecting telescope Refracting telescope Timeline of telescopes, observatories, and observing technology References External links Telescope Telescopes
Timeline of telescope technology
[ "Astronomy" ]
1,797
[ "Telescopes", "Astronomical instruments" ]
7,325,543
https://en.wikipedia.org/wiki/Stochastic%20optimization
Stochastic optimization (SO) refers to optimization methods that generate and use random variables. For stochastic optimization problems, the objective functions or constraints are random. Stochastic optimization also includes methods with random iterates. Some hybrid methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems. Methods for stochastic functions Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system, and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include: stochastic approximation (SA), by Robbins and Monro (1951) stochastic gradient descent finite-difference SA by Kiefer and Wolfowitz (1952) simultaneous perturbation SA by Spall (1992) scenario optimization Randomized search methods On the other hand, even when the data set consists of precise measurements, some methods introduce randomness into the search process to accelerate progress. Such randomness can also make the method less sensitive to modeling errors. Another advantage is that randomness in the search process can be used for obtaining interval estimates of the minimum of a function via extreme value statistics. Further, the injected randomness may enable the method to escape a local optimum and eventually approach a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost certain good performance uniformly across many data sets, for many sorts of problems. Stochastic optimization methods of this kind include (simulated annealing is sketched in code below): simulated annealing by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi (1983) quantum annealing Probability Collectives by D.H. Wolpert, S.R. Bieniawski and D.G. Rajnarayan (2011) reactive search optimization (RSO) by Roberto Battiti, G. Tecchiolli (1994), recently reviewed in the reference book cross-entropy method by Rubinstein and Kroese (2004) random search by Anatoly Zhigljavsky (1991) Informational search stochastic tunneling parallel tempering a.k.a. replica exchange stochastic hill climbing swarm algorithms evolutionary algorithms genetic algorithms by Holland (1975) evolution strategies cascade object optimization & modification algorithm (2016) In contrast, some authors have argued that randomization can only improve a deterministic algorithm if the deterministic algorithm was poorly designed in the first place. Fred W. Glover argues that reliance on random elements may prevent the development of more intelligent and better deterministic components. The way in which results of stochastic optimization algorithms are usually presented (e.g., presenting only the average, or even the best, out of N runs without any mention of the spread) may also result in a positive bias towards randomness. See also Global optimization Machine learning Scenario optimization Gaussian process State Space Model Model predictive control Nonlinear programming Entropic value at risk References Further reading Michalewicz, Z. and Fogel, D. B.
(2000), How to Solve It: Modern Heuristics, Springer-Verlag, New York. External links COSP Monte Carlo methods
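A minimal sketch of simulated annealing, one of the randomized search methods listed above; the Metropolis acceptance rule, geometric cooling schedule, and test function here are generic textbook choices rather than any specific published variant:

```python
import math
import random

def simulated_annealing(f, x0, steps=10_000, t0=1.0, cooling=0.999, scale=0.5):
    """Minimise a real function f: accept every downhill move, accept an
    uphill move with probability exp(-delta / T), and cool T geometrically."""
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + random.gauss(0.0, scale)       # random local proposal
        fc = f(cand)
        if fc <= fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling                              # cooling schedule
    return best_x, best_f

# Multimodal test function; the global minimum is near x = -0.5, and the
# injected randomness lets the search escape the other local minima.
print(simulated_annealing(lambda x: x * x + 10 * math.sin(3 * x), x0=5.0))
```

As the temperature shrinks, the uphill-acceptance probability collapses and the method hardens into plain stochastic hill climbing.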
Stochastic optimization
[ "Physics" ]
737
[ "Monte Carlo methods", "Computational physics" ]
7,326,570
https://en.wikipedia.org/wiki/Vorinostat
Vorinostat (rINN), also known as suberoylanilide hydroxamic acid (suberoyl + anilide + hydroxamic acid, abbreviated SAHA), is a member of a larger class of compounds that inhibit histone deacetylases (HDAC). Its IUPAC name is N-hydroxy-N′-phenyloctanediamide (chemical formula C14H20N2O3; CAS number 149647-78-9; ATC code L01XH01; SMILES O=C(Nc1ccccc1)CCCCCCC(=O)NO; database identifiers include PubChem CID 5311, DrugBank DB02546, ChemSpider 5120, UNII 58IFB293JI, KEGG D06320, ChEBI 45716, ChEMBL 98, and InChIKey WAEXFXRVDQXREF-UHFFFAOYSA-N). It is a prescription-only drug (US pregnancy category D), administered orally as capsules, with a bioavailability of 1.8–11% and protein binding of about 71%. It is metabolized by hepatic glucuronidation and β-oxidation (the CYP system is not involved) to the inactive metabolites vorinostat O-glucuronide and 4-anilino-4-oxobutanoic acid; the elimination half-life is about 2 hours for vorinostat and its O-glucuronide and 11 hours for 4-anilino-4-oxobutanoic acid, and renal excretion is negligible. Histone deacetylase inhibitors (HDI) have a broad spectrum of epigenetic activities. Vorinostat is marketed under the name Zolinza by Merck for the treatment of cutaneous manifestations in patients with cutaneous T cell lymphoma (CTCL) when the disease persists, gets worse, or comes back during or after two systemic therapies. The compound was developed by Columbia University chemist Ronald Breslow and Memorial Sloan-Kettering researcher Paul Marks. Medical uses Vorinostat was the first histone deacetylase inhibitor approved by the U.S. Food and Drug Administration (FDA) for the treatment of CTCL, on October 6, 2006. Development In 1966, Charlotte Friend published her observation that a suspension of murine erythroleukemia cells underwent cytodifferentiation to normal erythrocytes when treated with dimethyl sulfoxide (DMSO, a common drug solvent and cryoprotectant frequently used for cell culture freezing) at 280 mM. Memorial Sloan-Kettering researcher Paul Marks approached Columbia University chemist Ronald Breslow about these findings, and together they decided to develop more potent analogs of DMSO in order to make use of this property for cancer treatment. Their optimization process led to the discovery of suberoylanilide hydroxamic acid and its HDAC-inhibiting property. Mechanism of action Vorinostat has been shown to bind to the active site of histone deacetylases and to act as a chelator for the zinc ions found in the active site of histone deacetylases. Vorinostat's inhibition of histone deacetylases results in the accumulation of acetylated histones and acetylated proteins, including transcription factors crucial for the expression of genes needed to induce cell differentiation. It acts on class I, II and IV histone deacetylases. Clinical trials Vorinostat has also been used to treat Sézary syndrome, another type of lymphoma closely related to CTCL.
A recent study suggested that vorinostat also possesses some activity against recurrent glioblastoma multiforme, resulting in a median overall survival of 5.7 months (compared to 4–4.4 months in earlier studies). Further brain tumor trials are planned in which vorinostat will be combined with other drugs. Including vorinostat in treatment of advanced non-small-cell lung carcinoma (NSCLC) showed improved response rates and increased median progression-free survival and overall survival. It has given encouraging results in a phase II trial for myelodysplastic syndromes in combination with idarubicin and cytarabine. It failed to demonstrate efficacy in treating acute myeloid leukemia in an earlier phase II study. Preclinical investigations Vorinostat is being investigated as a potential HIV latency-reversing agent (LRA) as part of an investigational therapeutic strategy known as "shock and kill". Vorinostat was shown to reactivate HIV in latently HIV-infected T cells, both in vitro and in vivo. Vorinostat also has shown some activity against the pathophysiological changes in α1-antitrypsin deficiency and cystic fibrosis. Recent evidence also suggests vorinostat can be a therapeutic tool for Niemann-Pick type C1 (NPC1), a rare lysosomal lipid storage disease. Preclinical experiments by University of Alabama at Birmingham researchers suggest the cancer drugs vorinostat, belinostat and panobinostat might be repurposed to treat infections caused by human papillomavirus, or HPV. See also Trichostatin A References External links Vorinostat bound to proteins in the PDB Anilides Antineoplastic drugs Histone deacetylase inhibitors Drugs developed by Merck & Co. Hydroxamic acids Orphan drugs
Vorinostat
[ "Chemistry" ]
1,499
[ "Organic compounds", "Functional groups", "Hydroxamic acids" ]
7,326,801
https://en.wikipedia.org/wiki/Equivalence%20group
An equivalence group is a set of unspecified cells that have the same developmental potential, or ability to adopt various fates. Our current understanding suggests that equivalence groups are limited to cells of the same ancestry, also known as sibling cells. Often, cells of an equivalence group adopt different fates from one another. Equivalence groups assume their various potential fates in two general, non-mutually exclusive ways. One mechanism, induction, occurs when a signal originating from outside of the equivalence group specifies a subset of the naïve cells. Another mode, known as lateral inhibition, arises when a signal within an equivalence group causes one cell to adopt a dominant fate while others in the group are inhibited from doing so. In many examples of equivalence groups, both induction and lateral inhibition are used to define patterns of distinct cell types. Cells of an equivalence group that do not receive a signal adopt a default fate. Alternatively, cells that receive a signal take on different fates. At a certain point, the fates of cells within an equivalence group become irreversibly determined, and they lose their multipotent potential. The following provides examples of equivalence groups studied in nematodes and ascidians. Vulva precursor cell equivalence group Introduction A classic example of an equivalence group is the vulva precursor cells (VPCs) of nematodes. In Caenorhabditis elegans, self-fertilized eggs exit the body through the vulva. This organ develops from a subset of cells of an equivalence group consisting of six VPCs, P3.p–P8.p, which lie ventrally along the anterior-posterior axis. In this example a single overlying somatic cell, the anchor cell, induces nearby VPCs to take on vulval fates: 1° (P6.p) and 2° (P5.p and P7.p). VPCs that are not induced form the 3° lineage (P3.p, P4.p and P8.p), which makes epidermal cells that fuse with a large syncytial epidermis. The six VPCs form an equivalence group because all six cells are competent to take on any of the available fates (1°, 2°, and 3°), dependent on their proximity to the anchor cell. Ablation experiments indicate that all VPCs are able to adopt vulval fates. For example, if the P6.p cell that normally becomes 1° is ablated, then the VPC closest to the anchor cell, either P5.p or P7.p, assumes the 1° fate. Furthermore, if all VPCs are destroyed except the most anterior P3.p cell, then the anchor cell designates this cell the 1° fate. However, if the anchor cell is killed, then in the absence of an inductive signal all of the VPCs assume the default 3° lineage. Molecular mechanism The anchor cell directly induces the vulval fates by secreting the epidermal growth factor (EGF)-like ligand LIN-3. The P6.p cell receives the LIN-3 signal via the receptor tyrosine kinase LET-23 (P5.p and P7.p also receive LIN-3, but to a lesser extent). Activation of LET-23 in P6.p results in the activation of LIN-12 (Notch) in P5.p and P7.p. Experimental evidence shows that LIN-12 is necessary and sufficient for the formation of the 2° fate. Through lateral inhibition, LIN-12 prevents the P5.p and P7.p cells from adopting the 1° lineage. Thus, in this example, both inductive EGF signaling and lateral Notch activation pattern the VPC equivalence group. Ascidian pigment precursor equivalence group Introduction The larvae of ascidians (sea squirts) contain a pair of sensory pigment cells known as the otolith and the ocellus. The otolith is used to sense gravity, whereas the ocellus responds to light.
During embryogenesis the otolith and ocellus develop from two bilaterally equivalent precursors. Either the left or the right pigment precursor cell has an equal probability of developing into the otolith or the ocellus. The decision to adopt either fate is determined after neural tube closure, during the early tailbud stage, via a poorly defined mechanism of induction. During normal development, after neural tube closure, the pigment precursors align dorsally along the anterior-posterior axis of the neural tube. Whichever cell aligns anteriorly will become the otolith, while the posterior cell will form the ocellus. In the absence of cell-cell interactions both cells develop into ocelli, which is the default fate. Experimental methods for studying equivalence in Halocynthia roretzi To elucidate whether the fates of the otolith and ocellus are determined in the early embryo or after the precursors align during neural tube closure, ablation and drug treatment techniques were used in the ascidian species Halocynthia roretzi. Cells that are labeled with fluorescein isothiocyanate-dextran (FDX) can be selectively photoablated by fluorescent excitation. When one FDX-labeled pigment precursor cell is photoablated during the mid-neurula stage (15 hrs), the other will almost always develop into an ocellus. However, if the ablations are performed during the late tailbud stage (22.5 hrs), then the remaining cell has an equal likelihood of becoming an otolith or an ocellus. Inhibiting cell division and morphogenesis with cytochalasin B is another method used to determine when the pigment precursor equivalence group is specified. Cytochalasin treatment of early tailbud stage embryos (17 hrs), while the two bilateral cells are still separated, results in both cells becoming ocelli. When the drug was used after the two cells had aligned at the dorsal midline, the anterior cell developed into the otolith and the posterior cell became the ocellus, without exception. Both experiments suggest that the fates of the pigment precursor cells are irreversibly determined by approximately the mid-tailbud stage (21 hrs). Other equivalence groups Equivalence groups have also been described in the ganglion mother cells of the grasshopper and the O/P teloblasts of the leech. As in other instances of equivalence groups, the progeny cells are born equivalent and become specified through cell interactions. Equivalence groups are a common theme in the development of many organisms from diverse phyla. References External links Nematode Vulva Development Ascidian Network for In Situ Expression and Embryological Data Developmental biology
Equivalence group
[ "Biology" ]
1,386
[ "Behavior", "Developmental biology", "Reproduction" ]
7,327,130
https://en.wikipedia.org/wiki/Tunguska%20event%20in%20fiction
The Tunguska event—an enormous explosion in a remote region of Siberia on 30 June 1908—has appeared in many works of fiction. History The event had a long-lasting influence on disaster stories featuring comets. Cause While the event is generally held to have been caused by a meteor air burst, several alternative explanations have been proposed both in scientific circles and in fiction. A popular one in fiction is that it was caused by an alien spaceship, possibly first put forth in Ed Earl Repp's 1930 short story "The Second Missile". It gained prominence following the publication of Russian science fiction writer Alexander Kazantsev's 1946 short story "Explosion"; inspired by the similarities between the event and the nuclear bombing of Hiroshima, Kazantsev's story posits that a nuclear explosion in the engine of a spacecraft was responsible. An alien spacecraft is also the explanation in Polish science fiction writer Stanisław Lem's 1951 novel The Astronauts and its 1960 film adaptation The Silent Star, while a human-made one is to blame in Ian Watson's 1983 novel Chekhov's Journey. Additional variations on the spaceship theme appear in Rudy Rucker and Bruce Sterling's 1985 short story "Storming the Cosmos" and Algis Budrys's 1993 novel Hard Landing, among others. Another proposed explanation is that the cause was the impact of a micro black hole, as in Larry Niven's 1975 short story "The Borderland of Sol" and Bill DeSmedt's 2004 novel Singularity. Effect In Donald R. Bensen's 1978 novel And Having Writ..., the course of history is altered by the arrival of aliens to Earth in 1908, which also causes the Tunguska event. The 1996 The X-Files episode "Tunguska" revolves around the impact possibly having introduced alien microbial life to Earth. Ice from the impact turns out to have peculiar properties in Vladimir Sorokin's 2002 novel Ice and Jacek Dukaj's 2007 novel likewise titled Ice. See also Impact events in fiction References Fiction about impact events Russia in fiction fiction
Tunguska event in fiction
[ "Physics" ]
426
[ "Unsolved problems in physics", "Tunguska event" ]
7,327,278
https://en.wikipedia.org/wiki/Transcription%20bubble
A transcription bubble is a molecular structure formed during DNA transcription when a limited portion of the DNA double helix is unwound. The size of a transcription bubble ranges from 12 to 14 base pairs. A transcription bubble is formed when the RNA polymerase enzyme binds to a promoter and causes the two DNA strands to detach. It presents a region of unpaired DNA, where a short stretch of nucleotides is exposed on each strand of the double helix. RNA polymerase The bacterial RNA polymerase, the principal enzyme involved in the formation of a transcription bubble, uses a DNA template to guide RNA synthesis. It is present in two main forms: as a core enzyme, when it is inactive, and as a holoenzyme, when it is activated. A sigma (σ) factor is a subunit that assists the process of transcription; it stabilizes the transcription bubble by binding to unpaired bases. These two components, RNA polymerase and sigma factor, when paired together, form the RNA polymerase holoenzyme, which is then in its active form and ready to bind to a promoter and initiate DNA transcription. Once it binds to the DNA, RNA polymerase converts from a closed to an open complex, forming the transcription bubble. RNA polymerase synthesizes the new RNA in the 5' to 3' direction by adding complementary bases to the 3' end of the new strand. The holoenzyme dissociates after transcription initiation: the σ factor disengages from the complex and the RNA polymerase, in its core form, slides along the DNA molecule. The transcription cycle of bacterial RNA polymerase The RNA polymerase holoenzyme binds to a promoter on an exposed DNA strand and begins to synthesize the new strand of RNA. The double-helix DNA is unwound, and a short nucleotide sequence is accessible on each strand. The transcription bubble is a region of unpaired bases on one of the exposed DNA strands. The transcription starting point is determined by the place where the holoenzyme binds to the promoter. The DNA is unwound and single-stranded at the start site. The promoter interaction is disrupted as the RNA polymerase moves down the template DNA strand, and the sigma factor is released. The σ factor is required for initiation but not for the remaining steps of DNA transcription. Once the σ factor dissociates from the RNA polymerase, transcription continues. About 10 synthesized nucleotides of a new RNA strand are required for the process to proceed to the elongation step. The process of transcribing during elongation is very fast. Elongation takes place until the RNA polymerase encounters a termination signal (terminator), which arrests the process and causes the release of both the DNA template and the new RNA molecule. The DNA usually encodes the termination signal. Eukaryotic transcription The majority of eukaryotic genes are transcribed by RNA polymerase II, proceeding in the 5' to 3' direction. In eukaryotes, specific subunits within the RNA polymerase II complex allow it to carry out multiple functions. General transcription factors help bind RNA polymerase II to DNA. Promoters are sites where RNA polymerase II binds to start transcription and, in eukaryotes, the transcription starting point is positioned at the +1 nucleotide. Like all RNA polymerases, it travels along the template DNA in the 3' to 5' direction and synthesizes a new RNA strand in the 5' to 3' direction, by adding new bases to the 3' end of the new RNA. A transcription bubble occurs as a result of the double-stranded DNA unwinding.
After about 25 base pairs of the DNA double strand are unwound, RNA synthesis takes place within the transcription bubble region. Supercoiling is also part of this process, since DNA regions in front of the RNA polymerase II are unwinding while DNA regions behind it are rewinding, forming a double helix again. The RNA polymerase carries out the majority of the steps during the transcription cycle, especially in maintaining the open transcription bubble for complementary base pairing. Some steps of the transcription cycle require additional proteins, such as the Rpb4/7 complex and the elongation factor transcription factor IIS (TFIIS), which attaches to the RNA polymerase. See also Transcription factors Transcription (genetics)#Pre-initiation References Genetics Gene expression
Transcription bubble
[ "Chemistry", "Biology" ]
906
[ "Genetics", "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
7,327,506
https://en.wikipedia.org/wiki/Richard%20Lathe
Richard Lathe is a molecular biologist who has held professorships at the University of Strasbourg, the University of Edinburgh, and the State University of Pushchino. He was assistant director at the biotech company Transgene in Strasbourg, a principal scientist at ABRO, Edinburgh, and Co-Director of the Biotechnology College ESBS based in Strasbourg. Lathe is also the founder of Pieta Research, a biotechnology consultancy in Edinburgh; his current academic appointments are with the University of Edinburgh and the State University of Pushchino. He resigned from his post at Pushchino in 2022. Early work Lathe studied Molecular Biology at Edinburgh under Bill Hayes and Ken Murray, followed by doctoral studies at Brussels under Rene Thomas. He then moved to Cambridge (under Mike Ashburner) and to Heidelberg (Ekke Bautz) before joining the newly founded biotech company Transgene SA under Jean-Pierre Lecocq, Pierre Chambon, and Philippe Kourilsky. Vaccines Lathe is the primary inventor of the vaccine that eradicated rabies in France; other members of the team included Marie-Paule Kieny. Extensions of this work included the development of vaccines against virus-induced tumors. The European rabies vaccination campaigns proved tremendously successful and constituted a paradigm for wildlife vaccination programs. France was declared free of rabies in 2002, but success in North America has been less dramatic owing to the prevalence of several species capable of transmitting rabies. Gene technology His most highly cited paper describes a tool for isolating coding sequences, published in 1985 in the Journal of Molecular Biology. This paper places him among the authors who have achieved a thousand citations for a single-author work. Hippocampal function A review in the Journal of Endocrinology entitled 'Hormones and the Hippocampus' argues that external and internal biochemical sensing have been crucial for the evolution of the mammalian brain. Although the hippocampus is likely to play a role in internal sensing, other brain regions have been implicated, and a far wider role of the hippocampus in consciousness, episodic memory, and emotional feelings has been discussed. Autism, Brain, and Environment In Autism, Brain, and Environment (2006), Lathe proposes that autism is largely a disorder of the limbic brain, balancing what he calls evidence that environmental factors may trigger autism with a recognition of genetic vulnerability. Lunar theory of life's origins on Earth Lathe's research has led him to develop a theory that without the Moon, there would be no life on Earth. When life began, Earth orbited much more closely to the Moon than it does now, causing massive tides every few hours, which in turn caused rapid cycling of salinity levels on coastlines and may have driven the evolution of early DNA. Lathe uses the polymerase chain reaction (PCR), which amplifies DNA in the laboratory, as an example of the mechanisms that facilitate DNA replication. In the laboratory, PCR synthesis is achieved by cycling DNA between two extreme temperatures in the presence of certain enzymes. At lower temperatures, about 50 °C, single strands of DNA act as templates for building complementary strands. At higher temperatures, about 100 °C, the double strands break apart, doubling the number of molecules. The synthesis of DNA is started again by lowering the temperature, and so forth. Through this process, one DNA molecule can be converted into a trillion identical copies in just 40 cycles. 
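The trillion-copy figure is simple doubling arithmetic; the following back-of-the-envelope check is an editorial illustration (the 40-cycle count comes from the text above, everything else is just exponentiation):

```latex
% Each PCR cycle doubles the number of DNA molecules, so N cycles
% turn one template into 2^N copies; for the 40 cycles in the text:
\[
    2^{40} = 1\,099\,511\,627\,776 \;\approx\; 1.1 \times 10^{12}
\]
% i.e. roughly a trillion identical copies, as stated.
```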
Saline cycles triggered by rapid tidal activity would have amplified molecules such as DNA in a process similar to PCR, through alternating high and low salinity – a process dubbed the tidal chain reaction (TCR). "The tidal force is absolutely important, because it provides the energy for association and dissociation" of polymers, says Lathe. A contrasting viewpoint is that deep-sea hydrothermal vents, among other scenarios, may have led to the emergence of life (abiogenesis). In addition, the fast early tides invoked may not have been quite so rapid, and it is possible that only 2-3% of the Earth's crust may have been exposed above the sea until late in terrestrial evolution [4]; although mechanistically sound, the tidal chain reaction theory remains speculative. Brain disease The prion theory has been widely questioned, and the prion (PrP) protein is now recognized to be an RNA-binding protein. Darlix and Lathe propose that retroelements constitute the replicative component of the transmissible spongiform encephalopathies. Lathe has also argued that infection may play a role in Alzheimer disease, and has worked with Rudy Tanzi and Rob Moir at Harvard to develop the Antimicrobial Protection Theory of Alzheimer Disease, building on earlier recommendations of an expert group and increasing evidence of a causal link between infection and Alzheimer's disease. References External links Pieta-Research.org - 'Pieta Research A biotechnology consultancy based in Edinburgh UK: Specialist areas Molecular biology, neuroscience, physiology' NewScientist.com - 'No Moon, no life on Earth, suggests theory', Anil Ananthaswamy, New Scientist (18 March 2004) NewScientist.com - 'Toxic metal clue to autism', Richard Lathe and Michael Le Page, New Scientist (18 June 2003) - 'Autism and pollution: the vital link', Juliet Rix, The Times (2 May 2006) - 'Making sense of autism', Francesca Happé, Nature (9 Aug 2006) - 'Fast tidal cycling and the origin of life', Richard Lathe, Icarus (2003) - 'Without the moon, would there be life on earth', Bruce Dorminey, Scientific American (21 April 2009) - 'Autism, Brain, and Environment', Richard Lathe, Jessica Kingsley Publishers (2006) Academics of the University of Edinburgh Autism researchers Living people Molecular biologists Year of birth missing (living people)
Richard Lathe
[ "Chemistry" ]
1,222
[ "Biochemists", "Molecular biology", "Molecular biologists" ]
7,327,786
https://en.wikipedia.org/wiki/Polarity%20in%20embryogenesis
In developmental biology, an embryo is divided into two hemispheres within a blastula: the animal pole and the vegetal pole. The animal pole consists of small cells that divide rapidly, in contrast with the vegetal pole below it. In some cases, the animal pole is thought to differentiate into the later embryo itself, forming the three primary germ layers and participating in gastrulation. The vegetal pole contains large yolky cells that divide very slowly. In some cases, the vegetal pole is thought to differentiate into the extraembryonic membranes that protect and nourish the developing embryo, such as the placenta in mammals and the chorion in birds. In amphibians, the development of the animal-vegetal axis occurs prior to fertilization. Sperm entry can occur anywhere in the animal hemisphere. The point of sperm entry defines the dorso-ventral axis: cells opposite the region of sperm entry will eventually form the dorsal portion of the body. In the frog Xenopus laevis, the animal pole is heavily pigmented while the vegetal pole remains unpigmented. The pigment pattern provides the oocyte with the features of a radially symmetrical body with a distinct polarity. The animal hemisphere is dark brown, and the vegetal hemisphere is only weakly pigmented. The axis of symmetry passes through the animal pole on one side and the vegetal pole on the other. The two hemispheres are separated by an unpigmented equatorial belt. Polarity has a major influence on the emergence of the embryonic structures; in fact, the axis polarity serves as one coordinate of the geometrical system in which early embryogenesis is organized. Naming The animal pole draws its name from its liveliness relative to the slowly developing vegetal pole, while the vegetal pole is named for its relative inactivity. See also Gastrulation Embryogenesis References Developmental biology
Polarity in embryogenesis
[ "Biology" ]
411
[ "Behavior", "Developmental biology", "Reproduction" ]
13,505,229
https://en.wikipedia.org/wiki/LEMON%20%28C%2B%2B%20library%29
LEMON is an open-source graph library written in the C++ language, providing implementations of common data structures and algorithms with a focus on combinatorial optimization tasks connected mainly with graphs and networks. The library is part of the COIN-OR project. LEMON is an abbreviation of Library for Efficient Modeling and Optimization in Networks. Design LEMON employs genericity in C++ by using templates. The tools of the library are designed to be versatile, convenient and highly efficient. They can be combined easily to solve complex real-life optimization problems. For example, LEMON's graphs can differ in many ways (depending on the representation and other specialities), but all have to satisfy one or more graph concepts, which are standardized interfaces to work with the rest of the library. Features LEMON provides Graph structures and related tools Graph search algorithms Shortest path algorithms Maximum flow algorithms Minimum cost flow algorithms Minimum cut algorithms Connectivity and other graph properties Maximum cardinality and minimum cost perfect matching algorithms Minimum cost spanning tree algorithms Approximation algorithms Auxiliary algorithms LEMON also contains some metaheuristic optimization tools and provides a general high-level interface for several LP and MIP solvers, such as GLPK, ILOG CPLEX, CLP, CBC, SoPlex. LEMON has its own graph-storing format, the so-called Lemon Graph Format, and includes general EPS drawing methods and special graph exporting tools. LEMON also includes several miscellaneous tools. For example, it provides simple tools for measuring the performance of algorithms, which can be used to compare different implementations of the same problem. External links C++ libraries Numerical software Software using the Boost license
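A minimal sketch of typical LEMON usage (single-source shortest paths with Dijkstra on a ListDigraph) is shown below; the three-node graph and its arc lengths are made-up toy data, while the types and calls used (ListDigraph, addNode, addArc, ArcMap, Dijkstra, run, dist) follow the library's documented interface:

```cpp
#include <iostream>
#include <lemon/list_graph.h>
#include <lemon/dijkstra.h>

int main() {
    lemon::ListDigraph g;                        // generic directed graph
    lemon::ListDigraph::Node a = g.addNode();
    lemon::ListDigraph::Node b = g.addNode();
    lemon::ListDigraph::Node c = g.addNode();

    lemon::ListDigraph::ArcMap<int> length(g);   // arc lengths as a "map"
    length[g.addArc(a, b)] = 3;
    length[g.addArc(b, c)] = 4;
    length[g.addArc(a, c)] = 10;

    // Dijkstra over the digraph with the given length map.
    lemon::Dijkstra<lemon::ListDigraph> dijkstra(g, length);
    dijkstra.run(a);
    std::cout << "dist(a,c) = " << dijkstra.dist(c) << std::endl;  // prints 7
    return 0;
}
```

The ArcMap pattern shown here is LEMON's standard way of attaching data such as lengths, capacities or costs to graph components, and the same kind of map is passed to its other algorithms as well.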
LEMON (C++ library)
[ "Mathematics" ]
327
[ "Numerical software", "Mathematical software" ]
13,505,242
https://en.wikipedia.org/wiki/Time-of-flight%20mass%20spectrometry
Time-of-flight mass spectrometry (TOFMS) is a method of mass spectrometry in which an ion's mass-to-charge ratio is determined by a time of flight measurement. Ions are accelerated by an electric field of known strength. This acceleration results in an ion having the same kinetic energy as any other ion that has the same charge. The velocity of the ion depends on the mass-to-charge ratio (heavier ions of the same charge reach lower speeds, while ions carrying a higher charge reach higher speeds). The time that it subsequently takes for the ion to reach a detector at a known distance is measured. This time will depend on the velocity of the ion, and therefore is a measure of its mass-to-charge ratio. From this ratio and known experimental parameters, one can identify the ion. Theory The potential energy of a charged particle in an electric field is related to the charge of the particle and to the strength of the electric field: Ep = qU (1), where Ep is potential energy, q is the charge of the particle, and U is the electric potential difference (also known as voltage). When the charged particle is accelerated into the time-of-flight tube (TOF tube or flight tube) by the voltage U, its potential energy is converted to kinetic energy. The kinetic energy of any mass is: Ek = (1/2)mv² (2). In effect, the potential energy is converted to kinetic energy, meaning that equations (1) and (2) are equal: qU = (1/2)mv² (3). The velocity of the charged particle after acceleration will not change, since it moves in a field-free time-of-flight tube. The velocity of the particle can be determined in a time-of-flight tube since the length of the path (d) of the flight of the ion is known and the time of the flight of the ion (t) can be measured using a transient digitizer or time-to-digital converter. Thus, v = d/t (4), and we substitute the value of v in (4) into (3): qU = (1/2)m(d/t)² (5). Rearranging (5) so that the flight time is expressed by everything else: t² = (d²/(2U))·(m/q). Taking the square root yields the time: t = (d/√(2U))·√(m/q) (6). These factors for the time of flight have been grouped purposely. The factor d/√(2U) contains constants that in principle do not change when a set of ions are analyzed in a single pulse of acceleration. (6) can thus be given as: t = k·√(m/q) (7), where k is a proportionality constant representing factors related to the instrument settings and characteristics. (7) reveals more clearly that the time of flight of the ion varies with the square root of its mass-to-charge ratio (m/q). Consider a real-world example of a MALDI time-of-flight mass spectrometer instrument which is used to produce a mass spectrum of the tryptic peptides of a protein. Suppose the mass of one tryptic peptide is 1000 daltons (Da). The kind of ionization of peptides produced by MALDI is typically +1 ions, so q = e in both cases. Suppose the instrument is set to accelerate the ions in a U = 15,000 volts (15 kilovolt or 15 kV) potential. And suppose the length of the flight tube is 1.5 meters (typical). All the factors necessary to calculate the time of flight of the ion are now known for (6), which is evaluated first for the ion of mass 1000 Da: t = 1.5 m × √((1000 × 1.66054×10⁻²⁷ kg)/(2 × 1.60218×10⁻¹⁹ C × 15,000 V)). Note that the mass had to be converted from daltons (Da) to kilograms (kg) to make it possible to evaluate the equation in the proper units. The final value should be in seconds: t ≈ 2.79×10⁻⁵ s, which is about 28 microseconds. If there were a singly charged tryptic peptide ion with a mass of 4000 Da (four times the 1000 Da mass), it would take twice the time, or about 56 microseconds, to traverse the flight tube, since time is proportional to the square root of the mass-to-charge ratio. 
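As a companion to the worked example above, the short C++ sketch below evaluates t = d·√(m/(2qU)) for the two peptide masses; the flight-tube length, voltage and masses are the values given in the text, and the physical constants are standard CODATA values:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double dalton = 1.66053906660e-27;  // kg per Da (CODATA)
    const double q      = 1.602176634e-19;    // C, charge of a singly charged (+1) ion
    const double U      = 15000.0;            // V, accelerating potential (from the text)
    const double d      = 1.5;                // m, flight-tube length (from the text)

    for (double massDa : { 1000.0, 4000.0 }) {
        double m = massDa * dalton;                   // convert Da -> kg
        double t = d * std::sqrt(m / (2.0 * q * U));  // t = d * sqrt(m / (2qU))
        std::printf("%6.0f Da -> %5.1f microseconds\n", massDa, t * 1.0e6);
    }
    return 0;  // prints about 27.9 us and 55.8 us, matching the ~28 and ~56 above
}
```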
Delayed extraction Mass resolution can be improved in an axial MALDI-TOF mass spectrometer, where ion production takes place in vacuum, by allowing the initial burst of ions and neutrals produced by the laser pulse to equilibrate and letting the ions travel some distance perpendicular to the sample plate before they are accelerated into the flight tube. Ion equilibration in the plasma plume produced during desorption/ionization takes place in approximately 100 ns or less; after that, most ions, irrespective of their mass, start moving from the surface with some average velocity. To compensate for the spread of this average velocity and to improve mass resolution, it was proposed to delay the extraction of ions from the ion source toward the flight tube by a few hundred nanoseconds to a few microseconds with respect to the start of the short (typically a few nanoseconds) laser pulse. This technique is referred to as "time-lag focusing" for ionization of atoms or molecules by resonance-enhanced multiphoton ionization or by electron impact ionization in a rarefied gas, and "delayed extraction" for ions produced generally by laser desorption/ionization of molecules adsorbed on flat surfaces or microcrystals placed on a conductive flat surface. Delayed extraction generally refers to the operation mode of vacuum ion sources in which the onset of the electric field responsible for acceleration (extraction) of the ions into the flight tube is delayed by some short time (200–500 ns) with respect to the ionization (or desorption/ionization) event. This differs from the case of a constant extraction field, where the ions are accelerated instantaneously upon being formed. Delayed extraction is used with MALDI or laser desorption/ionization (LDI) ion sources where the ions to be analyzed are produced in an expanding plume moving from the sample plate at high speed (400–1000 m/s). Since the thickness of the ion packets arriving at the detector is important to mass resolution, on first inspection it can appear counter-intuitive to allow the ion plume to expand further before extraction. Delayed extraction is more of a compensation for the initial momentum of the ions: it provides the same arrival times at the detector for ions with the same mass-to-charge ratios but with different initial velocities. In delayed extraction of ions produced in vacuum, the ions that have lower momentum in the direction of extraction start to be accelerated at higher potential, due to being further from the extraction plate when the extraction field is turned on. Conversely, those ions with greater forward momentum start to be accelerated at lower potential, since they are closer to the extraction plate. At the exit from the acceleration region, the slower ions at the back of the plume will have been accelerated to a greater velocity than the initially faster ions at the front of the plume. So after delayed extraction, a group of ions that leaves the ion source earlier has a lower velocity in the direction of the acceleration than another group of ions that leaves the ion source later but with greater velocity. When the ion source parameters are properly adjusted, the faster group of ions catches up to the slower one at some distance from the ion source, so a detector plate placed at this distance detects the simultaneous arrival of these groups of ions. In this way, the delayed application of the acceleration field acts as a one-dimensional time-of-flight focusing element. 
Reflectron TOF The kinetic energy distribution in the direction of ion flight can be corrected by using a reflectron. The reflectron uses a constant electrostatic field to reflect the ion beam toward the detector. The more energetic ions penetrate deeper into the reflectron and take a slightly longer path to the detector. Less energetic ions of the same mass-to-charge ratio penetrate a shorter distance into the reflectron and, correspondingly, take a shorter path to the detector. The flat surface of the ion detector (typically a microchannel plate, MCP) is placed at the plane where ions of the same m/z but with different energies arrive at the same time, counted with respect to the onset of the extraction pulse in the ion source. The point of simultaneous arrival of ions of the same mass-to-charge ratio but with different energies is often referred to as the time-of-flight focus. An additional advantage of the re-TOF arrangement is that twice the flight path is achieved in a given length of the TOF instrument. Ion gating A Bradbury–Nielsen shutter is a type of ion gate used in TOF mass spectrometers and in ion mobility spectrometers, as well as in Hadamard transform TOF mass spectrometers. The Bradbury–Nielsen shutter is ideal for a fast timed ion selector (TIS), a device used for isolating ions over a narrow mass range in tandem (TOF/TOF) MALDI mass spectrometers. Orthogonal acceleration time-of-flight Continuous ion sources (most commonly electrospray ionization, ESI) are generally interfaced to the TOF mass analyzer by "orthogonal extraction", in which ions introduced into the TOF mass analyzer are accelerated along the axis perpendicular to their initial direction of motion. Orthogonal acceleration combined with collisional ion cooling allows the ion production in the ion source and the mass analysis to be separated. In this technique, very high resolution can be achieved for ions produced in MALDI or ESI sources. Before entering the orthogonal acceleration region or the pulser, the ions produced in continuous (ESI) or pulsed (MALDI) sources are focused (cooled) into a beam of 1–2 mm diameter by collisions with residual gas in RF multipole guides. A system of electrostatic lenses mounted in the high-vacuum region before the pulser makes the beam parallel to minimize its divergence in the direction of acceleration. The combination of ion collisional cooling and orthogonal acceleration TOF has provided a significant increase in the resolution of modern TOF MS, from a few hundred to several tens of thousands, without compromising sensitivity. Hadamard transform time-of-flight mass spectrometry Hadamard transform time-of-flight mass spectrometry (HT-TOFMS) is a mode of mass analysis used to significantly increase the signal-to-noise ratio of a conventional TOFMS. Whereas traditional TOFMS analyzes one packet of ions at a time, waiting for the ions to reach the detector before introducing another ion packet, HT-TOFMS can simultaneously analyze several ion packets traveling in the flight tube. The ion packets are encoded by rapidly modulating the transmission of the ion beam, so that lighter (and thus faster) ions from all initially released packets get ahead of heavier (and thus slower) ions. This process creates an overlap of many time-of-flight distributions convoluted in the form of signals. The Hadamard transform algorithm is then used to carry out the deconvolution process, which helps to produce a faster mass-spectral storage rate than traditional TOFMS and other comparable mass-separation instruments. 
Tandem time-of-flight Tandem time-of-flight (TOF/TOF) is a tandem mass spectrometry method in which two time-of-flight mass spectrometers are used consecutively. To record the full spectrum of precursor (parent) ions, TOF/TOF operates in MS mode. In this mode, the energy of the pulsed laser is chosen slightly above the onset of MALDI for the specific matrix in use, to ensure a compromise between the ion yield for all the parent ions and reduced fragmentation of those ions. When operating in tandem (MS/MS) mode, the laser energy is increased considerably above the MALDI threshold. The first TOF mass spectrometer (basically, a flight tube which ends with the timed ion selector) isolates precursor ions of choice using a velocity filter, typically of a Bradbury–Nielsen type, and the second TOF-MS (which includes the post-accelerator, flight tube, ion mirror, and ion detector) analyzes the fragment ions. Fragment ions in MALDI TOF/TOF result from the decay of precursor ions vibrationally excited above their dissociation level in the MALDI source (post-source decay). Additional ion fragmentation implemented in a high-energy collision cell may be added to the system to increase the dissociation rate of vibrationally excited precursor ions. Some designs include precursor signal quenchers as part of the second TOF-MS to reduce the instantaneous current load on the ion detector. Quadrupole time-of-flight Quadrupole time-of-flight mass spectrometry (QToF-MS) has a configuration similar to that of a tandem mass spectrometer, with a mass-resolving quadrupole and a hexapole collision cell, but instead of a second mass-resolving quadrupole, a time-of-flight mass analyzer is used. Both quadrupoles can operate in RF-only mode to allow all ions to pass through to the mass analyzer with minimal fragmentation. To increase spectral detail, the system takes advantage of collision-induced dissociation. Once the ions reach the flight tube, the ion pulser sends them upwards towards the reflectron and back down into the detector. Since the ion pulser transfers the same kinetic energy to all molecules, the flight time is dictated by the mass of the analyte. QToF is capable of measuring mass to the 4th decimal place and is frequently used for pharmaceutical and toxicological analysis as a screening method for drug analogues. Identification is done by collecting the mass spectrum and comparing it to tandem mass spectrum libraries. Detectors A time-of-flight mass spectrometer (TOFMS) consists of a mass analyzer and a detector. An ion source (either pulsed or continuous) is used for lab-based TOF experiments, but is not needed for TOF analyzers used in space, where the Sun or planetary ionospheres provide the ions. The TOF mass analyzer can be a linear flight tube or a reflectron. The ion detector typically consists of a microchannel plate detector or a fast secondary emission multiplier (SEM) in which the first converter plate (dynode) is flat. The electrical signal from the detector is recorded by means of a time-to-digital converter (TDC) or a fast analog-to-digital converter (ADC). The TDC is mostly used in combination with orthogonal-acceleration (oa)TOF instruments. Time-to-digital converters register the arrival of a single ion at discrete time "bins"; a combination of threshold triggering and a constant fraction discriminator (CFD) discriminates between electronic noise and ion arrival events. 
The CFD converts the nanosecond-long Gaussian-shaped electrical pulses of different amplitudes generated on the MCP's anode into common-shape pulses (e.g., pulses compatible with TTL/ESL logic circuitry) sent to the TDC. Using a CFD provides a time point corresponding to the position of the peak maximum, independent of the variation in peak amplitude caused by variation of the MCP or SEM gain. Fast CFDs of advanced design have dead times equal to or less than two single-hit response times of the ion detector (the single-hit response time for an MCP with 2-5 micron wide channels can be somewhere between 0.2 ns and 0.8 ns, depending on the channel angle), thus preventing repetitive triggering from the same pulse. The double-hit resolution (dead time) of a modern multi-hit TDC can be as low as 3-5 nanoseconds. The TDC is a counting detector – it can be extremely fast (down to a few picoseconds of resolution), but its dynamic range is limited due to its inability to properly count events when more than one ion hits the detector simultaneously (i.e., within the TDC dead time). The outcome of limited dynamic range is that the number of ions (events) recorded in one mass spectrum is smaller than the real number. The problem of limited dynamic range can be alleviated using a multichannel detector design: an array of mini-anodes attached to a common MCP stack and multiple CFD/TDC channels, where each CFD/TDC records the signals from an individual mini-anode. To obtain peaks with statistically acceptable intensities, ion counting is accompanied by summing hundreds of individual mass spectra (so-called histogramming). To reach a very high counting rate (limited only by the duration of an individual TOF spectrum, which can be as long as a few milliseconds in multipath TOF setups), a very high repetition rate of ion extractions into the TOF tube is used. Commercial orthogonal acceleration TOF mass analyzers typically operate at 5–20 kHz repetition rates. In combined mass spectra obtained by summing a large number of individual ion detection events, each peak is a histogram obtained by adding up the counts in each individual bin. Because the recording of an individual ion arrival with a TDC yields only a single time point, the TDC eliminates the fraction of the peak width determined by the limited response time of both the MCP detector and the preamplifier. This translates into better mass resolution. Modern ultra-fast 10 GSample/sec analog-to-digital converters digitize the pulsed ion current from the MCP detector at discrete time intervals (100 picoseconds). A modern 8-bit or 10-bit 10 GHz ADC has a much higher dynamic range than the TDC, which allows its use in MALDI-TOF instruments with their high peak currents. To record fast analog signals from MCP detectors, one must carefully match the impedance of the detector anode with the input circuitry of the ADC (preamplifier) to minimize the "ringing" effect. Mass resolution in mass spectra recorded with an ultra-fast ADC can be improved by using small-pore (2-5 micron) MCP detectors with shorter response times. Applications Matrix-assisted laser desorption ionization (MALDI) is a pulsed ionization technique that is readily compatible with TOF MS. Atom probe tomography also takes advantage of TOF mass spectrometry. Photoelectron photoion coincidence spectroscopy uses soft photoionization for ion internal-energy selection and TOF mass spectrometry for mass analysis. Secondary ion mass spectrometry commonly utilizes TOF mass spectrometers to allow parallel detection of different ions with a high mass resolving power. 
Stefan Rutzinger proposed using TOF mass spectrometry with a cryogenic detector for the spectrometry of heavy biomolecules. History of the field An early time-of-flight mass spectrometer, named the Velocitron, was reported by A. E. Cameron and D. F. Eggers Jr, working at the Y-12 National Security Complex, in 1948. The idea had been proposed two years earlier, in 1946, by W. E. Stephens of the University of Pennsylvania in a Friday afternoon session of a meeting of the American Physical Society at the Massachusetts Institute of Technology. References Bibliography External links IFR/JIC TOF MS Tutorial Jordan TOF Products TOF Mass Spectrometer Tutorial University of Bristol TOF-MS Tutorial Kore Technology – Introduction to Time-of-Flight Mass Spectrometry Mass spectrometry
Time-of-flight mass spectrometry
[ "Physics", "Chemistry" ]
3,949
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
13,505,688
https://en.wikipedia.org/wiki/Analytic%20semigroup
In mathematics, an analytic semigroup is a particular kind of strongly continuous semigroup. Analytic semigroups are used in the solution of partial differential equations; compared to strongly continuous semigroups, analytic semigroups provide better regularity of solutions to initial value problems, better results concerning perturbations of the infinitesimal generator, and a relationship between the type of the semigroup and the spectrum of the infinitesimal generator. Definition Let Γ(t) = exp(At) be a strongly continuous one-parameter semigroup on a Banach space (X, ||·||) with infinitesimal generator A. Γ is said to be an analytic semigroup if for some 0 < θ < π/2, the continuous linear operator exp(At) : X → X can be extended to t ∈ Δθ = {0} ∪ {t ∈ C : |arg(t)| < θ}, and the usual semigroup conditions hold for s, t ∈ Δθ: exp(A0) = id, exp(A(t + s)) = exp(At) exp(As), and, for each x ∈ X, exp(At)x is continuous in t; and, for all t ∈ Δθ \ {0}, exp(At) is analytic in t in the sense of the uniform operator topology. Characterization The infinitesimal generators of analytic semigroups have the following characterization: A closed, densely defined linear operator A on a Banach space X is the generator of an analytic semigroup if and only if there exists an ω ∈ R such that the half-plane Re(λ) > ω is contained in the resolvent set of A and, moreover, there is a constant C such that for the resolvent of the operator A we have ||(λ id − A)⁻¹|| ≤ C/|λ − ω| for Re(λ) > ω. Such operators are called sectorial. If this is the case, then the resolvent set actually contains a sector of the form {λ ∈ C : |arg(λ − ω)| < π/2 + δ} for some δ > 0, and an analogous resolvent estimate holds in this sector. Moreover, the semigroup is represented by exp(At) = (1/(2πi)) ∫γ e^(λt) (λ id − A)⁻¹ dλ, where γ is any curve from e−iθ∞ to e+iθ∞ such that γ lies entirely in the sector {λ ∈ C : |arg(λ)| ≤ θ}, with π/2 < θ < π/2 + δ. References Functional analysis Partial differential equations Semigroup theory
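For orientation, a standard textbook example of a sectorial operator (stated here editorially, not taken from this entry) is the Dirichlet Laplacian, whose heat semigroup is the prototype analytic semigroup:

```latex
% Prototype example: A = \Delta (Dirichlet Laplacian) on X = L^2(\Omega).
% The operator is self-adjoint with spectrum contained in (-\infty, 0],
% so for Re(\lambda) > 0 the distance from \lambda to the spectrum is
% |\lambda|, giving the sectorial resolvent estimate (with C = 1)
\[
  \|(\lambda\,\mathrm{id} - \Delta)^{-1}\| \;\le\; \frac{C}{|\lambda|},
  \qquad \operatorname{Re}\lambda > 0,
\]
% and \Gamma(t) = \exp(\Delta t), the heat semigroup, extends
% analytically to a sector \Delta_\theta around the positive real axis.
```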
Analytic semigroup
[ "Mathematics" ]
484
[ "Functions and mappings", "Mathematical structures", "Functional analysis", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Algebraic structures", "Semigroup theory" ]
13,506,220
https://en.wikipedia.org/wiki/Thermoplastic%20vulcanizates
Thermoplastic vulcanizates (TPVs) are a type of thermoplastic elastomer (TPE) that undergoes vulcanization during manufacturing, giving elastomeric properties to the final product. Vulcanization involves the cross-linking of polymer chains, leading to increased strength, durability, and flexibility. Their thermoplastic nature allows TPVs, unlike traditional vulcanized rubbers, to be melted and reprocessed multiple times. Across the automotive, household appliance, electrical, construction, and healthcare sectors, nearly 100 TPV grades are used globally. Monsanto trademarked the name Santoprene for these materials in 1977. The trademark is now owned by the Celanese Corporation. Similar materials are available from Elastron and others. Overview Thermoplastic vulcanizates were first reported in 1962 by A.M. Gessler and W.H. Haslett. In 1973, W.K. Fisher reported the dynamic vulcanization process through his prior work on polypropylene and EPDM rubber-based TPVs with peroxides as a cross-linking agent. This resulted in the commercialization of "Uniroyal TPR" thermoplastic rubber. TPVs are a blend of a thermoplastic matrix and vulcanized rubber, combining the properties of both. TPVs are characterized by a combination of elastomeric properties, including compressibility, tension set, aging performance, and chemical resistance. Although TPVs are part of the TPE family of polymers, they behave closer to EPDM thermoset rubbers in terms of their elastomeric properties. The first sales of developmental products began in 1977, the same year TPV was registered as Santoprene by Monsanto, and it was fully commercialized in 1981. Early successes Santoprene TPV had early application successes in the automotive sector, including rack-and-pinion boots, motivated by its flex life, fluid resistance, and sealability. In the appliance sector, a dishwasher sump boot was developed in which Santoprene TPV provided sealing and resistance to heat and fluids. Santoprene TPV was also successful in the domestic and high-rise construction sectors in applications such as window seals, caster wheels, tubing, small hose parts, electrical connectors, and coatings for wires and cables. It was also used in the medical industry for gaskets on syringe plungers. Chemistry Santoprene TPV is a dynamically vulcanized polymer alloy consisting mostly of fully cured EPDM rubber particles encapsulated in a polypropylene (PP) matrix. Photographs made using atomic force microscopy and scanning electron microscopy show a multitude of very small particles, typically no bigger than a few microns in diameter. These particles are fully vulcanized rubber (typically EPDM rubber for most Santoprene TPV grades) in a thermoplastic phase (most often PP in the case of Santoprene TPV grades). Fully cross-linked or vulcanized means 98% or above, and because the morphology is "locked in," it provides stable physical properties. Properties The properties of thermoplastic vulcanizates include: Hardness: The typical range is between 30 and 90 Shore A or higher, depending on the formulation. Tensile strength: Usually between 5 and 20 MPa, varying by specific blend and processing conditions. Elongation at break: Stretchable up to 3 times its length (300%). Compression set: The range is 10 to 30%, reflecting major recovery after deformation. Thermal stability: The service temperature range is –40 °C to 120 °C (-40 °F to 248 °F), depending on formulation. 
Chemical resistance: Resistant to oils, solvents, and many chemicals; varies with formulation. Water absorption: Often less than 1% after 24 hours of immersion. Flammability: Often self-extinguishing; can vary based on specific additives. Processing capability: Can be processed using standard thermoplastic processing methods, such as injection molding and extrusion. Coprocessing with other polymers can include coextrusion, molding, and blow molding for multipart system designs. Recyclability: Can be reprocessed and recycled. Density: Can be reduced through design optimization and material replacement. Sealing performance: Long-term durability combined with dimensional stability and physical properties over the life of the part. Color: Black. Applications Commercial TPV grades can be designed for a broad range of specific engineering applications, with grades ranging in hardness from 35 Shore A to 50 Shore D. Automotive components TPV is used commercially for weather seals as a lightweight alternative to thermoset rubber materials in semi-dynamic and static parts. In under-hood and under-vehicle applications, it is well suited for air ducts, tubing, molded seals, grommets, suspension bellows, cable jacketing, plugs, bumpers, and many other parts, thanks to its sealing performance and resistance to extreme temperatures, chemical exposure, and harsh environments. Building and construction products In commercial glazing seals, TPV can be used for curtain walls, storefronts, architectural windows, and skylight weather-seal applications. It is also commonly used for residential glazing seals because of its low air- and water-infiltration ratings over the life of window and door systems. Other applications include bridge decks, parking decks, water stops, rail pads, road construction projects and rail construction projects. TPV can be used to make durable seals, gaskets, and grommets that are resistant to flex fatigue, harsh temperatures, and chemicals, as well as for a variety of sealing applications, including pipe seals, bridge expansion joints and curtain walls, parts for potable water, and pipe seals for sewer and drainage. Household appliance parts Some TPV is used commercially in washing machines, dryers, dishwashers, refrigerators, small appliances, and floor care. Other uses include parts such as pump seals, hoses, couplings, vibration dampeners, drum rollers, knobs, and controls. Electrical components Commercial TPV is used in wiring connectors to make watertight seals with electrical and thermal resistance, in insulation for high-voltage applications, and in parts requiring use at temperatures down to −60 °C. For applications requiring watertight seals, TPV enables connectors to be insert-molded to cable jacketing, producing a single integral part. It is also used for industrial wire and cable connectors and low-voltage industrial cable applications that include insulation and jackets, in addition to consumer wire and cable use. Processing After a short drying period, TPV pellets are automatically transferred to the molding machine or extrusion line. Cycle times can be significantly faster compared with rubber (2 to 3 minutes) because the parts do not have to cure in the mold. Once the TPV parts are allowed to cool (about 30 seconds), they can be removed from the mold. Some commercial TPVs can be processed using conventional thermoplastic processes, such as injection molding, blow molding, and extrusion. 
The manufacture of TPV parts is less complex than that of rubber parts. Some commercial TPVs are ready to use and do not need to be compounded with other ingredients, such as reinforcing fillers (carbon black, mineral fillers), stabilizers, plasticizing oils, and curing systems. Compared to processing rubber, thermoplastic processing of TPV can deliver shorter cycle times, higher parts output per hour, and the reuse of scrap produced during processing. These attributes can result in cost reduction, less tooling/machinery, lower scrap costs, and optimization of material logistics costs compared to rubber. Processing options Injection molding: TPV can be processed using conventional thermoplastics injection-molding equipment at reduced cycle times compared to thermoset rubber. This flexibility allows for greater freedom of mold design where undercuts are employed. Insert molding: This method consists of placing a preformed substrate into the mold and injecting TPV around or over it. If the insert and the TPV are compatible materials, a melt bond occurs at the interface between the two materials. The strength of this bond is affected by several factors, including the interface temperature, the cleanliness of the insert, and the TPV's melt temperature. Two-shot injection molding: TPV can be combined with other polymers through several multi-shot injection molding processes. By combining multiple materials, a wide variety of part applications, such as hard/soft combinations, can be achieved. The process produces both a finished part and a substrate during each cycle. Two-shot molding is more efficient than insert molding because no substrate handling is required. Blow molding: Santoprene TPV can be blow molded in single-layer, multi-layer, exchange blow, sequential 3D, suction blow, flashless extrusion blow, injection blow, and press-blow molding processes. Extrusion: TPV easily extrudes into single and complex profiles. These materials can also be coextruded to yield a part with both rigid and soft components. Thermoforming: The thermoforming properties of TPV are similar to those of acrylonitrile butadiene styrene (ABS), with good melt strength, which provides uniform and predictable sag characteristics during heating. When producing a sheet for thermoformed parts, key attributes of some commercial TPVs can be maintained, including colorability, impact resistance, weatherability, chemical resistance, and a non-skid, matte surface in appearance and feel. Recycling The use of some commercial TPVs can contribute to a reduction in overall waste in the manufacturing process, as scrap produced during processing can be recycled. Material that has been recycled – even from old parts – exhibits properties almost as good as virgin material, according to a 2013 publication. Recyclability advantages One of the most significant benefits of TPVs is their potential for recycling. Unlike traditional thermoset rubbers, thermoplastic vulcanizates can be: Reprocessed multiple times Recycled into new products Used in closed-loop manufacturing systems, and Blended with virgin material for new applications. According to the article: The results of tests on protective boots for automotive rack-and-pinion gears showed that older TPV has slightly poorer physical and mechanical properties than new material. Some of the key indicators of the material's ability to maintain its properties did not change significantly. For example, new and old TPVs had nearly the same properties after air and oil ageing. 
The compression set also remained virtually identical. The results of tests that measured the color shift (Delta E) between the exterior and interior surfaces of old and new automotive secondary roof profiles showed that the TPV material experienced insignificant color changes. Other tests, which looked at whether the surfaces of the profiles bore the marks of radiation-induced degradation, showed a homogeneous appearance. In tests that compared old and new automotive glass run channel profiles, there was no significant difference in the tensile stress-strain properties—a key indicator of sealing performance. References External links Santoprene Chemical Resistance at VP-scientific.com ExxonMobil Santoprene thermoplastic vulcanization (TPV) AMCO Polymers, North American Distributor for Santoprene TPV Materials Polymers
Thermoplastic vulcanizates
[ "Chemistry", "Materials_science" ]
2,406
[ "Polymers", "Polymer chemistry" ]
13,506,515
https://en.wikipedia.org/wiki/Medicon%20Valley%20Alliance
Medicon Valley Alliance (or MVA for short) is the Danish-Swedish cluster organisation representing human life sciences in the cross-border region of Medicon Valley. As a non-profit member organisation, Medicon Valley Alliance (MVA) carries out initiatives on behalf of the local life science community in order to create new research and business opportunities – initiatives which members would not be able to implement individually – with the aim of strengthening the development of Medicon Valley. The organisation MVA's member base comprises biotech, medtech and pharma companies of all sizes, CROs and CMOs, as well as public organizations, universities, science parks, investors, and various business service providers. MVA is committed to facilitating economic growth, increased competitiveness and employment in Medicon Valley, and is furthermore committed to raising the international recognition of Medicon Valley with the aim of attracting labour, investments, and partners. MVA accomplishes this by enhancing local networks, improving local framework conditions, increasing the visibility of Medicon Valley and facilitating international relations with companies and research institutions around the world. There are currently more than 300 MVA members, including numerous large and small private biotech companies and public-sector research institutions. Among the most prominent members are Novo Nordisk, the Technical University of Denmark, Lund University and the University of Copenhagen. The current CEO of Medicon Valley Alliance is Anette Stenberg, following Petter Hartman and Stig Jørgensen. Chairman of the board is Søren Bregenholt, CEO of Alligator Bioscience. Deputy chairman is Ulf G. Andersson, CEO of MEDEON. Membership MVA participants comprise academic departments, regions (hospital managers), states, research, pharmaceutical and medical firms, CROs, CMOs, technology parks, developers, market service providers and other Medicon Valley organizations. References External links Oresund Science Region – Medicon Valley Alliance Medicon Valley Alliance Life Science Ambassador Programme Science and technology in Sweden Biotechnology organizations High-technology business districts Scientific organizations based in Denmark Business organizations based in Denmark Business organizations based in Sweden Organizations based in Copenhagen Organizations established in 1997
Medicon Valley Alliance
[ "Engineering", "Biology" ]
431
[ "Biotechnology organizations" ]
13,506,679
https://en.wikipedia.org/wiki/Saipem
Saipem S.p.A. (Società Azionaria Italiana Perforazioni E Montaggi, lit. Drilling and Assembly Italian Public Limited Company) is an Italian multinational oilfield services company and one of the largest in the world. Until 2016 it was a subsidiary of the Italian oil and gas supermajor Eni, which retains approximately 30% of Saipem's shares. History Early history The history of Saipem is deeply connected to Enrico Mattei's era of management at Eni during the years of the Italian economic miracle. In the early 1950s Mattei had reorganized the Italian oil industry through a complex system of outright acquisitions and government investments, in order to guarantee Italy's self-reliance in energy. At first, Mattei focused on natural gas, the only abundant source of energy available in mainland Italy, through Snam, a newly formed gas pipeline company. In the late 1950s, Eni's subsidiary Snam came to head two sub-holdings: Snam Montaggi, created in 1955 to build pipelines and drilling platforms, and Snam Progetti, created in 1956, specializing in tankers. In 1957 the drilling company Saip, a subsidiary of Agip (Eni's fuel retailer), was merged with Snam Montaggi to create Saipem. Saipem was a pioneer in offshore drilling and pipeline construction in Europe; in 1959 it started drilling oil off the coast of Gela, in Sicily, and in the early 1960s initiated the Central European Line pipeline, running from the port of Genoa to West Germany, where the Eni Deutschland subsidiary was building refineries in Ingolstadt. In addition, in 1961 Saipem built a 1,140 km long oil pipeline in India and a gas pipeline in Iraq. 1970s-1990s In 1978, Saipem laid down Castoro Sei, a column-stabilized semi-submersible pipelay vessel. In the same year Saipem was commissioned to build the IGAT-2 pipeline in Iran. About 80 per cent of the line had been completed by 1985, when the works had to be halted because of the Iran-Iraq war. In 1983, Saipem completed the construction of the massive Trans-Mediterranean Pipeline, linking Algeria to Italy. In 1988, a joint venture between Saipem and Brown & Root was formed, known as European Marine Contractors, which realized two major projects: Zeepipe, completed in 1993, a 1,416 km natural gas transportation system to transport North Sea natural gas to the receiving terminal at Zeebrugge in Belgium; and a 707 km trunkline connecting Hong Kong with the Yancheng 13-1 gasfield, located in the Yinggehai Basin, completed in 1994. In 1991, Saipem started operating Saipem 7000, the world's second biggest crane vessel. In 1996, the Maghreb–Europe Gas Pipeline linked Algerian gasfields to Spain. In 1995-1999, Saipem was the main contractor for the construction of the Europipe I and Europipe II natural gas pipelines, connecting Norway to Germany. 21st century In the 21st century, Saipem carried out a number of acquisitions, culminating in the purchase of Bouygues Offshore for $1 billion in 2002. In 2006 Saipem merged with Snamprogetti, a subsidiary of Eni specializing in the design and execution of large-scale offshore projects for the production and transportation of hydrocarbons. Through the merger, the new group strengthened its position in West Africa, the Middle East, Central Asia, and South East Asia and acquired significant technological competence in gas monetization and heavy oil exploitation. In 2001-2003, Saipem built the offshore section of Blue Stream, a major trans-Black Sea gas pipeline that carries natural gas from Russia into Turkey. 
In 2003-2004, Saipem built the Greenstream pipeline, connecting Libya to Sicily. In 2006, Saipem completed the sealines of the Dolphin Gas Project, connecting Qatar's North Field to the United Arab Emirates and Oman. In 2006-2008, Saipem laid down Scarabeo 8 and Scarabeo 9, ultra-deepwater sixth-generation semi-submersible drilling rigs, completed in 2011–12. In 2011, Saipem completed the two 1,220 km gas sealines of Nord Stream 1, a system of offshore natural gas pipelines from Russia to Germany and the longest in the world. In 2013, Saipem was awarded a $3 billion contract for the development of the Egina oil field, located approximately 150 km off the coast of Nigeria in the Gulf of Guinea; the contract included engineering, procurement, fabrication, installation and pre-commissioning of 52 km of oil production and water injection flow lines, 12 flexible jumpers, 20 km of gas export pipelines, 80 km of umbilicals, and the mooring and offloading systems. On 8 February 2015, Saipem won a $1.8 billion contract to build two 95 km pipelines at the Kashagan field, linking the oil fields in the Caspian Sea to the mainland in Kazakhstan. In November of the same year Saipem completed the pipelay on the 890 km offshore gas export pipeline for the Inpex-led Ichthys LNG project in Australia, said to be the longest subsea pipeline in the southern hemisphere and the third longest in the world. In 2016, Eni sold a 12.5% stake in Saipem (retaining a roughly 30% share) to CDP Equity; the change subsequently allowed Saipem to scrap the old Eni logo and design its own, with the objective of creating a new, more autonomous company focusing on oilfield services. In 2019, Saipem entered the airborne wind energy, or energy kite systems, industry via an agreement with KiteGen. In May 2024, Saipem secured three new contracts worth $3.7 billion from TotalEnergies EP Angola Block 20 for the Kaminho deepwater project to develop the Cameia and Golfinho oil fields. Controversies In 2010, Saipem agreed to pay a penalty of US$30 million to settle a Nigerian investigation into a bribery case involving the construction of Nigeria LNG facilities. Saipem is also under trial in Italy over charges relating to the same case. In September 2018, an Italian court found Saipem and former CEO Pietro Tali guilty of corruption over bribes in Algeria. The former CEO was sentenced to four years and nine months in prison and 197.9 million euros were seized from the company. In January 2020, following an appeal, the Milan Court of Appeal acquitted Saipem and all the managers involved. Corporate affairs Headquarters and offices Saipem's headquarters are located in San Donato Milanese, a suburb of Milan, Italy. Saipem has offices in over 60 countries, including: Far East and Oceania: Australia, China, India, Indonesia, Malaysia, Singapore, Thailand. Europe: Italy, France, Belgium, Croatia, Germany, Great Britain, Ireland, Luxemburg, Norway, The Netherlands, Portugal, Spain, Switzerland, Turkey, Poland, Romania America: Argentina, Brazil, Canada, Ecuador, Mexico, Peru, U.S.A., Venezuela, Suriname CIS: Azerbaijan, Kazakhstan, Russia, Georgia Africa: Algeria, Angola, Cameroon, Congo, Egypt, Gabon, Libya, Morocco, Nigeria, Sudan, Mozambique Middle East: United Arab Emirates, Saudi Arabia, Iran, Oman, Qatar, Iraq, Kuwait Subsidiaries The group headed by Saipem S.p.A. includes approximately 90 companies and consortia, based all around the world. 
"Petromar’s" shares are divided 70% (Saipem) 30% (Sonangol) https://www.petromar.co.ao/about/ Board of directors The Board of Directors of the Company consists of nine Directors: • 6 are drawn from the majority list filed jointly by Eni S.p.A. and CDP Equity S.p.A.; • 3 are drawn from the minority list filed by institutional investors. The current Board of Directors was appointed for three financial years by the Shareholders’ Meeting on May 14, 2024. Its mandate will expire on the date the Shareholders’ Meeting is called to approve the financial statement as of December 31, 2026. The Shareholders' Meeting appointed Elisabetta Serafin as Chairman of the Board of Directors. The Board of Directors, on May 14, 2024, appointed Alessandro Puliti, already General Manager of the Company, as Chief Executive Officer and Director responsible for establishing and maintaining the Company’s Internal Control and Risk Management System; the General Counsel Simone Chini was appointed Secretary of the Board of Directors. The Board of Directors complies with the applicable legislation on gender balance: at least two fifths of Directors (4 out of 9) belong to the least represented gender. Furthermore, in line with the recommendations for large companies established by the Code of Corporate Governance, to which Saipem complies, at least half the Directors (6 out of 9) are independent: Elisabetta Serafin, Roberto Diacetti, Patrizia Michela Giangualano, Mariano Mossa, Francesca Mariotti and Paul Simon Schapira. The Board of Directors is thus composed of a majority of independent Directors; Board members are all non-executive Directors, except for the CEO and General Manager. Based on the statements made by the Directors and information available to the Company, the Board of Directors ascertained that all Directors (i) meet the integrity requirements, (ii) have no causes of ineligibility and incompatibility and (iii) comply with the guidelines, last approved by the Board of Directors on February 28, 2024, concerning the maximum number of offices that Saipem Directors may hold. Main Shareholders On the basis of the information available and the communications received pursuant to CONSOB Resolution 11971/1999 (Issuers Regulations), the shareholders holding shares totalling to more than 3% of the share capital of Saipem S.p.A. are: Share Capital Saipem S.p.A. has a share capital of 501,669,790.83 euros, divided into 1,995,557,732 common shares and 1,059 savings shares, all without the indication of the par value. The shares are indivisible and each share gives the right to one vote. Holders of Saipem shares can exercise the corporate and property rights attributed to them by law, in compliance with the limits set by the law. Main Offshore Pipe-laying fleets at 31 December 2017 Main Drilling fleets at 31 May 2024 Semi-submersible platform Scarabeo 8 Semi-submersible platform Scarabeo 9 Drillship Saipem 10000 Drillship Saipem 12000 Drillship DVD (Chartered) Drillship Santorini Jack-up Perro Negro 4 Jack-up Perro Negro 7 Jack-up Perro Negro 8 Jack-up Perro Negro 9 (Chartered) Jack-up Perro Negro 10 Jack-up Perro Negro 11 (Chartered) Jack-up Perro Negro 12 (Chartered) Jack-up Perro Negro 13 (Chartered) Jindal Pioneer (Chartered) Main FPSO's at 31 December 2017 Saipem Cidade de Vitoria Saipem Gimboa Saipem Kaombo (not owned) See also List of Italian companies List of oilfield service companies References Essential bibliography (en) Paul H. 
Frankel, Oil and Power Policy, New York – Washington, Praeger, 1966 (en) Marcello Boldrini, Mattei, Rome, Colombo, 1969 (it) Marcello Colitti, Energia e sviluppo in Italia, Bari, De Donato, 1979 (it) Nico Perrone, Enrico Mattei, Bologna, Il mulino, 2001 Eni Energy engineering and contractor companies Engineering companies of Italy Energy companies established in 1957 Oil and gas companies of Italy Companies based in Milan Italian brands Italian companies established in 1957 Companies in the FTSE MIB
Saipem
[ "Engineering" ]
2,492
[ "Energy engineering and contractor companies", "Engineering companies" ]
13,507,050
https://en.wikipedia.org/wiki/Dutch%20Design%20Week
Dutch Design Week (also known as DDW) is an event about Dutch design, hosted in Eindhoven, Netherlands. The event takes place around the last week of October and is a nine-day event with exhibitions, studio visits, workshops, seminars, and parties across the city. The event hosts companies including Philips, Philips Design and DAF, as well as the Design Academy Eindhoven and the Eindhoven University of Technology. The initiative began in 2002 as a non-commercial fair and by 2018 had 355,000 visitors. The DDW consists of around 120 venues. The main venues during the event include, among others, the Klokgebouw (Strijp-S), the Design Academy Eindhoven and the Faculty of Industrial Design at the Eindhoven University of Technology, where successful and well-visited exhibitions are organized. Although the main goal remains to create a non-commercial event, conflicts of interest and rapid growth have contributed to a more commercial approach since 2007. Pop venue Effenaar and classical music venue Muziekgebouw Frits Philips both organize the musical program DDW Music around the festival, with live performances as well as exhibitions related to experimental musical instruments, sound art and sound installations. Dutch Design Week 2020 was an online-only event. A digital festival, initially planned to work alongside a programme of studio tours and socially distanced activities, became the centrepiece of the festival after all physical events had been cancelled due to a rise in coronavirus cases in the city. Theme Since the 2012 edition, Dutch Design Week has picked a yearly theme overarching the entire week. Ambassadors Since 2009, Dutch Design Week has picked multiple ambassadors from the field who are advocates of Dutch Design. See also Dutch Design References External links Dutch Design Week Dutch design Culture in Eindhoven Industrial design Events in Eindhoven
Dutch Design Week
[ "Engineering" ]
373
[ "Industrial design", "Design engineering", "Design" ]
13,507,300
https://en.wikipedia.org/wiki/Lambda%20Piscium
Lambda Piscium, Latinized from λ Piscium, is a solitary, white-hued star in the zodiac constellation of Pisces. With an apparent visual magnitude of 4.49, it is visible to the naked eye, forming the southeast corner of the "Circlet" asterism in Pisces. Based upon a measured annual parallax shift of 30.59 mas as seen from Earth, it is located about 107 light-years from the Sun. Lambda Piscium is a member of the Ursa Major Stream, lying among the outer parts, or corona, of this moving group of stars that roughly follow a common heading through space. This well-studied star has a stellar classification of A7 V, indicating it is an A-type main-sequence star that is generating energy through hydrogen fusion at its core. It has 1.8 times the mass of the Sun and double the Sun's radius. The star is radiating 13.3 times the Sun's luminosity from its photosphere at an effective temperature of 7,734 K. Lambda Piscium is around 583 million years old and is spinning with a projected rotational velocity of 70 km/s. Naming In Chinese, λ Piscium belongs to the asterism known as Cloud and Rain, which consists of λ Piscium, κ Piscium, 12 Piscium and 21 Piscium; the star's individual Chinese name identifies it as a member of this asterism. References A-type main-sequence stars Ursa Major moving group Pisces (constellation) Piscium, Lambda BD+00 5037 Piscium, 018 222603 116928 8984
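The quoted distance follows directly from the parallax. A quick check in Python (a minimal sketch; the conversion constants are standard, not taken from the article's sources):

```python
# Distance from annual parallax: d [parsecs] = 1000 / parallax [mas].
LY_PER_PC = 3.26156                 # light-years per parsec

parallax_mas = 30.59                # measured parallax of Lambda Piscium
distance_pc = 1000.0 / parallax_mas
distance_ly = distance_pc * LY_PER_PC
print(f"{distance_ly:.0f} ly")      # -> 107 ly, matching the article
```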
Lambda Piscium
[ "Astronomy" ]
348
[ "Pisces (constellation)", "Constellations" ]
13,507,959
https://en.wikipedia.org/wiki/Pfitzinger%20reaction
The Pfitzinger reaction (also known as the Pfitzinger-Borsche reaction) is the chemical reaction of isatin with base and a carbonyl compound to yield substituted quinoline-4-carboxylic acids. Several reviews have been published. Reaction mechanism The reaction of isatin with a base such as potassium hydroxide hydrolyses the amide bond to give the keto-acid 2. This intermediate can be isolated, but is typically not. A ketone (or aldehyde) will react with the aniline to give the imine (3) and the enamine (4). The enamine will cyclize and dehydrate to give the desired quinoline (5). Variations Halberkann variant Reaction of N-acyl isatins with base gives 2-hydroxy-quinoline-4-carboxylic acids. See also Camps quinoline synthesis Friedländer synthesis Niementowski quinazoline synthesis Doebner reaction Talnetant, Cinchocaine References Carbon-carbon bond forming reactions Condensation reactions Quinoline forming reactions Ring expansion reactions Name reactions
Pfitzinger reaction
[ "Chemistry" ]
240
[ "Ring expansion reactions", "Carbon-carbon bond forming reactions", "Organic reactions", "Name reactions", "Condensation reactions" ]
13,508,240
https://en.wikipedia.org/wiki/Planarity
Planarity is a 2005 puzzle computer game by John Tantalo, based on a concept by Mary Radcliffe at Western Michigan University. The name comes from the concept of planar graphs in graph theory; these are graphs that can be embedded in the Euclidean plane so that no edges intersect. By Fáry's theorem, if a graph is planar, it can be drawn without crossings so that all of its edges are straight line segments. In the planarity game, the player is presented with a circular layout of a planar graph, with all the vertices placed on a single circle and with many crossings. The goal for the player is to eliminate all of the crossings and construct a straight-line embedding of the graph by moving the vertices one by one into better positions. History and versions The game was written in Flash by John Tantalo at Case Western Reserve University in 2005. Online popularity and the local notoriety he gained placed Tantalo as one of Cleveland's most interesting people for 2006. It in turn has inspired the creation of a GTK+ version by Xiph.org's Chris Montgomery, which possesses additional level generation algorithms and the ability to manipulate multiple nodes at once. Puzzle generation algorithm The definition of the planarity puzzle does not depend on how the planar graphs in the puzzle are generated, but the original implementation uses the following algorithm: Generate a set of L random lines in a plane such that no two lines are parallel and no three lines meet in a single point. Calculate the intersections of every line pair. Create a graph with a vertex for each intersection and an edge for each line segment connecting two intersections (the arrangement of the lines). If a graph is generated from L lines, then the graph will have exactly L(L − 1)/2 vertices (each line has L − 1 vertices, and each vertex is shared with one other line) and L(L − 2) edges (each line contains L − 2 edges). The first level of Planarity is built with L = 4 lines, so it has 6 vertices and 8 edges. Each level after is generated by one more line than the last. If a level was generated with L lines, then the next level has L more vertices and 2L − 1 more edges. The best known algorithms from computational geometry for constructing the graphs of line arrangements solve the problem in O(L²) time, linear in the size of the graph to be constructed, but they are somewhat complex. Alternatively and more simply, it is possible to index each crossing point by the pair of lines that cross at that point, sort the crossings along each line by their x-coordinates, and use this sorted ordering to generate the edges of the planar graph, in near-optimal time. Once the vertices and edges of the graph have been generated, they may be placed evenly around a circle using a random permutation. Related theoretical research The problem of determining whether a graph is planar can be solved in linear time, and any such graph is guaranteed to have a straight-line embedding by Fáry's theorem, which can also be found from the planar embedding in linear time. Therefore, any puzzle could be solved in linear time by a computer. However, these puzzles are not as straightforward for human players to solve. In the field of computational geometry, the process of moving a subset of the vertices in a graph embedding to eliminate edge crossings has been studied by Pach and Tardos (2002), and others, inspired by the planarity puzzle. 
The results of these researchers show that (in theory, assuming that the field of play is the infinite plane rather than a bounded rectangle) it is always possible to solve the puzzle while leaving n^ε of the input vertices fixed in place at their original positions, for a constant ε that has not been determined precisely but lies between 1/4 and slightly less than 1/2. When the planar graph to be untangled is a cycle graph, a larger number of vertices may be fixed in place. However, determining the largest number of vertices that may be left in place for a particular input puzzle (or equivalently, the smallest number of moves needed to solve the puzzle) is NP-complete. It has been shown that the randomized circular layout used for the initial state of Planarity is nearly the worst possible in terms of its number of crossings: regardless of what planar graph is to be tangled, the expected value of the number of crossings for this layout is within a factor of three of the largest number of crossings among all layouts. In 2014, mathematician David Eppstein published a paper providing an effective algorithm for untangling the planar graphs generated by the original Planarity game, based on the specifics of the puzzle generation algorithm. References External links Planarity.net — the original Flash game NetLogo System — Included as sample program (game) in NetLogo system Planarity — Version using SVG and the d3 JavaScript library Multitouch Planarity — Multiplayer- and multitouch-enabled version written in Python using libavg. Puzzle video games Mathematical games Planar graphs
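The generation algorithm described above is short enough to sketch concretely. The following Python sketch is our illustration (the function and variable names are ours, not Tantalo's); each line is represented as y = m·x + b with random slope and intercept, so with probability 1 no two lines are parallel and no three are concurrent:

```python
import itertools
import random

def generate_planarity_level(num_lines):
    """Build the arrangement graph of random lines: vertices are
    indexed by the pair of lines crossing there; edges join
    consecutive crossings along each line."""
    lines = [(random.uniform(-1, 1), random.uniform(-1, 1))
             for _ in range(num_lines)]           # (slope, intercept)

    def cross_x(i, j):
        # x-coordinate where lines i and j cross: m_i x + b_i = m_j x + b_j.
        (mi, bi), (mj, bj) = lines[i], lines[j]
        return (bj - bi) / (mi - mj)

    vertices = list(itertools.combinations(range(num_lines), 2))
    edges = []
    for i in range(num_lines):
        # Sort this line's crossings by x; consecutive crossings are
        # joined by an edge (one segment of the line).
        on_line = sorted((j for j in range(num_lines) if j != i),
                         key=lambda j: cross_x(i, j))
        for a, b in zip(on_line, on_line[1:]):
            edges.append((tuple(sorted((i, a))), tuple(sorted((i, b)))))
    return vertices, edges

verts, edges = generate_planarity_level(4)
assert len(verts) == 6 and len(edges) == 8   # L(L-1)/2 and L(L-2) for L = 4
```

The final assertion checks the vertex and edge counts quoted above for the first level (L = 4); shuffling the vertices evenly around a circle, as in the game's initial layout, would then be a random permutation over the circle positions.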
Planarity
[ "Mathematics" ]
1,014
[ "Recreational mathematics", "Planes (geometry)", "Planar graphs", "Mathematical games" ]
13,509,367
https://en.wikipedia.org/wiki/Cook%E2%80%93Heilbron%20thiazole%20synthesis
The Cook–Heilbron thiazole synthesis highlights the formation of 5-aminothiazoles through the chemical reaction of α-aminonitriles or aminocyanoacetates with dithioacids, carbon disulphide, carbon oxysulfide, or isothiocyanates at room temperature and under mild or aqueous conditions. Variation of substituents at the 2nd and 4th position of the thiazole is introduced by selecting different combinations of starting reagents. This reaction was first discovered in 1947 by Alan H. Cook, Sir Ian Heilbron, and A. L. Levy, and marks one of the first examples of 5-aminothiazole synthesis with significant yield and diversity in scope. Prior to their discovery, 5-aminothiazoles were a relatively unknown class of compounds, but were of synthetic interest and utility. Their premier publication illustrated the formation of 5-amino-2-benzylthiazole and 5-amino-4-carbethoxy-2-benzylthiazole by reacting a dithioacid with aminoacetonitrile and ethyl aminocyanoacetate, respectively. Subsequent experiments by Cook and Heilbron, detailed in their series of publications titled “Studies in the Azole Series”, describe early attempts to expand the scope of 5-aminothiazole synthesis, as well as employ 5-aminothiazoles in the formation of purines and pyridines. Mechanism In the first step of the reaction mechanism for the synthesis of a 5-aminothiazole from an α-aminonitrile and carbon disulphide, a lone pair on the nitrogen of the α-aminonitrile performs a nucleophilic attack on the slightly electropositive carbon of carbon disulfide. This addition reaction pushes electrons from the carbon-sulfur double bond onto one of the sulfur atoms. Acting as a Lewis base, the sulfur atom donates its electrons to the carbon atom of the nitrile, forming a sulfur-carbon sigma bond in an intramolecular 5-exo-dig cyclization. This cyclization forms a 5-imino-2-thione thiazolidine compound that undergoes a tautomerization when a base, such as water, abstracts the hydrogens at positions 3 and 4. The electrons from the carbon-hydrogen sigma bond are pushed back into the thiazole ring, forming two new double bonds with the adjacent carbon atoms, and catalyzing the formation of two new nitrogen-hydrogen and sulfur-hydrogen sigma bonds. This tautomerization occurs because it is thermodynamically favourable, yielding the aromatic final product: 5-aminothiazole. Applications Few instances of applications of the Cook–Heilbron thiazole synthesis are found in the literature. In recent years, modifications of the Hantzsch thiazole synthesis have been the most common, partly because of their ease in introducing R-group diversity. However, in 2008 Scott et al. employed a Cook–Heilbron synthesis in their approach to synthesize a novel series of pyridyl and thiazolyl bisamide CSF-1R inhibitors for use in cancer therapeutics. A couple of the compounds that were analysed for in vivo anti-cancer activity contained thiazole derivatives that had been synthesized using a Cook–Heilbron approach. For instance, 2-methyl-5-aminothiazoles were prepared via condensation and cyclization of aminoacetonitrile as part of the synthesis of thiazolyl bisamides. Relevance Thiazoles are essential components of many biologically active compounds making them important features in drug design. Thiazoles are found in a number of pharmacological compounds such as tiazofurin and dasatinib (antineoplastic agents), ritonavir (an anti-HIV drug), ravuconazole (antifungal agent), meloxicam and fentiazac (anti-inflammatory agents) and nizatidine (anti-ulcer agent). 
Consequently, understanding and applying a range of approaches to synthesize thiazoles facilitates greater flexibility in both designing drugs as well as optimizing synthetic routes. References Nitrogen heterocycle forming reactions Sulfur heterocycle forming reactions Heterocycle forming reactions Name reactions
Cook–Heilbron thiazole synthesis
[ "Chemistry" ]
899
[ "Name reactions", "Ring forming reactions", "Heterocycle forming reactions", "Organic reactions" ]
13,509,534
https://en.wikipedia.org/wiki/Cram%C3%A9r%27s%20decomposition%20theorem
Cramér’s decomposition theorem for a normal distribution is a result of probability theory. It is well known that, given independent normally distributed random variables ξ1, ξ2, their sum is normally distributed as well. It turns out that the converse is also true. The latter result, initially announced by Paul Lévy, was proved by Harald Cramér. This became a starting point for a new subfield in probability theory, decomposition theory for random variables as sums of independent variables (also known as the arithmetic of probability distributions). The precise statement of the theorem Let a random variable ξ be normally distributed and admit a decomposition as a sum ξ = ξ1 + ξ2 of two independent random variables. Then the summands ξ1 and ξ2 are normally distributed as well. A proof of Cramér's decomposition theorem uses the theory of entire functions. See also Raikov's theorem: a similar result for the Poisson distribution. References Probability theorems Theorems in statistics Characterization of probability distributions
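In symbols, the statement reads as follows (a compact rendering of the theorem above; the notation, including the convention that degenerate zero-variance laws count as normal, is our choice):

```latex
\xi = \xi_1 + \xi_2,\quad \xi_1 \perp\!\!\!\perp \xi_2,\quad
\xi \sim \mathcal{N}(\mu,\sigma^2)
\;\Longrightarrow\;
\xi_1 \sim \mathcal{N}(\mu_1,\sigma_1^2)
\ \text{and}\
\xi_2 \sim \mathcal{N}(\mu_2,\sigma_2^2),
```

with μ = μ1 + μ2 and σ² = σ1² + σ2²; here ⫫ denotes independence.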
Cramér's decomposition theorem
[ "Mathematics" ]
200
[ "Mathematical problems", "Theorems in probability theory", "Mathematical theorems", "Theorems in statistics" ]
13,510,193
https://en.wikipedia.org/wiki/Densely%20defined%20operator
In mathematics – specifically, in operator theory – a densely defined operator or partially defined operator is a type of partially defined function. In a topological sense, it is a linear operator that is defined "almost everywhere". Densely defined operators often arise in functional analysis as operations that one would like to apply to a larger class of objects than those for which they a priori "make sense". A closed operator that is used in practice is often densely defined. Definition A densely defined linear operator T from one topological vector space, X, to another one, Y, is a linear operator that is defined on a dense linear subspace dom(T) of X and takes values in Y, written T : dom(T) ⊆ X → Y. Sometimes this is abbreviated as T : X → Y when the context makes it clear that X might not be the set-theoretic domain of T. Examples Consider the space C([0, 1]; R) of all real-valued, continuous functions defined on the unit interval; let C¹([0, 1]; R) denote the subspace consisting of all continuously differentiable functions. Equip C([0, 1]; R) with the supremum norm ‖·‖∞; this makes C([0, 1]; R) into a real Banach space. The differentiation operator D given by (Du)(x) = u′(x) is a densely defined operator from C([0, 1]; R) to itself, defined on the dense subspace C¹([0, 1]; R). The operator D is an example of an unbounded linear operator, since the ratio ‖Du‖∞ / ‖u‖∞ can be made arbitrarily large (a worked instance is given below). This unboundedness causes problems if one wishes to somehow continuously extend the differentiation operator to the whole of C([0, 1]; R). The Paley–Wiener integral, on the other hand, is an example of a continuous extension of a densely defined operator. In any abstract Wiener space i : H → E with adjoint j = i∗ : E∗ → H, there is a natural continuous linear operator (in fact it is the inclusion, and is an isometry) from j(E∗) to L²(E, γ; R), under which j(f) goes to the equivalence class [f] of f in L²(E, γ; R). It can be shown that j(E∗) is dense in H. Since the above inclusion is continuous, there is a unique continuous linear extension I : H → L²(E, γ; R) of the inclusion j(E∗) → L²(E, γ; R) to the whole of H. This extension is the Paley–Wiener map. See also References Functional analysis Hilbert spaces Linear operators Operator theory
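A worked instance of the unboundedness claim (our choice of test functions; a standard sketch rather than a computation taken from the article's sources): take uₙ(x) = e^(−nx) on [0, 1]. Then

```latex
\|u_n\|_\infty = \sup_{x \in [0,1]} e^{-nx} = 1,
\qquad
\|D u_n\|_\infty = \sup_{x \in [0,1]} \bigl| -n\,e^{-nx} \bigr| = n,
```

so ‖Duₙ‖∞ / ‖uₙ‖∞ = n grows without bound, and no constant C can satisfy ‖Du‖∞ ≤ C‖u‖∞ on all of C¹([0, 1]; R).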
Densely defined operator
[ "Physics", "Mathematics" ]
372
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Quantum mechanics", "Linear operators", "Mathematical relations", "Hilbert spaces" ]
13,510,300
https://en.wikipedia.org/wiki/Kappa%20Piscium
Kappa Piscium (κ Piscium) is a solitary, white-hued star in the zodiac constellation of Pisces. It is visible to the naked eye with an apparent visual magnitude of 4.94, forming the southeastern corner of the "Circlet" asterism in Pisces. Based upon a measured annual parallax shift of 21.25 mas as seen from Earth, it is located about 153 light-years from the Sun. Though it appears as a single point to the naked eye, it is easily split when viewed with a pair of binoculars, displaying three components. Kappa Piscium has an apparent magnitude of 4.87 at maximum brightness and 4.95 at minimum brightness, while the visual companions have apparent magnitudes of 9.96 and 11.20. This is an A-type main-sequence star whose stellar classification carries a peculiarity suffix, indicating it is a "chemically peculiar" Ap star that displays abnormal abundances of silicon, strontium, and chromium. It is an Alpha2 Canum Venaticorum variable with a weak active magnetic field that causes it to fluctuate by 0.01 to 0.1 in magnitude as it rotates. It shows many lines of uranium, and possibly the rare element holmium, in its spectrum. Its uranium and osmium content could have been provided by a nearby supernova. Compared to the Sun, it is deficient in oxygen relative to the magnesium abundance. This star is a candidate member of the AB Doradus moving group, an association of stars with similar ages that share a common heading through space. Naming In Chinese, κ Piscium belongs to the asterism known as Cloud and Rain, which consists of κ Piscium, 12 Piscium, 21 Piscium and λ Piscium; the star's individual Chinese name identifies it as a member of this asterism. References Ap stars Alpha2 Canum Venaticorum variables Pisces (constellation) Piscium, Kappa BD+00 4998 Piscium, 08 220825 115738 8911
Kappa Piscium
[ "Astronomy" ]
428
[ "Pisces (constellation)", "Constellations" ]
13,511,383
https://en.wikipedia.org/wiki/Anaerobic%20oxidation%20of%20methane
Anaerobic oxidation of methane (AOM) is a methane-consuming microbial process occurring in anoxic marine and freshwater sediments. AOM is known to occur among mesophiles, but also in psychrophiles, thermophiles, halophiles, acidophiles, and alkaliphiles. During AOM, methane is oxidized with different terminal electron acceptors such as sulfate, nitrate, nitrite and metals, either alone or in syntrophy with a partner organism. Coupled to sulfate reduction The overall reaction is: CH4 + SO42− → HCO3− + HS− + H2O Sulfate-driven AOM is mediated by a syntrophic consortium of methanotrophic archaea and sulfate-reducing bacteria (SRB). They often form small aggregates or sometimes voluminous mats. The archaeal partner is abbreviated ANME, which stands for "anaerobic methanotroph". ANMEs are very closely related to methanogenic archaea and recent investigations suggest that AOM is an enzymatic reversal of methanogenesis. It is still poorly understood how the syntrophic partners interact and which intermediates are exchanged between the archaeal and bacterial cell. The research on AOM is hindered by the fact that the responsible organisms have not been isolated. This is because these organisms show very slow growth rates, with a minimum doubling time of a few months. Countless isolation efforts have failed to isolate an anaerobic methanotroph; a possible explanation is that the ANME archaea and the SRB have an obligate syntrophic interaction and therefore cannot be isolated individually. In benthic marine areas with strong methane releases from fossil reservoirs (e.g. at cold seeps, mud volcanoes or gas hydrate deposits) AOM can be so high that chemosynthetic organisms like filamentous sulfur bacteria (see Beggiatoa) or animals (clams, tube worms) with symbiont sulfide-oxidizing bacteria can thrive on the large amounts of hydrogen sulfide that are produced during AOM. The bicarbonate (HCO3−) produced from AOM can either (i) be sequestered in the sediments by the precipitation of calcium carbonate as so-called methane-derived authigenic carbonates or (ii) be released into the overlying water column. Methane-derived authigenic carbonates are known to be the most 13C-depleted carbonates on Earth, with reported δ13C values as low as −125 per mil (PDB). Coupled to nitrate and nitrite reduction The overall reactions are: CH4 + 4 NO3− → CO2 + 4 NO2− + 2 H2O 3 CH4 + 8 NO2− + 8 H+ → 3 CO2 + 4 N2 + 10 H2O Recently, ANME-2d has been shown to be responsible for nitrate-driven AOM. The ANME-2d archaeon, named Methanoperedens nitroreducens, is able to perform nitrate-driven AOM without a partner organism via reverse methanogenesis with nitrate as the terminal electron acceptor, using genes for nitrate reduction that have been laterally transferred from a bacterial donor. This was also the first complete reverse-methanogenesis pathway to include the mcr and mer genes. In 2010, omics (especially metagenomics) analyses showed that nitrite reduction can be coupled to methane oxidation by a single bacterial species, Candidatus Methylomirabilis oxyfera (phylum NC10), without the need for an archaeal partner. Environmental relevance AOM is considered to be a very important process reducing the emission of the greenhouse gas methane from the ocean into the atmosphere. It is estimated that almost 80% of all the methane that arises from marine sediments is oxidized anaerobically by this process. See also Borg (microbiology) References Bibliography Dennis D. Coleman, J. Bruno Risatti, Martin Schoell (1981). "Fractionation of carbon and hydrogen isotopes by methane-oxidizing bacteria". Geochimica et Cosmochimica Acta 45 (7): 1033–1037. https://doi.org/10.1016/0016-7037(81)90129-0 External links Methane Limnology
Anaerobic oxidation of methane
[ "Chemistry" ]
918
[ "Greenhouse gases", "Methane" ]
13,511,500
https://en.wikipedia.org/wiki/Orion%27s%20Sword
Orion's Sword is a compact asterism in the constellation Orion. It comprises three stars (42 Orionis, Theta Orionis, and Iota Orionis) and M42, the Orion Nebula, which together are thought to resemble a sword or its scabbard. This group is south of the prominent asterism, Orion's Belt. In Europe, fables and old beliefs about this grouping are dominated or widely influenced by Greco-Roman narratives. Beyond Europe this grouping is quite widely referenced as a weapon, just as the majority of cultures perceived Orion's standout asymmetrical "hourglass" of seven very bright stars as a human figure. Components Orion Nebula The Orion Nebula is one of the nearest massive molecular clouds (30–40 light-years in diameter), about 1,300 light-years from the Solar System and thus within the Milky Way Galaxy. This makes the nebula potentially the closest HII region to Earth, a mass of hydrogen that has been ionized by nearby, hot, young stars. Regions like this are called stellar nurseries, nurturing the birth of multiple young stars such as the Orion Nebula Star Cluster. These are a hallmark of the asterism. Main stars 42 Orionis, also called c Ori, is a star of spectral type B1V in the northern half of the Orion Nebula. Theta Orionis has a more central position in the nebula, and is actually composed of a multi-star system. Iota Orionis is one of the brightest in the collection, in the south of the Orion Nebula. Iota Orionis is a spectroscopic binary system with variable magnitude, whose primary has a spectral type of O9III. Scientific studies Given the scientific significance of M42, Orion's Sword is a popular spot for stellar and protostellar studies. Using the Hubble Space Telescope, O'Dell et al. focused on identifying previously unseen features of the nebula, such as high-ionization shocks, compact sources, and protoplanetary disks. Some studies have focused on the sword region overall. Gomez & Lada found that less than half of the OB and Hα stars in this region are associated with well-defined stellar clusters. This positional similarity, as well as the high star formation rates and gas pressure in the nearby molecular cloud, confirms the previous notion that old, foreground OB stars triggered star formation in this cloud. References in history and culture Hyginus described three faint stars where the sword is depicted in the constellation Orion, in his book De Astronomia. Aratus goes into significant detail about the Orion constellation as well, proclaiming: "Should anyone fail to catch sight of him (Orion) up in the heavens on a clear night, he should not expect to behold anything more splendid when he gazes up at the sky." Cicero and Germanicus, the translators of Aratus's Phaenomena, rendered it with the Latin word for "sword". Arabic astronomers also saw this asterism as a sword, calling it the "sword of the powerful one" or "sword of the giant". Orion is one of the few constellations to have parallel identities in European and Chinese culture, given the name Shen, the hunter and warrior. Chinese astronomers made the sword a sub-constellation within Shen called Fa. In the myths of the Nama of Namibia and the western Cape, this was the arrow of the husband of the Pleiades, daughters of the sky god, who was represented by Orion's southwestern main star Rigel. He fired his arrow at three zebras (Orion's Belt) and missed; he was too afraid to retrieve the arrow due to its proximity to a fierce lion, represented by Betelgeuse. Therefore, he sits in the cold, suffering from hunger but too ashamed to return home. 
Regionally the prevailing cold breezes and currents come from that direction. The Tswana to the east traditionally see the unusually bright nebula and its companions as three dogs which chase the three pigs (the belt). This serves as an etiological myth for why pigs have their litters in the season when Orion is prominent in the sky. Orion's sword is referenced in the song "The Dark of the Sun" by Tom Petty on his 1991 album Into the Great Wide Open, in the line "saw you sail across a river underneath Orion's sword ...". It is also mentioned in Jethro Tull's song "Orion", on their 1979 album Stormwatch, in the lines "Your faithful dog shines brighter than its lord and master, your jewelled sword twinkles as the world rolls by." See also Orion (constellation) Orion's Belt Thornborough Henges Orion Correlation Theory References Orion (constellation) Asterisms (astronomy)
Orion's Sword
[ "Astronomy" ]
973
[ "Sky regions", "Asterisms (astronomy)", "Constellations", "Orion (constellation)" ]
13,511,542
https://en.wikipedia.org/wiki/Pseudoforest
In graph theory, a pseudoforest is an undirected graph in which every connected component has at most one cycle. That is, it is a system of vertices and edges connecting pairs of vertices, such that no two cycles of consecutive edges share any vertex with each other, nor can any two cycles be connected to each other by a path of consecutive edges. A pseudotree is a connected pseudoforest. The names are justified by analogy to the more commonly studied trees and forests. (A tree is a connected graph with no cycles; a forest is a disjoint union of trees.) Gabow and Tarjan attribute the study of pseudoforests to Dantzig's 1963 book on linear programming, in which pseudoforests arise in the solution of certain network flow problems. Pseudoforests also form graph-theoretic models of functions and occur in several algorithmic problems. Pseudoforests are sparse graphs – their number of edges is linearly bounded in terms of their number of vertices (in fact, they have at most as many edges as they have vertices) – and their matroid structure allows several other families of sparse graphs to be decomposed as unions of forests and pseudoforests. Definitions and structure We define an undirected graph to be a set of vertices and edges such that each edge has two vertices (which may coincide) as endpoints. That is, we allow multiple edges (edges with the same pair of endpoints) and loops (edges whose two endpoints are the same vertex). A subgraph of a graph is the graph formed by any subsets of its vertices and edges such that each edge in the edge subset has both endpoints in the vertex subset. A connected component of an undirected graph is the subgraph consisting of the vertices and edges that can be reached by following edges from a single given starting vertex. A graph is connected if every vertex or edge is reachable from every other vertex or edge. A cycle in an undirected graph is a connected subgraph in which each vertex is incident to exactly two edges, or is a loop. A pseudoforest is an undirected graph in which each connected component contains at most one cycle. Equivalently, it is an undirected graph in which each connected component has no more edges than vertices. The components that have no cycles are just trees, while the components that have a single cycle within them are called 1-trees or unicyclic graphs. That is, a 1-tree is a connected graph containing exactly one cycle. A pseudoforest with a single connected component (usually called a pseudotree, although some authors define a pseudotree to be a 1-tree) is either a tree or a 1-tree; in general a pseudoforest may have multiple connected components as long as all of them are trees or 1-trees. If one removes from a 1-tree one of the edges in its cycle, the result is a tree. Reversing this process, if one augments a tree by connecting any two of its vertices by a new edge, the result is a 1-tree; the path in the tree connecting the two endpoints of the added edge, together with the added edge itself, form the 1-tree's unique cycle. If one augments a 1-tree by adding an edge that connects one of its vertices to a newly added vertex, the result is again a 1-tree, with one more vertex; an alternative method for constructing 1-trees is to start with a single cycle and then repeat this augmentation operation any number of times. 
The edges of any 1-tree can be partitioned in a unique way into two subgraphs, one of which is a cycle and the other of which is a forest, such that each tree of the forest contains exactly one vertex of the cycle. Certain more specific types of pseudoforests have also been studied. A 1-forest, sometimes called a maximal pseudoforest, is a pseudoforest to which no more edges can be added without causing some component of the graph to contain multiple cycles. If a pseudoforest contains a tree as one of its components, it cannot be a 1-forest, for one can add either an edge connecting two vertices within that tree, forming a single cycle, or an edge connecting that tree to some other component. Thus, the 1-forests are exactly the pseudoforests in which every component is a 1-tree. The spanning pseudoforests of an undirected graph G are the pseudoforest subgraphs of G that have all the vertices of G. Such a pseudoforest need not have any edges, since for example the subgraph that has all the vertices of G and no edges is a pseudoforest (whose components are trees consisting of a single vertex). The maximal pseudoforests of G are the pseudoforest subgraphs of G that are not contained within any larger pseudoforest of G. A maximal pseudoforest of G is always a spanning pseudoforest, but not conversely. If G has no connected components that are trees, then its maximal pseudoforests are 1-forests, but if G does have a tree component, its maximal pseudoforests are not 1-forests. Stated precisely, in any graph G its maximal pseudoforests consist of every tree component of G, together with one or more disjoint 1-trees covering the remaining vertices of G. Directed pseudoforests Versions of these definitions are also used for directed graphs. Like an undirected graph, a directed graph consists of vertices and edges, but each edge is directed from one of its endpoints to the other endpoint. A directed pseudoforest is a directed graph in which each vertex has at most one outgoing edge; that is, it has outdegree at most one. A directed 1-forest – most commonly called a functional graph (see below), sometimes maximal directed pseudoforest – is a directed graph in which each vertex has outdegree exactly one. If D is a directed pseudoforest, the undirected graph formed by removing the direction from each edge of D is an undirected pseudoforest. Number of edges Every pseudoforest on a set of n vertices has at most n edges, and every maximal pseudoforest on a set of n vertices has exactly n edges. Conversely, if a graph G has the property that, for every subset S of its vertices, the number of edges in the induced subgraph of S is at most the number of vertices in S, then G is a pseudoforest. 1-trees can be defined as connected graphs with equally many vertices and edges. Moving from individual graphs to graph families, if a family of graphs has the property that every subgraph of a graph in the family is also in the family, and every graph in the family has at most as many edges as vertices, then the family contains only pseudoforests. For instance, every subgraph of a thrackle (a graph drawn so that every pair of edges has one point of intersection) is also a thrackle, so Conway's conjecture that every thrackle has at most as many edges as vertices can be restated as saying that every thrackle is a pseudoforest. A more precise characterization is that, if the conjecture is true, then the thrackles are exactly the pseudoforests with no four-vertex cycle and at most one odd cycle. 
Streinu and Theran generalize the sparsity conditions defining pseudoforests: they define a graph as being (k,l)-sparse if every nonempty subgraph with n vertices has at most kn − l edges, and (k,l)-tight if it is (k,l)-sparse and has exactly kn − l edges. Thus, the pseudoforests are the (1,0)-sparse graphs, and the maximal pseudoforests are the (1,0)-tight graphs. Several other important families of graphs may be defined from other values of k and l, and when l ≤ k the (k,l)-sparse graphs may be characterized as the graphs formed as the edge-disjoint union of l forests and k − l pseudoforests. Almost every sufficiently sparse random graph is a pseudoforest. That is, if c is a constant with 0 < c < 1/2, and Pc(n) is the probability that choosing uniformly at random among the n-vertex graphs with cn edges results in a pseudoforest, then Pc(n) tends to one in the limit for large n. However, for c > 1/2, almost every random graph with cn edges has a large component that is not unicyclic. Enumeration A graph is simple if it has no self-loops and no multiple edges with the same endpoints. The numbers of simple 1-trees with n labelled vertices, for n up to 300, can be found in the On-Line Encyclopedia of Integer Sequences. The number of maximal directed pseudoforests on n vertices, allowing self-loops, is n^n, because for each vertex there are n possible endpoints for the outgoing edge. André Joyal used this fact to provide a bijective proof of Cayley's formula, that the number of undirected trees on n nodes is n^(n − 2), by finding a bijection between maximal directed pseudoforests and undirected trees with two distinguished nodes. If self-loops are not allowed, the number of maximal directed pseudoforests is instead (n − 1)^n. Graphs of functions Directed pseudoforests and endofunctions are in some sense mathematically equivalent. Any function ƒ from a set X to itself (that is, an endomorphism of X) can be interpreted as defining a directed pseudoforest which has an edge from x to y whenever ƒ(x) = y. The resulting directed pseudoforest is maximal, and may include self-loops whenever some value x has ƒ(x) = x. Alternatively, omitting the self-loops produces a non-maximal pseudoforest. In the other direction, any maximal directed pseudoforest determines a function ƒ such that ƒ(x) is the target of the edge that goes out from x, and any non-maximal directed pseudoforest can be made maximal by adding self-loops and then converted into a function in the same way. For this reason, maximal directed pseudoforests are sometimes called functional graphs. Viewing a function as a functional graph provides a convenient language for describing properties that are not as easily described from the function-theoretic point of view; this technique is especially applicable to problems involving iterated functions, which correspond to paths in functional graphs. Cycle detection, the problem of following a path in a functional graph to find a cycle in it, has applications in cryptography and computational number theory, as part of Pollard's rho algorithm for integer factorization and as a method for finding collisions in cryptographic hash functions. In these applications, ƒ is expected to behave randomly; Flajolet and Odlyzko study the graph-theoretic properties of the functional graphs arising from randomly chosen mappings. 
In particular, a form of the birthday paradox implies that, in a random functional graph with n vertices, the path starting from a randomly selected vertex will typically loop back on itself to form a cycle within O(√n) steps. Konyagin et al. have made analytical and computational progress on the statistics of these graphs. Martin, Odlyzko, and Wolfram investigate pseudoforests that model the dynamics of cellular automata. These functional graphs, which they call state transition diagrams, have one vertex for each possible configuration that the ensemble of cells of the automaton can be in, and an edge connecting each configuration to the configuration that follows it according to the automaton's rule. One can infer properties of the automaton from the structure of these diagrams, such as the number of components, length of limiting cycles, depth of the trees connecting non-limiting states to these cycles, or symmetries of the diagram. For instance, any vertex with no incoming edge corresponds to a Garden of Eden pattern and a vertex with a self-loop corresponds to a still life pattern. Another early application of functional graphs is in the trains used to study Steiner triple systems. The train of a triple system is a functional graph having a vertex for each possible triple of symbols; each triple pqr is mapped by ƒ to stu, where pqs, prt, and qru are the triples that belong to the triple system and contain the pairs pq, pr, and qr respectively. Trains have been shown to be a powerful invariant of triple systems although somewhat cumbersome to compute. Bicircular matroid A matroid is a mathematical structure in which certain sets of elements are defined to be independent, in such a way that the independent sets satisfy properties modeled after the properties of linear independence in a vector space. One of the standard examples of a matroid is the graphic matroid in which the independent sets are the sets of edges in forests of a graph; the matroid structure of forests is important in algorithms for computing the minimum spanning tree of the graph. Analogously, we may define matroids from pseudoforests. For any graph G = (V,E), we may define a matroid on the edges of G, in which a set of edges is independent if and only if it forms a pseudoforest; this matroid is known as the bicircular matroid (or bicycle matroid) of G. The smallest dependent sets for this matroid are the minimal connected subgraphs of G that have more than one cycle, and these subgraphs are sometimes called bicycles. There are three possible types of bicycle: a theta graph has two vertices that are connected by three internally disjoint paths, a figure 8 graph consists of two cycles sharing a single vertex, and a handcuff graph is formed by two disjoint cycles connected by a path. A graph is a pseudoforest if and only if it does not contain a bicycle as a subgraph. Forbidden minors Forming a minor of a pseudoforest by contracting some of its edges and deleting others produces another pseudoforest. Therefore, the family of pseudoforests is closed under minors, and the Robertson–Seymour theorem implies that pseudoforests can be characterized in terms of a finite set of forbidden minors, analogously to Wagner's theorem characterizing the planar graphs as the graphs having neither the complete graph K5 nor the complete bipartite graph K3,3 as minors. 
As discussed above, any non-pseudoforest graph contains as a subgraph a handcuff, figure 8, or theta graph; any handcuff or figure 8 graph may be contracted to form a butterfly graph (five-vertex figure 8), and any theta graph may be contracted to form a diamond graph (four-vertex theta graph), so any non-pseudoforest contains either a butterfly or a diamond as a minor, and these are the only minor-minimal non-pseudoforest graphs. Thus, a graph is a pseudoforest if and only if it does not have the butterfly or the diamond as a minor. If one forbids only the diamond but not the butterfly, the resulting larger graph family consists of the cactus graphs and disjoint unions of multiple cactus graphs. More simply, if multigraphs with self-loops are considered, there is only one forbidden minor, a vertex with two loops. Algorithms An early algorithmic use of pseudoforests involves the network simplex algorithm and its application to generalized flow problems modeling the conversion between commodities of different types. In these problems, one is given as input a flow network in which the vertices model each commodity and the edges model allowable conversions between one commodity and another. Each edge is marked with a capacity (how much of a commodity can be converted per unit time), a flow multiplier (the conversion rate between commodities), and a cost (how much loss or, if negative, profit is incurred per unit of conversion). The task is to determine how much of each commodity to convert via each edge of the flow network, in order to minimize cost or maximize profit, while obeying the capacity constraints and not allowing commodities of any type to accumulate unused. This type of problem can be formulated as a linear program, and solved using the simplex algorithm. The intermediate solutions arising from this algorithm, as well as the eventual optimal solution, have a special structure: each edge in the input network is either unused or used to its full capacity, except for a subset of the edges, forming a spanning pseudoforest of the input network, for which the flow amounts may lie between zero and the full capacity. In this application, unicyclic graphs are also sometimes called augmented trees and maximal pseudoforests are also sometimes called augmented forests. The minimum spanning pseudoforest problem involves finding a spanning pseudoforest of minimum weight in a larger edge-weighted graph G. Due to the matroid structure of pseudoforests, minimum-weight maximal pseudoforests may be found by greedy algorithms similar to those for the minimum spanning tree problem. However, Gabow and Tarjan found a more efficient linear-time approach in this case. The pseudoarboricity of a graph G is defined by analogy to the arboricity as the minimum number of pseudoforests into which its edges can be partitioned; equivalently, it is the minimum k such that G is (k,0)-sparse, or the minimum k such that the edges of G can be oriented to form a directed graph with outdegree at most k. Due to the matroid structure of pseudoforests, the pseudoarboricity may be computed in polynomial time. A random bipartite graph with n vertices on each side of its bipartition, and with cn edges chosen independently at random from each of the n² possible pairs of vertices, is a pseudoforest with high probability whenever c is a constant strictly less than one. 
This fact plays a key role in the analysis of cuckoo hashing, a data structure for looking up key-value pairs by looking in one of two hash tables at locations determined from the key: one can form a graph, the "cuckoo graph", whose vertices correspond to hash table locations and whose edges link the two locations at which one of the keys might be found, and the cuckoo hashing algorithm succeeds in finding locations for all of its keys if and only if the cuckoo graph is a pseudoforest. Pseudoforests also play a key role in parallel algorithms for graph coloring and related problems. Notes References External links Matroid theory Graph families Graph theory objects
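The "no more edges than vertices per component" characterization from the definitions above yields a simple linear-time test. A minimal Python sketch (the function name is ours), using union–find to track the vertex and edge counts of each component:

```python
def is_pseudoforest(num_vertices, edges):
    """True iff every connected component has at most one cycle,
    i.e. no more edges than vertices (multi-edges and loops allowed)."""
    parent = list(range(num_vertices))

    def find(v):                       # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    edge_count = [0] * num_vertices    # counts kept at component roots
    vert_count = [1] * num_vertices
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:                   # edge inside a component (or a loop)
            edge_count[ru] += 1
        else:                          # merge two components
            parent[rv] = ru
            edge_count[ru] += edge_count[rv] + 1
            vert_count[ru] += vert_count[rv]
    return all(edge_count[find(v)] <= vert_count[find(v)]
               for v in range(num_vertices))

# A triangle with a pendant edge has one cycle: a pseudoforest (a 1-tree).
assert is_pseudoforest(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
# Two triangles sharing an edge (a diamond) have two cycles: not one.
assert not is_pseudoforest(4, [(0, 1), (1, 2), (2, 0), (1, 3), (3, 0)])
```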
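The correspondence between endofunctions and maximal directed pseudoforests, described under "Graphs of functions", is what cycle-detection algorithms exploit. The sketch below is our illustration (not tied to any particular source); it applies the classic tortoise-and-hare method to the path from a starting point to its component's unique cycle:

```python
def tail_and_cycle(f, x0):
    """Lengths of the tail and the cycle reached from x0 in the
    functional graph of f (Floyd's tortoise and hare)."""
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:            # advance until they meet on the cycle
        tortoise, hare = f(tortoise), f(f(hare))
    mu = 0                             # tail length
    tortoise = x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    lam = 1                            # cycle length
    hare = f(tortoise)
    while tortoise != hare:
        hare, lam = f(hare), lam + 1
    return mu, lam

# x -> x^2 + 1 mod 255 is an endofunction of {0, ..., 254}, hence a
# maximal directed pseudoforest; walk from 3 to its cycle.
print(tail_and_cycle(lambda x: (x * x + 1) % 255, 3))
```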
Pseudoforest
[ "Mathematics" ]
3,939
[ "Graph theory objects", "Graph theory", "Combinatorics", "Mathematical relations", "Matroid theory" ]
13,511,620
https://en.wikipedia.org/wiki/Bicircular%20matroid
In the mathematical subject of matroid theory, the bicircular matroid of a graph G is the matroid B(G) whose points are the edges of G and whose independent sets are the edge sets of pseudoforests of G, that is, the edge sets in which each connected component contains at most one cycle. The bicircular matroid was introduced and has been explored further by a number of authors. It is a special case of the frame matroid of a biased graph. Circuits The circuits, or minimal dependent sets, of this matroid are the bicircular graphs (or bicycles, but that term has other meanings in graph theory); these are connected graphs whose circuit rank is exactly two. There are three distinct types of bicircular graph: The theta graph consists of three paths joining the same two vertices but not intersecting each other. The figure eight graph (or tight handcuff) consists of two cycles having just one common vertex. The loose handcuff (or barbell) consists of two disjoint cycles and a minimal connecting path. All these definitions apply to multigraphs, i.e., they permit multiple edges (edges sharing the same endpoints) and loops (edges whose two endpoints are the same vertex). Flats The closed sets (flats) of the bicircular matroid of a graph G can be described as the forests F of G such that in the induced subgraph of the vertices not in F, every connected component has a cycle. Since the flats of a matroid form a geometric lattice when partially ordered by set inclusion, these forests of G also form a geometric lattice. For the most interesting example, let G° be G with a loop added to every vertex. Then the flats of B(G°) are all the forests of G, spanning or nonspanning. Thus, all forests of a graph form a geometric lattice, the forest lattice of G. As transversal matroids Bicircular matroids can be characterized as the transversal matroids that arise from a family of sets in which each set element belongs to at most two sets. That is, the independent sets of the matroid are the subsets of elements that can be used to form a system of distinct representatives for some or all of the sets. In this description, the elements correspond to the edges of a graph, and there is one set per vertex, the set of edges having that vertex as an endpoint. Minors Unlike transversal matroids in general, bicircular matroids form a minor-closed class; that is, any submatroid or contraction of a bicircular matroid is also a bicircular matroid, as can be seen from their description in terms of biased graphs. Here is a description of deletion and contraction of an edge in terms of the underlying graph: To delete an edge from the matroid, remove it from the graph. The rule for contraction depends on what kind of edge it is. To contract a link (a non-loop) in the matroid, contract it in the graph in the usual way. To contract a loop e at vertex v, delete e and v but not the other edges incident with v; rather, each edge incident with v and another vertex w becomes a loop at w. Any other graph loops at v become matroid loops—to describe this correctly in terms of the graph one needs half-edges and loose edges; see biased graph minors. Characteristic polynomial The characteristic polynomial of the bicircular matroid B(G°) expresses in a simple way the numbers of spanning forests (forests that contain all vertices of G) of each size in G: its coefficients are determined by the numbers fk of k-edge spanning forests in G. 
Vector representation Bicircular matroids, like all other transversal matroids, can be represented by vectors over any infinite field. However, unlike graphic matroids, they are not regular: they cannot be represented by vectors over an arbitrary finite field. The question of the fields over which a bicircular matroid has a vector representation leads to the largely unsolved problem of finding the fields over which a graph has multiplicative gains. References Graph theory Matroid theory
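The transversal description above can be made concrete: an edge set is independent exactly when each chosen edge can be assigned one of its own endpoints, with all assigned endpoints distinct (a system of distinct representatives). A minimal Python sketch (the function names are ours), using the standard augmenting-path matching:

```python
def is_independent_bicircular(edges, subset):
    """Transversal test for B(G): match each edge in the subset to a
    distinct endpoint via Kuhn's augmenting-path method."""
    match = {}                          # vertex -> index into subset

    def assign(i, seen):
        u, v = edges[subset[i]]
        for w in (u, v):                # the two vertex sets containing edge i
            if w not in seen:
                seen.add(w)
                if w not in match or assign(match[w], seen):
                    match[w] = i
                    return True
        return False

    return all(assign(i, set()) for i in range(len(subset)))

triangle = [(0, 1), (1, 2), (2, 0)]
assert is_independent_bicircular(triangle, [0, 1, 2])    # one cycle: independent
theta = [(0, 1), (0, 1), (0, 1)]                         # three parallel edges
assert not is_independent_bicircular(theta, [0, 1, 2])   # a theta graph: dependent
```

The second example is a theta graph (circuit rank two), the smallest kind of bicycle listed under "Circuits", so it is correctly reported as dependent.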
Bicircular matroid
[ "Mathematics" ]
908
[ "Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations", "Matroid theory" ]
13,511,940
https://en.wikipedia.org/wiki/Language%20Technologies%20Institute
The Language Technologies Institute (LTI) is a research institute at Carnegie Mellon University in Pittsburgh, Pennsylvania, United States, and focuses on the area of language technologies. The institute is home to 33 faculty members, and its primary scholarly research is focused on machine translation, speech recognition, speech synthesis, information retrieval, parsing, information extraction, and multimodal machine learning. The institute was established in 1986 as the Center for Machine Translation; in 1996 it began awarding degrees and was renamed the Language Technologies Institute. The institute was founded by Professor Jaime Carbonell, who served as director until his death in February 2020. He was followed by Jamie Callan, and then Carolyn Rosé, as interim directors. In August 2023, Mona Diab became the director of the institute. Academic programs The institute currently offers two Ph.D. programs, four different types of master's degrees and an undergraduate minor. The master's programs each offer a different focus or career target. The Master of Language Technologies (MLT) is a research-focused master's degree in which students take all the same classes as Ph.D. students, and are frequently funded through sponsored research projects. In effect, they work on grants with faculty in the same way Ph.D. students do, and most transition to Ph.D. programs after completion. The MLT serves as a bridging master's degree for students from non-traditional backgrounds or with limited research experience in language technologies. In contrast, the Master of Science in Intelligent Information Systems (MIIS), Master of Computational Data Science (MCDS), and Master of Science in Artificial Intelligence and Innovation (MSAII) focus more heavily on coursework and projects that prepare students for industry jobs. The MIIS and MCDS programs are also targeted at shorter (e.g. 16-month) completion times and require an industry internship during the program. Faculty Notable faculty include Alan W Black (Speech), Louis-Philippe Morency (Multimodal Machine Learning), Scott Fahlman (Knowledge Representation), Justine Cassell (Human-computer interaction), Eric Nyberg (Information Retrieval), Carolyn Rosé (Learning Sciences), Eric Xing, Alex Waibel (Speech Recognition), and Rita Singh (voice forensics). Spinoffs and Affiliated Companies Safaba Translation Systems Co-founded by LTI faculty member Alon Lavie in 2009, Safaba was acquired in 2015 by Amazon and incorporated into the company's Pittsburgh offices. Once incorporated into Amazon's corporate structure, the Safaba team became known as the Amazon Machine Translation R&D Group, and would go on to contribute to the development of Amazon Alexa. CognistX Co-founded by LTI professor and MCDS program director Eric Nyberg, CognistX works on projects in targeted advertising, oil and gas, and psychedelic drug research. See also Human Computer Interaction Institute at Carnegie Mellon University School of Computer Science at Carnegie Mellon University References Schools and departments of Carnegie Mellon Linguistics organizations Computational linguistics Translation organizations
Language Technologies Institute
[ "Technology" ]
625
[ "Natural language and computing", "Computational linguistics" ]
13,512,486
https://en.wikipedia.org/wiki/Biobank
A biobank is a type of biorepository that stores biological samples (usually human) for use in research. Biobanks have become an important resource in medical research, supporting many types of contemporary research like genomics and personalized medicine. Biobanks can give researchers access to data representing a large number of people. Samples in biobanks and the data derived from those samples can often be used by multiple researchers for cross-purpose research studies. For example, many diseases are associated with single-nucleotide polymorphisms. Genome-wide association studies using data from tens or hundreds of thousands of individuals can identify these genetic associations as potential disease biomarkers. Many researchers struggled to acquire sufficient samples prior to the advent of biobanks. Biobanks have provoked questions on privacy, research ethics, and medical ethics. Viewpoints on what constitutes appropriate biobank ethics diverge. However, a consensus has been reached that operating biobanks without establishing carefully considered governing principles and policies could be detrimental to communities that participate in biobank programs. Background The term "biobank" first appeared in the late 1990s and is a broad term that has evolved in recent years. One definition is "an organized collection of human biological material and associated information stored for one or more research purposes." Collections of plant, animal, microbe, and other nonhuman materials may also be described as biobanks but in some discussions the term is reserved for human specimens. Biobanks usually incorporate cryogenic storage facilities for the samples. They may range in size from individual refrigerators to warehouses, and are maintained by institutions such as hospitals, universities, nonprofit organizations, and pharmaceutical companies. Biobanks may be classified by purpose or design. Disease-oriented biobanks usually have a hospital affiliation through which they collect samples representing a variety of diseases, perhaps to look for biomarkers associated with disease. Population-based biobanks need no particular hospital affiliation because they take samples from large numbers of all kinds of people, perhaps to look for biomarkers for disease susceptibility in a general population. Virtual biobanks integrate epidemiological cohorts into a common pool. Virtual biobanks allow for sample collection to meet national regulations. Tissue banks harvest and store human tissues for transplantation and research. As biobanks become more established, it is expected that tissue banks will merge with biobanks. Population banks store biomaterial as well as associated characteristics such as lifestyle, clinical, and environmental data. In 2008, United States researchers stored 270 million specimens in biobanks, and the rate of new sample collection was 20 million per year. These numbers reflect a fundamental worldwide change in the nature of research, from a time when such numbers of samples could not be used to one in which researchers demand them. Collectively, researchers began to move beyond single-center studies toward a qualitatively different, next-generation research infrastructure. 
Some of the challenges raised by the advent of biobanks are ethical, legal, and social issues pertaining to their existence, including the fairness of collecting donations from vulnerable populations, providing informed consent to donors, the logistics of data disclosure to participants, the right to ownership of intellectual property, and the privacy and security of donors who participate. Because of these new problems, researchers and policymakers began to require new systems of research governance. Many researchers have identified biobanking as a key area for infrastructure development in order to promote drug discovery and drug development. Types and applications Human genetics research By the late 1990s, scientists realized that although many diseases are caused at least in part by a genetic component, few diseases originate from a single defective gene; most genetic diseases are caused by multiple genetic factors on multiple genes. Because the strategy of looking only at single genes was ineffective for finding the genetic components of many diseases, and because new technology made the cost of examining a single gene versus doing a genome-wide scan about the same, scientists began collecting much larger amounts of genetic information when any was to be collected at all. At the same time technological advances also made it possible for wide sharing of information, so when data was collected, many scientists doing genetics work found that access to data from genome-wide scans collected for any one reason would actually be useful in many other types of genetic research. Whereas before data usually stayed in one laboratory, now scientists began to store large amounts of genetic data in single places for community use and sharing. An immediate result of doing genome-wide scans and sharing data was the discovery of many single-nucleotide polymorphisms, with an early success being an increase from the roughly 10,000 identified through single-gene scanning before biobanks to 500,000 by 2007, after genome-wide scanning had been in place for some years. A problem remained; this changing practice allowed the collection of genotype data, but it did not simultaneously come with a system to gather the related phenotype data. Whereas genotype data comes from a biological specimen like a blood sample, phenotype data has to come from examining a specimen donor with an interview, physical assessment, review of medical history, or some other process which could be difficult to arrange. Even when this data was available, there were ethical uncertainties about the extent to which and the ways in which patient rights could be preserved by connecting it to genotypic data. The institution of the biobank began to be developed to store genotypic data, associate it with phenotypic data, and make it more widely available to researchers who needed it. Biobanks including genetic testing samples have historically been composed of a majority of samples from individuals of European ancestry. Diversification of biobank samples is needed, and researchers should consider the factors affecting underrepresented populations. Conservation, ecosystem restoration and geoengineering In November 2020 scientists began collecting living fragments, tissue and DNA samples of the endangered corals from the Great Barrier Reef for a precautionary biobank for potential future restoration and rehabilitation activities. 
A few months earlier another Australian team of researchers reported that they had evolved such corals to be more heat-resistant. Biological specimens The specimens stored by a biobank and made available to researchers are taken by sampling. Specimen types include blood, urine, skin cells, organ tissue, and other materials. Increasingly, methods for sampling tissue specimens are becoming more targeted, sometimes involving the use of MRI to determine which specific areas of tissue should be sampled. The biobank keeps these specimens in good condition until a researcher needs them to conduct a test, do an experiment, or perform an analysis. Storage Biobanks, like other DNA databases, must carefully store and document access to samples and donor information. The samples must be maintained reliably with minimal deterioration over time, and they must be protected from physical damage, both accidental and intentional. The registration of each sample entering and exiting the system is centrally stored, usually on a computer-based system that can be backed up frequently. The physical location of each sample is noted to allow the rapid location of specimens. Archival systems de-identify samples to respect the privacy of donors and allow blinding of researchers during analysis. The database, including clinical data, is kept separately with a secure method to link clinical information to tissue samples. Room temperature storage of samples is sometimes used, and was developed in response to perceived disadvantages of low-temperature storage, such as costs and potential for freezer failure. Current systems are small and are capable of storing nearly 40,000 samples in about one tenth of the space required by a freezer. Replicates or split samples are often stored in separate locations for security. Ownership One controversy of large databases of genetic material is the question of ownership of samples. As of 2007, Iceland had three different laws on ownership of the physical samples and the information they contain. Icelandic law holds that the Icelandic government has custodial rights of the physical samples themselves while the donors retain ownership rights. In contrast, Tonga and Estonia give ownership of biobank samples to the government, but their laws include strong protections of donor rights. Ethics The key event in biobanking occurs when a researcher wants to collect a human specimen for research. When this happens, issues that arise include the following: right to privacy for research participants, ownership of the specimen and its derived data, the extent to which the donor can share in the return of the research results, and the extent to which a donor is able to consent to be in a research study. With respect to consent, the main issue is that biobanks usually collect samples and data for multiple future research purposes and it is not feasible to obtain specific consent for all possible future research. It has been discussed that one-off consent or a broad consent for various research purposes may not satisfy ethical and legal requirements. Dynamic consent is an approach to consent that may be better suited to biobanking, because it enables ongoing engagement and communication between the researchers and sample/data donors over time. Governance There is no internationally accepted set of governance guidelines that are designed to work with biobanks.
Biobanks typically try to adapt to the broader recommendations that are internationally accepted for human subject research and change guidelines as they become updated. For many types of research and particularly medical research, oversight comes at the local level from an institutional review board. Institutional review boards typically enforce standards set by their country's government. To different extents, the law used by different countries is often modeled on biobank governance recommendations that have been internationally proposed. Key organizations Some examples of organizations that participated in creating written biobanking guidelines are the following: World Medical Association, Council for International Organizations of Medical Sciences, Council of Europe, Human Genome Organisation, World Health Organization, and UNESCO. The International Society for Biological and Environmental Repositories (ISBER) is a global biobanking organization which creates opportunities for networking, education, and innovations and harmonizes approaches to evolving challenges in biological and environmental repositories. ISBER connects repositories globally through best practices. The ISBER Best Practices, Fourth Edition was launched on January 31, 2018, with an LN2 addendum launched in early May 2019. History In 1998, the Icelandic Parliament passed the Act on Health Sector Database. This act allowed for the creation of a national biobank in that country. In 1999, the United States National Bioethics Advisory Commission issued a report containing policy recommendations about handling human biological specimens. In 2005, the United States National Cancer Institute founded the Office of Biorepositories and Biospecimen Research so that it could have a division to establish a common database and standard operating procedures for its partner organizations with biospecimen collections. In 2006, the Council of the European Union adopted a policy on human biological specimens, which was novel for discussing issues unique to biobanks. Economics Researchers have called for a greater critical examination of the economic aspects of biobanks, particularly those facilitated by the state. National biobanks are often funded by public/private partnerships, with finance provided by any combination of national research councils, medical charities, pharmaceutical company investment, and biotech venture capital. In this way, national biobanks enable an economic relationship mediated between states, national populations, and commercial entities. It has been illustrated that there is a strong commercial incentive underlying the systematic collection of tissue material. This can be seen particularly in the field of genomic research, where population-scale studies lend themselves more easily to diagnostic technologies than to basic etiological studies. Considering the potential for substantial profit, researchers Mitchell and Waldby argue that because biobanks enroll large numbers of the national population as productive participants, who allow their bodies and prospective medical histories to create a resource with commercial potential, their contribution should be seen as a form of "clinical labor" and therefore participants should also benefit economically. Legal cases There have been cases when the ownership of stored human specimens has been disputed and taken to court. Some cases include: Moore v. Regents of the University of California Greenberg v.
Miami Children's Hospital Research Institute See also Biorepository Biological database Gene bank Genetic fingerprinting Genomics Genotype References Further reading ISO 20387:2018 "Biotechnology — Biobanking — General requirements for biobanking". ISO/TR 22758:2020 "Biotechnology — Biobanking — Implementation guide for ISO 20387". External links Specimen Central biorepository list, a worldwide listing of active biobanks and biorepositories Harvested Organs Revolutionize Medicine - 2007 PBS/Wired Science report 8 minute biobank video made by Genetic Alliance International Society for Biological and Environmental Repositories (ISBER) Applied genetics Biological databases
Biobank
[ "Biology" ]
2,583
[ "Bioinformatics", "Biobanks", "Biological databases" ]
13,512,537
https://en.wikipedia.org/wiki/Itautec
Itautec is a Brazilian electronics company founded in 1979. It is part of Itaúsa, a Brazilian business group. Itautec is an ATM, kiosk, and computer manufacturer in the Brazilian and South American markets. The company also has a key role in project deployment and IT services around the globe. It mainly focuses on making consumer electronics and banking and retail automation equipment. The company has a large base of ATMs globally. Itautec is headquartered in São Paulo. The company operates a manufacturing plant in the city of Jundiaí (SP) and has 5,709 direct employees – 5,285 in Brazil and 424 abroad. Product lines The company's product lines include: Personal Computers: Desktop, tablet, and laptop personal computers Monitors: LCD, LED, OLED, and touchscreen monitors Commercial and banking automation Software: Point of sale, credit card processing, an in-house Linux distribution called Librix, terminal management, digital signatures, and banking correspondence, among others Services and Integration: Technical support, infrastructure, security, phone support, servers, and networks Components: Printed circuit boards, memory boards, and integrated circuits History In 1980, Itautec established its first online presence as GRI Gerenciador de Redes Itautec, known as the "Itautec Network Services Provider," along with Banktec mainframes. The following year, in 1981, the central agency of Itaú was founded, featuring an automation system developed by Itautec. By 1982, the Bank of Brazil had implemented GRI and Banktec systems. Itautec introduced the PC/XT microcomputer in 1985, entering the personal computing market. In 1986, Itautec installed its first compact automated teller machine (ATM). In 1989, Itautec introduced GRIP (Gerenciamento de Redes Itautec para PC), a network management system designed for PCs. In 1990, Itautec released its first notebook computer, the IS 386 Note. In 1994, the company launched a second-generation ATM in Brazil, followed by the introduction of the first version of Banktec Multicanal in Banco Itaú Argentina in 1995. By 2001, Itautec began exporting ATMs to the United States and Europe. In 2002, the company acquired technology from NMD for DelaRue and installed the first WEB system in Banco Itaú Buen Ayre. In 2009, Itautec ranked 24th in the Fintech ranking, which lists the world's largest IT providers. In 2011, Itautec introduced the world's first touchless 3D ATM.
Itautec
[ "Technology" ]
644
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
13,512,792
https://en.wikipedia.org/wiki/Gamma%20Piscium
Gamma Piscium (γ Piscium) is a star approximately 135 light years away from Earth in the zodiac constellation of Pisces. It is a yellow giant star with a spectral type of G8 III and a surface temperature of 4,833 K. It is somewhat cooler than the Sun, yet it is 11 solar radii in size and shines with the light of 63 Suns. The star is a member of the red clump, which means it is undergoing core helium fusion. At an apparent magnitude of 3.7, it is the second-brightest star in the constellation Pisces, ranking between Eta Piscium and Alpha Piscium. Gamma Piscium moves across the sky at three-quarters of an arcsecond per year, which at its distance of 135 light years corresponds to a speed of about 153 kilometers per second relative to the Sun. This suggests it is a visitor from another part of the Milky Way Galaxy; in astronomical terms, it will quickly leave the vicinity of the Sun. Its metallicity is only one-fourth that of the Sun, and visitors from outside the Milky Way's thin disk tend to be metal-poor. Gamma Piscium is part of the asterism known as the "Circlet of Pisces". Naming In Chinese, (), meaning Thunderbolt, refers to an asterism consisting of γ Piscium, β Piscium, θ Piscium, ι Piscium and ω Piscium. Consequently, the Chinese name for γ Piscium itself is (, .) Planetary system In 2021, a gas giant planet was detected by the radial velocity method. References G-type giants Planetary systems with one confirmed planet Pisces (constellation) Piscium, Gamma 8852 BD+02 4648 Piscium, 006 219615 114971 J23170996+0316563
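A quick back-of-envelope check on the proper-motion figure above uses the standard tangential-velocity relation; the 41.4 pc distance is just the quoted 135 light years converted at 3.26 light years per parsec:

```latex
v_t = 4.74\,\mu\,d
    = 4.74 \times 0.75\ \tfrac{''}{\mathrm{yr}} \times 41.4\ \mathrm{pc}
    \approx 147\ \mathrm{km\,s^{-1}}
```

Here μ is the proper motion in arcseconds per year and d the distance in parsecs. The transverse component alone thus comes out near 150 km/s; combining it with the star's radial motion plausibly accounts for the roughly 153 km/s total quoted above.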
Gamma Piscium
[ "Astronomy" ]
387
[ "Pisces (constellation)", "Constellations" ]
13,512,823
https://en.wikipedia.org/wiki/Functional%20psychology
Functional psychology or functionalism refers to a psychological school of thought that was a direct outgrowth of Darwinian thinking which focuses attention on the utility and purpose of behavior that has been modified over years of human existence. Edward L. Thorndike, best known for his experiments with trial-and-error learning, came to be known as the leader of the loosely defined movement. This movement arose in the U.S. in the late 19th century in direct contrast to Edward Titchener's structuralism, which focused on the contents of consciousness rather than the motives and ideals of human behavior. Functionalism denies the principle of introspection, which tends to investigate the inner workings of human thinking rather than understanding the biological processes of the human consciousness. While functionalism eventually became its own formal school, it built on structuralism's concern for the anatomy of the mind and led to greater concern over the functions of the mind and later to the psychological approach of behaviorism. History Functionalism opposed the prevailing structuralism of psychology of the late 19th century. Edward Titchener, the main structuralist, gave psychology its first definition as a science of the study of mental experience, of consciousness, to be studied by trained introspection. At the start of the twentieth century, there was a discrepancy between psychologists who were interested in the analysis of the structures of the mind and those who turned their attention to studying the function of mental processes. This resulted in a battle of structuralism versus functionalism. The main goal of structuralism was to study human consciousness within the confines of actual lived experience, an aim that threatened to make studying the human mind impossible; functionalism stands in stark contrast to this. Structural psychology was concerned with mental contents, while functional psychology was concerned with mental operations. It is argued that structural psychology emanated from philosophy and remained closely allied to it, while functionalism has a close ally in biology. William James is considered to be the founder of functional psychology, but he would not have considered himself a functionalist, nor did he truly like the way science divided itself into schools. John Dewey, George Herbert Mead, Harvey A. Carr, and especially James Rowland Angell were the main proponents of functionalism at the University of Chicago. Another group at Columbia, including notably James McKeen Cattell, Edward L. Thorndike, and Robert S. Woodworth, were also considered functionalists and shared some of the opinions of Chicago's professors. Egon Brunswik represents a more recent, but Continental, version. The functionalists retained an emphasis on conscious experience. Behaviourists also rejected the method of introspection but criticized functionalism because it was not based on controlled experiments and its theories provided little predictive ability. B.F. Skinner was a developer of behaviourism. He did not think that considering how the mind affects behaviour was worthwhile, for he considered behaviour simply as a learned response to an external stimulus. Yet, such behaviourist concepts tend to deny the human capacity for random, unpredictable, sentient decision-making, further blocking the functionalist concept that human behaviour is an active process driven by the individual.
Perhaps a combination of both the functionalist and behaviourist perspectives provides scientists with the most empirical value, but, even so, it remains philosophically (and physiologically) difficult to integrate the two concepts without raising further questions about human behaviour. For instance, consider the interrelationship between three elements: the human environment, the human autonomic nervous system (our fight or flight muscle responses), and the human somatic nervous system (our voluntary muscle control). The behaviourist perspective explains a mixture of both types of muscle behaviour, whereas the functionalist perspective resides mostly in the somatic nervous system. It can be argued that all behavioural origins begin within the nervous system, prompting all scientists of human behaviour to possess a basic physiological understanding, something very well understood by the functionalist founder William James. The main problems with structuralism were the elements and their attributes, their modes of composition, structural characteristics, and the role of attention. Because of these problems, many psychologists began to shift their attention from mental states to mental processes. This change of thought was preceded by a change in the whole conception of what psychology is. Three ideas ushered functional psychology into modern-day psychology. First, drawing on Darwinian thinking, the mind was considered to perform diverse biological functions of its own, able to evolve and adapt to varying circumstances. Secondly, the physiological functioning of the organism results in the development of consciousness. Lastly, there was the promise that functional psychology would contribute to the improvement of education, mental hygiene, and the treatment of abnormal states. Notable people James Angell James Angell was a proponent of the struggle for the emergence of functional psychology. He argued that the mental elements identified by the structuralists were temporary and only existed at the moment of sensory perception. During his American Psychological Association presidential address, Angell laid out three major ideas regarding functionalism. The first of his ideas was that functional psychology is focused on mental operations and their relationship with biology, and that these mental operations were a way of dealing with the conditions of the environment. Second, mental operations contribute to the relationship between an organism's needs and the environment in which it lives. Its mental functions aid in the survival of the organism in unfamiliar situations. Lastly, functionalism does not abide by the rules of dualism because it is the study of how mental functions relate to behavior. Mary Calkins Mary Calkins attempted to make strides in reconciling structural and functional psychology during her APA presidential address. It was a goal of Calkins's for her school of self-psychology to be a place where functionalism and structuralism could unite under common ground. John Dewey John Dewey, an American psychologist and philosopher, became the organizing principle behind the Chicago school of functional psychology in 1894. His first important contribution to the development of functional psychology was a paper criticizing "the reflex arc" concept in psychology. Hermann Ebbinghaus Hermann Ebbinghaus's study on memory was a monumental moment in psychology. He was influenced by Fechner's work on perception and by his Elements of Psychophysics.
He used himself as a subject when he set out to prove that some higher mental processes could be experimentally investigated. His experiment was hailed as an important contribution to psychology by Wundt. William James James was the first American psychologist and wrote the first general textbook regarding psychology. In this approach he reasoned that the mental act of consciousness must be an important biological function. He also noted that it was a psychologist's job to understand these functions so that they can discover how mental processes operate. This idea was an alternative approach to structuralism, which was the first paradigm in psychology (Gordon, 1995). In opposition to Titchener's idea that the mind was simple, William James argued that the mind should be a dynamic concept. James's main contribution to functionalism was his theory of the subconscious. He said there were three ways in which the subconscious may be related to the conscious. First, the subconscious is identical in nature with states of consciousness. Second, it is the same as the conscious but impersonal. Lastly, he said that the subconscious is a simple brain state but with no mental counterpart. According to An Illustrated History of American Psychology, James was the most influential pioneer. In 1890, he argued that psychology should be a division of biology and adaptation should be an area of focus. His main theories that contributed to the development of functional psychology were his ideas about the role of consciousness, the effects of emotions, and the usefulness of instincts and habits. Joseph Jastrow In 1901, Joseph Jastrow declared that functional psychology appeared to welcome the other areas of psychology that were neglected by structuralism. In 1905, a wave of acceptance was evident, as there had been widespread acceptance of functionalism over the structural view of psychology. Edward Titchener Edward Titchener made arguments that structural psychology preceded functional psychology because mental structures need to be isolated and understood before their function can be ascertained. Despite the growing enthusiasm for functional psychology, Titchener was wary of it and urged other psychologists to avoid its appeal and to continue to embrace rigorous introspective experimental psychology. James Ward James Ward was a pioneer of functional psychology in Britain. Once a minister, after experiencing turmoil in his spiritual life he turned to psychology, but not without an attempt at physiology. He eventually settled on philosophy. He later made attempts at establishing a psychological laboratory. Ward believed perception is not passive reception of sensation, but an active grasping of the environment. Ward's presence influenced the adoption of the functionalist view in British psychology and later served as the turning point for the development of cognitive psychology. Wilhelm Wundt Later in his life, Dewey neglected to mention Wilhelm Wundt, a German philosopher and psychologist, as an influence on his functional psychology. In fact, Dewey gave all credit to James. At the time it did not seem worthwhile to bring up old theories from a German philosopher who only held a temporary spotlight and whose reputation went into decline in America in the early twentieth century. Wundt's major contribution to functional psychology came when he made will into a structural concept.
Though the claim is controversial, according to Titchener's definition of structuralism Wundt was actually more of a structuralist than a functionalist. Despite this, it is possibly one of the greatest ironies in the history of psychology that Wundt should be deemed responsible for major contributions to functionalism, since his work sparked several functionalist rebellions. Contemporary descendants Evolutionary psychology is based on the idea that knowledge concerning the function of the psychological phenomena affecting human evolution is necessary for a complete understanding of the human psyche. Even the project of studying the evolutionary functions of consciousness is now an active topic of study. Like evolutionary psychology, James's functionalism was inspired by Charles Darwin's theory of natural selection. Functionalism was the basis of development for several subtypes of psychology including child and developmental psychology, clinical psychology, psychometrics, and industrial/vocational psychology. Functionalism eventually dropped out of popular favor and was replaced by the next dominant paradigm, behaviourism. See also Functionalism (philosophy of mind) References External links "functionalism" – Encyclopædia Britannica Online Mary Calkins (1906) "A Reconciliation Between Structural And Functional Psychology" James R. Angell (1907) "The Province of Functional Psychology" James R. Angell (1906), Psychology: An Introductory Study of the Structure and Function of Human Consciousness Behaviorism History of psychology William James Psychological theories Consciousness
Functional psychology
[ "Biology" ]
2,162
[ "Behavior", "Behaviorism" ]
13,513,277
https://en.wikipedia.org/wiki/Natural%20oil%20polyols
Natural oil polyols, also known as NOPs or biopolyols, are polyols derived from vegetable oils by several different techniques. The primary use for these materials is in the production of polyurethanes. Most NOPs qualify as biobased products, as defined by the United States Secretary of Agriculture in the Farm Security and Rural Investment Act of 2002. NOPs all have similar sources and applications, but the materials themselves can be quite different, depending on how they are made. All are clear liquids, ranging from colorless to medium yellow. Their viscosity is also variable and is usually a function of the molecular weight and the average number of hydroxyl groups per molecule (higher molecular weight and higher hydroxyl content both giving higher viscosity). Odor is a significant property which differs from NOP to NOP. Most NOPs are still quite similar chemically to their parent vegetable oils and as such are prone to becoming rancid. This involves autoxidation of fatty acid chains containing carbon-carbon double bonds and ultimately the formation of odoriferous, low molecular weight aldehydes, ketones and carboxylic acids. Odor is undesirable in the NOPs themselves, but more importantly, in the materials made from them. There are a limited number of naturally occurring vegetable oils (triglycerides) which contain the unreacted hydroxyl groups that account for both the name and important reactivity of these polyols. Castor oil is the only commercially available natural oil polyol that is produced directly from a plant source: all other NOPs require chemical modification of the oils directly available from plants. The hope is that using renewable resources as feedstocks for chemical processes will reduce the environmental footprint by reducing the demand on non-renewable fossil fuels currently used in the chemical industry and reduce the overall production of carbon dioxide, the most notable greenhouse gas. One NOP producer, Cargill, estimates that its BiOH(TM) polyol manufacturing process produces 36% less global warming emissions (carbon dioxide), a 61% reduction in non-renewable energy use (burning fossil fuels), and a 23% reduction in the total energy demand, all relative to polyols produced from petrochemicals. Sources of natural oil polyols Ninety percent of the fatty acids that make up castor oil are ricinoleic acid, which has a hydroxyl group on C-12 and a carbon-carbon double bond. The major component of castor oil is the triester of ricinoleic acid and glycerin. Other vegetable oils - such as soybean oil, peanut oil, and canola oil - contain carbon-carbon double bonds, but no hydroxyl groups. There are several processes used to introduce hydroxyl groups onto the carbon chain of the fatty acids, and most of these involve oxidation of the C-C double bond. Treatment of these vegetable oils with ozone cleaves the double bonds, and esters or alcohols can be made, depending on the conditions used to process the ozonolysis product. An example is the reaction of triolein with ozone and ethylene glycol. Air oxidation (autoxidation), the chemistry involved in the "drying" of drying oils, gives increased molecular weight and introduces hydroxyl groups. The radical reactions involved in autoxidation can produce a complex mixture of crosslinked and oxidized triglycerides. Treatment of vegetable oils with peroxy acids gives epoxides which can be reacted with nucleophiles to give hydroxyl groups. This can be done as a one-step process.
In such reaction schemes only one of the three fatty acid chains is typically drawn in full; the rest of the molecule is represented by "R1" and the nucleophile is unspecified. Earlier examples also include acid catalyzed ring opening of epoxidized soybean oil to make oleochemical polyols for polyurethane foams and acid catalyzed ring opening of soy fatty acid methyl esters with multifunctional polyols to form new polyols for casting resins. Triglycerides of unsaturated (containing carbon-carbon double bonds) fatty acids or methyl esters of these acids can be treated with carbon monoxide and hydrogen in the presence of a metal catalyst to add -CHO (formyl) groups to the chain (the hydroformylation reaction), followed by hydrogenation to give the needed hydroxyl groups. In this case R1 can be the rest of the triglyceride, or a smaller group such as methyl (in which case the substrate would be similar to biodiesel). If R1=Me then additional reactions like transesterification are needed to build up a polyol. Uses Castor oil has found numerous applications, many of them due to the presence of the hydroxyl group that allows chemical derivatization of the oil or modifies the properties of castor oil relative to vegetable oils which do not have the hydroxyl group. Castor oil undergoes most of the reactions that alcohols do, but the most industrially important one is reaction with diisocyanates to make polyurethanes. Castor oil by itself has been used in making a variety of polyurethane products, ranging from coatings to foams, and the use of castor oil derivatives continues to be an area of active development. Castor oil derivatized with propylene oxide makes polyurethane foam for mattresses, and yet another new derivative is used in coatings. Apart from castor oil, which is a relatively expensive vegetable oil and is not produced domestically in many industrialized countries, the use of polyols derived from vegetable oils to make polyurethane products began attracting attention around 2004. The rising costs of petrochemical feedstocks and an enhanced public desire for environmentally friendly green products have created a demand for these materials. One of the most vocal supporters of these polyurethanes made using natural oil polyols is the Ford Motor Company, which debuted polyurethane foam made using soy oil in the seats of its 2008 Ford Mustang. Ford has since placed soy foam seating in all its North American vehicle platforms. The interest of automakers is responsible for much of the work being done on the use of NOPs in polyurethane products for use in cars, for example in seats, headrests, armrests, soundproofing, and even body panels. One of the first uses for NOPs (other than castor oil) was to make spray-on polyurethane foam insulation for buildings. NOPs are also finding use in polyurethane slab foam used to make conventional mattresses as well as memory foam mattresses. The characteristics of NOPs can be varied over a very wide range. This can be done by selection of the base natural oil (or oils) used to make up the NOP. Also, using known and increasingly novel (Garrett & Du) chemical techniques, it is possible to graft additional groups onto the triglyceride chains of the NOP and change its processing characteristics; this in turn will modify, in a controlled manner, the physical properties of the final article which the NOP is being used to produce.
Differences and modifications in the process regime and reaction conditions used to make a given NOP also generally lead to different chemical architectures and therefore different end-use performance of that NOP, so that even though two NOPs may have been made from the same natural oil, they may behave surprisingly differently in use and will produce detectably different end products too. Commercially (since 2012), NOPs are available made from sawgrass oil, soybean oil, castor oil (as a grafted NOP), rapeseed oil, palm oil (kernel and mesocarp), and coconut oil. There is also some work being done on NOPs made from natural animal oils. Since early 2010, initially in the US, it has been routinely possible to replace over 50% of petrochemical-based polyols with NOPs for use in slab foams sold into the mass market, furniture, and bedding industries. The commercialised technology also eliminates or greatly reduces the odor problem, mentioned above, normally associated with the use of NOPs. This is particularly important when the NOP is to be used at ever higher percentage levels, to try to reduce dependency on petrochemical materials, and to produce materials for use in the domestic and contract furniture segments, which are historically very sensitive to "chemical" odors in the final foam product in people's homes and places of work. Amongst other useful effects of using high levels of natural oil polyols to make foams are the improvements seen in the long-term performance of the foam under humid conditions and also in the flammability of the foams, compared to equivalent foams made without the NOP. People perspire, and so foams used in the construction of mattresses or furniture will, over time, tend to feel softer and give less support. The perspiration gradually softens the foam. Foams made with high levels of NOPs are much less prone to this problem, so that the useful lifetime of the upholstered product can be extended. The use of high levels of NOPs also makes it possible to manufacture foams with flame retardants which are permanent, and therefore are not later emitted into the household or workplace environment. These relatively recently developed materials can be added at very low levels to NOP foams to pass tests such as California Technical Bulletin 117, a well-known flammability test for furniture. These permanent flame retardants are halogen-free and key into the foam matrix and are therefore fixed there. An additional effect of using these new, highly efficient, permanent flame retardants is that the smoke seen during these standard fire tests may be considerably reduced compared to that produced when testing foams made using non-permanent flame retardant materials, which do not key themselves into the foam structure. More recent work during 2014 with this "green chemistry" has shown that foams containing about 50 percent by weight of natural oils can be made which produce far less smoke when involved in fire situations. The ability of these low-emission foams to reduce smoke emissions by up to 80% is an interesting property which will aid escape from fire situations and also lessen the risks for first responders, i.e. emergency services in general and fire department personnel in particular. Other technology can be combined with these flammability characteristics to give foams which have extremely low overall emissions of volatile organic compounds (VOCs).
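Because NOPs are characterized in practice by their hydroxyl number (milligrams of KOH per gram of polyol), pairing an NOP with a diisocyanate comes down to standard stoichiometric bookkeeping. The sketch below illustrates that calculation; the 56,100 factor is the molar mass of KOH in mg/mol underlying the hydroxyl-number convention, while the specific hydroxyl number, isocyanate equivalent weight, and index used in the example are illustrative assumptions, not values taken from the text above.

```python
# Minimal sketch of polyurethane stoichiometry for a natural oil polyol (NOP).
# Standard relation: equivalents of OH per gram of polyol = hydroxyl number / 56100.

KOH_MG_PER_MOL = 56_100  # mg of KOH per mole; basis of the hydroxyl number

def isocyanate_parts(oh_number: float, nco_equiv_weight: float,
                     index: float = 1.05, polyol_parts: float = 100.0) -> float:
    """Parts by weight of isocyanate needed per `polyol_parts` of polyol.

    oh_number        -- hydroxyl number of the polyol, mg KOH/g
    nco_equiv_weight -- grams of isocyanate per mole of NCO groups
    index            -- isocyanate index (1.0 = exact stoichiometry)
    """
    oh_equivalents = polyol_parts * oh_number / KOH_MG_PER_MOL
    return oh_equivalents * nco_equiv_weight * index

# Illustrative values only: a castor-oil-like polyol (OH number ~160 mg KOH/g)
# paired with an MDI-type isocyanate (NCO equivalent weight ~125 g/eq).
print(round(isocyanate_parts(160.0, 125.0), 1),
      "parts isocyanate per 100 parts polyol")   # -> 37.4
```

The same arithmetic explains the viscosity remark earlier in the article: a polyol with more hydroxyl groups per gram demands proportionally more isocyanate and builds a more tightly crosslinked network.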
Natural oil polyols
[ "Chemistry", "Engineering", "Environmental_science" ]
2,238
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "nan" ]
13,513,932
https://en.wikipedia.org/wiki/Omega%20Piscium
Omega Piscium (Omega Psc, ω Piscium, ω Psc) is a star approximately 106 light years away from Earth, in the constellation Pisces. It has a spectral type of F4IV, indicating it is a subgiant or dwarf star, and it has a temperature of 6,600 kelvins. It may or may not be a close binary star system. Variations in its spectrum were once interpreted as giving it an orbital period of 2.16 days, but this claim was later shown to be false. It is 20 times brighter than the Sun and is 1.8 times greater in mass, if it is a single star. In classic and modern renderings it is drawn as the start of the tail, east of the Circlet of Pisces, a near-circle which forms all but the tail (that is, the head and body) of the western (fatter) "fish" in this constellation of two fishes. Right ascension Considering stars with Flamsteed numbers, Greek letters, and proper names, Omega Piscium at J2000 (namely in the year 2000) was the named star with the highest right ascension (akin to terrestrial longitude). Due to the 26,000-year movement of the Earth's axis tracing an imperfect circle (axial precession), the star's right ascension has since increased to just beyond 0 hours, which it reached in J2013. At the cusp of sunrise on the March equinox in the present era the Circlet, the westernmost part of the asterism, appears just above the rising Sun; the easternmost parts can be most easily seen after sunset, just above the Sun, given a maximal horizon such as the sea. A month later the progress of the Earth around the plane of the ecliptic (its orbit) by a mean 2 hours of Right Ascension (30° of orbit) means that the Sun rises and sets in an outer part of Aries bordering Cetus. Naming ω Piscium is the star's Bayer designation. In the catalogue of stars in the Calendarium of Al Achsasi al Mouakket, this star was designated Dzaneb al Samkat, which was translated into Latin as Cauda Piscis, meaning the tail of the fish. In Chinese, (), meaning Thunderbolt, refers to an asterism consisting of ω Piscium, β Piscium, γ Piscium, θ Piscium and ι Piscium. Consequently, the Chinese name for ω Piscium itself is (, .) References F-type subgiants Pisces (constellation) Piscium, Omega Durchmusterung objects Piscium, 028 224617 118268 9072
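The right-ascension arithmetic above follows from 24 hours of RA spanning 360°, so 1 hour corresponds to 15° and 2 hours to 30°; the precession rate likewise follows from the quoted 26,000-year period. Taking the star's J2000 right ascension to be roughly 23h 59.3m is an assumption made here for the check (the value is not stated in the text):

```latex
\frac{360^\circ}{26{,}000\ \mathrm{yr}} \approx 50''\,\mathrm{yr^{-1}}
\approx \frac{50''}{15''\ \text{per s of RA}} \approx 3.3\ \text{s of RA per year}
```

At roughly 3.3 seconds of RA per year, a star sitting about 40 seconds of RA short of 0 hours at J2000 crosses 0 hours after about 12 to 13 years, consistent with the J2013 date given above.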
Omega Piscium
[ "Astronomy" ]
578
[ "Pisces (constellation)", "Constellations" ]
13,514,617
https://en.wikipedia.org/wiki/Xylose%20metabolism
D-Xylose is a five-carbon aldose (pentose, monosaccharide) that can be catabolized or metabolized into useful products by a variety of organisms. There are at least four different pathways for the catabolism of D-xylose: an oxido-reductase pathway is present in eukaryotic microorganisms; prokaryotes typically use an isomerase pathway; and two oxidative pathways, called the Weimberg and Dahms pathways respectively, are also present in prokaryotic microorganisms. Pathways The oxido-reductase pathway This pathway is also called the "Xylose Reductase-Xylitol Dehydrogenase" or XR-XDH pathway. Xylose reductase (XR) and xylitol dehydrogenase (XDH) are the first two enzymes in this pathway. XR reduces D-xylose to xylitol using NADH or NADPH. Xylitol is then oxidized to D-xylulose by XDH, using the cofactor NAD+. In the last step D-xylulose is phosphorylated by an ATP-utilising kinase, xylulokinase (XK), to give D-xylulose-5-phosphate, which is an intermediate of the pentose phosphate pathway. The isomerase pathway In this pathway the enzyme xylose isomerase converts D-xylose directly into D-xylulose. D-xylulose is then phosphorylated to D-xylulose-5-phosphate as in the oxido-reductase pathway. At equilibrium, the isomerase reaction results in a mixture of 83% D-xylose and 17% D-xylulose because the conversion of xylose to xylulose is energetically unfavorable. Weimberg pathway The Weimberg pathway is an oxidative pathway in which D-xylose is oxidized to D-xylono-lactone by a D-xylose dehydrogenase, followed by a lactonase that hydrolyzes the lactone to D-xylonic acid. A xylonate dehydratase splits off a water molecule, resulting in 2-keto-3-deoxy-xylonate. A 2-keto-3-deoxy-D-xylonate dehydratase then forms α-ketoglutarate semialdehyde. This is subsequently oxidised by α-ketoglutarate semialdehyde dehydrogenase to yield 2-ketoglutarate, which serves as a key intermediate in the citric acid cycle. Dahms pathway The Dahms pathway starts like the Weimberg pathway, but the 2-keto-3-deoxy-xylonate is split by an aldolase to pyruvate and glycolaldehyde. Biotechnological applications It is desirable to ferment D-xylose to ethanol. This can be accomplished either by native xylose-fermenting yeasts such as Scheffersomyces stipitis (formerly Pichia stipitis) or by metabolically engineered strains of Saccharomyces cerevisiae. Pichia stipitis is not as ethanol-tolerant as the traditional ethanol-producing yeast Saccharomyces cerevisiae. S. cerevisiae, on the other hand, cannot ferment D-xylose to ethanol. In attempts to generate S. cerevisiae strains able to ferment D-xylose, the XYL1 and XYL2 genes of P. stipitis, coding for D-xylose reductase (XR) and xylitol dehydrogenase (XDH) respectively, were introduced into S. cerevisiae by means of genetic engineering. XR catalyzes the formation of xylitol from D-xylose and XDH the formation of D-xylulose from xylitol. Saccharomyces cerevisiae can naturally ferment D-xylulose through the pentose phosphate pathway. In another approach, bacterial xylose isomerases have been introduced into S. cerevisiae. This enzyme catalyzes the direct formation of D-xylulose from D-xylose. Many attempts at expressing bacterial isomerases were not successful due to misfolding or other problems, but a xylose isomerase from the anaerobic fungus Piromyces sp. has proven effective. One advantage claimed for S. cerevisiae engineered with the xylose isomerase is that the resulting cells can grow anaerobically on xylose after evolutionary adaptation.
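The 83:17 equilibrium mixture quoted above for the isomerase reaction puts a number on "energetically unfavorable": the equilibrium constant is below one, so the standard free-energy change is slightly positive. A temperature of 30 °C (303 K) is assumed here purely for illustration:

```latex
K_{\mathrm{eq}} = \frac{[\text{D-xylulose}]}{[\text{D-xylose}]} = \frac{0.17}{0.83} \approx 0.20,
\qquad
\Delta G^{\circ} = -RT\ln K_{\mathrm{eq}}
\approx -(8.314\ \mathrm{J\,mol^{-1}K^{-1}})(303\ \mathrm{K})\ln 0.20
\approx +4\ \mathrm{kJ\,mol^{-1}}
```

This small positive ΔG° is why the ATP-consuming phosphorylation by xylulokinase that follows is important: by removing D-xylulose, it pulls the isomerase reaction forward.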
Studies on flux through the oxidative pentose phosphate pathway during D-xylose metabolism have revealed that limiting the rate of this step may be beneficial to the efficiency of fermentation to ethanol. Modifications to this flux that may improve ethanol production include deleting the GND1 or ZWF1 genes. Since the pentose phosphate pathway produces additional NADPH during metabolism, limiting this step will help to correct the already evident imbalance between NAD(P)H and NAD+ cofactors and reduce xylitol byproduct formation. Another experiment comparing the two D-xylose metabolizing pathways revealed that the XI pathway was best able to metabolize D-xylose to produce the greatest ethanol yield, while the XR-XDH pathway reached a much faster rate of ethanol production. Overexpression of the four genes encoding the non-oxidative pentose phosphate pathway enzymes transaldolase, transketolase, ribulose-5-phosphate epimerase and ribose-5-phosphate ketol-isomerase led to both higher D-xylulose and D-xylose fermentation rates. The aim of this genetic recombination in the laboratory is to develop a yeast strain that efficiently produces ethanol. However, the effectiveness of D-xylose metabolizing laboratory strains does not always reflect their metabolic abilities on raw xylose products in nature. Since D-xylose is mostly isolated from agricultural residues such as wood stocks, native or genetically altered yeasts will need to be effective at metabolizing these less pure natural sources. Varying levels of XR and XDH expression have been tested in the laboratory in an attempt to optimize the efficiency of the D-xylose metabolism pathway. References Carbohydrate metabolism Monosaccharides Metabolism
Xylose metabolism
[ "Chemistry", "Biology" ]
1,407
[ "Carbohydrates", "Carbohydrate metabolism", "Monosaccharides", "Carbohydrate chemistry", "Cellular processes", "Biochemistry", "Metabolism" ]
13,515,245
https://en.wikipedia.org/wiki/Transformation%20efficiency
Transformation efficiency refers to the ability of a cell to take up and incorporate exogenous DNA, such as plasmids, during a process called transformation. The efficiency of transformation is typically measured as the number of transformants (cells that have taken up the exogenous DNA) per microgram of DNA added to the cells. A higher transformation efficiency means that more cells are able to take up the DNA, and a lower efficiency means that fewer cells are able to do so. In molecular biology, transformation efficiency is a crucial parameter: it is used to evaluate the ability of different methods to introduce plasmid DNA into cells and to compare the efficiency of different plasmids, vectors, and host cells. This efficiency can be affected by a number of factors, including the method used for introducing the DNA, the type of cell and plasmid used, and the conditions under which the transformation is performed. Therefore, measuring and optimizing transformation efficiency is an important step in many molecular biology applications, including genetic engineering, gene therapy and biotechnology. Measurement Measuring the transformation efficiency allows the information from an experiment to be used to evaluate how effective the transformation was. This is a quantification of how many cells were altered by 1 μg of plasmid DNA. In essence, it is a sign that the transformation experiment was successful. It should be determined under conditions of cell excess. Transformation efficiency is typically measured as the number of transformed cells per total number of cells. It can be represented as a percentage or as colony forming units (CFUs) per microgram of DNA. One of the most common ways to measure transformation efficiency is by performing a colony forming assay. Here is an example of how to calculate transformation efficiency using colony forming units (CFUs): Plate a known number of cells on agar plates containing the appropriate antibiotics. Incubate the plates for a period of time (usually overnight) at the appropriate temperature and conditions for the cells. Count the number of colonies that grow on the plates. This represents the number of cells that have taken up and expressed the plasmid DNA. To calculate the transformation efficiency, divide the number of colonies by the number of cells plated and multiply by 100. The result will be the transformation efficiency as a percentage. For example, if you plate 1×10^7 cells and count 1000 colonies, the transformation efficiency is: (1000/1×10^7) × 100 = 0.01% Alternatively, CFUs can be reported per microgram of DNA used for the transformation. This can be calculated by scaling the colony count by the fraction of the transformation mixture that was plated and dividing by the amount of DNA used. Quantitative PCR (qPCR) - This method utilizes the fact that the plasmid DNA will have a specific gene or sequence that is not present in the host cell genome, and therefore can be used as a target for qPCR. By quantifying the number of copies of this specific gene or sequence in the transformed cells, it is possible to determine the amount of plasmid DNA present in the cell, and thus the transformation efficiency. Fluorescent assay - This method relies on the use of a plasmid that contains a fluorescent protein or reporter gene. The transformed cells are then analyzed by flow cytometry or fluorescence microscopy to determine the number of cells that express the fluorescent protein.
The transformation efficiency is then calculated as the percentage of cells that express the fluorescent protein. The number of viable cells in a preparation for a transformation reaction may range from 2×10^8 to 10^11; most common methods of E. coli preparation yield around 10^10 viable cells per reaction. The standard plasmids used for determination of transformation efficiency in Escherichia coli are pBR322 or other similarly sized or smaller vectors, such as the pUC series of vectors. Different vectors, however, may be used to determine their transformation efficiencies. 10–100 pg of DNA may be used for transformation; more DNA may be necessary for low-efficiency transformation (generally saturation level is reached at over 10 ng). After transformation, 1% and 10% of the cells are plated separately; the cells may be diluted in media as necessary for ease of plating. Further dilution may be used for high efficiency transformation. A transformation efficiency of 1×10^8 cfu/μg for a small plasmid like pUC19 is roughly equivalent to 1 in 2000 molecules of the plasmid used being introduced into cells. In E. coli, the theoretical limit of transformation efficiency for most commonly used plasmids would be over 1×10^11 cfu/μg. In practice the best achievable result may be around 2–4×10^10 cfu/μg for a small plasmid like pUC19, and considerably lower for large plasmids. Factors affecting transformation efficiency Individual cells are capable of taking up many DNA molecules, but the presence of multiple plasmids does not significantly affect the occurrence of successful transformation events. A number of factors may affect the transformation efficiency: Plasmid size – A study done in E. coli found that transformation efficiency declines linearly with increasing plasmid size, i.e. larger plasmids transform less well than smaller plasmids. Forms of DNA – Supercoiled plasmids have a slightly better transformation efficiency than relaxed plasmids – relaxed plasmids are transformed at around 75% efficiency of supercoiled ones. Linear and single-stranded DNA however have much lower transformation efficiency. Single-stranded DNAs are transformed at 10^4-fold lower efficiency than double-stranded ones. Media composition – The composition of the media used in the transformation process can affect the efficiency. For example, certain media supplements can increase the natural competence of cells. Genotype of cells – Cloning strains may contain mutations that improve the transformation efficiency of the cells. For example, E. coli K12 strains with the deoR mutation, originally found to confer on cells an ability to grow in minimal media using inosine as the sole carbon source, have 4-5 times the transformation efficiency of similar strains without the mutation. For linear DNA, which is poorly transformed in E. coli, the recBC or recD mutation can significantly improve the efficiency of its transformation. Culture conditions – E. coli cells are more easily made competent when they are growing rapidly; cells are therefore normally harvested in the early log phase of cell growth when preparing competent cells. The optimal optical density for harvesting cells normally lies around 0.4, although it may vary with different cell strains. A higher value of 0.94-0.95 has also been found to produce a good yield of competent cells, but this can be impractical when cell growth is rapid.
Presence of antibiotics – The presence of antibiotics can increase the efficiency of transformation by inhibiting the growth of non-transformed cells and selecting for transformed cells that are resistant to the antibiotic. For instance, the use of β-lactam antibiotics has been shown to increase transformation efficiencies in glutamate-producing bacteria. Plasmid origin of replication – The origin of replication of the plasmid used in the transformation process can affect the efficiency in several ways. The copy number of the plasmid in the cell, the activity of the origin of replication in the host cells, and the expression of the genes on the plasmid can all affect the efficiency. A plasmid with a high-copy-number origin of replication will generally have a higher transformation efficiency than one with a low-copy-number origin, and using a plasmid with an origin of replication that is active in the host cell can lead to a higher transformation efficiency. Transformation conditions – The method of preparation of competent cells, the length of time of heat shock, temperature of heat shock, incubation time after heat shock, growth medium used, pH, and various additives can all affect the transformation efficiency of the cells. The presence of contaminants as well as ligase in a ligation mixture can reduce the transformation efficiency in electroporation, and inactivation of ligase or chloroform extraction of DNA may be necessary for electroporation; alternatively, only a tenth of the ligation mixture may be used to reduce the amount of contaminants. Normal preparation of competent cells can yield transformation efficiencies ranging from 10^6 to 10^8 cfu/μg DNA. Protocols for the chemical method, however, exist for making supercompetent cells that may yield a transformation efficiency of over 1×10^9. Damage to DNA – Exposure of DNA to UV radiation in a standard preparative agarose gel electrophoresis procedure for as little as 45 seconds can damage the DNA, and this can significantly reduce the transformation efficiency. Adding cytidine or guanosine to the electrophoresis buffer at 1 mM concentration, however, may protect the DNA from damage. Longer-wavelength UV radiation (365 nm), which causes less damage to DNA, should be used if it is necessary to work on the DNA on a UV transilluminator for an extended period of time. This longer-wavelength UV produces weaker fluorescence with the ethidium bromide intercalated into the DNA; therefore, if it is necessary to capture images of the DNA bands, shorter-wavelength (302 or 312 nm) UV radiation may be used. Such exposure however should be limited to a very short time if the DNA is to be recovered later for ligation and transformation. Efficiency of transformation methods The method used for introducing the DNA has a significant impact on the transformation efficiency. Electroporation Electroporation tends to be more efficient than chemical methods and can be applied to a wide range of species and to strains that were previously resistant and recalcitrant to transformation techniques. Electroporation has been found to have an average yield typically between 10^4 and 10^8 CFU/μg. However, transformation efficiencies as high as 0.5–5×10^10 colony forming units (CFU) per microgram of DNA have been reported for E. coli. For samples that are hard to handle, like cDNA libraries, gDNA, and plasmids larger than 30 kb, it is suggested to use electrocompetent cells that have transformation efficiencies of over 1×10^10 CFU/μg.
This will ensure a high success rate in introducing the DNA and forming a large number of colonies. It is important to adjust and optimize the electroporation buffer (increasing the concentration of the electroporation buffer can result in increased transformation efficiencies) and the shape, strength, and number of the pulses, as these electrical parameters play a key role in transformation efficiency. Chemical transformation Chemical transformation or heat shock can be performed in a simple laboratory setup, typically yielding transformation efficiencies that are adequate for cloning and subcloning applications, approximately 10^6 CFU/μg. One of the early methods used was a combination of CaCl2 and MgCl2 to treat the cells. However, these methods resulted in modest transformation efficiencies, with a maximum of 10^5–10^6 colony forming units (CFU) per microgram of plasmid DNA. Later research found that certain cations, such as Mn2+, Ca2+, Ba2+, Sr2+ and Mg2+, could have a positive effect on transformation efficiencies, with Mn2+ showing the greatest effect. Restriction barriers to an efficient transformation Some bacterial cells have restriction-modification systems that can degrade exogenous plasmids that are foreign to the host cell. This can greatly reduce the efficiency of transformation. This is due to restriction systems in the recipient cells that target and destroy exogenous DNA. These systems recognize exogenous DNA based on differences in methylation patterns. To address this problem, strategies such as altering the methylation of the exogenous DNA using commercial methylases or reducing the restriction activity in the recipient cells have been applied. For example, using methylation-negative mutants or temporarily inactivating the restriction system with heat can reduce the recipient cell's ability to impose restrictions on the exogenous DNA. See also Transformation (genetics) References External links Bacteria Transformation Efficiency Calculator Genetics
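The colony-count arithmetic walked through in the Measurement section above can be wrapped in a few lines of code. The helpers below are a sketch; the pUC19 size of 2686 bp and the 650 Da average mass per base pair used to convert micrograms to molecule counts are standard textbook figures supplied here, not values taken from the text.

```python
# Sketch of the transformation-efficiency arithmetic described above.

AVOGADRO = 6.022e23   # molecules per mole
DA_PER_BP = 650.0     # average molar mass of one base pair of dsDNA, g/mol

def efficiency_cfu_per_ug(colonies: int, ug_dna: float,
                          fraction_plated: float = 1.0) -> float:
    """Colony-forming units per microgram of plasmid DNA.

    fraction_plated -- fraction of the transformation mixture spread on the plate
    """
    return colonies / fraction_plated / ug_dna

def efficiency_percent(colonies: int, cells_plated: float) -> float:
    """Transformants as a percentage of cells plated."""
    return colonies / cells_plated * 100.0

def molecules_per_ug(plasmid_bp: int) -> float:
    """Number of plasmid molecules in one microgram of DNA."""
    molar_mass = plasmid_bp * DA_PER_BP          # g/mol
    return AVOGADRO * 1e-6 / molar_mass

# Worked example from the text: 1000 colonies from 1e7 plated cells -> 0.01 %.
print(efficiency_percent(1000, 1e7))

# Fraction of pUC19 (2686 bp) molecules transformed at 1e8 cfu/ug; this lands
# at roughly 1 in a few thousand, the same order as the rough figure quoted
# above (the exact number depends on the assumed average base-pair mass).
frac = 1e8 / molecules_per_ug(2686)
print(f"about 1 in {1/frac:,.0f} molecules")
```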
Transformation efficiency
[ "Biology" ]
2,519
[ "Genetics" ]
13,515,967
https://en.wikipedia.org/wiki/Relaxosome
The relaxosome is the complex of proteins that facilitates plasmid transfer during bacterial conjugation. The proteins are encoded by the tra operon on a fertility plasmid in the region near the origin of transfer, oriT. The most important of these proteins is relaxase, which is responsible for beginning the conjugation process by cutting at the nic site via transesterification. This nicking results in a DNA-protein complex with the relaxosome bound to a single strand of the plasmid DNA and an exposed 3' hydroxyl group. Relaxase also unwinds the plasmid being conjugated with its helicase properties. The relaxosome interacts with integration host factors within the oriT. Other genes that code for relaxosome components include TraH, which stabilizes the relaxosome's structural formation, TraI, which encodes the relaxase protein, TraJ, which recruits the complex to the oriT site, TraK, which increases the 'nicked' state of the target plasmid, and TraY, which imparts single-stranded DNA character on the oriT site. TraM plays a particularly important role in relaxase interaction by stimulating 'relaxed' DNA formation. References Molecular biology
Relaxosome
[ "Chemistry", "Biology" ]
265
[ "Biochemistry", "Bacteria stubs", "Bacteria", "Molecular biology" ]
13,515,974
https://en.wikipedia.org/wiki/TraA
The traA gene codes for relaxase, which is an enzyme that initiates plasmid DNA transfer during bacterial conjugation. Relaxase forms a relaxosome complex with auxiliary proteins to initiate conjugation. The relaxosome binds to the origin of transfer (oriT) sequence and cleaves the DNA strand that will be transferred (the T strand). The traA gene is usually found on megaplasmids in bacteria, and it is somewhat conserved among different bacterial species. Thirty-one percent and 29 percent of Rhodococcus erythropolis TraA residues are identical to Gordonia westfalica TraA and Arthrobacter aurescens TraA, respectively (Yang et al. 2006). Among actinomycetales, it is common to find that the traA gene codes for both relaxase and helicase. References Chen et al., "The Ins and Outs of DNA Transfer in Bacteria." Science 310, 1456-1460. Yang et al., "Characterization of the mobilization determinants of pAN12, a small replicon from Rhodococcus erythropolis AN12." Plasmid 57, 71-81. Prokaryote genes
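Percent-identity figures like the 31% and 29% cited above come from pairwise alignments of TraA homologs. The following is an illustrative sketch only — the toy sequences are fabricated, and real comparisons use full-length aligned proteins — of how percent identity over a given alignment is counted:

```python
def percent_identity(seq_a, seq_b):
    """Percent identity between two pre-aligned sequences of equal
    length; gap-to-gap columns are not counted as identical."""
    if len(seq_a) != len(seq_b):
        raise ValueError("aligned sequences must have equal length")
    matches = sum(a == b and a != "-" for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Toy aligned fragments (hypothetical, not real TraA residues):
print(percent_identity("MKRT-LAVG", "MKST-LGVG"))  # ~66.7
```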
TraA
[ "Biology" ]
261
[ "Prokaryotes", "Prokaryote genes" ]
13,516,816
https://en.wikipedia.org/wiki/Fibre%20Channel%20frame
In computer networking, a Fibre Channel frame is the frame of the Fibre Channel protocol. The basic building blocks of an FC connection are the frames. They contain the information to be transmitted (the payload), the addresses of the source and destination ports, and link control information. Frames are broadly categorized as Data frames Link_control frames Data frames may be used as Link_Data frames and Device_Data frames, while link control frames are classified as Acknowledge (ACK) and Link_Response (Busy and Reject) frames. The primary function of the Fabric is to receive the frames from the source port and route them to the destination port. It is the FC-2 layer's responsibility to break the data to be transmitted into frame-sized pieces and to reassemble the frames. Each frame begins and ends with a frame delimiter. The frame header immediately follows the Start of Frame (SOF) delimiter. The frame header is used to control link applications, control device protocol transfers, and detect missing or out-of-order frames. Optional headers may contain further link control information. A field of at most 2048 bytes (the payload) contains the information to be transferred from a source N_Port to a destination N_Port. The 4-byte Cyclic Redundancy Check (CRC) precedes the End of Frame (EOF) delimiter. The CRC is used to detect transmission errors. The maximum total frame length is 2148 bytes. Between successive frames a sequence of (at least) six primitives must be transmitted, sometimes called the interframe gap. References Computer networking Fibre Channel
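The byte counts quoted above add up to the 2148-byte maximum as follows. In this sketch, the 24-byte frame header, the 4-byte SOF/EOF delimiters, and the 64-byte optional-header allowance are standard Fibre Channel values assumed here; the text itself only states the 2048-byte payload, the 4-byte CRC, and the 2148-byte total. The Python layout is illustrative, not a real FC stack:

```python
# Field widths in bytes (SOF/EOF, header and optional-header sizes assumed):
SOF, HEADER, OPTIONAL_MAX, PAYLOAD_MAX, CRC, EOF = 4, 24, 64, 2048, 4, 4

def frame_length(payload, optional=0):
    """Total frame length in bytes for given payload/optional-header sizes."""
    if payload > PAYLOAD_MAX or optional > OPTIONAL_MAX:
        raise ValueError("field exceeds Fibre Channel limits")
    return SOF + HEADER + optional + payload + CRC + EOF

print(frame_length(2048, optional=64))  # 2148, the maximum quoted above
```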
Fibre Channel frame
[ "Technology", "Engineering" ]
322
[ "Computer networking", "Computer engineering", "Computer network stubs", "Computer science", "Computing stubs" ]
14,650,830
https://en.wikipedia.org/wiki/Department%20of%20Materials%20Science%20and%20Metallurgy%2C%20University%20of%20Cambridge
The Department of Materials Science and Metallurgy (DMSM) is a large research and teaching division of the University of Cambridge. Since 2013 it has been located in West Cambridge, having previously occupied several buildings on the New Museums Site in the centre of Cambridge. Following the changes to academic titles in 2021/2022 at the University of Cambridge, the academic staff of the Department of Materials Science and Metallurgy no longer use the academic titles of Reader and Lecturer. The list below reflects the new academic titles. Academic staff Professorial staff include: Serena Best, CBE, FREng, Professor of Materials Science Ruth Cameron, Professor of Materials Science Manish Chhowalla, Goldsmiths' Professor of Materials Science Judith Driscoll, FREng, Professor of Materials Science Caterina Ducati, Professor of Nanomaterials Rachel Evans, Professor of Materials Chemistry James Elliott, Professor of Macromolecular Materials Science Lindsay Greer, Professor of Materials Science Louise Hirst, Professor of Materials Physics (jointly appointed with the Cavendish Laboratory) Nick Jones, Professor of Metallurgy Sohini Kar-Narayan, Professor of Device & Energy Materials Neil Mathur, Professor of Materials Physics Paul Midgley, FRS, Professor of Materials Science (Current Head of Department) Rachel Oliver, FREng, Professor of Materials Science Chris Pickard, Sir Alan Cottrell Professor of Materials Science Cathie Rae, Professor of Superalloys Emilie Ringe, Professor of Synthetic and Natural Nanomaterials (jointly appointed with the Department of Earth Sciences) Howard Stone, Professor of Metallurgy Jason WA Robinson, Professor of Materials Physics Heads of Department R.S Hutton - 1944 Wesley Austin 1945-1958 Sir Alan Cottrell FRS 1958-1966 Sir Robert Honeycombe FREng FRS 1966-84 Derek Hull FREng FRS 1984-1991 Sir Colin Humphreys, CBE FREng FRS 1991-1996 Alan Windle FRS 1996-2000 Derek Fray FRS FREng 2000-2005 Alan Lindsay Greer 2005-2013 Mark Blamire 2013-2018 Paul Midgley FRS 2018-2020 Ruth Cameron, James Elliott, and Jason Robinson 2020- (current) Research themes Current research spans seven themes in which there are current materials challenges to overcome: Aerospace materials Information Communication Technologies Innovative Characterisation Materials Discovery Materials for Energy and Sustainability Materials for Healthcare Novel Design and Processing Research groups Research is organised into the following groups. 
Device Materials Group Electron Microscopy Group Cambridge Centre for Gallium Nitride Hybrid Materials Group Macromolecular Materials Laboratory Centre for Materials Physics Materials Theory Group Cambridge Centre for Medical Materials Microstructural Kinetics Group Optical Nanomaterials Group Photoactive Materials Group Rolls-Royce University Technology Centre in Advanced Materials Space Voltaics Group Spinout companies 2019 - Barocal Ltd - developing new heating and cooling technologies to satisfy low-carbon requirements 2018 - Plastometrex Ltd - Profilometry-based Indentation Plastometry (PIP) - a revolutionary new approach to the mechanical testing of metals 2018 - Porotech - specialising in the development of Gallium Nitride material technology 2015 - Paragraf Ltd - novel deposition of graphene onto semiconductors 2010 - CamGaN (now part of Plessey) - GaN on Silicon LED technology (low cost, low energy lighting) 2007 - Inotec AMD - innovative topical oxygen therapy for wound healing 2004 - Q-flo (merged with Plasan, CNT fibres now commercialised by Tortech) - ultra-long CNT fibres 2004 - Camfridge - energy-efficient and gas-free magnetic cooling 2001 - Metalysis - commercialisation of the FFC Cambridge Process. Reduction of metal oxides and ores into pure metals and alloys 1989 - CMD Ltd (became part of Accelerys, now part of Biovia Dassault Systems) - X-ray modelling software Alumni and former staff Notable alumni and former staff include: Sir Harry Bhadeshia FRS FREng Alan Cottrell, FRS, Robert W. Cahn FRS, Robert Honeycombe Derek Fray FRS FREng Sir Colin Humphreys, CBE FREng FRS Alan Windle FRS Charles Heycock FRS Sir Graeme Davies FREng William Bonfield CBE FRS FREng Michael F. Ashby CBE FRS FREng Julia King, Baroness Brown of Cambridge DBE FREng See also DoITPoMS Department of Materials, University of Oxford Department of Materials, Imperial College London References Departments in the Faculty of Physics and Chemistry, University of Cambridge Cambridge, University of
Department of Materials Science and Metallurgy, University of Cambridge
[ "Materials_science" ]
902
[ "Materials science organizations", "Materials science institutes" ]
14,652,448
https://en.wikipedia.org/wiki/Gibson%20Lake%20%28Indiana%29
Gibson Lake is the cooling pond for Duke Energy Indiana's Gibson Generating Station. Measuring at around , it is the largest lake in Indiana built completely above ground, its shores consisting of rock levees on all but two of the lake's six sides, both of which were also built up during construction of the power plant. Opened to fishing in 1978, Gibson Lake had been a prime source of bass and several types of catfish, bluegill, and carp. The lake was closed to fishing in 2007, due to elevated levels of selenium found in the water of the lake. The only entrance to Gibson Lake is the lake's boat ramp, located due southeast of the plant on Gibson County Road 975 South. Because it never gets colder than , a result of the hot outflows from the plant's condensers, Gibson Lake is known to produce a light dusting of snow every now and then. Wildlife Gibson Lake and the rest of the Gibson Generating Station complex are home to several species of birds. They include: Red-throated loon Pacific loon Eared grebe Red-necked grebe Western grebe American white pelican Brown pelican Plegadis ibis Snowy egret Ross's goose Wood stork White-faced ibis Glossy ibis Black-bellied whistling duck White-winged scoter Surf scoter Long-tailed duck Golden eagle Peregrine falcon Wild turkey King rail Piping plover Least tern Green heron Canada goose Cackling goose American avocet Black-necked stilt Whimbrel Marbled godwit Hudsonian godwit Purple sandpiper Red knot Red phalarope Laughing gull Little gull Sabine's gull Glaucous gull Iceland gull Thayer's gull Lesser black-backed gull Pomarine jaeger Parasitic jaeger Swainson's hawk Gyrfalcon Snowy owl Le Conte's sparrow Henslow's sparrow Great blue heron Whooping crane White-fronted goose Mallard duck Snow goose Blue and green-winged teal Many of these birds use the area as a stop-over on the way to their respective destinations. Wildlife and bird watchers will notice that Google Earth and Garmin GPS maps refer to the lake's location as Broad Pond. The ancient Broad Pond-Cane Ridge-Wabash River oxbow was once considered for a National Wildlife Refuge. Specialty species Least terns - breed at Gibson Lake and the Cane Ridge least tern habitat and may be seen anytime between mid-May and late August or early September. Bald eagles - are common during the winter and are usually encountered on a drive around the levee. They nest in the area next to the Wabash River. Temperature The lake temperature very rarely falls below at its coldest point, due mainly to the plant's condenser discharges. This often results in lake-effect snow or heavy frost falling in nearby areas. Outflow (West) Side Inflow (East) Side The two sides are separated by a splitter dike that juts approximately 500 yards into the lake from the main plant and forces the water to remain in the lake for around 1–2 weeks. References Protected areas of Gibson County, Indiana Owensville, Indiana Reservoirs in Indiana Bodies of water of Gibson County, Indiana Duke Energy Cooling ponds
Gibson Lake (Indiana)
[ "Chemistry", "Environmental_science" ]
687
[ "Cooling ponds", "Water pollution" ]
14,653,232
https://en.wikipedia.org/wiki/Fibronectin%20type%20II%20domain
Fibronectin type II domain is a collagen-binding protein domain. Fibronectin is a multi-domain glycoprotein, found in a soluble form in plasma, and in an insoluble form in loose connective tissue and basement membranes, that binds cell surfaces and various compounds including collagen, fibrin, heparin, DNA, and actin. Fibronectins are involved in a number of important functions, e.g., wound healing; cell adhesion; blood coagulation; cell differentiation and migration; maintenance of the cellular cytoskeleton; and tumour metastasis. The major part of the sequence of fibronectin consists of the repetition of three types of domains, which are called types I, II, and III. The type II domain is approximately sixty amino acids long, contains four conserved cysteines involved in disulfide bonds, and is part of the collagen-binding region of fibronectin. Type II domains occur twice in fibronectin. Type II domains have also been found in a range of proteins including blood coagulation factor XII; bovine seminal plasma proteins PDC-109 (BSP-A1/A2) and BSP-A3; the cation-independent mannose-6-phosphate receptor; the mannose receptor of macrophages; the 180 kDa secretory phospholipase A2 receptor; the DEC-205 receptor; the 72 kDa and 92 kDa type IV collagenases; and hepatocyte growth factor activator. Fibronectin type II domain and lipid bilayer interaction The fibronectin type II domain is part of the extracellular portion of EphA2 receptor proteins. The FN2 domain of EphA2 receptors bears positively charged residues, namely K441 and R443, which attract and bind almost exclusively to anionic lipids such as the anionic membrane lipid phosphatidylglycerol. K441 and R443 together make up a membrane-binding motif that allows EphA2 receptors to attach to the cell membrane. Human proteins containing this domain BSPH1; ELSPBP1; F12; FN1; HGFAC; IGF2R; LY75; MMP2; MMP9; MRC1; MRC1L1; MRC2; PLA2R1; SEL1L; Fibronectin type I domain: F12; FN1; HGFAC; PLAT; References External links Fibronectin type-II collagen-binding domain in PROSITE Protein domains Peripheral membrane proteins
Fibronectin type II domain
[ "Biology" ]
561
[ "Protein domains", "Protein classification" ]
14,653,396
https://en.wikipedia.org/wiki/Beryllium%20bromide
Beryllium bromide is the chemical compound with the formula BeBr2. It is very hygroscopic and dissolves well in water. The Be2+ cation, which is relevant to BeBr2, is characterized by the highest known charge density (Z/r = 6.45), making it one of the hardest cations and a very strong Lewis acid. Preparation and reactions It can be prepared by reacting beryllium metal with elemental bromine at temperatures of 500 °C to 700 °C: Be + Br2 → BeBr2 When the oxidation is conducted on an ether suspension, one obtains the colorless dietherate: Be + Br2 + 2 O(C2H5)2 → BeBr2(O(C2H5)2)2 The same dietherate is obtained by suspending beryllium dibromide in diethyl ether: BeBr2 + 2 O(C2H5)2 → BeBr2(O(C2H5)2)2 These ether ligands can be displaced by other Lewis bases. Beryllium bromide hydrolyzes slowly in water: BeBr2 + 2 H2O → 2 HBr + Be(OH)2 Structure Two forms (polymorphs) of BeBr2 are known. Both structures consist of tetrahedral Be centers interconnected by doubly bridging bromide ligands. One form consists of edge-sharing polytetrahedra. The other form resembles zinc iodide with interconnected adamantane-like cages. Safety Beryllium compounds are toxic if inhaled or ingested. References Beryllium compounds Bromides Alkaline earth metal halides Inorganic polymers
Beryllium bromide
[ "Chemistry" ]
297
[ "Bromides", "Inorganic polymers", "Inorganic compounds", "Salts" ]
14,653,597
https://en.wikipedia.org/wiki/Glutathione%20S-transferase%2C%20C-terminal%20domain
Glutathione S-transferase, C-terminal domain is a structural domain of glutathione S-transferase (GST). GST conjugates reduced glutathione to a variety of targets including S-crystallin from squid, the eukaryotic elongation factor 1-gamma, the HSP26 family of stress-related proteins and auxin-regulated proteins in plants. The glutathione molecule binds in a cleft between the N- and C-terminal domains. The catalytically important residues are proposed to reside in the N-terminal domain. In plants, GSTs are encoded by a large gene family (48 GST genes in Arabidopsis) and can be divided into the phi, tau, theta, zeta, and lambda classes. Biological function and classification In eukaryotes, glutathione S-transferases (GSTs) participate in the detoxification of reactive electrophilic compounds by catalysing their conjugation to glutathione. The GST domain is also found in S-crystallins from squid, and proteins with no known GST activity, such as eukaryotic elongation factors 1-gamma and the HSP26 family of stress-related proteins, which include auxin-regulated proteins in plants and stringent starvation proteins in Escherichia coli. The major lens polypeptide of cephalopods is also a GST. Bacterial GSTs of known function often have a specific, growth-supporting role in biodegradative metabolism: epoxide ring opening and tetrachlorohydroquinone reductive dehalogenation are two examples of the reactions catalysed by these bacterial GSTs. Some regulatory proteins, like the stringent starvation proteins, also belong to the GST family. GST seems to be absent from Archaea, in which gamma-glutamylcysteine substitutes for glutathione as the major thiol. Oligomerization Glutathione S-transferases form homodimers, but in eukaryotes can also form heterodimers of the A1 and A2 or YC1 and YC2 subunits. The homodimeric enzymes display a conserved structural fold. Each monomer is composed of a distinct N-terminal sub-domain, which adopts the thioredoxin fold, and a C-terminal all-helical sub-domain. This entry is the C-terminal domain. Human proteins containing this domain EEF1E1; EEF1G; GDAP1; GSTA1; GSTA2; GSTA3; GSTA4; GSTA5; GSTM1; GSTM2; GSTM3; GSTM4; GSTM5; GSTO1; GSTP1; GSTT1; GSTT2; GSTZ1; MARS; PGDS; PTGDS2; PTGES2; VARS; References Further reading , GST Gene Fusion System Handbook by GE Healthcare Life Sciences Protein domains Single-pass transmembrane proteins
Glutathione S-transferase, C-terminal domain
[ "Biology" ]
646
[ "Protein domains", "Protein classification" ]
14,653,704
https://en.wikipedia.org/wiki/Beryllium%20iodide
Beryllium iodide is an inorganic compound with the chemical formula BeI2. It is a hygroscopic white solid. The Be2+ cation, which is relevant to salt-like BeI2, is characterized by the highest known charge density (Z/r = 6.45), making it one of the hardest cations and a very strong Lewis acid. Reactions Beryllium iodide can be prepared by reacting beryllium metal with elemental iodine at temperatures of 500 °C to 700 °C: Be + I2 → BeI2 When the oxidation is conducted on an ether suspension of elemental Be, one obtains the colorless dietherate: Be + I2 + 2 O(C2H5)2 → BeI2(O(C2H5)2)2 The same dietherate is obtained by suspending beryllium iodide in diethyl ether: BeI2 + 2 O(C2H5)2 → BeI2(O(C2H5)2)2 These ether ligands can be displaced by other Lewis bases. Beryllium iodide reacts with fluorine giving beryllium fluoride and fluorides of iodine, with chlorine giving beryllium chloride, and with bromine giving beryllium bromide. Structure Two forms (polymorphs) of BeI2 are known. Both structures consist of tetrahedral Be centers interconnected by doubly bridging iodide ligands. One form consists of edge-sharing polytetrahedra. The other form resembles zinc iodide with interconnected adamantane-like cages. Applications Beryllium iodide can be used in the preparation of high-purity beryllium by the decomposition of the compound on a hot tungsten filament. References Beryllium compounds Iodides Alkaline earth metal halides Inorganic polymers
Beryllium iodide
[ "Chemistry" ]
324
[ "Inorganic polymers", "Inorganic compounds" ]
14,653,734
https://en.wikipedia.org/wiki/Theta%20solvent
In a polymer solution, a theta solvent (or θ solvent) is a solvent in which polymer coils act like ideal chains, assuming exactly their random walk coil dimensions. Therefore, the Mark–Houwink equation exponent is 1/2 in a theta solvent. Thermodynamically, the excess chemical potential of mixing between a polymer and a theta solvent is zero. Physical interpretation The conformation assumed by a polymer chain in dilute solution can be modeled as a random walk of monomer subunits using a freely jointed chain model. However, this model does not account for steric effects. Real polymer coils are more closely represented by a self-avoiding walk because conformations in which different chain segments occupy the same space are not physically possible. This excluded volume effect causes the polymer to expand. Chain conformation is also affected by solvent quality. The intermolecular interactions between polymer chain segments and coordinated solvent molecules have an associated energy of interaction which can be positive or negative. For a good solvent, interactions between polymer segments and solvent molecules are energetically favorable, and will cause polymer coils to expand. For a poor solvent, polymer-polymer self-interactions are preferred, and the polymer coils will contract. The quality of the solvent depends on both the chemical compositions of the polymer and solvent molecules and the solution temperature. Theta temperature If a solvent is precisely poor enough to cancel the effects of excluded volume expansion, the theta (θ) condition is satisfied. For a given polymer-solvent pair, the theta condition is satisfied at a certain temperature, called the theta (θ) temperature or theta point. A solvent at this temperature is called a theta solvent. In general, measurements of the properties of polymer solutions depend on the solvent. However, when a theta solvent is used, the measured characteristics are independent of the solvent. They depend only on short-range properties of the polymer such as the bond length, bond angles, and sterically favorable rotations. The polymer chain will behave exactly as predicted by the random walk or ideal chain model. This makes experimental determination of important quantities such as the root mean square end-to-end distance or the radius of gyration much simpler. Additionally, the theta condition is also satisfied in the bulk amorphous polymer phase. Thus, the conformations adopted by polymers dissolved in theta solvents are identical to those adopted in the bulk amorphous polymer. Thermodynamic definition Thermodynamically, the excess chemical potential of mixing between a theta solvent and a polymer is zero. Equivalently, the enthalpy of mixing is zero, making the solution ideal. One cannot measure the chemical potential by any direct means, but one can correlate it to the solution's osmotic pressure (π) and the solvent's partial specific volume (v1): μ1 − μ1° = −π·v1 One can use a virial expansion to express how osmotic pressure depends on concentration: π/c = RT(1/M + Bc + ...) where: M is the molecular weight of the polymer R is the gas constant T is the absolute temperature B is the second virial coefficient This relationship with osmotic pressure is one way to determine the theta condition or theta temperature for a solvent. 
The change in the chemical potential when the two are mixed has two terms: ideal and excess: Δμ1 = Δμ1(ideal) + Δμ1(excess) The second virial coefficient, B, is proportional to the excess chemical potential of mixing: B ∝ −Δμ1(excess) B reflects the energy of binary interactions between solvent molecules and segments of the polymer chain. When B > 0, the solvent is "good"; when B < 0, the solvent is "poor". For a theta solvent, the second virial coefficient is zero because the excess chemical potential is zero; otherwise it would fall outside the definition of a theta solvent. A solvent at its theta temperature is, in this way, analogous to a real gas at its Boyle temperature. Similar relationships exist for other experimental techniques, including light scattering, intrinsic viscosity measurement, sedimentation equilibrium, and cloud point titration. See also Flory–Huggins solution theory References Polymer physics Thermodynamics Rubber properties
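Given osmotic-pressure data, the virial expansion above turns determination of the theta point into a linear fit: plotting π/c against c gives an intercept of RT/M and a slope of RT·B, and the theta temperature is where the fitted B crosses zero. A minimal sketch of that fit (the data points below are invented for illustration):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def second_virial(c, pi, T):
    """Fit pi/c = R*T*(1/M + B*c); return (B, M).

    c  -- mass concentrations (kg/m^3)
    pi -- osmotic pressures (Pa)
    T  -- absolute temperature (K)
    """
    slope, intercept = np.polyfit(c, np.asarray(pi) / np.asarray(c), 1)
    B = slope / (R * T)      # second virial coefficient
    M = R * T / intercept    # molecular weight, kg/mol
    return B, M

# Invented data for a near-theta solution at 300 K:
c = np.array([1.0, 2.0, 4.0, 8.0])
pi = np.array([25.0, 50.3, 101.0, 204.0])
B, M = second_virial(c, pi, 300.0)
print(f"B = {B:.2e}, M = {M:.1f} kg/mol")  # B near zero => near the theta point
```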
Theta solvent
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
816
[ "Polymer physics", "Thermodynamics", "Polymer chemistry", "Dynamical systems" ]
14,654,371
https://en.wikipedia.org/wiki/Protein%20kinase%20domain
The protein kinase domain is a structurally conserved protein domain containing the catalytic function of protein kinases. Protein kinases are a group of enzymes that move a phosphate group onto proteins, in a process called phosphorylation. This functions as an on/off switch for many cellular processes, including metabolism, transcription, cell cycle progression, cytoskeletal rearrangement and cell movement, apoptosis, and differentiation. They also function in embryonic development, physiological responses, and in the nervous and immune systems. Abnormal phosphorylation causes many human diseases, including cancer, and drugs that affect phosphorylation can treat those diseases. Protein kinases possess a catalytic subunit which transfers the gamma phosphate from nucleoside triphosphates (almost always ATP) to the side chain of an amino acid in a protein, resulting in conformational and/or dynamic changes affecting protein function. These enzymes fall into two broad classes, characterised with respect to substrate specificity: serine/threonine specific and tyrosine specific. Function Protein kinase function has been evolutionarily conserved from Escherichia coli to Homo sapiens. Protein kinases play a role in a multitude of cellular processes, including division, proliferation, apoptosis, and differentiation. Phosphorylation usually results in a functional change of the target protein by changing its structure, dynamics, enzyme activity, cellular location, or association with other proteins. Structure The catalytic subunits of protein kinases are highly conserved, and the structures of over 280 of the approximately 494 kinase domains from 481 human genes have been determined, leading to large screens to develop kinase-specific inhibitors for the treatment of a number of diseases. Humans have only 437 kinase domains that have catalytic activity; the rest are pseudokinases or catalyze other reactions. Eukaryotic protein kinases are enzymes that belong to a very extensive family of proteins which share a conserved catalytic core common to both serine/threonine and tyrosine protein kinases. The domain consists of two sub-domains referred to as the N- and C-terminal domains. The N-terminal domain consists of five beta-sheet strands and an alpha helix called the C-helix, and the C-terminal domain usually consists of six alpha helices (labeled D, E, F, G, H, and I). The C-terminal domain contains two long loops, called the catalytic loop and the activation loop, which are essential for catalytic activity. The catalytic loop includes the "HRD motif" (for the amino acid sequence His-Arg-Asp), whose aspartic acid residue interacts directly with the hydroxyl group of the target serine, threonine, or tyrosine residue that is phosphorylated. The activation loop starts with the DFG motif (for the amino acid sequence Asp-Phe-Gly), which helps to bind ATP and magnesium in the active site. Broadly, the state or conformation of the kinase may be classified as DFGin or DFGout, depending on whether the Asp residue of the DFG motif is in or out of the active site. In the active form, the first few residues of the activation loop adopt a specific form of the DFGin conformation. Some inactive structures may adopt one of several other DFGin conformations, while other inactive structures are DFGout. 
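Because the HRD and DFG motifs are short, fixed amino-acid strings, their positions in a kinase-domain sequence can be located with a simple pattern search. The sketch below uses a fabricated sequence fragment and exact string matching only; real motif annotation relies on profile-based tools, since some kinases carry variant motifs:

```python
import re

def find_motifs(seq):
    """Report 1-based positions of the HRD and DFG motifs in a
    kinase-domain sequence. Exact matching only -- a simplification."""
    return {m: [x.start() + 1 for x in re.finditer(m, seq)]
            for m in ("HRD", "DFG")}

# Fabricated fragment containing one of each motif:
fragment = "KLLNHPNIVKLHRDLKPENLLDFGLARVM"
print(find_motifs(fragment))  # {'HRD': [12], 'DFG': [22]}
```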
Examples The following is a list of human proteins containing the protein kinase domain: AAK1 ; AATK ; ABL1 ; ABL2 ; ACVR1 ; ACVR1B ; ACVR1C ; ACVR2A ; ACVR2B ; ACVRL1 ; AKT1 ; AKT2 ; AKT3 ; ALK ; AMHR2 ; ANKK1 ; ARAF ; AURKA ; AURKB ; AURKC ; AXL ; BLK ; BMP2K ; BMPR1A ; BMPR1B ; BMPR2 ; BMX ; BRAF ; BRSK1 ; BRSK2 ; BTK ; BUB1 ; BUB1B ; CAMK1 ; CAMK1D ; CAMK1G ; CAMK2A ; CAMK2B ; CAMK2D ; CAMK2G ; CAMK4 ; CAMKK1 ; CAMKK2 ; CAMKV ; CASK ; CDC42BPA ; CDC42BPB ; CDC42BPG ; CDC7 ; CDK1 ; CDK10 ; CDK11A ; CDK11B ; CDK12 ; CDK13 ; CDK14 ; CDK15 ; CDK16 ; CDK17 ; CDK18 ; CDK19 ; CDK2 ; CDK20 ; CDK3 ; CDK4 ; CDK5 ; CDK6 ; CDK7 ; CDK8 ; CDK9 ; CDKL1 ; CDKL2 ; CDKL3 ; CDKL4 ; CDKL5 ; CHEK1 ; CHEK2 ; CHUK ; CIT ; CLK1 ; CLK2 ; CLK3 ; CLK4 ; CSF1R ; CSK ; CSNK1A1 ; CSNK1A1L ; CSNK1D ; CSNK1E ; CSNK1G1 ; CSNK1G2 ; CSNK1G3 ; CSNK2A1 ; CSNK2A2 ; CSNK2A3 ; DAPK1 ; DAPK2 ; DAPK3 ; DCLK1 ; DCLK2 ; DCLK3 ; DDR1 ; DDR2 ; DMPK ; DSTYK ; DYRK1A ; DYRK1B ; DYRK2 ; DYRK3 ; DYRK4 ; EGFR ; EIF2AK1 ; EIF2AK2 ; EIF2AK3 ; EIF2AK4 ; EPHA1 ; EPHA10 ; EPHA2 ; EPHA3 ; EPHA4 ; EPHA5 ; EPHA6 ; EPHA7 ; EPHA8 ; EPHB1 ; EPHB2 ; EPHB3 ; EPHB4 ; EPHB6 ; ERBB2 ; ERBB3 ; ERBB4 ; ERN1 ; ERN2 ; FER ; FES ; FGFR1 ; FGFR2 ; FGFR3 ; FGFR4 ; FGR ; FLT1 ; FLT3 ; FLT4 ; FRK ; FYN ; GAK ; GRK1 ; GRK2 ; GRK3 ; GRK4 ; GRK5 ; GRK6 ; GRK7 ; GSG2 ; GSK3A ; GSK3B ; GUCY2C ; GUCY2D ; GUCY2F ; HCK ; HIPK1 ; HIPK2 ; HIPK3 ; HIPK4 ; HUNK ; ICK ; IGF1R ; IKBKB ; IKBKE ; ILK ; INSR ; INSRR ; IRAK1 ; IRAK2 ; IRAK3 ; IRAK4 ; ITK ; JAK1 ; JAK2 ; JAK3 ; KALRN ; KDR ; KIT ; KSR1 ; KSR2 ; LATS1 ; LATS2 ; LCK ; LIMK1 ; LIMK2 ; LMTK2 ; LMTK3 ; LRRK1 ; LRRK2 ; LTK ; LYN ; MAK ; MAP2K1 ; MAP2K2 ; MAP2K3 ; MAP2K4 ; MAP2K5 ; MAP2K6 ; MAP2K7 ; MAP3K1 ; MAP3K10 ; MAP3K11 ; MAP3K12 ; MAP3K13 ; MAP3K14 ; MAP3K15 ; MAP3K19 ; MAP3K2 ; MAP3K20 ; MAP3K21 ; MAP3K3 ; MAP3K4 ; MAP3K5 ; MAP3K6 ; MAP3K7 ; MAP3K8 ; MAP3K9 ; MAP4K1 ; MAP4K2 ; MAP4K3 ; MAP4K4 ; MAP4K5 ; MAPK1 ; MAPK10 ; MAPK11 ; MAPK12 ; MAPK13 ; MAPK14 ; MAPK15 ; MAPK3 ; MAPK4 ; MAPK6 ; MAPK7 ; MAPK8 ; MAPK9 ; MAPKAPK2 ; MAPKAPK3 ; MAPKAPK5 ; MARK1 ; MARK2 ; MARK3 ; MARK4 ; MAST1 ; MAST2 ; MAST3 ; MAST4 ; MASTL ; MATK ; MELK ; MERTK ; MET ; MINK1 ; MKNK1 ; MKNK2 ; MLKL ; MOK ; MOS ; MST1R ; MUSK ; MYLK ; MYLK2 ; MYLK3 ; MYLK4 ; MYO3A ; MYO3B ; NEK1 ; NEK10 ; NEK11 ; NEK2 ; NEK3 ; NEK4 ; NEK5 ; NEK6 ; NEK7 ; NEK8 ; NEK9 ; NIM1K ; NLK ; NPR1 ; NPR2 ; NRBP1 ; NRBP2 ; NRK ; NTRK1 ; NTRK2 ; NTRK3 ; NUAK1 ; NUAK2 ; OBSCN ; OXSR1 ; PAK1 ; PAK2 ; PAK3 ; PAK4 ; PAK5 ; PAK6 ; PAN3 ; PASK ; PBK ; PDGFRA ; PDGFRB ; PDIK1L ; PDPK1 ; PDPK2P ; PEAK1 ; PEAK3 ; PHKG1 ; PHKG2 ; PIK3R4 ; PIM1 ; PIM2 ; PIM3 ; PINK1 ; PKDCC ; PKMYT1 ; PKN1 ; PKN2 ; PKN3 ; PLK1 ; PLK2 ; PLK3 ; PLK4 ; PLK5 ; PNCK ; POMK ; PRKAA1 ; PRKAA2 ; PRKACA ; PRKACB ; PRKACG ; PRKCA ; PRKCB ; PRKCD ; PRKCE ; PRKCG ; PRKCH ; PRKCI ; PRKCQ ; PRKCZ ; PRKD1 ; PRKD2 ; PRKD3 ; PRKG1 ; PRKG2 ; PRKX ; PRKY ; PRPF4B ; PSKH1 ; PSKH2 ; PTK2 ; PTK2B ; PTK6 ; PTK7 ; PXK ; RAF1 ; RET ; RIOK1 ; RIOK2 ; RIOK3 ; RIPK1 ; RIPK2 ; RIPK3 ; RIPK4 ; RNASEL ; ROCK1 ; ROCK2 ; ROR1 ; ROR2 ; ROS1 ; RPS6KA1 ; RPS6KA2 ; RPS6KA3 ; RPS6KA4 ; RPS6KA5 ; RPS6KA6 ; RPS6KB1 ; RPS6KB2 ; RPS6KC1 ; RPS6KL1 ; RSKR ; RYK ; SBK1 ; SBK2 ; SBK3 ; SCYL1 ; SCYL2 ; SCYL3 ; SGK1 ; SGK2 ; SGK223 ; SGK3 ; SIK1 ; SIK1B ; SIK2 ; SIK3 ; SLK ; SNRK ; SPEG ; SRC ; SRMS ; SRPK1 ; SRPK2 ; SRPK3 ; STK10 ; STK11 ; STK16 ; STK17A ; STK17B ; STK24 ; STK25 ; STK26 ; STK3 ; STK31 ; STK32A ; STK32B ; STK32C ; STK33 ; STK35 ; STK36 ; STK38 ; STK38L ; STK39 ; STK4 ; STK40 ; STKLD1 ; STRADA ; STRADB ; STYK1 ; SYK ; TAOK1 ; 
TAOK2 ; TAOK3 ; TBCK ; TBK1 ; TEC ; TEK ; TESK1 ; TESK2 ; TEX14 ; TGFBR1 ; TGFBR2 ; TIE1 ; TLK1 ; TLK2 ; TNIK ; TNK1 ; TNK2 ; TNNI3K ; TP53RK ; TRIB1 ; TRIB2 ; TRIB3 ; TRIO ; TSSK1B ; TSSK2 ; TSSK3 ; TSSK4 ; TSSK6 ; TTBK1 ; TTBK2 ; TTK ; TTN ; TXK ; TYK2 ; TYRO3 ; UHMK1 ; ULK1 ; ULK2 ; ULK3 ; ULK4 ; VRK1 ; VRK2 ; VRK3 ; WEE1 ; WEE2 ; WNK1 ; WNK2 ; WNK3 ; WNK4 ; YES1 ; ZAP70 References Protein domains Peripheral membrane proteins
Protein kinase domain
[ "Biology" ]
2,781
[ "Protein domains", "Protein classification" ]
14,654,434
https://en.wikipedia.org/wiki/Western%20clawed%20frog
The western clawed frog (Xenopus tropicalis) is a species of frog in the family Pipidae, also known as the tropical clawed frog. It is the only species in the genus Xenopus to have a diploid genome. Its genome has been sequenced, making it a significant model organism for genetics that complements the related species Xenopus laevis (the African clawed frog), a widely used vertebrate model for developmental biology. X. tropicalis also has a number of advantages over X. laevis in research, such as a much shorter generation time (<5 months), smaller size ( body length), and a larger number of eggs per spawn. It is found in Benin, Burkina Faso, Cameroon, Ivory Coast, Equatorial Guinea, Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Nigeria, Senegal, Sierra Leone, Togo, and possibly Mali. Its natural habitats are subtropical or tropical moist lowland forests, moist savanna, rivers, intermittent rivers, swamps, freshwater lakes, intermittent freshwater lakes, freshwater marshes, intermittent freshwater marshes, rural gardens, heavily degraded former forests, water storage areas, ponds, aquaculture ponds, and canals and ditches. Description The western clawed frog is a medium-sized species with a somewhat flattened body and a snout-vent length of , females being larger than males. The eyes are bulging and situated high on the head, and there is a short tentacle just below each eye. A row of unpigmented dermal tubercles runs along the flank from just behind the eye and is thought to represent a lateral line organ. The limbs are short and plump, and the fully webbed feet have horny claws. The skin is finely granular. The dorsal surface varies from pale to dark brown and has small grey and black spots. The ventral surface is dull white or yellowish with some dark mottling. Distribution and habitat The western clawed frog is an aquatic species and is found in the West African rainforest belt, with a range stretching from Senegal to Cameroon and eastern Zaire. It is generally considered a forest-dwelling species and inhabits slow-moving streams, but it is also found in pools and temporary ponds in the northern Guinea and Sudan savannas. Biology In the dry season, this frog lives in shallow streams and hides under tree roots, under flat stones, or in holes in the riverbank. It feeds primarily on earthworms, insect larvae and tadpoles. When the rainy season starts it migrates across the forest floor at night to find temporary pools. Spawning may take place in large pools with much vegetation, but tadpoles are also sometimes found in muddy pools with no vegetation. Single eggs may be attached to plants or they may float. The tadpoles have broad mouths and no jaws, but have long tentacles on their upper lips. The ventral fins of their tails are broader than the dorsal ones. Their body colour is generally orange and the tail transparent, but in darker locations the tail may be blackish. The tadpoles feed by filtering zooplankton from the water. In large water bodies, they may form dense swarms. Metamorphosis takes place when the tadpoles measure about in length. Sex determination Sex determination in the vast majority of amphibians is controlled by homomorphic (morphologically indistinguishable) sex chromosomes. As a result of this difficulty in sex chromosome identification, only a relatively small proportion of anuran species that have been karyotyped have also had their sex chromosomes identified. Of the species in the genus Xenopus, all have homomorphic sex chromosomes. 
Additionally, the DM-W gene on the W chromosome in some Xenopus species is the only sex-determining gene that has been identified in amphibians. This DM-W gene was first identified in X. laevis; however, it is not found in X. tropicalis. Experimentation involving sex-reversed individuals, gynogenesis, triploids, and conventional crosses has determined that X. tropicalis has three sex chromosomes: Y, W, and Z. These three sex chromosomes produce three different male genotypes, YW, YZ, and ZZ (all phenotypically identical), and two different female genotypes, ZW and WW (all phenotypically identical). As a result, offspring of X. tropicalis can have sex ratios that differ from the 1:1 commonly found in species with only two different sex chromosomes. For example, offspring resulting from a ZW female and a YZ male will have a sex ratio of 1:3 females to males, and offspring resulting from a WW female and a ZZ male will be all female. As a result of this sex determination system, both male and female X. tropicalis can be either heterogametic or homogametic, which is extremely rare in nature. The exact genetic mechanism and the exact alleles underlying this system are not yet known. One possible explanation is that the W chromosome contains a female-determining allele with a function that is not found on the Z chromosome, while the Y chromosome contains an allele that acts as a negative regulator that is dominant over the female-determining allele on the W chromosome. Although X. tropicalis does have these three sex chromosomes, the frequency of these three sex chromosomes is not evenly distributed among this species' populations throughout its natural range. The Y chromosome has been identified from two localities in Ghana and in a laboratory strain that originated in Nigeria, and the Z chromosome has been confirmed to exist in individuals from western and eastern Ghana. Additionally, all three sex chromosomes have been found to exist together in X. tropicalis populations in Ghana and potentially elsewhere in its range as well. Having irregular sex ratios in offspring is generally thought to be disadvantageous, so whether the existence of three sex chromosomes in X. tropicalis is evolutionarily stable, or an indication that the species is going through a sex chromosome transition (turnover), is still an open question. It seems likely that the emergence of the Y chromosome is the most recent event in the evolution of this species' sex chromosomes. It is possible that future extinction of the Z chromosome would cause the W chromosome to transition into an X chromosome, making this a species with sex determined by an XY system. It is also possible that if the Y chromosome were to go extinct, the species would revert to using an ancestral ZW system. Status The IUCN lists the western clawed frog as "Least Concern" because it has a wide distribution and is an adaptable species living in a range of habitats, and the population trend seems to be steady. Use as a genetic model system See also Xenopus#Model organism for biological research Xenopus embryos and eggs are a popular model system for a wide range of biomedical research. This animal is widely used because of its powerful combination of experimental tractability and close evolutionary relationship with humans, at least compared to many model organisms. Unlike its sister species X. laevis, X. tropicalis is diploid and has a short generation time, facilitating genetic studies. 
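The unusual sex ratios described in the sex-determination section above follow directly from enumerating parental gametes. The sketch below encodes the rule given in the text — any genotype carrying Y develops male, ZZ develops male, and ZW and WW develop female — as a deliberately simplified model of the underlying genetics:

```python
from collections import Counter
from itertools import product

def offspring_sex_ratio(mother, father):
    """Tally female/male offspring for a cross, given one-letter
    sex-chromosome genotypes such as 'ZW' x 'YZ'."""
    def sex(genotype):
        # Rule from the text: Y acts as a dominant male determinant,
        # W as a female determinant dominant over Z; ZZ is male.
        return "male" if "Y" in genotype or genotype == "ZZ" else "female"
    return Counter(sex("".join(sorted(pair)))
                   for pair in product(mother, father))

print(offspring_sex_ratio("ZW", "YZ"))  # 1 female : 3 males
print(offspring_sex_ratio("WW", "ZZ"))  # all female
```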
The complete genome of X. tropicalis has been sequenced. This species has n=10 chromosomes. X. tropicalis has three transferrin genes, all of which are close orthologs of other vertebrates. They are relatively far from non-vertebrate chordates, and widely divergent from protostome orthologs. Online Model Organism Database Xenbase is the Model Organism Database (MOD) for both Xenopus laevis and Xenopus tropicalis. References External links Xenbase Xenopus model organism database View the xenopus genome in Ensembl Xenopus Amphibians described in 1864 Taxa named by John Edward Gray Animal models
Western clawed frog
[ "Biology" ]
1,597
[ "Model organisms", "Animal models" ]
14,655,285
https://en.wikipedia.org/wiki/Short-chain%20dehydrogenase
The short-chain dehydrogenases/reductases family (SDR) is a very large family of enzymes, most of which are known to be NAD- or NADP-dependent oxidoreductases. As the first member of this family to be characterised was Drosophila alcohol dehydrogenase, this family used to be called 'insect-type', or 'short-chain' alcohol dehydrogenases. Most members of this family are proteins of about 250 to 300 amino acid residues. Most dehydrogenases possess at least 2 domains, the first binding the coenzyme, often NAD, and the second binding the substrate. This latter domain determines the substrate specificity and contains amino acids involved in catalysis. Little sequence similarity has been found in the coenzyme binding domain although there is a large degree of structural similarity, and it has therefore been suggested that the structure of dehydrogenases has arisen through gene fusion of a common ancestral coenzyme nucleotide sequence with various substrate specific domains. Subfamilies Glucose/ribitol dehydrogenase Insect alcohol dehydrogenase family 2,3-dihydro-2,3-dihydroxybenzoate dehydrogenase Human proteins containing this domain BDH1; BDH2; CBR1; CBR3; CBR4; DCXR; DECR1; DECR2; DHRS1; DHRS10; DHRS13; DHRS2; DHRS3; DHRS4; DHRS4L2; DHRS7; DHRS7B; DHRS8; DHRS9; DHRSX; FASN; FVT1; HADH2; HPGD; HSD11B1; HSD11B2; HSD17B1; HSD17B10; HSD17B12; HSD17B13; HSD17B2; HSD17B3; HSD17B4; HSD17B6; HSD17B7; HSD17B7P2; HSD17B8; HSDL1; HSDL2; PECR; QDPR; RDH10; RDH11; RDH12; RDH13; RDH14; RDH16; RDH5; RDH8; RDHE2; RDHS; SCDR10; SPR; WWOX; References Protein domains Single-pass transmembrane proteins
Short-chain dehydrogenase
[ "Biology" ]
524
[ "Protein domains", "Protein classification" ]
14,655,388
https://en.wikipedia.org/wiki/RX%20meter
An RX meter is used to measure the separate resistive and reactive components of a parallel reactive impedance (Z) network. Meter The two variable-frequency oscillators track each other at frequencies 100 kHz apart. The output of a 0.5-250 MHz oscillator, F1, is fed into a bridge. When the impedance network to be measured is connected across one arm of the bridge, the equivalent parallel resistance and reactance (capacitive or inductive) unbalance the bridge, and the resulting voltage is fed to the mixer. The output of the 0.6-250.1 MHz oscillator, F2, tracking 100 kHz above F1, is also fed to the mixer. This results in a 100 kHz difference frequency proportional in level to the bridge unbalance. The difference-frequency signal is amplified by a filter-amplifier combination and is applied to a null meter. When the bridge's resistive and reactive controls are nulled, their respective dials accurately indicate the parallel impedance components of the network under test. The best-known RX meter was the RX250-A, developed in the early 1950s by Boonton Radio Corporation (BRC). After acquiring BRC, Hewlett-Packard continued to sell versions of the meter (both the original and the improved 250B) into the late 1960s. References Electrical engineering Impedance measurements
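The equivalent parallel resistance and reactance that the instrument's dials read out correspond to the real and imaginary parts of the network's admittance, inverted. A short sketch of that conversion (the example impedance is invented):

```python
def parallel_components(z):
    """Convert a complex impedance into its equivalent parallel
    resistance Rp and parallel reactance Xp (ohms). Assumes a
    reactive network, i.e. nonzero real and imaginary parts."""
    y = 1 / z          # admittance = conductance + j*susceptance
    rp = 1 / y.real    # equivalent parallel resistance
    xp = -1 / y.imag   # equivalent parallel reactance
    return rp, xp

# Invented example: a network measuring 30 + 40j ohms in series form
rp, xp = parallel_components(complex(30, 40))
print(f"Rp = {rp:.1f} ohm, Xp = {xp:.1f} ohm")  # Rp = 83.3, Xp = 62.5
```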
RX meter
[ "Physics", "Engineering" ]
277
[ "Electrical engineering", "Impedance measurements", "Physical quantities", "Electrical resistance and conductance" ]
14,655,516
https://en.wikipedia.org/wiki/Cytochrome%20c%20oxidase%20subunit%20I
Cytochrome c oxidase I (COX1) also known as mitochondrially encoded cytochrome c oxidase I (MT-CO1) is a protein that is encoded by the MT-CO1 gene in eukaryotes. The gene is also called COX1, CO1, or COI. Cytochrome c oxidase I is the main subunit of the cytochrome c oxidase complex. In humans, mutations in MT-CO1 have been associated with Leber's hereditary optic neuropathy (LHON), acquired idiopathic sideroblastic anemia, Complex IV deficiency, colorectal cancer, sensorineural deafness, and recurrent myoglobinuria. Structure In humans, the MT-CO1 gene is located from nucleotide pairs 5904 to 7444 on the guanine-rich heavy (H) section of mtDNA. The gene product is a 57 kDa protein composed of 513 amino acids. Function Cytochrome c oxidase subunit I (CO1 or MT-CO1) is one of three mitochondrial DNA (mtDNA) encoded subunits (MT-CO1, MT-CO2, MT-CO3) of cytochrome c oxidase, also known as complex IV. Cytochrome c oxidase is a key enzyme in aerobic metabolism. It is the third and final enzyme of the electron transport chain of mitochondrial oxidative phosphorylation. Proton-pumping heme-copper oxidases represent the terminal, energy-transfer enzymes of respiratory chains in prokaryotes and eukaryotes. The CuB-heme a3 (or heme o) binuclear centre, associated with the largest subunit I of cytochrome c and ubiquinol oxidases, is directly involved in the coupling between dioxygen reduction and proton pumping. Some terminal oxidases generate a transmembrane proton gradient across the plasma membrane (prokaryotes) or the mitochondrial inner membrane (eukaryotes). The enzyme complex consists of 3-4 subunits (prokaryotes) up to 13 polypeptides (mammals), of which only the catalytic subunit (equivalent to mammalian subunit I (COI)) is found in all heme-copper respiratory oxidases. The presence of a bimetallic centre (formed by a high-spin heme and copper B) as well as a low-spin heme, both ligated to six conserved histidine residues near the outer side of four transmembrane spans within COI, is common to all family members. In contrast to eukaryotes, the respiratory chain of prokaryotes is branched into multiple terminal oxidases. The enzyme complexes vary in heme and copper composition, substrate type and substrate affinity. The different respiratory oxidases allow the cells to customize their respiratory systems according to a variety of environmental growth conditions. It has been shown that eubacterial quinol oxidase was derived from cytochrome c oxidase in Gram-positive bacteria and that archaebacterial quinol oxidase has an independent origin. A considerable amount of evidence suggests that Pseudomonadota (also known as proteobacteria or purple bacteria) acquired quinol oxidase through a lateral gene transfer from Gram-positive bacteria. A related nitric-oxide reductase exists in denitrifying species of archaea and eubacteria and is a heterodimer of cytochromes b and c. Phenazine methosulphate can act as acceptor. It has been suggested that cytochrome c oxidase catalytic subunits evolved from ancient nitric oxide reductases that could reduce both nitrogen and oxygen. Clinical significance Mutations in this gene in humans are associated with Leber's hereditary optic neuropathy (LHON), acquired idiopathic sideroblastic anemia, Complex IV deficiency, colorectal cancer, sensorineural deafness, and recurrent myoglobinuria. 
Leber's hereditary optic neuropathy (LHON) LHON, correlated with mutations in MT-CO1, is characterized by optic nerve dysfunction, causing subacute or acute central vision loss. Some patients may display neurological or cardiac conduction defects. Because this disease is a result of mitochondrial DNA mutations affecting the respiratory chain complexes, it is inherited maternally. Acquired Idiopathic Sideroblastic Anemia MT-CO1 may be involved in the development of acquired idiopathic sideroblastic anemia. Mutations in mitochondrial DNA can cause respiratory chain dysfunction, preventing reduction of ferric iron to ferrous iron, which is required for the final step in mitochondrial biosynthesis of heme. The result is an accumulation of ferric iron in mitochondria and insufficient heme production. Mitochondrial Complex IV deficiency (MT-C4D) Mutations in this gene can cause mitochondrial Complex IV deficiency, a disease of the mitochondrial respiratory chain displaying a wide variety of clinical manifestations ranging from isolated myopathy to a severe multisystem disease affecting multiple organs and tissues. Symptoms may include liver dysfunction and hepatomegaly, hypotonia, muscle weakness, exercise intolerance, delayed motor development, mental retardation, developmental delay, and hypertrophic cardiomyopathy. In some patients, the hypertrophic cardiomyopathy is fatal at the neonatal stage. Other affected individuals may manifest Leigh disease. Colorectal cancer (CRC) MT-CO1 mutations play a role in colorectal cancer, a very complex disease displaying malignant lesions in the inner walls of the colon and rectum. Numerous such genetic alterations are often involved with the progression of adenoma, or premalignant lesions, to invasive adenocarcinoma. Long-standing ulcerative colitis, colon polyps, and family history are risk factors for colorectal cancer. Recurrent myoglobinuria mitochondrial (RM-MT) RM-MT is a disease that is characterized by recurrent attacks of rhabdomyolysis (necrosis or disintegration of skeletal muscle) associated with muscle pain and weakness, exercise intolerance, low muscle capacity for oxidative phosphorylation, and followed by excretion of myoglobin in the urine. It has been associated with mitochondrial myopathy. A G5920A mutation and a heteroplasmic G6708A nonsense mutation have been associated with COX deficiency and RM-MT. Deafness, sensorineural, mitochondrial (DFNM) DFNM is a form of non-syndromic deafness with maternal inheritance. Affected individuals manifest progressive, postlingual, sensorineural hearing loss involving high frequencies. The mutation A1555G has been associated with this disease. Subfamilies Cytochrome c oxidase cbb3-type, subunit I Cytochrome o ubiquinol oxidase, subunit I Cytochrome aa3 quinol oxidase, subunit I Cytochrome c oxidase, subunit I bacterial type Use in DNA barcoding MT-CO1 is a gene that is often used as a DNA barcode to identify animal species. The MT-CO1 gene sequence is suitable for this role because its mutation rate is generally fast enough to distinguish closely related species and also because its sequence is conserved among conspecifics. Contrary to the primary objection raised by skeptics that MT-CO1 sequence differences are too small to be detected between closely related species, more than 2% sequence divergence is typically detected between closely related animal species, suggesting that the barcode is effective for most animals. In most if not all seed plants, however, the rate of evolution of MT-CO1 is very slow. 
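The 2% divergence figure above refers to the uncorrected pairwise distance (p-distance) between aligned barcode sequences: the fraction of sites at which they differ. A minimal sketch of that comparison (the 20-bp fragments are invented; real barcoding compares alignments of roughly 650 bp of COI):

```python
def p_distance(seq_a, seq_b):
    """Uncorrected pairwise distance between two aligned DNA
    sequences: the fraction of sites that differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq_a, seq_b))
    return diffs / len(seq_a)

# Invented 20-bp fragments differing at one site (5% divergence):
a = "ATGGCATTACTAATCGGACT"
b = "ATGGCATTACTAATCGGGCT"
d = p_distance(a, b)
print(f"{d:.1%} divergence -> {'different' if d > 0.02 else 'same'} species?")
```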
It has also been suggested that MT-CO1 may be a better gene for DNA barcoding of soil fungi than ITS (the gene most commonly used for mycological barcoding). MT-COI (= CCOI) in colonic crypts The MT-COI protein, also known as CCOI, is usually expressed at a high level in the cytoplasm of colonic crypts of the human large intestine (colon). However, MT-COI is frequently lost in colonic crypts with age in humans and is also often absent in field defects that give rise to colon cancers as well as in portions of colon cancers. The epithelial inner surface of the colon is punctuated by invaginations, the colonic crypts. The colon crypts are shaped like microscopic thick walled test tubes with a central hole down the length of the tube (the crypt lumen). Four tissue sections are shown in the image in this section, two cut across the long axes of the crypts and two cut parallel to the long axes. Most of the human colonic crypts in the images have high expression of the brown-orange stained MT-COI. However, in some of the colonic crypts all of the cells lack MT-COI and appear mostly white, with their main color being the blue-gray staining of the nuclei at the outer walls of the crypts. Greaves et al. showed that deficiencies of MT-COI in colonic crypts are due to mutations in the MT-COI gene. As seen in panel B, a portion of the stem cells of three crypts appear to have a mutation in MT-COI, so that 40% to 50% of the cells arising from those stem cells form a white segment in the cross-cut area. In humans, the percent of colonic crypts deficient for MT-COI is less than 1% before age 40, but then increases linearly with age. On average, the percent of colonic crypts deficient for MT-COI reaches 18% in women and 23% in men by 80–84 years of age. Colonic tumors often arise in a field of crypts containing a large cluster (as many as 410) of MT-COI-deficient crypts. In colonic cancers, up to 80% of tumor cells can be deficient in MT-COI. As seen in panels C and D, crypts are about 75 to about 110 cells long. The average crypt circumference is 23 cells. Based on these measurements, crypts have between 1725 and 2530 cells. Another report gave a range of 1500 to 4900 cells per colonic crypt. The occurrence of frequent crypts with almost complete loss of MT-COI in their 1700 to 5,000 cells suggests a process of natural selection. However, it has also been shown that a deficiency throughout a particular crypt due to an initial mitochondrial DNA mutation may occasionally occur through a stochastic process. Nevertheless, the frequent occurrence of MT-COI deficiency in many crypts within a colon epithelium indicates that absence of MT-COI likely provides a selective advantage. MT-COI is coded for by the mitochondrial chromosome. There are multiple copies of the chromosome in most mitochondria, usually between 2 and 6 per mitochondrion. If a mutation occurs in MT-COI in one chromosome of a mitochondrion, there may be random segregation of the chromosomes during mitochondrial fission to generate new mitochondria. This can give rise to a mitochondrion with primarily or solely MT-COI-mutated chromosomes. A mitochondrion with largely MT-COI-mutated chromosomes would need to have a positive selection bias in order to frequently become the main type of mitochondrion in a cell (a cell with MT-COI-deficient homoplasmy). There are about 100 to 700 mitochondria per cell, depending on cell type. 
Furthermore, there is fairly rapid turnover of mitochondria, so that a mitochondrion with MT-COI-mutated chromosomes and a positive selection bias could shortly become the major type of mitochondrion in a cell. The average half-life of mitochondria in rats, depending on cell type, is between 9 and 24 days, and in mice is about 2 days. In humans it is likely that the half life of mitochondria is also a matter of days to weeks. A stem cell at the base of a colonic crypt that was largely MT-COI-deficient may compete with the other 4 or 5 stem cells to take over the stem cell niche. If this occurs, then the colonic crypt would be deficient in MT-COI in all 1700 to 5,000 cells, as is indicated for some crypts in panels A, B and D of the image. Crypts of the colon can reproduce by fission, as seen in panel C, where a crypt is fissioning to form two crypts, and in panel B where at least one crypt appears to be fissioning. Most crypts deficient in MT-COI are in clusters of crypts (clones of crypts) with two or more MT-COI-deficient crypts adjacent to each other (see panel D). This illustrates that clones of deficient crypts often arise, and thus that there is likely a positive selective bias that has allowed them to spread in the human colonic epithelium. It is not clear why a deficiency of MT-COI should have a positive selective bias. One suggestion is that deficiency of MT-COI in a mitochondrion leads to lower reactive oxygen production (and less oxidative damage) and this provides a selective advantage in competition with other mitochondria within the same cell to generate homoplasmy for MT-COI-deficiency. Another suggestion was that cells with a deficiency in cytochrome c oxidase are apoptosis resistant, and thus more likely to survive. The linkage of MT-COI to apoptosis arises because active cytochrome c oxidase oxidizes cytochrome c, which then activates pro-caspase 9, leading to apoptosis. These two factors may contribute to the frequent occurrence of MT-COI-deficient colonic crypts with age or during carcinogenesis in the human colon. Interactions Within the MITRAC (mitochondrial translation regulation assembly intermediate of cytochrome c oxidase) complex, the encoded protein interacts with COA3 and SMIM20/MITRAC7. This interaction with SMIM20 stabilizes the newly synthesized MT-CO1 and prevents its premature turnover. Additionally, it interacts with TMEM177 in a COX20-dependent manner. References Further reading Protein domains Protein families Transmembrane proteins Human mitochondrial genes
Cytochrome c oxidase subunit I
[ "Biology" ]
3,070
[ "Protein families", "Protein domains", "Protein classification" ]