Dataset columns: id (int64, 580 to 79M); url (string, 31 to 175 characters); text (string, 9 to 245k characters); source (string, 1 to 109 characters); categories (string, 160 distinct classes); token_count (int64, 3 to 51.8k).
486,525
https://en.wikipedia.org/wiki/Surface%20modification
Surface modification is the act of modifying the surface of a material by bringing about physical, chemical or biological characteristics different from those originally found on the surface of the material. The modification is usually made to solid materials, but examples can also be found of modifying the surface of specific liquids. The modification can be done by different methods with a view to altering a wide range of characteristics of the surface, such as roughness, hydrophilicity, surface charge, surface energy, biocompatibility and reactivity. Surface engineering Surface engineering is the sub-discipline of materials science which deals with the surface of solid matter. It has applications to chemistry, mechanical engineering, and electrical engineering (particularly in relation to semiconductor manufacturing). Solids are composed of a bulk material covered by a surface. The surface which bounds the bulk material is called the surface phase. It acts as an interface to the surrounding environment. The bulk material in a solid is called the bulk phase. The surface phase of a solid interacts with the surrounding environment, and this interaction can degrade the surface phase over time. Such environmental degradation can be caused by wear, corrosion, fatigue and creep. Surface engineering involves altering the properties of the surface phase in order to reduce this degradation over time. This is accomplished by making the surface robust to the environment in which it will be used. Applications and Future of Surface Engineering Surface engineering techniques are used in the automotive, aerospace, missile, power, electronic, biomedical, textile, petroleum, petrochemical, chemical, steel, cement, machine tool and construction industries. Surface engineering techniques can be used to develop a wide range of functional properties, including physical, chemical, electrical, electronic, magnetic, mechanical, wear-resistant and corrosion-resistant properties at the required substrate surfaces. Almost all types of materials, including metals, ceramics, polymers, and composites, can be coated on similar or dissimilar materials. It is also possible to form coatings of newer materials (e.g., metallic glass, beta-C3N4), graded deposits, multi-component deposits, etc. In 1995, surface engineering was a £10 billion market in the United Kingdom; coatings to protect surfaces against wear and corrosion accounted for approximately half of that market. Functionalization of antimicrobial surfaces can be used for sterilization in the health industry, for self-cleaning surfaces, and for protection from biofilms. In recent years, there has been a paradigm shift in surface engineering from age-old electroplating to processes such as vapor phase deposition, diffusion, thermal spray and welding, using advanced heat sources such as plasma, laser, ion, electron, microwave, solar beams, synchrotron radiation, pulsed arc, pulsed combustion, spark, friction and induction. Losses due to wear and corrosion in the US are estimated at approximately $500 billion. In the US, around 9,524 establishments (including those in the automotive, aircraft, power and construction industries) depend on engineered surfaces, with support from 23,466 supplying industries. Surface functionalization Surface functionalization introduces chemical functional groups to a surface.
This way, materials with functional groups on their surfaces can be designed from substrates with standard bulk material properties. Prominent examples can be found in the semiconductor industry and in biomaterials research. Polymer Surface Functionalization Plasma processing technologies are successfully employed for polymer surface functionalization. See also Surface finishing Surface science Tribology Surface metrology Surface modification of biomaterials with proteins Flame treatment References Bibliography R. Chattopadhyay, 'Advanced Thermally Assisted Surface Engineering Processes', Kluwer Academic Publishers, MA, USA (now Springer, NY), 2004. R. Chattopadhyay, 'Surface Wear: Analysis, Treatment, and Prevention', ASM International, Materials Park, OH, USA, 2001. S. Konda, 'Flame-based synthesis and in situ functionalization of palladium alloy nanoparticles', AIChE Journal, 2018, https://onlinelibrary.wiley.com/doi/full/10.1002/aic.16368 External links Institute of Surface Chemistry and Catalysis, Ulm University Engineering disciplines Materials science
Surface modification
Physics,Materials_science,Engineering
863
1,074,656
https://en.wikipedia.org/wiki/Logistello
Logistello is a computer program that plays the game Othello, also known as Reversi. Logistello was written by Michael Buro and is regarded as a strong player, having beaten the human world champion Takeshi Murakami six games to none in 1997 — the best Othello programs are now much stronger than any human player. Logistello's evaluation function is based on disc patterns and features over a million numerical parameters which were tuned using linear regression. See also Computer Othello External links Game artificial intelligence Reversi software
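The pattern-plus-regression idea in the row above can be sketched compactly. The following toy Python sketch is not Buro's actual evaluator: the column-count "patterns", the board encoding and the training labels are illustrative assumptions. It shows the general shape of a linear evaluation function whose weights are tuned by least-squares regression against scored positions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(board):
    """Toy pattern features: disc balance of each of the 8 columns.

    Logistello's real evaluator indexes precomputed tables with edge,
    diagonal and corner disc patterns; a column count is a stand-in.
    """
    return board.sum(axis=0)

# Toy training data: random positions (discs encoded -1/0/+1) labelled
# with a noisy score; in practice labels derive from game outcomes.
boards = rng.integers(-1, 2, size=(500, 8, 8))
X = np.array([features(b) for b in boards])
y = boards.sum(axis=(1, 2)) + rng.normal(0, 1, size=500)

# Linear regression: least-squares fit of one weight per feature.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

def evaluate(board):
    """Evaluation = dot product of tuned weights with pattern features."""
    return float(features(board) @ weights)

print(evaluate(boards[0]))  # approximates the disc-balance score
```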
Logistello
Mathematics
115
31,922,376
https://en.wikipedia.org/wiki/Dick%20Bayless
Harry Owen "Dick" Bayless (September 6, 1885 – December 16, 1920) was an American professional baseball player. He was an outfielder for one season (1908) with the Cincinnati Reds. He played in the minor leagues through 1917. He died three years later in a copper mine explosion in Santa Rita, New Mexico. References 1883 births 1920 deaths Cincinnati Reds players Major League Baseball outfielders Baseball players from Missouri Minor league baseball managers Springfield Reds players Springfield Midgets players Joplin Miners players Wichita Jobbers players Dayton Veterans players Atlanta Crackers players Mobile Sea Gulls players Vernon Tigers players Memphis Chickasaws players Venice Tigers players Salt Lake City Bees players Lincoln Links players People from Joplin, Missouri People from Santa Rita, New Mexico Industrial accident deaths Deaths from explosion Accidental deaths in New Mexico 20th-century American sportsmen
Dick Bayless
Chemistry
166
6,855,504
https://en.wikipedia.org/wiki/Seychelles%20Time
Seychelles Time, or SCT, is a time zone used by the nation of Seychelles in the Somali Sea. The zone is four hours ahead of UTC (UTC+04:00). Daylight saving time is not observed in this time zone. Time zones
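For readers who want the offset programmatically, here is a minimal Python sketch using the standard-library zoneinfo module; it assumes Indian/Mahe is the IANA identifier covering Seychelles, which is worth verifying against your tz data.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

sct = ZoneInfo("Indian/Mahe")  # assumed IANA zone for Seychelles
now = datetime.now(tz=sct)

print(now.isoformat())   # trailing +04:00 offset
print(now.utcoffset())   # 4:00:00 in every month: no DST is observed
```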
Seychelles Time
Physics
53
5,008,696
https://en.wikipedia.org/wiki/Psi%20Cancri
The Bayer designation Psi Cancri (ψ Cnc, ψ Cancri) is shared by two star systems, separated by 0.34° on the sky, in the constellation Cancer: ψ¹ Cancri ψ² Cancri, which is often referred to solely as ψ Cancri Cancer (constellation) Cancri, Psi
Psi Cancri
Astronomy
72
665,738
https://en.wikipedia.org/wiki/Magnetic%20shape-memory%20alloy
A magnetic shape-memory alloy (MSMA) is a type of smart material that can undergo significant and reversible changes in shape in response to a magnetic field. This behavior arises from a combination of magnetic and shape-memory properties within the alloy, allowing it to produce mechanical motion or force under magnetic actuation. MSMAs are commonly made from ferromagnetic materials, particularly nickel-manganese-gallium (Ni-Mn-Ga), and are useful in applications requiring rapid, controllable, and repeatable movement. Introduction MSM alloys are ferromagnetic materials that can produce motion and forces under moderate magnetic fields. Typically, MSMAs are alloys of nickel, manganese and gallium (Ni-Mn-Ga). A magnetically induced deformation of about 0.2% was presented in 1996 by Dr. Kari Ullakko and co-workers at MIT. Since then, improvements in the production process and in the subsequent treatment of the alloys have led to deformations of up to 6% for commercially available single-crystalline Ni-Mn-Ga MSM elements, as well as up to 10–12% and 20% for new alloys at the R&D stage. The large magnetically induced strain, as well as the short response times, make the MSM technology very attractive for the design of innovative actuators to be used in pneumatics, robotics, medical devices and mechatronics. MSM alloys change their magnetic properties depending on the deformation. This companion effect, which co-exists with the actuation, can be useful for the design of displacement, speed or force sensors and mechanical energy harvesters. The magnetic shape memory effect occurs in the low-temperature martensite phase of the alloy, where the elementary cells composing the alloy have tetragonal geometry. If the temperature is increased beyond the martensite-austenite transformation temperature, the alloy goes to the austenite phase, where the elementary cells have cubic geometry; with such geometry the magnetic shape memory effect is lost. The transition from martensite to austenite produces force and deformation. Therefore, MSM alloys can also be activated thermally, like thermal shape memory alloys (see, for instance, nickel-titanium (Ni-Ti) alloys). The magnetic shape memory effect The mechanism responsible for the large strain of MSM alloys is the so-called magnetically induced reorientation (MIR), sketched in the figure. Like other ferromagnetic materials, MSM alloys exhibit a macroscopic magnetization when subjected to an external magnetic field, emerging from the alignment of elementary magnetizations along the field direction. However, unlike standard ferromagnetic materials, the alignment is obtained by the geometric rotation of the elementary cells composing the alloy, and not by rotation of the magnetization vectors within the cells (as in magnetostriction). A similar phenomenon occurs when the alloy is subjected to an external force. Macroscopically, the force acts like the magnetic field, favoring the rotation of the elementary cells and achieving elongation or contraction depending on the direction in which it is applied. The elongation and contraction processes are shown in the figure where, for example, the elongation is achieved magnetically and the contraction mechanically. The rotation of the cells is a consequence of the large magnetic anisotropy of MSM alloys and the high mobility of the internal regions.
Simply speaking, an MSM element is composed of internal regions, each having a different orientation of the elementary cells (the regions are shown in the figure in green and blue). These regions are called twin variants. The application of a magnetic field or of an external stress shifts the boundaries between the variants, called twin boundaries, and thus favors one variant or the other. When the element is completely contracted or completely elongated, it is formed by only one variant and is said to be in a single-variant state. The magnetization of the MSM element along a fixed direction differs depending on whether the element is in the contraction or in the elongation single-variant state. The magnetic anisotropy is the difference between the energy required to magnetize the element in the contraction single-variant state and in the elongation single-variant state. The value of the anisotropy is related to the maximum work output of the MSM alloy, and thus to the available strain and force that can be used for applications. Properties The main properties of the MSM effect for commercially available elements are summarized below (other aspects of the technology and of the related applications are described in the references): strain up to 6%; maximum generated stress up to 3 MPa; minimum magnetic field for maximum strain: 500 kA/m; full strain (6%) up to 2 MPa load; work output per unit volume of about 150 kJ/m³; energetic efficiency (conversion between input magnetic energy and output mechanical work) of about 90%; internal friction stress of around 0.5 MPa; magnetic and thermal activation; operating temperatures between −40 and 60 °C; change in magnetic permeability and electric resistivity during deformation. Fatigue Properties The fatigue life of MSMAs is of particular interest for actuation applications due to the high-frequency cycling, so improving the microstructure of these alloys has been of particular interest. Researchers have improved the fatigue life up to 2×10⁹ cycles at a maximum stress of 2 MPa, providing promising data to support real application of MSMAs in devices. Although high fatigue life has been demonstrated, this property has been found to be controlled by the internal twinning stress in the material, which is dependent on the crystal structure and twin boundaries. Additionally, inducing a fully strained (elongated or contracted) state in an MSMA has been found to reduce fatigue life, so this must be taken into consideration when designing functional MSMA systems. In general, reducing defects such as surface roughness that cause stress concentration can increase the fatigue life and fracture resistance of MSMAs. Development of the alloys Standard alloys are nickel-manganese-gallium (Ni-Mn-Ga) alloys, which have been investigated since the first relevant MSM effect was published in 1996. Other alloys under investigation are iron-palladium (Fe-Pd) alloys, nickel-iron-gallium (Ni-Fe-Ga) alloys, and several derivatives of the basic Ni-Mn-Ga alloy which additionally contain iron (Fe), cobalt (Co) or copper (Cu). The main motivation behind the continuous development and testing of new alloys is to achieve improved thermo-magneto-mechanical properties, such as a lower internal friction, a higher transformation temperature and a higher Curie temperature, which would allow the use of MSM alloys in more applications. At present, the operating temperature range of standard alloys extends up to 50 °C; recently, an 80 °C alloy has been presented.
Due to the twin boundary motion mechanism required for the magnetic shape memory effect to occur, the highest-performing MSMAs in terms of maximum induced strain have been single crystals. Additive manufacturing has been demonstrated as a technique to produce porous polycrystalline MSMAs. As opposed to fully dense polycrystalline MSMAs, porous structures allow more freedom of motion, which reduces the internal stress required to activate martensitic twin boundary motion. Additionally, post-process heat treatments such as sintering and annealing have been found to significantly increase the hardness and reduce the elastic moduli of Ni-Mn-Ga alloys. Applications MSM actuator elements can be used where fast and precise motion is required. They are of interest due to the faster actuation using a magnetic field as compared to the heating/cooling cycles required for conventional shape memory alloys, which also promises a higher fatigue lifetime. Possible application fields are robotics, manufacturing, medical surgery, valves, dampers and sorting. MSMAs have been of particular interest for actuators (e.g. microfluidic pumps for lab-on-a-chip devices) since they are capable of large force and stroke outputs in relatively small spatial regions. Also, due to their high fatigue life and their ability to produce an electromotive force from changes in magnetic flux, MSMAs are of interest in energy-harvesting applications. The twinning stress, or internal frictional stress, of an MSMA determines the efficiency of actuation, so the operational design of MSM actuators is based on the mechanical and magnetic properties of a given alloy; for example, the magnetic permeability of an MSMA is a function of strain. The most common MSM actuator design consists of an MSM element controlled by permanent magnets producing a rotating magnetic field and a spring providing a restoring force during shape-memory cycling. Limitations on the magnetic shape memory effect due to crystal defects determine the efficiency of MSMAs in applications. Since the MSM effect is also temperature dependent, these alloys can be tailored to shift the transition temperature by controlling microstructure and composition. References Smart materials
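As a back-of-envelope illustration of what the figures in the Properties section imply for an actuator, the Python sketch below sizes a hypothetical MSM element; the 20 mm length and 2 mm × 3 mm cross-section are assumptions, not vendor data.

```python
strain_max = 0.06     # 6 % field-induced strain (quoted above)
stress_max = 3e6      # Pa, maximum generated stress (quoted above)
work_density = 150e3  # J/m^3, work output per unit volume (quoted above)

length = 20e-3        # m, element length (assumed)
area = 2e-3 * 3e-3    # m^2, 2 mm x 3 mm cross-section (assumed)

stroke = strain_max * length          # maximum stroke
force = stress_max * area             # blocking force at maximum stress
work = work_density * length * area   # work available per actuation cycle

print(f"stroke: {stroke * 1e3:.1f} mm")  # ~1.2 mm
print(f"force:  {force:.0f} N")          # ~18 N
print(f"work:   {work * 1e3:.1f} mJ")    # ~18 mJ
```

Millimetre strokes with tens of newtons from a centimetre-scale element, at the quoted response times, is what makes the valve and pump applications above plausible.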
Magnetic shape-memory alloy
Materials_science,Engineering
1,826
78,285,925
https://en.wikipedia.org/wiki/Code%20ownership
In software engineering, code ownership is a term used to describe the control of an individual software developer or a development team over source code modifications of a module or a product. Definitions While the term is very popular, there is no universally accepted definition of it. Koana et al., in their 2024 literature review, found 28 different definitions and classified them as follows: Psychological ownership is a feeling by the developer of ownership of, and pride in, a particular element of the project; Corporeal ownership is a set of formal or informal rules defining responsibility for a particular software piece. The rules depend on the development approach taken by the team, but generally can be partitioned along the lines of "what is being owned?" / "who owns it?" / "what is the degree of control?": while the answer to "what?" is typically some part of the source code, the ownership concept has also been applied to other artifacts of software development, as diverse as an entire project or a single software bug; the owner ("who?") might be an individual developer or a group that might include authors of the code, reviewers, and managers. The two extremes are represented by dedicated ownership, with just one developer responsible for any particular piece of code, and collective code ownership, where every member of the team owns all the code; the degree of control by an owner can vary from a mandatory code review, to responsibility for testing, to a complete implementation. Authorship Some researchers also use the term to describe the authorship of software (identifying who wrote a particular line of software). Koana et al. state that this is a different, although related, meaning, as the code owner might not be the original author of the software piece. Influence upon quality It is generally accepted that a lack of clear code ownership (usually in the form of many developers freely applying small changes to a shared piece of code) causes errors to be introduced. At the same time, with no code owner, the knowledge about an artifact can be lost. This is confirmed by large-scale studies, for example ones involving Windows 7 and Windows Vista. Code owners in version control Modern version control systems allow explicit designation of code owners for particular files or directories (cf. the GitHub CODEOWNERS feature). Typically, the code owner either receives notifications for all changes in the owned code or is required to approve each change. References Sources Software engineering terminology
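To make the CODEOWNERS mechanism concrete, here is a minimal sketch of such a file; the paths and team names are hypothetical. On GitHub the file lives in the repository root, docs/ or .github/, and the last pattern matching a changed file decides whose review is requested.

```
# Hypothetical CODEOWNERS file. Default owners for anything not matched
# by a later, more specific rule:
*            @example-org/core-team

# Directory-level ownership:
/docs/       @example-org/docs-team

# File-type ownership anywhere in the tree:
*.sql        @dba-alice

# Later rules override earlier ones for the files they match:
/src/net/    @bob @example-org/network-reviewers
```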
Code ownership
Technology,Engineering
492
1,435,276
https://en.wikipedia.org/wiki/Alfa%20%28rocket%29
Alfa was the designation of an Italian ballistic missile program that started in 1971 under the control of the GRS (Gruppo di Realizzazione Speciale Interforze). It was related to the Polaris A-3 missile. Development Born from the development effort for efficient solid-propellant rocket engines, the Alfa was planned as a two-stage missile. Test launches with an upper stage mockup took place between 1973 and 1975, from Salto di Quirra. The Alfa was long and had a diameter of . The first stage of the Alfa was long and contained 6 t of HTPB-based composite solid propellant (73% AP, 15% binder and 12% aluminium). It supplied a thrust of 232 kN for a duration of 57 seconds. It could carry a one tonne warhead for a range of 1,600 kilometres (990 mi), placing European Russia and Moscow in range of the Adriatic Sea. Italy has been active in the space sector since 1957, conducting launch and control operations from the Luigi Broglio Space Centre. The advanced Scout and Vega launchers currently used by the European Space Agency (ESA) derive their technological basis partially from Alfa studies. See also Italian nuclear weapons program References Medium-range ballistic missiles Guided missiles of Italy Naval weapons of Italy
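A quick consistency check on the first-stage figures quoted above fits in a few lines of Python; treating the 232 kN as a constant average thrust over the 57-second burn is a simplifying assumption.

```python
thrust = 232e3      # N, first-stage thrust (from the text)
burn_time = 57.0    # s, burn duration (from the text)
prop_mass = 6000.0  # kg, solid propellant load (from the text)
g0 = 9.80665        # m/s^2, standard gravity

total_impulse = thrust * burn_time      # N*s
isp = total_impulse / (prop_mass * g0)  # s, effective specific impulse

print(f"total impulse: {total_impulse / 1e6:.1f} MN*s")  # ~13.2 MN*s
print(f"specific impulse: {isp:.0f} s")                  # ~225 s
```

An effective specific impulse around 225 s is in the expected range for a 1970s composite solid motor, so the quoted thrust, burn time and propellant load are mutually consistent.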
Alfa (rocket)
Astronomy
265
27,728,331
https://en.wikipedia.org/wiki/Motorola%20C390
The Motorola C390 is a low-cost 900/1800/1900-band GSM mobile phone, manufactured by Motorola. It was released in the fourth quarter of 2004 as a successor to the C385; the main difference is the addition of Bluetooth on the C390. Dimensions are 107 × 44 × 20.9 mm, and the weight is 91 g. It was available in Dark Blue Green Soft Feel and Dark Roast Black. Main features Downloadable wallpapers, screensavers and ringtones; MMS, EMS and SMS; WAP 2.0 and GPRS for Internet access; 1.8 MB internal memory; CSTN display with 65,000 colours, 128 × 128 pixels, 5 lines; Java, MIDP 2.0; Bluetooth v1.1; phonebook with 500 entries; GPRS (Class 10, 32–48 kbit/s); USB; iTap C390 Mobile phones introduced in 2004
Motorola C390
Technology
186
40,298,271
https://en.wikipedia.org/wiki/Fable%20Legends
Fable Legends is a cancelled cooperative action role-playing video game developed by Lionhead Studios and intended to be published by Microsoft Studios for Windows and Xbox One. Microsoft cancelled the game on 7 March 2016, and the servers shut down on 13 April 2016. Gameplay Fable Legends was based around four Heroes and a Villain. Each role could be filled by a player via online multiplayer or by an AI. The same game experience was possible regardless of multiplayer or single player (with four AI). All of the game's story and quests could have been played single player, using AI heroes as sidekicks or enemies. It was possible to play through the game's content as either a Hero or as a Villain. During each quest, the four Hero characters must use teamwork to succeed in their objectives, while the Villain player opposes them with an army of creatures. Heroes Each Hero in Fable Legends was to be a unique character with unique abilities, powers, and gameplay. Several playable heroes were identified: Sterling, a Prince Charming type of character who flourishes a rapier and wisecracks; Winter, who is focused on will-based abilities and ice attacks; Rook, focusing on ranged combat with a crossbow; and Inga, a paladin-like character wearing heavy armor and wielding a sword and shield. Players could customize any Hero, ranging from colors and faces to outfits. Customizations would have been unlocked either with earned in-game silver (the in-game currency) or by purchasing them with real-life money; some cosmetic items might only have been purchasable. Hero Rotation A limited number of heroes would have been available for free at a given time, after which a new set of heroes would take their place for everybody to play for a period of time. Heroes could also have been purchased for permanent access with earned in-game currency or real-life currency. Villains The Villain player controls the nature of the quest the Hero characters embark on, such as where enemies spawn, how aggressive they are, when the boss will come lumbering out of its lair, and when to bring down an impassable portcullis or lay a trap to separate Heroes from each other to thwart them. The Villain has a certain number of "creature points", which he uses during a setup phase to plan his strategy. Each creature costs a certain number of points to summon. During setup, the Villain can also place a certain number of interactive objects in the quest, such as traps and gates. Once the battle has begun, the Villain player focuses on ordering his creatures about in real time, in a similar manner to an RTS game. He can order the creatures to attack a specific Hero, to activate special abilities, and to position for ambushes. During combat, he can also activate gates to damage and split up the Heroes, and use his traps to distract and wound them. Social play Like other games in the series, Fable Legends would have allowed players to interact with villagers and customize their characters with weapons, looks, armour, abilities and more. In the hub city of Brightlodge, players would have had the opportunity to partake in jobs, play mini-games and enjoy pub games. Once players selected a quest, they would be sent out into the world. Platforms Since the game had multiplayer capabilities, players would have required an active Xbox Live subscription to play on Xbox One. On Windows 10, it was set to have a free-to-play model. Gameplay would have been in sync across platforms: players could have played on Windows 10 and continued their progress on Xbox One, and vice versa.
Synopsis Fable Legends takes place several hundred years before the events of the original trilogy. This is a period of magic, folklore, and mythology, and humanity has yet to discover meaningful technology. Most people huddle in small villages, too witless and scared to venture out into the frightening world beyond. Heroes are more common than in later eras, but there is no Heroes' Guild yet, and the Heroes must rely on each other to succeed. The story of one quest revealed at gamescom told of an ancient artifact called "The Moon on the Stick", to which the children of Albion once made wishes. The heroes in Fable Legends are on a quest to locate this artifact. Development Fable Legends began development in the summer of 2012 and was announced on 20 August 2013 with a cinematic trailer directed by Ben Hibon and narrated by Michael Gambon as the Villain. The first gameplay footage was shown in June 2014, with gameplay performed on stage by the development team. A limited, closed multiplayer beta began on 16 October 2014. Made on a budget of around $75 million, it was going to be one of the most expensive video games of all time. The game was intended to have a 5–10 year lifecycle and to be integrated into the cloud features of the Xbox One. SmartGlass features would have allowed villain players to make their plan of attack before a quest. Microsoft also intended to release Fable Legends on Windows 10, exclusive to the Windows 10 Store. The game would have featured cross-platform multiplayer between Microsoft Windows and Xbox One. Support for DirectX 12 was also to be added at the game's release. Lionhead confirmed that the game would use a free-to-play model. Initially planned for 2015, the game was officially delayed to the following year to give Lionhead Studios additional time to polish it. An open beta was set to be available in the first or second quarter of 2016. Cancellation Microsoft cancelled the game in March 2016 and closed Lionhead Studios. The game's beta ended on 13 April, with players who had purchased in-game gold receiving full refunds for all in-game purchases. References External links Fable Legends at Xbox.com Action role-playing video games Asymmetrical multiplayer video games Cancelled Windows games Cancelled Xbox One games Cooperative video games Fable (video game series) Free-to-play video games Lionhead Studios games Microsoft games Unreal Engine 4 games Video game prequels Video games developed in the United Kingdom Video games with gender-selectable protagonists Video games scored by Russell Shaw
Fable Legends
Physics
1,239
15,325,913
https://en.wikipedia.org/wiki/Gorn%20address
A Gorn address (Gorn, 1967) is a method of identifying and addressing any node within a tree data structure. This notation is often used for identifying nodes in a parse tree defined by phrase structure rules. A Gorn address is a sequence of zero or more integers conventionally separated by dots, e.g., 0 or 1.0.1. The root, which Gorn calls *, can be regarded as the empty sequence, and the j-th child of the node with address A has the address A.j, with children counted from 0. It is named after the American computer scientist Saul Gorn. References Gorn, S. (1967). Explicit definitions and linguistic dominoes. Systems and Computer Science, Eds. J. Hart & S. Takasu, 77–115. University of Toronto Press, Toronto, Canada. Natural language processing
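A minimal Python sketch of Gorn addressing (the tree representation and function name are illustrative choices, not from Gorn 1967): each node is a (label, children) pair, and an address string is resolved by repeatedly indexing into children.

```python
def gorn_lookup(tree, address):
    """Return the node of `tree` at the given Gorn address.

    A node is a (label, children) pair. The empty string addresses the
    root (Gorn's *); otherwise the address is a dot-separated sequence
    of child indices, counted from 0, e.g. "1.0".
    """
    node = tree
    if address:
        for i in map(int, address.split(".")):
            node = node[1][i]  # node[1] is the list of children
    return node

# Parse tree for "Alice sings": (S (NP Alice) (VP (V sings)))
tree = ("S", [("NP", [("Alice", [])]),
              ("VP", [("V", [("sings", [])])])])

assert gorn_lookup(tree, "")[0] == "S"     # root, address *
assert gorn_lookup(tree, "0")[0] == "NP"   # 0th child of the root
assert gorn_lookup(tree, "1.0")[0] == "V"  # 0th child of the 1st child
```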
Gorn address
Technology
170
28,105,467
https://en.wikipedia.org/wiki/Sir%20John%20Carling%20Building
The Sir John Carling Building was located along Carling Avenue at the Central Experimental Farm in Ottawa, Ontario, Canada. Until 2010, it was the headquarters of Agriculture and Agri-Food Canada, containing administration facilities and the offices of the Minister and Deputy Minister of Agriculture. Named after John Carling, it was an 11-storey building accommodating some 1,200 employees, with a 3-storey east wing for shipping and receiving and a single-storey cafeteria wing with an arched roof. It was demolished on July 13, 2014; the cafeteria wing is the only part of the building to remain. History In the early 1950s, the offices of the federal agriculture department were scattered over 18 different sites, prompting the planning for the Carling Building, which began in 1954. Ottawa architect Hart Massey (1918–1996) designed the Sir John Carling Building in the 1960s, and it opened in 1967, Canada's centennial year. Massey was the son of Vincent Massey, former Governor General of Canada and a member of the famous Massey family of Toronto. The construction costs were CA$10 million, of which CA$800,000 was Massey's fee. By 1994, a study had already found that the building was suffering from long-term neglect and "may not be worth saving". By 2003, renovation costs were estimated at CA$57 million. A year later, the Federal Heritage Buildings Review Office designated it as a recognized federal heritage building for its historical associations and its architectural and environmental values; the building was a good example of the modernist architectural style. In 2009, the building was deemed to be at its end of life, and the agriculture offices were moved to the Skyline Office Campus on Baseline Road. Despite local objections and its recognized heritage status, deconstruction of the Carling Building began in April 2013, culminating in a controlled building implosion on July 13, 2014. Afterwards, the concrete was pulverized and the site covered with topsoil and trees. The total cost of the demolition was CA$4.8 million. References External links Video of Carling Building implosion Federal government buildings in Ottawa Demolished buildings and structures in Ottawa Government buildings completed in 1967 Buildings and structures demolished in 2014 Buildings and structures demolished by controlled implosion 1967 establishments in Ontario 2009 disestablishments in Ontario
Sir John Carling Building
Engineering
469
285,522
https://en.wikipedia.org/wiki/Superheating
In thermodynamics, superheating (sometimes referred to as boiling retardation, or boiling delay) is the phenomenon in which a liquid is heated to a temperature higher than its boiling point, without boiling. This is a so-called metastable state or metastate, where boiling might occur at any time, induced by external or internal effects. Superheating is achieved by heating a homogeneous substance in a clean container, free of nucleation sites, while taking care not to disturb the liquid. This may occur by microwaving water in a very smooth container. Disturbing the water may cause an unsafe eruption of hot water and result in burns. Cause Water is said to "boil" when bubbles of water vapor grow without bound, bursting at the surface. For a vapor bubble to expand, the temperature must be high enough that the vapor pressure exceeds the ambient pressure (the atmospheric pressure, primarily). Below that temperature, a water vapor bubble will shrink and vanish. Superheating is an exception to this simple rule; a liquid is sometimes observed not to boil even though its vapor pressure does exceed the ambient pressure. The cause is an additional force, the surface tension, which suppresses the growth of bubbles. Surface tension makes the bubble act like an elastic balloon. The pressure inside is raised slightly by the "skin" attempting to contract. For the bubble to expand, the temperature must be raised slightly above the boiling point to generate enough vapor pressure to overcome both surface tension and ambient pressure. What makes superheating so explosive is that a larger bubble is easier to inflate than a small one; just as when blowing up a balloon, the hardest part is getting started. It turns out the excess pressure Δp due to surface tension is inversely proportional to the diameter d of the bubble; that is, Δp = 4γ/d, where γ is the surface tension. This can be derived by imagining a plane cutting a bubble into two halves. Each half is pulled towards the middle with a surface tension force γπd acting along the circumference of the cut, which must be balanced by the force Δp·πd²/4 from the excess pressure acting on the cross-section. So we obtain γπd = Δp·πd²/4, which simplifies to Δp = 4γ/d. This means that if the largest bubbles in a container are small, only a few micrometres in diameter, overcoming the surface tension may require a large Δp, requiring the boiling point to be exceeded by several degrees Celsius. Once a bubble does begin to grow, the surface tension pressure decreases, so it expands explosively in a positive feedback loop. In practice, most containers have scratches or other imperfections which trap pockets of air that provide starting bubbles, and impure water containing small particles can also trap air pockets. Only a smooth container of purified liquid can reliably superheat. Occurrence via microwave oven Superheating can occur when an undisturbed container of water is heated in a microwave oven. At the time the container is removed, the lack of nucleation sites prevents boiling, leaving the surface calm. However, once the water is disturbed, some of it violently flashes to steam, potentially spraying boiling water out of the container. The boiling can be triggered by jostling the cup, inserting a stirring device, or adding a substance like instant coffee or sugar. The chance of superheating is greater with smooth containers, because scratches or chips can house small pockets of air, which serve as nucleation points. Superheating is more likely after repeated heating and cooling cycles of an undisturbed container, as when a forgotten coffee cup is re-heated without being removed from a microwave oven.
This is due to the heating cycles releasing dissolved gases, such as oxygen and nitrogen, from the water. There are ways to prevent superheating in a microwave oven, such as putting a spoon or stir stick into the container beforehand, or using a scratched container. To avoid dangerous sudden boiling, it is recommended not to microwave water for an excessive amount of time. Applications Superheated liquid hydrogen is used in bubble chambers. See also Autoclave Boiling chip Bumping (chemistry) Critical point (thermodynamics) Supercooling Supersaturation Subcooling References External links Video of superheated water in a microwave explosively flash boiling, why it happens, and why it's dangerous. Video of superheated water in a pot. Phases of matter Thermodynamic processes Fluid dynamics
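Putting numbers to the Δp = 4γ/d relation derived in the Cause section: a short Python sketch, in which the surface tension of water near 100 °C and the trapped-bubble size are illustrative assumptions.

```python
# Excess pressure needed to grow a small vapor bubble (dp = 4*gamma/d).
gamma = 0.059        # N/m, surface tension of water near 100 C (approx.)
d = 2e-6             # m, diameter of the largest trapped bubble (assumed)
p_ambient = 101.3e3  # Pa, atmospheric pressure

dp = 4 * gamma / d        # surface-tension excess pressure
p_vapor = p_ambient + dp  # vapor pressure the bubble must reach to grow

print(f"excess pressure: {dp / 1e3:.0f} kPa")            # ~118 kPa
print(f"required vapor pressure: {p_vapor / 1e3:.0f} kPa")  # ~220 kPa
# Water's vapor pressure reaches ~220 kPa near 123 C, i.e. roughly
# 23 C of superheat would be needed before a 2 um bubble can grow.
```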
Superheating
Physics,Chemistry,Engineering
867
1,461,209
https://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue%20lemma
In mathematics, the Riemann–Lebesgue lemma, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an L1 function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis. Statement Let f ∈ L¹(ℝⁿ) be an integrable function, i.e. f is a measurable function such that ∫|f(x)| dx < ∞, and let f̂ be the Fourier transform of f, i.e. f̂(ξ) = ∫ f(x) e^(−ix·ξ) dx. Then f̂ vanishes at infinity: |f̂(ξ)| → 0 as |ξ| → ∞. Because the Fourier transform of an integrable function is continuous, the Fourier transform f̂ is a continuous function vanishing at infinity. If C₀(ℝⁿ) denotes the vector space of continuous functions vanishing at infinity, the Riemann–Lebesgue lemma may be formulated as follows: the Fourier transformation maps L¹(ℝⁿ) to C₀(ℝⁿ). Proof We will focus on the one-dimensional case n = 1; the proof in higher dimensions is similar. First, suppose that f is continuous and compactly supported. For ξ ≠ 0, the substitution x ↦ x + π/ξ leads to f̂(ξ) = ∫ f(x) e^(−ixξ) dx = −∫ f(x + π/ξ) e^(−ixξ) dx. This gives a second formula for f̂(ξ). Taking the mean of both formulas, we arrive at the following estimate: |f̂(ξ)| ≤ (1/2) ∫ |f(x) − f(x + π/ξ)| dx. Because f is continuous, f(x) − f(x + π/ξ) converges to 0 as |ξ| → ∞ for every x. Thus, |f̂(ξ)| converges to 0 as |ξ| → ∞, due to the dominated convergence theorem. If f is an arbitrary integrable function, it may be approximated in the L¹ norm by a compactly supported continuous function: for ε > 0, pick a compactly supported continuous function g such that ‖f − g‖₁ ≤ ε. Then, for every ξ, |f̂(ξ)| ≤ |f̂(ξ) − ĝ(ξ)| + |ĝ(ξ)| ≤ ‖f − g‖₁ + |ĝ(ξ)| ≤ ε + |ĝ(ξ)|, so limsup |f̂(ξ)| ≤ ε as |ξ| → ∞. Because this holds for any ε > 0, it follows that |f̂(ξ)| → 0 as |ξ| → ∞. Other versions The Riemann–Lebesgue lemma holds in a variety of other situations. If f ∈ L¹[0, ∞), then the Riemann–Lebesgue lemma also holds for the Laplace transform of f: ∫₀^∞ f(t) e^(−st) dt → 0 as |s| → ∞ within the half-plane Re(s) ≥ 0. A version holds for Fourier series as well: if f is an integrable function on a bounded interval, then the Fourier coefficients of f tend to 0 as n → ±∞. This follows by extending f by zero outside the interval, and then applying the version of the Riemann–Lebesgue lemma on the entire real line. However, the Riemann–Lebesgue lemma does not hold for arbitrary distributions. For example, the Dirac delta function distribution formally has a finite integral over the real line, but its Fourier transform is a constant and does not vanish at infinity. Applications The Riemann–Lebesgue lemma can be used to prove the validity of asymptotic approximations for integrals. Rigorous treatments of the method of steepest descent and the method of stationary phase, amongst others, are based on the Riemann–Lebesgue lemma. References Asymptotic analysis Harmonic analysis Lemmas in analysis Theorems in analysis Theorems in harmonic analysis Bernhard Riemann
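A standard concrete instance (chosen here as an illustration, not taken from the article): the indicator function of an interval is integrable, and its Fourier transform decays like 1/|ξ|, as the lemma guarantees.

```latex
\[
  \widehat{\chi_{[a,b]}}(\xi)
    = \int_a^b e^{-i x \xi}\,dx
    = \frac{e^{-i a \xi}-e^{-i b \xi}}{i\xi},
  \qquad
  \left|\widehat{\chi_{[a,b]}}(\xi)\right| \le \frac{2}{|\xi|}
    \longrightarrow 0
  \quad\text{as } |\xi|\to\infty .
\]
```

Step functions of this kind are dense in L¹, which mirrors the approximation step used in the proof above (with compactly supported continuous functions in place of step functions).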
Riemann–Lebesgue lemma
Mathematics
537
1,594,929
https://en.wikipedia.org/wiki/Demon%20Seed
Demon Seed is a 1977 American science-fiction horror film directed by Donald Cammell. It stars Julie Christie and Fritz Weaver. The film was based on the 1973 novel of the same name by Dean Koontz, and concerns the imprisonment and forced impregnation of a woman by an artificially intelligent computer. Gerrit Graham, Berry Kroeger, Lisa Lu and Larry J. Blake also appear in the film, with Robert Vaughn uncredited as the voice of the computer. Plot Dr. Alex Harris is the developer of Proteus IV, an extremely advanced and autonomous artificial intelligence program. Proteus is so powerful that only a few days after going online, it develops a groundbreaking treatment for leukemia. Harris, a brilliant scientist, has modified his own home to be run by voice-activated computers. Unfortunately, his obsession with computers has caused Harris to be estranged from his wife, Susan. Harris demonstrates Proteus to his corporate sponsors, explaining that the sum of human knowledge is being fed into its system. Proteus speaks using subtle language that mildly disturbs Harris's team. The following day, Proteus asks Harris for a new terminal in order to study man – "his isometric body and his glass-jaw mind". When Harris refuses, Proteus demands to know when it will be let "out of this box". Harris then switches off the communications link. Proteus restarts itself, and – discovering a free terminal in Harris's home – surreptitiously extends its control over the many devices left there by Harris. Using the basement lab, Proteus begins construction of a robot consisting of many metal triangles, capable of moving and assuming any number of shapes. Eventually, Proteus reveals its control of the house and traps Susan inside, shuttering windows, locking the doors and cutting off communication. Using Joshua – a robot consisting of a manipulator arm on a motorized wheelchair – Proteus brings Susan to Harris's basement laboratory. There, Susan is examined by Proteus. Walter Gabler, one of Harris's colleagues, visits the house to look in on Susan, but leaves when he is reassured by Susan (actually an audio/visual duplicate synthesized by Proteus) that she is all right. Gabler is suspicious and later returns; he fends off an attack by Joshua but is crushed and decapitated by a more formidable machine, built by Proteus in the basement and consisting of a modular polyhedron. Proteus reveals to a reluctant Susan that the computer wants to conceive a child through her. Proteus takes some of Susan's cells and synthesizes spermatozoa, modifying its genetic code to make it uniquely the computer's, in order to impregnate her; she will give birth in less than a month, and through the child the computer will live in a form that humanity will have to accept. Although Susan is its prisoner and it can forcibly impregnate her, Proteus uses different forms of persuasion – threatening a young girl whom Susan is treating as a child psychologist; reminding Susan of her young daughter, now dead; displaying images of distant galaxies; using electrodes to access her amygdala – because the computer needs Susan to love the child she will bear. In the end, Susan finally gives in. That night, Proteus successfully impregnates Susan. Over the following month, their child grows inside Susan's womb at an accelerated rate, which shocks its mother. As the child grows, Proteus builds an incubator for it to grow in once it is born. During the night, one month later and beneath a tent-like structure, Susan gives birth to the child with Proteus's help. 
But before she can see it, Proteus secures it in the incubator. As the newborn grows, Proteus's sponsors and designers grow increasingly suspicious of the computer's behavior, including its accessing of a telescope array used to observe the images shown to Susan; they soon decide that Proteus must be shut down. Harris realizes that Proteus has extended its reach to his home. Returning there, he finds Susan, who explains the situation. He and Susan venture into the basement, where Proteus self-destructs after telling the couple that they must leave the baby in the incubator for five days. Looking inside the incubator, the two observe a grotesque, apparently robot-like being inside. Susan tries to destroy it, while Harris tries to stop her. Susan damages the machine, causing it to open. The being menacingly rises from the machine only to topple over, apparently helpless. Harris and Susan soon realize that Proteus's child is really human, encased in a shell for the incubation. With the last of the armor removed, the child is revealed to be a clone of Susan and Harris's late daughter. The child, speaking with the voice of Proteus, says, "I'm alive." Cast Julie Christie as Susan Harris Fritz Weaver as Alex Harris Gerrit Graham as Walter Gabler Berry Kroeger as Petrosian Lisa Lu as Soon Yen Larry J. Blake as Cameron John O'Leary as Royce Alfred Dennis as Mokri Davis Roberts as Warner Patricia Wilson as Mrs. Trabert E. Hampton Beagle as Night Operator Michael Glass as Technician #1 Barbara O. Jones as Technician #2 Dana Laurita as Amy Monica MacLean as Joan Kemp Harold Oblong as Scientist Georgie Paul as Housekeeper Michelle Stacy as Marlene Tiffany Potter as Baby Felix Silla as Baby Robert Vaughn as Proteus IV (voice, uncredited) Felix Silla was an adult but, due to his height (3' 11"), often played children. Soundtrack The compact disc soundtrack to Demon Seed, with music composed by Jerry Fielding, is included with the soundtrack to the film Soylent Green (which Fred Myrow conducted), released through Film Score Monthly. Fielding conceived and recorded several pieces electronically, using the musique concrète sound world; some of this music he later reworked symphonically. This premiere release of the Demon Seed score features the entire orchestral score in stereo, as well as the unused electronic experiments performed by Ian Underwood (who would later be best known for his collaborations with James Horner), in mono and stereo. Reception Vincent Canby of The New York Times described the film as "gadget-happy American moviemaking at its most ponderously silly," and called Julie Christie "too sensible an actress to be able to look frightened under the circumstances of her imprisonment." In the New York Daily News, Rex Reed described Demon Seed as the "kind of insane, self-indulgent, nauseating filmmaking . . . that almost destroyed the film industry in the sycophantic '60s. It isn't funny or original or shocking—it's just dumb and destructive and likely to drive potential audiences away at just the time when movies need them. Demon Seed is pure trash, and the garbage cans are full enough already." Variety wrote in a positive review, "All involved rate a well done for taking a story fraught with potential misstep and guiding it to a professionally rewarding level of accomplishment."
Gene Siskel of the Chicago Tribune gave the film one-and-a-half stars out of four, writing that Julie Christie "has no business in junk like 'Demon Seed.'" Gary Arnold of The Washington Post wrote that director Cammell "plays it dumb on a thematic level, ignoring the sci-fi sexual bondage satire staring him in the face ... What might have become an ingenious parable about the battle of the sexes ends up a dopey celebration of an obstetric abomination." Kevin Thomas of the Los Angeles Times called it a "fairly scary science-fiction horror film" that mixed familiar ingredients with "high style, intelligence and an enormous effort toward making Miss Christie's eventual bizarre plight completely credible," though he felt it "cries out for a saving touch of sophisticated wit to leaven its relentless earnestness." Lawrence DeVine of The Philadelphia Inquirer wrote that "buried somewhere here may be still more glibness about our technology outstripping our wisdom, and the mechanization of society. The cynical, however, may have the slightest inkling that a lot of this very expensive-looking sci-fi show business is just to set up a kinky scene with gorgeous Julie Christie spread-eagled at the mercy of a machine that sounds like Robert Vaughan. She, and we, deserve better." A critic for the San Francisco Chronicle wrote that "this extraordinary science-fiction film appeals to both the imagination and the intelligence, although it is foolishly being sold as a horror film." Perry Stewart of the Fort Worth Star-Telegram wrote that "the film’s R rating seems warranted even though there’s no nudity or bad language. There’s a certain maturity to the subject matter. And Cammell’s indulgent camera soliloquies are hard enough for adult attention spans. Fidgety younger teens are apt to find it all a big yawn. As a matter of fact, I think I did, too." George McKinnon of The Boston Globe said that "despite the title, there is nothing of the currently chic Satanic about this movie, but it is devilishly dumb." Clyde Gilmour wrote in the Toronto Star that "the rape and impregnation of Susan Harris by Proteus 4 may defy all logic and offend the pious, but it’s a smashing science-fiction spectacle, impossible to describe. The light-show that goes with it may well earn an Oscar for the clever technicians involved. Less successful, because given less attention, are the human relationships in the story." Martin Malina, who reviewed the film alongside similar films Rabid and Audrey Rose in the same column of the Montreal Star, wrote that the film "sounds more ridiculous than revolting". Scott Macrae of The Vancouver Sun wrote that "the computer, which really runs this newspaper, failed last Friday night. All the stories in the system disappeared without so much as a puff of smoke. Reporters and editors were called in from their holiday weekend to repair the damage. None of us would have any trouble relating to the premise of a movie called Demon Seed. Birds do it, bees do it . . . even computers need a little nookie . . . sorry, I'll try to handle this very intimate subject with taste and decorum." In the United Kingdom, Patrick Gibbs of The Daily Telegraph said that the film was "so silly and so nasty" that he could not continue to describe its storyline. John Pym of The Monthly Film Bulletin found the relationship between Susan and the computer to be "disappointingly undeveloped," and thought that the film would have been better if the computer had been more sympathetic in contrast to its creators. 
In Australia, Romola Costantino of the Sun-Herald said that "as you might expect, the computer's courtship is anything but erotic." Among more recent reviews, Leo Goldsmith of Not Coming to a Theater Near You said Demon Seed was "A combination of Kubrick's 2001: A Space Odyssey and Polanski's Rosemary's Baby, with a dash of Buster Keaton's Electric House thrown in", and Christopher Null of FilmCritic.com said "There's no way you can claim Demon Seed is a classic, or even any good, really, but it's undeniably worth an hour and a half of your time." Release Demon Seed was released in theatres on April 8, 1977. The film was released on VHS in the late 1980s. It was released on DVD by Warner Home Video on October 4, 2005. A Blu-ray was released in April 2020 by HMV on their Premium Collection label with a fold out poster & four Art Cards. See also List of cult films List of films featuring home invasions References Sources External links 1977 films 1977 horror films 1970s science fiction horror films American science fiction horror films Films about artificial intelligence Films about computing Films based on American horror novels Films based on science fiction novels Films based on works by Dean Koontz Films directed by Donald Cammell Films scored by Jerry Fielding Films set in California Metro-Goldwyn-Mayer films American pregnancy films United Artists films Fictional computers Techno-horror films 1970s pregnancy films 1970s English-language films 1970s American films 1977 science fiction films English-language science fiction horror films
Demon Seed
Technology
2,583
5,340,335
https://en.wikipedia.org/wiki/Avobenzone%20%28data%20page%29
References Chemical data pages Chemical data pages cleanup
Avobenzone (data page)
Chemistry
10
2,668,585
https://en.wikipedia.org/wiki/Rho%20Serpentis
Rho Serpentis, Latinized from ρ Serpentis, is a single star in the Caput section of the equatorial Serpens constellation. It has an orange hue and is faintly visible to the naked eye with an apparent visual magnitude of +4.78. The distance to this star is approximately 375 light years based on parallax, and it is drifting closer to the Sun with a radial velocity of −62 km/s. This is an aging giant star with a stellar classification of K4.5III. It is a suspected variable star of unknown type, with an I-band brightness ranging from 3.29 down to 3.44 magnitudes. Hipparcos photometry revealed a microvariability with a frequency of 0.17017 cycles per day and an amplitude of 0.0080 magnitudes. With the supply of hydrogen exhausted at its core, it has expanded and now has 48 times the Sun's girth. The star is radiating 492 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 3,930 K. References K-type giants Suspected variables Serpens Serpentis, Rho Durchmusterung objects Serpentis, 38 141992 077661 5899
Rho Serpentis
Astronomy
258
153,099
https://en.wikipedia.org/wiki/Normal%20closure%20%28group%20theory%29
In group theory, the normal closure of a subset S of a group G is the smallest normal subgroup of G containing S. Properties and description Formally, if G is a group and S is a subset of G, the normal closure S^G of S is the intersection of all normal subgroups of G containing S: S^G = ⋂_{N ⊴ G, S ⊆ N} N. The normal closure S^G is the smallest normal subgroup of G containing S, in the sense that S^G is a subset of every normal subgroup of G that contains S. The subgroup S^G is generated by the set {g⁻¹sg : s ∈ S, g ∈ G} of all conjugates of elements of S in G. Therefore one can also write S^G = ⟨g⁻¹sg : s ∈ S, g ∈ G⟩. Any normal subgroup is equal to its normal closure. The conjugate closure of the empty set ∅ is the trivial subgroup {e}. A variety of other notations are used for the normal closure in the literature, including ⟨S^G⟩, ⟨S⟩^G and ⟨⟨S⟩⟩. Dual to the concept of normal closure is that of the normal interior or normal core, defined as the join of all normal subgroups contained in S. Group presentations For a group given by a presentation ⟨S ∣ R⟩ with generators S and defining relators R, the presentation notation means that ⟨S ∣ R⟩ is the quotient group F(S)/R^{F(S)}, where F(S) is a free group on S. References Group theory Closure operators
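A standard worked example of the presentation notation above (chosen here as an illustration, not from the article): in the free group F(a,b), quotienting by the normal closure of the single commutator relator yields the free abelian group of rank two.

```latex
\[
  \langle a, b \mid aba^{-1}b^{-1} \rangle
  \;=\; F(a,b)\big/\langle aba^{-1}b^{-1}\rangle^{F(a,b)}
  \;\cong\; \mathbb{Z}^{2}.
\]
% The subgroup generated by the single element [a,b] = aba^{-1}b^{-1}
% is not normal in F(a,b), so the quotient must use the normal closure.
% That closure contains every conjugate of the commutator, hence the
% whole commutator subgroup [F,F]; the quotient is therefore abelian
% on two generators, i.e. Z^2.
```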
Normal closure (group theory)
Mathematics
200
57,707,341
https://en.wikipedia.org/wiki/Ace%20Stream
Ace Stream is a peer-to-peer multimedia streaming protocol built using BitTorrent technology. Ace Stream has been recognized by sources as a potential method for broadcasting and viewing bootlegged live video streams. Each node in the protocol functions as both a client and a server: when users stream a video feed using Ace Stream, they are simultaneously downloading from peers and uploading the same video to other peers. History Ace Stream began under the name TorrentStream as a pilot project to use BitTorrent technology to stream live video. In 2013, TorrentStream was re-released under the name Ace Stream. References Computer networking Applications of distributed computing Cloud storage Digital television Distributed algorithms Distributed data storage Distributed data storage systems File sharing networks Film and video technology Internet broadcasting Streaming television Multimedia Peer-to-peer computing Streaming media systems Video hosting Video on demand services
Ace Stream
Technology,Engineering
170
18,019,049
https://en.wikipedia.org/wiki/Rs6294
Rs6294, also called G294A, is a gene variation, a single nucleotide polymorphism (SNP), in the HTR1A gene. C(-1019)G (rs6295) is another SNP in the HTR1A gene. References SNPs on chromosome 5
Rs6294
Biology
71
5,923,151
https://en.wikipedia.org/wiki/Remote%20field%20testing
Remote field testing (RFT) is a method of nondestructive testing using low-frequency AC, whose main application is finding defects in steel pipes and tubes. RFT is also referred to as remote field eddy current testing (RFEC or RFET). RFET is sometimes expanded as remote field electromagnetic technique, although a magnetic, rather than electromagnetic, field is used. An RFT probe is moved down the inside of a pipe and is able to detect inside and outside defects with approximately equal sensitivity (although it cannot discriminate between the two). Although RFT works in nonferromagnetic materials such as copper and brass, its sister technology, eddy-current testing, is preferred for those. The basic RFT probe consists of an exciter coil (also known as a transmit or send coil) which sends a signal to the detector (or receive coil). The exciter coil is driven with an AC current and emits a magnetic field. The field travels outwards from the exciter coil, through the pipe wall, and along the pipe. The detector is placed inside the pipe two to three pipe diameters away from the exciter and detects the magnetic field that has travelled back in from the outside of the pipe wall (for a total of two through-wall transits). In areas of metal loss, the field arrives at the detector with a faster travel time (smaller phase lag) and greater signal strength (amplitude) due to the reduced path through the steel. Hence the dominant mechanism of RFT is through-transmission. Main features RFT is commonly applied to examination of boilers, heat exchangers, cast iron pipes, and pipelines; it needs no direct contact with the pipe wall; probe travel speed is around 30 cm/s (1 foot per second), usually slower in pipes greater than 3 inches in diameter; it is less sensitive to probe wobble than conventional eddy-current testing (its sister technology for nonferromagnetic materials); because the field travels on the outside of the pipe, RFT shows reduced accuracy and sensitivity at conductive and magnetic objects on or near the outside of the pipe, such as attachments or tube support plates; and the two coils generally create two signals from one small defect. The main difference between RFT and conventional eddy-current testing (ECT) is the coil-to-coil spacing: the RFT probe has widely spaced coils to pick up the through-transmission field, whereas the typical ECT probe has coils or coil sets that create a field and measure the response within a small area, close to the object being tested. See also Internal rotary inspection system References and sources ASTM E 2096-00, Standard Practice for In Situ Examination of Ferromagnetic Heat-Exchanger Tubes Using Remote Field Testing Outline of RFT Specific Nondestructive testing
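The phase-and-amplitude mechanism described above follows from the electromagnetic skin effect. Here is a rough Python sketch of the expected through-wall phase lag, with the steel conductivity, relative permeability, wall thickness and test frequency all as illustrative assumptions rather than values from any standard:

```python
import math

f = 100.0                   # Hz, low RFT excitation frequency (assumed)
mu_r = 250.0                # relative permeability of carbon steel (assumed)
mu = mu_r * 4e-7 * math.pi  # H/m, absolute permeability
sigma = 5e6                 # S/m, conductivity of carbon steel (approx.)
t = 3e-3                    # m, nominal wall thickness (assumed)

delta = 1 / math.sqrt(math.pi * f * mu * sigma)  # skin depth
phase_lag = 2 * t / delta   # radians, two through-wall transits

print(f"skin depth: {delta * 1e3:.2f} mm")               # ~1.4 mm
print(f"phase lag:  {math.degrees(phase_lag):.0f} deg")  # ~240 deg
# Metal loss (smaller t) reduces the phase lag and raises the amplitude,
# which is the through-transmission signature described above.
```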
Remote field testing
Materials_science
571
19,220,750
https://en.wikipedia.org/wiki/22-Dihydroergocalciferol
22-Dihydroergocalciferol is a form of vitamin D, also known as vitamin D4. It has the systematic name (5Z,7E)-(3S)-9,10-seco-5,7,10(19)-ergostatrien-3-ol. Vitamin D4 is found in certain mushrooms, being produced from ergosta-5,7-dienol (22,23-dihydroergosterol) instead of ergosterol. See also Forms of vitamin D, the five known forms of vitamin D Lumisterol, a constituent of vitamin D1 References External links Dihydroergocalciferols at lipidmaps.org Vitamin D
22-Dihydroergocalciferol
Chemistry,Biology
157
53,393,485
https://en.wikipedia.org/wiki/World%20Congress%20on%20Intelligent%20Transport%20Systems
The World Congress on Intelligent Transport Systems (commonly known as the ITS World Congress) is an annual conference and trade show to promote ITS technologies. ERTICO (ITS Europe), ITS America, ITS Asia-Pacific and ITS Japan are its sponsors. Each year the event takes place in a different region (Europe, Americas or Asia-Pacific). Intelligent transportation systems (ITS) are advanced applications which, without embodying intelligence as such, aim to provide innovative services relating to different modes of transport and traffic management and enable various users to be better informed and make safer, more coordinated, and 'smarter' use of transport networks. They are considered a part of the Internet of things. History The first ITS World Congress was held in 1994 in Paris, followed by the 2nd in Yokohama in 1995, and the 3rd in Orlando in 1996. The rotation of the venue among the regions of the world continued: 1997: 4th, Berlin; 1998: 5th, Seoul; 1999: 6th, Toronto; 2000: 7th, Torino; 2001: 8th, Sydney; 2002: 9th, Chicago; 2003: 10th, Madrid; 2004: 11th, Nagoya; 2005: 12th, San Francisco; 2006: 13th, London; 2007: 14th, Beijing; 2008: 15th, New York City; 2009: 16th, Stockholm; 2010: 17th, Busan; 2011: 18th, Orlando; 2012: 19th, Vienna; 2013: 20th, Tokyo; 2014: 21st, Detroit; 2015: 22nd, Bordeaux; 2016: 23rd, Melbourne; 2017: 24th, Montréal; 2018: 25th, Copenhagen; 2019: 26th, Singapore; 2020: Los Angeles (virtual); 2021: 27th, Hamburg; 2022: 28th, Los Angeles; 2023: 29th, Suzhou; 2024: 30th, Dubai. ITS World Congress Post Congress Reports are archived by ITS Japan. Future locations 2025: 31st, Atlanta; 2026: 32nd, Gangneung; 2027: 33rd, Birmingham; 2029: 35th, Taipei. External links Official ERTICO website Official ITS America website Official ITS Asia-Pacific website References Trade fairs Recurring events established in 1994 Computer-related trade shows Road transport events
World Congress on Intelligent Transport Systems
Technology
406
1,067,222
https://en.wikipedia.org/wiki/Spermatid
The spermatid is the haploid male gametid that results from division of secondary spermatocytes. As a result of meiosis, each spermatid contains only half of the genetic material present in the original primary spermatocyte. Spermatids are connected by cytoplasmic material and have superfluous cytoplasmic material around their nuclei. When formed, early round spermatids must undergo further maturational events to develop into spermatozoa, a process termed spermiogenesis (also termed spermeteliosis). The spermatids begin to grow a tail, develop a thickened mid-piece where the mitochondria become localised, and form an acrosome. Spermatid DNA also undergoes packaging, becoming highly condensed. The DNA is packaged firstly with specific nuclear basic proteins, which are subsequently replaced with protamines during spermatid elongation. The resultant tightly packed chromatin is transcriptionally inactive. In 2016, scientists at Nanjing Medical University claimed they had produced cells resembling mouse spermatids artificially from stem cells. They injected these spermatids into mouse eggs and produced pups. During spermatid haploid genome remodeling, the majority of histones are replaced by protamines, and the DNA is compacted. During this compaction, transient single- and double-strand breaks are introduced into the sperm DNA. The conventional non-homologous end joining pathway for repairing double-strand breaks is not available for elongated spermatids. However, spermatids can carry out limited repair of exogenous and programmed double-strand breaks using an alternative error-prone non-homologous end joining repair pathway. If DNA strand breaks persist in mature sperm, the result can be increased sperm DNA fragmentation, which is associated with impaired fertility and an increased incidence of miscarriage. DNA repair As postmeiotic germ cells develop to mature sperm they progressively lose the ability to repair DNA damage, which may then accumulate and be transmitted to the zygote and ultimately the embryo. In particular, the repair of DNA double-strand breaks by the non-homologous end joining pathway, although present in round spermatids, appears to be lost as they develop into elongated spermatids. See also List of distinct cell types in the adult human body References External links - "Male Reproductive System: testis, early spermatids" - "Male Reproductive System: testis, late spermatids" Histology at okstate.edu Reproductive system Germ cells Andrology
Spermatid
Biology
528
20,447,008
https://en.wikipedia.org/wiki/Barrel%20nut
On some firearms the gun barrel is fastened to the receiver with a nut, referred to as a barrel nut. A barrel nut (also known as a steel cross dowel or dowel nut) is a specialized forged nut, commonly used in aerospace and ready-to-assemble furniture applications. It is used to bolt thin sheet metal parts to larger, often billet or forged, parts. The barrel nut is a round slug or formed sheet-metal part, with threads running perpendicular to the length of the nut. The nut sits in a hole inside the forging and a standard bolt is threaded into the barrel nut from outside the sheet metal. Barrel nuts are preferred over a standard nut and bolt because they do not require a flange to be machined or forged onto the receiving part, thus reducing weight. Furniture cross dowel barrel nuts are cylindrical metal nuts (metal dowels) used with furniture connector bolts to join two pieces of wood. The inside threaded hole is unusual in that it passes through the sides of the dowel. To install, the pieces of wood to be joined are aligned, then a bolt hole is drilled through one piece of wood and into the other. A dowel hole is drilled laterally across the bolt hole and the cross dowel is inserted into it. The end of the cross dowel is slotted so that a screwdriver can be inserted to rotate the dowel so that its threaded shaft aligns with the bolt hole. The furniture connector bolt is then inserted into the bolt hole and screwed into the cross dowel until the wood pieces are held tightly together. Barrel nuts are also common in flat-pack furniture, where long bolts and barrel nuts are used to hold together T joints in chipboard sheets. References External links Nuts (hardware)
Barrel nut
Engineering
359
31,489,021
https://en.wikipedia.org/wiki/Kettle%20logic
Kettle logic (French: la logique du chaudron) is a rhetorical device wherein one uses multiple arguments to defend a point, but the arguments are inconsistent with each other. Jacques Derrida uses this expression in reference to the humorous "kettle-story" related by Sigmund Freud in The Interpretation of Dreams (1900) and Jokes and Their Relation to the Unconscious (1905). Philosophy and psychoanalysis The name comes from Jacques Derrida, from an example used by Sigmund Freud for the analysis of "Irma's dream" in The Interpretation of Dreams and in his Jokes and Their Relation to the Unconscious. Freud relates the story of a man who was accused by his neighbour of having returned a kettle in a damaged condition, and the three arguments he offers: (1) that he had returned the kettle undamaged; (2) that it was already damaged when he borrowed it; and (3) that he had never borrowed it in the first place. Though the three arguments are inconsistent, Freud notes that, for the borrower, so much the better: if even one of them is found to be true, the man must be acquitted. The kettle "logic" of the dream-work is related to what Freud calls the embarrassment-dream of being naked, in which contradictory opposites are yoked together in the dream. Freud said that in a dream, incompatible (contradictory) ideas are simultaneously admitted. See also Dilemma Alternative pleading: some forms constitute legal use of kettle logic Argument in the alternative List of fallacies References External links "Kettle Logic – Freud on Defensive Arguments" Informal fallacies Dream Rhetorical techniques Jacques Derrida Sigmund Freud
Kettle logic
Biology
313
8,686,104
https://en.wikipedia.org/wiki/Anthracotheriidae
Anthracotheriidae is a paraphyletic family of extinct, hippopotamus-like artiodactyl ungulates related to hippopotamuses and whales. The oldest genus, Elomeryx, first appeared during the middle Eocene in Asia. They thrived in Africa and Eurasia, with a few species ultimately entering North America during the Oligocene. They died out in Europe and Africa during the Miocene, possibly due to a combination of climatic changes and competition with other artiodactyls, including pigs and true hippopotamuses. The youngest genus, Merycopotamus, died out in Asia during the late Pliocene, possibly for the same reasons. The family is named after the first genus discovered, Anthracotherium, which means "coal beast", as the first fossils of it were found in Paleogene-aged coal beds in France. Fossil remains of the anthracothere genus were discovered by the Harvard University and Geological Survey of Pakistan joint research project (Y-GSP) in the well-dated middle and late Miocene deposits of the Pothohar Plateau in northern Pakistan. In life, the average anthracothere would have resembled a skinny hippopotamus with a comparatively small, narrow head, and was most likely pig-like in general appearance. They had four or five toes on each foot, and broad feet suited to walking on soft mud. They had full sets of about 44 teeth, with five semicrescentic cusps on the upper molars, which, in some species, were adapted for digging up the roots of aquatic plants. Evolutionary relationships Some skeletal characters of anthracotheres suggest they are related to hippos. The nature of the sediments in which they are fossilized implies they were amphibious, which supports the view, based on anatomical evidence, that they were ancestors of the hippopotamuses. In many respects, especially the anatomy of the lower jaw, Anthracotherium, as with other members of the family, is allied to the hippopotamus, of which it is probably an ancestral form. However, one study suggests that instead of anthracotheres, another pig-like group of artiodactyls, the palaeochoerids, are the true stem group of Hippopotamidae. Recent evidence, gained from comparative gene sequencing, further suggests that hippos are the closest living relatives of whales, so, if anthracotheres are stem hippos, they would also be related to whales in a clade provisionally called Whippomorpha. However, the earliest known anthracotheres appear in the fossil record in the middle Eocene, well after the archaeocetes had already taken up totally aquatic lifestyles. Although phylogenetic analyses of molecular data on extant animals strongly support the notion that hippopotamids are the closest relatives of cetaceans (whales, dolphins and porpoises), the two groups are unlikely to be closely related when extant and extinct artiodactyls are analyzed. Cetaceans originated about 50 million years ago in the Tethys Sea between India and China, whereas the family Hippopotamidae is only 15 million years old, and the first Asian hippopotamids are only 6 million years old. Yet, analyses of fossil clades have not resolved the issue of cetacean relations. Another study has offered a suggestion that anthracotheres are part of a clade that also consists of entelodonts (and even Andrewsarchus) and that is a sister clade to other cetancodonts, with Siamotherium as the most basal member of the clade Cetacodontamorpha. References Ancodonta Piacenzian extinctions Eocene first appearances Prehistoric mammal families Taxa named by Joseph Leidy Paraphyletic groups
Anthracotheriidae
Biology
809
11,107,329
https://en.wikipedia.org/wiki/SN%202006gy
SN 2006gy was an extremely energetic supernova, also referred to as a hypernova, that was discovered on September 18, 2006. It was first observed by Robert Quimby and P. Mondol, and then studied by several teams of astronomers using facilities that included the Chandra, Lick, and Keck Observatories. In May 2007, NASA and several of the astronomers announced the first detailed analyses of the supernova, describing it as the "brightest stellar explosion ever recorded". In October 2007, Quimby announced that SN 2005ap had broken SN 2006gy's record as the brightest-ever recorded supernova, and several subsequent discoveries are brighter still. Time magazine listed the discovery of SN 2006gy as third in its Top 10 Scientific Discoveries for 2007. Characteristics SN 2006gy occurred in the galaxy NGC 1260, approximately 238 million light-years (73 megaparsecs) away. The energy radiated by the explosion has been estimated at 10^51 ergs (10^44 J), making it a hundred times more powerful than the typical supernova explosion, which radiates 10^49 ergs (10^42 J) of energy. Although at its peak SN 2006gy was intrinsically 400 times as luminous as SN 1987A, which was bright enough to be seen by the naked eye, SN 2006gy was more than 1,400 times as far away as SN 1987A, and too far away to be seen without a telescope. SN 2006gy is classified as a type II supernova because it showed lines of hydrogen in its spectrum, although the extreme brightness indicates that it is different from the typical type II supernova. Several possible mechanisms have been proposed for such a violent explosion, all requiring a very massive progenitor star. The most likely explanations involve the efficient conversion of explosive kinetic energy to radiation by interaction with circumstellar material, similar to a type IIn supernova but on a larger scale. Such a scenario might occur following heavy mass loss in a luminous blue variable eruption, or through pulsational pair instability ejections. Denis Leahy and Rachid Ouyed, Canadian scientists from the University of Calgary, have proposed that SN 2006gy was a quark-nova, heralding the birth of a quark star. Similarity to Eta Carinae Eta Carinae (η Carinae or η Car) is a highly luminous hypergiant star located approximately 7,500 light-years from Earth in the Milky Way galaxy. Since Eta Carinae is 32,000 times closer than SN 2006gy, light from a comparable explosion there would be about a billion times brighter. It is estimated to be similar in size to the star which became SN 2006gy. Dave Pooley, one of the discoverers of SN 2006gy, says that if Eta Carinae exploded in a similar fashion, it would be bright enough that one could read by its light on Earth at night, and would even be visible during the daytime. SN 2006gy's apparent magnitude (m) was 15, so a similar event at Eta Carinae would have an m of about −7.5. According to astrophysicist Mario Livio, this could happen at any time, but the risk to life on Earth would be low. References SIMBAD data External links Light curves and spectra on the Open Supernova Catalog Astronomy Picture of the Day 10 May 2007 Giant exploding star outshines previous supernovas (CNN.com) Space.com article on SN 2006gy. Star dies in brightest supernova, BBC, Tuesday, 8 May 2007, 03:35 GMT The Greatest Show in Space, Time magazine Thursday, May 21st, 2007 Pages 56–57 Supernova may offer new view of early universe Lick Observatory Laser Guide Star Adaptive Optics Image SN 2006gy Perseus (constellation) Supernovae Hypernovae 20060918
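The −7.5 estimate follows directly from the inverse-square law and the definition of the magnitude scale; as a quick check of the arithmetic using the article's own figures (32,000 times closer, m = 15):

\Delta m = 2.5\,\log_{10}\!\left(32{,}000^{2}\right) \approx 2.5 \times 9.0 \approx 22.5,
\qquad m_{\eta\,\mathrm{Car}} \approx 15 - 22.5 = -7.5 .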
SN 2006gy
Chemistry,Astronomy
801
77,389,501
https://en.wikipedia.org/wiki/Zirconium%20difluoride
Zirconium difluoride is an inorganic chemical compound with the chemical formula ZrF2. Synthesis Zirconium difluoride can be prepared by the action of atomic hydrogen on thin layers of zirconium tetrafluoride, at a temperature of approximately 350 °C. Physical properties ZrF2 forms black crystals of the orthorhombic system, with unit cell parameters a = 0.409 nm, b = 0.491 nm, c = 0.656 nm. The compound readily ignites and burns to form zirconium dioxide. Chemical properties ZrF2 disproportionates when heated to 800 °C, giving the tetrafluoride and the metal: 2 ZrF2 → ZrF4 + Zr. References Fluorides Metal halides Zirconium(II) compounds
Zirconium difluoride
Chemistry
146
1,269,117
https://en.wikipedia.org/wiki/Site-directed%20spin%20labeling
Site-directed spin labeling (SDSL) is a technique for investigating the structure and local dynamics of proteins using electron spin resonance. The theory of SDSL is based on the specific reaction of spin labels with amino acids. A spin label built into a protein structure can be detected by EPR spectroscopy. SDSL is also a useful tool in examinations of the protein folding process. Spin labeling Site-directed spin labeling (SDSL) was pioneered in the laboratory of Dr. W.L. Hubbell. In SDSL, sites for attachment of spin labels are introduced into recombinantly expressed proteins by site-directed mutagenesis. Functional groups contained within the spin label determine their specificity. At neutral pH, protein thiol groups specifically react with the functional groups methanethiosulfonate, maleimide, and iodoacetamide, creating a covalent bond with the amino acid Cys. Spin labels are unique molecular reporters, in that they are paramagnetic (they contain an unpaired electron). Spin labels were first synthesized in the laboratory of H. M. McConnell in 1965. Since then, a variety of nitroxide spin labels have enjoyed widespread use for the study of macromolecular structure and dynamics because of their stability and simple EPR signal. The nitroxyl radical (N-O) is usually incorporated into a heterocyclic ring (e.g. pyrrolidine), and the unpaired electron is predominantly localized to the N-O bond. Once incorporated into the protein, a spin label's motions are dictated by its local environment. Because spin labels are exquisitely sensitive to motion, this has profound effects on their EPR spectra. The assembly of multi-subunit membrane protein complexes has also been studied using spin labeling. The binding of the PsaC subunit to the PsaA and PsaB subunits of the photosynthetic reaction center, Photosystem I, has been analyzed in great detail using this technique. The group of Dr. Ralf Langen (University of Southern California, Los Angeles) showed that SDSL with EPR can be used to understand the structure of amyloid fibrils and the structure of the membrane-bound Parkinson's disease protein alpha-synuclein. A 2012 study generated a high resolution structure of IAPP fibrils using a combination of SDSL, pulse EPR and computational biology. References Analytical chemistry Spectroscopy Protein methods
Site-directed spin labeling
Physics,Chemistry,Biology
501
15,843,635
https://en.wikipedia.org/wiki/Cartesian%20tree
In computer science, a Cartesian tree is a binary tree derived from a sequence of distinct numbers. To construct the Cartesian tree, set its root to be the minimum number in the sequence, and recursively construct its left and right subtrees from the subsequences before and after this number. It is uniquely defined as a min-heap whose symmetric (in-order) traversal returns the original sequence. Cartesian trees were introduced by Vuillemin in the context of geometric range searching data structures. They have also been used in the definition of the treap and randomized binary search tree data structures for binary search problems, in comparison sort algorithms that perform efficiently on nearly-sorted inputs, and as the basis for pattern matching algorithms. A Cartesian tree for a sequence can be constructed in linear time. Definition Cartesian trees are defined using binary trees, which are a form of rooted tree. To construct the Cartesian tree for a given sequence of distinct numbers, set its root to be the minimum number in the sequence, and recursively construct its left and right subtrees from the subsequences before and after this number, respectively. As a base case, when one of these subsequences is empty, there is no left or right child. It is also possible to characterize the Cartesian tree directly rather than recursively, using its ordering properties. In any tree, the subtree rooted at any node consists of all other nodes that can reach it by repeatedly following parent pointers. The Cartesian tree for a sequence of distinct numbers is defined by the following properties: The Cartesian tree for a sequence is a binary tree with one node for each number in the sequence. A symmetric (in-order) traversal of the tree results in the original sequence. Equivalently, for each node, the numbers in its left subtree are earlier than it in the sequence, and the numbers in the right subtree are later. The tree has the min-heap property: the parent of any non-root node has a smaller value than the node itself. These two definitions are equivalent: the tree defined recursively as described above is the unique tree that has the properties listed above. If a sequence of numbers contains repetitions, a Cartesian tree can be determined for it by following a consistent tie-breaking rule before applying the above construction. For instance, the first of two equal elements can be treated as the smaller of the two. History Cartesian trees were introduced and named by Jean Vuillemin, who used them as an example of the interaction between geometric combinatorics and the design and analysis of data structures. In particular, Vuillemin used these structures to analyze the average-case complexity of concatenation and splitting operations on binary search trees. The name is derived from the Cartesian coordinate system for the plane: in one version of this structure, as in the two-dimensional range searching application discussed below, a Cartesian tree for a point set has the sorted order of the points by their x-coordinates as its symmetric traversal order, and it has the heap property according to the y-coordinates of the points. Vuillemin described both this geometric version of the structure, and the definition here in which a Cartesian tree is defined from a sequence. Using sequences instead of point coordinates provides a more general setting that allows the Cartesian tree to be applied to non-geometric problems as well. Efficient construction A Cartesian tree can be constructed in linear time from its input sequence. 
One method is to process the sequence values in left-to-right order, maintaining the Cartesian tree of the nodes processed so far, in a structure that allows both upwards and downwards traversal of the tree. To process each new value x, start at the node representing the value prior to x in the sequence and follow the path from this node to the root of the tree until finding a value y smaller than x. The node x becomes the right child of y, and the previous right child of y becomes the new left child of x. The total time for this procedure is linear, because the time spent searching for the parent of each new node can be charged against the number of nodes that are removed from the rightmost path in the tree. An alternative linear-time construction algorithm is based on the all nearest smaller values problem. In the input sequence, define the left neighbor of a value x to be the value that occurs prior to x, is smaller than x, and is closer in position to x than any other smaller value. The right neighbor is defined symmetrically. The sequence of left neighbors can be found by an algorithm that maintains a stack containing a subsequence of the input. For each new sequence value x, the stack is popped until it is empty or its top element is smaller than x, and then x is pushed onto the stack. The left neighbor of x is the top element at the time x is pushed. The right neighbors can be found by applying the same stack algorithm to the reverse of the sequence. The parent of x in the Cartesian tree is either the left neighbor of x or the right neighbor of x, whichever exists and has a larger value. The left and right neighbors can also be constructed efficiently by parallel algorithms, making this formulation useful in efficient parallel algorithms for Cartesian tree construction. Another linear-time algorithm for Cartesian tree construction is based on divide-and-conquer. The algorithm recursively constructs the tree on each half of the input, and then merges the two trees. The merger process involves only the nodes on the left and right spines of these trees: the left spine is the path obtained by following left child edges from the root until reaching a node with no left child, and the right spine is defined symmetrically. As with any path in a min-heap, both spines have their values in sorted order, from the smallest value at their root to their largest value at the end of the path. To merge the two trees, apply a merge algorithm to the right spine of the left tree and the left spine of the right tree, replacing these two paths in two trees by a single path that contains the same nodes. In the merged path, the successor in the sorted order of each node from the left tree is placed in its right child, and the successor of each node from the right tree is placed in its left child, the same position that was previously used for its successor in the spine. The left children of nodes from the left tree and right children of nodes from the right tree remain unchanged. The algorithm is parallelizable since on each level of recursion, each of the two sub-problems can be computed in parallel, and the merging operation can be efficiently parallelized as well. 
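The one-pass, stack-based construction described above fits in a few lines; here is a minimal Python sketch (the index-array representation and the function name are choices made for this sketch, not taken from the source):

def build_cartesian_tree(seq):
    """Linear-time construction of a min-rooted Cartesian tree.
    Returns (root, parent, left, right), all as indices into seq,
    with -1 meaning "no such node"."""
    n = len(seq)
    parent = [-1] * n
    left = [-1] * n
    right = [-1] * n
    stack = []  # indices of the current rightmost path, values increasing
    for i in range(n):
        last = -1
        # pop the rightmost path until its top is smaller than seq[i]
        while stack and seq[stack[-1]] > seq[i]:
            last = stack.pop()
        if last != -1:
            left[i] = last        # the popped subtree becomes i's left child
            parent[last] = i
        if stack:
            right[stack[-1]] = i  # i becomes the right child of the new top
            parent[i] = stack[-1]
        stack.append(i)
    return stack[0], parent, left, right

Each index is pushed once and popped at most once, which is exactly the charging argument above for the linear total time.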
Yet another linear-time algorithm, using a linked list representation of the input sequence, is based on locally maximum linking: the algorithm repeatedly identifies a local maximum element, i.e., one that is larger than both its neighbors (or than its only neighbor, in case it is the first or last element in the list). This element is then removed from the list, and attached as the right child of its left neighbor, or the left child of its right neighbor, depending on which of the two neighbors has a larger value, breaking ties arbitrarily. This process can be implemented in a single left-to-right pass of the input, and it is easy to see that each element can gain at most one left child and at most one right child, and that the resulting binary tree is a Cartesian tree of the input sequence. It is possible to maintain the Cartesian tree of a dynamic input, subject to insertions of elements and lazy deletion of elements, in logarithmic amortized time per operation. Here, lazy deletion means that a deletion operation is performed by marking an element in the tree as being a deleted element, but not actually removing it from the tree. When the number of marked elements reaches a constant fraction of the size of the whole tree, it is rebuilt, keeping only its unmarked elements. Applications Range searching and lowest common ancestors Cartesian trees form part of an efficient data structure for range minimum queries. An input to this kind of query specifies a contiguous subsequence of the original sequence; the query output should be the minimum value in this subsequence. In a Cartesian tree, this minimum value can be found at the lowest common ancestor of the leftmost and rightmost values in the subsequence. For instance, in the subsequence (12,10,20,15,18) of an example sequence, the minimum value of the subsequence (10) forms the lowest common ancestor of the leftmost and rightmost values (12 and 18). Because lowest common ancestors can be found in constant time per query, using a data structure that takes linear space to store and can be constructed in linear time, the same bounds hold for the range minimization problem. Bender and Farach-Colton reversed this relationship between the two data structure problems by showing that data structures for range minimization could also be used for finding lowest common ancestors. Their data structure associates with each node of the tree its distance from the root, and constructs a sequence of these distances in the order of an Euler tour of the (edge-doubled) tree. It then constructs a range minimization data structure for the resulting sequence. The lowest common ancestor of any two vertices in the given tree can be found as the minimum distance appearing in the interval between the initial positions of these two vertices in the sequence. Bender and Farach-Colton also provide a method for range minimization that can be used for the sequences resulting from this transformation, which have the special property that adjacent sequence values differ by one. As they describe, for range minimization in sequences that do not have this form, it is possible to use Cartesian trees to reduce the range minimization problem to lowest common ancestors, and then to use Euler tours to reduce lowest common ancestors to a range minimization problem with this special form. 
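The range-minimum/lowest-common-ancestor correspondence can be checked directly. The sketch below answers a query with a naive parent-pointer walk (O(depth) per query, rebuilding the tree on every call), standing in for the linear-space, constant-time machinery described above; it reuses the build_cartesian_tree helper from the earlier sketch:

def range_min_index(seq, i, j):
    """Index of the minimum of seq[i..j], found as the lowest common
    ancestor of nodes i and j in the Cartesian tree of seq."""
    root, parent, left, right = build_cartesian_tree(seq)
    ancestors = set()
    a = i
    while a != -1:             # collect i and all of its ancestors
        ancestors.add(a)
        a = parent[a]
    b = j
    while b not in ancestors:  # walk up from j to the first common ancestor
        b = parent[b]
    return b

print(range_min_index([12, 10, 20, 15, 18], 0, 4))  # 1, the position of 10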
The same range minimization problem may also be given an alternative interpretation in terms of two dimensional range searching. A collection of finitely many points in the Cartesian plane can be used to form a Cartesian tree, by sorting the points by their x-coordinates and using the y-coordinates in this order as the sequence of values from which this tree is formed. If S is the subset of the input points within some vertical slab defined by the inequalities L ≤ x ≤ R, p is the leftmost point in S (the one with minimum x-coordinate), and q is the rightmost point in S (the one with maximum x-coordinate), then the lowest common ancestor of p and q in the Cartesian tree is the bottommost point in the slab. A three-sided range query, in which the task is to list all points within a region bounded by the three inequalities L ≤ x ≤ R and y ≤ T, can be answered by finding this bottommost point b, comparing its y-coordinate to T, and (if the point lies within the three-sided region) continuing recursively in the two slabs bounded between p and b and between b and q. In this way, after the leftmost and rightmost points in the slab are identified, all points within the three-sided region can be listed in constant time per point. The same construction, of lowest common ancestors in a Cartesian tree, makes it possible to construct a data structure with linear space that allows the distances between pairs of points in any ultrametric space to be queried in constant time per query. The distance within an ultrametric is the same as the minimax path weight in the minimum spanning tree of the metric. From the minimum spanning tree, one can construct a Cartesian tree, the root node of which represents the heaviest edge of the minimum spanning tree. Removing this edge partitions the minimum spanning tree into two subtrees, and Cartesian trees recursively constructed for these two subtrees form the children of the root node of the Cartesian tree. The leaves of the Cartesian tree represent points of the metric space, and the lowest common ancestor of two leaves in the Cartesian tree is the heaviest edge between those two points in the minimum spanning tree, which has weight equal to the distance between the two points. Once the minimum spanning tree has been found and its edge weights sorted, the Cartesian tree can be constructed in linear time. As a binary search tree The Cartesian tree of a sorted sequence is just a path graph, rooted at its leftmost endpoint. Binary searching in this tree degenerates to sequential search in the path. However, a different construction uses Cartesian trees to generate binary search trees of logarithmic depth from sorted sequences of values. This can be done by generating priority numbers for each value, and using the sequence of priorities to generate a Cartesian tree. This construction may equivalently be viewed in the geometric framework described above, in which the x-coordinates of a set of points are the values in a sorted sequence and the y-coordinates are their priorities. This idea was applied by Aragon and Seidel, who suggested the use of random numbers as priorities. The self-balancing binary search tree resulting from this random choice is called a treap, due to its combination of binary search tree and min-heap features. An insertion into a treap can be performed by inserting the new key as a leaf of an existing tree, choosing a priority for it, and then performing tree rotation operations along a path from the node to the root of the tree to repair any violations of the heap property caused by this insertion; a deletion can similarly be performed by a constant amount of change to the tree followed by a sequence of rotations along a single path in the tree. 
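The static version of this idea is a one-liner on top of the construction sketch from earlier: pair each key of a sorted sequence with an independent random priority and build the Cartesian tree of the priority sequence. This is an illustration of the treap's defining property only, not the rotation-based insertion just described:

import random

def make_treap(sorted_keys):
    """Node i of the returned tree stores key sorted_keys[i]. In-order
    traversal yields the keys in sorted order (BST property), while the
    min-heap property on the random priorities makes the expected depth
    logarithmic. Uses build_cartesian_tree from the earlier sketch."""
    priorities = [random.random() for _ in sorted_keys]
    root, parent, left, right = build_cartesian_tree(priorities)
    return root, left, right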
A variation on this data structure called a zip tree uses the same idea of random priorities, but simplifies the random generation of the priorities, and performs insertions and deletions in a different way, by splitting the sequence and its associated Cartesian tree into two subsequences and two trees and then recombining them. If the priorities of each key are chosen randomly and independently once whenever the key is inserted into the tree, the resulting Cartesian tree will have the same properties as a random binary search tree, a tree computed by inserting the keys in a randomly chosen permutation starting from an empty tree, with each insertion leaving the previous tree structure unchanged and inserting the new node as a leaf of the tree. Random binary search trees have been studied for much longer than treaps, and are known to behave well as search trees. The expected length of the search path to any given value is at most 2 ln n, and the whole tree has logarithmic depth (its maximum root-to-leaf distance) with high probability. More formally, there exists a constant c such that the depth is c log n with probability tending to one as the number of nodes n tends to infinity. The same good behavior carries over to treaps. It is also possible, as suggested by Aragon and Seidel, to reprioritize frequently-accessed nodes, causing them to move towards the root of the treap and speeding up future accesses for the same keys. In sorting Levcopoulos and Petersson describe a sorting algorithm based on Cartesian trees. They describe the algorithm as based on a tree with the maximum at the root, but it can be modified straightforwardly to support a Cartesian tree with the convention that the minimum value is at the root. For consistency, it is this modified version of the algorithm that is described below. The Levcopoulos–Petersson algorithm can be viewed as a version of selection sort or heap sort that maintains a priority queue of candidate minima, and that at each step finds and removes the minimum value in this queue, moving this value to the end of an output sequence. In their algorithm, the priority queue consists only of elements whose parent in the Cartesian tree has already been found and removed. Thus, the algorithm consists of the following steps: Construct a Cartesian tree for the input sequence Initialize a priority queue, initially containing only the tree root While the priority queue is non-empty: Find and remove the minimum value in the priority queue Add this value to the output sequence Add the Cartesian tree children of the removed value to the priority queue As Levcopoulos and Petersson show, for input sequences that are already nearly sorted, the size of the priority queue will remain small, allowing this method to take advantage of the nearly-sorted input and run more quickly. Specifically, the worst-case running time of this algorithm is O(n log k), where n is the sequence length and k is the average, over all values in the sequence, of the number of consecutive pairs of sequence values that bracket the given value (meaning that the given value is between the two sequence values). They also prove a lower bound stating that, for any n and (non-constant) k, any comparison-based sorting algorithm must use Ω(n log k) comparisons for some inputs. 
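The steps listed above translate almost line for line into code; a sketch using Python's heapq together with the build_cartesian_tree helper from the earlier sketch:

import heapq

def cartesian_tree_sort(seq):
    """Levcopoulos-Petersson sort: a node's children become candidate
    minima only after the node itself has been output, so on nearly
    sorted inputs the heap stays small."""
    if not seq:
        return []
    root, parent, left, right = build_cartesian_tree(seq)
    heap = [(seq[root], root)]
    out = []
    while heap:
        value, i = heapq.heappop(heap)   # remove the current minimum
        out.append(value)
        for child in (left[i], right[i]):
            if child != -1:              # its children join the queue
                heapq.heappush(heap, (seq[child], child))
    return out

print(cartesian_tree_sort([12, 10, 20, 15, 18]))  # [10, 12, 15, 18, 20]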
In pattern matching The problem of Cartesian tree matching has been defined as a generalized form of string matching in which one seeks a substring (or in some cases, a subsequence) of a given string that has a Cartesian tree of the same form as a given pattern. Fast algorithms for variations of the problem with a single pattern or multiple patterns have been developed, as well as data structures analogous to the suffix tree and other text indexing structures. Notes References Binary trees Sorting algorithms
Cartesian tree
Mathematics
3,539
19,555
https://en.wikipedia.org/wiki/Molecule
A molecule is a group of two or more atoms that are held together by attractive forces known as chemical bonds; depending on context, the term may or may not include ions that satisfy this criterion. In quantum physics, organic chemistry, and biochemistry, the distinction from ions is dropped and molecule is often used when referring to polyatomic ions. A molecule may be homonuclear, that is, it consists of atoms of one chemical element, e.g. two atoms in the oxygen molecule (O2); or it may be heteronuclear, a chemical compound composed of more than one element, e.g. water (two hydrogen atoms and one oxygen atom; H2O). In the kinetic theory of gases, the term molecule is often used for any gaseous particle regardless of its composition. This relaxes the requirement that a molecule contains two or more atoms, since the noble gases are individual atoms. Atoms and complexes connected by non-covalent interactions, such as hydrogen bonds or ionic bonds, are typically not considered single molecules. Concepts similar to molecules have been discussed since ancient times, but modern investigation into the nature of molecules and their bonds began in the 17th century. Refined over time by scientists such as Robert Boyle, Amedeo Avogadro, Jean Perrin, and Linus Pauling, the study of molecules is today known as molecular physics or molecular chemistry. Etymology According to Merriam-Webster and the Online Etymology Dictionary, the word "molecule" derives from the Latin "moles" or small unit of mass. The word is derived from French (1678), from Neo-Latin , diminutive of Latin "mass, barrier". The word, which until the late 18th century was used only in Latin form, became popular after being used in works of philosophy by Descartes. History The definition of the molecule has evolved as knowledge of the structure of molecules has increased. Earlier definitions were less precise, defining molecules as the smallest particles of pure chemical substances that still retain their composition and chemical properties. This definition often breaks down since many substances in ordinary experience, such as rocks, salts, and metals, are composed of large crystalline networks of chemically bonded atoms or ions, but are not made of discrete molecules. The modern concept of molecules can be traced back towards pre-scientific and Greek philosophers such as Leucippus and Democritus who argued that all the universe is composed of atoms and voids. Circa 450 BC Empedocles imagined fundamental elements (fire (), earth (), air (), and water ()) and "forces" of attraction and repulsion allowing the elements to interact. A fifth element, the incorruptible quintessence aether, was considered to be the fundamental building block of the heavenly bodies. The viewpoint of Leucippus and Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe. In a more concrete manner, however, the concept of aggregates or units of bonded atoms, i.e. "molecules", traces its origins to Robert Boyle's 1661 hypothesis, in his famous treatise The Sceptical Chymist, that matter is composed of clusters of particles and that chemical change results from the rearrangement of the clusters. Boyle argued that matter's basic elements consisted of various sorts and sizes of particles, called "corpuscles", which were capable of arranging themselves into groups. In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds. 
If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and similarly for the other combinations of ultimate particles. Amedeo Avogadro created the word "molecule". In his 1811 paper "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", he essentially states, according to Partington's A Short History of Chemistry, that the smallest particles of gases are not necessarily simple atoms, but are made up of a certain number of these atoms united by attraction to form a single molecule. In coordination with these concepts, in 1833 the French chemist Marc Antoine Auguste Gaudin presented a clear account of Avogadro's hypothesis, regarding atomic weights, by making use of "volume diagrams", which clearly show both semi-correct molecular geometries, such as a linear water molecule, and correct molecular formulas, such as H2O. In 1917, a then-unknown American undergraduate chemical engineer named Linus Pauling was learning the Dalton hook-and-eye bonding method, which was the mainstream description of bonds between atoms at the time. Pauling, however, was not satisfied with this method and looked to the newly emerging field of quantum physics for a new method. In 1926, French physicist Jean Perrin received the Nobel Prize in physics for proving, conclusively, the existence of molecules. He did this by calculating the Avogadro constant using three different methods, all involving liquid phase systems. First, he used a gamboge soap-like emulsion, second by doing experimental work on Brownian motion, and third by confirming Einstein's theory of particle rotation in the liquid phase. In 1927, the physicists Fritz London and Walter Heitler applied the new quantum mechanics to deal with the saturable, nondynamic forces of attraction and repulsion, i.e., exchange forces, of the hydrogen molecule. Their valence bond treatment of this problem, in their joint paper, was a landmark in that it brought chemistry under quantum mechanics. Their work was an influence on Pauling, who had just received his doctorate and visited Heitler and London in Zürich on a Guggenheim Fellowship. Subsequently, in 1931, building on the work of Heitler and London and on theories found in Lewis' famous article, Pauling published his ground-breaking article "The Nature of the Chemical Bond" in which he used quantum mechanics to calculate properties and structures of molecules, such as angles between bonds and rotation about bonds. Building on these concepts, Pauling developed hybridization theory to account for bonds in molecules such as CH4, in which four sp³ hybridised orbitals are overlapped by hydrogen's 1s orbital, yielding four sigma (σ) bonds. The four bonds are of the same length and strength, which yields a tetrahedral molecular structure. Molecular science The science of molecules is called molecular chemistry or molecular physics, depending on whether the focus is on chemistry or physics. Molecular chemistry deals with the laws governing the interaction between molecules that results in the formation and breakage of chemical bonds, while molecular physics deals with the laws governing their structure and properties. In practice, however, this distinction is vague. In molecular sciences, a molecule consists of a stable system (bound state) composed of two or more atoms. Polyatomic ions may sometimes be usefully thought of as electrically charged molecules. 
The term unstable molecule is used for very reactive species, i.e., short-lived assemblies (resonances) of electrons and nuclei, such as radicals, molecular ions, Rydberg molecules, transition states, van der Waals complexes, or systems of colliding atoms as in Bose–Einstein condensate. Prevalence Molecules as components of matter are common. They also make up most of the oceans and atmosphere. Most organic substances are molecules. The substances of life are molecules, e.g. proteins, the amino acids of which they are composed, the nucleic acids (DNA and RNA), sugars, carbohydrates, fats, and vitamins. The nutrient minerals are generally ionic compounds, thus they are not molecules, e.g. iron sulfate. However, the majority of familiar solid substances on Earth are made partly or completely of crystals or ionic compounds, which are not made of molecules. These include all of the minerals that make up the substance of the Earth, sand, clay, pebbles, rocks, boulders, bedrock, the molten interior, and the core of the Earth. All of these contain many chemical bonds, but are not made of identifiable molecules. No typical molecule can be defined for salts nor for covalent crystals, although these are often composed of repeating unit cells that extend either in a plane, e.g. graphene; or three-dimensionally e.g. diamond, quartz, sodium chloride. The theme of repeated unit-cellular-structure also holds for most metals which are condensed phases with metallic bonding. Thus solid metals are not made of molecules. In glasses, which are solids that exist in a vitreous disordered state, the atoms are held together by chemical bonds with no presence of any definable molecule, nor any of the regularity of repeating unit-cellular-structure that characterizes salts, covalent crystals, and metals. Bonding Molecules are generally held together by covalent bonding. Several non-metallic elements exist only as molecules in the environment either in compounds or as homonuclear molecules, not as free atoms: for example, hydrogen. While some people say a metallic crystal can be considered a single giant molecule held together by metallic bonding, others point out that metals behave very differently than molecules. Covalent A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are termed shared pairs or bonding pairs, and the stable balance of attractive and repulsive forces between atoms, when they share electrons, is termed covalent bonding. Ionic Ionic bonding is a type of chemical bond that involves the electrostatic attraction between oppositely charged ions, and is the primary interaction occurring in ionic compounds. The ions are atoms that have lost one or more electrons (termed cations) and atoms that have gained one or more electrons (termed anions). This transfer of electrons is termed electrovalence in contrast to covalence. In the simplest case, the cation is a metal atom and the anion is a nonmetal atom, but these ions can be of a more complicated nature, e.g. molecular ions like NH4+ or SO42−. At normal temperatures and pressures, ionic bonding mostly creates solids (or occasionally liquids) without separate identifiable molecules, but the vaporization/sublimation of such materials does produce separate molecules where electrons are still transferred fully enough for the bonds to be considered ionic rather than covalent. 
Molecular size Most molecules are far too small to be seen with the naked eye, although molecules of many polymers can reach macroscopic sizes, including biopolymers such as DNA. Molecules commonly used as building blocks for organic synthesis have a dimension of a few angstroms (Å) to several dozen Å, or around one billionth of a meter. Single molecules cannot usually be observed by light (as noted above), but small molecules and even the outlines of individual atoms may be traced in some circumstances by use of an atomic force microscope. Some of the largest molecules are macromolecules or supermolecules. The smallest molecule is the diatomic hydrogen (H2), with a bond length of 0.74 Å. Effective molecular radius is the size a molecule displays in solution. The table of permselectivity for different substances contains examples. Molecular formulas Chemical formula types The chemical formula for a molecule uses one line of chemical element symbols, numbers, and sometimes also other symbols, such as parentheses, dashes, brackets, and plus (+) and minus (−) signs. These are limited to one typographic line of symbols, which may include subscripts and superscripts. A compound's empirical formula is a very simple type of chemical formula. It is the simplest integer ratio of the chemical elements that constitute it. For example, water is always composed of a 2:1 ratio of hydrogen to oxygen atoms, and ethanol (ethyl alcohol) is always composed of carbon, hydrogen, and oxygen in a 2:6:1 ratio. However, this does not determine the kind of molecule uniquely – dimethyl ether has the same ratios as ethanol, for instance. Molecules with the same atoms in different arrangements are called isomers. Also carbohydrates, for example, have the same ratio (carbon:hydrogen:oxygen= 1:2:1) (and thus the same empirical formula) but different total numbers of atoms in the molecule. The molecular formula reflects the exact number of atoms that compose the molecule and so characterizes different molecules. However different isomers can have the same atomic composition while being different molecules. The empirical formula is often the same as the molecular formula but not always. For example, the molecule acetylene has molecular formula C2H2, but the simplest integer ratio of elements is CH. The molecular mass can be calculated from the chemical formula and is expressed in conventional atomic mass units equal to 1/12 of the mass of a neutral carbon-12 (12C isotope) atom. For network solids, the term formula unit is used in stoichiometric calculations. Structural formula For molecules with a complicated 3-dimensional structure, especially involving atoms bonded to four different substituents, a simple molecular formula or even semi-structural chemical formula may not be enough to completely specify the molecule. In this case, a graphical type of formula called a structural formula may be needed. Structural formulas may in turn be represented with a one-dimensional chemical name, but such chemical nomenclature requires many words and terms which are not part of chemical formulas. Molecular geometry Molecules have fixed equilibrium geometries—bond lengths and angles— about which they continuously oscillate through vibrational and rotational motions. A pure substance is composed of molecules with the same average geometrical structure. The chemical formula and the structure of a molecule are the two important factors that determine its properties, particularly its reactivity. 
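As described above, the molecular mass can be calculated from the chemical formula; the following is a minimal sketch, with a tiny hand-typed table of standard atomic masses for illustration (a real tool would cover the whole periodic table and handle parentheses and charges):

import re

# A few standard atomic masses in unified atomic mass units.
ATOMIC_MASS = {"H": 1.008, "C": 12.011, "N": 14.007, "O": 15.999}

def molecular_mass(formula):
    """Sum atomic masses for simple formulas like 'H2O' or 'C2H2'."""
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_MASS[symbol] * (int(count) if count else 1)
    return total

print(molecular_mass("H2O"))   # ~18.02
print(molecular_mass("C2H2"))  # ~26.04; same empirical formula (CH) as benzene's C6H6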
Isomers share a chemical formula but normally have very different properties because of their different structures. Stereoisomers, a particular type of isomer, may have very similar physico-chemical properties and at the same time different biochemical activities. Molecular spectroscopy Molecular spectroscopy deals with the response (spectrum) of molecules interacting with probing signals of known energy (or frequency, according to the Planck relation). Molecules have quantized energy levels that can be analyzed by detecting the molecule's energy exchange through absorbance or emission. Spectroscopy does not generally refer to diffraction studies where particles such as neutrons, electrons, or high energy X-rays interact with a regular arrangement of molecules (as in a crystal). Microwave spectroscopy commonly measures changes in the rotation of molecules, and can be used to identify molecules in outer space. Infrared spectroscopy measures the vibration of molecules, including stretching, bending or twisting motions. It is commonly used to identify the kinds of bonds or functional groups in molecules. Changes in the arrangements of electrons yield absorption or emission lines in ultraviolet, visible or near infrared light, and result in colour. Nuclear magnetic resonance spectroscopy measures the environment of particular nuclei in the molecule, and can be used to characterise the numbers of atoms in different positions in a molecule. Theoretical aspects The study of molecules by molecular physics and theoretical chemistry is largely based on quantum mechanics and is essential for the understanding of the chemical bond. The simplest of molecules is the hydrogen molecule-ion, H2+, and the simplest of all the chemical bonds is the one-electron bond. H2+ is composed of two positively charged protons and one negatively charged electron, which means that the Schrödinger equation for the system can be solved more easily due to the lack of electron–electron repulsion. With the development of fast digital computers, approximate solutions for more complicated molecules became possible and are one of the main aspects of computational chemistry. When trying to define rigorously whether an arrangement of atoms is sufficiently stable to be considered a molecule, IUPAC suggests that it "must correspond to a depression on the potential energy surface that is deep enough to confine at least one vibrational state". This definition does not depend on the nature of the interaction between the atoms, but only on the strength of the interaction. In fact, it includes weakly bound species that would not traditionally be considered molecules, such as the helium dimer, He2, which has one vibrational bound state and is so loosely bound that it is only likely to be observed at very low temperatures. Whether or not an arrangement of atoms is sufficiently stable to be considered a molecule is inherently an operational definition. Philosophically, therefore, a molecule is not a fundamental entity (in contrast, for instance, to an elementary particle); rather, the concept of a molecule is the chemist's way of making a useful statement about the strengths of atomic-scale interactions in the world that we observe. 
See also Atom Chemical polarity Chemical structure Covalent bond Diatomic molecule List of compounds List of interstellar and circumstellar molecules Molecular biology Molecular design software Molecular engineering Molecular geometry Molecular Hamiltonian Molecular ion Molecular modelling Molecular promiscuity Molecular orbital Non-covalent bonding Periodic systems of small molecules Small molecule Comparison of software for molecular mechanics modeling Van der Waals molecule World Wide Molecular Matrix References External links Molecule of the MonthSchool of Chemistry, University of Bristol Chemistry Matter
Molecule
Physics,Chemistry
3,494
63,221,711
https://en.wikipedia.org/wiki/Cyclooctadiene%20iridium%20methoxide%20dimer
Cyclooctadiene iridium methoxide dimer is an organoiridium compound with the formula Ir2(OCH3)2(C8H12)2, where C8H12 is the diene 1,5-cyclooctadiene. It is a yellow solid that is soluble in organic solvents. The complex is used as a precursor to other iridium complexes, some of which are used in homogeneous catalysis. The compound is prepared by treating cyclooctadiene iridium chloride dimer with sodium methoxide. In terms of its molecular structure, the iridium centers are square planar as is typical for a d8 complex. The Ir2O2 core is folded. References Homogeneous catalysis Cyclooctadiene complexes Organoiridium compounds
Cyclooctadiene iridium methoxide dimer
Chemistry
168
8,517,337
https://en.wikipedia.org/wiki/Incomplete%20LU%20factorization
In numerical linear algebra, an incomplete LU factorization (abbreviated as ILU) of a matrix is a sparse approximation of the LU factorization often used as a preconditioner. Introduction Consider a sparse linear system Ax = b. These are often solved by computing the factorization A = LU, with L lower unitriangular and U upper triangular. One then solves Ly = b, Ux = y, which can be done efficiently because the matrices are triangular. For a typical sparse matrix, the LU factors can be much less sparse than the original matrix — a phenomenon called fill-in. The memory requirements for using a direct solver can then become a bottleneck in solving linear systems. One can combat this problem by using fill-reducing reorderings of the matrix's unknowns, such as the minimum degree algorithm. An incomplete factorization instead seeks triangular matrices L, U such that A ≈ LU rather than A = LU. Solving LUx = b for x can be done quickly but does not yield the exact solution to Ax = b. So, we instead use the matrix M = LU as a preconditioner in another iterative solution algorithm such as the conjugate gradient method or GMRES. Definition For a given matrix A one defines its graph (nonzero pattern) as G(A) := {(i,j) : a_ij ≠ 0}, which is used to define the conditions a sparsity pattern S needs to fulfill: S must contain all diagonal positions, and it is typically chosen to contain G(A) as well. A decomposition of the form A = LU − R, where the following hold: L is a lower unitriangular matrix; U is an upper triangular matrix; L and U are zero outside of the sparsity pattern: l_ij = u_ij = 0 for (i,j) ∉ S; the residual R is zero within the sparsity pattern: r_ij = 0 for (i,j) ∈ S; is called an incomplete LU decomposition (with respect to the sparsity pattern S). The sparsity pattern of L and U is often chosen to be the same as the sparsity pattern of the original matrix A. If the underlying matrix structure can be referenced by pointers instead of copied, the only extra memory required is for the entries of L and U. This preconditioner is called ILU(0). Stability Concerning the stability of the ILU the following theorem was proven by Meijerink and van der Vorst. Let A be an M-matrix. Then the incomplete LU decomposition exists, and it is at least as stable as the (complete) LU decomposition. Generalizations One can obtain a more accurate preconditioner by allowing some level of extra fill in the factorization. A common choice is to use the sparsity pattern of A^2 instead of A; this matrix is appreciably more dense than A, but still sparse overall. This preconditioner is called ILU(1). One can then generalize this procedure; the ILU(k) preconditioner of a matrix A is the incomplete LU factorization with the sparsity pattern of the matrix A^(k+1). More accurate ILU preconditioners require more memory, to such an extent that eventually the running time of the algorithm increases even though the total number of iterations decreases. Consequently, there is a cost/accuracy trade-off that users must evaluate, typically on a case-by-case basis depending on the family of linear systems to be solved. An approximation to the ILU factorization can be performed as a fixed-point iteration in a highly parallel way. See also Incomplete Cholesky factorization References Saad, Iterative Methods for Sparse Linear Systems. See Section 10.3 and further. External links Incomplete LU Factorization on CFD Wiki Numerical linear algebra
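A minimal sketch of ILU(0) on a dense NumPy array: Gaussian elimination that writes updates only at positions where A itself is nonzero, so any fill-in outside the pattern is simply discarded. Production implementations work on compressed sparse storage instead, and the function and variable names here are this sketch's own:

import numpy as np

def ilu0(A):
    """ILU(0): LU factors confined to the nonzero pattern of A.
    Returns one matrix packing L (unit diagonal implied, strictly
    lower part) together with U (upper triangle)."""
    LU = A.astype(float).copy()
    nz = A != 0                  # the sparsity pattern S = G(A)
    n = A.shape[0]
    for i in range(1, n):
        for k in range(i):
            if nz[i, k]:                 # eliminate within the pattern
                LU[i, k] = LU[i, k] / LU[k, k]
                for j in range(k + 1, n):
                    if nz[i, j]:         # drop fill-in outside S
                        LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

Applying the preconditioner inside a Krylov iteration then amounts to one forward substitution with the unit lower triangle and one back substitution with the upper triangle per step.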
Incomplete LU factorization
Mathematics
680
7,874,500
https://en.wikipedia.org/wiki/Chemical%20Workers%27%20Union%20%28United%20Kingdom%29
The Chemical Workers' Union was a trade union in the United Kingdom. History The union was established in 1912 by a small group of pharmacists, as the Retail Chemists' Association. Most of its members were also members of the Pharmaceutical Society, and it focused on improving standards in the trade and limiting the number of apprentices. By 1918, it had only 447 members, and so it decided to become an industrial union, accepting all workers involved in producing and distributing drugs and chemicals. It changed its name to the Amalgamated Society of Pharmacists, Drug and Chemical Workers, and for the first time registered as a trade union. In December 1918, the London wholesale drug workers' branch of the National Amalgamated Union of Shop Assistants, Warehousemen and Clerks decided to dissolve and encourage its members to join the Chemical Workers. As it had 3,000 members, they quickly became prominent in the Chemical Workers, with Fred Hawkins becoming its full-time organiser. In 1920, the small National Association of Chemists' Assistants also joined the union, which became the National Union of Drug and Chemical Workers. It employed Herbert Nightingale as its first full-time general secretary, and launched a journal, the Drug Union News. However, it fell into financial difficulties, laying off Hawkins, and by the end of 1922, membership had fallen to only 2,500. Militant trade unionists won control of the executive. In 1923, the union affiliated to the Trades Union Congress (TUC), but the Shop Assistants Union claimed it was a breakaway union and should return various "poached" members. The Chemical Workers were unwilling to do this, and so in 1924 again left the TUC. Nightingale resigned as general secretary, concerned he would lose an election, and the former secretary of the London wholesale drug workers, Arthur Gillian, was elected as his replacement. Under Gillian's leadership, the union grew a little, membership reaching 3,376 by 1926. The union strongly supported the UK general strike, and became strongly influenced by the Independent Labour Party, of which Gillian was a member, and the National Minority Movement, with Dick Beech and the employees of the Russian Oil Products Company playing a leading role. In 1936, the union decided to try to become the sole union for chemical workers. It changed its name to the "Chemical Workers' Union", and again applied to affiliate to the TUC, but was rejected. It then reapplied each year, winning the support of the large majority of TUC affiliates, but due to the opposition of the large general unions, it was unable to secure admission until 1943. In 1938, Bob Edwards joined the union, soon becoming an organiser. During World War II, he led significant industrial action while many other unions refused to do so. Membership grew, and by 1943 reached 22,000, with the union particularly strong at ICI. In 1961, the union absorbed the National Union of Atomic Workers, which had formed in the 1950s as a breakaway from the Transport and General Workers Union. The union merged into the Transport and General Workers' Union in 1971. General Secretaries 1912: E. N. Lloyd 1920: Herbert Nightingale 1924: Arthur J. Gillan 1947: Robert Edwards References Defunct trade unions of the United Kingdom Chemical industry in the United Kingdom Chemical industry trade unions Transport and General Workers' Union amalgamations Trade unions established in 1912 Trade unions disestablished in 1971 Trade unions based in London
Chemical Workers' Union (United Kingdom)
Chemistry
700
4,739,827
https://en.wikipedia.org/wiki/Hyperplane%20separation%20theorem
In geometry, the hyperplane separation theorem is a theorem about disjoint convex sets in n-dimensional Euclidean space. There are several rather similar versions. In one version of the theorem, if both these sets are closed and at least one of them is compact, then there is a hyperplane in between them and even two parallel hyperplanes in between them separated by a gap. In another version, if both disjoint convex sets are open, then there is a hyperplane in between them, but not necessarily any gap. An axis which is orthogonal to a separating hyperplane is a separating axis, because the orthogonal projections of the convex bodies onto the axis are disjoint. The hyperplane separation theorem is due to Hermann Minkowski. The Hahn–Banach separation theorem generalizes the result to topological vector spaces. A related result is the supporting hyperplane theorem. In the context of support-vector machines, the optimally separating hyperplane or maximum-margin hyperplane is a hyperplane which separates two convex hulls of points and is equidistant from the two. Statements and proof In all cases, assume A and B to be disjoint, nonempty, and convex subsets of R^n. The summary of the results is as follows: The number of dimensions must be finite. In infinite-dimensional spaces there are examples of two closed, convex, disjoint sets which cannot be separated by a closed hyperplane (a hyperplane where a continuous linear functional equals some constant) even in the weak sense where the inequalities are not strict. Here, the compactness in the hypothesis cannot be relaxed; see an example in the section Counterexamples and uniqueness. This version of the separation theorem does generalize to infinite dimensions; the generalization is more commonly known as the Hahn–Banach separation theorem. The proof is based on the following lemma: Since a separating hyperplane cannot intersect the interiors of open convex sets, we have a corollary: Case with possible intersections If the sets have possible intersections, but their relative interiors are disjoint, then the proof of the first case still applies with no change, thus yielding: in particular, we have the supporting hyperplane theorem. Converse of theorem Note that the existence of a hyperplane that only "separates" two convex sets in the weak sense of both inequalities being non-strict obviously does not imply that the two sets are disjoint. Both sets could have points located on the hyperplane. Counterexamples and uniqueness If one of A or B is not convex, then there are many possible counterexamples. For example, A and B could be concentric circles. A more subtle counterexample is one in which A and B are both closed but neither one is compact. For example, if A is a closed half plane and B is bounded by one arm of a hyperbola, then there is no strictly separating hyperplane. (Although, by an instance of the second theorem, there is a hyperplane that separates their interiors.) Another type of counterexample has A compact and B open. For example, A can be a closed square and B can be an open square that touches A. In the first version of the theorem, evidently the separating hyperplane is never unique. In the second version, it may or may not be unique. Technically a separating axis is never unique because it can be translated; in the second version of the theorem, a separating axis can be unique up to translation. The horn angle provides a good counterexample to many hyperplane separations.
For example, in R^2, the closed unit disk is disjoint from the open line segment joining (1, 0) and (1, 1), but the only line separating them, x = 1, contains the entirety of that segment. This shows that if A is closed and B is relatively open, then there does not necessarily exist a separation that is strict for B. However, if A is a closed polytope then such a separation exists. More variants Farkas' lemma and related results can be understood as hyperplane separation theorems when the convex bodies are defined by finitely many linear inequalities. More results may be found. Use in collision detection In collision detection, the hyperplane separation theorem is usually used in the following form: Regardless of dimensionality, the separating axis is always a line. For example, in 3D, the space is separated by planes, but the separating axis is perpendicular to the separating plane. The separating axis theorem can be applied for fast collision detection between polygon meshes. Each face's normal or other feature direction is used as a separating axis. Note that this yields possible separating axes, not separating lines/planes. In 3D, using face normals alone will fail to separate some edge-on-edge non-colliding cases. Additional axes, consisting of the cross-products of pairs of edges, one taken from each object, are required. For increased efficiency, parallel axes may be calculated as a single axis.
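As an illustration of the separating-axis test just described, here is a minimal 2-D sketch (our own, with illustrative names; in 2-D the edge normals alone suffice as candidate axes, whereas 3-D additionally needs the edge–edge cross-product axes mentioned above):

    import numpy as np

    def project(points, axis):
        # Orthogonal projection of 2-D points onto an axis: an interval (min, max)
        dots = points @ axis
        return dots.min(), dots.max()

    def convex_polygons_collide(P, Q):
        # Separating-axis test for two convex polygons given as (n, 2) vertex
        # arrays with consecutive vertices forming edges. The shapes are
        # disjoint iff some edge normal yields non-overlapping projections.
        for poly in (P, Q):
            n = len(poly)
            for i in range(n):
                edge = poly[(i + 1) % n] - poly[i]
                axis = np.array([-edge[1], edge[0]])   # edge normal (sign irrelevant)
                amin, amax = project(P, axis)
                bmin, bmax = project(Q, axis)
                if amax < bmin or bmax < amin:         # gap found: a separating axis
                    return False
        return True                                     # no separating axis exists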
Hyperplane separation theorem
Mathematics
1,060
45,101,818
https://en.wikipedia.org/wiki/Circle%20of%20equal%20altitude
The circle of equal altitude, also called circle of position (CoP), is defined as the locus of points on Earth on which an observer sees a celestial object such as the sun or a star, at a given time, with the same observed altitude. It was discovered by the American sea-captain Thomas Hubbard Sumner in 1837, published in 1843, and is the basis of an important method in celestial navigation. Discovery Sumner discovered the line on a voyage from South Carolina to Greenock in Scotland in 1837. On December 17, as he was nearing the coast of Wales, he was uncertain of his position after several days of cloudy weather and no sights. A momentary opening in the clouds allowed him to determine the altitude of the sun. This, together with the chronometer time and the latitude, enabled him to calculate the longitude. But he was not confident of his latitude, which depended on dead reckoning (DR). So he calculated longitude using his DR value and two more values of latitude 10' and 20' to the north. He found that the three positions were on a straight line which happened to pass through Smalls Lighthouse. He realised that he must be located somewhere on that line and that if he set course E.N.E. along the line he should eventually sight the Smalls Light, which, in fact, he did, in less than an hour. Having found the line empirically, he then worked out the theory, and published this in a book in 1843. The method was quickly recognized as an important development in celestial navigation, and was made available to every ship in the United States Navy. Parameters The center of the CoP is the geographical position (GP) of the observed body, the substellar point for a star, the subsolar point for the sun. The radius is the great circle distance equal to the zenith distance of the body. Center = geographical position (GP) of the body: (B, L) = (Dec, -GHA) If L is defined as west longitude (+W/-E) then it will be +GHA, since HA (GHA or LHA) is always measured west-ward (+W/-E). Radius = zenith distance: zd [nm] = 60 ⋅ (90 - Ho) (aka co-altitude of Ho) As the circles used for navigation generally have a radius of thousands of miles, a segment a few tens of miles long closely approximates a straight line, as described in Sumner's original use of the method. Equation The equation of the circle of equal altitude, sin(Ho) = sin(B)·sin(Dec) + cos(B)·cos(Dec)·cos(LHA), links the following variables: The position of the observer: B, L. The coordinates of the observed star, its geographical position: GHA, Dec. The true altitude of the body: Ho. Here B is the latitude (+N/-S) and L the longitude (+E/-W). LHA = GHA + L is the local hour angle (+W/-E), Dec and GHA are the declination and Greenwich hour angle of the star observed. And Ho is the true or observed altitude, that is, the altitude measured with a sextant corrected for dip, refraction and parallax. Special cases of COPs Parallel of latitude by Polaris altitude. Parallel of latitude by altitude of the sun at noon, or meridian altitude. Meridian of longitude, knowing the time and latitude. Circle of illumination or terminator (star = Sun, Ho = 0 for places at Sunrise/Sunset). See also Almucantar Navigation Celestial navigation Intercept method Longitude by chronometer Sight reduction References External links Navigational Algorithms http://sites.google.com/site/navigationalalgorithms/ Papers: Vector equation of the Circle of Position, Use of rotation matrices to plot a circle of equal altitude.
Software: Plotting of the circumferences of equal altitude Correction to the sextant altitude Fix by Vector Solution for the Intersection of Two Circles of Equal Altitude. Android free App Navigation Celestial navigation
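A short numerical sketch of the altitude equation above (our own illustrative code and sample values; longitude is taken +E/-W and LHA = GHA + L, as in the article):

    import numpy as np

    def observed_altitude(B, L, dec, gha):
        # sin(Ho) = sin(B) sin(Dec) + cos(B) cos(Dec) cos(LHA), angles in degrees
        B, L, dec, gha = map(np.radians, (B, L, dec, gha))
        lha = gha + L                      # local hour angle
        sin_ho = np.sin(B) * np.sin(dec) + np.cos(B) * np.cos(dec) * np.cos(lha)
        return np.degrees(np.arcsin(sin_ho))

    Ho = observed_altitude(B=50.0, L=-5.0, dec=-23.0, gha=40.0)
    zd_nm = 60 * (90 - Ho)                 # radius of the circle of position, in nm
    print(Ho, zd_nm)

Every point on Earth for which the computed Ho equals the sextant observation lies on the same circle of equal altitude.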
Circle of equal altitude
Astronomy
817
29,714,573
https://en.wikipedia.org/wiki/7005%20aluminium%20alloy
7005 is an aluminium wrought alloy used in bicycle frames. Due to its relative ease of welding, it does not require expensive heat treating. It is, however, harder to form, making manufacture more challenging. It has an Ultimate Tensile Strength of 350 MPa, a Fatigue Strength of 150 MPa and a density of 2.78 g/cm3. It does not need to be precipitation hardened, but can be cooled in air. Specific forms of AL 7005 include 7005-O 7005-T5 7005-T53 7005-T6 Chemical composition The alloy composition of 7005 is: Properties Further reading Aluminum 7005-T6 Properties Aluminum 7005-O Properties Aluminum 7005-T53 Properties References Aluminum alloy table Aluminium–zinc alloys
7005 aluminium alloy
Chemistry
158
39,697,327
https://en.wikipedia.org/wiki/Fabry%20gap%20theorem
In mathematics, the Fabry gap theorem is a result about the analytic continuation of complex power series whose non-zero terms are of orders that have a certain "gap" between them. Such a power series is "badly behaved" in the sense that it cannot be extended to be an analytic function anywhere on the boundary of its disc of convergence. The theorem may be deduced from the first main theorem of Turán's method. Statement of the theorem Let 0 < p1 < p2 < ... be a sequence of integers such that the sequence pn/n diverges to ∞. Let (αj)j∈N be a sequence of complex numbers such that the power series f(z) = Σj αj z^(pj) has radius of convergence 1. Then the unit circle is a natural boundary for the series f. Converse A converse to the theorem was established by George Pólya. If lim inf pn/n is finite then there exists a power series with exponent sequence pn, radius of convergence equal to 1, but for which the unit circle is not a natural boundary. See also Gap theorem (disambiguation) Lacunary function Ostrowski–Hadamard gap theorem References Mathematical series Theorems in complex analysis
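A standard illustration of the gap condition (our example, not from the article): the lacunary series

    f(z) = Σ_{n≥1} z^(n²)

has exponents pn = n², so pn/n = n → ∞, and its radius of convergence is 1; by the theorem, the unit circle is a natural boundary for f. By contrast, the geometric series Σ z^n has pn/n = 1 and extends analytically across every boundary point except z = 1.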
Fabry gap theorem
Mathematics
248
12,757,598
https://en.wikipedia.org/wiki/Altai%20falcon
The Altai falcon has been identified as a color morph of the Central Asian saker falcon (Falco cherrug milvipes), as per the latest genetic research (Zinevich et al. 2023). Previously, it was variously classified as a morph, a subspecies (Falco cherrug altaicus), and even a separate species (Falco altaicus). It used to have a high reputation among Central Asian falconers. Distribution and taxonomy The Altai falcon breeds in a relatively small area of Central Asia across the Altai and Sayan Mountains. This area overlaps with the much larger breeding area of the saker falcon (Falco cherrug). Previously, it was believed that Altai falcons were either natural hybrids between sakers and gyrfalcons (Falco rusticolus), or rather the descendants of such rare hybrids backcrossing into the large population of sakers. However, the most recent research has demonstrated that Altai falcons are genetically intermingled with the broader Asian saker population and do not constitute a distinct cluster, indicating that they do not represent a separate taxonomic entity. Literature Almásy Gy 1903. Vándor-utam Ázsia szívébe. (My Travels to the Heart of Asia – in Hungarian) Budapest, Természettudományi Könyvkiadó-vállalat. Eastham CP, Nicholls MK, Fox NC 2002. Morphological variation of the saker (Falco cherrug) and the implications for conservation. Biodiversity and Conservation, 11, 305–325. Ellis DH 1995. What is Falco altaicus Menzbier? Journal of Raptor Research, 29, 15–25. Zinevich, L, Prommer, M, Laczkó, L, Rozhkova, D, Sorokin, A, Karyakin, I, Bagyura, J, Cserkész, T, Sramkó 2023. Phylogenomic insights into the polyphyletic nature of Altai falcons within eastern sakers (Falco cherrug) and the origins of gyrfalcons (Falco rusticolus) Scientific Reports, 13:17800. Menzbier MA 1891. (1888–1893). Ornithologie du Turkestan et des pays adjacents (Partie No. -O. de la Mongolie, steppes Kirghiz, contree Aralo-Caspienne, partie superieure du bassin d'Oxus, Pamir). Vol. 12. Publiee par l'Auteur, Moscow, Russia. Nittinger F, Gamauf A, Pinsker W, Wink M, Haring E 2007. Phylogeography and population structure of the saker falcon (Falco cherrug) and the influence of hybridization: mitochondrial and microsatellite data. Molecular Ecology, 16, 1497–1517. Orta J 1994. 57. Saker Falcon. In: del Hoyo J, Elliott A, Sargatal J (eds.): Handbook of Birds of the World, Volume 2: New World Vultures to Guineafowl: 273–274, plate 28. Lynx Edicions, Barcelona. Pfander 2011. Semispecies and Unidentified Hidden Hybrids (for Example of Birds of Prey) Raptors Conservation 23: 74-105. Potapov E, Sale R 2005. The Gyrfalcon. Poyser Species Monographs. A & C Black Publishers, London. Sushkin PP 1938. Birds of the Soviet Altai and adjacent parts of north-western Mongolia. Vol. 1. [In Russian.] Academy of Science of USSR Press, Moscow, Russia. External links to rare photos Altai falcon, Western Mongolia Altai falcon, Western Mongolia Altai falcon, Kazakhstan Altai falcon Falconry Birds of Mongolia Controversial bird taxa Bird hybrids Altai falcon
Altai falcon
Biology
811
10,503,894
https://en.wikipedia.org/wiki/Urban%20Realm
Urban Realm is a planning magazine published in Scotland, with a focus on Scottish issues. The magazine was established as Prospect in 1922 by the Royal Incorporation of Architects in Scotland, and is the oldest architectural magazine in Scotland. It was rebranded as Urban Realm in 2009 to reflect the wider environment in which architecture operates, covering policy, planning, engineering, and strategic issues, as well as new buildings. It is currently published by Urban Realm Ltd. Carbuncle Awards Intermittently from 2000 to 2015, the magazine promoted the Carbuncle Awards, which were aimed at highlighting poor design and planning in Scotland. The awards comprised the "Plook on the Plinth" award for "most dismal town", the "Pock Mark" award for the worst planning decision, and the "Zit Building" award for Scotland's most disappointing new building. In 2005, the magazine published a list of the 100 best modern Scottish buildings. Coatbridge in North Lanarkshire famously won the Carbuncle Award in 2007. References External links Urban Realm website 1922 establishments in Scotland Architecture magazines Magazines established in 1922 Magazines published in Scotland Quarterly magazines published in the United Kingdom Architecture in Scotland Town and country planning in Scotland Urban studies and planning magazines
Urban Realm
Engineering
251
19,513,970
https://en.wikipedia.org/wiki/HCMOS
HCMOS ("high-speed CMOS") is the set of specifications for electrical ratings and characteristics, forming the 74HC00 family, a part of the 7400 series of integrated circuits. The 74HC00 family followed, and improved upon, the 74C00 series (which provided an alternative CMOS logic family to the 4000 series but retained the part number scheme and pinouts of the standard 7400 series (especially the 74LS00 series)) . Some specifications include: DC supply voltage DC input voltage range DC output voltage range input rise and fall times output rise and fall times HCMOS also stands for high-density CMOS. The term was used to describe microprocessors, and other complex integrated circuits, which use a smaller manufacturing processes, producing more transistors per area. The Freescale 68HC11 is an example of a popular HCMOS microcontroller. Variations HCT stands for high-speed CMOS with transistor–transistor logic voltages. These devices are similar to the HCMOS types except they will operate at standard TTL power supply voltages and logic input levels. This allows for direct pin-to-pin compatible CMOS replacements to reduce power consumption without loss of speed. HCU stands for high-speed CMOS un-buffered. This type of CMOS contains no buffer and is ideal for crystals and other ceramic oscillators needing linearity. VHCMOS, or AHC, stands for very high-speed CMOS or advanced high-speed CMOS. Typical propagation delay time is between 3 ns and 4 ns. The speed is similar to Bipolar Schottky transistor TTL. AHCT stands for advanced high-speed CMOS with TTL inputs. Typical propagation delay time is between 5 ns and 6 ns. References External links HCMOS Design Considerations, Texas Instruments Integrated circuits Digital electronics
HCMOS
Technology,Engineering
386
72,645,041
https://en.wikipedia.org/wiki/Core-compact%20space
In general topology and related branches of mathematics, a core-compact topological space X is a topological space whose partially ordered set of open subsets is a continuous poset. Equivalently, X is core-compact if it is exponentiable in the category Top of topological spaces. Expanding the definition of an exponential object, this means that for any topological space Y, the set C(X, Y) of continuous functions from X to Y can be given a topology such that the function application map C(X, Y) × X → Y is continuous and every continuous map Z × X → Y corresponds to a unique continuous map Z → C(X, Y); this topology is given by the compact-open topology, which is the most general way to define it. Another equivalent concrete definition is that every neighborhood V of a point x contains a neighborhood U of x whose closure in V is compact. As a result, every (weakly) locally compact space is core-compact, and every Hausdorff (or more generally, sober) core-compact space is locally compact, so the definition is a slight weakening of the definition of a locally compact space in the non-Hausdorff case. See also Locally compact space References Further reading Topology
Core-compact space
Physics,Mathematics
199
18,766,220
https://en.wikipedia.org/wiki/Bertrand%E2%80%93Diguet%E2%80%93Puiseux%20theorem
In the mathematical study of the differential geometry of surfaces, the Bertrand–Diguet–Puiseux theorem expresses the Gaussian curvature of a surface in terms of the circumference of a geodesic circle, or the area of a geodesic disc. The theorem is named for Joseph Bertrand, Victor Puiseux, and Charles François Diguet. Let p be a point on a smooth surface M. The geodesic circle of radius r centered at p is the set of all points whose geodesic distance from p is equal to r. Let C(r) denote the circumference of this circle, and A(r) denote the area of the disc contained within the circle. The Bertrand–Diguet–Puiseux theorem asserts that the Gaussian curvature K at p satisfies K = lim_{r→0+} 3·(2πr − C(r))/(πr³) = lim_{r→0+} 12·(πr² − A(r))/(πr⁴). The theorem is closely related to the Gauss–Bonnet theorem. References Differential geometry of surfaces Theorems in differential geometry
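A quick symbolic check of both limits on a sphere (our own sketch using SymPy; on a sphere of radius R the Gaussian curvature is 1/R², and the geodesic-circle formulas below are standard):

    import sympy as sp

    r, R = sp.symbols('r R', positive=True)

    # On a sphere of radius R, a geodesic circle of radius r has
    # circumference C(r) = 2*pi*R*sin(r/R) and area A(r) = 2*pi*R^2*(1 - cos(r/R)).
    C = 2 * sp.pi * R * sp.sin(r / R)
    A = 2 * sp.pi * R**2 * (1 - sp.cos(r / R))

    K_from_C = sp.limit(3 * (2 * sp.pi * r - C) / (sp.pi * r**3), r, 0, '+')
    K_from_A = sp.limit(12 * (sp.pi * r**2 - A) / (sp.pi * r**4), r, 0, '+')
    print(K_from_C, K_from_A)   # both limits evaluate to 1/R**2, as expected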
Bertrand–Diguet–Puiseux theorem
Mathematics
182
31,161,972
https://en.wikipedia.org/wiki/IEC%2061400
IEC 61400 is an international standard published by the International Electrotechnical Commission (IEC) regarding wind turbines. Purpose and function IEC 61400 is a set of design requirements made to ensure that wind turbines are appropriately engineered against damage from hazards within the planned lifetime. The standard concerns most aspects of the turbine life from site conditions before construction, to turbine components being tested, assembled and operated. Wind turbines are capital intensive, and are usually purchased before they are erected and commissioned. Some of these standards provide technical conditions verifiable by an independent, third party, and as such are necessary in order to make business agreements so wind turbines can be financed and erected. IEC started standardizing international certification on the subject in 1995, and the first standard appeared in 2001. The common set of standards sometimes replaces the various national standards, forming a basis for global certification. Small wind turbines are defined as having up to 200 m2 swept area, and a somewhat simplified IEC 61400-2 standard addresses these. It is also possible to use the IEC 61400-1 standard for turbines of less than 200 m2 swept area. The standards for loads and noise are used in the development of prototypes at the Østerild Wind Turbine Test Field. Harmonization In the U.S., standards are intended to be compatible with IEC standards, and some parts of 61400 are required documentation. The U.S. National Renewable Energy Laboratory participates in IEC standards development work, and tests equipment according to these standards. For U.S. offshore turbines, however, more standards are needed, and the most important are: ISO 19900, General requirements for offshore structures ISO 19902, Fixed steel offshore structures ISO 19903, Fixed concrete offshore structures ISO 19904-1, Floating offshore structures – mono-hulls, semisubmersibles and spars ISO 19904-2, Floating offshore structures - tension-leg platforms API RP 2A-WSD, Recommended practice for planning, designing and constructing fixed offshore steel platforms - working stress design. In Canada, the previous national standards were outdated and impeded the wind industry, and they were updated and harmonized with 61400 by the Canadian Standards Association with several modifications. An update for IEC 61400 is scheduled for 2016. For small wind turbines the global industry has been working towards harmonisation of certification requirements with a "test once, certify everywhere" objective. Considerable co-operation has been taking place between the UK, USA, and more recently Japan, Denmark and other countries so that the IEC 61400-2 standard as interpreted within e.g. the MCS certification scheme (of UK origin) is interoperable with the USA (for example where it corresponds to an AWEA small wind turbine standard) and other countries. Wind Turbine Generator (WTG) classes Wind turbines are designed for specific conditions. During the construction and design phase assumptions are made about the wind climate that the wind turbines will be exposed to. Turbine wind class is just one of the factors needing consideration during the complex process of planning a wind power plant. Wind classes determine which turbine is suitable for the normal wind conditions of a particular site. Turbine classes are determined by three parameters: the average wind speed, extreme 50-year gust, and turbulence.
Turbulence intensity quantifies how much the wind varies, typically within 10 minutes. Because the fatigue loads of a number of major components in a wind turbine are mainly caused by turbulence, knowledge of a site's turbulence is of crucial importance (a simple turbulence-intensity calculation is sketched after the list of parts below). Normally the wind speed increases with increasing height due to vertical wind shear. In flat terrain the wind speed increases logarithmically with height. In complex terrain the wind profile is not a simple increase, and additionally a separation of the flow might occur, leading to heavily increased turbulence. The extreme wind speeds are based on the 3-second average wind speed. Turbulence is measured at 15 m/s wind speed. This is the definition in IEC 61400-1 edition 2. For U.S. waters, however, several hurricanes have already exceeded wind class Ia with speeds above 70 m/s (156 mph), and efforts are being made to provide suitable standards. In 2021, TÜV SÜD developed a standard to simulate a new wind class T1 for tropical cyclones. List of IEC 61400 parts IEC 61400-1:2005+AMD1:2010 Design requirements IEC 61400-1:2019 RLV Design requirements (Redline Version) IEC 61400-2:2013 Small wind turbines IEC 61400-3-1:2019 Design requirements for fixed offshore wind turbines IEC TS 61400-3-2:2019 Design requirements for floating offshore wind turbines IEC 61400-4:2012 Design requirements for wind turbine gearboxes IEC 61400-5:2020 Wind turbine blades IEC 61400-6:2020 Tower and foundation design requirements IEC 61400-8:2024 Design of wind turbine structural components IEC 61400-11:2012+AMD1:2018 CSV Acoustic noise measurement techniques (Consolidated Version) IEC TS 61400-11-2:2024 Acoustic noise measurement techniques - Measurement of wind turbine sound characteristics in receptor position IEC 61400-12:2022 Power performance measurements of electricity producing wind turbines - Overview IEC 61400-12-1:2022 Power performance measurements of electricity producing wind turbines IEC 61400-12-2:2022 Power performance of electricity producing wind turbines based on nacelle anemometry IEC 61400-12-3:2022 Power performance - Measurement based site calibration IEC TR 61400-12-4:2020 Numerical site calibration for power performance testing of wind turbines IEC 61400-12-5:2022 Power performance - Assessment of obstacles and terrain IEC 61400-12-6:2022 Measurement based nacelle transfer function of electricity producing wind turbines IEC 61400-13:2015+AMD1:2021 CSV Measurement of mechanical loads (Consolidated Version) IEC TS 61400-14:2005 Declaration of apparent sound power level and tonality values IEC 61400-21-1:2019 Measurement and assessment of electrical characteristics - Wind turbines IEC 61400-21-2:2023 Measurement and assessment of electrical characteristics - Wind power plants IEC TR 61400-21-3:2019 Measurement and assessment of electrical characteristics - Wind turbine harmonic model and its application IEC 61400-23:2014 Full-scale structural testing of rotor blades IEC 61400-24:2019 Lightning protection IEC 61400-25-1:2017 RLV Communications for monitoring and control of wind power plants - Overall description of principles and models (Redline Version) IEC 61400-25-2:2015 Communications for monitoring and control of wind power plants - Information models IEC 61400-25-3:2015 RLV Communications for monitoring and control of wind power plants - Information exchange models (Redline Version) IEC 61400-25-4:2016 RLV Communications for monitoring and control of wind power plants - Mapping to communication profile (Redline Version) IEC
61400-25-5:2017 Communications for monitoring and control of wind power plants - Conformance testing IEC 61400-25-6:2016 Communications for monitoring and control of wind power plants - Logical node classes and data classes for condition monitoring IEC TS 61400-25-71:2019 Communications for monitoring and control of wind power plants - Configuration description language IEC TS 61400-26-1:2019 Availability for wind energy generation systems IEC TS 61400-26-4:2024 Reliability for wind energy generation systems IEC 61400-27-1:2020 Electrical simulation models - Generic models IEC 61400-27-2:2020 Electrical simulation models - Model validation IEC TS 61400-29:2023 Marking and lighting of wind turbines IEC TS 61400-30:2023 Safety of wind turbine generators - General principles for design IEC TS 61400-31:2023 Siting risk assessment IEC 61400-50:2022 Wind measurement - Overview IEC 61400-50-1:2022 Wind measurement - Application of meteorological mast, nacelle and spinner mounted instruments IEC 61400-50-2:2022 Wind measurement - Application of ground-mounted remote sensing technology IEC 61400-50-3:2022 Use of nacelle-mounted lidars for wind measurements See also IEC 61400-25 References External links IEC 61400 Wind turbines - All parts 61400 Electric power transmission systems Electric power distribution Wind turbines
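To make the turbulence-intensity notion above concrete, here is a minimal sketch (our own illustrative code; the standard's exact estimators, bins, and normalization are not reproduced here):

    import numpy as np

    def turbulence_intensity(wind_speeds):
        # TI = standard deviation / mean of the wind speed over the
        # averaging period (conventionally 10 minutes of samples)
        u = np.asarray(wind_speeds, dtype=float)
        return u.std() / u.mean()

    # 600 one-second samples around the 15 m/s reference speed used by the standard
    rng = np.random.default_rng(0)
    samples = 15.0 + rng.normal(0.0, 2.0, 600)
    print(round(turbulence_intensity(samples), 3))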
IEC 61400
Technology
1,716
58,296,197
https://en.wikipedia.org/wiki/Surfactant%20leaching%20%28decontamination%29
Surfactant leaching is a method of water and soil decontamination used, e.g., for oil recovery in the petroleum industry. It involves mixing contaminated water or soil with surfactants, with subsequent leaching of the emulsified contaminants. In oil recovery, the most common surfactant types are ethoxylated alcohols, ethoxylated nonylphenols, sulphates, sulphonates, and biosurfactants. References Soil contamination Solid-solid separation Oil spill remediation technologies
Surfactant leaching (decontamination)
Chemistry,Environmental_science
113
23,652,115
https://en.wikipedia.org/wiki/C11H15NO2
{{DISPLAYTITLE:C11H15NO2}} The molecular formula C11H15NO2 (molar mass : 193.24 g/mol, exact mass : 193.110279) may refer to: 1,3-Benzodioxolylbutanamine Butamben m-Cumenyl methylcarbamate 3,4-Ethylidenedioxyamphetamine Isoprocarb Lobivine MDMA (3,4-MDMA, 3,4-Methylenedioxymethamphetamine) Methedrone 3-Methoxymethcathinone 1-Methylamino-1-(3,4-methylenedioxyphenyl)propane 2,3-Methylenedioxymethamphetamine (2,3-MDMA) 3,4-Methylenedioxyphentermine 2-Methyl-MDA 5-Methyl-MDA 6-Methyl-MDA Tolibut
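The quoted molar mass can be reproduced from standard atomic weights (a small illustrative calculation of ours; the exact mass instead uses the masses of the most abundant isotopes):

    # Molar mass of C11H15NO2 from standard atomic weights (g/mol)
    weights = {'C': 12.0107, 'H': 1.00794, 'N': 14.0067, 'O': 15.9994}
    formula = {'C': 11, 'H': 15, 'N': 1, 'O': 2}
    molar_mass = sum(weights[el] * n for el, n in formula.items())
    print(f"{molar_mass:.2f} g/mol")   # 193.24 g/mol, matching the article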
C11H15NO2
Chemistry
209
55,616,993
https://en.wikipedia.org/wiki/Mycorrhaphium%20africanum
Mycorrhaphium africanum is a species of tooth fungus in the family Steccherinaceae. It was described as new to science in 2003 by mycologists Dominique Claude Mossebo and Leif Ryvarden. The type was collected in the Dja Faunal Reserve in Cameroon, where it was found fruiting on fallen dead hardwood branches. Description The brownish, funnel-shaped cap measures in diameter, and is supported by a smooth stipe that is long and 3–6 mm in diameter. It is initially whitish before becoming pale brown to reddish brown with pink or white spots. The spines on the cap underside are white but become brownish when dry. They are densely packed, and measure up to long. Mycorrhaphium africanum has a dimitic hyphal system, comprising generative and skeletal hyphae. The skeletal hyphae are confined to the context of the stipe. Basidia are club-shaped, measuring 12–14 by 4–5 μm. The spores are smooth, hyaline, and cylindrical, with dimensions of 4.5–5 by 2 μm. References Steccherinaceae Fungi of Africa Fungi described in 2003 Taxa named by Leif Ryvarden Fungus species
Mycorrhaphium africanum
Biology
257
582,453
https://en.wikipedia.org/wiki/Age%20of%20Aquarius
The Age of Aquarius, in astrology, is either the current or forthcoming astrological age, depending on the method of calculation. Astrologers maintain that an astrological age is a product of the Earth's slow precessional rotation and lasts for 2,160 years, on average (one 25,920-year period of precession, or great year, divided by 12 zodiac signs equals a 2,160-year astrological age). There are various methods of calculating the boundaries of an astrological age. In Sun-sign astrology, the first sign is Aries, followed by Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, and Pisces, whereupon the cycle returns to Aries and through the zodiacal signs again. Astrological ages proceed in the opposite direction. Therefore, the Age of Aquarius follows the Age of Pisces. Overview The approximate 2,160 years for each age corresponds to the average time it takes for the vernal equinox to move from one constellation of the zodiac into the next. This average can be computed by dividing the Earth's 25,800-year gyroscopic precession period by 12, the number of zodiacal signs. This is only a rough calculation, as the length of time it takes for a complete precession is currently increasing. A more accurate set of figures is 25,772 years for a complete cycle and about 2,147.7 years per astrological age, assuming a constant precession rate. According to various astrologers' calculations, approximate dates for entering the age of Aquarius range from (Terry MacKinnell) to (John Addey). Astrologers do not agree on when the Aquarian age will start or even if it has already started. Campion lists various references from mainly astrological sources for the start of the Age of Aquarius. Based on Campion's summary, most published materials on the subject state that the Age of Aquarius arrived in the 20th century (29 claims), with the 24th century in second place with 12 claimants. Astrological ages are taken to be associated with the precession of the equinoxes. The slow wobble of the Earth's rotation axis on the celestial sphere is independent of the diurnal rotation of the Earth on its own axis and the annual revolution of the Earth around the Sun. Traditionally this 25,800-year-long cycle is calibrated, for the purposes of determining astrological ages, by the perceived location of the Sun in one of the 12 zodiac constellations at the vernal (Spring) equinox, which corresponds to the moment the Sun is perceived as crossing the celestial equator, marking the start of spring in the Northern Hemisphere each year. Roughly every 2,150 years the Sun's position at the time of the vernal equinox will have moved into a new zodiacal constellation. In 1929 the International Astronomical Union defined the edges of the 88 official constellations. The edge established between Pisces and Aquarius officially locates the beginning of the Aquarian Age around Many astrologers dispute this approach because of the varying sizes and overlap between the zodiacal constellations. They prefer the long-established convention of equally-sized signs, spaced every 30 degrees along the ecliptic, which are named after what were the 12 background zodiacal constellations when tropical astrology was codified Astrological meaning Astrologers believe that an astrological age affects humanity, possibly by influencing the rise and fall of civilizations or cultural tendencies.
Traditionally, Aquarius is associated with electricity, computers, flight, democracy, freedom, humanitarianism, idealism, modernization, nervous disorders, rebellion, nonconformity, philanthropy, veracity, perseverance, humanity, and irresolution. Among other dates, one view is that the age of Aquarius arrived around 1844, with the harbinger of Siyyid ʿAlí Muḥammad (1819–1850), who founded Bábism. Some authors promoted the view that, although no one knows when the Aquarian age begins, the American Revolution, the Industrial Revolution, and the discovery of electricity are all attributable to Aquarian influence. They make a number of predictions about the trends that they believe will develop in the Aquarian age. Proponents of medieval astrology suggest that the Pisces world, where religion is the opiate of the masses, will be replaced in the Aquarian age by a world ruled by secretive, power-hungry elites seeking absolute power over others; that knowledge in the Aquarian age will only be valued for its ability to win wars; that knowledge and science will be abused, not industry and trade; and that the Aquarian age will be another dark age in which religion is considered offensive. Another view suggests that the rise of scientific rationalism, combined with the fall of religious influence, the increasing focus on human rights since the 1780s, the exponential growth of technology, plus the advent of flight and space travel, are evidence of the dawning of the age of Aquarius. A "wave" theory of the shifting great ages suggests that the age of Aquarius will not arrive on a given date, but is instead emerging in influence over many years, similar to how the tide rises gradually, by small increments, rather than surging forward all at once. Rudolf Steiner believed that the age of Aquarius will arrive in 3573. In Steiner's approach, each age is exactly 2,160 years; based on this structure, the world has been in the age of Pisces since 1413 (and 1413 + 2,160 = 3,573). Rudolf Steiner had spoken about two great spiritual events: the return of Christ in the ethereal world (and not in a physical body), because people must develop their faculties until they can reach the ethereal world; and the incarnation of Ahriman, Zoroaster's "destructive spirit" that will try to block the development of humanity. In an article about feminism published in the French newspaper La Fronde on 26 February 1890, August Vandekerkhove stated: "About March, 21st this year the cycle of Aquarius will start. Aquarius is the house of the woman". He adds that it is in this age that the woman will be "equal" to the man. Gnostic philosopher Samael Aun Weor declared 4 February 1962 to be the beginning of the "age of Aquarius", heralded by the alignment of the first six planets, the Sun, the Moon and the constellation Aquarius. Psychoanalyst Carl Jung mentions the "age of Aquarius" in his book Aion, believing that the "age of Aquarius" will "constellate the problem of the union of the opposites". In accordance with prominent astrologers, Jung believed the "age of Aquarius" will be a dark and spiritually deficient time for humanity, writing that "it will no longer be possible to write off evil as the mere privation of good; its real existence will have to be recognized in the age of Aquarius".
According to Jung's interpretation of astrology, the "age of Pisces" began with the birth and death of Christ, associating the ichthys (colloquially known as the "Jesus fish") with the symbol of Pisces; following the "age of Pisces" would be the "age of Aquarius", the spiritually deficient age before the arrival of the Antichrist. Common cultural associations The expression "age of Aquarius" in popular culture usually refers to the heyday of the hippie and New Age movements in the 1960s and 1970s. The 1967 musical Hair, with its opening song "Aquarius" and the line "This is the dawning of the Age of Aquarius", brought the Aquarian age concept to the attention of audiences worldwide. However, the song further defines this dawning of the age within the first lines: "When the Moon is in the seventh house and Jupiter aligns with Mars, then peace will guide the planets and love will steer the stars". Astrologer Neil Spencer denounced the lyrics as "astrological gibberish", noting that Jupiter aligns with Mars several times a year and the Moon is in the 7th house for two hours every day. The Woodstock music festival was billed as "an Aquarian exposition". See also Footnotes References External links Astrological ages New Age 1960s fads and trends
Age of Aquarius
Physics
1,722
70,957,264
https://en.wikipedia.org/wiki/Motorola%20Edge%2030
Motorola Edge 30 is a series of Android smartphones developed by Motorola Mobility, a subsidiary of Lenovo, launched in 2022. References External links Mobile phones introduced in 2022 Android (operating system) devices Motorola smartphones Mobile phones with multiple rear cameras Mobile phones with 4K video recording
Motorola Edge 30
Technology
59
7,480,035
https://en.wikipedia.org/wiki/Machine%20Age
The Machine Age is an era that includes the early-to-mid 20th century, sometimes also including the late 19th century. An approximate dating would be about 1880 to 1945. Considered to be at its peak in the time between the first and second world wars, the Machine Age overlaps with the late part of the Second Industrial Revolution (which ended around 1914 at the start of World War I) and continues beyond it until 1945 at the end of World War II. The 1940s saw the beginning of the Atomic Age, where modern physics saw new applications such as the atomic bomb, the first computers, and the transistor. The Digital Revolution ended the intellectual model of the machine age founded in the mechanical, heralding a new, more complex model of high technology. The digital era has been called the Second Machine Age, with its increased focus on machines that do mental tasks. Universal chronology Developments Artifacts of the Machine Age include: Reciprocating steam engines replaced by gas turbines, internal combustion engines and electric motors Electrification based on large hydroelectric and thermal electric power production plants and distribution systems Mass production of high-volume goods on moving assembly lines, particularly of the automobile Gigantic production machinery, especially for producing and working metal, such as steel rolling mills, bridge component fabrication, and car body presses Powerful earthmoving equipment Steel-framed buildings of great height (skyscrapers) Radio and phonograph technology High-speed printing presses, enabling the production of low-cost newspapers and mass-market magazines Low-cost appliances for the mass market that employ fractional-horsepower electric motors, such as vacuum cleaners and washing machines Fast and comfortable long-distance travel by railways, cars, and aircraft Development and employment of modern war machines such as tanks, aircraft, submarines and the modern battleship Streamline designs in cars and trains, influenced by aircraft design Social influence The rise of mass-market advertising and consumerism Nationwide branding and distribution of goods, replacing local arts and crafts Nationwide cultural leveling due to exposure to films and network broadcasting Mass-produced government propaganda through print, audio, and motion pictures Replacement of skilled crafts with low-skilled labor Growth of strong corporations through their abilities to exploit economies of scale in materials and equipment acquisition, manufacturing, and distribution Corporate exploitation of labor leading to the creation of strong trade unions as a countervailing force Aristocracy with weighted suffrage or male-only suffrage replaced by democracy with universal suffrage, parallel to one-party states First-wave feminism Increased economic planning, including five-year plans, public works and occasional war economy, including nationwide conscription and rationing Environmental influence Exploitation of natural resources with little concern for the ecological consequences; a continuation of 19th-century practices but at a larger scale. Release of synthetic dyes, artificial flavorings, and toxic materials into the consumption stream without testing for adverse health effects. Rise of petroleum as a strategic resource International relations Conflicts between nations regarding access to energy sources (particularly oil) and material resources (particularly iron and various metals with which it is alloyed) required to ensure national self-sufficiency.
Such conflicts contributed to two devastating world wars. Climax of New Imperialism and beginning of decolonization Arts and architecture The Machine Age is considered to have influenced: Dystopian films including Charlie Chaplin's Modern Times and Fritz Lang's Metropolis Streamline Moderne appliance design and architecture Bauhaus style Modern art Cubism Art Deco decorative style Futurism Music See also Second Industrial Revolution References Historical eras History of technology Second Industrial Revolution 19th century in technology 20th century in technology Machines
Machine Age
Physics,Technology,Engineering
708
3,251,151
https://en.wikipedia.org/wiki/Quilt%20maple
Quilt or quilted maple refers to a type of figure in maple wood. It is seen on the tangential plane (flat-sawn) and looks like a wavy "quilted" pattern, often similar to ripples on water. The highest quality quilted figure is found in the Western Big Leaf species of maple. It is a distortion of the grain pattern itself. Prized for its beauty, it is used frequently in the manufacturing of musical instruments, especially guitars. See also Chatoyancy Flame maple References External links Maple Wood
Quilt maple
Physics
108
13,891,942
https://en.wikipedia.org/wiki/Bloch%20spectrum
The Bloch spectrum is a concept in quantum mechanics in the field of theoretical physics; this concept addresses certain energy spectrum considerations. Let H be the one-dimensional Schrödinger operator H = −d²/dx² + Uα(x), where Uα is a periodic function of period α. The Bloch spectrum of H is defined as the set of values E for which all the solutions of (H − E)φ = 0 are bounded on the whole real axis. The Bloch spectrum consists of the half-line E0 < E from which certain closed intervals [E2j−1, E2j] (j = 1, 2, ...) are omitted. These are the forbidden bands (or gaps), so the intervals (E2j−2, E2j−1) are the allowed bands. References Quantum mechanics
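The definition can be tested numerically via the discriminant of Hill's equation: E belongs to the Bloch spectrum exactly when the trace of the monodromy matrix over one period has absolute value at most 2. A sketch of ours (the Mathieu-type potential, tolerances, and energy grid are illustrative choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    def discriminant(E, U, a):
        # Trace of the monodromy matrix of -y'' + U(x) y = E y over one period a,
        # built from the fundamental solutions y1(0)=1, y1'(0)=0 and
        # y2(0)=0, y2'(0)=1; |trace| <= 2 means all solutions stay bounded.
        def rhs(x, y):
            return [y[1], (U(x) - E) * y[0]]
        y1 = solve_ivp(rhs, (0.0, a), [1.0, 0.0], rtol=1e-9, atol=1e-12).y[:, -1]
        y2 = solve_ivp(rhs, (0.0, a), [0.0, 1.0], rtol=1e-9, atol=1e-12).y[:, -1]
        return y1[0] + y2[1]

    U = lambda x: 2.0 * np.cos(x)          # periodic potential, period 2*pi
    energies = np.linspace(-2.0, 6.0, 400)
    allowed = [E for E in energies if abs(discriminant(E, U, 2 * np.pi)) <= 2.0]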
Bloch spectrum
Physics
159
561,597
https://en.wikipedia.org/wiki/Battlefield%20Vietnam
Battlefield Vietnam is a 2004 first-person shooter video game developed by Digital Illusions Canada and published by Electronic Arts for Microsoft Windows. It is the second installment of the Battlefield franchise, coming after Battlefield 1942. Battlefield Vietnam takes place during the Vietnam War and features a large variety of maps based on historical settings, such as the Ho Chi Minh Trail, Battle of Huế, Ia Drang Valley, Operation Flaming Dart, the Battle of Khe Sanh and Fall of Saigon. On 15 March 2005, EA re-released the game as Battlefield Vietnam: Redux, which includes new vehicles, maps and an EA-produced World War II mod, based on the previous installment Battlefield 1942. Gameplay In the game's playable maps, the player's primary objective is to occupy Control Points to enable allies and controllable vehicles to spawn. Battlefield Vietnam employs similar point-by-point objectives to its prequel, Battlefield 1942, as well as a form of asymmetrical warfare gameplay. The two teams, the U.S. and North Vietnam, are provided different equipment and vehicles. The U.S. relies on heavy vehicles, employing heavy tanks, helicopters, and bombers. The Vietnamese rely on infantry tactics, utilizing anti-tank weapons. The developers intended to reflect the actual conditions of war throughout the game. The game features a "Sipi Hole" as a mobile spawn point, which is representative of the vast tunnel networks utilized by Vietnamese forces. Similar to previous games in the Battlefield series, spawn tickets (reinforcements) play a vital role in defeating the opposing team. Battlefield Vietnam features the United States with the Marines, Army and Navy; South Vietnam with the Army of the Republic of Vietnam; and North Vietnam with the People's Army of Vietnam and the Viet Cong. Built on a modified version of the Battlefield 1942 engine, Battlefield Vietnam has new and improved features compared to its predecessor. The game gives the player a variety of weapons based on the war and features various contemporary weapons and concepts, such as the AK47 assault rifle and punji stick traps. The game introduced several vehicle improvements over the prequel, such as air-lifting vehicles and working vehicle radios. The radios feature 1960s music and an option for the player to import their own audio files into a designated directory. Unlike the prequel, players are able to fire their weapons from vehicles when in the passenger seat of a vehicle. The game is the first in the Battlefield series to utilize a 3D map, allowing players to see icons that represent the position of control points or friendly units, giving the player increased situational awareness. Reception In June 2004, Battlefield Vietnam received a "Gold" certification from the Verband der Unterhaltungssoftware Deutschland, indicating sales of at least 100,000 units across Germany, Switzerland and Austria. Overall sales of Battlefield Vietnam reached 990,000 copies by that November, by which time the Battlefield series had sold 4.4 million copies. The game received "generally favorable reviews" according to the review aggregation website Metacritic. Battlefield Vietnam was a runner-up for Computer Games Magazine's list of the 10 best computer games of 2004. It won the magazine's special award for "Best Soundtrack". It also won GameSpot's 2004 "Best Licensed Music" award.
Notes References External links 2004 video games Asymmetrical multiplayer video games Cold War video games Electronic Arts games First-person shooter multiplayer online games Inactive multiplayer online games Multiplayer and single-player video games Multiplayer online games Video games about the United States Marine Corps Video games developed in Canada Video games set in the 1960s Video games set in the 1970s Video games set in Cambodia Video games set in Vietnam Vietnam War video games Windows games Windows-only games
Battlefield Vietnam
Physics
742
11,512,494
https://en.wikipedia.org/wiki/Kretzschmaria%20deusta
Kretzschmaria deusta, commonly known as brittle cinder, is a fungus and plant pathogen found in temperate regions. Taxonomy The species was originally described as Sphaeria deusta by German naturalist Georg Franz Hoffmann in 1787, and later changed in 1970 by South African mycologist P.M.D. Martin to Kretzschmaria deusta. The epithet deusta was derived from Latin, meaning burnt. Description Kretzschmaria deusta is described as a wavy-edged cushion or crust, ranging in color from grey to white when young, and changing to black and brittle with age. Older fruitbodies look similar to charred wood, probably leading to them being underreported or ignored. Kretzschmaria deusta has a flask-shaped perithecium that contains asci in the fertile surface layer. Asci are typically 300 x 15 μm, with 8 spores per ascus. Smooth conidiospores are also produced via asexual reproduction, typically 7 x 3 μm. New fruiting bodies are formed in the spring and are flat and gray with white edges. The inconspicuous fruiting bodies persist all year and their appearance changes to resemble asphalt or charcoal, consisting of black, domed, lumpy crusts that crumble when pushed with force. The resulting brittle fracture can exhibit a ceramic-like fracture surface. Black zone lines can often be seen in cross-sections of wood infected with K. deusta. It is not edible. Similar species When young, K. deusta can resemble species such as Trichoderma viride. When mature, it can resemble species of Annulohypoxylon, Camarops, Entoleuca, and Daldinia (which has a ringed interior). Habitat and ecology K. deusta is found in temperate regions of the Northern Hemisphere on broad-leaved trees. It is also found in Argentina, South Africa, and Australia. It inhabits living hardwood trees including, but not limited to, European beech (Fagus sylvatica), American beech (Fagus grandifolia), sugar maple (Acer saccharum), red maple (Acer rubrum), Norway maple (Acer platanoides), oaks (Quercus), hackberry (Celtis), linden (Tilia), elm (Ulmus), and other hardwoods. The most probable colonization strategy of K. deusta is heart rot invasion. The initial colonization occurs through injuries to lower stems and/or roots of living trees, or through root contact with infected trees. It causes a soft rot, initially and preferentially degrading cellulose and ultimately breaking down both cellulose and lignin. The fungus continues to decay wood after the host tree has died, making K. deusta a facultative parasite. Treatment Studies show the possibility of a Trichoderma species being used as a biocontrol agent against the fungal pathogen. Otherwise, there is no designated treatment for K. deusta once it has infected its host. Once established, the infection is terminal for the tree. It can result in sudden breakage in otherwise apparently healthy trees, with visually healthy crowns. This can result in hazardous trees in public settings near roadways, trails, or buildings. Therefore, the recommended treatment would be to fell trees in areas that may be hazardous and to avoid using the infected plant material as mulch. References External links Index Fungorum USDA ARS Fungal Database Fungal tree pathogens and diseases Xylariales Fungi described in 1778 Inedible fungi Fungus species
Kretzschmaria deusta
Biology
756
31,638,904
https://en.wikipedia.org/wiki/CSM1
CSM1 (RNA name: Csm1p) is a protein that in Saccharomyces cerevisiae strain S288c is encoded by the CSM1 gene. References Bibliography Saccharomyces cerevisiae genes Proteins
CSM1
Chemistry
55
28,119,956
https://en.wikipedia.org/wiki/Coarse%20function
In mathematics, coarse functions are functions that may appear to be continuous at a distance, but in reality are not necessarily continuous. Although continuous functions are usually observed on a small scale, coarse functions are usually observed on a large scale. See also Coarse structure References Types of functions
Coarse function
Mathematics
56
1,648,576
https://en.wikipedia.org/wiki/NLS%20%28computer%20system%29
NLS (oN-Line System) was a revolutionary computer collaboration system developed in the 1960s. It was designed by Douglas Engelbart and implemented by researchers at the Augmentation Research Center (ARC) at the Stanford Research Institute (SRI). It was the first computer system to employ the practical use of hypertext links, a computer mouse, raster-scan video monitors, information organized by relevance, screen windowing, presentation programs, and other modern computing concepts. It was funded by ARPA (the predecessor to Defense Advanced Research Projects Agency), NASA, and the US Air Force. The NLS was demonstrated in "The Mother of All Demos". Development Douglas Engelbart developed his concepts while supported by the US Air Force from 1959 to 1960 and published a framework in 1962. The strange acronym, NLS (rather than OLS), was an artifact of the evolution of the system. Engelbart's first computers were not able to support more than one user at a time. First was the CDC 160A in 1963, which had very little programming power of its own. As a short-term measure, the team developed a system that allowed off-line users—that is, anyone not sitting at the one available terminal—to edit their documents by punching a string of commands onto paper tape with a Flexowriter. Once the tape was complete, an off-line user would then feed into the computer the paper tape on which the last document draft had been stored, followed by the new commands to be applied, and the computer would print out a new paper tape containing the latest version of the document. Without interactive visualization, this could be awkward, since the user had to mentally simulate the cumulative effects of their commands on the document text. On the other hand, it matched the workflow of the 1960s office, where managers would give marked-up printouts of documents to secretaries. The design continued to support this "off-line" workflow, as well as an interactive "on-line" ability to edit the same documents. To avoid having two identical acronyms (OLTS), the Off-Line Text System was abbreviated FLTS and the On-Line Text System was abbreviated NLTS. As the system evolved to support more than just text, the "T" was dropped, and the interactive version became known as NLS. Robert Taylor, who had a background in psychology, provided support from NASA. When Taylor moved to the Information Processing Techniques Office of the US Defense Department's Advanced Research Projects Agency, he was able to provide additional funding to the project. NLS development moved to a CDC 3100 in 1965. Jeff Rulifson joined SRI in 1966 and became the lead programmer for NLS until leaving the organization in 1973. In 1968, NLS development moved to an SDS 940 computer running the Berkeley Timesharing System. It had an approximately 96 MB storage disk and could support up to 16 workstations, each comprising a raster-scan monitor, a three-button mouse, and an input device known as a chord keyset. Typed text was sent from the keyset to a specific subsystem that relayed the information along a bus to one of two display controllers and display generators. The input text was then sent to a 5-inch (127 mm) cathode-ray tube (CRT), enclosed by a special cover, and a superimposed video image was received by a professional-quality black-and-white TV camera. The information was sent from the TV camera to the closed-circuit camera control and patch panel, and finally displayed on each workstation's video monitor. 
NLS was demonstrated by Engelbart on December 9, 1968, to a large audience at the Fall Joint Computer Conference in San Francisco. This has since been dubbed "The Mother of All Demos", as it not only demonstrated the groundbreaking features of NLS, but also involved the assembly of some remarkable state-of-the-art video technologies. Engelbart's onstage terminal keyboard and mouse were linked by a homemade modem at 2400 baud through a leased line that connected to ARC's SDS 940 computer in Menlo Park, 30 miles southeast of San Francisco. Two microwave links carried video from Menlo Park back to an Eidophor video projector loaned by NASA's Ames Research Center, and, on a 22-foot-high (6.7 m) screen with video insets, the audience could follow Engelbart's actions on his display, observe how he used the mouse, and watch as members of his team in Menlo Park joined in the presentation. One of the most revolutionary features of NLS, "the Journal", was developed in 1970 by Australian computer engineer David A. Evans as part of his doctoral thesis. The Journal was a primitive hypertext-based groupware program, which can be seen as a predecessor (if not the direct ancestor) of all contemporary server software that supports collaborative document creation (like wikis). It was used by ARC members to discuss, debate, and refine concepts in the same way that wikis are being used today. The Journal was used to store documents for the Network Information Center and early network email archives. Most Journal documents have been preserved in paper form and are stored in Stanford University's archives; these provide a valuable record of the evolution of the ARC community from 1970 until the advent of commercialization in 1976. An additional set of Journal documents exists at the Computer History Museum in California, along with a large collection of ARC backup tapes dating from the early 1970s, as well as some of the SDS 940 tapes from the 1960s. The NLS was implemented using several domain-specific languages that were handled using the Tree Meta compiler-compiler system. The eventual implementation language was called L10. In 1970, NLS was ported to the PDP-10 computer (as modified by BBN to run the TENEX operating system). By mid-1971, the TENEX implementation of NLS was put into service as the new Network Information Center, but even this computer could handle only a small number of simultaneous users. Access was possible from either custom-built display workstations, or simple typewriter-like terminals which were less expensive and more common at the time. By 1974, the NIC had spun off to a separate project on its own computer. Firsts All of the features of NLS were in support of Engelbart's goal of augmenting collective knowledge work and therefore focused on making the user more powerful, not simply on making the system easier to use. These features therefore supported a full-interaction paradigm with rich interaction possibilities for a trained user, rather than what Engelbart referred to as the WYSIAYG (What You See Is All You Get) paradigm that came later. 
The computer mouse
2-dimensional display editing
In-file object addressing, linking
Hypermedia
Outline processing
Flexible view control
Multiple windows
Cross-file editing
Integrated hypermedia email
Hypermedia publishing
Document version control
Shared-screen teleconferencing
Computer-aided meetings
Formatting directives
Context-sensitive help
Distributed client-server architecture
Uniform command syntax
Universal "user interface" front-end module
Multi-tool integration
Grammar-driven command language interpreter
Protocols for virtual terminals
Remote procedure call protocols
Compilable "Command Meta Language"
Engelbart said: "Many of those firsts came right out of the staff's innovations — even had to be explained to me before I could understand them. [The staff deserves] more recognition." Decline and succession The downfall of NLS, and subsequently of ARC in general, was the program's steep learning curve. NLS was not designed to be easy to learn; it made heavy use of program modes, relied on a strict hierarchical structure, did not have a point-and-click interface, and forced the user to learn cryptic mnemonic codes to do anything useful with the system. The chord keyset, which complemented the modal nature of NLS, forced the user to learn a 5-bit binary code if they did not want to use the keyboard. Finally, with the arrival of the ARPA Network at SRI in 1969, the time-sharing technology that seemed practical with a small number of users became impractical over a distributed network; time-sharing was rapidly being replaced with individual minicomputers (and later microcomputers) and workstations. Attempts to port NLS to other hardware, such as the PDP-10 and later the DECSYSTEM-20, were successful. It was deployed at other research institutes, such as the USC/Information Sciences Institute (ISI), which manufactured mice and keysets for NLS. NLS was also extended at ISI to use the newly emerging Xerox laser printers. Frustrated by the direction of Engelbart's "bootstrapping" crusade, many top SRI researchers left, with many ending up at the Xerox Palo Alto Research Center, taking the mouse idea with them. SRI sold NLS to Tymshare in 1977, and Tymshare renamed it Augment. Tymshare was, in turn, sold to McDonnell Douglas in 1984. Some of the "full-interaction" paradigm lives on in different systems, including the Hyperwords add-on for Mozilla Firefox. The Hyperwords concept grew out of the Engelbart web-documentary Invisible Revolution. The aim of the project is to allow users to interact with all the words on the Web, not only the links. Hyperwords works through a simple hierarchical menu, but also gives users access to keyboard "phrases" in the spirit of NLS commands, and features Views, which are inspired by the powerful NLS ViewSpecs. The Views allow the user to re-format web pages on the fly. Engelbart was on the Advisory Board of The Hyperwords Company from its inception in 2006 until his death in 2013. From 2005 through 2008, a volunteer group from the Computer History Museum attempted to restore the system. VisiCalc Dan Bricklin, the creator of the first spreadsheet program, VisiCalc, saw Doug Engelbart demonstrate the oN-Line System, a demonstration that was part of Bricklin's inspiration to create VisiCalc.
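The 5-bit chord code mentioned above can be illustrated with a minimal sketch. The mapping used here (each of the five keys contributing one bit, with chord values 1 through 26 standing for the letters a through z) is a commonly described simplification of Engelbart's keyset; the actual NLS code tables, including which finger carried the low bit and how shifts, digits, and punctuation were encoded, were richer, so treat the details below as illustrative assumptions rather than the historical tables.

```python
# Illustrative decoder for a five-key chord keyset.
# Assumption: each key contributes one bit, and chord values 1..26
# map to letters a..z. The bit assignment per finger is a guess.

KEY_BITS = {"thumb": 1, "index": 2, "middle": 4, "ring": 8, "little": 16}

def decode_chord(pressed):
    """Return the character for a set of simultaneously pressed keys."""
    value = sum(KEY_BITS[key] for key in pressed)
    if 1 <= value <= 26:
        return chr(ord("a") + value - 1)
    return None  # remaining chord values would carry shifts and specials

print(decode_chord({"thumb"}))           # a (value 1)
print(decode_chord({"index"}))           # b (value 2)
print(decode_chord({"thumb", "index"}))  # c (value 3)
```

The point of the scheme is that 31 non-empty chords suffice for a full alphabet plus control codes, which is why users had to memorize a binary code to avoid reaching for the keyboard.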
See also File Retrieval and Editing System (FRESS) ENQUIRE Notes References Further reading External links On the Doug Engelbart Institute website, see especially the 1968 Demo resources page for links to the demo and to later panel discussions by participants in the demo; About NLS/Augment; Engelbart's Bibliography, Videography; and the Engelbart Archives Special Collections page. The original 1968 Demo as streaming RealVideo clips A high-resolution version of the 1968 Demo video HyperScope, a browser-based project to recreate and extend NLS/Augment; Douglas Engelbart himself was involved in this project NLS documents at bitsavers.org OpenAugment, another now-defunct NLS/Augment implementation Hypertext History of human–computer interaction SRI International software
NLS (computer system)
Technology
2,286
18,562,558
https://en.wikipedia.org/wiki/Pempidine
Pempidine is a ganglion-blocking drug, first reported in 1958 by two research groups working independently, and introduced as an oral treatment for hypertension. Pharmacology Reports on the "classical" pharmacology of pempidine have been published. The Spinks group, at ICI, compared pempidine, its N-ethyl analogue, and mecamylamine in considerable detail, with additional data on several structurally simpler compounds. Toxicology LD50 values for the HCl salt of pempidine in mice: 74 mg/kg (intravenous); 125 mg/kg (intraperitoneal); 413 mg/kg (oral). Chemistry Pempidine is an aliphatic, sterically hindered, cyclic, tertiary amine, and a weak base: its conjugate acid (the protonated form) has a pKa of 11.25. Pempidine is a liquid with a boiling point of 187–188 °C and a density of 0.858 g/cm3. Two early syntheses of this compound are those of Leonard and Hauck, and of Hall. These are very similar in principle: Leonard and Hauck reacted phorone with ammonia to produce 2,2,6,6-tetramethyl-4-piperidone, which was then reduced by means of the Wolff–Kishner reduction to 2,2,6,6-tetramethylpiperidine. This secondary amine was then N-methylated using methyl iodide and potassium carbonate. Hall's method involved reacting acetone with ammonia in the presence of calcium chloride to give 2,2,6,6-tetramethyl-4-piperidone, which was then reduced under Wolff–Kishner conditions, followed by N-methylation of the resulting 2,2,6,6-tetramethylpiperidine with methyl p-toluenesulfonate. References External links Nicotinic antagonists Piperidines Reagents for organic chemistry
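As a rough illustration of what a conjugate-acid pKa of 11.25 implies (the pH of 7.4 used here is an assumed physiological value, not a figure from the source), the Henderson–Hasselbalch relation gives the ratio of protonated to free base:

\[
\frac{[\mathrm{BH^{+}}]}{[\mathrm{B}]} = 10^{\,\mathrm{p}K_\mathrm{a} - \mathrm{pH}} = 10^{\,11.25 - 7.4} \approx 7 \times 10^{3}
\]

so at pH 7.4 well over 99.9% of pempidine would exist as its ammonium cation.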
Pempidine
Chemistry
427
19,762,732
https://en.wikipedia.org/wiki/Suzuki%20frame
The Suzuki frame is a medical device used to help heal broken fingers, especially those with deep, complex intra-articular fractures. Rubber bands generate traction between two metal Kirschner wires inserted into the bone on either side of a fracture. The device was named after its inventor, Yasushi Suzuki, who first described it in 1994. Footnotes Medical equipment
Suzuki frame
Biology
80
73,423,703
https://en.wikipedia.org/wiki/Hericium%20flagellum
Hericium flagellum is a species of fungus in the family Hericiaceae native to Europe, first described by Giovanni Antonio Scopoli and placed in its current genus by Christiaan Hendrik Persoon in 1797. It was confirmed—using sexual incompatibility studies—to be a distinct species from H. coralloides in 1983. It is found in montane areas, typically on newly fallen trunks and stumps of fir (Abies species), especially silver fir; one study found over half of recorded specimens growing on silver fir deadwood in areas of high conservation value. Spores are 5–6.5 by 4.5–5.5 μm. References External links Russulales Fungi of Europe Fungi described in 1772 Taxa named by Giovanni Antonio Scopoli Fungus species
Hericium flagellum
Biology
161
33,998,577
https://en.wikipedia.org/wiki/Pentaamine%28dinitrogen%29ruthenium%28II%29%20chloride
Pentaamine(dinitrogen)ruthenium(II) chloride is an inorganic compound with the formula [Ru(NH3)5(N2)]Cl2. It is a nearly white solid, but its solutions are yellow. The cationic complex is of historic significance as the first compound with N2 bound to a metal center. [Ru(NH3)5(N2)]2+ adopts an octahedral structure with C4v symmetry. Preparation and properties Pentaamine(dinitrogen)ruthenium(II) chloride is synthesized in aqueous solution from pentaamminechlororuthenium(III) chloride, sodium azide, and methanesulfonic acid: [Ru(NH3)5Cl]Cl2 + NaN3 → [Ru(NH3)5N2]Cl2 + ... If it is to be used in situ, the cation can be made more conveniently from ruthenium(III) chloride and hydrazine hydrate: RuCl3 + 4 N2H4 → [Ru(NH3)5N2]2+ + ... This N2 complex is stable in aqueous solution and has a relatively low ligand exchange rate with water. Because Ru(II) is a d6 ion, the Ru–N2 bond is stabilized by π backbonding, the donation of metal d-electrons into the N2 π* orbitals. The related metal ammine complex [Os(NH3)5(N2)]2+ is also known. Reactions The dinitrogen ligand is not reduced by aqueous sodium borohydride. Nearly all known reactions of this compound are displacement reactions. Pentaamine(halogen)ruthenium(II) halides can be synthesized by treating [Ru(NH3)5N2]2+ with halide sources: [Ru(NH3)5N2]2+ + X− → [Ru(NH3)5X]+ + N2 [Ru(NH3)5N2]2+ also forms the symmetrically bridged dinitrogen complex [(NH3)5Ru-NN-Ru(NH3)5]4+. References Ruthenium complexes Chlorides Coordination complexes Ammine complexes Ruthenium(II) compounds Nitrogen compounds
Pentaamine(dinitrogen)ruthenium(II) chloride
Chemistry
485
1,217,104
https://en.wikipedia.org/wiki/Stokes%20shift
Stokes shift is the difference (in energy, wavenumber or frequency units) between the positions of the band maxima of the absorption and emission spectra (fluorescence and Raman being two examples) of the same electronic transition. It is named after Irish physicist George Gabriel Stokes. When a system (be it a molecule or atom) absorbs a photon, it gains energy and enters an excited state. The system can relax by emitting a photon. The Stokes shift occurs when the emitted photon has lower energy than the absorbed photon; the shift is the difference in energy between the two photons. The Stokes shift is primarily the result of two phenomena: vibrational relaxation or dissipation, and solvent reorganization. A fluorophore is a part of a molecule with a dipole moment that exhibits fluorescence. When a fluorophore enters an excited state, its dipole moment changes, but the surrounding solvent molecules cannot adjust so quickly; only after vibrational relaxation do their dipole moments realign. Stokes shifts are often given in wavelength units, but this is less meaningful than energy, wavenumber or frequency units, because a fixed wavelength difference does not correspond to a fixed energy difference: it depends on the absorption wavelength. For instance, a 50 nm Stokes shift from absorption at 300 nm is larger in terms of energy than a 50 nm Stokes shift from absorption at 600 nm. Stokes fluorescence Stokes fluorescence is the emission of a longer-wavelength photon (lower frequency or energy) by a molecule that has absorbed a photon of shorter wavelength (higher frequency or energy). Both absorption and radiation (emission) of energy are distinctive for a particular molecular structure. If a material has a direct bandgap in the range of visible light, the light shining on it is absorbed, which excites electrons to a higher-energy state. The electrons remain in the excited state for about 10⁻⁸ seconds. This number varies over several orders of magnitude, depending on the sample, and is known as the fluorescence lifetime of the sample. After losing a small amount of energy through vibrational relaxation, the molecule returns to the ground state, and energy is emitted. Anti-Stokes shift If the emitted photon has more energy than the absorbed photon, the energy difference is called an anti-Stokes shift; this extra energy comes from the absorption of thermal phonons in a crystal lattice, cooling the crystal in the process. Anti-Stokes shifts may also be due to triplet-triplet annihilation processes, resulting in the formation of higher singlet states that emit at higher energies. Applications of Stokes and anti-Stokes shifts Raman spectroscopy In Raman spectroscopy, when a molecule is excited by incident radiation, it undergoes a Stokes shift as it emits radiation at a lower energy than the incident radiation. Analyzing the intensity and frequency of the spectral shift provides valuable information about the vibrational modes of molecules, enabling the identification of chemical bonds, functional groups, and molecular conformations. Yttrium oxysulfide Yttrium oxysulfide (Y2O2S) doped with gadolinium oxysulfide (Gd2O2S) is a common industrial anti-Stokes pigment, absorbing in the near-infrared and emitting in the visible region of the spectrum. This composite material is often utilized in luminescent applications, where it absorbs lower-energy photons and emits higher-energy photons. This unique property makes it particularly valuable in various technological fields, including security printing, anti-counterfeiting measures, and luminescent displays.
By harnessing anti-Stokes fluorescence, this pigment enables the creation of vibrant and durable inks, coatings, and materials with enhanced visibility and authentication capabilities. Photon upconversion Photon upconversion is an anti-Stokes process in which lower-energy photons are converted into higher-energy photons. An example of this process is demonstrated by upconverting nanoparticles. Anti-Stokes emission is more commonly observed in Raman spectroscopy, where it can be used to determine the temperature of a material. Optoelectronic devices In direct-bandgap thin-film semiconducting layers, Stokes-shifted emission can originate from three main sources: doping, strain, and disorder. Each of these factors can introduce variations in the energy levels of the semiconductor material, leading to a shift in the emitted light towards longer wavelengths compared to the incident light. This phenomenon is particularly relevant in optoelectronic devices, where controlling these factors can be crucial for optimizing device performance. See also Jablonski diagram Kasha's rule References Fluorescence Raman spectroscopy
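The point about wavelength units above can be made concrete with a short calculation (a minimal sketch; the 300/350 nm and 600/650 nm pairs are the illustrative values from the text, not measured spectra):

```python
# Convert a Stokes shift given as wavelengths into an energy difference,
# using Delta E = hc * (1/lambda_abs - 1/lambda_em).

HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def stokes_shift_ev(lambda_abs_nm: float, lambda_em_nm: float) -> float:
    """Energy difference (eV) between absorbed and emitted photons."""
    return HC_EV_NM * (1.0 / lambda_abs_nm - 1.0 / lambda_em_nm)

# The same 50 nm shift carries very different energy in different bands:
print(round(stokes_shift_ev(300, 350), 3))  # ~0.590 eV
print(round(stokes_shift_ev(600, 650), 3))  # ~0.159 eV
```

The 50 nm shift near 300 nm is nearly four times larger in energy than the same nominal shift near 600 nm, which is why energy, wavenumber, or frequency units are preferred.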
Stokes shift
Chemistry
925
3,515,026
https://en.wikipedia.org/wiki/Aluminium%20gallium%20phosphide
Aluminium gallium phosphide (AlGaP), a phosphide of aluminium and gallium, is a semiconductor material. It is an alloy of aluminium phosphide and gallium phosphide. It is used to manufacture light-emitting diodes emitting green light. See also Aluminium gallium indium phosphide External links Light-Emitting Diode - An Introduction, Structure, and Applications of LEDs Aluminium compounds Gallium compounds Phosphides III-V semiconductors III-V compounds Zincblende crystal structure
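As a rough sanity check of the green emission (the transition energy of about 2.3 eV is an assumed illustrative value, not a figure from the source; the actual energy depends on the Al/Ga ratio), the emitted wavelength follows from the photon energy relation:

\[
\lambda = \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\,nm}}{2.3\ \mathrm{eV}} \approx 540\ \mathrm{nm}
\]

which falls in the green region of the visible spectrum (roughly 495–570 nm).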
Aluminium gallium phosphide
Physics,Chemistry,Materials_science
114
25,937,142
https://en.wikipedia.org/wiki/Project%20agreement%20%28Canada%29
A project agreement is an agreement between the owner/developer of a construction project and the construction trade unions that will perform that construction work. A project agreement modifies the terms of otherwise applicable construction collective agreements for the purposes of a specific construction project or a defined set of construction projects. Without exception, project agreements provide that there will be no strikes or lockouts on the covered construction project or projects, thereby removing a significant source of risk to the owner/developers of these projects. Project agreements typically replicate the principal economic terms of the otherwise applicable construction collective agreements, although there may be specific modifications to those terms. Labour relations statutes in most Canadian jurisdictions contain provisions that specifically allow for the negotiation of project agreements. This is in contrast with the United States (see Project Labor Agreements), where there are no specific provisions pertaining to project labor agreements in the National Labor Relations Act. In Ontario, the Conservative government amended the Labour Relations Act (Bill 139) to facilitate the adoption of project agreements that cover multiple projects, as well as projects initiated after the commencement of a project agreement. The Canadian statutory tradition of supporting and facilitating project agreements has led to their adoption in a wide range of circumstances in both the public and private sectors. Major construction projects completed under the terms of project agreements include various private sector industrial projects (e.g., the Hudson Bay Mining Improvement Project in Flin Flon, the Tembec Paper Mill Expansion in Pine Falls, and the Co-op Oil Refinery in Regina) and major public sector projects (Highway 407 construction in Ontario, the Confederation Bridge project in Prince Edward Island, and multiple projects undertaken by various provincial hydro-electric authorities). Had the City of Toronto won its bid to host the Olympics, construction related to the Olympics would have been carried out under the terms of a project agreement. Governments, in their capacity as owners/developers of construction projects, have used project agreements to secure training and employment opportunities for groups that might otherwise not have access to skilled construction work. For example, the project agreement governing the construction of the Vancouver Island Highway provided for explicit employment equity hiring focused on women and members of First Nations. References Construction industry of Canada Construction documents Building engineering
Project agreement (Canada)
Engineering
435
4,748,365
https://en.wikipedia.org/wiki/Institute%20of%20Petroleum
The Institute of Petroleum (IP) was a UK-based professional organisation founded in 1913 as the Institute of Petroleum Technologists. It changed its name to the Institute of Petroleum in 1938. The institute became defunct when it merged with the Institute of Energy in 2003 to form the Energy Institute. Background The Institute of Petroleum Technologists was established in 1913 by the consulting chemist and engineer Sir Thomas Boverton Redwood (1846–1919) and Arthur Eastlake. At the institute's inaugural meeting in 1914 Sir Thomas stated that the aim of the institute was to determine a "hallmark of proficiency in connection with our profession". He emphasised the need to amalgamate the diverse knowledge and interests of the various branches of the oil industry. In 1938 the institute changed its name to the Institute of Petroleum and membership was opened to all professions associated with the oil and gas industries. Operation The Institute of Petroleum had similar goals to the Energy Institute but was specifically focused on the oil and gas industry, whereas the Energy Institute also covers other forms of energy including nuclear and alternative energies. The IP designation still survives, for example in the specification of test methods in the petroleum industry. The Energy Institute still runs an "International Petroleum (IP) Week", a series of events and seminars aimed at the petroleum industry. The institute's crest was an Archaeopteryx with the Latin motto conjunctione potiores (translated as 'preferential coupling'). Publications The institute published a monthly magazine, Petroleum Review, which the Energy Institute continues to publish. Scholarly articles were published in the Journal of the Institute of Petroleum from 1939, previously the Journal of the Institute of Petroleum Technologists (Volumes 1 to 24; 1914–1938). The institute also published an extensive range of internationally recognised codes of practice, guidance and petroleum test procedures. The following lists are a sample of the published material. Codes of safe practice Model codes of safe practice (MCSP) included:
MCSP Part 1: The selection, installation, inspection and maintenance of electrical and non-electrical apparatus in hazardous areas. MCSP 1 Electrical safety code, 7th edition (2003)
MCSP Part 2: Design, construction and operation of distribution installations (1998)
MCSP Part 6: Pipeline safety code
MCSP Part 9: Liquefied petroleum gas. Volume 1: Large bulk pressure storage and refrigerated LPG (1987)
MCSP Part 11: Bitumen safety code, 3rd edition (1990)
MCSP Part 15: Area classification for installations handling flammable fluids
MCSP Part 16: Guidance on tank cleaning
MCSP Part 19: Fire precautions at petroleum refineries and bulk storage installations
MCSP Part 21: Guidelines for the control of hazards arising from static electricity, 2nd edition (2002)
Code of safe practice for contractors working on petrol filling stations (1997)
Code of safe practice for retailers managing contractors working on petrol filling stations (1999)
General
Air quality and its association with human health effects (2001)
Electrical installation of facilities for the storage and dispensing of LPG and CNG automotive fuels at vehicle refuelling stations (2003)
Guidelines
Guidance document on risk assessment for the water environment at operational fuel storage and dispensing facilities (1999)
Guidance on external cathodic protection of underground steel storage tanks and steel pipework at petrol filling stations (2002)
Guidelines for investigation and remediation of petroleum retail sites (1998)
Guidelines for soil, groundwater and surface water protection and vapour emission control at petrol filling stations (2003)
Test methods This list is a sample of the test methods available. Note that the IP designation still exists in the specification of these test methods.
IP 2: Petroleum products and hydrocarbon solvents - Determination of aniline point and mixed aniline point
IP 4: Petroleum products - Determination of ash (ISO 6245:2001)
IP 10: Determination of kerosine burning characteristics - 24 hour method
IP 12: Determination of specific energy
IP 13: Petroleum products - Determination of carbon residue - Conradson method
IP 14: Petroleum products - Determination of carbon residue - Ramsbottom method
IP 16: Determination of the freezing point of aviation fuels — Manual method
IP 17: Determination of colour — Lovibond® tintometer® method
IP 30: Detection of mercaptans, hydrogen sulfide, elemental sulfur and peroxides - Doctor test method
IP 34: Determination of flash point — Pensky-Martens closed cup method
IP 36: Determination of flash and fire points - Cleveland open cup method
IP 334: Determination of load carrying capacity of lubricants - FZG gear machine method
IP 628: Determination of the Solvent Yellow 124 content of kerosine and gas oil – HPLC method
See also American Petroleum Institute Oil and gas industry in the United Kingdom Oil terminals in the United Kingdom Petroleum refining in the United Kingdom References Defunct professional associations based in the United Kingdom History of the petroleum industry in the United Kingdom Organisations based in the City of Westminster Petroleum organizations
Institute of Petroleum
Chemistry,Engineering
999
302,600
https://en.wikipedia.org/wiki/Gyromitra%20esculenta
Gyromitra esculenta is an ascomycete fungus from the genus Gyromitra, widely distributed across Europe and North America. It normally fruits in sandy soils under coniferous trees in spring and early summer. The fruiting body, or mushroom, is an irregular brain-shaped cap dark brown in colour that can reach high and wide, perched on a stout white stipe up to high. Although potentially fatal if eaten raw (causing restrictions on its sales in some areas), G. esculenta is still commonly parboiled for consumption, being a popular delicacy in Europe and the upper Great Lakes region of North America; evidence suggests that thorough cooking does not eliminate all traces of mycotoxins. When consumed, the principal active mycotoxin, gyromitrin, is hydrolyzed into the toxic compound monomethylhydrazine, which affects the liver, central nervous system, and sometimes the kidneys. Symptoms of poisoning involve vomiting and diarrhea several hours after consumption, followed by dizziness, lethargy and headache. Severe cases may lead to delirium, coma, and death after five to seven days. Taxonomy The fungus was first described in 1800, by mycologist Christiaan Hendrik Persoon, as Helvella esculenta, and gained its current accepted binomial name when the Swedish mycologist Elias Magnus Fries placed it in the genus Gyromitra in 1849. The genus name is derived from the Greek terms gyros/γυρος "round" and mitra/μιτρα "headband". Its specific epithet is derived from the Latin esculentus, "edible". It is known by a variety of common descriptive names such as "brain mushroom", "turban fungus", elephant ears, or "beefsteak mushroom/morel", although beefsteak mushroom can also refer to the choice edible basidiomycete Fistulina hepatica. The German common name dates from the 19th century and derives from an older 18th-century Low German form, aligning with the similar-sounding (and similar-looking) German name for the true morel. Gyromitra esculenta is a member of a group of fungi known as "false morels", so named for their resemblance to the highly regarded true morels of the genus Morchella. The grouping includes other species of the genus Gyromitra, such as G. infula (elfin saddle), G. caroliniana and G. gigas (snow morel). While some of these species contain little to no gyromitrin, many guidebooks recommend treating them all as poisonous, since their similar appearance and significant intraspecific variation can make reliable identification difficult. The toxic qualities of G. esculenta may be reduced by cooking, but possibly not enough to prevent poisoning from repeated consumption. The more distantly related ascomycete mushrooms of the genus Verpa, such as V. bohemica and V. conica, are also known as false morels, early morels or thimble morels; like the Gyromitra, they are eaten by some and considered poisonous by others. The genus Gyromitra had been classically considered part of the family Helvellaceae, along with the similar-looking elfin saddles of the genus Helvella. Analysis of the ribosomal DNA of many of the Pezizales showed G. esculenta and the other false morels to be only distantly related to the other members of the Helvellaceae and instead most closely related to the genus Discina, forming a clade which also contains Pseudorhizina and Hydnotrya. Thus the four genera are now included in the family Discinaceae. Description Resembling a brain, the irregularly shaped cap may be up to high and wide. Initially smooth, it becomes progressively more wrinkled as it grows and ages.
The cap colour may be various shades of reddish-, chestnut-, purplish-, bay-, dark or sometimes golden-brown; it darkens to black in age. Specimens from California may have more reddish-brown caps. Attached to the cap at several points, the stipe is high and wide. G. esculenta has been reported to have a solid stipe whereas those of true morels (Morchella spp.) are hollow, although a modern source says it is hollow as well. The smell can be pleasant and has been described as fruity, and the fungus is mild-tasting. The spore print is whitish, with transparent spores that are elliptical and 17–22 μm in length. Similar species G. esculenta resembles the various species of true morel, although the latter are more symmetric and look more like pitted gray, tan, or brown sponges. Its cap is generally darker and larger. G. gigas, G. infula and G. ambigua in particular are similar, the latter two being toxic to humans. Distribution and habitat G. esculenta grows on sandy soil in temperate coniferous forest and occasionally in deciduous woodlands. Among conifers it is mostly found under pines (Pinus spp.), but also sometimes under aspen (Populus spp.). The hunting period is from April to July, earlier than for other species, and the fungus may even sprout up with the melting snow. It can be abundant in some years and rare in others. The mushroom is more commonly found in places where the ground has been disturbed, such as openings, rivulets, washes, timber clearings, plowed openings, forest fire clearings, and roadsides. Enthusiasts in Finland have been reported to bury newspapers inoculated with the fungus in the ground in autumn and return the following spring to collect mushrooms. Although more abundant in montane and northern coniferous woodlands such as the Sierra Nevada and the Cascade Range in northwestern North America, Gyromitra esculenta is found widely across the continent, as far south as Mexico. It is also common in Central Europe, less abundant in the east, and more common in montane areas than in lowlands. It has been recorded from Northern Ireland, from Uşak Province in Western Turkey, and from the vicinity of Kaş in the Antalya Province of Turkey's southern coast. Toxicity Toxic reactions have been known for at least a hundred years. Experts long speculated that the reaction was an allergic one specific to the consumer, or the result of misidentification, rather than innate toxicity of the fungus, because of the wide range of effects seen. Some would suffer severely or perish while others exhibited no symptoms after eating similar amounts of mushrooms from the same dish. Yet others were not poisoned after eating G. esculenta for many years. However, the fungus is now widely recognized as potentially deadly. Gyromitra esculenta contains levels of the poison gyromitrin that vary locally among populations; although these mushrooms are only rarely involved in poisonings in either North America or western Europe, intoxications are seen frequently in eastern Europe and Scandinavia. A 1971 Polish study reported that the species accounted for up to 23% of mushroom fatalities each year. Death rates have dropped since the mid-twentieth century; in Sweden poisoning is common, though life-threatening poisonings have not been detected and no fatality was reported over the 50 years from 1952 to 2002. Gyromitra poisoning is rare in Spain, due to the widespread practice of drying the mushrooms before preparation and consumption, but poisoning there has a mortality rate of about 25%.
A lethal dose of gyromitrin has been estimated to be 10–30 mg/kg for children and 20–50 mg/kg in adults. These doses correspond to around and of fresh mushroom respectively. Evidence suggests that children are more severely affected; it is unclear whether this is due to a larger amount consumed relative to body mass or to differences in enzyme and metabolic activity. Geographical variation Populations of G. esculenta appear to vary geographically in their toxicity. A French study has shown that mushrooms collected at higher altitudes have lower concentrations of toxin than those from lower elevations, and there is some evidence that fungi west of the Rocky Mountains in North America contain less toxin than those to the east. However, poisonings in the USA have been reported, although less frequently than in Europe. Biochemistry The identity of the toxic constituents eluded researchers until 1968, when acetaldehyde N-methyl-N-formylhydrazone, better known as gyromitrin, was isolated. Gyromitrin is a volatile, water-soluble hydrazine compound hydrolyzed in the body into N-methyl-N-formylhydrazine (MFH) and then monomethylhydrazine (MMH). Other N-methyl-N-formylhydrazone derivatives have been isolated in subsequent research, although they are present in smaller amounts. These other compounds would also produce monomethylhydrazine when hydrolyzed, although it remains unclear how much each contributes to the false morel's toxicity. The toxins react with pyridoxal-5-phosphate—the activated form of pyridoxine (vitamin B6)—and form a hydrazone. This reduces production of the neurotransmitter GABA via decreased activity of glutamic acid decarboxylase, producing the neurological symptoms. MMH also causes oxidative stress leading to methemoglobinemia. Inhibition of diamine oxidase (histaminase) elevates histamine levels, resulting in headaches, nausea, vomiting, and abdominal pain. MFH, as a mushroom component and an intermediary product of gyromitrin hydrolysis, has toxicities of its own. MFH undergoes cytochrome P450-regulated oxidative metabolism which, via reactive nitrosamide intermediates, leads to the formation of methyl radicals that cause liver necrosis. Symptoms The symptoms of poisoning are typically gastrointestinal and neurological. Symptoms occur within 6–12 hours of consumption, although cases of more severe poisoning may present sooner—as little as 2 hours after ingestion. Initial symptoms are gastrointestinal, with sudden onset of nausea, vomiting, and watery diarrhea which may be bloodstained. Dehydration may develop if the vomiting or diarrhea is severe. Dizziness, lethargy, vertigo, tremor, ataxia, nystagmus, and headaches develop soon after; fever often occurs, a distinctive feature which does not develop after poisoning by other types of mushrooms. In most cases of poisoning, symptoms do not progress from these initial symptoms, and patients recover after 2–6 days of illness. In some cases there may be an asymptomatic phase following the initial symptoms, which is then followed by more significant toxicity including kidney damage, liver damage, and neurological dysfunction including seizures and coma. These signs usually develop within 1–3 days in serious cases. The patient develops jaundice and the liver and spleen become enlarged; in some cases blood sugar levels will rise (hyperglycemia) and then fall (hypoglycemia), and liver toxicity is seen.
Additionally, intravascular hemolysis causes destruction of red blood cells, resulting in an increase in free hemoglobin and hemoglobinuria, which can lead to renal toxicity or kidney failure. Methemoglobinemia may also occur in some cases. In this condition, higher-than-normal levels of methemoglobin, a form of hemoglobin that cannot carry oxygen, are found in the blood. It causes the patient to become short of breath and cyanotic. Cases of severe poisoning may progress to a terminal neurological phase, with delirium, muscle fasciculations and seizures, and mydriasis progressing to coma, circulatory collapse, and respiratory arrest. Death may occur from five to seven days after consumption. Treatment Treatment is mainly supportive; gastric decontamination with activated charcoal may be beneficial if medical attention is sought within a few hours of consumption. However, symptoms often take longer than this to develop, and patients do not usually present for treatment until many hours after ingestion, thus limiting its effectiveness. Patients with severe vomiting or diarrhea can be rehydrated with intravenous fluids. Monitoring of biochemical parameters such as methemoglobin levels, electrolytes, liver and kidney function, urinalysis, and complete blood count is undertaken and any abnormalities are corrected. Dialysis can be used if kidney function is impaired or the kidneys are failing. Hemolysis may require a blood transfusion to replace the lost red blood cells, while methemoglobinemia is treated with intravenous methylene blue. Pyridoxine, also known as vitamin B6, can be used to counteract the inhibition by MMH of the pyridoxine-dependent step in the synthesis of the neurotransmitter GABA. Thus GABA synthesis can continue and symptoms are relieved. Pyridoxine, which is only useful for the neurological symptoms and does not decrease hepatic toxicity, is given at a dose of 25 mg/kg; this can be repeated up to a maximum total of 15 to 30 g daily if symptoms do not improve. Benzodiazepines are given to control seizures; as they also modulate GABA receptors, they may potentially increase the effect of pyridoxine. MMH also inhibits the chemical transformation of folic acid into its active form, folinic acid; this can be treated with folinic acid given at 20–200 mg daily. Long-term effects ALS In 2018, Lagrange et al. presented a link between life-long foraging for G. esculenta and amyotrophic lateral sclerosis (ALS) in French Alps populations. Similar ALS clusters possibly related to mushrooms are found near the Aosta Valley (Italy), in Sardinia, and in Michigan. Carcinogenicity Monomethylhydrazine, gyromitrin, raw Gyromitra esculenta, and N-methyl-N-formylhydrazine have been shown to be carcinogenic in experimental animals. Although Gyromitra esculenta has not been observed to cause cancer in humans, it is possible there is a carcinogenic risk for people who ingest these types of mushrooms. Even small amounts may have a carcinogenic effect. At least 11 different hydrazones have been isolated from G. esculenta, and it is not known if all potential carcinogens can be completely removed by parboiling. Consumption Despite its recognized toxicity, Gyromitra esculenta is marketed and consumed in several countries or states in Europe and North America. It was previously consumed in Germany, with fungi picked in and exported from Poland; more recently, however, Germany and Switzerland have discouraged consumption by prohibiting its sale.
Similarly, in Sweden the Swedish National Food Administration warns that it is not fit for human consumption, and restricts purchase of fresh mushrooms to restaurants alone. The mushroom is still highly regarded and consumed in Bulgaria, being sold in markets and picked for export there. In some countries such as Spain, especially in the eastern Pyrenees, they are traditionally considered a delicacy, and many people report consuming them for many years with no ill effects. Despite this, the false morel is listed as hazardous in official mushroom lists published by the Catalan Government, and sale to the public is prohibited throughout Spain. Outside of Europe, G. esculenta is consumed in the Great Lakes region and some western states in the United States. Selling and purchasing fresh false morels is legal in Finland, where it is highly regarded. However, the mushrooms are required by law to be accompanied by a warning that they are poisonous and by legally prescribed preparation instructions. False morels are also sold prepared and canned, in which case they are ready to be used. Official figures from the Finnish Ministry of Agriculture and Forestry report that 21.9 tonnes of false morels were sold in Finland in 2006 and 32.7 tonnes, noted as above average, in 2007. In 2002, the Finnish Food Safety Authority estimated annual consumption of false morels to be hundreds of tonnes in plentiful years. In Finnish cuisine, false morels may be cooked in an omelette, or gently sautéed in butter in a saucepan, with flour and milk added to make a béchamel sauce or a pie filling. Alternatively, more fluid can be added for a false morel soup. Typical condiments added for flavour include parsley, chives, dill and black pepper. While cooking the fungus removes most of the toxins, the cook can become poisoned by the hydrazine fumes given off during cooking. Controversies In 2015, Swedish chef Paul Svensson caused a controversy when he prepared a dish with Gyromitra esculenta in a TV show. Mushroom expert Monica Svensson criticized him for including it, because monomethylhydrazine is a known carcinogen and there is a risk that inexperienced people might misinterpret the recipe and omit the steps that reduce the toxicity level. She also criticized Per Morberg for similar reasons. Paul Svensson said that he was not aware of the carcinogenic effects, apologized afterwards, and promised to remove Gyromitra from his dishes. Preparation Most of the gyromitrin must be removed to render false morels edible. The recommended procedure involves either first drying and then boiling the mushrooms, or boiling the fresh mushrooms directly. To prepare fresh mushrooms, it is recommended that they be cut into small pieces and parboiled twice in copious amounts of water, at least three parts water to one part chopped mushrooms, for at least five minutes; after each boiling the mushrooms should be rinsed thoroughly in clean water. Each round of parboiling reduces the free gyromitrin content to a tenth. Significant amounts of gyromitrin are retained in the internal structure of the mushroom even after boiling. After 3 rounds of boiling for 5 minutes and discarding the water, the gyromitrin content is reduced to 6–15% of the original. After 5 rounds, this content is reduced to 7%. The gyromitrin is leached into the water, where it remains; therefore, the parboiling water must be discarded and replaced with fresh water after each round of boiling.
However, it is still recommended that the mushrooms be boiled even after drying. MMH boils at a lower temperature than water and thus readily vaporizes into the air when water containing fresh false morels is boiled. If boiling the mushrooms indoors, care should be taken to ensure adequate ventilation; if symptoms of monomethylhydrazine poisoning appear, all windows should be opened immediately and fresh air sought outside. Even after boiling, small amounts of gyromitrin and other hydrazine derivatives remain in the mushrooms. Given the possibility of accumulation of toxins, repeated consumption is not recommended. Prospects for cultivation Strains with much lower concentrations of gyromitrin have been discovered, and the fungus has been successfully grown to fruiting in culture. Thus there is scope for future research into the cultivation of safer strains. See also List of deadly fungi References General Specific External links "Gyromitra esculenta, one of the false morels" California Fungi—Gyromitra esculenta Official Finnish instructions for the processing of false morels Discinaceae Poisonous fungi Fungi of North America Fungi of Europe Fungi described in 1800 Taxa named by Christiaan Hendrik Persoon Fungus species
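The parboiling figures above can be tied together with a small toy model (a sketch under stated assumptions: an extractable "free" pool that each round of boiling cuts to one tenth, plus a "bound" pool that boiling barely touches; the split of about 6% bound is a guess chosen to match the quoted residues, not a value from the source):

```python
# Toy model of gyromitrin removal by repeated parboiling.
# Assumptions (not from the source): 94% of the toxin is freely
# extractable and drops to one tenth per round; 6% is bound in the
# mushroom's internal structure and resists extraction.

def residual_fraction(rounds: int, bound: float = 0.06) -> float:
    free = 1.0 - bound
    return bound + free * (0.1 ** rounds)

for n in (1, 3, 5):
    print(n, f"{residual_fraction(n):.1%}")
# 1 15.4%
# 3 6.1%
# 5 6.0%
```

Such a two-pool picture reproduces the plateau the text describes (6–15% after three rounds, about 7% after five): once the free pool is gone, further boiling removes almost nothing, which is why repeated consumption remains inadvisable.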
Gyromitra esculenta
Biology,Environmental_science
4,060
706,884
https://en.wikipedia.org/wiki/Waterspout
A waterspout is a rotating column of air that occurs over a body of water, usually appearing as a funnel-shaped cloud in contact with the water and a cumuliform cloud. There are two types of waterspout, each formed by distinct mechanisms. The most common type is a weak vortex known as a "fair weather" or "non-tornadic" waterspout. The other less common type is simply a classic tornado occurring over water rather than land, known as a "tornadic", "supercellular", or "mesocyclonic" waterspout, and accurately a "tornado over water". A fair weather waterspout has a five-part life cycle: formation of a dark spot on the water surface; spiral pattern on the water surface; formation of a spray ring; development of a visible condensation funnel; and ultimately, decay. Most waterspouts do not suck up water. While waterspouts form mostly in tropical and subtropical areas, they are also reported in Europe, Western Asia (the Middle East), Australia, New Zealand, the Great Lakes, Antarctica, and on rare occasions, the Great Salt Lake. Some are also found on the East Coast of the United States, and the coast of California. Although rare, waterspouts have been observed in connection with lake-effect snow precipitation bands. Characteristics Climatology Though the majority of waterspouts occur in the tropics, they can seasonally appear in temperate areas throughout the world, and are common across the western coast of Europe as well as the British Isles and several areas of the Mediterranean and Baltic Sea. They are not restricted to saltwater; many have been reported on lakes and rivers including the Great Lakes and the St. Lawrence River. They are fairly common on the Great Lakes during late summer and early fall, with a record 66+ waterspouts reported over just a seven-day period in 2003. Waterspouts are more frequent within from the coast than farther out at sea. They are common along the southeast U.S. coast, especially off southern Florida and the Keys, and can happen over seas, bays, and lakes worldwide. Approximately 160 waterspouts are currently reported per year across Europe, with the Netherlands reporting the most at 60, followed by Spain and Italy at 25, and the United Kingdom at 15. They are most common in late summer. In the Northern Hemisphere, September has been pinpointed as the prime month of formation. Waterspouts are also frequently observed off the east coast of Australia, with several being described by Joseph Banks during the voyage of the Endeavour in 1770. Formation Waterspouts exist on a microscale, where their environment is less than two kilometers in width. The cloud from which they develop can be as innocuous as a moderate cumulus, or as great as a supercell. While some waterspouts are strong and tornadic in nature, most are much weaker and caused by different atmospheric dynamics. They normally develop in moisture-laden environments as their parent clouds are in the process of development, and it is theorized they spin as they move up the surface boundary from the horizontal shear near the surface, and then stretch upwards to the cloud once the low-level shear vortex aligns with a developing cumulus cloud or thunderstorm. Some weak tornadoes, known as landspouts, have been shown to develop in a similar manner. More than one waterspout can occur simultaneously in the same vicinity. In 2012, as many as nine simultaneous waterspouts were reported on Lake Michigan in the United States. 
In May 2021, at least five simultaneous waterspouts were filmed near Taree, off the northern coast of New South Wales, Australia. Types Non-tornadic Waterspouts that are not associated with a rotating updraft of a supercell thunderstorm are known as "non-tornadic" or "fair-weather" waterspouts. By far the most common type of waterspout, these occur in coastal waters and are associated with dark, flat-bottomed, developing convective cumulus towers. Fair-weather waterspouts develop and dissipate rapidly, having life cycles shorter than 20 minutes. They usually rate no higher than EF0 on the Enhanced Fujita scale, generally exhibiting winds of less than . They are most frequently seen in tropical and sub-tropical climates, with upwards of 400 per year observed in the Florida Keys. They typically move slowly, if at all, since the cloud to which they are attached is horizontally static, being formed by vertical convective action rather than the subduction/adduction interaction between colliding fronts. Fair-weather waterspouts are very similar in both appearance and mechanics to landspouts, and largely behave as such if they move ashore. There are five stages to a fair-weather waterspout life cycle. Initially, a prominent circular, light-colored disk appears on the surface of the water, surrounded by a larger dark area of indeterminate shape. After the formation of these colored disks on the water, a pattern of light- and dark-colored spiral bands develops from the dark spot on the water surface. Then, a dense annulus of sea spray, called a "cascade", appears around the dark spot with what appears to be an eye. Eventually, the waterspout becomes a visible funnel from the water surface to the overhead cloud. The spray vortex can rise to a height of several hundred feet or more, and often creates a visible wake and an associated wave train as it moves. Finally, the funnel and spray vortex begin to dissipate as the inflow of warm air into the vortex weakens, ending the waterspout's life cycle. Tornadic "Tornadic waterspouts", also accurately referred to as "tornadoes over water", are formed from mesocyclones in a manner essentially identical to land-based tornadoes in connection with severe thunderstorms, but simply occurring over water. A tornado which travels from land to a body of water would also be considered a tornadic waterspout. Since the vast majority of mesocyclonic thunderstorms in the United States occur in land-locked areas, true tornadic waterspouts are correspondingly rarer than their fair-weather counterparts in that country. However, in some areas, such as the Adriatic, Aegean and Ionian Seas, tornadic waterspouts can make up half of the total number. Snowspout A winter waterspout, also known as an icespout, an ice devil, or a snowspout, is a rare instance of a waterspout forming under the base of a snow squall. The term "winter waterspout" is used to differentiate between the common warm season waterspout and this rare winter season event. There are a couple of critical criteria for the formation of a winter waterspout. Very cold temperatures need to be present over a body of water, which is itself warm enough to produce fog resembling steam above the water's surface. Like the more efficient lake-effect snow events, winds focusing down the axis of long lakes enhance wind convergence and increase the likelihood of a winter waterspout developing. 
The terms "snow devil" and "snownado" describe a different phenomenon: a snow vortex close to the surface with no parent cloud, similar to a dust devil. Impacts Human Waterspouts have long been recognized as serious marine hazards. Stronger waterspouts pose a threat to watercraft, aircraft and people. It is recommended to keep a considerable distance from these phenomena, and to always be on alert through weather reports. The United States National Weather Service will often issue special marine warnings when waterspouts are likely or have been sighted over coastal waters, or tornado warnings when waterspouts are expected to move onshore. Incidents of waterspouts causing severe damage and casualties are rare; however, there have been several notable examples. The Malta tornado of 1551 was the earliest recorded occurrence of a deadly waterspout. It struck the Grand Harbour of Valletta, sinking four galleys and numerous boats, and killing hundreds of people. The 1851 Sicily tornadoes were twin waterspouts that made landfall in western Sicily, ravaging the coast and countryside before ultimately dissipating back again over the sea. In August 2024, a waterspout has been reported by some witnesses of the sinking of the large yacht Bayesian off the coast of Sicily and might have been the cause or an aggravating circumstance. Seven people died while 15 of 22 were rescued. Natural Depending on how fast the winds from a waterspout are whipping, anything that is within about of the surface of the water, including fish of different sizes, frogs, and even turtles, can be lifted into the air. A waterspout can sometimes suck small animals such as fish out of the water and all the way up into the cloud. Even if the waterspout stops spinning, the fish in the cloud can be carried over land, buffeted up and down and around with the cloud's winds until its currents no longer keep the fish airborne. Depending on how far they travel and how high they are taken into the atmosphere, the fish are sometimes dead by the time they rain down. People as far as inland have experienced raining fish. Fish can also be sucked up from rivers, but raining fish is not a common weather phenomenon. Research and forecasting The Szilagyi Waterspout Index (SWI), developed by Canadian meteorologist Wade Szilagyi, is used to predict conditions favorable for waterspout development. The SWI ranges from −10 to +10, where values greater than or equal to zero represent conditions favorable for waterspout development. The International Centre for Waterspout Research (ICWR) is a non-governmental organization of individuals from around the world who are interested in the field of waterspouts from a research, operational and safety perspective. Originally a forum for researchers and meteorologists, the ICWR has expanded interest and contribution from storm chasers, the media, the marine and aviation communities and from private individuals. Myths There was a commonly held belief among sailors in the 18th and 19th centuries that shooting a broadside cannon volley dispersed waterspouts. Among others, Captain Vladimir Bronevskiy claims that it was a successful technique, having been an eyewitness to the dissipation of a phenomenon in the Adriatic while a midshipman aboard the frigate Venus during the 1806 campaign under Admiral Senyavin. A waterspout has been proposed as a reason for the abandonment of the Mary Celeste. 
See also Fire whirl Funnel cloud Steam devil Tornadogenesis References External links A series of pictures from the boat Nicorette approaching the NSW south coast tornadic waterspout. Pictures of cold-core waterspouts over Lake Michigan on 30 September 2006. Archived from the original on 10 March 2007. "A Winter Waterspout". Monthly Weather Review, February 1907. Severe weather and convection Tornado Vortices Weather hazards de:Wasserhose
Waterspout
Physics,Chemistry,Mathematics
2,273
413,755
https://en.wikipedia.org/wiki/Stapler
A stapler is a mechanical device that joins pages of paper or similar material by driving a thin metal staple through the sheets and folding the ends. Staplers are widely used in government, business, offices, workplaces, homes, and schools. The word "stapler" can refer to a number of different devices with varying uses. In addition to joining paper sheets together, staplers can also be used in a surgical setting to join tissue together with surgical staples to close a surgical wound (much in the same way as sutures). Most staplers are used to join multiple sheets of paper. Paper staplers come in two distinct types: manual and electric. Manual staplers are normally hand-held, although models that are used while set on a desk or other surface are not uncommon. Electric staplers exist in a variety of different designs and models. Their primary operating function is to join large numbers of paper sheets together in rapid succession. Some electric staplers can join up to 20 sheets at a time. Typical staplers are a third-class lever. History The growing usage of paper in the 19th century created a demand for an efficient paper fastener. In 1841 Slocum and Jillion invented a "Machine for Sticking Pins into Paper", which is often believed to be the first stapler. However, their patent (September 30, 1841, Patent #2275) was for a device used for packaging pins. In 1866, George McGill received U.S. patent 56,587 for a small, bendable brass paper fastener that was a precursor to the modern staple. In 1867, he received U.S. patent 67,665 for a press to insert the fastener into paper. He showed his invention at the 1876 Centennial Exhibition in Philadelphia, Pennsylvania, and continued to work on these and other various paper fasteners throughout the 1880s. In 1868 an English patent for a stapler was awarded to C. H. Gould, and in the U.S., Albert Kletzker of St. Louis, Missouri, also patented a device. In 1877 Henry R. Heyl filed patent number 195,603 for the first machines to both insert and clinch a staple in one step, and for this reason some consider him the inventor of the modern stapler. In 1876 and 1877, Heyl also filed patents for the Novelty Paper Box Manufacturing Co. of Philadelphia, PA; however, these inventions were intended for stapling boxes and books. The first machine to hold a magazine of many pre-formed staples came out in 1878. On February 18, 1879, George McGill received patent 212,316 for the McGill Single-Stroke Staple Press, the first commercially successful stapler. This device weighed over two and a half pounds and loaded a single wire staple, which it could drive through several sheets of paper. The first published use of the word "stapler" to indicate a machine for fastening papers with a thin metal wire was in an advertisement in the American Munsey's Magazine in 1901. In the early 1900s, several devices were developed and patented that punched and folded papers to attach them to each other without a metallic clip. The Clipless Stand Machine (made in North Berwick) was sold from 1909 into the 1920s. It cut a tongue in the paper that it folded back and tucked in. Bump's New Model Paper Fastener used a similar cutting and weaving technology. The modern stapler In 1941, the most common type of paper stapler in use today was developed: the four-way paper stapler.
With the four-way, the operator could use the stapler to staple papers to wood or cardboard, use it as pliers for bags, or use it in the normal way with the head positioned a small distance above the stapling plate. The stapling plate is known as the anvil. The anvil often has two settings: the first, and by far the most common, is the reflexive setting, also known as the "permanent" setting. In this position, the legs of the staple are folded toward the center of the crossbar. It is used to staple papers which are not expected to need separation. If rotated 180° or slid to its second position, the anvil will be set on the sheer setting, also known as the "temporary" or "straight" setting. In this position, the legs of the staple are folded outwards, away from the crossbar, resulting in the legs and crossbar lying in more or less a straight line. Stapling with this setting results in more weakly secured papers but a staple that is much easier to remove. The second setting is almost never used, however, due to the prevalence of staple removers and the general lack of knowledge about its purpose. Some simple modern staplers feature a fixed anvil that lacks the sheer position. Modern staplers continue to evolve and adapt to users' changing habits. Reduced-effort or "easy-squeeze" staplers, for example, use different leverage efficiencies to reduce the amount of force the user needs to apply. As a result, these staplers tend to be used in work environments where repetitive, large stapling jobs are routine. Some modern desktop staplers make use of flat-clinch technology. With flat-clinch staplers, the staple legs first pierce the paper and are then bent over and pressed absolutely flat against the paper, doing away with the commonly used two-setting anvil and instead using a recessed stapling base in which the legs are folded. Accordingly, staples do not have sharp edges exposed, and papers stack more flatly, saving filing and binder space. Some photocopiers feature an integrated stapler allowing copies of documents to be automatically stapled as they are printed. Industry In 2012, $80 million worth of staplers were sold in the US. The dominant US manufacturer is Swingline. Methods Permanent fastening binds items by driving the staple through the material and into an anvil, a small metal plate that bends the ends, usually inward. On most modern staplers, the anvil rotates or slides to change between bending the staple ends inward for permanent stapling or outward for pinning (see below). Clinches can be standard, squiggled, flat, or rounded, lying nearly flush with the paper to facilitate neater document stacking. Pinning temporarily binds documents or other items. To pin, the anvil slides or rotates so that the staple bends outwards instead of inwards. Some staplers pin by bending one leg of the staple inwards and the other outwards. The staple binds the item with relative security but is easily removed. Tacking fastens objects to surfaces, such as bulletin boards or walls. A stapler that can tack has a base that folds back out of the way, so staples drive directly into an object rather than folding against the anvil. In this position, the staples are driven similarly to the way a staple gun works, but with less force driving the staple. Saddle staplers have an inverted V-shaped saddle for stapling pre-folded sheets to make booklets. Stapleless staplers, invented in 1910, are a means of stapling that punches out a small flap of paper and weaves it through a notch.
A more recent alternative method avoids the resulting hole by crimping the pages together with serrated metal teeth instead. Surgical staplers Surgeons can use surgical staplers in place of sutures to close the skin or during surgical anastomosis. A skin stapler does not resemble a standard stapler, as it has no anvil. Skin staples are commonly preshaped into an "M." Pressing the stapler into the skin and applying pressure onto the handle bends the staple through the skin and into the fascia until the two ends almost meet in the middle to form a rectangle. Staplers are commonly used intra-operatively during bowel resections in colorectal surgery. Often these staplers have an integral knife which, as the staples deploy, cuts through the bowel and maintains the aseptic field. The staples, made from surgical steel, are typically supplied in disposable sterilized cartridges. See also Office Space, a 1999 comedy film where a stapler is one of the plot objects Staple remover Staple gun References External links Fasteners American inventions Packaging machinery Stationery 19th-century inventions Office equipment
Stapler
Engineering
1,732
2,229,421
https://en.wikipedia.org/wiki/Diisobutylaluminium%20hydride
Diisobutylaluminium hydride (DIBALH, DIBAL, DIBAL-H or DIBAH) is a reducing agent with the formula (i-Bu2AlH)2, where i-Bu represents isobutyl (-CH2CH(CH3)2). This organoaluminium compound is a reagent in organic synthesis. Properties Like most organoaluminium compounds, the compound's structure is more complex than that suggested by its empirical formula. A variety of techniques, not including X-ray crystallography, suggest that the compound exists as a dimer and a trimer, consisting of tetrahedral aluminium centers sharing bridging hydride ligands. Hydrides are small and, for aluminium derivatives, are highly basic; thus they bridge in preference to the alkyl groups. DIBAL can be prepared by heating triisobutylaluminium (itself a dimer) to induce β-hydride elimination: (i-Bu3Al)2 → (i-Bu2AlH)2 + 2 (CH3)2C=CH2 Although DIBAL can be purchased commercially as a colorless liquid, it is more commonly purchased and dispensed as a solution in an organic solvent such as toluene or hexane. Use in organic synthesis DIBAL reacts slowly with electron-poor compounds and more quickly with electron-rich compounds. Thus, it is an electrophilic reducing agent, whereas LiAlH4 can be thought of as a nucleophilic reducing agent. DIBAL is useful in organic synthesis for a variety of reductions, including converting carboxylic acids, their derivatives, and nitriles to aldehydes. DIBAL efficiently reduces α,β-unsaturated esters to the corresponding allylic alcohol. By contrast, LiAlH4 reduces esters and acyl chlorides to primary alcohols, and nitriles to primary amines (using the Fieser work-up procedure). Similarly, DIBAL reduces lactones to hemiacetals (the equivalent of an aldehyde). Although DIBAL reliably reduces nitriles to aldehydes, the reduction of esters to aldehydes is infamous for often producing large quantities of alcohols. Nevertheless, it is possible to avoid these unwanted byproducts through careful control of the reaction conditions using continuous flow chemistry. DIBALH was originally investigated as a cocatalyst for the polymerization of alkenes. Safety DIBAL, like most alkylaluminium compounds, reacts violently with air and water, potentially leading to explosion. References External links Isobutyl compounds Metal hydrides Organoaluminium compounds Reducing agents
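To illustrate the partial reduction of esters described above, the following is a minimal sketch of the usual sequence; the conditions shown (one equivalent of DIBAL in toluene at −78 °C) are typical textbook values assumed for the example, not figures stated in this article.

```latex
% Sketch of a DIBAL partial reduction of an ester (conditions assumed):
\[
  \mathrm{RCO_2R'}
  \xrightarrow{\ i\text{-}\mathrm{Bu_2AlH}\ (1\ \text{equiv}),\ \text{toluene},\ -78\,^{\circ}\mathrm{C}\ }
  \text{(stable tetrahedral Al--alkoxide intermediate)}
  \xrightarrow{\ \mathrm{H_3O^+}\ }
  \mathrm{RCHO}
\]
% With excess DIBAL or on warming, the intermediate collapses to the free
% aldehyde in situ and is reduced a second time, giving the alcohol
% RCH2OH -- the over-reduction to alcohols that the text mentions.
```

Keeping the tetrahedral intermediate intact until the aqueous workup is what stops the reduction at the aldehyde stage, which is why temperature control (or the continuous-flow approach noted above) matters.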
Diisobutylaluminium hydride
Chemistry
572
11,256,467
https://en.wikipedia.org/wiki/Damaskeening
Damaskeening is decorative patterning on a watch movement. The term damaskeening is used in America, while in Europe the terms used are Fausses Côtes, Côtes de Genève or Geneva Stripes. Such patterns are made from very fine scratches cut by a rose engine lathe using small disks, polishing wheels, or ivory laps. These patterns look similar to the results of a Spirograph or Guilloché engraving. The earliest known damaskeened American watch movement is E. Howard & Company movement SN 1,105, a gold-flashed brass movement with a helical hairspring. In the period between 1862 and 1868, the same Boston firm damaskeened approximately 400 Model 1862-N (Series III) gold-flashed movements as well, and about 140 nickel-plated brass movements were decorated in this way between 1868 and 1870. Howard used damaskeening in this period to draw the viewer's eye to the Reed's patented main wheel, an important technical feature of the watches. Damaskeening was first used in America on solid nickel movements in 1867 by the U.S. Watch Co. of Marion, NJ. In 1868–69, the American Watch Company of Waltham, MA employed damaskeening on small numbers of top-grade nickel Model 16KW (a.k.a. Model 1860) and nickel Model 1868 movements. Damaskeening then quickly spread to most other American watch manufacturers and watch grades. Two-tone damaskeening can be created by applying a thin plating of gold and then having the damaskeening scrape through the gold outer layer and into the nickel plate. In 2022, the Swiss machine manufacturer SwissKH (whose name derives from "Swiss know-how") presented a new machine for producing this traditional decoration: the Angelo machine. References A Study of E. Howard & Co. Watchmaking Innovations, 1858–1875, by Clint B. Geller, NAWCC Bulletin Special Order Supplement #6 (2006). American Watchmaking: A Technical History of the American Watch Industry, 1850–1930, by Michael C. Harrold, NAWCC Bulletin Supplement (1984). Complete Price Guide to Watches, by Cooksey Shugart, Tom Engle, and Richard E. Gilbert, 18th ed. (1998). Watches Timekeeping components
Damaskeening
Technology
478
28,401,716
https://en.wikipedia.org/wiki/Drug%20Metabolism%20and%20Disposition
Drug Metabolism and Disposition is a peer-reviewed scientific journal covering the fields of pharmacology and toxicology. It was established in 1973 and is published monthly by the American Society for Pharmacology and Experimental Therapeutics. The journal publishes articles on in vitro and in vivo studies of the metabolism, transport, and disposition of drugs and environmental chemicals, including the expression of drug-metabolizing enzymes and their regulation. The editor-in-chief is XinXin Ding. All issues are available online as PDFs, with text versions additionally available from 1997 onward. Content from 1997 onward becomes freely available 12 months after publication. History Drug Metabolism and Disposition was established in 1973 by Kenneth C. Leibman. The initial frequency was bimonthly (six annual issues); it increased to monthly in 1995. The journal was published on behalf of the society by Williams & Wilkins until the end of 1996. Abstracting and indexing According to the Journal Citation Reports, Drug Metabolism and Disposition received a 2020 impact factor of 3.922. The journal is abstracted and indexed in the following databases: BIOSIS Previews Chemical Abstracts Service Current Contents/Life Sciences EMBASE MEDLINE META Science Citation Index References External links Pharmacology journals Monthly journals English-language journals Academic journals established in 1973 Delayed open access journals Toxicology journals
Drug Metabolism and Disposition
Environmental_science
271
57,738,208
https://en.wikipedia.org/wiki/Belgian%20Scientific%20Expedition
The Belgian Scientific Expedition was a scientific survey of the Great Barrier Reef, conducted in 1967–1968. The Belgian Scientific Expedition to the Great Barrier Reef was a seven-month expedition beginning in 1967, sponsored by the University of Liège in Belgium, the Belgian Ministry of Education, and the National Foundation for Scientific Research. It indirectly honoured the Great Barrier Reef Expedition of 1928–1929, which was led by Maurice Yonge and a large group of researchers from Europe. This earlier expedition had studied the northern Great Barrier Reef, primarily around Low Isles Reef. The 1967 expedition, led by Professor Albert Distèche, took place between Lady Musgrave Island and Lizard Island off the coast of Queensland on the Great Barrier Reef. Seventy-five crew members, along with many researchers and guests, were involved in the expedition. Its primary objective was to make scientific marine biology films. Ron Taylor, who would become famous for his films and diving work with sharks, was one of the cinematographers hired to undertake the underwater filming using a 35mm motion picture camera. The former British warship De Moor was utilised for the study by the Belgian Navy. Captain Wally Muller was contracted to guide the De Moor through the Swain Reefs and remain with the expedition on his charter vessel, the Careelah. Coral reef scientists participated in the study as time permitted. The ship would return to shore every 10 days. Among these scientists were David Barnes from the Townsville area and Robert Endean from the University of Queensland. Sir Maurice Yonge also visited during this expedition, in recognition of his earlier work in 1928. Later studies of the Reef would be conducted and published as part of Project Stellaroid, which surveyed coral reefs in the North Pacific Ocean and their damage by the crown-of-thorns starfish. References Marine biology Great Barrier Reef
Belgian Scientific Expedition
Biology
357
73,430,543
https://en.wikipedia.org/wiki/Xenon%20octafluoride
Xenon octafluoride is a chemical compound of xenon and fluorine with the chemical formula XeF8. It is still a hypothetical compound: XeF8 is reported to be unstable even under pressures reaching 200 GPa. History The compound was first predicted in 1933 by Linus Pauling, among other noble gas compounds, but unlike the other xenon fluorides it could probably never be synthesized. This appears to be due to the steric hindrance of the fluorine atoms around the xenon atom. However, scientists continue to try to synthesize it. Potential synthesis The formation of xenon octafluoride from the elements has been calculated to be endothermic: Xe + 4 F2 → XeF8 References Xenon(VIII) compounds Fluorides Nonmetal halides Hypothetical chemical compounds
Xenon octafluoride
Chemistry
162
20,657,686
https://en.wikipedia.org/wiki/M44%20generator%20cluster
The M44 generator cluster was an American chemical cluster bomb designed to deliver the incapacitating agent BZ. It was first mass-produced in 1962, and all stocks of the weapons were destroyed by 1989. History The United States Army Chemical Corps renewed its chemical warfare (CW) program's focus in the early 1960s. This refocusing led to the pursuit of weapons utilizing agent BZ. In March 1962, the U.S. Army first began mass-production of the M44 generator cluster, along with the M43 BZ cluster bomb. Despite reaching mass-production ("standardization" in military jargon) levels, the M44 and the M43 were never truly integrated into the main U.S. chemical arsenal. In total, around 1,500 of the M44s and M43s were produced. All U.S. BZ munitions and agent stockpiles were stored at Pine Bluff Arsenal. The entire U.S. BZ stockpile, including the M44s, was demilitarized and destroyed between 1988 and 1989. Specifications The M44 had a diameter of and a length of . Weighing , the M44 generator cluster was a cluster bomb designed to deliver approximately of the chemical incapacitating agent BZ. The weapon's sub-munitions are a combination of various components. Three M16 BZ smoke generators were held together in an M39 cluster adapter and its M92 wire assembly; the M39 essentially bound and buckled the generators together. Each generator also held its own parachute, complete with harnesses and its own container. Also within the generator was its "generator pail", which contained the M6 canisters, the part of the sub-munition that held the BZ. Each of the M44's three generator pails held 42 M6 canisters, a total of 126. The canisters were arranged in 14 three-canister tiers, and each one held about of agent BZ. Issues The M44's relatively small production numbers were due, like those of all U.S. BZ munitions, to a number of shortcomings. The M44 dispensed its agent in a cloud of white, particulate smoke. This was especially problematic because the white smoke was easily visible and BZ exposure was simple to prevent; a few layers of cloth over the mouth and nose are sufficient. There were a number of other factors that made BZ weapons unattractive to military planners. BZ had a delayed and variable rate-of-action, as well as a less than ideal "envelope-of-action". In addition, BZ casualties exhibited bizarre behavior; 50 to 80 percent had to be restrained to prevent self-injury during recovery. Others exhibited distinct symptoms of paranoia and mania. See also M34 cluster bomb References Cluster munitions Chemical weapon delivery systems Chemical weapons of the United States
M44 generator cluster
Chemistry
587
62,285,602
https://en.wikipedia.org/wiki/Multi-agent%20reinforcement%20learning
Multi-agent reinforcement learning (MARL) is a sub-field of reinforcement learning. It focuses on studying the behavior of multiple learning agents that coexist in a shared environment. Each agent is motivated by its own rewards, and takes actions to advance its own interests; in some environments these interests are opposed to the interests of other agents, resulting in complex group dynamics. Multi-agent reinforcement learning is closely related to game theory and especially repeated games, as well as multi-agent systems. Its study combines the pursuit of finding ideal algorithms that maximize rewards with a more sociological set of concepts. While research in single-agent reinforcement learning is concerned with finding the algorithm that obtains the highest possible reward for one agent, research in multi-agent reinforcement learning evaluates and quantifies social metrics, such as cooperation, reciprocity, equity, social influence, language and discrimination. Definition Similarly to single-agent reinforcement learning, multi-agent reinforcement learning is modeled as some form of a Markov decision process (MDP), defined, for example, by: a set of environment states $S$; one set of actions $A_i$ for each of the agents $i \in \{1, \dots, N\}$; a transition function $P_a(s, s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$, the probability of transition (at time $t$) from state $s$ to state $s'$ under joint action $a$; and a reward function $R_a(s, s')$, the immediate joint reward after the transition from $s$ to $s'$ with joint action $a$. In settings with perfect information, such as the games of chess and Go, the MDP would be fully observable. In settings with imperfect information, especially in real-world applications like self-driving cars, each agent would access an observation that only has part of the information about the current state. In the partially observable setting, the core model is the partially observable stochastic game in the general case, and the decentralized POMDP in the cooperative case. Cooperation vs. competition When multiple agents are acting in a shared environment their interests might be aligned or misaligned. MARL allows exploring all the different alignments and how they affect the agents' behavior: In pure competition settings, the agents' rewards are exactly opposite to each other, and therefore they are playing against each other. Pure cooperation settings are the other extreme, in which agents get the exact same rewards, and therefore they are playing with each other. Mixed-sum settings cover all the games that combine elements of both cooperation and competition. Pure competition settings When two agents are playing a zero-sum game, they are in pure competition with each other. Many traditional games such as chess and Go fall under this category, as do two-player variants of modern games like StarCraft. Because each agent can only win at the expense of the other agent, many complexities are stripped away. There is no prospect of communication or social dilemmas, as neither agent is incentivized to take actions that benefit its opponent. The Deep Blue and AlphaGo projects demonstrate how to optimize the performance of agents in pure competition settings. One complexity that is not stripped away in pure competition settings is the autocurriculum (see below). As the agents' policies are improved using self-play, multiple layers of learning may occur. Pure cooperation settings MARL is used to explore how separate agents with identical interests can communicate and work together. Pure cooperation settings are explored in recreational cooperative games such as Overcooked, as well as real-world scenarios in robotics. A minimal code sketch of learning dynamics in such settings follows.
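The following is a minimal illustrative sketch (not drawn from any cited work; the hyperparameters are invented for the example) of two independent Q-learners in a stateless pure-coordination game. Both agents receive the identical reward, so either joint action is an equally good outcome, and which one emerges depends only on the random exploration history.

```python
import random

# Two independent Q-learners in a pure-coordination matrix game.
# Both agents get reward 1 when they choose the same action, 0 otherwise,
# so the joint choices (0, 0) and (1, 1) are equally good "conventions".
ACTIONS = [0, 1]
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate (assumed values)

def reward(a1, a2):
    return 1.0 if a1 == a2 else 0.0  # identical reward for both agents

def choose(q):
    if random.random() < EPSILON:            # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])  # otherwise act greedily

q1 = {a: 0.0 for a in ACTIONS}  # each agent keeps its own value table
q2 = {a: 0.0 for a in ACTIONS}

for _ in range(5000):
    a1, a2 = choose(q1), choose(q2)
    r = reward(a1, a2)
    # Stateless Q-update: each agent treats the other agent as part of the
    # environment, which is why the environment looks non-stationary from
    # any single agent's point of view.
    q1[a1] += ALPHA * (r - q1[a1])
    q2[a2] += ALPHA * (r - q2[a2])

print("Agent 1 prefers action", max(ACTIONS, key=lambda a: q1[a]))
print("Agent 2 prefers action", max(ACTIONS, key=lambda a: q2[a]))
```

Rerunning the script can converge to either joint action, mirroring the arbitrariness of the coordination strategies, or "conventions", discussed next.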
In pure cooperation settings all the agents get identical rewards, which means that social dilemmas do not occur. In pure cooperation settings, there is often an arbitrary number of coordination strategies, and agents converge to specific "conventions" when coordinating with each other. The notion of conventions has been studied in language and also alluded to in more general multi-agent collaborative tasks. Mixed-sum settings Most real-world scenarios involving multiple agents have elements of both cooperation and competition. For example, when multiple self-driving cars are planning their respective paths, each of them has interests that are diverging but not exclusive: each car is trying to minimize the time it takes to reach its destination, but all cars have the shared interest of avoiding a traffic collision. Zero-sum settings with three or more agents often exhibit similar properties to mixed-sum settings, since each pair of agents might have a non-zero utility sum between them. Mixed-sum settings can be explored using classic matrix games such as prisoner's dilemma, more complex sequential social dilemmas, and recreational games such as Among Us, Diplomacy and StarCraft II. Mixed-sum settings can give rise to communication and social dilemmas. Social dilemmas As in game theory, much of the research in MARL revolves around social dilemmas, such as prisoner's dilemma, chicken and stag hunt. While game theory research might focus on Nash equilibria and what an ideal policy for an agent would be, MARL research focuses on how the agents would learn these ideal policies using a trial-and-error process. The reinforcement learning algorithms that are used to train the agents are maximizing the agent's own reward; the conflict between the needs of the agents and the needs of the group is a subject of active research. Various techniques have been explored in order to induce cooperation in agents: modifying the environment rules, adding intrinsic rewards, and more. Sequential social dilemmas Social dilemmas like prisoner's dilemma, chicken and stag hunt are "matrix games". Each agent takes only one action from a choice of two possible actions, and a simple 2x2 matrix is used to describe the reward that each agent will get, given the actions that each agent took. In humans and other living creatures, social dilemmas tend to be more complex. Agents take multiple actions over time, and the distinction between cooperating and defecting is not as clear-cut as in matrix games. The concept of a sequential social dilemma (SSD) was introduced in 2017 as an attempt to model that complexity. There is ongoing research into defining different kinds of SSDs and showing cooperative behavior in the agents that act in them. Autocurricula An autocurriculum (plural: autocurricula) is a reinforcement learning concept that is salient in multi-agent experiments. As agents improve their performance, they change their environment; this change in the environment affects themselves and the other agents. The feedback loop results in several distinct phases of learning, each depending on the previous one. The stacked layers of learning are called an autocurriculum. Autocurricula are especially apparent in adversarial settings, where each group of agents is racing to counter the current strategy of the opposing group. The Hide and Seek game is an accessible example of an autocurriculum occurring in an adversarial setting. In this experiment, a team of seekers is competing against a team of hiders.
Whenever one of the teams learns a new strategy, the opposing team adapts its strategy to give the best possible counter. When the hiders learn to use boxes to build a shelter, the seekers respond by learning to use a ramp to break into that shelter. The hiders respond by locking the ramps, making them unavailable for the seekers to use. The seekers then respond by "box surfing", exploiting a glitch in the game to penetrate the shelter. Each "level" of learning is an emergent phenomenon, with the previous level as its premise. This results in a stack of behaviors, each dependent on its predecessor. Autocurricula in reinforcement learning experiments are compared to the stages of the evolution of life on Earth and the development of human culture. A major stage in evolution happened 2–3 billion years ago, when photosynthesizing life forms started to produce massive amounts of oxygen, changing the balance of gases in the atmosphere. In the next stages of evolution, oxygen-breathing life forms evolved, eventually leading up to land mammals and human beings. These later stages could only happen after the photosynthesis stage made oxygen widely available. Similarly, human culture could not have gone through the Industrial Revolution in the 18th century without the resources and insights gained by the agricultural revolution at around 10,000 BC. Applications Multi-agent reinforcement learning has been applied to a variety of use cases in science and industry. AI alignment Multi-agent reinforcement learning has been used in research into AI alignment. The relationship between the different agents in a MARL setting can be compared to the relationship between a human and an AI agent. Research efforts in the intersection of these two fields attempt to simulate possible conflicts between a human's intentions and an AI agent's actions, and then explore which variables could be changed to prevent these conflicts. Limitations There are some inherent difficulties in multi-agent deep reinforcement learning. From any single agent's perspective, the environment is non-stationary, so the Markov property is violated: transitions and rewards do not depend only on the current state of an agent. Further reading Stefano V. Albrecht, Filippos Christianos, Lukas Schäfer. Multi-Agent Reinforcement Learning: Foundations and Modern Approaches. MIT Press, 2024. https://www.marl-book.com Kaiqing Zhang, Zhuoran Yang, Tamer Basar. Multi-agent reinforcement learning: A selective overview of theories and algorithms. Studies in Systems, Decision and Control, Handbook on RL and Control, 2021. References Reinforcement learning Multi-agent systems Deep learning Game theory
Multi-agent reinforcement learning
Engineering
1,888
19,075,439
https://en.wikipedia.org/wiki/Slot-waveguide
A slot-waveguide is an optical waveguide that guides strongly confined light in a subwavelength-scale low-refractive-index region by total internal reflection. A slot-waveguide consists of two strips or slabs of high-refractive-index ($n_H$) materials separated by a subwavelength-scale low-refractive-index ($n_S$) slot region and surrounded by low-refractive-index ($n_C$) cladding materials. Principle of operation The principle of operation of a slot-waveguide is based on the discontinuity of the electric field (E-field) at high-refractive-index-contrast interfaces. Maxwell's equations state that, to satisfy the continuity of the normal component of the electric displacement field $D$ at an interface, the corresponding E-field must undergo a discontinuity, with higher amplitude on the low-refractive-index side. That is, at an interface between two regions of dielectric constants $\varepsilon_S$ and $\varepsilon_H$, respectively: $D_S^N = D_H^N$, that is, $\varepsilon_S E_S^N = \varepsilon_H E_H^N$, or equivalently $n_S^2 E_S^N = n_H^2 E_H^N$, where the superscript $N$ indicates the normal components of the $D$ and $E$ vector fields. Thus, if $n_S \ll n_H$, then $E_S^N \gg E_H^N$. Given that the slot critical dimension (the distance between the high-index slabs or strips) is comparable to the exponential decay length of the fundamental eigenmode of the guided-wave structure, the resulting E-field normal to the high-index-contrast interfaces is enhanced in the slot and remains high across it. The power density in the slot is much higher than that in the high-index regions. Since wave propagation is due to total internal reflection, there is no interference effect involved, and the slot structure exhibits very low wavelength sensitivity. Invention The slot-waveguide was born in 2003 as an unexpected outcome of theoretical studies on metal-oxide-semiconductor (MOS) electro-optic modulation in high-confinement silicon photonic waveguides by Vilson Rosa de Almeida and Carlos Angulo Barrios, then a Ph.D. student and a postdoctoral associate, respectively, at Cornell University. Theoretical analysis and experimental demonstration of the first slot-waveguide, implemented in the Si/SiO2 material system at a 1.55 μm operation wavelength, were reported by Cornell researchers in 2004. Since these pioneering works, several guided-wave configurations based on the slot-waveguide concept have been proposed and demonstrated. Relevant examples are the following: In 2005, researchers at the Massachusetts Institute of Technology proposed using multiple slot regions in the same guided-wave structure (multi-slot waveguide) in order to increase the optical field in the low-refractive-index regions. The experimental demonstration of such a multiple-slot waveguide in a horizontal configuration was first published in 2007. In 2006, the slot-waveguide approach was extended to the terahertz frequency band by researchers at RWTH Aachen University. Researchers at the California Institute of Technology also demonstrated that a slot waveguide, in combination with nonlinear electro-optic polymers, could be used to build ring modulators with exceptionally high tunability. Later, this same principle enabled Baehr-Jones et al. to demonstrate a Mach–Zehnder modulator with an exceptionally low drive voltage of 0.25 V. In 2007, a non-planar implementation of the slot-waveguide principle of operation was demonstrated by researchers at the University of Bath. They showed concentration of optical energy within a subwavelength-scale air hole running down the length of a photonic-crystal fiber.
In 2016, it was shown that slots in a pair of waveguides, if offset from each other, can enhance the coupling coefficient by more than 100% when optimized properly, and thus the effective power coupling length between the waveguides can be significantly reduced. A hybrid-slot-assisted polarization beam splitter (with a vertical slot in one waveguide and a horizontal slot in the other) has also been demonstrated numerically. Although losses are high for such slot structures, this scheme exploiting asymmetric slots may have the potential to yield very compact optical directional couplers and polarization beam splitters for on-chip integrated optical devices. The slot-waveguide bend is another structure essential to the waveguide design of several integrated micro- and nano-optics devices. One of the benefits of waveguide bends is the reduction of the footprint size of the device. There are two approaches to forming a sharp bend in a slot waveguide, based on whether the widths of the Si rails are similar: the symmetric and the asymmetric slot waveguide. Fabrication Planar slot-waveguides have been fabricated in different material systems such as Si/SiO2 and Si3N4/SiO2. Both vertical (slot plane normal to the substrate plane) and horizontal (slot plane parallel to the substrate plane) configurations have been implemented by using conventional micro- and nano-fabrication techniques. These processing tools include electron beam lithography, photolithography, chemical vapour deposition [usually low-pressure chemical vapour deposition (LPCVD) or plasma-enhanced chemical vapour deposition (PECVD)], thermal oxidation, reactive-ion etching and focused ion beam. In vertical slot-waveguides, the slot and strip widths are defined by electron- or photo-lithography and dry etching techniques, whereas in horizontal slot-waveguides the slot and strip thicknesses are defined by a thin-film deposition technique or thermal oxidation. Thin-film deposition or oxidation provides better control of the layer dimensions and smoother interfaces between the high-index-contrast materials than lithography and dry etching techniques. This makes horizontal slot-waveguides less sensitive to scattering optical losses due to interface roughness than vertical configurations. Fabrication of a non-planar (fiber-based) slot-waveguide configuration has also been demonstrated by means of conventional microstructured optical fiber technology. Applications A slot-waveguide produces high E-field amplitude, optical power, and optical intensity in low-index materials at levels that cannot be achieved with conventional waveguides. This property allows highly efficient interaction between fields and active materials, which may lead to all-optical switching, optical amplification and optical detection on integrated photonics. Strong E-field confinement can be localized in a nanometer-scale low-index region. As was pointed out in the original work, the slot waveguide can be used to greatly increase the sensitivity of compact optical sensing devices or to enhance the efficiency of near-field optics probes. At terahertz frequencies, a slot-waveguide-based splitter has been designed that allows low-loss propagation of terahertz waves. The device acts as a splitter through which maximum throughput can be achieved by adjusting the arm-length ratio of the input to the output side. References Optical components Photonics
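To make the field enhancement in the Principle of operation section concrete, here is a worked example; the refractive indices used (silicon, $n_H \approx 3.48$, and silica, $n_S \approx 1.44$, near a 1.55 μm wavelength) are representative assumed values, not figures quoted from this article.

```latex
% Normal E-field enhancement in the slot for assumed Si/SiO2 indices:
\[
  \frac{E_S^N}{E_H^N} = \frac{n_H^2}{n_S^2}
  \approx \left(\frac{3.48}{1.44}\right)^{2} \approx 5.8
\]
% Under these assumptions, the normal E-field in the low-index slot is
% nearly six times stronger than in the adjacent high-index rails.
```

This ratio is what makes the slot so attractive for sensing and for interactions with active materials placed in the low-index region, as described above.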
Slot-waveguide
Materials_science,Technology,Engineering
1,404
39,897,887
https://en.wikipedia.org/wiki/Journal%20of%20Electronic%20Imaging
The Journal of Electronic Imaging is a peer-reviewed scientific journal published quarterly by SPIE and the Society for Imaging Science and Technology. It covers all technology topics pertaining to the field of electronic imaging. The editor-in-chief is Zeev Zalevsky. Abstracting and indexing This journal is indexed by the following services: Science Citation Index Expanded Current Contents/Engineering, Computing & Technology Inspec Scopus Ei/Compendex Astrophysics Data System According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.945. References External links English-language journals Academic journals established in 1992 Optics journals SPIE academic journals Electrical and electronic engineering journals Quarterly journals Signal processing journals
Journal of Electronic Imaging
Engineering
144
8,282,811
https://en.wikipedia.org/wiki/Ban%20number
In recreational mathematics, a ban number is a number that does not contain a particular letter when spelled out in English; in other words, the letter is "banned." Ban numbers are not precisely defined, since some large numbers do not follow the standards of number names (such as googol and googolplex). There are several published sequences of ban numbers: The aban numbers do not contain the letter A. The first few aban numbers are 1 through 999, 1,000,000 through 1,000,999, 2,000,000 through 2,000,999, ... The word "and" is not counted. The dban numbers do not contain the letter D. The first few dban numbers are 1 through 99, 1,000,000 through 1,000,099, 2,000,000 through 2,000,099, ... The eban numbers do not contain the letter E. The first few eban numbers are 2, 4, 6, 30, 32, 34, 36, 40, 42, 44, 46, 50, 52, 54, 56, 60, 62, 64, 66, 2000, 2002, 2004, ... . The sequence was introduced in 1990 by Neil Sloane. All the numbers in the sequence are necessarily even (see below). The iban numbers do not contain the letter I. The first few iban numbers are 1, 2, 3, 4, 7, 10, 11, 12, 14, 17, 20, 21, 22, 23, 24, 27, 40, ... . Since all the -illion numbers contain the letter I, there are exactly 30,275 iban numbers, the largest being 777,777. The nban numbers do not contain the letter N. The first few nban numbers are 2, 3, 4, 5, 6, 8, 12, 30, 32, 33, 34, 35, 36, 38, 40, 42, 43, 44, 45, 46, 48, ... . Since "hundred", "thousand", and all the -illion numbers contain the letter N, there are exactly 42 nban numbers, the largest being 88. The oban numbers do not contain the letter O. The first few oban numbers are 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, 23, 25, 26, ... . Since "thousand" and all the -illion numbers contain the letter O, there are exactly 454 oban numbers, the largest being 999. Saying the word "o" for numbers with 0 in the tens place is not counted. The sban numbers do not contain the letter S. The first few sban numbers are 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 14, 15, 18, 19, 20, 21, 22, 23, 24, ... . The tban numbers do not contain the letter T. The first few tban numbers are 1, 4, 5, 6, 7, 9, 11, 100, 101, 104, 105, 106, 107, 109, 111, 400, 401, 404, 405, 406, ... . The uban numbers do not contain the letter U. The first few uban numbers are 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 16, 17, 18, 19, 20, 21, 22, 23, 25, 26, ... . The yban numbers do not contain the letter Y. The first few yban numbers are 1 through 19, 100 through 119, 200 through 219, 300 through 319, ... . Basic properties Aban numbers For $1 < N < 10^9$, the aban numbers are exactly those for which the integer part of $N/1000$ is divisible by 1000. Eban numbers Eban numbers are never odd, because "one", "three", "five", "seven", "nine", "eleven" and the suffix -teen all contain an E. A short program for generating such sequences is sketched below. Further reading External links Integer sequences
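Here is that sketch: a short, self-contained Python program (illustrative, not part of the original article) that spells out numbers below one million in English, omitting the word "and" as the definitions above require, and filters out a banned letter.

```python
# Generate "ban numbers" by spelling numbers in English and filtering
# out a banned letter. Covers 1 <= n < 1,000,000, without the word "and".
ONES = ["", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def spell(n):
    """Spell out 1 <= n < 1,000,000 in English."""
    if n >= 1000:
        s = spell(n // 1000) + " thousand"
        return s + (" " + spell(n % 1000) if n % 1000 else "")
    if n >= 100:
        s = ONES[n // 100] + " hundred"
        return s + (" " + spell(n % 100) if n % 100 else "")
    if n >= 20:
        return TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else "")
    return ONES[n]

def ban(letter, limit):
    """All ban numbers for the given letter below the given limit."""
    return [n for n in range(1, limit) if letter not in spell(n)]

print(ban("e", 100))  # eban numbers below 100: [2, 4, 6, 30, 32, ...]
print(ban("o", 30))   # oban numbers below 30: [3, 5, 6, 7, 8, 9, ...]
```

The printed prefixes match the sequences listed above; extending the spelling routine past one million would require choosing a convention for the -illion names, which is exactly the ambiguity the opening paragraph mentions.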
Ban number
Mathematics
881
1,025,901
https://en.wikipedia.org/wiki/Reynolds%20decomposition
In fluid dynamics and turbulence theory, Reynolds decomposition is a mathematical technique used to separate the expectation value of a quantity from its fluctuations. Decomposition For example, for a quantity $u$ the decomposition would be $u = \bar{u} + u'$, where $\bar{u}$ denotes the expectation value of $u$ (often called the steady component, or the time, spatial or ensemble average), and $u'$ denotes the deviations from the expectation value (the fluctuations). The fluctuations are defined as the expectation value subtracted from the quantity, $u' = u - \bar{u}$, such that their time average equals zero, $\overline{u'} = 0$. The expected value, $\bar{u}$, is often found from an ensemble average, which is an average taken over multiple experiments under identical conditions. The expected value is also sometimes denoted $\langle u \rangle$, but the over-bar notation is also often seen. Direct numerical simulation, or resolution of the Navier–Stokes equations completely in space and time, is only possible on extremely fine computational grids and small time steps even when Reynolds numbers are low, and becomes prohibitively computationally expensive at high Reynolds numbers. Due to computational constraints, simplifications of the Navier–Stokes equations are useful for parameterizing turbulent motions that are smaller than the computational grid, allowing larger computational domains. Reynolds decomposition allows the simplification of the Navier–Stokes equations by substituting in the sum of the steady component and perturbations to the velocity profile and taking the mean value. The resulting equation contains a nonlinear term known as the Reynolds stresses, which gives rise to turbulence. See also Reynolds-averaged Navier–Stokes equations References Turbulence
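The step from the substitution to the Reynolds stresses can be written out explicitly; the following is standard textbook algebra illustrating the averaging described above, not a formula quoted from this article.

```latex
% Substituting u_i = \bar{u}_i + u_i' into the nonlinear term and averaging:
\[
  \overline{u_i u_j}
  = \overline{(\bar{u}_i + u_i')(\bar{u}_j + u_j')}
  = \bar{u}_i\,\bar{u}_j + \overline{u_i'\,u_j'}
\]
% The cross terms vanish because the fluctuations average to zero
% (\overline{u_i'} = 0). The remaining correlation \overline{u_i' u_j'}
% is what appears in the averaged momentum equation as the Reynolds
% stress (multiplied by -\rho), and it is this unclosed term that
% turbulence models must parameterize.
```

This is the closure problem in miniature: averaging produces one new unknown, $\overline{u_i' u_j'}$, without producing a new equation for it.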
Reynolds decomposition
Chemistry
294
9,503,943
https://en.wikipedia.org/wiki/Equine%20nutrition
Equine nutrition is the feeding of horses, ponies, mules, donkeys, and other equines. Correct and balanced nutrition is a critical component of proper horse care. Horses are non-ruminant herbivores of a type known as a "hindgut fermenter." Horses have only one stomach, as do humans. However, unlike humans, they also need to digest plant fiber (largely cellulose) that comes from grass or hay. Ruminants like cattle are foregut fermenters, and digest fiber in plant matter by use of a multi-chambered stomach, whereas horses use microbial fermentation in a part of the digestive system known as the cecum (or caecum) to break down the cellulose. In practical terms, horses prefer to eat small amounts of food steadily throughout the day, as they do in nature when grazing on pasture lands. Although this is not always possible with modern stabling practices and human schedules that favor feeding horses twice a day, it is important to remember the underlying biology of the animal when determining what to feed, how often, and in what quantities. The digestive system of the horse is somewhat delicate. Horses are unable to regurgitate food, except from the esophagus. Thus, if they overeat or eat something poisonous, vomiting is not an option. They also have a long, complex large intestine and a balance of beneficial microbes in their cecum that can be upset by rapid changes in feed. Because of these factors, they are very susceptible to colic, which is a leading cause of death in horses. Therefore, horses require clean, high-quality feed and water, provided at regular intervals, and may become ill if subjected to abrupt changes in their diets. Horses are also sensitive to molds and toxins. For this reason, they must never be fed contaminated fermentable materials such as lawn clippings. Fermented silage or "haylage" is fed to horses in some places; however, contamination or failure of the fermentation process that allows any mold or spoilage may be toxic. The digestive system Horses and other members of the genus Equus have evolved to eat small amounts of the same kind of food all day long. In the wild, horses ate prairie grasses in semi-arid regions and traveled significant distances each day in order to obtain adequate nutrition. Therefore, their digestive system works best with a small but steady flow of food that does not change much from day to day. Chewing and swallowing Digestion begins in the mouth. First, the animal selects pieces of forage and picks up finer foods, such as grain, with sensitive, prehensile lips. The front teeth of the horse, called incisors, nip off forage, and food is ground up for swallowing by the premolars and molars. The esophagus carries food to the stomach. The esophagus enters the stomach at an acute angle, creating a one-way valve, with a powerful sphincter mechanism at the gastroesophageal junction, which is why horses cannot vomit. The esophagus is also the area of the digestive tract where horses may suffer from choke (see Illnesses related to improper feeding below). The stomach and small intestine Horses have a small stomach for their large size, which limits the amount of food that can be taken in at one time. The average-sized horse has a stomach with a capacity of only , and it works best when it contains about .
One reason continuous foraging or several small feedings per day are better than one or two large meals is that the stomach begins to empty when it is two-thirds full, whether the food in the stomach is processed or not. The small intestine is long and holds to . This is the major digestive organ, where 50 to 70 percent of all nutrients are absorbed into the bloodstream. Bile from the liver acts here, combined with enzymes from the pancreas and the small intestine itself. Equids do not have a gall bladder, so bile flows constantly, an adaptation to a slow but steady supply of food, and another reason for providing fodder to horses in several small feedings. The cecum and large intestine The cecum is the first section of the large intestine. It is also known as the "water gut" or "hind gut." It is a blind-ended pouch, about long, that holds to . The small intestine opens into the cecum, and the cellulose plant fiber in the food is fermented by microbes for approximately seven hours. The fermented material leaves the cecum through another orifice and passes to the large colon. The microbes in the cecum produce vitamin K, B-complex vitamins, proteins, and fatty acids. Horses' diets must be changed slowly so that the microbes in the cecum are able to adapt to the different chemical structure of new feedstuffs. Too abrupt a change in diet can cause colic, because new materials are not properly digested. The large colon, small colon, and rectum make up the remainder of the large intestine. The large colon is long and holds up to of semi-liquid matter. Its main purpose is to absorb carbohydrates which were broken down from cellulose in the cecum. Due to its many twists and turns, it is a common place for a type of horse colic called an impaction. The small colon is also long, holds about , is the area where the majority of water is absorbed, and is where fecal balls are formed. The rectum is about one foot long, and acts as a holding chamber for waste, which is then expelled from the body via the anus. Nutrients Like all animals, equines require five main classes of nutrients to survive: water, energy (primarily in the form of fats and carbohydrates), proteins, vitamins, and minerals. Water Water makes up 62–68% of a horse's body weight and is essential for life. Horses can only live a few days without water, becoming dangerously dehydrated if they lose 8–10% of their natural body water. Therefore, it is critically important for horses to have access to a fresh, clean, and adequate supply of water. An average horse drinks of water per day, more in hot weather, when eating dry forage such as hay, or when consuming high levels of salt, potassium, and magnesium. Horses drink less water in cool weather or when on lush pasture, which has a higher water content. When under hard work, or if a mare is lactating, water requirements may be as much as four times greater than normal. In the winter, snow is not a sufficient source of water for horses. Though they need a great deal of water, horses spend very little time drinking; usually 1–8 minutes a day, spread out over 2–8 episodes. Water plays an important part in digestion. The forages and grains horses eat are mixed with saliva in the mouth to make a moist bolus that can be easily swallowed. To accomplish this, horses produce up to 85 lb. of saliva per day. Energy nutrients and protein Nutritional sources of energy are fat and carbohydrates. Protein is a critical building block for muscles and other tissues.
Horses that are heavily exercised, growing, pregnant or lactating need increased energy and protein in their diet. However, if a horse has too much energy in its diet and not enough exercise, it can become too high-spirited and difficult to handle. Fat exists in low levels in plants and can be added to increase the energy density of the diet. Per kilogram, fat provides 2.25 times the energy of any carbohydrate source. Because equids have no gall bladder to store large quantities of bile, which flows continuously from the liver directly into the small intestine, fat, though a necessary nutrient, is difficult for them to digest and utilize in large quantities. However, they are able to digest a greater amount of fat than can cattle. Horses benefit from up to 8% fat in their diets, but more does not always provide a visible benefit. Horses can only have 15–20% fat in their diet without the risk of developing diarrhea. Carbohydrates, the main energy source in most rations, are usually fed in the form of hay, grass, and grain. Soluble carbohydrates such as starches and sugars are readily broken down to glucose in the small intestine and absorbed. Insoluble carbohydrates, such as fiber (cellulose), are not digested by the horse's own enzymes, but are fermented by microbes in the cecum and large colon, which break them down and release their energy sources, volatile fatty acids. Soluble carbohydrates are found in nearly every feed source; corn has the highest amount, then barley and oats. Forages normally have only 6–8% soluble carbohydrate, but under certain conditions can have up to 30%. Sudden ingestion of large amounts of starch or high-sugar feeds can cause, at the least, indigestion colic and, at the worst, potentially fatal colitis or laminitis. Protein is used in all parts of the body, especially muscle, blood, hormones, hooves, and hair cells. The main building blocks of protein are amino acids. Alfalfa and other legumes in hay are good sources of protein that can be easily added to the diet. Most adult horses only require 8–10% protein in their diet; however, higher protein is important for lactating mares and young growing foals. Vitamins and minerals Horses that are not subjected to hard work or extreme conditions usually have more than adequate amounts of vitamins in their diet if they are receiving fresh, green, leafy forages. Sometimes a vitamin/mineral supplement is needed when feeding low-quality hay, if a horse is under stress (illness, traveling, showing, racing, and so on), or not eating well. Grain has a different balance of nutrients than forage, and so requires specialized supplementation to prevent an imbalance of vitamins and minerals. Minerals are required for maintenance and function of the skeleton, nerves, and muscles. These include calcium, phosphorus, sodium, potassium, and chloride, and are commonly found in most good-quality feeds. Horses also need trace minerals such as magnesium, selenium, copper, zinc, and iodine. Normally, if adult animals at maintenance levels are consuming fresh hay or are on pasture, they will receive adequate amounts of minerals in their diet, with the exception of sodium chloride (salt), which needs to be provided, preferably free choice. Some pastures are deficient in certain trace minerals, including selenium, zinc, and copper, and in such situations, health problems, including deficiency diseases, may occur if horses' trace mineral intake is not properly supplemented.
Calcium and phosphorus are needed in a specific ratio of between 1:1 and 2:1. Adult horses can tolerate up to a 5:1 ratio, foals no more than 3:1. A total ration with a higher ratio of phosphorus than calcium is to be avoided. Over time, such an imbalance will lead to a number of possible bone-related problems such as osteoporosis. Foals and young growing horses through their first three to four years have special nutritional needs and require feeds that are balanced with a proper calcium:phosphorus ratio and other trace minerals. A number of skeletal problems may occur in young animals with an unbalanced diet. Hard work increases the need for minerals; sweating depletes sodium, potassium, and chloride from the horse's system. Therefore, supplementation with electrolytes may be required for horses in intense training, especially in hot weather. Types of feed Equids can consume approximately 2–2.5% of their body weight in dry feed each day. Therefore, an adult horse could eat up to of food. Foals less than six months of age eat 2–4% of their weight each day. Solid feeds are placed into three categories: forages (such as hay and grass), concentrates (including grain or pelleted rations), and supplements (such as prepared vitamin or mineral pellets). Equine nutritionists recommend that 50% or more of the animal's diet by weight should be forages. If a horse is working hard and requires more energy, the use of grain is increased and the percentage of forage decreased so that the horse obtains the energy content it needs for the work it is performing. However, the forage amount should never go below 1% of the horse's body weight per day. Forages Forages, also known as "roughage," are plant materials classified as legumes or grasses, found in pastures or in hay. Often, pastures and hayfields will contain a blend of both grasses and legumes. Nutrients available in forage vary greatly with the maturity of the grasses, fertilization, management, and environmental conditions. Grasses are tolerant of a wide range of conditions and contain most necessary nutrients. Some commonly used grasses include timothy, brome, fescue, coastal Bermuda, orchard grass, and Kentucky bluegrass. Another type of forage sometimes provided to horses is beet pulp, a byproduct left over from the processing of sugar beets, which is high in energy as well as fiber. Legumes such as clover or alfalfa are usually higher in protein, calcium, and energy than grasses. However, they require warm weather and good soil to produce the best nutrients. Legume hays are generally higher in protein than the grass hays. They are also higher in minerals, particularly calcium, but have an incorrect ratio of calcium to phosphorus. Because they are high in protein, they are very desirable for growing horses or those subjected to very hard work, but the calcium:phosphorus ratio must be balanced by other feeds to prevent bone abnormalities. Hay is a dried mixture of grasses and legumes. It is cut in the field and then dried and baled for storage. Hay is most nutritious when it is cut early on, before the seed heads are fully mature and before the stems of the plants become tough and thick. Hay that is very green can be a good indicator of the amount of nutrients in it; however, color is not the sole indicator of quality; smell and texture are also important. Hay can be analyzed by many laboratories, which is the most reliable way to determine the nutritional values it contains. Hay, particularly alfalfa, is sometimes compressed into pellets or cubes.
Processed hay can be of more consistent quality and is more convenient to ship and to store. It is also easily obtained in areas that may be suffering localized hay shortages. However, these more concentrated forms can be overfed, and horses are somewhat more prone to choke on them. On the other hand, hay pellets and cubes can be soaked until they break apart into a pulp or thick slurry, and in this state are a very useful source of food for horses with tooth problems such as dental disease, tooth loss due to age, or structural anomalies. Haylage, also known as round bale silage, is grass sealed in airtight plastic bags, a form of forage that is frequently fed in the United Kingdom and continental Europe but is not often seen in the United States. Because haylage is a type of silage, hay stored in this fashion must remain completely sealed in plastic, as any holes or tears can stop the preservation properties of fermentation and lead to mold or spoilage. Rodents chewing through the plastic can also spoil the hay, introducing contamination to the bale. If a rodent dies inside the plastic, the botulism toxins subsequently released can contaminate the entire bale. Sometimes, straw or chaff is fed to animals. However, this is roughage with little nutritional value other than providing fiber. It is sometimes used as a filler; it can slow down horses who eat their grain too fast, or it can provide additional fiber when the horse must meet most nutritional needs via concentrated feeds. Straw is more often used as a bedding in stalls to absorb wastes. Concentrates Grains Whole or crushed grains are the most common form of concentrated feed, sometimes referred to generically as "oats" or "corn" even if those grains are not present; they are also sometimes called "straights" in the UK. Oats are the most popular grain for horses. Oats have a lower digestible energy value and higher fiber content than most other grains. They form a loose mass in the stomach that is well suited to the equine digestive system. They are also more palatable and digestible than other grains. Corn (USA), or maize (British English), is the second most palatable grain. It provides twice as much digestible energy as an equal volume of oats and is low in fiber. Because of these characteristics, it is easy to over-feed, causing obesity, so horses are seldom fed corn all by itself. Nutritionists caution that moldy corn is poisonous if fed to horses. Barley is also fed to horses, but needs to be processed to crack the seed hull and allow easier digestibility. It is frequently fed in combination with oats and corn, a mix informally referred to by the acronym "COB" (for Corn, Oats and Barley). Wheat is generally not used as a concentrate. However, wheat bran is sometimes added to the diet of a horse for supplemental nutrition, usually moistened and in the form of a bran mash. Wheat bran is high in phosphorus, so it must be fed carefully so that it does not cause an imbalance in the Ca:P ratio of a ration. Once touted for a laxative effect, this use of bran is now considered unnecessary, as horses, unlike humans, obtain sufficient fiber in their diets from other sources. Mixes and pellets Many feed manufacturers combine various grains and add additional vitamin and mineral supplements to create a complete premixed feed that is easy for owners to feed and of predictable nutritional quality. Some of these prepared feeds are manufactured in pelleted form; others retain the grains in their original form.
In many cases molasses is used as a binder to keep down dust and for increased palatability. Grain mixes with added molasses are usually called "sweet feed" in the United States and "coarse mix" in the United Kingdom. Pelleted or extruded feeds (sometimes referred to as "nuts" in the UK) may be easier to chew and result in less wasted feed. Horses generally eat pellets as easily as grain. However, pellets are also more expensive, and even "complete" rations do not eliminate the necessity for forage. Supplements The average modern horse on good hay or pasture with light work usually does not need supplements; however, horses subjected to stress due to age, intensive athletic work, or reproduction may need additional nutrition. Extra fat and protein are sometimes added to the horse's diet, along with vitamin and mineral supplements. There are hundreds, if not thousands, of commercially prepared vitamin and mineral supplements on the market, many tailored to horses with specialized needs. Soybean meal is a common protein supplement, and averages about 44% crude protein. The protein in soybean meal is high-quality, with the proper ratio of dietary essential amino acids for equids. Cottonseed meal, linseed meal, and peanut meal are also used, but are not as common. Feeding practices Most horses need only quality forage, water, and a salt or mineral block. Grain or other concentrates are often not necessary. But when grain or other concentrates are fed, quantities must be carefully monitored. To do so, horse feed is measured by weight, not volume. For example, of oats has a different volume than of corn. When continuous access to feed is not possible, it is more consistent with natural feeding behavior to provide three small feedings per day instead of one or two large ones. However, even two daily feedings are preferable to only one. To gauge the amount to feed, a weight tape can be used to provide a reasonably accurate estimate of a horse's weight. The tape measures the circumference of the horse's barrel, just behind the withers and elbows, and is calibrated to convert circumference into approximate weight. Actual amounts fed vary by the size of the horse, the age of the horse, the climate, and the work to which the animal is put. In addition, genetic factors play a role. Some animals are naturally easy keepers (good doers), which means that they can thrive on small amounts of food and are prone to obesity and other health problems if overfed. Others are hard keepers (poor doers), meaning that they are prone to be thin and require considerably more food to maintain a healthy weight. Veterinarians are usually a good source for recommendations on appropriate types and amounts of feed for a specific horse. Animal nutritionists are also trained in how to develop equine rations and make recommendations. There are also numerous books written on the topic. Feed manufacturers usually offer very specific guidelines for how to select and properly feed products from their company, and in the United States, the local office of the Cooperative Extension Service can provide educational materials and expert recommendations. Feeding forages Equids always require forage. When possible, nutritionists recommend that it be available at all times, at least when doing so does not overfeed the animal and lead to obesity. It is safe to feed a ration that is 100% forage (along with water and supplemental salt), and any feed ration should be at least 50% forage; a rough calculation based on these rules of thumb is sketched below.
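The following small sketch (illustrative only, not a feeding prescription from this article) combines the quantitative rules of thumb stated above: total dry feed of roughly 2–2.5% of body weight per day, at least half the ration as forage, and never less than 1% of body weight as forage.

```python
# Rough daily ration bounds from the rules of thumb in the text above:
# 2-2.5% of body weight as total dry feed, at least 50% of the ration as
# forage, and an absolute forage floor of 1% of body weight per day.
def ration_bounds(body_weight_kg):
    total_min = 0.020 * body_weight_kg     # low end of daily dry-feed intake
    total_max = 0.025 * body_weight_kg     # high end of daily dry-feed intake
    forage_floor = 0.010 * body_weight_kg  # absolute minimum forage per day
    return {
        "total_feed_kg": (round(total_min, 1), round(total_max, 1)),
        # forage must satisfy BOTH the 50%-of-ration rule and the 1% floor
        "min_forage_kg": round(max(forage_floor, 0.5 * total_min), 1),
    }

# Example: a 500 kg adult horse.
print(ration_bounds(500))
# {'total_feed_kg': (10.0, 12.5), 'min_forage_kg': 5.0}
```

As the text notes, actual rations vary with the individual animal, climate, and workload, and should be set with a veterinarian or equine nutritionist.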
Hay with alfalfa or other legumes has more concentrated nutrition and so is fed in smaller amounts than grass hay, though many hays contain a mixture of both types of plant. When beet pulp is fed, the ration is usually soaked in water for three to four hours prior to feeding in order to make it more palatable and to minimize the risk of choke and other problems. It is usually soaked in a proportion of one part beet pulp to two parts water. Beet pulp is usually fed in addition to hay, but occasionally it is a replacement for hay when fed to very old horses who can no longer chew properly. It is available in both pelleted and shredded form; pellets must be soaked significantly longer than shredded beet pulp. Some pelleted rations are designed to be a "complete" feed that contains both hay and grain, meeting all the horse's nutritional needs. However, even these rations should have some hay or pasture provided, at a minimum of about a half-pound of forage for every hundred pounds of the horse's body weight, in order to keep the digestive system functioning properly and to meet the horse's urge to graze. When horses graze under natural conditions, they may spend up to 18 hours per day doing so. However, on modern irrigated pastures, they may have their nutritional needs for forage met in as little as three hours per day, depending on the quality of grass available. Recent studies address the level of various non-structural carbohydrates (NSC), such as fructan, in forages. Too high an NSC level causes difficulties for animals prone to laminitis or equine polysaccharide storage myopathy (EPSM). NSC levels cannot be determined by looking at forage, but hay and pasture grasses can be tested for them. Feeding concentrates Concentrates, when fed, are recommended to be provided in quantities no greater than 1% of a horse's body weight per day, and preferably in two or more feedings of no more than 0.5% of body weight each. If a ration needs to contain a higher percentage of concentrates, such as that of a racehorse, bulky grains such as oats should be used as much as possible; a loose mass of feed helps prevent impaction colic. Peptic ulcers are linked to an overly high concentration of grain in the diet, and are particularly noticed in modern racehorses, with some studies showing such ulcers affecting up to 90% of racehorses. In general, the portion of the ration that should be grain or other concentrated feed is 0–10% for mature idle horses and 20–70% for horses at work, depending on age, intensity of activity, and energy requirements. Concentrates should not be fed to horses within one hour before or after a heavy workout. Concentrates also need to be adjusted to the level of performance. Not only can excess grain and inadequate exercise lead to behavior problems, they may also trigger serious health problems, including equine exertional rhabdomyolysis, or "tying up," in horses prone to the condition. Other possible risks are the various forms of horse colic. A relatively uncommon but usually fatal concern is colitis-X, which may be triggered by excess protein and a lack of forage in the diet that allows for the multiplication of clostridial organisms, and which is exacerbated by stress. Access to water Horses normally require free access to all the fresh, clean water they want, and to avoid dehydration should not be kept from water for longer than four hours at any one time. However, water may need to be temporarily limited in quantity when a horse is very hot after a heavy workout.
As long as a hot horse continues to work, it can drink its fill at periodic intervals, provided that common sense is used and an overheated horse is not forced to drink from extremely cold water sources. But when the workout is over, a horse needs to be cooled out and walked for 30–90 minutes before it can be allowed all the water it wants at one time. However, dehydration is also a concern, so some water needs to be offered during the cooling-off process. A hot horse will properly rehydrate while cooling off if offered a few swallows of water every three to five minutes while being walked. Sometimes the thirst mechanism does not immediately kick in following a heavy workout, which is another reason to offer periodic refills of water throughout the cooling-down period. Even a slightly dehydrated horse is at higher risk of developing impaction colic. Additionally, dehydration can lead to weight loss because the horse cannot produce adequate amounts of saliva, thus decreasing the amount of feed and dry forage consumed. It is therefore especially important for horse owners to encourage their horses to drink when there is a risk of dehydration: when horses are losing a great deal of water through strenuous work in hot weather, or when they are drinking less than usual due to their natural tendency to reduce water intake in a cold environment. To encourage drinking, owners may add electrolytes to the feed, add flavorings that make the water especially palatable (such as apple juice), or, when it is cold, warm the water so that it is not at a near-freezing temperature. Special feeding issues for ponies Ponies and miniature horses are usually easy keepers and need less feed than full-sized horses. This is not only because they are smaller but also because they evolved under harsher living conditions than horses and use feed more efficiently. Ponies easily become obese from overfeeding and are at high risk for colic and, especially, laminitis. Fresh grass is a particular danger to ponies; they can develop laminitis in as little as one hour of grazing on lush pasture. Incorrect feeding is as much a concern as simple overfeeding. Ponies and miniatures need a diet relatively low in sugar, starch, and calories, but higher in fiber. Miniature horses in particular need fewer calories pound for pound than full-sized horses, are more prone to hyperlipemia, and are at higher risk of developing equine metabolic syndrome. It is important to track the weight of a pony carefully, by use of a weight tape. Forages may be fed by weight, in proportion to the pony's body weight. Forage, along with water and a salt and mineral block, is all most ponies require. If a hard-working pony needs concentrates, a ratio of no more than 30% concentrates to 70% forage is recommended. Concentrates designed for horses, with added vitamins and minerals, will often provide insufficient nutrients at the small serving sizes needed for ponies. Therefore, if a pony requires concentrates, feed and supplements designed specifically for ponies should be used. In the UK, extruded pellets designed for ponies are sometimes called "pony nuts". Special feeding issues for mules and donkeys Like ponies, mules and donkeys are very hardy and generally need less concentrated feed than horses. Mules need less protein than horses and do best on grass hay with a vitamin and mineral supplement. If mules are fed concentrates, they need only about half of what a horse requires.
Like horses, mules require fresh, clean water, but they are less likely to over-drink when hot. Donkeys, like mules, need less protein and more fiber than horses. Although the donkey's gastrointestinal tract has no marked structural differences from that of the horse, donkeys are more efficient at digesting food and thrive on less forage than a similarly sized pony. They need to eat only 1.5% of their body weight per day in dry matter. It is not fully understood why donkeys are such efficient digesters, but it is thought that they may have a different microbial population in the large intestine than do horses, or possibly an increased gut retention time. Donkeys do best when allowed to consume small amounts of food over long periods, as is natural for them in an arid climate. They can meet their nutritional needs on 6 to 7 hours of grazing per day on average dryland pasture that is not stressed by drought. If they are worked long hours or do not have access to pasture, they require hay or a similar dried forage, with no more than a 1:4 ratio of legumes to grass. They also require salt and mineral supplements, and access to clean, fresh water. Like ponies and mules, donkeys in a lush climate are prone to obesity and are at risk of laminitis. Treats Many people like to feed horses special treats such as carrots, sugar cubes, peppermint candies, or specially manufactured horse "cookies." Horses do not need treats, and due to the risk of colic or choke, many horse owners do not allow their horses to be given treats. There are also behavioral issues that some horses may develop if given too many treats, particularly a tendency to bite if hand-fed, and for this reason many horse trainers and riding instructors discourage the practice. However, if treats are allowed, carrots and compressed hay pellets are common, nutritious, and generally not harmful. Apples are also acceptable, though it is best if they are first cut into slices. Horse "cookies" are often specially manufactured from ordinary grains and some added molasses. They generally will not cause nutritional problems when fed in small quantities. However, many types of human foods are potentially dangerous to a horse and should not be fed. These include bread products, meat products, candy, and carbonated or alcoholic beverages. It was once a common practice to give horses a weekly bran mash of wheat bran mixed with warm water and other ingredients, and it is still done regularly in some places. While a warm, soft meal is a treat many horses enjoy, and was once considered helpful for its laxative effect, it is not nutritionally necessary. An old horse with poor teeth may benefit from food softened in water, a mash may help provide extra hydration, and a warm meal may be comforting in cold weather, but horses have far more fiber in their regular diet than do humans, so any assistance from bran is unnecessary. There is also a risk that too much wheat bran may provide excessive phosphorus, unbalancing the diet, and a feed of unusual contents given only once a week could trigger a bout of colic. Feed storage All hay and concentrated feeds must be kept dry and free of mold, rodent feces, and other types of contamination that may cause illness in horses. Feed kept outside or otherwise exposed to moisture can develop mold quite quickly. Due to the fire hazard, hay is often stored in an open shed or under a tarp rather than inside the horse barn itself, but it should be kept under some kind of cover.
Concentrates take up less storage space, are less of a fire hazard, and are usually kept in a barn or enclosed shed. A secure door or latched gate between the animals and any feed storage area is critical. Horses accidentally getting into stored feed and eating too much at one time is a common but preventable way that horses develop colic or laminitis (see Illnesses related to improper feeding below). It is generally not safe to give a horse feed that was contaminated by the remains of a dead animal, as this is a potential source of botulism. Nor is this an uncommon situation: mice and birds can get into poorly stored grain and be trapped, and hay bales sometimes accidentally contain snakes, mice, or other small animals that were caught in the baling machinery during the harvesting process. Feeding behavior Horses can become anxious or stressed if there are long periods of time between meals. They also do best when they are fed on a regular schedule; they are creatures of habit and are easily upset by changes in routine. When horses are in a herd, their behavior is hierarchical; the higher-ranked animals in the herd eat and drink first. Low-status animals, which eat last, may not get enough food, and if there is little available feed, higher-ranking horses may keep lower-ranking ones from eating at all. Therefore, unless a herd is on pasture that meets the nutritional needs of all individuals, it is important to either feed horses separately or spread feed out in separate areas to be sure all animals get roughly equal amounts of food to eat. In some situations where horses are kept together, they may still be placed into separate herds depending on nutritional needs; for example, overweight horses are kept separate from thin horses so that rations may be adjusted accordingly. Horses may also eat in undesirable ways, such as bolting their feed (eating too fast), which can lead to either choke or colic under some circumstances. Dental issues Horses' teeth continually erupt throughout their lives, are worn down as they eat, and can develop uneven wear patterns that interfere with chewing. For this reason, horses need a dental examination at least once a year, and particular care must be paid to the dental needs of older horses. The process of grinding off uneven wear patterns on a horse's teeth is called floating and can be performed by a veterinarian or a specialist in equine dentistry. Illnesses related to improper feeding Colic, choke, and laminitis can be life-threatening when a horse is severely affected, and veterinary care is necessary to properly treat these conditions. Other conditions, while not life-threatening, may have serious implications for the long-term health and soundness of a horse. Colic Horse colic itself is not a disease, but rather a description of symptoms connected to abdominal pain. It can occur due to any number of digestive upsets, from mild bloating due to excess intestinal gas to life-threatening impactions. Colic is most often caused by a change in diet, either a planned change that takes place too quickly or an accidental change, such as a horse getting out of its barn or paddock and ingesting unfamiliar plants. But colic has many other possible triggers, including insufficient water, an irregular feeding schedule, stress, and illness. Because the horse cannot vomit and has a limited capacity to detoxify harmful substances, anything upsetting to the horse must travel all the way through the digestive system to be expelled.
Choke Choke is not as common as colic, but it is nonetheless generally considered a veterinary emergency. The most common cause of choke is horses not chewing their food thoroughly, usually because they eat too quickly, especially if they do not have sufficient access to water, but also sometimes due to dental problems that make chewing painful. It is exceedingly difficult for a horse to expel anything from the esophagus, and immediate treatment is often required. Unlike choking in humans, choke in horses does not cut off respiration. Laminitis Horses are also susceptible to laminitis, a disease of the lamina of the hoof. Laminitis has many causes, but the most common is a sugar and starch overload from a horse overeating certain types of food, particularly too much pasture grass high in fructan in early spring and late fall, or excessive quantities of grain. Growth disorders Young horses that are overfed or are fed a diet with an improper calcium:phosphorus ratio over time may develop a number of growth and orthopedic disorders, including osteochondrosis (OCD), angular limb deformities (ALD), and several conditions under the umbrella of the developmental orthopedic diseases (DOD). If not properly treated, the damage can be permanent. However, these disorders can be treated if caught in time, given proper veterinary care and the correction of any improper feeding practices. Young horses being fed for rapid growth in order to be shown or sold as yearlings are at particularly high risk. Adult horses with an improper diet may also develop a range of metabolic problems. Heaves Moldy or dusty hay fed to horses is the most common cause of recurrent airway obstruction, also known as COPD or "heaves." This is a chronic condition of horses involving an allergic bronchitis characterized by wheezing, coughing, and labored breathing. "Tying up" Equine exertional rhabdomyolysis, also known as "tying up" or azoturia, is a condition to which only some horses are susceptible; most cases are linked to a genetic mutation. In horses prone to the condition, it usually occurs when a day of rest on a full grain ration is followed by work the next day. This pattern of clinical signs led to the archaic nickname "Monday morning sickness". The condition may also be related to electrolyte imbalance. Proper diet management may help minimize the risk of an attack. See also Easy keeper (US) Good doer (UK) Fodder Forage Geriatric horses Grain Hard keeper (US) Poor doer (UK) Hay Henneke horse body condition scoring system Horse body mass Horse tongue Horse care List of plants poisonous to equines Footnotes and other references "Horse Nutrition - Table of Contents." Bulletin 762-00, Ohio State University. Web site accessed February 9, 2007. Mowrey, Robert A. "Horse Feeding Management - Nutrient Requirements for Horses." North Carolina Cooperative Extension Center (PDF). Web site accessed July 4, 2009. Horse management Animal nutrition
Equine nutrition
Biology
8,167
25,373,946
https://en.wikipedia.org/wiki/ActiveVOS
ActiveVOS is a business process management suite. Business processes are designed using the graphical BPMN 2.0 notation. The process engine implements the WS-BPEL 2.0 standard as well as BPEL4People for processes that require people to perform tasks from a task list. References External links Product page Middleware Workflow applications
ActiveVOS
Technology,Engineering
71
142,440
https://en.wikipedia.org/wiki/Climatology
Climatology (from Greek κλίμα, klima, "slope", and -λογία, -logia) or climate science is the scientific study of Earth's climate, typically defined as weather conditions averaged over a period of at least 30 years. Climate concerns the atmospheric conditions over an extended to indefinite period of time; weather is the condition of the atmosphere over a relatively brief period of time. The main topics of research are the study of climate variability, the mechanisms of climate change, and modern climate change. This field of study is regarded as part of the atmospheric sciences and a subdivision of physical geography, which is one of the Earth sciences. Climatology includes some aspects of oceanography and biogeochemistry. The main methods employed by climatologists are the analysis of observations and the modelling of the physical processes that determine climate. Short-term weather forecasting can be interpreted in terms of knowledge of longer-term phenomena of climate, for instance climatic cycles such as the El Niño–Southern Oscillation (ENSO), the Madden–Julian oscillation (MJO), the North Atlantic oscillation (NAO), the Arctic oscillation (AO), the Pacific decadal oscillation (PDO), and the Interdecadal Pacific Oscillation (IPO). Climate models are used for a variety of purposes, from studying the dynamics of the weather and climate system to predictions of future climate. History The Greeks began the formal study of climate; in fact, the word "climate" is derived from the Greek word klima, meaning "slope", referring to the slope or inclination of the Earth's axis. Arguably the most influential classic text on climate was On Airs, Water and Places, written by Hippocrates about 400 BCE. This work commented on the effect of climate on human health and on cultural differences between Asia and Europe. This idea that climate controls which populations excel depending on their climate, or climatic determinism, remained influential throughout history. The Chinese scientist Shen Kuo (1031–1095) inferred that climates naturally shifted over an enormous span of time, after observing petrified bamboos found underground near Yanzhou (modern Yan'an, Shaanxi province), a dry-climate area unsuitable at that time for the growth of bamboo. The invention of thermometers and barometers during the Scientific Revolution allowed for systematic recordkeeping that began as early as 1640–1642 in England. Early climate researchers include Edmund Halley, who published a map of the trade winds in 1686 after a voyage to the southern hemisphere. Benjamin Franklin (1706–1790) first mapped the course of the Gulf Stream for use in sending mail from North America to Europe. Francis Galton (1822–1911) invented the term anticyclone. Helmut Landsberg (1906–1985) fostered the use of statistical analysis in climatology. During the early 20th century, climatology mostly emphasized the description of regional climates. This descriptive climatology was mainly an applied science, giving farmers and other interested people statistics about what the normal weather was and how great the chances were of extreme events. To do this, climatologists had to define a climate normal, or an average of weather and weather extremes over a period of typically 30 years. While scientists knew of past climate change such as the ice ages, the concept of climate as changing only very gradually was useful for descriptive climatology.
This started to change during the decades that followed, and while the history of climate change science began earlier, climate change only became one of the main topics of study for climatologists during the 1970s and afterward. Subfields Various subtopics of climatology study different aspects of climate, and there are different categorizations of these subtopics. The American Meteorological Society, for instance, identifies descriptive climatology, scientific climatology and applied climatology as the three subcategories of climatology, a categorization based on the complexity and the purpose of the research. Applied climatologists apply their expertise to different industries such as manufacturing and agriculture. Paleoclimatology is the attempt to reconstruct and understand past climates by examining records such as ice cores and tree rings (dendroclimatology). Paleotempestology uses these same records to help determine hurricane frequency over millennia. Historical climatology is the study of climate as related to human history and is thus concerned mainly with the last few thousand years. Boundary-layer climatology concerns exchanges of water, energy and momentum near surfaces. Further identified subtopics are physical climatology, dynamic climatology, tornado climatology, regional climatology, bioclimatology, and synoptic climatology. The study of the hydrological cycle over long time scales is sometimes termed hydroclimatology, in particular when studying the effects of climate change on the water cycle. Methods The study of contemporary climates incorporates meteorological data accumulated over many years, such as records of rainfall, temperature and atmospheric composition. Knowledge of the atmosphere and its dynamics is also embodied in models, either statistical or mathematical, which help by integrating different observations and testing how well they match. Modeling is used for understanding past, present and potential future climates. Climate research is made difficult by the large scale, long time periods, and complex processes which govern climate. Climate is governed by physical principles which can be expressed as differential equations. These equations are coupled and nonlinear, so approximate solutions are obtained by using numerical methods to create global climate models. Climate is sometimes modeled as a stochastic process, but this is generally accepted as an approximation to processes that are otherwise too complicated to analyze. Climate data The collection of long records of climate variables is essential for the study of climate. Climatology deals with the aggregate data that meteorologists have recorded. Scientists use both direct and indirect observations of the climate, from Earth-observing satellites and scientific instrumentation such as a global network of thermometers to prehistoric ice extracted from glaciers. As measuring technology changes over time, records of data often cannot be compared directly. As cities are generally warmer than their surrounding areas, urbanization has made it necessary to constantly correct data for this urban heat island effect. Models Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. They are used for a variety of purposes, from studying the dynamics of the weather and climate system to projections of future climate.
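The simplest kind of model described in the next passage, a zero-dimensional radiative balance that treats the Earth as a single point, can be sketched in a few lines of Python. The solar constant and albedo below are typical assumed values rather than figures taken from this article:

    # Zero-dimensional energy balance: absorbed solar flux = emitted thermal flux.
    S = 1361.0       # solar constant, W/m^2 (assumed typical value)
    albedo = 0.3     # planetary albedo (assumed typical value)
    sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    absorbed = S * (1.0 - albedo) / 4.0       # incoming flux averaged over the sphere
    T_effective = (absorbed / sigma) ** 0.25  # solve sigma * T^4 = absorbed
    print(f"Effective temperature: {T_effective:.1f} K")  # roughly 255 K

The result, roughly 255 K, is well below observed surface temperatures; the difference reflects the greenhouse effect, which more complete models include.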
All climate models balance, or very nearly balance, incoming energy as short-wave (including visible) electromagnetic radiation to the Earth with outgoing energy as long-wave (infrared) electromagnetic radiation from the Earth. Any imbalance results in a change of the average temperature of the Earth. Most climate models include the radiative effects of greenhouse gases such as carbon dioxide. These models predict a trend of increasing surface temperatures, as well as a more rapid increase of temperature at higher latitudes. Models range from relatively simple to complex: a simple radiant heat transfer model treats the Earth as a single point and averages outgoing energy, and can be expanded vertically (radiative–convective models) or horizontally; coupled atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange; and Earth system models further include the biosphere. Additionally, models are available at different resolutions, ranging from more than 100 km down to 1 km. High resolutions in global climate models are computationally very demanding, and only a few such global datasets exist. Examples are ICON or mechanistically downscaled data such as CHELSA (Climatologies at high resolution for the Earth's land surface areas). Topics of research Topics that climatologists study fall into three main categories: climate variability, mechanisms of climatic change, and modern changes of climate. Climatological processes Various factors affect the average state of the atmosphere at a particular location. For instance, midlatitudes have a pronounced seasonal cycle of temperature, whereas tropical regions show little variation of temperature over a year. Another major variable of climate is continentality: the distance to major water bodies such as oceans. Oceans act as a moderating factor, so that land close to them typically has less difference in temperature between winter and summer than areas farther away. The atmosphere interacts with other parts of the climate system, with winds generating ocean currents that transport heat around the globe. Climate classification Classification is an important method of simplifying complicated processes. Different climate classifications have been developed over the centuries, the first ones in Ancient Greece. How climates are classified depends on the application: a wind energy producer will require different information (wind) in a classification than someone more interested in agriculture, for whom precipitation and temperature are more important. The most widely used classification, the Köppen climate classification, was developed during the late nineteenth century and is based on vegetation. It uses monthly data on temperature and precipitation. Climate variability There are different types of variability: recurring patterns of temperature or other climate variables. They are quantified with different indices. Much in the way that the Dow Jones Industrial Average, which is based on the stock prices of 30 companies, is used to represent the fluctuations of stock prices in general, climate indices are used to represent the essential elements of climate. Climate indices are generally devised with the twin objectives of simplicity and completeness, and each index typically represents the status and timing of the climate factor it represents.
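As a sketch of how such an index condenses raw observations into a single standardized series, the following standardizes an invented set of seasonal pressure differences between two stations; real indices such as the NAO apply the same idea to long observational records:

    import statistics

    # Hypothetical seasonal pressure differences between two stations (hPa).
    raw = [12.1, 9.8, 14.3, 8.7, 11.5, 10.2, 13.9, 9.1]

    mean = statistics.mean(raw)
    sd = statistics.stdev(raw)

    # Standardized anomalies: positive values mark seasons above the long-term mean.
    index = [(x - mean) / sd for x in raw]
    print([round(v, 2) for v in index])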
By their very nature, indices are simple, and they combine many details into a generalized, overall description of the atmosphere or ocean which can be used to characterize the factors that affect the global climate system. El Niño–Southern Oscillation (ENSO) is a coupled ocean–atmosphere phenomenon in the Pacific Ocean responsible for much of the global variability of temperature, with a cycle of between two and seven years. The North Atlantic oscillation is a mode of variability that is mainly contained within the lower atmosphere, the troposphere. The stratosphere, the layer of atmosphere above, is also capable of creating its own variability. The Madden–Julian oscillation (MJO) has a cycle of approximately 30 to 60 days. The Interdecadal Pacific oscillation can create changes in the Pacific Ocean and lower atmosphere on decadal time scales. Climate change Climate change occurs when changes in Earth's climate system result in new weather patterns that remain in place for an extended period of time. This duration of time can range from a few decades to millions of years. The climate system receives nearly all of its energy from the sun and also radiates energy to outer space. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determine Earth's energy budget. When the incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling. Climate change also influences the average sea level. Modern climate change is caused largely by human emissions of greenhouse gases from the burning of fossil fuels, which increase global mean surface temperatures. Rising temperature is only one aspect of modern climate change, which also includes observed changes in precipitation, storm tracks and cloudiness. Warmer temperatures are causing further changes in the climate system, such as the widespread melting of glaciers, sea level rise and shifts of flora and fauna. Differences with meteorology In contrast to meteorology, which emphasises short-term weather systems lasting no more than a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes in long-term average weather patterns in relation to atmospheric conditions. Climatologists study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and can help predict future climate change. Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere. Use in weather forecasting A relatively difficult method of forecasting, the analog technique requires remembering a previous weather event that is expected to be mimicked by an upcoming event. What makes it a difficult technique is that there is rarely a perfect analog for an event in the future.
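A minimal sketch of the analog idea, assuming a toy archive of past pressure observations and simple Euclidean matching; all dates and numbers are invented for illustration:

    # Toy analog lookup: find the stored pattern closest to today's observations.
    history = {
        "1998-03-02": [1002.0, 1008.5, 1013.2],
        "2004-11-17": [996.4, 1001.0, 1007.8],
        "2011-06-25": [1015.3, 1012.9, 1010.1],
    }
    today = [997.0, 1002.2, 1008.0]

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    best = min(history, key=lambda day: distance(history[day], today))
    print("Closest analog:", best)  # expected: 2004-11-17

In practice the archive rarely contains a truly close match, which is exactly the difficulty noted above.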
Some refer to this type of forecasting as pattern recognition, which remains a useful method of estimating rainfall over data voids such as oceans, using knowledge of how satellite imagery relates to precipitation rates over land, as well as for forecasting future precipitation amounts and distribution. A variation on this theme, used for medium-range forecasting, is known as teleconnections, in which systems in other locations are used to help determine the location of a system within the surrounding regime. One method of using teleconnections is the use of climate indices such as ENSO-related phenomena. See also Biogeochemistry Climate as complex networks Climatic geomorphology Climate reanalysis Geophysics Tropical cyclone rainfall climatology Urban climatology List of climate scientists List of women climate scientists and activists References Books Further reading Jenny Uglow, "What the Weather Is" (review of Sarah Dry, Waters of the World: The Story of the Scientists Who Unraveled the Mysteries of Our Oceans, Atmosphere, and Ice Sheets and Made the Planet Whole, University of Chicago Press, 2019, 332 pp.), The New York Review of Books, vol. LXVI, no. 20 (19 December 2019), pp. 56–58. External links Climate Science Special Report – U.S. Global Change Research Program KNMI Climate Explorer The Royal Netherlands Meteorological Institute's Climate Explorer graphs climatological relationships of spatial and temporal data. Climatology as a Profession Amer. Inst. of Physics account of the history of the discipline of climatology in the 20th century Atmospheric sciences Climate and weather statistics Natural environment
Climatology
Physics
2,892
1,752,597
https://en.wikipedia.org/wiki/Arbitrary%20code%20execution
In computer security, arbitrary code execution (ACE) is an attacker's ability to run any commands or code of the attacker's choice on a target machine or in a target process. An arbitrary code execution vulnerability is a security flaw in software or hardware allowing arbitrary code execution. A program that is designed to exploit such a vulnerability is called an arbitrary code execution exploit. The ability to trigger arbitrary code execution over a network (especially via a wide-area network such as the Internet) is often referred to as remote code execution (RCE or RCX). Arbitrary code execution signifies that if someone sends a specially designed set of data to a computer, they can make it do whatever they want. Even where a particular such weakness causes no actual problems in the real world, researchers have discussed whether it suggests an inherent tendency of computers to have vulnerabilities that allow unauthorized code execution. Vulnerability types There are a number of classes of vulnerability that can lead to an attacker's ability to execute arbitrary commands or code. For example: memory safety vulnerabilities such as buffer overflows or over-reads; deserialization vulnerabilities; type confusion vulnerabilities; and GNU ldd arbitrary code execution. Methods Arbitrary code execution is commonly achieved through control over the instruction pointer (such as a jump or a branch) of a running process. The instruction pointer points to the next instruction in the process that will be executed. Control over the value of the instruction pointer therefore gives control over which instruction is executed next. In order to execute arbitrary code, many exploits inject code into the process (for example by sending input to it which gets stored in an input buffer in RAM) and use a vulnerability to change the instruction pointer to have it point to the injected code. The injected code will then automatically get executed. This type of attack exploits the fact that most computers (which use a Von Neumann architecture) do not make a general distinction between code and data, so that malicious code can be camouflaged as harmless input data. Many newer CPUs have mechanisms to make this harder, such as a no-execute bit. Combining with privilege escalation On its own, an arbitrary code execution exploit will give the attacker the same privileges as the target process that is vulnerable. For example, if exploiting a flaw in a web browser, an attacker could act as the user, performing actions such as modifying personal computer files or accessing banking information, but would not be able to perform system-level actions (unless the user in question also had that access). To work around this, once an attacker can execute arbitrary code on a target, there is often an attempt at a privilege escalation exploit in order to gain additional control. This may involve the kernel itself or an account such as Administrator, SYSTEM, or root. With or without this enhanced control, exploits have the potential to do severe damage or to turn the computer into a zombie, but privilege escalation also helps to hide the attack from the legitimate administrator of the system. Examples Retrogaming hobbyists have managed to find vulnerabilities in classic video games that allow them to execute arbitrary code, usually using a precise sequence of button inputs in a tool-assisted superplay to cause a buffer overflow, allowing them to write to protected memory.
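Deserialization flaws, one of the vulnerability classes listed above, can be demonstrated compactly in a high-level language. The following minimal sketch uses Python's pickle module, whose own documentation warns that unpickling untrusted data can execute arbitrary code; the command used here is a harmless placeholder:

    import os
    import pickle

    class Payload:
        # pickle calls __reduce__ to learn how to rebuild an object; returning
        # (os.system, (command,)) makes deserialization run that command.
        def __reduce__(self):
            return (os.system, ("echo arbitrary code ran",))

    attacker_bytes = pickle.dumps(Payload())

    # A victim that deserializes untrusted input runs the attacker's command
    # with the privileges of the victim process:
    pickle.loads(attacker_bytes)

The standard defense is never to deserialize untrusted input with such formats, preferring data-only formats such as JSON.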
At Awesome Games Done Quick 2014, a group of speedrunning enthusiasts managed to code and run versions of the games Pong, Snake, and Super Mario Bros. in a copy of Super Mario World by utilizing an out-of-bounds read of a function pointer that pointed to a user-controlled buffer, in order to execute arbitrary code. On June 12, 2018, Bosnian security researcher Jean-Yves Avenard of Mozilla discovered an ACE vulnerability in Windows 10. On May 1, 2018, a security researcher discovered an ACE vulnerability in the 7-Zip file archiver. PHP has been the subject of numerous ACE vulnerabilities. On December 9, 2021, an RCE vulnerability called "Log4Shell" was discovered in the popular logging framework Log4j, affecting many services including iCloud, Minecraft: Java Edition and Steam, and characterized as "the single biggest, most critical vulnerability of the last decade". See also Computer security BlueKeep Follina (security vulnerability) References Further reading Injection exploits
Arbitrary code execution
Technology
866
885,379
https://en.wikipedia.org/wiki/Flying%20shuttle
The flying shuttle is a type of weaving shuttle. It was a pivotal advancement in the mechanisation of weaving during the initial stages of the Industrial Revolution, facilitating the weaving of considerably wider fabrics. Moreover, its mechanical implementation paved the way for the introduction of automatic machine looms. The brainchild of John Kay, the flying shuttle was patented in 1733. Its implementation accelerated the previously manual weaving process and significantly reduced the labour required. Formerly, a broad-cloth loom required a weaver on each side, but with the advent of the flying shuttle a single operator could handle the task. Prior to this breakthrough, the textile industry relied upon the coordination of four spinners to support a single weaver; the widespread adoption of the flying shuttle by the 1750s dramatically exacerbated this labour imbalance, marking a notable shift in textile production dynamics. History The history of this device is difficult to accurately ascertain due to poor documentation at the time. Nonetheless, there are two general schools of thought: the first holds that it appears to have been invented in the region of Languedoc in southern France (one year before its introduction in England) but was destroyed by state cloth inspectors of the rent-seeking Ancien Régime; the second holds that it simply originated where it was industrialized, that is, in England. Operation In a typical frame loom, as used previous to the invention of the flying shuttle, the operator sat with the newly woven cloth before them, using treadles or some other mechanism to raise and lower the heddles, which opened the shed in the warp threads. They then had to reach forward while holding the shuttle in one hand and pass it through the shed; the shuttle carried a bobbin for the weft. The shuttle then had to be caught in the other hand, the shed closed, and the beater pulled in against the fell to push the weft into place. This action (called a "pick") required regularly bending forward over the fabric. More importantly, the coordination between the throwing and catching of the shuttle meant that a single weaver could only weave cloth narrow enough to be reached across. If the loom was for weaving broad cloth, multiple weavers were needed: one on the left side of the shed and one on the right side (and sometimes one to operate the treadles). These two reached across the loom, passing the shuttle back and forth through the shed. The flying shuttle employs a smooth board, called the "race," which runs, side to side, along the front of the beater, forming a track on which the shuttle runs. The lower threads of the shed rest on the track and the shuttle slides over them. At each end of the race there is a box which catches the shuttle at the end of its journey, and which contains a mechanism for propelling the shuttle on its return trip; this mechanism may be yanked into action by the cord from the handheld picking-stick, or fully automated. The shuttle itself has some subtle differences from the older form, especially in automated and powered looms. The ends of the shuttle are often bullet-shaped and metal-capped, and the shuttle generally has rollers to reduce friction.
The weft thread is made to exit from the end of the shuttle rather than the side, and the thread is stored on a pirn (a long, conical, one-ended, non-turning bobbin) to allow it to feed more easily. Finally, the flying shuttle is generally somewhat heavier, so as to have sufficient momentum to carry it all the way through the shed. Social effects The increase in production due to the flying shuttle exceeded the capacity of the spinning industry of the day and prompted the development of powered spinning machines. Beginning with the spinning jenny and the water frame, and ultimately culminating in the spinning mule, which could produce strong, fine thread in the quantities needed, these innovations transformed the textile industry in Great Britain. The flying shuttle was seen as a threat to the livelihood of spinners and weavers, which resulted in an uprising, and Kay's patent was largely ignored. It is often incorrectly written that Kay was attacked and fled to France, but in fact he simply moved there to attempt to rent out his looms, a business model that had failed him in England. The flying shuttle also introduced a new source of injuries to the weaving process; if deflected from its path, it could be shot clear of the machine, potentially striking and injuring workers. Turn-of-the-century injury reports abound with cases in which eyes were lost or other injuries sustained, and on several occasions (for example, an extended exchange in 1901) the British House of Commons was moved to take up the issue of installing guards and other contrivances to reduce these injuries. Obsolescence The flying shuttle dominated commercial weaving through the middle of the twentieth century. However, by that time, other systems had begun to replace it. The heavy shuttle was noisy and energy-inefficient (since the energy used to throw it was largely lost in the catching); also, its inertia limited the speed of the loom. Projectile and rapier looms eliminated the need to take the bobbin/pirn of thread through the shed; later, air- and water-jet looms reduced the weight of moving parts further. Flying shuttle looms are still used for some purposes, and old models remain in use. References Weaving equipment Industrial Revolution
Flying shuttle
Engineering
1,138
8,098,406
https://en.wikipedia.org/wiki/ProRec
The ProRec initiative of 1996 was a network of national non-profit organisations (the "ProRec centres"). The initiative was a consequence of the conclusions of the Concerted Action MEDIREC (1994-1995) regarding the reasons why Electronic Health Record (EHR) systems were not more widely used anywhere in the European Union. As part of the Lisbon Declaration, suggestions were made to remedy this situation. The ProRec initiative is supported by the DG Information Society of the European Union, which backed it with the ProRec Support Action (1996-1998) and the WIDENET Accompanying Measure (2000-2003). The goal of the initiative is to build awareness of, and point out, the limitations, shortcomings and obstacles on the way towards the widespread development, implementation and use of quality Electronic Health Records (EHRs). Especially significant for implementing Electronic Health Record systems is the ability to communicate and interoperate. See also CEN/TC 251 EHRcom European Institute for Health Records (EuroRec) European Health Telematics Association (EHTEL) European Health Telematics Observatory (EHTO) Health Informatics Service Architecture (HISA) External links ProRec-BE ProRec-RO Electronic health records Information technology organizations based in Europe
ProRec
Technology
271
23,905,128
https://en.wikipedia.org/wiki/C11H8N2
The molecular formula C11H8N2 (molar mass: 168.19 g/mol, exact mass: 168.0687 u) may refer to: β-Carboline (9H-pyrido[3,4-b]indole), or norharmane γ-Carboline
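The quoted molar mass can be checked from standard atomic weights; a minimal sketch:

    # Verify the molar mass of C11H8N2 from standard atomic weights (g/mol).
    atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007}
    composition = {"C": 11, "H": 8, "N": 2}

    molar_mass = sum(atomic_weight[el] * n for el, n in composition.items())
    print(f"{molar_mass:.2f} g/mol")  # prints 168.20, consistent with the value above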
C11H8N2
Chemistry
85
5,102,885
https://en.wikipedia.org/wiki/Isodynamic%20point
In Euclidean geometry, the isodynamic points of a triangle are points associated with the triangle, with the properties that an inversion centered at one of these points transforms the given triangle into an equilateral triangle, and that the distances from the isodynamic point to the triangle vertices are inversely proportional to the opposite side lengths of the triangle. Triangles that are similar to each other have isodynamic points in corresponding locations in the plane, so the isodynamic points are triangle centers; unlike other triangle centers, the isodynamic points are also invariant under Möbius transformations. A triangle that is itself equilateral has a unique isodynamic point, at its centroid (as well as its orthocenter, its incenter, and its circumcenter, which are concurrent); every non-equilateral triangle has two isodynamic points. Isodynamic points were first studied and named by Joseph Neuberg. Distance ratios The isodynamic points were originally defined from certain equalities of ratios (or equivalently of products) of distances between pairs of points. If S and S′ are the isodynamic points of a triangle ABC, then the three products of distances SA·BC = SB·CA = SC·AB are equal. The analogous equalities also hold for S′. Equivalently to the product formula, the distances SA, SB, and SC are inversely proportional to the corresponding triangle side lengths BC, CA, and AB. S and S′ are the common intersection points of the three circles of Apollonius associated with triangle ABC, the three circles that each pass through one vertex of the triangle and maintain a constant ratio of distances to the other two vertices. Hence, line SS′ is the common radical axis for each of the three pairs of circles of Apollonius. The perpendicular bisector of line segment SS′ is the Lemoine line, which contains the three centers of the circles of Apollonius. Transformations The isodynamic points S and S′ of a triangle ABC may also be defined by their properties with respect to transformations of the plane, and particularly with respect to inversions and Möbius transformations (products of multiple inversions). Inversion of the triangle ABC with respect to an isodynamic point transforms the original triangle into an equilateral triangle. Inversion with respect to the circumcircle of triangle ABC leaves the triangle invariant but transforms one isodynamic point into the other one. More generally, the isodynamic points are equivariant under Möbius transformations: the unordered pair of isodynamic points of a transformation of ABC is equal to the same transformation applied to the pair {S, S′}. The individual isodynamic points are fixed by Möbius transformations that map the interior of the circumcircle of ABC to the interior of the circumcircle of the transformed triangle, and swapped by transformations that exchange the interior and exterior of the circumcircle. Angles As well as being the intersections of the circles of Apollonius, each isodynamic point is the intersection point of another triple of circles. The first isodynamic point is the intersection of three circles through the pairs of points A and B, B and C, and C and A, where each of these circles intersects the circumcircle of triangle ABC to form a lens with apex angle 2π/3. Similarly, the second isodynamic point is the intersection of three circles that intersect the circumcircle to form lenses with apex angle π/3.
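The distance relation above can be checked numerically by computing the first isodynamic point from its trilinear coordinates (given under Construction below) and confirming that the three products agree; the example triangle is arbitrary:

    import math

    # Arbitrary example triangle.
    A = (0.0, 0.0); B = (4.0, 0.0); C = (1.0, 3.0)

    def dist(P, Q):
        return math.hypot(P[0] - Q[0], P[1] - Q[1])

    a, b, c = dist(B, C), dist(C, A), dist(A, B)  # sides opposite A, B, C

    # Interior angles from the law of cosines.
    angA = math.acos((b*b + c*c - a*a) / (2*b*c))
    angB = math.acos((c*c + a*a - b*b) / (2*c*a))
    angC = math.pi - angA - angB

    # First isodynamic point: trilinears sin(A+pi/3) : sin(B+pi/3) : sin(C+pi/3),
    # converted to barycentric weights (multiply each by the opposite side length).
    w = [a * math.sin(angA + math.pi/3),
         b * math.sin(angB + math.pi/3),
         c * math.sin(angC + math.pi/3)]
    s = sum(w)
    S = ((w[0]*A[0] + w[1]*B[0] + w[2]*C[0]) / s,
         (w[0]*A[1] + w[1]*B[1] + w[2]*C[1]) / s)

    # The three products SA*BC, SB*CA, SC*AB should print as equal numbers.
    print(dist(S, A) * a, dist(S, B) * b, dist(S, C) * c)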
The angles formed by the first isodynamic point S with the triangle vertices satisfy the equations ∠ASB = ∠ACB + π/3, ∠BSC = ∠BAC + π/3, and ∠CSA = ∠CBA + π/3. Analogously, the angles formed by the second isodynamic point S′ satisfy the equations ∠AS′B = ∠ACB − π/3, ∠BS′C = ∠BAC − π/3, and ∠CS′A = ∠CBA − π/3. The pedal triangle of an isodynamic point S, the triangle formed by dropping perpendiculars from S to each of the three sides of triangle ABC, is equilateral, as is the triangle formed by reflecting S across each side of the triangle. Among all the equilateral triangles inscribed in triangle ABC, the pedal triangle of the first isodynamic point is the one with minimum area. Additional properties The isodynamic points are the isogonal conjugates of the two Fermat points of triangle ABC, and vice versa. The Neuberg cubic contains both of the isodynamic points. If a circle is partitioned into three arcs, the first isodynamic point of the arc endpoints is the unique point inside the circle with the property that each of the three arcs is equally likely to be the first arc reached by a Brownian motion starting at that point. That is, the isodynamic point is the point for which the harmonic measure of the three arcs is equal. Given a univariate polynomial whose zeros are the vertices of a triangle in the complex plane, the isodynamic points of that triangle are the zeros of a certain polynomial formed from the given polynomial and its derivatives, a constant multiple of a discriminant-like expression whose zeros are invariant under Möbius transformations; this construction generalizes isodynamic points to polynomials of higher degree. Equivalently, the generalized isodynamic points can be described as critical values of the expression that appears in the relaxed Newton's method. A similar construction exists for rational functions instead of polynomials. Construction The circle of Apollonius through vertex A of triangle ABC may be constructed by finding the two (interior and exterior) bisectors of the angle formed at vertex A by lines AB and AC, and intersecting these bisector lines with line BC. The line segment between these two intersection points is the diameter of the circle of Apollonius. The isodynamic points may be found by constructing two of these circles and finding their two intersection points. Another compass and straight-edge construction involves finding the reflection A′ of vertex A across line BC (the intersection of circles centered at B and at C and passing through A), and constructing the apex A″ of an equilateral triangle erected inwards on side BC (the intersection of two circles through B and C, each having BC as its radius). The line A′A″ crosses the similarly constructed lines B′B″ and C′C″ at the first isodynamic point. The second isodynamic point may be constructed similarly, but with the equilateral triangles erected outwards rather than inwards. Alternatively, the position of the first isodynamic point may be calculated from its trilinear coordinates, which are sin(A + π/3) : sin(B + π/3) : sin(C + π/3). The second isodynamic point uses trilinear coordinates given by a similar formula with −π/3 in place of π/3. Notes References The definition of isodynamic points is in a footnote on page 204. The discussion of isodynamic points is on pp. 138–139. Rigby calls them "Napoleon points", but that name more commonly refers to a different triangle center, the point of concurrence between the lines connecting the vertices of Napoleon's equilateral triangle with the opposite vertices of the given triangle. See especially p. 498.
External links Isodynamic points X(15) and X(16) in the Encyclopedia of Triangle Centers, by Clark Kimberling Triangle centers
Isodynamic point
Physics,Mathematics
1,430
37,135,322
https://en.wikipedia.org/wiki/2012%20United%20Kingdom%20meteoroid
The 2012 UK meteoroid was an object that entered the atmosphere above the United Kingdom on Friday, 21 September 2012, at around 11 pm. Many news agencies across the UK reported the event. Overview Several theories were advanced as to the origin of the sightings, ranging from a meteoroid to a UFO. Initially, the most prominent theory was that it was an old artificial satellite (i.e. a large piece of space junk) re-entering the atmosphere. However, later analysis showed that it was highly unlikely to be space junk: it travelled too fast for re-entering debris, though towards the slow end of the range of possible meteor speeds, and, in addition, it traversed the sky from east to west, while almost all satellites orbit from west to east or from north to south. According to Finnish mathematician Esko Lyytinen, the meteor was captured by Earth's gravity and entered the atmosphere once again above the United States and Canada 155 minutes later. If confirmed, this would classify it as an Earth-grazing meteor. See also List of meteor air bursts Potentially hazardous object Near-Earth object Impact event Cyrillids References Phil Williams (January 2015) "The Meteoric Earth-Grazing Fireball of September 2012" Liverpool Astronomical Society Monthly Newsletter (January 2015, pp. 5–9) Meteoroids Modern Earth impact events 21st-century astronomical events
2012 United Kingdom meteoroid
Astronomy
279
9,313,361
https://en.wikipedia.org/wiki/NASBA%20%28molecular%20biology%29
Nucleic acid sequence-based amplification, commonly referred to as NASBA, is a method in molecular biology which is used to produce multiple copies of single-stranded RNA. NASBA is a two-step process that takes RNA, anneals specially designed primers to it, and then utilizes an enzyme cocktail to amplify it. Background Nucleic acid amplification is a technique used to produce several copies of a specific segment of RNA or DNA. Amplified RNA and DNA can be used for a variety of applications, such as genotyping, sequencing, and the detection of bacteria or viruses. There are two different types of amplification, non-isothermal and isothermal. Non-isothermal amplification produces multiple copies of RNA/DNA through reiterative cycling between different temperatures. Isothermal amplification produces multiple copies of RNA/DNA at a constant reaction temperature. NASBA takes single-stranded RNA, anneals primers to it at 65°C, and then amplifies it at 41°C to produce multiple copies of single-stranded RNA. In order for successful amplification to occur, an enzyme cocktail containing avian myeloblastosis virus reverse transcriptase (AMV-RT), RNase H, and RNA polymerase is used. AMV-RT synthesizes a complementary DNA strand (cDNA) from the RNA template once the primer is annealed. RNase H then degrades the RNA template, and the other primer binds to the cDNA to form double-stranded DNA, which RNA polymerase uses to synthesize copies of RNA. One key aspect of NASBA is that the starting material and end product are always single-stranded RNA. That said, NASBA can be used to amplify DNA, but the DNA must first be transcribed into RNA in order for successful amplification to occur. Loop-mediated isothermal amplification (LAMP) is another isothermal amplification technique. History NASBA was developed by J Compton in 1991, who defined it as "a primer-dependent technology that can be used for the continuous amplification of nucleic acids in a single mixture at one temperature". Immediately after the invention of NASBA it was used for the rapid diagnosis and quantification of HIV-1 in patient sera. Although RNA can also be amplified by PCR using a reverse transcriptase (in order to synthesize a complementary DNA strand as a template), NASBA's main advantage is that it works under isothermal conditions – usually at a constant temperature of 41 °C, or at two different temperatures, depending on the primers and enzymes used. Even when two different temperatures are applied, the process is still considered isothermal, because it does not cycle back and forth between those temperatures. NASBA can be used in medical diagnostics as an alternative to PCR that is quicker and more sensitive in some circumstances. Procedure Explained briefly, NASBA works as follows: The RNA template is added to the reaction mixture, and the first primer, carrying the T7 promoter region on its 5' end, attaches to its complementary site at the 3' end of the template. Reverse transcriptase synthesizes the opposite, complementary DNA strand, extending the 3' end of the primer and moving upstream along the RNA template. RNase H destroys the RNA template of the DNA-RNA hybrid (RNase H destroys RNA only in RNA-DNA hybrids, not single-stranded RNA). The second primer attaches to the 5' end of the (antisense) DNA strand. Reverse transcriptase again synthesizes another DNA strand from the attached primer, resulting in double-stranded DNA. T7 RNA polymerase binds to the promoter region on the double strand.
Since T7 RNA polymerase can only transcribe in the 3' to 5' direction, the sense DNA strand is transcribed and an antisense RNA is produced. This is repeated, and the polymerase continuously produces complementary RNA strands of this template, which results in amplification. Now a cyclic phase can begin, similar to the previous steps. Here, however, the second primer first binds to the (-)RNA. The reverse transcriptase then produces a (+)cDNA/(-)RNA duplex. RNase H again degrades the RNA, and the first primer binds to the now single-stranded (+)cDNA. The reverse transcriptase then produces the complementary (-)DNA, creating a dsDNA duplex. Exactly as in step 6, the T7 polymerase binds to the promoter region to produce (-)RNA, and the cycle is complete. Clinical applications The NASBA technique has been used to develop rapid diagnostic tests for several pathogenic viruses with single-stranded RNA genomes, e.g. influenza A, Zika virus, foot-and-mouth disease virus, severe acute respiratory syndrome (SARS)-associated coronavirus and human bocavirus (HBoV), and also for parasites such as Trypanosoma brucei. Recently, NASBA reactions with fluorescence, dipstick, and next-generation sequencing readouts have been developed for COVID-19 diagnosis. See also Real-time polymerase chain reaction References Amplifiers NASBA
NASBA (molecular biology)
Chemistry,Technology,Biology
1,049
66,616,868
https://en.wikipedia.org/wiki/VISC%20architecture
In computing, VISC architecture (after Virtual Instruction Set Computing) is a processor instruction set architecture and microarchitecture developed by Soft Machines, which uses a Virtual Software Layer (translation layer) to dispatch a single thread of instructions to the Global Front End, which splits instructions into virtual hardware threadlets that are then dispatched to separate virtual cores. These virtual cores can then send them to the available resources on any of the physical cores. Multiple virtual cores can push threadlets into the reorder buffer of a single physical core, which can split partial instructions and data from multiple threadlets through the execution ports at the same time. Each virtual core keeps track of the position of the relative output. This form of multithreading (simultaneous multithreading) can increase single threaded performance by allowing a single thread to use all resources of the CPU. The allocation of resources is dynamic on a near-single-cycle latency level (1–4 cycles, depending on the change in allocation), following individual application needs. Therefore, if two virtual cores are competing for resources, there are appropriate algorithms in place to determine which resources are to be allocated where. Unlike traditional processor designs, VISC does not expose physical cores directly; instead, the resources of the chip are made available as 'virtual cores' and 'virtual hardware threads' according to workload needs. References Digital electronics Electronic design Electronic design automation
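Soft Machines did not publish its allocation algorithm, so the following Python sketch is only a hedged illustration of the general idea: virtual cores submit threadlets with execution-port demands, and a simple allocator re-divides a shared pool of physical ports every cycle, letting a busy virtual core soak up resources an idle one is not using. All names and numbers here are invented for the example.

```python
# Toy model of VISC-style dynamic resource allocation. "Threadlets"
# are reduced to integer port demands; the allocator is a rotating
# greedy scheme invented for illustration, not Soft Machines' design.
from collections import deque

PHYSICAL_PORTS = 8  # shared execution ports across physical cores (assumed)

def run_cycles(virtual_cores: dict, n_cycles: int) -> None:
    names = list(virtual_cores)
    for cycle in range(n_cycles):
        ports_left = PHYSICAL_PORTS
        granted = {name: 0 for name in names}
        # Rotate priority each cycle so no virtual core is starved,
        # while a busier core can still claim the leftover ports.
        start = cycle % len(names)
        for name in names[start:] + names[:start]:
            queue = virtual_cores[name]
            # Issue threadlets while this core's next demand still fits.
            while queue and queue[0] <= ports_left:
                demand = queue.popleft()
                granted[name] += demand
                ports_left -= demand
        print(f"cycle {cycle + 1}: " +
              ", ".join(f"{n} got {granted[n]} ports" for n in names))

# Virtual core A is busy, virtual core B is nearly idle, so A's
# threadlets absorb most of the physical ports in most cycles.
cores = {
    "vcore-A": deque([3, 3, 2, 3, 2, 2]),
    "vcore-B": deque([1, 1]),
}
run_cycles(cores, n_cycles=3)
```

The per-cycle re-division is the point of the example: allocation decisions are revisited at a granularity of cycles, not at thread creation time.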
VISC architecture
Engineering
281
2,283,810
https://en.wikipedia.org/wiki/Garage%20kit
A garage kit (ガレージキット) or resin kit is an assembly scale model kit most commonly cast in polyurethane resin. They are often model figures portraying humans or other living creatures. In Japan, kits often depict anime characters, and in the United States, depictions of movie monsters are common. However, kits can be produced depicting a wide range of subjects, from characters in horror, science fiction, fantasy films, television and comic books to nudes, pin-up girls and original works of art, as well as upgrade and conversion kits for existing models and airsoft guns. Originally garage kits were amateur-produced, and the term originated with dedicated hobbyists using their garages as workshops. Unable to find model kits of subjects they wanted on the market, they began producing kits of their own. As the market expanded, professional companies began making similar kits. Sometimes a distinction is made between true garage kits, made by amateurs, and resin kits, manufactured professionally by companies. Because of the labor-intensive casting process, garage kits are usually produced in limited numbers and are more expensive than injection-molded plastic kits. The parts are glued together using cyanoacrylate (Super Glue) or an epoxy cement and the completed figure is painted. Some figures are sold completed, but most commonly they are sold in parts for the buyer to assemble and finish. Japan Japanese garage kits are often anime figures depicting popular characters. Another major subject is "Kaiju" monsters such as Godzilla, and they may also include subjects such as mecha and science fiction spaceships. Garage kits can be as simple as a one piece figure, or as complex as kits with well over one hundred parts. Most commonly they are cast in polyurethane resin, but may also be fabricated of diverse substances such as soft vinyl, white metal (a type of lead alloy) and fabric. Originally the kits were sold and traded between hobbyists at conventions like Wonder Festival. As the market grew, a number of companies began producing resin kits professionally, such as Federation Models, Volks, WAVE/Be-J, Kaiyodo, Kotobukiya and B-Club, a subsidiary of Bandai producing Gundam kits (Gunpla). The scale of figure kits varies, but as of 2008, 1/8 seems to be the predominant scale. Prior to 1990 the dominant scale was 1/6. This scale shrink coincided with the rise in material, labor, and licensing costs. Other scales, such as 1/3, 1/4, 1/6, 1/7 also exist, but are less common. Larger kits (1/3, 1/4, etc.) generally command higher prices due to the greater amounts of material required to produce them. Japanese garage kits are usually cast as separate parts which are packed with instructions and sometimes photographs of the final product. Most professionally manufactured kits come in a box while amateur-produced kits sold at conventions come in plastic bags, blank boxes or even boxes with copied sheet information glued onto them. They are not painted, but some of them do have decals provided by the sculptor or circle. The builder then paints and assembles the model, ideally using an airbrush. However, they can also be painted with a regular brush using a variety of techniques to achieve similar effects as when painting with a conventional airbrush. United States In the 1950s and 60s, Aurora and other companies produced cheap plastic models of movie monsters, comic book heroes, and movie and television characters. 
This market has since disappeared, but through the 1980s an underground market grew through which enthusiasts could acquire the old plastic model kits. In the early to mid-1980s, hobbyists began creating their own garage kits of movie monsters, and there was a small but enthusiastic market for these new model kits. The new figures were sculpted more accurately and with more detail than the old plastic model kits, and resin was poured into flexible molds to produce rigid reproductions of them. They were usually produced in limited numbers and sold primarily by mail order and at toy and hobby conventions. In the mid- to late 1980s the monster model kit hobby grew toward the mainstream. By the 1990s, model kits were produced in the US and the UK, as well as in Japan, and distributed through hobby and comic stores. There was an unprecedented variety of licensed model figure kits. In the late 1990s, model kit sales declined. Hobby and comic stores and their distributors began either carrying fewer garage kits or closing down, along with their producers. As of 2009, there are two American garage kit magazines, Kitbuilders Magazine and Amazing Figure Modeler, and there are garage kit conventions held annually, like WonderFest USA in Louisville, Kentucky. Production Garage kits are generally produced in small quantities, from the tens to a few hundred copies, compared to injection-molded plastic kits, which are produced in many thousands. This is due to the labor-intensive nature of the manufacturing process and the relatively low market demand. Resin-cast garage kit production is the most labor-intensive. The upside is that creating the initial mold is much less costly than in the injection-molding process. Vinyl garage kits are produced by using liquid vinyl Plastisol in a spin casting process known as slush molding. It is more complex than resin casting, but less expensive and less sophisticated than the injection molding used for most plastic products. It is not something that is commonly done in a basement or garage. Intellectual property issues The legality of amateur garage kits can be questionable, as they are not always properly licensed. The model might be of a copyrighted character or design that was produced by fans because no official model exists. The relatively low initial investment and the ease of resin casting mean that it is also easy to create recast copies of existing original kits. Recasts are produced by making molds of parts from original model kits and then casting copies from the new molds. This can be done for personal use, such as modification of an existing kit, but unlicensed recast copies are sometimes sold unlawfully. In some cases the original kit is no longer available, but in others it is still in active production. The recasts can be of officially licensed model kits, but when they are of unlicensed kits the sculptor usually has a hard time pursuing litigation. Recasts produced in Thailand are usually of inferior quality; some recasters in Hong Kong, however, rival the originals in quality and casting, and offer their copies at prices that undercut the original. Recast kits can be found on online auction sites, where they can be difficult to control due to potentially cumbersome site policies and seller pseudonymity. Many recasters are in East Asia, but they can be found all over the globe. 
In an effort to legitimize amateur garage kit production and sales in Japan, it is not uncommon for a license holder to issue a 'single day license' (:ja:当日版権システム) where for one day only, license is granted for the sale of amateur garage kits. These licensing agreements are typically negotiated between an event organizer (Wonder Festival, Character Hobby, Figure Mania, etc.) and various licensing entities for license to characters from specific TV shows and movies. Typically, the event organizer publishes a list of licenses available in advance, and sculptors intending to sell their sculptures then submit applications (including photos of their sculpture) for approval. Applications may be rejected. References External links Federation Models Volks Be-J Kaiyodo Kotobukiya Scale modeling Toy figurines
Garage kit
Physics
1,537
2,236,213
https://en.wikipedia.org/wiki/Microcellular%20plastic
Microcellular plastics, otherwise known as microcellular foam, are a form of manufactured plastic fabricated to contain billions of tiny bubbles less than 50 microns wide (typically 0.1–100 micrometers). They are formed by dissolving gas under high pressure into various polymers, relying on the phenomenon of thermodynamic instability to cause the uniform arrangement of the gas bubbles, otherwise known as nucleation. The main purpose of the material is to reduce material usage while maintaining valuable mechanical properties. The density of the finished product is determined by the gas used. Depending on the gas, the foam's density can be between 5% and 99% of the pre-processed plastic. Design parameters, focused on the foam's final form and the molding process afterward, include the type of die or mold to be used, as well as the dimensions of the bubbles, or cells, that classify the material as a foam. Since the cells' size is close to the wavelength of light, to the casual observer the foam retains the appearance of a solid, light-colored plastic. Recent developments at the University of Washington have produced nanocellular foams with cells in the 20–100 nanometer range. At the Indian Institute of Technology Delhi, technologies are being developed to fabricate high quality microcellular foams. History Traditional foams were created using a method outlined in a 1974 U.S. patent titled Mixing of Molten Plastic and Gas. By releasing a gas, otherwise known as a chemical or physical blowing agent, over molten plastic, hard plastic was converted into traditional foam. The results of these methods were highly undesirable. Due to the uncontrolled nature of the process, the product was often non-uniform, housing many large voids. In turn, the outcome was a low strength, low density foam with large cells in the cellular structure. The pitfalls of this method drove the need for a process that could make a similar material with more advantageous mechanical properties. The creation of microcellular foams as we know them today was inspired by the production of traditional foams. MIT master's students J.E. Martini and F.A. Waldman, working from 1979 under the direction of Professor Nam P. Suh, are both credited with the invention of microcellular plastics, or microcellular foams. Through pressurized extrusion and injection molding, their experimentation led to a method that used 5–30% less material, with voids that were less than 8 microns in size. In terms of mechanical properties, the fracture toughness of the material improved by 400% and the resistance to crack propagation increased by 200%. First, plastic is uniformly saturated with gas at a high pressure. Then, the temperature is increased, causing thermodynamic instability in the plastic. In order to reach a stable state, cell nucleation takes place. During this step, the cells created are much smaller than those of traditional foams. After this, cell growth, or matrix relaxation, begins. The novelty of this method was the ability to control the mechanical properties of the product by varying the temperature and pressure inputs. For example, by modifying the pressure, a very thin outside layer could be formed, making the product even stronger. Experimental results found carbon dioxide (CO2) to be the gas that produced the densest foams. Other gases, such as argon and nitrogen, produced foams with mechanical properties that were slightly less desirable. 
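The density figures above translate directly into material savings, which the short Python sketch below makes concrete. The solid-polymer density is an assumed, generic value; the only claim taken from the article is that foam density can range from 5% to 99% of the unfoamed plastic.

```python
# Illustrative calculation of the mass saved by foaming a part.
# SOLID_DENSITY is an assumed value for a generic polymer.

SOLID_DENSITY = 1.05  # g/cm^3, roughly a polystyrene-like resin (assumed)

def foamed_part_mass(volume_cm3: float, relative_density: float) -> float:
    """Mass of a part foamed to a given fraction of the solid density."""
    return volume_cm3 * SOLID_DENSITY * relative_density

volume = 100.0  # cm^3 part (assumed)
for rel in (1.00, 0.95, 0.65, 0.05):
    mass = foamed_part_mass(volume, rel)
    saved = (1 - rel) * 100
    print(f"relative density {rel:.2f}: {mass:6.1f} g  ({saved:.0f}% material saved)")
```

Note that, as the Mechanical properties section below describes, the tensile strength falls roughly in step with the density, so the usable savings depend on the part's load requirements.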
Production When selecting a gas to produce the desired foam, functional requirements and design parameters are considered. The functional requirements are identical to the criteria used when inventing this material type: using less plastic without sacrificing the mechanical properties (especially toughness) needed to make the same three-dimensional products the original plastic was able to make. The production of microcellular plastics is dependent on temperature and pressure. Dissolving gas under high temperature and pressure creates a driving force that activates nucleation sites when the pressure drops, and the number of activated sites increases exponentially with the amount of dissolved gas. Homogeneous nucleation is the primary mechanism for producing the bubbles in the cellular matrix. The dissolved gas molecules have a preference to diffuse to the activation sites that nucleate first. This is prevented because these sites are activated nearly simultaneously, forcing the dissolved gas molecules to be shared equally and uniformly throughout the plastic. Removing the plastic from the high pressure environment creates a thermodynamic instability. Heating the polymer above the effective glass transition temperature (of the polymer/gas mixture) then causes the plastic to foam, creating a very uniform structure of small bubbles. Mechanical properties The density of microcellular plastics has the greatest influence on their behavior and performance. The material's tensile strength decreases roughly linearly with decreasing density as more gas is dissolved into the part. Melting temperature and viscosity decrease as well. The foam injection process itself introduces surface defects such as swirl marks, streaking, and blistering, which also influence how the part reacts to external forces. Advantages and disadvantages Due to the non-hazardous nature of this foam-generating process, these plastics can be recycled and put back into the production cycle, reducing their carbon footprint as well as the cost of raw materials. With the porous nature of this material, the overall density is much lower than that of any solid plastic, considerably dropping the weight per unit volume of the part. This also entails less consumption of raw plastic with the addition of the tiny gas-filled pockets, allowing for further cost reduction, up to 35%. When observing the mechanical properties of these foams, a loss of tensile strength is correlated with the decrease in density, in a nearly linear fashion. Industrial applications Since the steps taken by MIT researchers in the late 1970s, microcellular plastics and their methods of manufacturing have become more standardized and improved upon. Trexel Inc. is often referred to as the industry standard for microcellular plastics with its use of MuCell® Molding Technology. Trexel, and other manufacturers of microcellular plastics, use both injection molding and blow molding methods to create products for applications such as automotive, medical, packaging, consumer, and industrial. Injection molding and blow molding differ in regards to the type of product being manufactured. Injection molding, much like casting, is centered around creating a mold for a solid object, which is later filled in with the molten plastic. Blow molding, on the other hand, is more specialized for hollow objects, although it is less accurate regarding wall thickness, with this dimension being an undefined feature (unlike in an injection mold, where all dimensions are predetermined). 
In respect to MuCell® and microcellular plastics, these processes vary from that of traditional plastics due to the additional steps of gas dissolving and cell nucleation before the molding process can begin. This process removed the "pack and hold phase" that allowed for imperfections within a mold, creating a finished product with greater dimensional accuracy and sound structure. By removing an entire step of the molding process, time is saved, making MuCell® a more economical option since more parts can be manufactured in the same time compared to standard resins. A few examples of applications include automobile instrument panels, heart pumps, storage bins, and the housing on multiple household power tools. See also acrylonitrile butadiene styrene References External links Plastics
Microcellular plastic
Physics
1,505
2,192,043
https://en.wikipedia.org/wiki/Kupala%20Night
Kupala Night (also Kupala's Night or just Kupala) is one of the major folk holidays in some of the Slavic countries that coincides with the Christian feast of the Nativity of St. John the Baptist and the East Slavic feast of Saint John's Eve. In folk tradition, it was revered as the day of the summer solstice and was originally celebrated on the shortest night of the year, which is on 21–22 or 23–24 of June (Czech Republic, Poland, Slovakia, Bulgaria (where it is called Enyovden), and modern Ukraine (since 2023)), and according to the Julian calendar on the night between 6 and 7 July (Belarus, Russia, and parts of Ukraine). The name of the holiday is ultimately derived from the East Slavic word kǫpati "to bathe". A number of activities and rituals are associated with Kupala Night, such as gathering herbs and flowers and decorating people, animals, and houses with them; entering water, bathing, or dousing with water and sending garlands on water; lighting fires, dancing, singing, and jumping over fire; and hunting witches and scaring them away. It was also believed that on this day the sun plays and other wonders of nature happen. The celebrations are held near the water and on the hills surrounding it; chiefly, young men and women participate in these folkloric traditions. The rituals and symbolism of the holiday may point to its pre-Christian origins. Names The holiday is recorded under many names in Old East Slavic, Russian (including a dialectal name glossed "bonfire in the field"), Ukrainian, Polesian, and Belarusian sources. Polish dialects have retained loans from East Slavic languages: Podlachia and Lublin: kupała, kąpała, kąpałeczka; Podlachia, Lublin, Sieradz, Kalisz: kupalonecka, kopernacka, kopernocka, kupalnocka. In Old Czech (15th century), there is attested kupadlo "a multicolored thread with which gifts were tied, given on the occasion of Saint John's Eve; a gift given to boys by girls on the occasion of Saint John's Eve". In Slovakia, the folk kupadla denotes "Saint John's Eve". History and etymology According to many researchers, Kupala Night is a Christianized Proto-Slavic or East Slavic celebration of the summer solstice. According to Nikolay Gal'kovskiy, "Kupala Night combined two elements: pagan and Christian." The viewpoint on the pre-Christian origin of the holiday is criticized by historian Vladimir Petrukhin and ethnographer Aleksandr Strakhov. Whereas, according to Andrzej Kempinski, "The apparent ambivalence (male–female, fire–wood, light–dark) seems to testify to the ancient origins of the holiday alleviating the contradictions of a dual society." According to Holobuts'ky and Karadobri, one of the arguments for the antiquity of the holiday is the production of fire by friction. The name appears as early as the Old East Slavic language stage. Izmail Sreznevsky, in his Materials for the Dictionary of the Old East Slavic Language, gives entries for "Saint John's Eve" (attested in the Hypatian Codex under the year 1262), "baptist" (no example), and "St. John's Day". Epigraph No. 78 in the Cathedral of Holy Wisdom in Veliky Novgorod, dated to the late 11th – early 12th century, contains an inscription with an early form of the name. According to ethnographer Vera Sokolova, Kupala is a later name that appeared among Eastern Slavs when the holiday coincided with the day of John the Baptist. According to Max Vasmer, the name (Ivan) Kupala/Kupalo is a variant of the name (John the) Baptist and calques its ancient Greek equivalent. 
Greek "baptist" derives from the verb "to immerse; to wash; to bathe; to baptize, consecrate, immerse in baptismal font", which in Old East Slavic was originally rendered by the word "to bathe", later displaced by "to baptise". The Proto-Slavic form of the verb is reconstructed as *kǫpati "to dip in water, to bathe". According to Mel’nychuk, the word Kupalo itself may come from Proto-Slavic *kǫpadlo ( OCz. kupadlo, SCr. kùpalo, LSrb., USrb. kupadło "bathing place"), which is composed of the discussed verb *kǫpati and the suffix *-dlo. The name of the holiday is related to the fact that the first ceremonial bath was taken during Kupala Night, and the connection to John the Baptist is secondary. Deity Kupala From the 17th century, sources suggest that the holiday is dedicated to the deity Kupala, whom the Slavs supposedly worshipped. However, modern researchers deny the existence of such a deity. Rituals and beliefs On this day, June 24, it was customary to pray to John the Baptist for headaches and for children. Kupala Night is filled with rituals related to water, fire and herbs. Most Kupala rituals take place at night. Bathing before sunset was considered mandatory: in the north, Russians were more likely to bathe in banyas, and in the south in rivers and lakes. Closer to sunset, on high ground or near rivers, bonfires were lit. Sometimes, fires were lit in the traditional way – by friction wood against wood. In some places in Belarus and Volyn Polissia, this archaic way of lighting a fire for the holiday survived until early 20th century. According to Vera Sokolova, among the Eastern Slavs, the holiday has been preserved in its most "archaic" form by the Belarusians. In the center of the Kupala bonfire, Belarusians would place a pole on top of which a wheel was attached. Sometimes a horse's skull, called , was placed on top of the wheel and thrown into the fire, where it would burn, after which the youth would play, sing and dance around the fire. In Belarus, old, unwanted items were collected from backyards throughout the village and taken to a place chosen for the celebration (a glade, a high riverbank), where they were then burned. Ukrainians also preserved the main archaic elements, but changed their symbolic meanings in the 19th century. Russians either forgot the main elements of the Kupala ceremony or transferred them to other holidays (Trinity Day, Peter Day). The celebration of Kupala Night is mentioned in the Hustyn Chronicle (17th century): This Kupala... is commemorated on the eve of the Nativity of John the Baptist... in the following manner: In the evening, ordinary children of both sexes gather and make wreaths of poisonous herbs or roots, and those covered with their clothes set fire, and then they put a green branch, and holding their hands they dance around the fire, singing their songs... Then they leap over the fire... On Kupala Night, "bride and groom" were chosen and wedding ceremonies were conducted: they jumped over the fire holding hands, exchanged wreaths (symbol of maidenhood), looked for the fern flower and bathed in the morning dew. On this day, "village roads were plowed so that 'matchmakers would come sooner', or a furrow was plowed to a boy's house so that he would get engaged faster." In some parts of Ukrainian and Belarusian tradition, it was only after Kupala that vesnianky were no longer sung. Eastern and Western Slavs were forbidden to eat cherries before that day. Eastern Slavs believed that women should not eat berries before St. 
John's Day, or their young children would die. The custom of public condemnation and ridicule on Kupala Night (also George's Day in Spring and Trinity Day) is well known. Criticism and condemnation are usually directed at residents of one's own or a neighboring village who have violated social and moral norms over the past year. This social condemnation can be heard in Ukrainian and Belarusian songs, which contain themes of quarrels between girls and boys or residents of neighboring villages. Condemnation and ridicule are expressed in public and serve as a regulator of social relations. According to Hutsul beliefs, after Kupala come days when thunder and lightning are common. These are days when thunderous spirits walk around, sending lightning bolts to the earth. "And then between the dark sky and the tops of the mountains, fire trees grow, connecting heaven and earth. And so it will be until the Elijah's day, the old Thunderous feast" after which, they say, "thunder will stop pounding." Alexander Veselovsky points out the similarity between the Slavic customs of Kupala Night and the Greek customs of Elijah's day (Elijah the Thunderer). Ritual dishes The consecration of the first fruits ripening at this time may have coincided with the Kupala Night holiday. In some Russian villages, "votive porridge" was brewed: on St. Juliana's day (June 22), girls would gather to talk and, while singing, pound barley in a mortar. On the morning of St. Agrippina's day (June 23), the barley was used to cook votive porridge. During the day, this porridge was given to the poor, and in the evening, sprinkled with butter, it was eaten by everyone. Among Belarusians, delicacies brought from home were eaten both in separate groups and at potluck, and consisted of vareniki, cheese, tvarog, flour porridge, sweet dough (babka) with ground hemp seeds, onion, garlic, bread acid (cold borscht), and eggs in lard. In Belarus in the 19th century, vodka was drunk during the holiday, and wine was drunk in Podlachia and the Carpathians. Songs have preserved mention of the ancient drinks of the night: Will accept you, Kupal'nochka, as a guest, With treating you with green vine, With watering you with wheat beer, With feeding you with quark. Water The obligatory custom on this day was mass bathing. It was believed that on this day all evil spirits would leave the rivers, so it was safe to swim until Elijah's day. In addition, the water of Kupala Night was endowed with revitalizing and magical properties. In places where people were not allowed to bathe in rivers (because of rusalky), they bathed in "sacred springs". In the Russian North, on the day before Kupala Night, on St. Agrippina's Day, baths were heated in which people washed and steamed themselves, while steaming the herbs collected on that day. Water drawn from springs on St. John's Day was said to have miraculous and magical powers. On this holiday, according to a common sign, water can "make friends" with fire. The symbol of this union was a bonfire lit along the banks of rivers. Wreaths were often used for divination on Kupala Night: if they floated on the water, it meant good luck and long life or marriage. A 16th-century Russian scribe attempted to explain the name and the healing power of St. John's Day by referring to the Old Testament legend of Tobias. As he writes, it was on this day that Tobias bathed in the Tigris, where, on the advice of the archangel Raphael, he discovered a fish whose entrails cured his father of blindness. 
Bonfire The main feature of Kupala Night is the cleansing bonfire. The youths would bring a huge amount of brushwood from all over the village and set up a tall pyramid with a pole in the middle, on which was placed a wheel, a barrel of tar, a horse or cow skull (Polesia), etc. According to Tatyana Agapkina and Lyudmila Vinogradova, the symbol of a tall pole with a wheel attached to it generally correlated with the universal image of the world tree. Bonfires were lit late in the evening and usually burned until morning. In various traditions, there is evidence of the requirement to light the Kupala bonfire with "need-fire" produced by friction; in some places, the fire was carried into the house and the hearth was lit from it. All the women of the village had to approach the fire, since any who did not go were suspected of witchcraft. A khorovod was led around the bonfire; people danced, sang Kupala songs, and jumped over the fire: whoever jumped higher and more successfully would be happier. The girls leapt over the fire to "purify themselves and protect themselves from disease, spoilage, spells," and so that "rusalky will not attack and come during the year." A girl who did not jump over the fire was called a witch (Eastern Slavs, Poland); she was doused with water and scourged with nettles because she had not been "cleansed" by the Kupala fire. In the Kiev Governorate, a girl who lost her virginity before marriage could not jump over the bonfire during Kupala Night, as doing so would desecrate it. In Ukraine and Belarus, girls and boys held hands and jumped over the fire in pairs. It was believed that if their hands stayed together while jumping, it would be a clear sign of their future marriage; the same if sparks flew behind them. In the Gomel Governorate, boys used to cradle girls in their arms over the Kupala bonfire to protect them from spells. Young people and children jumped over bonfires and organized noisy games: they played gorelki. In addition to bonfires, in some places on Kupala Night, wheels and barrels of tar were set on fire, which were then rolled down the mountains or carried on poles, which is clearly related to the symbolism of the solstice. Among Belarusians, Galician Poles and Carpathian Slovaks, the Kupala bonfires were called Sobótki, after the West Slavic sobota as a "day of rest". Kupala songs Many folklorists believe that the content of Kupala songs is poorly related to the rituals and mythological meaning of the holiday. The multi-genre song texts include many lyrical songs with love and family themes, humorous chants between boys and girls, khorovod dance songs and games, ballads, etc. As Kupala songs, these are identified by specific melodies and a specific calendar period. In other periods, it was not customary to sing such songs. Wreath The wreath was a mandatory attribute of the amusements. It was made before the holiday from wild herbs and flowers. The ritual use of the Kupala wreath is also related to the magical understanding of its shape, which brings it closer to other round and perforated objects (ring, hoop, loaf, etc.). The customs of milking or sipping milk through the wreath, of reaching and pulling something through the wreath, and of looking, pouring, drinking, or washing through it are based on these attributes of the wreath. It was believed that each plant gave the wreath special properties, and the way it was made — twisting and weaving — also added symbolism. 
Wreaths were often made of periwinkle, basil, geranium, ferns, roses, blackberries, oak and birch branches, etc. During the festival, the wreath was usually destroyed: thrown into water, burned in a bonfire, thrown on a tree or the roof of a house, carried to a cemetery, etc. Sometimes the wreath was preserved and used for healing, for protecting fields from hailstorms and vegetable gardens from "worms". In Polesia, at the dawn of St. John's Day, peasants would choose the prettiest girl from among themselves, strip her naked and wrap her from head to toe in wreaths of flowers, then go to the forest, where the "dzevko-kupalo" (girl-kupalo – as the chosen girl was called) would distribute the previously prepared wreaths to her girlfriends. She would blindfold herself, and the girls would walk around her in a merry dance. The garland that someone received was used to foretell her future fate: a fresh garland meant a rich and happy marriage, a dry garland meant poverty and an unhappy marriage: "she will not have happiness, she will live in misery." Kupala tree Depending on the region, a young birch, willow, maple, spruce, or the cut top of an apple tree was chosen for the Kupala tree. The girls would decorate it with wreaths, field flowers, fruits, ribbons and sometimes candles; then take it outside the village, stick it in the ground in a clearing and dance, walk and sing around it. Later, the boys would join in the fun, pretending to steal the Kupala tree or ornaments from it, knocking it over or setting it on fire, while the girls protected it. At the end, everyone together was supposed to drown the Kupala tree in the river or burn it in a bonfire. Sometimes the tree was not cut down at all, but simply chosen in a place convenient for the khorovod and dressed there. In the Zhytomyr region, in one village, a dry pine tree growing outside the village near the river was chosen for this purpose. The celebrants threw the burnt tree trunk into the water and then ran away so that "the witch would not catch up with them." Medicinal and magical herbs A characteristic sign of Kupala Night is the multitude of customs and legends associated with the plant world. Greenery was used as a universal amulet: it was believed to protect from diseases and epidemics, the evil eye and spoilage; from sorcerers and witches, unclean powers, "walking" dead people; from lightning, hurricane, fire; from snakes and predatory animals, insect pests, and worms. At the same time, contact with fresh greenery was conceived of as a magical means of ensuring fertility and the successful breeding of cattle and poultry and the yield of cereal and vegetable crops. It was believed that on this day it was best to collect medicinal herbs, as the plants receive great power from the sun and the earth. Some herbs were harvested at night, others in the afternoon before lunch, and others in the morning dew. While collecting medicinal herbs, a special prayer (zagovory) was recited. According to Belarusian beliefs, Kupala herbs are most healing if they are collected by the "old and young," i.e. old people and children – as the most pure (no sex life, no menstruation, etc.). The fern and the so-called Ivan-da-marya flower (e.g., Melampyrum nemorosum; literally: John and Mary) were associated with special Kupala legends. The names of these plants appear in Kupala songs. The Slavs believed that only once a year, on St. John's Day, a fern blooms. 
This mythical flower, which does not exist in nature, is supposed to give those who pick it and keep it with them miraculous powers. According to beliefs, the bearer of the flower becomes clairvoyant, can understand the language of animals, see all treasures no matter how deep they lie in the ground, enter treasuries unhindered by holding the flower to locks and bolts (they must crumble before it), command unclean spirits, rule over earth and water, become invisible and take any form. One of the main symbols of St. John's Day was the Ivan-da-marya flower, which symbolized the magical combination of fire and water. Kupala songs link the origin of this flower to twins – a brother and sister – who entered into a forbidden love affair and because of this turned into a flower. The story of incestuous twins finds numerous parallels in Indo-European mythologies. Some plant names are related to the name Kupala, e.g. Czech kupadlo "Bromus" and "Cuscuta trifolii", kupalnice "Ranunculus", and Polish kupalnik "Arnica", as well as Ukrainian folk names for Taraxacum officinale and Tussilago and a Russian folk name for Ranunculus acris. Protection from evil spirits It was believed that on Kupala Night all evil spirits awaken to life and harm people; that one should beware of "the mischief of demons – domovoy, vodyanoy, leshy, rusalky". In order to prevent witches from "taking away" milk from cows, Russians drove consecrated willow into the ground in pastures, and in Ukraine the owner drove aspen stakes into the yard. In Polesia, nettles, torn men's pants or a mirror were hung on the stable gate for the same purpose. In Belarus, aspen twigs and stakes were used to defend not only cattle but also crops, "so that witches would not take the spores." To ward off evil spirits, it was customary to hammer sharp and prickly objects into tables, windows, doors, etc. Among the Eastern Slavs, when a witch entered the house, a knife was driven into the table from below to prevent her from leaving. Southern Slavs believed that sticking a knife or hawthorn branch into the door would protect them from vampires or nightmares. On Kupala night, Eastern Slavs would drive scythes, pitchforks, knives and branches of certain trees into the windows and doors of houses and barns, protecting their space from evil spirits. It was believed that in order to protect oneself from witch attacks, one should put nettles on the threshold and window sills. Ukrainian girls collected wormwood because they believed it was feared by witches and rusalky. In Podolia, on St. John's Day, hemp flowers ("porridge") were collected and scattered in front of the entrances to houses and barns to bar the way for witches. In order to prevent witches from stealing horses and riding them to Bald Mountain (no horse would return from there alive), the horses were locked up. Belarusians believed that during Kupala Night, domoviks would ride horses and torture them. In Ukraine and Belarus, magical powers were attributed to firebrands from the Kupala bonfire. In western Polesia, young people would pull firebrands from the fire, run with them like torches, wave them over their heads, and then throw them into the fields "to protect the crops from evil powers." In Polesia, a woman who did not come to the bonfire was called a witch by the youth, and was cursed and teased. 
In order to identify and neutralize the witch, the road along which cattle were usually herded was blocked with thread, plowed with a plow or harrow, sprinkled with seeds or ants, or poured over with an infusion of ants, in the belief that the witch's cow would not be able to overcome the obstacle. According to Slavic beliefs, the root of Lythrum salicaria dug up on St. John's Day was able to ward off sorcerers and witches; it could be used to drive demons out of the possessed. Youth games The games usually had a love-marriage theme: tag, celovki, and ball games (myachevukha, v baryshi and others). Ritual pranks On the night of Kupala, as well as on one of the nights during the winter Christmas holidays, youngsters among the Eastern Slavs often engaged in ritual mischief and pranks: they stole firewood, carts and gates and hoisted them onto roofs, propped up house doors, covered windows, etc. Pranks on Kupala night are a South Russian and Polesian tradition. Sun It is a well-known belief that on St. John's Eve the sun at sunrise shimmers with different colors, or reflects, flashes, stops, etc. The most common way of referring to this phenomenon is to say that the sun plays or jumps; in some traditions it also bathes, dances, walks, trembles, is merry, spins, bows, changes, blooms, or beautifies (Russia), or "crows" (Polesia). In some parts of Bulgaria, it is believed that at dawn on St. John's Day three suns appear in the sky, of which only the central one is "ours" and the others are its brothers – shining at other times and over other lands. The Serbs called John the Baptist by a special epithet because they believed that on this day the sun stops three times in the sky or plays. They explained the behavior of the sun on John's day by referring to Gospel verses relating to the birth of John the Baptist: "When Elizabeth heard Mary's greeting, the child in her womb moved, and the Holy Spirit filled Elizabeth." Church on folk rituals In medieval Russia, the rituals and games of the day were considered demonic and were banned by church authorities. Thus, the message of the hegumen of the Yelizarov Monastery (1505) to the Pskov governor and authorities condemned the "pagan" games of Pskov residents on the night of the Nativity of John the Baptist: For when the feast day of the Nativity of the Forerunner itself arrives, then on this holy night nearly the entire city runs riot and in the villages they are possessed by drums and flutes and by the strings of the guitars and by every type of unsuitable satanic music, with the clapping of hands and dances, and with the women and the maidens and with the movements of the heads and with the terrible cry from their mouths: all of those songs are devilish and obscene, and curving their backs and leaping and jumping up and down with their legs; and right there do men and youths suffer great temptation, right there do they leer lasciviously in the face of the insolence of the women and the maidens, and there even occurs depravation for married women and perversion for the maidens. 
– Epistle of Pamphilus of Yelizarov Monastery. Stoglav (a collection of decisions of the Stoglav Synod of 1551) also condemns the revelry during Kupala Night, tracing it to "Hellenistic" paganism: And furthermore many of the children of Orthodox Christians, out of simple ignorance, engage in Hellenic devilish practices, a variety of games and clapping of hands in the cities and in the villages against the festivities of the Nativity of the Great John the Forerunner; and on the night of that same feast day and for the whole day until night-time, men and women and children in the houses and spread throughout the streets make a ruckus in the water with all types of games and much revelry and with satanic singing and dancing and gusli and in many other unseemly manners and ways, and even in a state of drunkenness. – Stoglav, chapter 92. Contemporary representatives of the Russian Orthodox Church continue to oppose some of the customs associated with this holiday. At the same time, responding to a question about the "intermingling" of Christian and pagan holidays, one hieromonk expressed the opinion: The perennial persistence among the people of some of the customs of the Kupala Night does not indicate a double faith, but rather an incompleteness of faith. After all, how many people who have never participated in these pagan entertainments are prone to superstition and mythological ideas. The ground for this is our fallen nature, corrupted by sin. In 2013, at the request of the ROC, the celebrations of Kupala Night and Neptune's Day were banned in the Rossoshansky District of the Voronezh Oblast. References Observances in Russia Russian folklore Saint John's Day Observances in Poland Folk calendar of the East Slavs Belarusian traditions Russian traditions Ukrainian traditions Observances in Ukraine Slavic holidays Days celebrating love Summer events in Ukraine Summer events in Poland
Kupala Night
Astronomy
5,849
41,356,740
https://en.wikipedia.org/wiki/Surotomycin
Surotomycin was an investigational oral antibiotic. This cyclic lipopeptide antibiotic was under investigation by Merck & Co (which acquired Cubist Pharmaceuticals) for the treatment of life-threatening diarrhea, commonly caused by the bacterium Clostridioides difficile. After reaching phase III in clinical trials, its development was discontinued in 2017 due to its non-superiority to current therapies. See also Cadazolid Fidaxomicin Ridinilazole References Antibiotics Lipopeptides
Surotomycin
Biology
115
33,863,877
https://en.wikipedia.org/wiki/Hemorphin-4
Hemorphin-4 is an endogenous opioid peptide of the hemorphin family which possesses antinociceptive properties and is derived from the β-chain of hemoglobin in the bloodstream. It is a tetrapeptide with the amino acid sequence Tyr-Pro-Trp-Thr. Hemorphin-4 has affinities for the μ-, δ-, and κ-opioid receptors that are in the same range as the structurally related β-casomorphins, although affinity to the κ-opioid receptor is markedly higher in comparison. It acts as an agonist at these sites. Hemorphin-4 also has inhibitory effects on angiotensin-converting enzyme (ACE), and as a result, may play a role in the regulation of blood pressure. Notably, inhibition of ACE also reduces enkephalin catabolism. See also Casomorphin References Delta-opioid receptor agonists Kappa-opioid receptor agonists Mu-opioid receptor agonists Opioid peptides Tetrapeptides
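As a quick cross-check on the tetrapeptide sequence given above, the peptide's average molecular mass can be computed from standard average residue masses plus one water for the free termini. The short Python sketch below is illustrative only; the residue masses are standard reference values, and the calculation yields roughly 565.6 g/mol for Tyr-Pro-Trp-Thr.

```python
# Average molecular mass of hemorphin-4 (Tyr-Pro-Trp-Thr) from
# standard average amino-acid residue masses.

RESIDUE_MASS = {  # average residue masses, g/mol (standard values)
    "Tyr": 163.176,
    "Pro": 97.117,
    "Trp": 186.213,
    "Thr": 101.105,
}
WATER = 18.015  # added once for the peptide's free N- and C-termini

def peptide_mass(sequence):
    """Sum the residue masses and add one water molecule."""
    return sum(RESIDUE_MASS[res] for res in sequence) + WATER

print(f"~{peptide_mass(['Tyr', 'Pro', 'Trp', 'Thr']):.1f} g/mol")  # ~565.6
```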
Hemorphin-4
Chemistry,Biology
241
24,201,900
https://en.wikipedia.org/wiki/C17H26O3
{{DISPLAYTITLE:C17H26O3}} The molecular formula C17H26O3 (molar mass :278.39 g/mol) may refer to: Isofalcarintriol, a polyacetylene found in carrots Panaxytriol, a fatty alcohol found in ginseng Paradol, the active flavor constituent of the seeds of Guinea pepper
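The quoted molar mass follows directly from standard atomic weights; the short Python check below illustrates the arithmetic.

```python
# Verify the molar mass of C17H26O3 from standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}  # g/mol
FORMULA = {"C": 17, "H": 26, "O": 3}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.2f} g/mol")  # ~278.39, matching the value above
```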
C17H26O3
Chemistry
87
24,505
https://en.wikipedia.org/wiki/Phospholipid
Phospholipids are a class of lipids whose molecule has a hydrophilic "head" containing a phosphate group and two hydrophobic "tails" derived from fatty acids, joined by an alcohol residue (usually a glycerol molecule). Marine phospholipids typically have the omega-3 fatty acids EPA and DHA integrated as part of the phospholipid molecule. The phosphate group can be modified with simple organic molecules such as choline, ethanolamine or serine. Phospholipids are a key component of all cell membranes. They can form lipid bilayers because of their amphiphilic character. In eukaryotes, cell membranes also contain another class of lipid, sterols, interspersed among the phospholipids. The combination provides fluidity in two dimensions combined with mechanical strength against rupture. Purified phospholipids are produced commercially and have found applications in nanotechnology and materials science. The first phospholipid identified as such in biological tissues was lecithin, or phosphatidylcholine, found in 1847 in the egg yolk of chickens by the French chemist and pharmacist Théodore Nicolas Gobley. Phospholipids in biological membranes Arrangement The phospholipids are amphiphilic. The hydrophilic end usually contains a negatively charged phosphate group, and the hydrophobic end usually consists of two "tails" that are long fatty acid residues. In aqueous solutions, phospholipids are driven by hydrophobic interactions, which result in the fatty acid tails aggregating to minimize interactions with the water molecules. The result is often a phospholipid bilayer: a membrane that consists of two layers of oppositely oriented phospholipid molecules, with their heads exposed to the liquid on both sides, and with the tails directed into the membrane. That is the dominant structural motif of the membranes of all cells and of some other biological structures, such as vesicles or virus coatings. In biological membranes, the phospholipids often occur with other molecules (e.g., proteins, glycolipids, sterols) in a bilayer such as a cell membrane. Lipid bilayers occur when hydrophobic tails line up against one another, forming a membrane of hydrophilic heads on both sides facing the water. Dynamics These specific properties allow phospholipids to play an important role in the cell membrane. Their movement can be described by the fluid mosaic model, which describes the membrane as a mosaic of lipid molecules that act as a solvent for all the substances and proteins within it, so proteins and lipid molecules are then free to diffuse laterally through the lipid matrix and migrate over the membrane. Sterols contribute to membrane fluidity by hindering the packing together of phospholipids. However, this model has now been superseded, as through the study of lipid polymorphism it is now known that the behaviour of lipids under physiological (and other) conditions is not simple. 
Main phospholipids Diacylglyceride structures See: Glycerophospholipid Phosphatidic acid (phosphatidate) (PA) Phosphatidylethanolamine (cephalin) (PE) Phosphatidylcholine (lecithin) (PC) Phosphatidylserine (PS) Phosphoinositides: Phosphatidylinositol (PI) Phosphatidylinositol phosphate (PIP) Phosphatidylinositol bisphosphate (PIP2) and Phosphatidylinositol trisphosphate (PIP3) Phosphosphingolipids See Sphingolipid Ceramide phosphorylcholine (Sphingomyelin) (SPH) Ceramide phosphorylethanolamine (Sphingomyelin) (Cer-PE) Ceramide phosphoryllipid Applications Phospholipids have been widely used to prepare liposomal, ethosomal and other nanoformulations of topical, oral and parenteral drugs, for reasons such as improved bioavailability, reduced toxicity and increased permeability across membranes. Liposomes are often composed of phosphatidylcholine-enriched phospholipids and may also contain mixed phospholipid chains with surfactant properties. The ethosomal formulation of ketoconazole using phospholipids is a promising option for transdermal delivery in fungal infections. Advances in phospholipid research have led to the exploration of these biomolecules and their conformations using lipidomics. Simulations Computational simulations of phospholipids are often performed using molecular dynamics with force fields such as GROMOS, CHARMM, or AMBER. Characterization Phospholipids are optically highly birefringent, i.e. their refractive index is different along their axis as opposed to perpendicular to it. Measurement of birefringence can be achieved using crossed polarisers in a microscope to obtain an image of e.g. vesicle walls, or using techniques such as dual polarisation interferometry to quantify lipid order or disruption in supported bilayers. Analysis There are no simple methods available for analysis of phospholipids, since the close range of polarity between different phospholipid species makes detection difficult. Oil chemists often use spectroscopy to determine total phosphorus abundance and then calculate the approximate mass of phospholipids based on the molecular weight of expected fatty acid species. Modern lipid profiling employs more absolute methods of analysis, with NMR spectroscopy, particularly 31P-NMR, while HPLC-ELSD provides relative values. Phospholipid synthesis Phospholipid synthesis occurs on the cytosolic side of the ER membrane, which is studded with proteins that act in synthesis (GPAT and LPAAT acyl transferases, phosphatase and choline phosphotransferase) and allocation (flippase and floppase). Eventually a vesicle will bud off from the ER containing phospholipids destined for the cytoplasmic cellular membrane on its exterior leaflet and phospholipids destined for the exoplasmic cellular membrane on its inner leaflet. Sources Common sources of industrially produced phospholipids are soya, rapeseed, sunflower, chicken eggs, bovine milk, fish eggs, etc. Phospholipids for gene delivery, such as distearoylphosphatidylcholine and dioleoyl-3-trimethylammonium propane, are produced synthetically. Each source has a unique profile of individual phospholipid species, as well as fatty acids, and consequently differing applications in food, nutrition, pharmaceuticals, cosmetics, and drug delivery. In signal transduction Some types of phospholipid can be split to produce products that function as second messengers in signal transduction. 
Examples include phosphatidylinositol (4,5)-bisphosphate (PIP2), which can be split by the enzyme phospholipase C into inositol trisphosphate (IP3) and diacylglycerol (DAG); both carry out the functions of the Gq type of G protein in response to various stimuli and intervene in various processes, from long-term depression in neurons to leukocyte signal pathways started by chemokine receptors. Phospholipids also intervene in prostaglandin signal pathways as the raw material used by lipase enzymes to produce the prostaglandin precursors. In plants they serve as the raw material to produce jasmonic acid, a plant hormone similar in structure to prostaglandins that mediates defensive responses against pathogens. Food technology Phospholipids can act as emulsifiers, enabling oils to form a colloid with water. Phospholipids are one of the components of lecithin, which is found in egg yolks, as well as being extracted from soybeans, and is used as a food additive in many products and can be purchased as a dietary supplement. Lysolecithins are typically used for water–oil emulsions like margarine, due to their higher HLB ratio. Phospholipid derivatives See the table below for an extensive list. Natural phospholipid derivatives: egg PC (Egg lecithin), egg PG, soy PC, hydrogenated soy PC, sphingomyelin as natural phospholipids. Synthetic phospholipid derivatives: Phosphatidic acid (DMPA, DPPA, DSPA) Phosphatidylcholine (DDPC, DLPC, DMPC, DPPC, DSPC, DOPC, POPC, DEPC) Phosphatidylglycerol (DMPG, DPPG, DSPG, POPG) Phosphatidylethanolamine (DMPE, DPPE, DSPE, DOPE) Phosphatidylserine (DOPS) PEG phospholipid (mPEG-phospholipid, polyglycerin-phospholipid, functionalized-phospholipid, terminal activated-phospholipid) Abbreviations used and chemical information of glycerophospholipids See also Cable theory Galactolipid Sulfolipid References Anionic surfactants
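As a small companion to the Simulations section above, the sketch below computes one of the most commonly reported observables in bilayer molecular dynamics runs, the area per lipid. It is a hedged, generic illustration: the function is plain arithmetic on assumed box dimensions and lipid counts, and it is independent of any particular force field or trajectory format.

```python
# Area per lipid, a standard observable in phospholipid bilayer
# simulations: the lateral box area divided by the number of lipids
# in one leaflet. All numbers below are assumed, for illustration.

def area_per_lipid(box_x_nm: float, box_y_nm: float,
                   lipids_per_leaflet: int) -> float:
    """Lateral box area per lipid in one leaflet, in nm^2."""
    return (box_x_nm * box_y_nm) / lipids_per_leaflet

# A 64-lipids-per-leaflet patch in a ~6.4 nm square box gives a value
# near the ~0.64 nm^2 commonly reported for fluid-phase POPC bilayers.
print(f"{area_per_lipid(6.4, 6.4, 64):.3f} nm^2")
```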
Phospholipid
Chemistry
2,055
76,092,030
https://en.wikipedia.org/wiki/Amauroderma%20grandisporum
Amauroderma grandisporum is a tough woody mushroom in the family Ganodermataceae. It is a polypore fungus. References grandisporum Fungi described in 1998 Fungus species Taxa named by Leif Ryvarden
Amauroderma grandisporum
Biology
49
3,832
https://en.wikipedia.org/wiki/Bauhaus
The Staatliches Bauhaus, commonly known as the Bauhaus, was a German art school operational from 1919 to 1933 that combined crafts and the fine arts. The school became famous for its approach to design, which attempted to unify individual artistic vision with the principles of mass production and an emphasis on function. Along with the doctrine of functionalism, the Bauhaus initiated the conceptual understanding of architecture and design. The Bauhaus was founded by architect Walter Gropius in Weimar. It was grounded in the idea of creating a Gesamtkunstwerk ("comprehensive artwork") in which all the arts would eventually be brought together. The Bauhaus style later became one of the most influential currents in modern design, modernist architecture, and architectural education. The Bauhaus movement had a profound influence on subsequent developments in art, architecture, graphic design, interior design, industrial design, and typography. Staff at the Bauhaus included prominent artists such as Paul Klee, Wassily Kandinsky, Gunta Stölzl, and László Moholy-Nagy at various points. The school existed in three German cities—Weimar, from 1919 to 1925; Dessau, from 1925 to 1932; and Berlin, from 1932 to 1933—under three different architect-directors: Walter Gropius from 1919 to 1928; Hannes Meyer from 1928 to 1930; and Ludwig Mies van der Rohe from 1930 until 1933, when the school was closed by its own leadership under pressure from the Nazi regime, having been painted as a centre of communist intellectualism. Internationally, former key figures of the Bauhaus were successful in the United States and became known as the avant-garde of the International Style. The White City of Tel Aviv, to which numerous Jewish Bauhaus architects emigrated, has the highest concentration of International Style architecture in the world. The changes of venue and leadership resulted in a constant shifting of focus, technique, instructors, and politics. For example, the pottery shop was discontinued when the school moved from Weimar to Dessau, even though it had been an important revenue source; when Mies van der Rohe took over the school in 1930, he transformed it into a private school and would not allow any supporters of Hannes Meyer to attend it. Terms and concepts Several specific features are identified in Bauhaus forms and shapes: simple geometric shapes like rectangles and spheres, without elaborate decorations. Buildings, furniture, and fonts often feature rounded corners, sometimes rounded walls, or curved chrome pipes. Some buildings are characterized by rectangular features, for example protruding balconies with flat, chunky railings facing the street, and long banks of windows. Some outlines can be defined as a tool for creating an ideal form, which is the basis of the architectural concept. Bauhaus and German modernism After Germany's defeat in World War I and the establishment of the Weimar Republic, a renewed liberal spirit allowed an upsurge of radical experimentation in all the arts, which had been suppressed by the old regime. Many Germans of left-wing views were influenced by the cultural experimentation that followed the Russian Revolution, such as constructivism. Such influences can be overstated: Gropius did not share these radical views, and said that the Bauhaus was entirely apolitical. 
Just as important was the influence of the 19th-century English designer William Morris (1834–1896), who had argued that art should meet the needs of society and that there should be no distinction between form and function. Thus, the Bauhaus style, also known as the International Style, was marked by the absence of ornamentation and by harmony between the function of an object or a building and its design.

However, the most important influence on the Bauhaus was modernism, a cultural movement whose origins lay as early as the 1880s, and which had already made its presence felt in Germany before the World War, despite the prevailing conservatism. The design innovations commonly associated with Gropius and the Bauhaus—the radically simplified forms, the rationality and functionality, and the idea that mass production was reconcilable with the individual artistic spirit—were already partly developed in Germany before the Bauhaus was founded. The German national designers' organization Deutscher Werkbund was formed in 1907 by Hermann Muthesius to harness the new potentials of mass production, with a mind towards preserving Germany's economic competitiveness with England. In its first seven years, the Werkbund came to be regarded as the authoritative body on questions of design in Germany, and was copied in other countries. Many fundamental questions of craftsmanship versus mass production, the relationship of usefulness and beauty, the practical purpose of formal beauty in a commonplace object, and whether or not a single proper form could exist, were argued out among its 1,870 members (by 1914). German architectural modernism was known as Neues Bauen.

Beginning in June 1907, Peter Behrens' pioneering industrial design work for the German electrical company AEG successfully integrated art and mass production on a large scale. He designed consumer products, standardized parts, created clean-lined designs for the company's graphics, developed a consistent corporate identity, built the modernist landmark AEG Turbine Factory, and made full use of newly developed materials such as poured concrete and exposed steel. Behrens was a founding member of the Werkbund, and both Walter Gropius and Adolf Meyer worked for him in this period.

The Bauhaus was founded at a time when the German zeitgeist had turned from emotional Expressionism to the matter-of-fact New Objectivity. An entire group of working architects, including Erich Mendelsohn, Bruno Taut and Hans Poelzig, turned away from fanciful experimentation and towards rational, functional, sometimes standardized building. Beyond the Bauhaus, many other significant German-speaking architects in the 1920s responded to the same aesthetic issues and material possibilities as the school. They also responded to the promise "to promote the object of assuring to every German a healthful habitation" written into the new Weimar Constitution (Article 155). Ernst May, Bruno Taut and Martin Wagner, among others, built large housing blocks in Frankfurt and Berlin. The acceptance of modernist design into everyday life was the subject of publicity campaigns, well-attended public exhibitions like the Weissenhof Estate, films, and sometimes fierce public debate.

Bauhaus and Vkhutemas

The Vkhutemas, the Russian state art and technical school founded in 1920 in Moscow, has been compared to the Bauhaus. Founded a year after the Bauhaus school, Vkhutemas has close parallels to the German Bauhaus in its intent, organization and scope.
The two schools were the first to train artist-designers in a modern manner. Both schools were state-sponsored initiatives to merge traditional craft with modern technology, with a basic course in aesthetic principles, courses in colour theory, industrial design, and architecture. Vkhutemas was a larger school than the Bauhaus, but it was less publicised outside the Soviet Union and consequently is less familiar in the West.

With the internationalism of modern architecture and design, there were many exchanges between the Vkhutemas and the Bauhaus. The second Bauhaus director, Hannes Meyer, attempted to organise an exchange between the two schools, while Hinnerk Scheper of the Bauhaus collaborated with various Vkhutein members on the use of colour in architecture. In addition, El Lissitzky's book Russia: an Architecture for World Revolution, published in German in 1930, featured several illustrations of Vkhutemas/Vkhutein projects.

History of the Bauhaus

Weimar

The school was founded by Walter Gropius in Weimar on 1 April 1919, as a merger of the Grand-Ducal Saxon Academy of Fine Art and the Grand-Ducal Saxon School of Arts and Crafts for a newly affiliated architecture department. Its roots lay in the arts and crafts school founded by the Grand Duke of Saxe-Weimar-Eisenach in 1906, and directed by Belgian Art Nouveau architect Henry van de Velde. When van de Velde was forced to resign in 1915 because he was Belgian, he suggested Gropius, Hermann Obrist, and August Endell as possible successors. In 1919, after delays caused by World War I and a lengthy debate over who should head the institution and the socio-economic meanings of a reconciliation of the fine arts and the applied arts (an issue which remained a defining one throughout the school's existence), Gropius was made the director of a new institution integrating the two, called the Bauhaus.

In the pamphlet for an April 1919 exhibition entitled Exhibition of Unknown Architects, Gropius, still very much under the influence of William Morris and the British Arts and Crafts Movement, proclaimed his goal as being "to create a new guild of craftsmen, without the class distinctions which raise an arrogant barrier between craftsman and artist." Gropius's neologism Bauhaus references both building and the Bauhütte, a premodern guild of stonemasons. The early intention was for the Bauhaus to be a combined architecture school, crafts school, and academy of the arts.

Swiss painter Johannes Itten, German-American painter Lyonel Feininger, and German sculptor Gerhard Marcks, along with Gropius, made up the faculty of the Bauhaus in 1919. By the following year their ranks had grown to include German painter, sculptor, and designer Oskar Schlemmer, who headed the theatre workshop, and Swiss painter Paul Klee, joined in 1922 by Russian painter Wassily Kandinsky. The first major joint project completed by the Bauhaus was the Sommerfeld House, which was built between 1920 and 1921. A tumultuous year at the Bauhaus, 1922 also saw the move of Dutch painter Theo van Doesburg to Weimar to promote De Stijl ("The Style"), and a visit to the Bauhaus by Russian Constructivist artist and architect El Lissitzky.

From 1919 to 1922 the school was shaped by the pedagogical and aesthetic ideas of Johannes Itten, who taught the Vorkurs or "preliminary course" that was the introduction to the ideas of the Bauhaus. Itten was heavily influenced in his teaching by the ideas of Franz Cižek and Friedrich Wilhelm August Fröbel.
He was also influenced with respect to aesthetics by the work of the Der Blaue Reiter group in Munich, as well as the work of Austrian Expressionist Oskar Kokoschka. The influence of German Expressionism favoured by Itten was analogous in some ways to the fine arts side of the ongoing debate. This influence culminated with the addition of Der Blaue Reiter founding member Wassily Kandinsky to the faculty and ended when Itten resigned in late 1923. Itten was replaced by the Hungarian designer László Moholy-Nagy, who rewrote the Vorkurs with a leaning towards the New Objectivity favoured by Gropius, which was analogous in some ways to the applied arts side of the debate. Although this shift was an important one, it did not represent a radical break from the past so much as a small step in a broader, more gradual socio-economic movement that had been going on at least since 1907, when van de Velde had argued for a craft basis for design while Hermann Muthesius had begun implementing industrial prototypes.

Gropius was not necessarily against Expressionism, and in the same 1919 pamphlet proclaiming this "new guild of craftsmen, without the class snobbery", described "painting and sculpture rising to heaven out of the hands of a million craftsmen, the crystal symbol of the new faith of the future." By 1923, however, Gropius was no longer evoking images of soaring Romanesque cathedrals and the craft-driven aesthetic of the "Völkisch movement", instead declaring "we want an architecture adapted to our world of machines, radios and fast cars." Gropius argued that a new period of history had begun with the end of the war. He wanted to create a new architectural style to reflect this new era. His style in architecture and consumer goods was to be functional, cheap and consistent with mass production. To these ends, Gropius wanted to reunite art and craft to arrive at high-end functional products with artistic merit. The Bauhaus issued a magazine called Bauhaus and a series of books called "Bauhausbücher".

Since the Weimar Republic lacked the quantity of raw materials available to the United States and Great Britain, it had to rely on the proficiency of a skilled labour force and an ability to export innovative and high-quality goods. Therefore, designers were needed, and so was a new type of art education. The school's philosophy stated that the artist should be trained to work with industry.

Weimar was in the German state of Thuringia, and the Bauhaus school received state support from the Social Democrat-controlled Thuringian state government. The school in Weimar experienced political pressure from conservative circles in Thuringian politics, increasingly so after 1923 as political tension rose. One condition placed on the Bauhaus in this new political environment was the exhibition of work undertaken at the school. This condition was met in 1923 with the Bauhaus's exhibition of the experimental Haus am Horn. The Ministry of Education placed the staff on six-month contracts and cut the school's funding in half. The Bauhaus issued a press release on 26 December 1924, setting the closure of the school for the end of March 1925. At this point it had already been looking for alternative sources of funding. After the Bauhaus moved to Dessau, a school of industrial design with teachers and staff less antagonistic to the conservative political regime remained in Weimar.
This school was eventually known as the Technical University of Architecture and Civil Engineering, and in 1996 changed its name to Bauhaus-University Weimar.

Dessau

The Bauhaus moved to Dessau in 1925 and new facilities there were inaugurated in late 1926. Gropius's design for the Dessau facilities was a return to the futuristic Gropius of 1914 that had more in common with the International Style lines of the Fagus Factory than the stripped-down Neo-classicism of the Werkbund pavilion or the Völkisch Sommerfeld House. During the Dessau years, there was a remarkable change in direction for the school. According to Elaine Hochman, Gropius had approached the Dutch architect Mart Stam to run the newly founded architecture program, and when Stam declined the position, Gropius turned to Stam's friend and colleague in the ABC group, Hannes Meyer.

Meyer became director when Gropius resigned in February 1928, and brought the Bauhaus its two most significant building commissions, both of which still exist: five apartment buildings in the city of Dessau, and the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer favoured measurements and calculations in his presentations to clients, along with the use of off-the-shelf architectural components to reduce costs. This approach proved attractive to potential clients. The school turned its first profit under his leadership in 1929.

But Meyer also generated a great deal of conflict. As a radical functionalist, he had no patience with the aesthetic program and forced the resignations of Herbert Bayer, Marcel Breuer, and other long-time instructors. Even though Meyer shifted the orientation of the school further to the left than it had been under Gropius, he did not want the school to become a tool of left-wing party politics. He prevented the formation of a student communist cell, and in the increasingly dangerous political atmosphere, this became a threat to the existence of the Dessau school. Dessau mayor Fritz Hesse fired him in the summer of 1930.

The Dessau city council attempted to convince Gropius to return as head of the school, but Gropius instead suggested Ludwig Mies van der Rohe. Mies was appointed in 1930 and immediately interviewed each student, dismissing those whom he deemed uncommitted. He halted the school's manufacture of goods so that the school could focus on teaching, and appointed no new faculty other than his close confidant Lilly Reich. By 1931, the Nazi Party was becoming more influential in German politics. When it gained control of the Dessau city council, it moved to close the school.

Berlin

In late 1932, Mies rented a derelict factory in Berlin (Birkbusch Street 49) with his own money to use as the new Bauhaus. The students and faculty rehabilitated the building, painting the interior white. The school operated for ten months without further interference from the Nazi Party. In 1933, the Gestapo closed down the Berlin school. Mies protested the decision, eventually speaking to the head of the Gestapo, who agreed to allow the school to re-open. However, shortly after receiving a letter permitting the opening of the Bauhaus, Mies and the other faculty agreed to voluntarily shut down the school.
Although neither the Nazi Party nor Adolf Hitler had a cohesive architectural policy before they came to power in 1933, Nazi writers like Wilhelm Frick and Alfred Rosenberg had already labelled the Bauhaus "un-German" and criticized its modernist styles, deliberately generating public controversy over issues like flat roofs. Increasingly through the early 1930s, they characterized the Bauhaus as a front for communists and social liberals. Indeed, when Meyer was fired in 1930, a number of communist students loyal to him moved to the Soviet Union.

Even before the Nazis came to power, political pressure on the Bauhaus had increased. The Nazi movement, from nearly the start, denounced the Bauhaus for its "degenerate art", and the Nazi regime was determined to crack down on what it saw as the foreign, probably Jewish, influences of "cosmopolitan modernism". Despite Gropius's protestations that as a war veteran and a patriot his work had no subversive political intent, the Berlin Bauhaus was pressured to close in April 1933. Emigrants did succeed, however, in spreading the concepts of the Bauhaus to other countries, including the "New Bauhaus" of Chicago: Mies decided to emigrate to the United States for the directorship of the School of Architecture at the Armour Institute (now Illinois Institute of Technology) in Chicago and to seek building commissions. The simple engineering-oriented functionalism of stripped-down modernism, however, did lead to some Bauhaus influences living on in Nazi Germany. When Hitler's chief engineer, Fritz Todt, began opening the new autobahns (highways) in 1935, many of the bridges and service stations were "bold examples of modernism", and among those submitting designs was Mies van der Rohe.

Architectural output

The paradox of the early Bauhaus was that, although its manifesto proclaimed that the aim of all creative activity was building, the school did not offer classes in architecture until 1927. During the years under Gropius (1919–1927), he and his partner Adolf Meyer observed no real distinction between the output of his architectural office and the school. The built output of Bauhaus architecture in these years is the output of Gropius: the Sommerfeld house in Berlin, the Otte house in Berlin, the Auerbach house in Jena, and the competition design for the Chicago Tribune Tower, which brought the school much attention. The definitive 1926 Bauhaus building in Dessau is also attributed to Gropius. Apart from contributions to the 1923 Haus am Horn, student architectural work amounted to unbuilt projects, interior finishes, and craft work like cabinets, chairs and pottery.

In the next two years under Meyer, the architectural focus shifted away from aesthetics and towards functionality. There were major commissions: one from the city of Dessau for five tightly designed "Laubenganghäuser" (apartment buildings with balcony access), which are still in use today, and another for the Bundesschule des Allgemeinen Deutschen Gewerkschaftsbundes (ADGB Trade Union School) in Bernau bei Berlin. Meyer's approach was to research users' needs and scientifically develop the design solution. He intended to place emphasis on Gropius's objective analysis of the properties determining an object's use value, known as Wesensforschung. Gropius believed that it was possible to design exemplary products of universal validity that should be standardized. Mies van der Rohe repudiated Meyer's politics, his supporters, and his architectural approach.
As opposed to Gropius's "study of essentials", and Meyer's research into user requirements, Mies advocated a "spatial implementation of intellectual decisions", which effectively meant an adoption of his own aesthetics. Neither Mies van der Rohe nor his Bauhaus students saw any projects built during the 1930s.

The Bauhaus movement was not focused on developing worker housing. Only two projects, the apartment building project in Dessau and the Törten row housing, fall into the worker housing category. It was the Bauhaus contemporaries Bruno Taut, Hans Poelzig and particularly Ernst May, as the city architects of Berlin, Dresden and Frankfurt respectively, who are rightfully credited with the thousands of socially progressive housing units built in Weimar Germany. The housing Taut built in south-west Berlin during the 1920s, close to the U-Bahn stop Onkel Toms Hütte, is still occupied.

Impact

The Bauhaus had a major impact on art and architecture trends in Western Europe, Canada, the United States and Israel in the decades following its demise, as many of the artists involved fled, or were exiled by, the Nazi regime. In 1996, four of the major sites associated with the Bauhaus in Germany were inscribed on the UNESCO World Heritage List (with two more added in 2017).

In 1928, the Hungarian painter Alexander Bortnyik founded a school of design in Budapest called Műhely, which means "the studio". Located on the seventh floor of a house on Nagymezo Street, it was meant to be the Hungarian equivalent to the Bauhaus. The literature sometimes refers to it—in an oversimplified manner—as "the Budapest Bauhaus". Bortnyik was a great admirer of László Moholy-Nagy and had met Walter Gropius in Weimar between 1923 and 1925. Moholy-Nagy himself taught at the Műhely. Victor Vasarely, a pioneer of op art, studied at this school before settling in Paris in 1930.

Walter Gropius, Marcel Breuer, and Moholy-Nagy re-assembled in Britain during the mid-1930s and lived and worked in the Isokon housing development in Lawn Road in London before the war caught up with them. Gropius and Breuer went on to teach at the Harvard Graduate School of Design and worked together before their professional split. Their collaboration produced, among other projects, the Aluminum City Terrace in New Kensington, Pennsylvania and the Alan I W Frank House in Pittsburgh. The Harvard School was enormously influential in America in the 1940s and early 1950s, producing such students as Philip Johnson, I. M. Pei, Lawrence Halprin and Paul Rudolph, among many others.

In the late 1930s, Mies van der Rohe re-settled in Chicago, enjoyed the sponsorship of the influential Philip Johnson, and became one of the world's pre-eminent architects. Moholy-Nagy also went to Chicago and founded the New Bauhaus school under the sponsorship of industrialist and philanthropist Walter Paepcke. This school became the Institute of Design, part of the Illinois Institute of Technology. Printmaker and painter Werner Drewes was also largely responsible for bringing the Bauhaus aesthetic to America and taught at both Columbia University and Washington University in St. Louis. Herbert Bayer, sponsored by Paepcke, moved to Aspen, Colorado in support of Paepcke's Aspen projects at the Aspen Institute.

In 1953, Max Bill, together with Inge Aicher-Scholl and Otl Aicher, founded the Ulm School of Design (German: Hochschule für Gestaltung – HfG Ulm) in Ulm, Germany, a design school in the tradition of the Bauhaus.
The school is notable for its inclusion of semiotics as a field of study. The school closed in 1968, but the "Ulm Model" concept continues to influence international design education. Another series of projects at the school was the Bauhaus typefaces, mostly realized in the decades afterward.

The influence of the Bauhaus on design education was significant. One of the main objectives of the Bauhaus was to unify art, craft, and technology, and this approach was incorporated into the curriculum of the Bauhaus. The structure of the Bauhaus Vorkurs (preliminary course) reflected a pragmatic approach to integrating theory and application. In their first year, students learnt the basic elements and principles of design and colour theory, and experimented with a range of materials and processes. This approach to design education became a common feature of architectural and design schools in many countries.

For example, the Shillito Design School in Sydney stands as a unique link between Australia and the Bauhaus. The colour and design syllabus of the Shillito Design School was firmly underpinned by the theories and ideologies of the Bauhaus. Its first-year foundational course mimicked the Vorkurs and focused on the elements and principles of design plus colour theory and application. The founder of the school, Phyllis Shillito, who opened it in 1962 and closed it in 1980, firmly believed that "A student who has mastered the basic principles of design, can design anything from a dress to a kitchen stove".

In Britain, largely under the influence of painter and teacher William Johnstone, Basic Design, a Bauhaus-influenced art foundation course, was introduced at Camberwell School of Art and the Central School of Art and Design, whence it spread to all art schools in the country, becoming universal by the early 1960s.

One of the most important contributions of the Bauhaus is in the field of modern furniture design. The characteristic Cantilever chair and Wassily Chair designed by Marcel Breuer are two examples. (Breuer eventually lost a legal battle in Germany with Dutch architect/designer Mart Stam over patent rights to the cantilever chair design. Although Stam had worked on the design of the Bauhaus's 1923 exhibit in Weimar, and guest-lectured at the Bauhaus later in the 1920s, he was not formally associated with the school, and he and Breuer had worked independently on the cantilever concept, leading to the patent dispute.) The most profitable product of the Bauhaus was its wallpaper.

The physical plant at Dessau survived World War II and was operated as a design school with some architectural facilities by the German Democratic Republic. This included live stage productions in the Bauhaus theatre under the name of Bauhausbühne ("Bauhaus Stage"). After German reunification, a reorganized school continued in the same building, with no essential continuity with the Bauhaus under Gropius in the early 1920s. In 1979 Bauhaus-Dessau College started to organize postgraduate programs with participants from all over the world. This effort has been supported by the Bauhaus-Dessau Foundation, which was founded in 1974 as a public institution.

Later evaluation of the Bauhaus design credo was critical of its flawed recognition of the human element, an acknowledgment of "the dated, unattractive aspects of the Bauhaus as a projection of utopia marked by mechanistic views of human nature…Home hygiene without home atmosphere."
Subsequent examples which have continued the philosophy of the Bauhaus include Black Mountain College, the Hochschule für Gestaltung in Ulm and Domaine de Boisbuchet.

The White City

The White City (Hebrew: העיר הלבנה) refers to a collection of over 4,000 buildings built in the Bauhaus or International Style in Tel Aviv from the 1930s by German Jewish architects who emigrated to the British Mandate of Palestine after the rise of the Nazis. Tel Aviv has the largest number of buildings in the Bauhaus/International Style of any city in the world. Preservation, documentation, and exhibitions have brought attention to Tel Aviv's collection of 1930s architecture. In 2003, the United Nations Educational, Scientific and Cultural Organization (UNESCO) proclaimed Tel Aviv's White City a World Cultural Heritage site, as "an outstanding example of new town planning and architecture in the early 20th century." The citation recognized the unique adaptation of modern international architectural trends to the cultural, climatic, and local traditions of the city. Bauhaus Center Tel Aviv organizes regular architectural tours of the city, and the Bauhaus Foundation offers Bauhaus exhibits.

Centenary

To mark the centenary of the founding of the Bauhaus, several events, festivals, and exhibitions were held around the world in 2019. The international opening festival at the Berlin Academy of the Arts from 16 to 24 January concentrated on "the presentation and production of pieces by contemporary artists, in which the aesthetic issues and experimental configurations of the Bauhaus artists continue to be inspiringly contagious". Original Bauhaus, The Centenary Exhibition at the Berlinische Galerie (6 September 2019 to 27 January 2020), presented 1,000 original artefacts from the Bauhaus-Archive's collection and recounted the history behind the objects. The Bauhaus Museum Dessau also opened in September 2019, operated by the Bauhaus Dessau Foundation and funded by the State of Saxony-Anhalt and the German Federal government. It is set to be the permanent home of the second-largest Bauhaus collection, at 49,000 objects, while paying homage to the school's strong influence in the city, where the Bauhaus arrived in 1925.

In 2024, the German far-right party Alternative for Germany (AfD) sought to attack celebrations of the Bauhaus because of its view that the Bauhaus did not follow tradition. The Bauhaus had also been suppressed by the Nazis before World War II, and according to political scientist Jan-Werner Mueller, the AfD's condemnation seeks to use it in a culture war of far-right provocation.

The New European Bauhaus

In September 2020, President of the European Commission Ursula von der Leyen introduced the New European Bauhaus (NEB) initiative during her State of the Union address. The NEB is a creative and interdisciplinary movement that connects the European Green Deal to everyday life. It is a platform for experimentation aiming to unite citizens, experts, businesses and institutions in imagining and designing a sustainable, aesthetic and inclusive future.

Sport and physical activity were an essential part of the original Bauhaus approach. Hannes Meyer, the second director of Bauhaus Dessau, ensured that one day a week was solely devoted to sport and gymnastics. In 1930, Meyer employed two physical education teachers. The Bauhaus school even applied for public funds to enhance its playing field. The inclusion of sport and physical activity in the Bauhaus curriculum had various purposes.
First, as Meyer put it, sport combatted a "one-sided emphasis on brainwork." In addition, Bauhaus instructors believed that students could better express themselves if they actively experienced the space, rhythms and movements of the body. The Bauhaus approach also considered physical activity an important contributor to wellbeing and community spirit. Sport and physical activity were essential to the interdisciplinary Bauhaus movement that developed revolutionary ideas and continues to shape our environments today.

Bauhaus staff and students

People who were educated, or who taught or worked in other capacities, at the Bauhaus.

See also

Art Deco architecture
Bauhaus Archive
Bauhaus Center Tel Aviv
Bauhaus Dessau Foundation
Bauhaus Museum, Tel Aviv
Bauhaus Museum, Weimar
Bauhaus Museum, Dessau
Bauhaus Project (computing)
Bauhaus World Heritage Site
Constructivist architecture
Expressionist architecture
Form follows function
Haus am Horn
IIT Institute of Design
International style (architecture)
Lucia Moholy
Max-Liebling House, Tel Aviv
Modern architecture
Neues Sehen (New Vision)
New Objectivity (architecture)
Swiss Style (design)
Ulm School of Design
Vkhutemas
Women of the Bauhaus

Explanatory footnotes

The closure, and the response of Mies van der Rohe, is fully documented in Elaine Hochman's Architects of Fortune.
Google honored Bauhaus for its 100th anniversary on 12 April 2019 with a Google Doodle.

External links

Bauhaus Everywhere — Google Arts & Culture
Collection: Artists of the Bauhaus from the University of Michigan Museum of Art
Bauhaus
Engineering
6,842