51,347,632
https://en.wikipedia.org/wiki/Xiangning%20Zhang
Xiangning "Forrest" Zhang 张向宁 (born in 1972) is a Chinese information technology entrepreneur, angel investor, and venture capitalist. He is among the first generation of internet entrepreneurs in China. Zhang is the founder, former Chairman and CEO of HiChina Corporation (now Alibaba Cloud) as well as the Chairman and CEO of Tixa Internet Technology Corporation. Zhang foresaw the potential of the internet and provided domain name registration and web hosting services from the end of 1995, leading HiChina to earn the largest share of the Chinese market in this field. Later he started his second company, Tixa, and continued to innovate while holding positions in organizations such as the All-China Youth Federation and the China Council for International Investment Promotion. Early life Zhang enrolled in Beijing Normal University at 16 years old. A paper on the principle of relativity that he wrote at 17 impressed a professor at the Massachusetts Institute of Technology (MIT), but Zhang nevertheless chose to leave school at 18. Professional life Between 1990 and 1994, Zhang worked as a regular staff member and gained operational experience and an understanding of corporate management. His work spanned industries such as international trade, shipping, tourism, bid management, equipment development, computer exhibitions, and more. In 1994, Zhang foresaw the potential of the Internet. Initially interested in the Bulletin Board System (BBS), he started to create one of his own. He eventually decided to suspend all of his other businesses and focus on the Internet industry. At the end of 1995, Zhang founded HiChina Co. (Chinese: 万网), which was to become the largest domain name registration and web hosting service company in China. 
In January 2000, Zhang initiated the "Net.cn Plan", associating with Sina, Sohu, NetEase, Changhong, Kelon, Computer World, China Info World (CIW), the China Internet Network Information Center, the China Information Association, and 37 other notable companies and organizations, and created the Internet Services for China Businesses Alliance (Chinese: 中国企业上网服务联盟). Zhang also announced that the year 2000 would be China's Year of Internet Utilization for Businesses (Chinese: 企业上网年). During HiChina's development, Zhang led the company through two rounds of fundraising with major investors including IDG Ventures, TPG Newbridge, and others. In 1998, Zhang completed his Master's degree at Huazhong University of Science and Technology. In November 2001, Zhang stepped down from his position at HiChina, remaining only as a shareholder, and collaborated with long-time friends to found a new company, "VeryE.com". In February 2004, VeryE obtained investment from Sumitomo Mitsui Banking Corporation, Japan Asia Investment Co. (JAIC), and MIH Investments, and became Tixa Co. (WWW.TIXA.COM). In September 2009, Alibaba acquired HiChina and later built it into Alibaba Cloud (Aliyun), maintaining its position as the top domain name registration and web hosting service company in China and later as a leader in cloud services. Notable investments Visual China Group 视觉中国 XiMaLaYa FM 喜马拉雅 ShenZhouRong 神州融 Hylanda 海量信息 HeYinLiang 合音量 Other positions Investment and Financing Committee (CIFC) Distinguished Guest of the China Internet Conference Honors and awards Tixa Co. – Top 100 Innovative Companies in Asia (2005, Red Herring Asia) Tixa Co. 
– Innovator 50 (2005, Internet Society of China) Best Investors in China (2006, Forbes) Most Influential Angel Investors in China (2007, The 1st China PE Influence Awards) Most Active Angel Investors in China (2007) References 1972 births Living people Businesspeople from Beijing Beijing Normal University alumni Huazhong University of Science and Technology alumni Chinese venture capitalists People in information technology
Xiangning Zhang
Technology
809
60,544,290
https://en.wikipedia.org/wiki/P.%20A.%20L.%20Chapman-Rietschi
Peter Albert Leslie Chapman-Rietschi (1945–2017) was an independent scholar and research writer in the field of history of astronomy, ancient astral sciences, archaeoastronomy, and astrobiology, including bioastronomy and SETI. He was a Fellow of the Royal Astronomical Society and in former years also Fellow of the Royal Astronomical Society of Canada and Member of the Egypt Exploration Society. Publications Pre-telescopic Astronomy: Invisible 'Planets' Rahu and Ketu. Quarterly Journal of the Royal Astronomical Society, 32, 53–55, 1991 The Plurality of Worlds. The Observatory, 111, 312, 1991 The Frontiers of Life. The Observatory, 112, 145, 1992 Nonclassical SETI. The Observatory, 114, 175, 1994 The Beijing Ancient Observatory and Intercultural Contacts. The Journal of the Royal Astronomical Society of Canada, 88, 24–38, 1994 The Colour of Sirius in Ancient Times. Quarterly Journal of the Royal Astronomical Society, 36, 337–350, 1995 Astronomers and Missionaries in Old Beijing. Quarterly Journal of the Royal Astronomical Society, 36, 273–274, 1995 The Privatized World of SETI. The Observatory, 115, 135, 1995 The Star seen in the East. The Observatory, 115, 329–330, 1995 Venus as the Star of Bethlehem. Quarterly Journal of the Royal Astronomical Society, 37, 843–844, 1996 Astronomical Conceptions in Mithraic Iconography. The Journal of the Royal Astronomical Society of Canada, 91, 133–134, 1997 SETI, Forty Years on. The Observatory, 120, 403–404, 2000 The beginnings of SETI. Astronomy & Geophysics, 44, 1.7, 2003 Astrobiology. The Observatory, 127, 191, 2007 The First SETI Scans. The Observatory, 130, 172–173, 2010 Factor L of the Drake Equation. The Observatory, 131, 391–392, 2011 The Colour Black and the Planet Saturn. The Observatory, 133, 41–42, 2013 Book reviews Extraterrestrials. Where Are They?, B. Zuckerman and M.H. Hart. Cambridge University Press. The Observatory, 116, 182–183, 1996 The Sirius Mystery, R. Temple. Century London. 
The Observatory, 118, 245–246, 1998 Astrobiology: Future Perspectives, P. Ehrenfreund et al. Kluwer-Springer Dordrecht. The Observatory, 125, 278–279, 2005 Contact with Alien Civilisations, M.A.G. Michaud. Springer Heidelberg. The Observatory, 127, 341–342, 2007 The Living Cosmos, C. Impey. Cambridge University Press. The Observatory, 132, 45–46, 2012 We are the Martians: Connecting Cosmology with Biology, G.F. Bignami. Springer Heidelberg. The Observatory, 133, 108–109, 2013 Signatures of Life: Science Searches the Universe, E. Ashpole. Prometheus Amherst. The Observatory, 133, 370, 2013 Elephants in Space: The Past, Present and Future of Life and the Universe, B. Moore. Springer Heidelberg. The Observatory, 135, 108–109, 2015 Astrobiological Neurosystems, J.L. Cranford. Springer Heidelberg. The Observatory, 136, 93–95, 2016 The Hunt for Alien Life: A Wider Perspective, P. Linde. Springer Heidelberg. The Observatory, 137, 86–87, 2017 Conference Papers Frontiers of Life. 3rd 'Rencontres de Blois', October 1991 (summary of bioastronomy talks). The Observatory, 112, 145–147, 1992 Philosophy, Star Transformations and Okeanos. 3rd Conference of the 'International Association Cosmos and Philosophy' (IACP) 1991, Mytilene. In: Diotima, Institut de Philosophie de l'Université d'Athènes, J. Vrin, Paris, 21, 83–86, 1993. Sappho and the Astral Sciences, co-work with Anne Chapman-Rietschi. In: 14th Conference of the 'Société Européenne pour l'Astronomie dans la Culture' (SEAC) 2006, Rhodes and 17th IACP 2007, Athens. In: Ordre et Liberté: L'Univers Cosmique et Humain, I.P.R. Athènes, 123–128, 2011, and In: Philosophia, Academy of Athens, 42, 75–79, 2012 Catherine of Alexandria and the Art of Sacred Astronomy, co-work with Anne Chapman-Rietschi. In: 19th IACP 2010, Athens. In: Ordre et Liberté: L'Univers Cosmique et Humain, I.P.R. Athènes, 159–167, 2011, and In: Diotima, Institut de Philosophie de l'Université d'Athènes, J. Vrin, Paris, 41, 153–161, 2013. 
Other Worlds, Other Life. 11th IACP 2000, Prague. In: Cosmological Viewpoints, St. Kliment Ohridski University Press, Sofia, 138–140, 2015. ETI and Humankind. 12th IACP 2001, Aegina. In: Cosmological Viewpoints, St. Kliment Ohridski University Press, Sofia, 189–192, 2015. Key to conferences IACP = International Association Cosmos and Philosophy INSAP = The Inspiration of Astronomical Phenomena SEAC = Société Européenne pour l'Astronomie dans la Culture References 1945 births 2017 deaths Historians of astronomy Fellows of the Royal Astronomical Society
P. A. L. Chapman-Rietschi
Astronomy
1,161
12,150,693
https://en.wikipedia.org/wiki/Photoreceptor%20cell-specific%20nuclear%20receptor
The photoreceptor cell-specific nuclear receptor (PNR), also known as NR2E3 (nuclear receptor subfamily 2, group E, member 3), is a protein that in humans is encoded by the NR2E3 gene. PNR is a member of the nuclear receptor superfamily of intracellular transcription factors. Function PNR is exclusively expressed in the retina. The main target genes of PNR are rhodopsin and several opsins, which are essential for sight. Structure and ligands The crystal structure of PNR's ligand-binding domain is known. It homodimerizes and, by default, adopts a repressor state. Computer simulations based on this model show that a ligand could possibly fit into PNR and switch it into a transcriptional activator. 13-cis retinoic acid is a known weak agonist that fits into such a pocket, but no physiological ligand is known. Two synthetic compounds, 11A and 11B, appear to be agonists but do not enter the pocket, instead working as allosteric modulators. A more recent screening identified another compound, photoregulin-1 (PR1), that functions as an inverse agonist, an activity possibly useful in the management of retinitis pigmentosa. Clinical significance Mutations in the NR2E3 gene have been linked to several inherited retinal diseases, including enhanced S-cone syndrome (ESCS), a form of retinitis pigmentosa, and Goldmann–Favre syndrome. References Further reading External links Intracellular receptors Transcription factors
Photoreceptor cell-specific nuclear receptor
Chemistry,Biology
331
837,770
https://en.wikipedia.org/wiki/Entropy%20of%20fusion
In thermodynamics, the entropy of fusion is the increase in entropy when melting a solid substance. This is almost always positive since the degree of disorder increases in the transition from an organized crystalline solid to the disorganized structure of a liquid; the only known exception is helium. It is denoted as $\Delta S_{\text{fus}}$ and normally expressed in joules per mole-kelvin, J/(mol·K). A natural process such as a phase transition will occur when the associated change in the Gibbs free energy is negative: $\Delta G_{\text{fus}} = \Delta H_{\text{fus}} - T\,\Delta S_{\text{fus}} < 0$, where $\Delta H_{\text{fus}}$ is the enthalpy of fusion. Since this is a thermodynamic equation, the symbol $T$ refers to the absolute thermodynamic temperature, measured in kelvins (K). Equilibrium occurs when the temperature is equal to the melting point $T_f$, so that $\Delta G_{\text{fus}} = 0$, and the entropy of fusion is the heat of fusion divided by the melting point: $\Delta S_{\text{fus}} = \Delta H_{\text{fus}} / T_f$. Helium Helium-3 has a negative entropy of fusion at temperatures below 0.3 K. Helium-4 also has a very slightly negative entropy of fusion below 0.8 K. This means that, at appropriate constant pressures, these substances freeze with the addition of heat. See also Entropy of vaporization Notes References Thermodynamic entropy Thermodynamic properties
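The relation "heat of fusion divided by the melting point" is easy to check numerically. A minimal sketch in Python, using water's tabulated enthalpy of fusion (about 6.01 kJ/mol) and melting point (273.15 K); these values are brought in for illustration and are not from the text above:

```python
def entropy_of_fusion(delta_h_fus, t_melt):
    """Entropy of fusion in J/(mol*K), given the enthalpy of fusion
    delta_h_fus in J/mol and the melting point t_melt in kelvins."""
    return delta_h_fus / t_melt

# Water: delta_H_fus ~ 6010 J/mol at T_f = 273.15 K
ds_water = entropy_of_fusion(6010.0, 273.15)
print(round(ds_water, 1))  # ~22.0 J/(mol*K), positive as expected for melting
```

A negative entropy of fusion, as in helium-3 below 0.3 K, would correspond to a negative enthalpy of fusion here: heat is released on melting, so the solid freezes when heat is added.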
Entropy of fusion
Physics,Chemistry,Mathematics
251
9,089,819
https://en.wikipedia.org/wiki/Buckley%E2%80%93Leverett%20equation
In fluid dynamics, the Buckley–Leverett equation is a conservation equation used to model two-phase flow in porous media. The Buckley–Leverett equation or the Buckley–Leverett displacement describes an immiscible displacement process, such as the displacement of oil by water, in a one-dimensional or quasi-one-dimensional reservoir. This equation can be derived from the mass conservation equations of two-phase flow, under the assumptions listed below. Equation In a quasi-1D domain, the Buckley–Leverett equation is given by: $\frac{\partial S_w}{\partial t} + \frac{Q}{\phi A} \frac{\partial f_w(S_w)}{\partial x} = 0$, where $S_w$ is the wetting-phase (water) saturation, $Q$ is the total flow rate, $\phi$ is the rock porosity, $A$ is the area of the cross-section in the sample volume, and $f_w$ is the fractional flow function of the wetting phase. Typically, $f_w$ is an S-shaped, nonlinear function of the saturation $S_w$, which characterizes the relative mobilities of the two phases: $f_w(S_w) = \frac{\lambda_w}{\lambda_w + \lambda_n} = \frac{k_{rw}/\mu_w}{k_{rw}/\mu_w + k_{rn}/\mu_n}$, where $\lambda_w$ and $\lambda_n$ denote the wetting and non-wetting phase mobilities, $k_{rw}$ and $k_{rn}$ denote the relative permeability functions of each phase, and $\mu_w$ and $\mu_n$ represent the phase viscosities. Assumptions The Buckley–Leverett equation is derived based on the following assumptions: Flow is linear and horizontal Both wetting and non-wetting phases are incompressible Immiscible phases Negligible capillary pressure effects (this implies that the pressures of the two phases are equal) Negligible gravitational forces General solution The characteristic velocity of the Buckley–Leverett equation is given by: $U(S_w) = \frac{Q}{\phi A} \frac{\mathrm{d} f_w}{\mathrm{d} S_w}$. The hyperbolic nature of the equation implies that the solution of the Buckley–Leverett equation has the form $S_w(x, t) = S_w(x - U t)$, where $U$ is the characteristic velocity given above. The non-convexity of the fractional flow function also gives rise to the well-known Buckley–Leverett profile, which consists of a shock wave immediately followed by a rarefaction wave. 
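The S-shaped fractional flow function and the resulting characteristic speed can be illustrated numerically. The sketch below assumes simple quadratic (Corey-type) relative permeability curves, krw = Sw^2 and krn = (1 - Sw)^2, and illustrative viscosities; these model choices and parameter values are assumptions for demonstration, not values from the text:

```python
def fractional_flow(sw, mu_w=1.0, mu_n=2.0):
    """Fractional flow f_w = lambda_w / (lambda_w + lambda_n), using an
    assumed quadratic (Corey-type) relative permeability model."""
    krw = sw ** 2              # wetting-phase relative permeability
    krn = (1.0 - sw) ** 2      # non-wetting-phase relative permeability
    lam_w = krw / mu_w         # wetting-phase mobility
    lam_n = krn / mu_n         # non-wetting-phase mobility
    return lam_w / (lam_w + lam_n)

def characteristic_speed(sw, q=1.0, phi=0.2, area=1.0, h=1e-6):
    """U(S_w) = (Q / (phi * A)) * df_w/dS_w, with the derivative
    approximated by a central finite difference."""
    dfds = (fractional_flow(sw + h) - fractional_flow(sw - h)) / (2.0 * h)
    return q / (phi * area) * dfds

# f_w rises monotonically from 0 to 1 as the wetting-phase saturation grows
print(fractional_flow(0.0), fractional_flow(1.0))  # 0.0 1.0
```

Because f_w is S-shaped, the characteristic speed is non-monotonic in saturation, which is exactly what produces the shock-plus-rarefaction Buckley–Leverett profile described above.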
See also Capillary pressure Permeability (fluid) Relative permeability Darcy's law References External links Buckley-Leverett Equation and Uses in Porous Media Conservation equations Equations of fluid dynamics
Buckley–Leverett equation
Physics,Chemistry,Mathematics
418
77,567,392
https://en.wikipedia.org/wiki/Low-gravity%20process%20engineering
Low-gravity process engineering is a specialized field that focuses on the design, development, and optimization of industrial processes and manufacturing techniques in environments with reduced gravitational forces. This discipline encompasses a wide range of applications, from microgravity conditions experienced in Earth orbit to the partial gravity environments found on celestial bodies such as the Moon and Mars. As humanity extends its reach beyond Earth, the ability to efficiently produce materials, manage fluids, and conduct chemical processes in reduced gravity becomes crucial for sustained space missions and potential colonization efforts. Furthermore, the unique conditions of microgravity offer opportunities for novel materials and pharmaceuticals that cannot be easily produced on Earth, potentially leading to groundbreaking advancements in various industries. The historical context of low-gravity research dates back to the early days of space exploration. Initial experiments conducted during the Mercury and Gemini programs in the 1960s provided the first insights into fluid behavior in microgravity. Subsequent missions, including Skylab and the Space Shuttle program, expanded our understanding of materials processing and fluid dynamics in space. The advent of the International Space Station (ISS) in the late 1990s marked a significant milestone, providing a permanent microgravity laboratory for continuous research and development in low-gravity process engineering. Fundamentals of low-gravity environments Low-gravity environments, encompassing both microgravity and reduced gravity conditions, exhibit unique characteristics that significantly alter physical phenomena compared to Earth's gravitational field. These environments are typically characterized by gravitational accelerations ranging from roughly one-millionth of g in orbiting spacecraft to a few tenths of g on planetary surfaces, where g represents Earth's standard gravitational acceleration (about 9.81 m/s²). 
Microgravity, often experienced in orbiting spacecraft, is characterized by the near absence of perceptible weight. In contrast, reduced gravity conditions, such as those on the Moon (about 0.17 g) or Mars (about 0.38 g), maintain a fractional gravitational pull relative to Earth. These environments differ markedly from Earth's gravity in several key aspects: Absence of natural convection: On Earth, density differences in fluids due to temperature gradients drive natural convection. In microgravity, this effect is negligible, leading to diffusion-dominated heat and mass transfer. Surface tension dominance: Without the overwhelming force of gravity, surface tension becomes a dominant force in fluid behavior, significantly affecting liquid spreading and containment. Particle suspension: In low-gravity environments, particles in fluids remain suspended for extended periods, as sedimentation and buoyancy effects are minimal. Effects of low-gravity conditions on various physical processes Fluid dynamics In microgravity, fluid behavior is primarily governed by surface tension, viscous forces, and inertia. This leads to phenomena such as large stable liquid bridges, spherical droplet formation, and capillary flow dominance. The absence of buoyancy-driven convection alters mixing processes and phase separations, necessitating alternative methods for fluid management in space applications. Heat transfer The lack of natural convection in microgravity significantly impacts heat transfer processes. Conduction and radiation become the primary modes of heat transfer, while forced convection must be induced artificially. This alteration affects cooling systems, boiling processes, and thermal management in spacecraft and space-based manufacturing. Material behavior Low-gravity environments offer unique conditions for materials processing. The absence of buoyancy-driven convection and sedimentation allows for more uniform crystal growth and the formation of novel alloys and composites. 
Additionally, the reduced mechanical stresses in microgravity can lead to changes in material properties and behavior, influencing fields such as materials science and pharmaceutical research. Challenges Low-gravity process engineering faces a number of challenges that require innovative solutions and adaptations of terrestrial technologies. These challenges stem from the unique physical phenomena observed in microgravity and reduced gravity environments. Fluid management issues The absence of buoyancy and the dominance of surface tension in low-gravity environments significantly alter fluid behavior, presenting several challenges: Liquid-gas separation: Without buoyancy, separating liquids and gases becomes difficult, affecting processes such as fuel management and life support systems. Capillary effects: Surface tension dominance leads to unexpected fluid migrations and containment issues, requiring specialized designs for fluid handling systems. Bubble formation and coalescence: In microgravity, bubbles tend to persist and coalesce more readily, potentially disrupting fluid processes and heat transfer mechanisms. Heat transfer limitations The lack of natural convection in low-gravity environments poses significant challenges for heat transfer processes: Reduced convective heat transfer: Without buoyancy-driven flows, heat transfer becomes primarily dependent on conduction and radiation, potentially leading to localized hot spots and thermal management issues. Boiling and condensation: These phase change processes behave differently in microgravity, affecting cooling systems and thermal management strategies. Temperature gradients: The absence of natural mixing can result in sharp temperature gradients, impacting reaction kinetics and material processing. 
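The crossover from gravity-dominated to surface-tension-dominated fluid behavior described above is commonly quantified with the dimensionless Bond number, Bo = ρ g L² / σ; values well below 1 indicate that surface tension dominates. A quick sketch, with approximate water properties assumed for illustration:

```python
def bond_number(rho, g, length, sigma):
    """Bond number Bo = rho * g * L^2 / sigma: the ratio of gravitational
    to surface-tension forces for a fluid system of characteristic size L."""
    return rho * g * length ** 2 / sigma

G_EARTH = 9.81        # m/s^2, standard gravity
RHO_WATER = 1000.0    # kg/m^3
SIGMA_WATER = 0.072   # N/m, surface tension of water (approximate)
L = 0.01              # 1 cm characteristic length

bo_earth = bond_number(RHO_WATER, G_EARTH, L, SIGMA_WATER)         # ~13.6: gravity dominates
bo_micro = bond_number(RHO_WATER, 1e-6 * G_EARTH, L, SIGMA_WATER)  # ~1.4e-5: surface tension dominates
print(bo_earth > 1, bo_micro < 1)
```

The same centimeter-scale water system that is gravity-dominated on Earth becomes overwhelmingly surface-tension-dominated at microgravity levels, which is why capillary effects reshape fluid handling in space.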
Material handling and containment difficulties Low-gravity environments present unique challenges in manipulating and containing materials: Particle behavior: Without settling due to gravity, particles tend to remain suspended and disperse differently, affecting filtration, separation, and mixing processes. Liquid containment: Surface tension effects can cause liquids to adhere unexpectedly to container walls, complicating storage and transfer operations. Phase separation: The lack of density-driven separation makes it challenging to separate immiscible fluids or different phases of materials. Equipment design considerations Designing equipment for low-gravity operations requires addressing several unique factors: Mass and volume constraints: Space missions have strict limitations on payload mass and volume, necessitating compact and lightweight designs. Automation and remote operation: Many processes must be designed for autonomous or remote operation due to limited human presence in space environments. Reliability and redundancy: The inaccessibility of space environments demands highly reliable systems with built-in redundancies to mitigate potential failures. Microgravity-specific mechanisms: Equipment must often incorporate novel mechanisms to replace gravity-dependent functions, such as pumps for fluid transport or centrifuges for separation processes. Multi-functionality: Due to resource constraints, equipment is often designed to serve multiple purposes, increasing complexity but reducing overall payload requirements. Addressing these challenges requires interdisciplinary approaches, combining insights from fluid dynamics, heat transfer, materials science, and aerospace engineering. As research in low-gravity process engineering progresses, new solutions and technologies continue to emerge, expanding the possibilities for space-based manufacturing and resource utilization. 
Key areas Fluid processing Multiphase flow behavior in microgravity differs substantially from terrestrial conditions. The absence of buoyancy-driven phase separation leads to complex flow patterns and phase distributions. These phenomena affect heat transfer, mass transport, and chemical reactions in multiphase systems, necessitating novel approaches to fluid management in space. Boiling and condensation processes are fundamentally altered in microgravity. The lack of buoyancy affects bubble dynamics, heat transfer coefficients, and critical heat flux. Understanding these changes is crucial for designing efficient thermal management systems for spacecraft and space habitats. Capillary flow and wetting phenomena become dominant in low-gravity environments. Surface tension forces drive fluid behavior, leading to unexpected liquid migrations and containment challenges. These effects are particularly important in the design of fuel tanks, life support systems, and fluid handling equipment for space applications. Materials processing Materials processing in space offers unique opportunities for producing novel materials and improving existing manufacturing techniques. Crystal growth in space benefits from the absence of gravity-induced convection and sedimentation. This environment allows for the growth of larger, more perfect crystals with fewer defects. Space-grown crystals have applications in electronics, optics, and pharmaceutical research. Metallurgy and alloy formation in microgravity can result in materials with unique properties. The absence of buoyancy-driven convection allows for more uniform mixing of molten metals and the creation of novel alloys and composites that are difficult or impossible to produce on Earth. Additive manufacturing in low-gravity environments presents both challenges and opportunities. 
While the absence of gravity can affect material deposition and layer adhesion, it also allows for the creation of complex structures without the need for support materials. This technology has potential applications in on-demand manufacturing of spare parts and tools for long-duration space missions. Biotechnology applications Microgravity conditions offer unique advantages for various biotechnology applications. Protein crystallization in space often results in larger, more well-ordered crystals compared to those grown on Earth. These high-quality crystals are valuable for structural biology studies and drug design. The microgravity environment reduces sedimentation and convection, allowing for more uniform crystal growth. Cell culturing and tissue engineering benefit from the reduced mechanical stresses in microgravity. This environment allows for three-dimensional cell growth and the formation of tissue-like structures that more closely resemble in vivo conditions. Such studies contribute to our understanding of cellular biology and may lead to advancements in regenerative medicine. Pharmaceutical production in space has the potential to yield purer drugs with improved efficacy. The absence of convection and sedimentation can lead to more uniform crystallization and particle formation, potentially enhancing drug properties. Chemical engineering processes Chemical engineering processes in microgravity often exhibit different behaviors compared to their terrestrial counterparts. Reaction kinetics in microgravity can be altered due to the absence of buoyancy-driven convection. This can lead to more uniform reaction conditions and potentially different reaction rates or product distributions. Separation processes, such as distillation and extraction, face unique challenges in low-gravity environments. The lack of buoyancy affects phase separation and mass transfer, requiring novel approaches to achieve efficient separations. 
These challenges have led to the development of alternative separation technologies for space applications. Catalysis in space presents opportunities for studying fundamental catalytic processes without the interfering effects of gravity. The absence of natural convection and sedimentation can lead to more uniform catalyst distributions and potentially different reaction pathways. This research may contribute to the development of more efficient catalysts for both space and terrestrial applications. Experimental platforms and simulation techniques The study of low-gravity processes requires specialized platforms and techniques to simulate or create microgravity conditions. These methods range from ground-based facilities to orbital laboratories and computational simulations. Drop towers and parabolic flights Drop towers provide short-duration microgravity environments by allowing experiments to free-fall in evacuated shafts. These facilities typically offer 2–10 seconds of high-quality microgravity. Notable examples include NASA's Glenn Research Center 2.2-Second Drop Tower and the 146-meter ZARM Drop Tower in Bremen, Germany. Parabolic flights, often referred to as "vomit comets," create repeated periods of microgravity lasting 20–25 seconds by flying aircraft in parabolic arcs. These flights allow researchers to conduct hands-on experiments and test equipment destined for space missions. Sounding rockets and suborbital flights Sounding rockets offer extended microgravity durations ranging from 3 to 14 minutes, depending on the rocket's apogee. These platforms are particularly useful for experiments requiring longer microgravity exposure than drop towers or parabolic flights can provide. Suborbital flights, such as those planned by commercial spaceflight companies, present new opportunities for microgravity research. These flights can offer several minutes of microgravity time and the potential for frequent, cost-effective access to space-like conditions. 
International space station facilities The International Space Station serves as a permanent microgravity laboratory, offering long-duration experiments in various scientific disciplines. Key research facilities on the ISS include: Fluid Science Laboratory (FSL): Designed for studying fluid physics in microgravity. Materials Science Laboratory (MSL): Used for materials research and processing experiments. Microgravity Science Glovebox (MSG): A multipurpose facility for conducting a wide range of microgravity experiments. These facilities enable researchers to conduct complex, long-term studies in a true microgravity environment, advancing our understanding of fundamental physical processes and developing new technologies for space exploration. Computational fluid dynamics for low-gravity simulations Computational Fluid Dynamics (CFD) plays a crucial role in predicting and analyzing fluid behavior in low-gravity environments. CFD simulations complement experimental research by: Providing insights into phenomena difficult to observe experimentally. Allowing parametric studies across a wide range of conditions. Aiding in the design and optimization of space-based systems. CFD models for low-gravity applications often require modifications to account for the dominance of surface tension forces and the absence of buoyancy-driven flows. Validation of these models typically involves comparison with experimental data from microgravity platforms. As computational power increases, CFD simulations are becoming increasingly sophisticated, enabling more accurate predictions of complex multiphase flows and heat transfer processes in microgravity. References Material handling Crystallography
Low-gravity process engineering
Physics,Chemistry,Materials_science,Engineering
2,523
52,918,977
https://en.wikipedia.org/wiki/AIS%20station
An AIS receiver station receives messages from nearby vessels over VHF data links (around 162 MHz) and forwards them to the Automatic Identification System network, where they are recorded and used for vessel tracking and other purposes. References See also GPS Exchange Format Related standards NMEA 0183 NMEA 2000 NMEA OneNet, a future standard based on Ethernet Electronic navigation Navigational equipment Technology systems
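AIS messages reach a receiver as NMEA 0183 `!AIVDM` sentences whose payload packs the message fields into 6-bit characters (the "armoring" defined by ITU-R M.1371). The sketch below shows that 6-bit payload encoding and the extraction of the message type, repeat indicator, and 30-bit MMSI from the first 38 bits of a message; any sample MMSI used with it would be illustrative, not taken from the text:

```python
def sixbit_decode(payload):
    """Expand an AIVDM payload string into a bit string (6 bits per char)."""
    bits = []
    for ch in payload:
        v = ord(ch) - 48
        if v > 40:        # upper character range starts at '`' (ASCII 96)
            v -= 8
        bits.append(format(v, "06b"))
    return "".join(bits)

def sixbit_encode(values):
    """Inverse armoring: 6-bit values (0-63) back to payload characters."""
    return "".join(chr(v + 48) if v < 40 else chr(v + 56) for v in values)

def parse_header(payload):
    """Message type, repeat indicator, and MMSI, common to all AIS messages."""
    bits = sixbit_decode(payload)
    return {
        "type": int(bits[0:6], 2),    # bits 0-5: message type
        "repeat": int(bits[6:8], 2),  # bits 6-7: repeat indicator
        "mmsi": int(bits[8:38], 2),   # bits 8-37: 30-bit station identity
    }
```

A full decoder would also validate the NMEA checksum and handle multi-sentence messages; this sketch covers only the bit-level payload layer.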
AIS station
Technology,Engineering
74
23,431,876
https://en.wikipedia.org/wiki/C3H5Cl
{{DISPLAYTITLE:C3H5Cl}} The molecular formula C3H5Cl may refer to: Allyl chloride Chlorocyclopropane
C3H5Cl
Chemistry
38
1,866,458
https://en.wikipedia.org/wiki/Dark%20Sector
Dark Sector, stylized as darkSector, is a third-person shooter video game developed by Digital Extremes for the Xbox 360, PlayStation 3 and Microsoft Windows. The game is set in the fictional Eastern Bloc country of Lasria, and centers on protagonist Hayden Tenno (voiced by Michael Rosenbaum), a morally ambivalent CIA "clean-up man". While trying to intercept a rogue agent named Robert Mezner, Hayden's right arm is infected with the fictional Technocyte virus, which gives him the ability to grow a three-pronged "Glaive" at will. Dark Sector received mixed reviews for its visual design, originality of action and weapon-based gameplay. Many critics have compared the game to Resident Evil 4 and Gears of War, for their similar style of play and story. Digital Extremes would later revisit the setting elements and themes of Dark Sector in their later release, Warframe. Gameplay Gameplay of Dark Sector revolves around the use of the Glaive, a tri-blade throwing weapon similar to a boomerang which returns to Hayden after each throw. The Glaive can be used for long-distance combat, solving environmental puzzles, and picking up in-game items. When in close proximity to an enemy, context-sensitive actions may appear, allowing the player to execute enemies with "finishers". Enemies hold onto Hayden while attacking, and the player must rapidly press a randomly prompted button to break free. Environmental puzzles in the game usually focus upon capturing various elements (fire, electricity, or ice) with the Glaive. For example, a web blocking Hayden's path can be bypassed by capturing fire with the Glaive, and then launching it at the web to burn it down. The Glaive can also be dual-wielded with a gun, which allows the player to perform weapon combos which are more effective against shielded enemies. 
As the game progresses, Hayden and the Glaive are given several new abilities; it can be guided through the air, being able to kill multiple enemies; a charged-up throw for deadlier attacks; and the ability to make Hayden invisible for a short time and provide a temporary shield. The camera is positioned over the shoulder for third-person shooting, and the player can take cover by standing next to an object such as a pillar or wall. While in cover, Hayden can move temporarily out of cover to fire and throw the Glaive, but there is no blind firing from behind cover. There is a sprint function, which works similarly to Gears of War's Roadie Run, and melee attacks that allow Hayden to punch or slice nearby enemies. The game has no HUD (except for the ammo counter); Hayden's health is shown by the screen flashing red when he takes damage, as well as an indicator showing the attacker's position. If Hayden takes too much damage, the flash speed will increase, and a heartbeat will be heard, indicating Hayden is "bleeding out". Money, ammo, weapon upgrades, and grenades can be found in set locations. Downed enemies drop their guns, though after his infection, Hayden can only carry these weapons for a few seconds before they self-destruct. Permanent weapons can be purchased and upgraded in black markets: one small weapon for his off-hand use with the Glaive (replacing the pistol) and one large weapon such as a shotgun or rifle. Multiplayer Dark Sector has an online multiplayer mode, where there are two modes of gameplay: Infection: one player is randomly selected to be Hayden in a deathmatch against many soldier characters. Epidemic: two Haydens on separate teams, the objective being to kill the opposing team's Hayden first. In both modes, Hayden will have superior powers compared to the soldiers. Hayden will be able to become invisible, use the Glaive, etc., whereas the soldiers cannot. 
Story

Setting and characters

Dark Sector is set in Lasria, a fictional satellite country bordering the Soviet Union, where the military fights against the Technocyte victims, who have largely undergone extreme mutations and have gained abilities. The player character is Hayden Tenno (voiced by Michael Rosenbaum). An ambivalent CIA agent, he has congenital analgesia, which renders him unable to feel pain. He is supported by Yargo Mensik (voiced by Jürgen Prochnow), an ex-GRU Colonel, scientist, and sleeper agent who knows the origin of the Technocyte virus. The main antagonist, Robert Mezner (voiced by Dwight Schultz), is an ex-CIA agent who seeks to build a utopia by spreading the Technocyte virus across the planet. Supporting Mezner is Nadia (voiced by Julianne Buescher), a mysterious woman whom Hayden knows; and "Nemesis", a metallic, humanoid figure who fights with a long Technocyte blade. Other characters include "the A.D.", Hayden's superior in the CIA; the Blackmarket Dealer, an arms dealer who supplies Hayden with weapons and equipment for his missions; and Viktor Sudek, a captured informant.

Plot

Near the end of the Cold War, the USSR discovers a sunken submarine off the coast of Lasria; something attacks the salvage crew through a gaping hole in the hull. In the following years, a mysterious infection called "the Technocyte" breaks out in Lasria, causing widespread mutation and destruction before the Lasrians bring it under control. In the present, Hayden infiltrates a Lasrian gulag compound Robert Mezner is using to hold those infected with the Technocyte virus. His mission is to find Viktor Sudek, prevent the virus's spread, and eliminate Mezner. Finding Viktor, now a loose end and potentially infected, Hayden executes him. Fighting through enemies and planting C4 charges, Hayden encounters an armored humanoid called "Nemesis"; immune to gunfire, it telekinetically deflects an RPG back at Hayden, knocking him off the roof and rendering him unconscious.
Waking up, Hayden finds himself face-to-face with Mezner, who chastises Hayden for his blind obedience and divulges info about his psychological profile. Nemesis then stabs Hayden's right shoulder, infecting him on Mezner's order, who states that Hayden deserves to suffer the disease's effects. By detonating the explosives, Hayden manages to escape. His right arm now mutated by Technocyte, Hayden contacts the A.D. who orders him to meet up with Yargo Mensik and obtain boosters to halt the infection. Shortly after, when ambushed by soldiers, his infected arm produces the Glaive; he slowly gains new abilities as the infection progresses while encountering both haz-mat soldiers and infected civilians. He also hears Mezner taunting him telepathically, saying that this change is inevitable. Eventually, Hayden finds Yargo but refuses the medicine, and learns that Mezner wants to recapture the infected with an old transmitter, which emits a signal that attracts the infected to its location, within an old church. Killing a giant ape-like Technocyte monster, Hayden finds the transmitter in the church's catacombs and plants C4 but is held at gunpoint by Nadia, who has a deep-rooted hatred for him and leaves him to the infected. Contacting the A.D. again, Hayden learns Mezner is using a freighter to export the Technocyte virus. Fighting his way into the cargo hold, he accidentally releases a highly-evolved invisible Technocyte monster, which sinks the ship. After Hayden escapes, he learns that Nadia has captured and is torturing Yargo demanding access into "the Vault" and something within that can control the virus. Disobeying orders to stand down and await the A.D.'s arrival, Hayden fights through a train station and rescues Yargo, who has lost an eye during interrogation. To control his Technocyte-induced pain, Hayden finally attempts to use the booster; before Yargo can warn him about it, Nemesis appears and nearly overwhelms Hayden. 
Mezner then arrives and offers Hayden a chance to kill him; however, he can now mentally control Technocyte creatures, including Hayden. With no other choice, Hayden injects himself, breaking Mezner's control over him while simultaneously preventing further mutations; however, to Mezner's surprise, the booster also nearly kills Hayden. Before Hayden passes out, Mezner reveals that the CIA gave him the same "booster," its true purpose being to prepare them for receiving the virus. Yargo rescues Hayden and brings him to the Vozro Research Facility, where Technocyte was researched during the Cold War. He admits to having laced Hayden's booster with "enferon", a chemical lethal to Technocyte creatures. He had worried that Hayden would become like Mezner, as both had the same viral strain. However, Hayden has retained his humanity, unlike Mezner. Yargo directs Hayden to the facility's sub-basement to get a suit like Nemesis' to give him a fighting chance while Yargo moves through the vents. After fighting through hordes of Technocyte creatures and automated security systems, Hayden finds the suit as Nadia arrives. He pleads with her to leave, but she states that she is already in too deep, and that she will take Yargo to open the Vault. Hayden dons the suit, finally defeating Nemesis; it had been Nadia all along. Apologizing for infecting Hayden, she tells him Mezner plans to spread Technocyte all across Earth. Before she dies, she gives him the Vault key, knowing that Hayden will "do the right thing this time." Rendezvousing with the A.D. outside the Vault, Hayden learns that a deal had been cut with Mezner for control of the virus. When he gives an outraged Hayden a booster "for the road", Hayden kills him by stabbing him in the neck with it and wipes out the A.D.'s men. Finding Yargo, Hayden gives him the key, telling him to seal the Vault. Inside, a stunned Hayden discovers the source of the virus: the USS Alaska, an American G-class submarine.
Hayden discovers Mezner with the Technocyte source and transmitter: an enormous, Hydra-like monstrosity. Even after Hayden defeats Mezner, the monster, and numerous infected, Yargo tells him that the transmission is still active. Hayden tries to fry the circuitry with his Glaive, but Mezner stuns his mutated right arm. Hayden manages to catch the electrified Glaive with his non-mutated left hand, impaling and frying Mezner's skull with it. With the transmission finally halted, Hayden leaves the Vault, catching the Glaive as he steps outside.

Development

The development of Dark Sector was announced in February 2000 on Digital Extremes' website. The game was originally proposed as a follow-up to Digital Extremes and Epic Games' critically acclaimed multiplayer first-person shooter, Unreal Tournament, but the original plan was scrapped and the game was not spoken of for another four years, during which it underwent a massive change in focus. The original design had the game keeping in line with its predecessor as a multiplayer arena-style first-person shooter. An in-game cinematic unveiled years later, in 2004, gave viewers a brief look at potential storylines and environments, as well as the game's graphics. Digital Extremes specifically stated that the clips were not pre-rendered but actual in-game footage. The game was shown as the first example of what a seventh-generation game would look like. The game was originally intended to take place in a science-fiction environment, in outer space, with players taking the role of a character inhabiting a sleek mechanical suit with powers. The game was officially revealed by Digital Extremes in late 2005, around the time of the original release of the Xbox 360. In 2006, major overhauls to the game were revealed, showing the main character and a noticeably less sci-fi setting, although Hayden starts to resemble the originally planned main character as the infection takes over his body.
The developers cited a shift in focus by other gaming companies and publishers as the reason for the change to a more modern setting with fewer sci-fi elements, adding that they wanted to achieve the realism that fans would enjoy. Another reason was that the tech demo had been built before the team knew the maximum specifications of the Xbox 360. An interview with GameSpot revealed that the change in setting was intended to make the main character stand out more, as well as to make the story, which they said had been written as a superhero origin story, more relatable. Dark Sector was based on the Sector Engine, later changed to the Evolution Engine, both Digital Extremes' proprietary game engines; the company did not publicly clarify whether this was merely a name change or a major shift in its technology. Dark Sector project lead Steve Sinclair stated that the engine was written from scratch. The game's producer, Dave Kudirka, said that when they first built the engine, they did not want it to look like Unreal Engine 3; they wanted an engine with their own perspective. When asked about the game's engine running on the Wii or PC, he replied "plausible". The game went gold on March 7, 2008. The musical score of the game was composed by Keith Power. The Windows version of Dark Sector was initially planned to be released on the same date as the console versions, but it was later dropped and there was no news on its release. Some sites reported in 2009 that a YouTube video showed Dark Sector running on a PC. It was later confirmed that the game had indeed been ported to Windows and was on sale, though only in Russia and with Russian as the default language; hackers found ways to run the game in English. Aspyr and Noviy Disk published Dark Sector for Microsoft Windows on March 23, 2009. Optimized by Noviy Disk for the release, the port featured improved graphics and a redesigned interface that made use of mouse and keyboard controls.
An English/French version was added to Steam a day later. The PC version's multiplayer mode is only available via local area network play, as the game is a straight port of the console version with no extra code for internet connectivity.

Comic

A comic titled Dark Sector Zero was released with Dark Sector. Set before the game's main events, it delves into the events that led to Lasria's demise.

Reception

Dark Sector received mixed reviews. Review aggregation websites GameRankings and Metacritic gave the Xbox 360 version 73.24% and 72/100, the PlayStation 3 version 73.14% and 72/100, and the PC version 65.22% and 66/100. Hyper's Dirk Watch commended the game for "the Glaive and its aftertouch", but he criticised it for its "patchy" AI and "steep" difficulty curve. Greg Howson of The Guardian thought the game was similar to other Gears of War clones except for the Glaive mechanic, which he found entertaining, and ultimately called it a solid action game.

Ban in Australia

In February 2008, a month before the game's March release, it was banned from sale in Australia by the Office of Film and Literature Classification (OFLC). Adam Zweck, the sales and product manager for AFA Interactive, the local distributor of Dark Sector, told GameSpot AU that the game was banned due to its violence, in particular the finishing moves. It was later re-released in Australia for the PlayStation 3 on October 9 of the same year, but with the violence censored. In July 2009, Dark Sector was released on the cover disc of PC Powerplay, an Australian PC gaming magazine, although this was the heavily censored version of the game. GamesRadar included it in their list of the 100 most overlooked games of its generation.

Possible sequel

When asked about a sequel in 2008, Steven Sinclair of Digital Extremes stated that there was "nothing definitive" planned, but commented that he would "love to do one", and that Dark Sector only scratched the surface of the character and weapon's potential.
Digital Extremes eventually developed a free-to-play game, titled Warframe, which borrows heavily from the original Dark Sector concept video and game. The original concept for Dark Sector was more similar to what Warframe is now, but was put in a modern setting with a linear, single-player mode due to the industry landscape at the time. As such, Warframe is considered a spiritual successor, and has a handful of nods to Dark Sector.
https://en.wikipedia.org/wiki/Sherman%E2%80%93Takeda%20theorem
In mathematics, the Sherman–Takeda theorem states that if A is a C*-algebra, then its double dual A** is a W*-algebra, and is isomorphic to the weak closure of A in the universal representation of A. The theorem was announced by Sherman and proved by Takeda. The double dual of A is called the universal enveloping W*-algebra of A.
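In symbols (notation assumed here, not taken from the article text): writing the universal representation of A as the direct sum of the GNS representations over its state space S(A), the theorem can be stated as follows.

```latex
% Standard formulation of the Sherman–Takeda theorem (notation assumed):
% the Banach-space double dual A** carries a W*-algebra structure and is
% isomorphic to the weak-operator closure of A inside B(H_U).
\pi_U \;=\; \bigoplus_{\varphi \in S(A)} \pi_\varphi
\quad\text{on}\quad H_U \;=\; \bigoplus_{\varphi \in S(A)} H_\varphi,
\qquad
A^{**} \;\cong\; \overline{\pi_U(A)}^{\,\mathrm{WOT}} \subseteq B(H_U).
```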
https://en.wikipedia.org/wiki/Unisolvent%20point%20set
In approximation theory, a finite collection of points X is often called unisolvent for a space W if any element of W is uniquely determined by its values on X. X is unisolvent for Π(m, n) (polynomials in n variables of degree at most m) if there exists a unique polynomial in Π(m, n) of lowest possible degree which interpolates the data on X. Simple examples in one variable would be the fact that two distinct points determine a line, three points determine a parabola, and so on. It is clear that over the real numbers, any collection of k + 1 distinct points will uniquely determine a polynomial of lowest possible degree in Π(k, 1).

See also: Padua points
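The one-variable case above can be checked directly: k + 1 distinct points give a nonzero Vandermonde determinant, so the interpolating polynomial exists and is unique. A minimal sketch (the function names here are illustrative, not from any particular library):

```python
# Sketch: distinct 1-D points are unisolvent for polynomials of degree <= k,
# because the Vandermonde determinant prod_{i<j} (x_j - x_i) is nonzero.

def vandermonde_det(xs):
    """Determinant of the Vandermonde matrix for nodes xs."""
    det = 1.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            det *= xs[j] - xs[i]
    return det

def lagrange_interpolate(xs, ys, t):
    """Evaluate the unique interpolating polynomial through (xs, ys) at t."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (t - xj) / (xi - xj)
        total += term
    return total

xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]   # three points -> unique parabola
assert vandermonde_det(xs) != 0              # distinct nodes: unisolvent
print(lagrange_interpolate(xs, ys, 3.0))     # parabola is x^2 + 1, so -> 10.0
```

Here the three data points lie on x² + 1, and uniqueness guarantees that Lagrange interpolation recovers exactly that parabola.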
https://en.wikipedia.org/wiki/Emily%20M.%20Gray%20Award
The Emily M. Gray Award from the Biophysical Society in Rockville, Maryland, is given in recognition of "significant contributions to education in biophysics." The award was established in 1997 and first awarded the year thereafter.

Award recipients

1998: Muriel S. Prouty
1999: Kensal E. van Holde
2000: Charles Cantor and Paul Schimmel
2001: Jane Richardson
2002: Norma Allewell
2003: Michael Summers
2004: Richard D. Ludescher
2005: Barry R. Lentz
2006: Ignacio Tinoco, Jr.
2007: John Steve Olson
2008: David S. Eisenberg and Donald M. Crothers
2009: Philip C. Nelson
2010: Greta Pifat-Mrzljak
2011: Bertil Hille
2012: Kenneth Dill and Sarina Bromberg
2013: Louis de Felice
2014: Alberto Diaspro
2015: Meyer Jackson
2016: Douglas Robinson
2017: Enrique De La Cruz
2018: Madeline Shea
2019: Yves De Koninck
2021: Doug Barrick
https://en.wikipedia.org/wiki/Huntingdon%20Furnace
Huntingdon Furnace is a national historic district and historic iron furnace and associated buildings located at Franklin Township in Huntingdon County, Pennsylvania. It consists of seven contributing buildings and one contributing structure. They are the iron furnace, office building, the ironmaster's mansion, log worker's house, a residence, the farm manager's residence, the grist mill and the miller's house. The iron furnace was moved to this site in 1805, from its original site one mile upstream. It measures 30 feet square by 30 feet high. The ironmaster's mansion was built in 1851, and is a -story, L-shaped frame dwelling. The grist mill dates to 1808, and is a -story, rubble stone building measuring 50 feet by 45 feet. The furnace was in operation from 1796, until it ceased operations in the 1880s. It was listed on the National Register of Historic Places in 1990.
https://en.wikipedia.org/wiki/Tert-Butyl%20isocyanide
tert-Butyl isocyanide is an organic compound with the formula Me3CNC (Me = methyl, CH3). It is an isocyanide, commonly called an isonitrile or carbylamine, as defined by the functional group C≡N-R. tert-Butyl isocyanide, like most alkyl isocyanides, is a reactive colorless liquid with an extremely unpleasant odor. It forms stable complexes with transition metals and can insert into metal-carbon bonds.

tert-Butyl isocyanide is prepared by a Hofmann carbylamine reaction. In this conversion, a dichloromethane solution of tert-butylamine is treated with chloroform and aqueous sodium hydroxide in the presence of a catalytic amount of the phase-transfer catalyst benzyltriethylammonium chloride.

Me3CNH2 + CHCl3 + 3 NaOH → Me3CNC + 3 NaCl + 3 H2O

tert-Butyl isocyanide is isomeric with pivalonitrile, also known as tert-butyl cyanide. The difference, as with all carbylamine analogs of nitriles, is that the bond joining the CN functional group to the parent molecule is made at the nitrogen, not the carbon.

Coordination chemistry

By virtue of the lone electron pair on carbon, isocyanides serve as ligands in coordination chemistry, especially with metals in the 0, +1, and +2 oxidation states. tert-Butyl isocyanide has been shown to stabilize metals in unusual oxidation states, such as Pd(I).

Pd(dba)2 + PdCl2(C6H5CN)2 + 4 t-BuNC → [(t-BuNC)2PdCl]2 + 2 dba + 2 C6H5CN

tert-Butyl isocyanide can form hepta-coordinate homoleptic complexes, despite having a large t-Bu group, which is held far away from the metal center because of the linearity of the M-C≡N-C linkages. tert-Butyl isocyanide forms complexes that are stoichiometrically analogous to certain binary metal carbonyl complexes, such as Fe2(CO)9 and Fe2(tBuNC)9.

Safety

tert-Butyl isocyanide is toxic. Its behavior is similar to that of its close electronic relative, carbon monoxide.
https://en.wikipedia.org/wiki/15.ai
15.ai was a free non-commercial web application that used artificial intelligence to generate text-to-speech voices of fictional characters from popular media. Created by an artificial intelligence researcher known as 15 during their time at the Massachusetts Institute of Technology, the application allowed users to make characters from video games, television shows, and movies speak custom text with emotional inflections faster than real-time. The platform was notable for its ability to generate convincing voice output using minimal training data—the name "15.ai" referenced the creator's claim that a voice could be cloned with just 15 seconds of audio. It was an early example of an application of generative artificial intelligence during the initial stages of the AI boom. Launched in March 2020, 15.ai gained widespread attention in early 2021 when it went viral on social media platforms like YouTube and Twitter, and quickly became popular among Internet fandoms, including the My Little Pony: Friendship Is Magic, Team Fortress 2, and SpongeBob SquarePants fandoms. The service distinguished itself through its support for emotional context in speech generation through emojis and precise pronunciation control through phonetic transcriptions. 15.ai is credited as the first mainstream platform to popularize AI voice cloning (audio deepfakes) in memes and content creation. 15.ai received varied responses from the voice acting community and broader public. Voice actors and industry professionals debated the technology's merits for fan creativity versus its potential impact on the profession, particularly following controversies over unauthorized commercial use. While many critics praised the website's accessibility and emotional control, they also noted technical limitations in areas like prosody options and language support. 
The technology sparked discussions about ethical implications, including concerns about reduction of employment opportunities for voice actors, voice-related fraud, and misuse in explicit content, though 15.ai maintained strict policies against replicating real people's voices. 15.ai's approach to data-efficient voice synthesis and emotional expression was influential in subsequent developments in AI text-to-speech technology. In January 2022, Voiceverse NFT sparked controversy when it was discovered that the company, which had partnered with voice actor Troy Baker, had misappropriated 15.ai's work for their own platform. The service was ultimately taken offline in September 2022. Its shutdown led to the emergence of various commercial alternatives in subsequent years.

History

Background

The field of artificial speech synthesis underwent a significant transformation with the introduction of deep learning approaches. In 2016, DeepMind's publication of the seminal paper WaveNet: A Generative Model for Raw Audio marked a pivotal shift toward neural network-based speech synthesis, demonstrating unprecedented audio quality through dilated causal convolutions operating directly on raw audio waveforms at 16,000 samples per second, modeling the conditional probability distribution of each audio sample given all previous ones. Previously, concatenative synthesis—which worked by stitching together pre-recorded segments of human speech—was the predominant method for generating artificial speech, but it often produced robotic-sounding results with noticeable artifacts at the segment boundaries. Two years later, this was followed by Google AI's Tacotron in 2018, which demonstrated that neural networks could produce highly natural speech synthesis but required substantial training data—typically tens of hours of audio—to achieve acceptable quality.
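The appeal of dilated causal convolutions is that stacking layers with doubling dilation rates grows the model's receptive field exponentially in depth. A minimal sketch of that arithmetic (illustrative only, not DeepMind's code):

```python
# Receptive field of a stack of dilated causal convolutions, as used in
# WaveNet-style models: each layer with dilation d and kernel size k adds
# (k - 1) * d samples of context.

def receptive_field(kernel_size, dilations):
    """Total number of input samples visible to the top of the stack."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# One WaveNet-style cycle: dilations 1, 2, 4, ..., 512 with kernel size 2.
dilations = [2 ** i for i in range(10)]
print(receptive_field(2, dilations))  # -> 1024
```

With audio at 16,000 samples per second, a single ten-layer cycle therefore covers 1024 samples, or 64 ms of context; WaveNet repeated such cycles to see further back in the waveform.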
When trained on smaller datasets, such as 2 hours of speech, the output quality degraded while still being able to maintain intelligible speech, and with just 24 minutes of training data, Tacotron failed to produce intelligible speech. The same year saw the emergence of HiFi-GAN, a generative adversarial network (GAN)-based vocoder that improved the efficiency of waveform generation while producing high-fidelity speech, followed by Glow-TTS, which introduced a flow-based approach that allowed for both fast inference and voice style transfer capabilities. Chinese tech companies also made significant contributions to the field, with Baidu and ByteDance developing proprietary text-to-speech frameworks that further advanced the state of the art, though specific technical details of their implementations remained largely undisclosed.

Development, release, and operation

15.ai was conceived in 2016 as a research project in deep learning speech synthesis by a developer known as "15" (at the age of 18) during their freshman year at the Massachusetts Institute of Technology (MIT) as part of MIT's Undergraduate Research Opportunities Program (UROP). The developer was inspired by DeepMind's WaveNet paper, with development continuing through their studies as Google AI released Tacotron the following year. By 2019, the developer had demonstrated at MIT their ability to replicate WaveNet and Tacotron's results using 75% less training data than previously required. The name 15 is a reference to the creator's claim that a voice can be cloned with as little as 15 seconds of data. The developer had originally planned to pursue a doctorate based on their undergraduate research, but opted to work in the tech industry instead after their startup was accepted into the Y Combinator accelerator in 2019. After their departure in early 2020, the developer returned to their voice synthesis research, implementing it as a web application.
According to the developer, instead of using conventional voice datasets like LJSpeech that contained simple, monotone recordings, they sought out more challenging voice samples that could demonstrate the model's ability to handle complex speech patterns and emotional undertones. The Pony Preservation Project—a fan initiative originating from /mlp/, 4chan's My Little Pony board, that had compiled voice clips from My Little Pony: Friendship Is Magic—played a crucial role in the implementation. The project's contributors had manually trimmed, denoised, transcribed, and emotion-tagged every line from the show. This dataset provided ideal training material for 15.ai's deep learning model. 15.ai was released in March 2020 with a limited selection of characters, including those from My Little Pony: Friendship Is Magic and Team Fortress 2. More voices were added to the website in the following months. A significant technical advancement came in late 2020 with the implementation of a multi-speaker embedding in the deep neural network, enabling simultaneous training of multiple voices rather than requiring individual models for each character voice. This not only allowed rapid expansion from eight to over fifty character voices, but also let the model recognize common emotional patterns across characters, even when certain emotions were missing from some characters' training data. In early 2021, the application went viral on Twitter and YouTube, with people generating skits, memes, and fan content using voices from popular games and shows that have accumulated millions of views on social media. Content creators, YouTubers, and TikTokers have also used 15.ai as part of their videos as voiceovers. 
At its peak, the platform incurred operational costs of per month from AWS infrastructure needed to handle millions of daily voice generations; despite receiving offers from companies to acquire 15.ai and its underlying technology, the website remained independent and was funded out of the personal previous startup earnings of the developer, then aged 23.

Voiceverse NFT controversy

On January 14, 2022, a controversy ensued after it was discovered that Voiceverse NFT, a company that video game and anime dub voice actor Troy Baker had announced his partnership with, had misappropriated voice lines generated from 15.ai as part of their marketing campaign. This came shortly after 15.ai's developer had explicitly stated in December 2021 that they had no interest in incorporating NFTs into their work. Log files showed that Voiceverse had generated audio of characters from My Little Pony: Friendship Is Magic using 15.ai, pitched them up to make them sound unrecognizable from the original voices to market their own platform—in violation of 15.ai's terms of service. Voiceverse claimed that someone in their marketing team used the voice without properly crediting 15.ai; in response, 15 tweeted "Go fuck yourself," which went viral, amassing hundreds of thousands of retweets and likes on Twitter in support of the developer. Following continued backlash and the plagiarism revelation, Baker acknowledged that his original announcement tweet ending with "You can hate. Or you can create. What'll it be?" may have been "antagonistic," and on January 31, 2022, announced he would discontinue his partnership with Voiceverse.

Inactivity

In September 2022, 15.ai was taken offline due to legal issues surrounding artificial intelligence and copyright. The creator has suggested a potential future version that would better address copyright concerns from the outset, though the website remains inactive as of 2025.
Features

The platform was non-commercial and operated without requiring user registration or accounts. Users generated speech by inputting text and selecting a character voice, with optional parameters for emotional contextualizers and phonetic transcriptions. Each request produced three audio variations with distinct emotional deliveries, sorted by confidence score. Available characters included multiple characters from Team Fortress 2 and My Little Pony: Friendship Is Magic; GLaDOS, Wheatley, and the Sentry Turret from the Portal series; SpongeBob SquarePants; Kyu Sugardust from HuniePop; Rise Kujikawa from Persona 4; Daria Morgendorffer and Jane Lane from Daria; Carl Brutananadilewski from Aqua Teen Hunger Force; Steven Universe from Steven Universe; Sans from Undertale; Madeline and multiple characters from Celeste; the Tenth Doctor from Doctor Who; the Narrator from The Stanley Parable; and HAL 9000 from 2001: A Space Odyssey. Of the over fifty voices available, thirty were of characters from My Little Pony: Friendship Is Magic. Certain "silent" characters like Chell and Gordon Freeman could be selected as a joke, and would emit silent audio files when any text was submitted. The deep learning model's nondeterministic properties produced variations in speech output, creating different intonations with each generation, similar to how voice actors produce different takes.

15.ai introduced the concept of emotional contextualizers, which allowed users to specify the emotional tone of generated speech through guiding phrases. The emotional contextualizer functionality utilized DeepMoji, a sentiment analysis neural network developed at the MIT Media Lab. Introduced in 2017, DeepMoji processed emoji embeddings from 1.2 billion Twitter posts (from 2013 to 2017) to analyze emotional content. Testing showed the system could identify emotional elements, including sarcasm, more accurately than human evaluators.
If an input into 15.ai contained additional context (specified by a vertical bar), the additional context following the bar would be used as the emotional contextualizer. For example, if the input was Today is a great day!|I'm very sad., the selected character would speak the sentence "Today is a great day!" in the emotion one would expect from someone saying the sentence "I'm very sad."

The application used pronunciation data from the Oxford Dictionaries API, Wiktionary, and the CMU Pronouncing Dictionary, the last of which is based on ARPABET, a set of English phonetic transcriptions originally developed by the Advanced Research Projects Agency in the 1970s. For modern and Internet-specific terminology, the system incorporated pronunciation data from user-generated content websites, including Reddit, Urban Dictionary, 4chan, and Google. Inputting ARPABET transcriptions was also supported, allowing users to correct mispronunciations or specify the desired pronunciation between heteronyms—words that have the same spelling but different pronunciations. Users could invoke ARPABET transcriptions by enclosing the phoneme string in curly braces within the input box (for example, {AA1 R P AH0 B EH2 T} to specify the pronunciation of the word "ARPABET"). The interface displayed parsed words with color-coding to indicate pronunciation certainty: green for words found in the existing pronunciation lookup table, blue for manually entered ARPABET pronunciations, and red for words where the pronunciation had to be algorithmically predicted.

Later versions of 15.ai introduced multi-speaker capabilities. Rather than training separate models for each voice, 15.ai used a unified model that learned multiple voices simultaneously through speaker embeddings: learned numerical representations that captured each character's unique vocal characteristics.
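The two input conventions described above, the vertical-bar contextualizer and the curly-brace ARPABET spans, can be illustrated with a small parser. This is a hypothetical sketch of the syntax, not 15.ai's actual code; the function name and return shape are assumptions:

```python
import re

# Hypothetical parser for 15.ai-style input syntax (illustrative only):
# - text after a vertical bar is treated as the emotional contextualizer
# - phoneme strings are enclosed in curly braces, e.g. {AA1 R P AH0 B EH2 T}

def parse_input(text):
    """Return (sentence, contextualizer or None, list of ARPABET spans)."""
    if "|" in text:
        sentence, context = text.split("|", 1)
    else:
        sentence, context = text, None
    arpabet_spans = re.findall(r"\{([^}]*)\}", sentence)
    return sentence, context, arpabet_spans

sentence, context, spans = parse_input("Today is a {G R EY1 T} day!|I'm very sad.")
print(context)  # -> I'm very sad.
print(spans)    # -> ['G R EY1 T']
```

In the example, the character would speak the first clause with the sadness implied by the contextualizer, while the braced span overrides the pronunciation of "great" with an explicit ARPABET transcription.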
Along with the emotional context conferred by DeepMoji, this neural network architecture enabled the model to learn shared patterns across different characters' emotional expressions and speaking styles, even when individual characters lacked examples of certain emotional contexts in their training data. The interface included technical metrics and graphs, which, according to the developer, served to highlight the research aspect of the website. As of version v23, released in September 2021, the interface displayed comprehensive model analysis information, including word parsing results and emotional analysis data. The flow and generative adversarial network (GAN) hybrid vocoder and denoiser, introduced in an earlier version, was streamlined to remove manual parameter inputs. Reception Critical reception Critics described 15.ai as easy to use and generally able to convincingly replicate character voices, with occasional mixed results. Natalie Clayton of PC Gamer wrote that SpongeBob SquarePants' voice was replicated well, but noted challenges in mimicking the Narrator from The Stanley Parable: "the algorithm simply can't capture Kevan Brighting's whimsically droll intonation." Zack Zwiezen of Kotaku reported that "[his] girlfriend was convinced it was a new voice line from GLaDOS' voice actor, Ellen McLain". Rionaldi Chandraseta of AI newsletter Towards Data Science observed that "characters with large training data produce more natural dialogues with clearer inflections and pauses between words, especially for longer sentences." Taiwanese newspaper United Daily News also highlighted 15.ai's ability to recreate GLaDOS's mechanical voice, alongside its diverse range of character voice options. Yahoo! News Taiwan reported that "GLaDOS in Portal can pronounce lines nearly perfectly", but also criticized that "there are still many imperfections, such as word limit and tone control, which are still a little weird in some words." 
Chris Button of AI newsletter Byteside called the ability to clone a voice with only 15 seconds of data "freaky" but also called the tech behind it "impressive". The platform's voice generation capabilities were regularly featured on Equestria Daily, a fandom news site dedicated to the show My Little Pony: Friendship Is Magic and its other generations, with documented updates, fan creations, and additions of new character voices. In a post introducing new character additions to 15.ai, Equestria Daily's founder Shaun Scotellaro—also known by his online moniker "Sethisto"—wrote that "some of [the voices] aren't great due to the lack of samples to draw from, but many are really impressive still anyway." Multiple other critics also found the word count limit, prosody options, and English-only nature of the application not entirely satisfactory. Peter Paltridge of anime and superhero news outlet Anime Superhero News opined that "voice synthesis has evolved to the point where the more expensive efforts are nearly indistinguishable from actual human speech," but also noted that "In some ways, SAM is still more advanced than this. It was possible to affect SAM’s inflections by using special characters, as well as change his pitch at will. With 15.ai, you’re at the mercy of whatever random inflections you get." Conversely, Lauren Morton of Rock, Paper, Shotgun praised the depth of pronunciation control—"if you're willing to get into the nitty gritty of it". Similarly, Eugenio Moto of Spanish news website Qore.com wrote that "the most experienced [users] can change parameters like the stress or the tone." Takayuki Furushima of Den Fami Nico Gamer highlighted the "smooth pronunciations", and Yuki Kurosawa of AUTOMATON noted its "rich emotional expression" as a major feature; both Japanese authors noted the lack of Japanese-language support. 
Renan do Prado of the Brazilian gaming news outlet Arkade and José Villalobos of Spanish gaming outlet LaPS4 pointed out that while users could create amusing results in Portuguese and Spanish respectively, the generation performed best in English. Chinese gaming news outlet GamerSky called the app "interesting", but also criticized the word count limit of the text and the lack of intonations. South Korean video game outlet Zuntata wrote that "the surprising thing about 15.ai is that [for some characters], there's only about 30 seconds of data, but it achieves pronunciation accuracy close to 100%". Machine learning professor Yongqiang Li wrote in his blog that he was surprised to see that the application was free. Reactions from voice actors of featured characters Some voice actors whose characters appeared on 15.ai have publicly shared their thoughts about the platform. In a 2021 interview on video game voice acting podcast The VŌC, John Patrick Lowrie—who voices the Sniper in Team Fortress 2—explained that he had discovered 15.ai when a prospective intern showed him a skit she had created using AI-generated voices of the Sniper and the Spy from Team Fortress 2. Lowrie commented: He drew an analogy to synthesized music, adding: In a 2021 live broadcast on his Twitch channel, Nathan Vetterlein—the voice actor of the Scout from Team Fortress 2—listened to an AI recreation of his character's voice. He described the impression as "interesting" and noted that "there's some stuff in there." Ethical concerns Other voice actors had mixed reactions to 15.ai's capabilities. While some industry professionals acknowledged the technical innovation, others raised concerns about the technology's implications for their profession. When voice actor Troy Baker announced his partnership with Voiceverse NFT, which had misappropriated 15.ai's technology, it sparked widespread controversy within the voice acting industry. 
Critics raised concerns about automated voice acting's potential reduction of employment opportunities for voice actors, risk of voice impersonation, and potential misuse in explicit content. The controversy surrounding Voiceverse NFT and subsequent discussions highlighted broader industry concerns about AI voice synthesis technology. While 15.ai limited its scope to fictional characters and did not reproduce voices of real people or celebrities, computer scientist Andrew Ng noted that similar technology could be used to do so, including for nefarious purposes. In his 2020 assessment of 15.ai, he wrote: While discussing potential risks, he added: Legacy 15.ai was an early pioneer of audio deepfakes, leading to the emergence of AI speech synthesis-based memes during the initial stages of the AI boom in 2020. 15.ai is credited as the first mainstream platform to popularize AI voice cloning in Internet memes and content creation, particularly through its ability to generate convincing character voices in real-time without requiring extensive technical expertise. The platform's impact was especially notable in fan communities, including the My Little Pony: Friendship Is Magic, Portal, Team Fortress 2, and SpongeBob SquarePants fandoms, where it enabled the creation of viral content that garnered millions of views across social media platforms like Twitter and YouTube. Team Fortress 2 content creators also used the platform to produce both short-form memes and complex narrative animations using Source Filmmaker. 
Fan creations included skits and new fan animations, crossover content—such as Game Informer writer Liana Ruppert's demonstration combining Portal and Mass Effect dialogue in her coverage of the platform—recreations of viral videos (including the infamous Big Bill Hell's Cars car dealership parody), adaptations of fanfiction using AI-generated character voices, music videos and new musical compositions—such as the explicit Pony Zone series—and content where characters recited sea shanties. Some fan creations gained mainstream attention, such as a viral edit replacing Donald Trump's cameo in Home Alone 2: Lost in New York with the Heavy Weapons Guy's AI-generated voice, which was featured on a daytime CNN segment in January 2021. Some users integrated 15.ai's voice synthesis with VoiceAttack, a voice command software, to create personal assistants. Its influence has been noted in the years after it became defunct, with several commercial alternatives emerging to fill the void, such as ElevenLabs and Speechify. Contemporary generative voice AI companies have acknowledged 15.ai's pioneering role. Y Combinator startup PlayHT called the debut of 15.ai "a breakthrough in the field of text-to-speech (TTS) and speech synthesis". Cliff Weitzman, the founder and CEO of Speechify, credited 15.ai for "making AI voice cloning popular for content creation by being the first [...] to feature popular existing characters from fandoms". Mati Staniszewski, co-founder and CEO of ElevenLabs, wrote that 15.ai was transformative in the field of AI text-to-speech. Prior to its shutdown, 15.ai established several technical precedents that influenced subsequent developments in AI voice synthesis. Its integration of DeepMoji for emotional analysis demonstrated the viability of incorporating sentiment-aware speech generation, while its support for ARPABET phonetic transcriptions set a standard for precise pronunciation control in public-facing voice synthesis tools. 
The platform's unified multi-speaker model, which enabled simultaneous training of diverse character voices, proved particularly influential. This approach allowed the system to recognize emotional patterns across different voices even when certain emotions were absent from individual character training sets; for example, if one character had examples of joyful speech but no angry examples, while another had angry but no joyful samples, the system could learn to generate both emotions for both characters by understanding the common patterns of how emotions affect speech. 15.ai also made a key contribution in reducing training data requirements for speech synthesis. Earlier systems like Google AI's Tacotron and Microsoft Research's FastSpeech required tens of hours of audio to produce acceptable results and failed to generate intelligible speech with less than 24 minutes of training data. In contrast, 15.ai demonstrated the ability to generate speech with substantially less training data—specifically, the name "15.ai" refers to the creator's claim that a voice could be cloned with just 15 seconds of data. This approach to data efficiency influenced subsequent developments in AI voice synthesis technology, as the 15-second benchmark became a reference point for subsequent voice synthesis systems. The original claim that only 15 seconds of data is required to clone a human's voice was corroborated by OpenAI in 2024. 
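The cross-speaker emotion transfer described above can be illustrated with a deliberately tiny toy model. This is an assumption-laden sketch, not 15.ai's actual architecture: each utterance is conditioned on a speaker embedding plus an emotion embedding (here, one-hot vectors), a linear map is fit to a toy acoustic target, and the model is then asked for a speaker/emotion pair it never saw in training.

```python
import numpy as np

# Hypothetical embedding tables (one-hot for clarity).
speakers = {"A": 0, "B": 1, "C": 2}
emotions = {"joyful": 0, "angry": 1}

def features(spk, emo):
    """Concatenated speaker + emotion embedding for one utterance."""
    x = np.zeros(len(speakers) + len(emotions))
    x[speakers[spk]] = 1.0
    x[len(speakers) + emotions[emo]] = 1.0
    return x

# Toy "acoustic target": a per-speaker timbre value plus a
# per-emotion pitch offset (an assumed additive structure).
timbre = {"A": 2.0, "B": 5.0, "C": 3.0}
pitch = {"joyful": 1.0, "angry": -1.0}

# The training set deliberately omits the pair ("C", "angry").
train = [(s, e) for s in speakers for e in emotions if (s, e) != ("C", "angry")]
X = np.stack([features(s, e) for s, e in train])
y = np.array([timbre[s] + pitch[e] for s, e in train])

# Least-squares fit of a linear map from embeddings to the target.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The unseen combination is still predicted correctly, because the
# emotion structure learned from speakers A and B transfers to C.
pred = features("C", "angry") @ w  # ≈ timbre["C"] + pitch["angry"] = 2.0
```

The point of the sketch is the shape of the argument, not the model class: because the emotion component is shared across all speakers, examples of "angry" from any speaker constrain the angry direction for every speaker.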
See also AI boom Character.ai Deepfake Ethics of artificial intelligence WaveNet My Little Pony: Friendship Is Magic fandom Explanatory footnotes References Notes Works cited External links Archived frontend Internet properties established in 2020 Applications of artificial intelligence 2020 software 2020 in Internet culture 2020s in Internet culture 2020s fads and trends Web applications Speech synthesis Deep learning software applications Deepfakes Generative artificial intelligence My Little Pony: Friendship Is Magic fandom Massachusetts Institute of Technology software
15.ai
Engineering
4,772
8,661,171
https://en.wikipedia.org/wiki/J-I
The J-I was a solid-fuel, expendable, small-lift launch vehicle developed by the National Space Development Agency of Japan and the Institute of Space and Astronautical Science. In an attempt to reduce development costs, it used the solid rocket booster from the H-II as the first stage, and the upper stages of the M-3SII. It flew only once, on a suborbital flight on 11 February 1996 from the Osaki Launch Complex at the Tanegashima Space Center in a partial configuration, to launch the demonstrator HYFLEX. The vehicle never flew in its final orbital-capability configuration, which would have launched the OICETS satellite (OICETS was instead launched on a Dnepr rocket, based on the Russian R-36MUTTH intercontinental ballistic missile). On the HYFLEX mission a load of 1,054 kg was launched 1,300 km downrange. Apogee was 110 km; the HYFLEX payload achieved a speed of approximately 3.8 km/s. See also Epsilon (rocket) Mu (rocket family) M-V Comparison of orbital launchers families References External links Space launch vehicles of Japan
J-I
Astronomy
245
46,742,133
https://en.wikipedia.org/wiki/Haruzo%20Hida
Haruzo Hida (肥田 晴三 Hida Haruzo, born 6 August 1952, Sakai, Osaka) is a Japanese mathematician, known for his research in number theory, algebraic geometry, and modular forms. Hida received from Kyoto University a B.A. in 1975, an M.A. in 1977, and a Ph.D. in 1980 with thesis On Abelian Varieties with Complex Multiplication as Factors of the Jacobians of Shimura Curves, although he left Kyoto University in 1977. He was from 1977 to 1984 an assistant professor and from 1984 to 1987 an associate professor at Hokkaidō University. Since 1987 he has been a professor at the University of California, Los Angeles. From 1979 to 1981 he was a visiting scholar at the Institute for Advanced Study. Hida was an invited speaker at the International Congress of Mathematicians (Berkeley) in 1986. In 1991 he was awarded the Guggenheim Fellowship. Hida received in 1992 for his research on p-adic L-functions of algebraic groups and p-adic Hecke rings the Spring Prize of the Mathematical Society of Japan. In 2012 he was elected a Fellow of the American Mathematical Society. He received the 2019 Leroy P. Steele Prize for Seminal Contribution to Research for his highly original paper "Galois representations into GL2(p) attached to ordinary cusp forms," published in 1986 in Inventiones Mathematicae. Selected works Elementary theory of L-functions and Eisenstein series, Cambridge University Press, 1993 Modular forms and Galois cohomology, Cambridge University Press, 2000 Geometric modular forms and elliptic curves, World Scientific, 2000 p-Adic automorphic forms on Shimura varieties, Springer, 2004 Hilbert modular forms and Iwasawa theory, Oxford University Press, 2006 External links Homepage for Haruzo Hida at UCLA References 1952 births Living people 20th-century Japanese mathematicians 21st-century Japanese mathematicians Academic staff of Hokkaido University Fellows of the American Mathematical Society Kyoto University alumni Number theorists University of California, Los Angeles faculty
Haruzo Hida
Mathematics
408
20,901,868
https://en.wikipedia.org/wiki/Combinatorics%20and%20dynamical%20systems
The mathematical disciplines of combinatorics and dynamical systems interact in a number of ways. The ergodic theory of dynamical systems has recently been used to prove combinatorial theorems about number theory, which has given rise to the field of arithmetic combinatorics. Dynamical systems theory is also heavily involved in the relatively recent field of combinatorics on words. Combinatorial aspects of dynamical systems are studied as well. Dynamical systems can be defined on combinatorial objects; see for example graph dynamical system. See also Symbolic dynamics Analytic combinatorics Combinatorics and physics Arithmetic dynamics References External links Combinatorics of Iterated Functions: Combinatorial Dynamics & Dynamical Combinatorics Combinatorial dynamics at Scholarpedia
Combinatorics and dynamical systems
Physics,Mathematics
164
52,329,687
https://en.wikipedia.org/wiki/Focus%20assessed%20transthoracic%20echocardiography
Focus assessed transthoracic echocardiography (or FATE) is a type of transthoracic echocardiogram, or sonogram of the heart, often performed by non-cardiologists. The protocol has been used since 1989 and has four projections: subcostal four-chamber, apical four-chamber, parasternal long axis and parasternal short axis. The original focused cardiac ultrasound protocol for non-cardiologists was devised by Dr Erik Sloth in 1989 and has formed the basis of hands-on FATE courses ever since. The success of the original protocol has inspired a surge of replicas in many shapes and the coining of many imaginative acronyms: FEER, FEEL, Focus, Bleep, HART, FUSE etc. These are all variations of the original theme [1]. Gallery See also Echocardiography References "fate protocol" or "focus assessed transthoracic echocardiography" - Search Results - PubMed External links Official FATE protocol website with downloadable FATE card Medical imaging
Focus assessed transthoracic echocardiography
Biology
212
40,330,401
https://en.wikipedia.org/wiki/OISTAT
The International Organisation of Scenographers, Theatre Architects and Technicians (OISTAT) is a non-governmental organization (NGO) founded in 1968 in Prague, Czech Republic. According to its founding name, the organization is mainly a network for theatre designers, theatre architects and theatre technicians around the world, made up of people associated with creating or studying live performances, including educators, researchers and practitioners. The organization has members from 51 countries, in the membership categories: Centre Member, Associate Member and Individual Member. History The predecessor of the organization was the scenography section of the International Theatre Institute (ITI), founded in 1948. This institute was expected to form an international co-operation of all artistic fields and establish leading publishing and educational direction from the drama sections. The project failed, but interest in the visual elements of theatre grew. Shortly after, in 1958, the International Association of Theatre Technicians (IATT) was formed in Paris. This association also faltered due to lack of direction, programming, organizational structure, administrative and financial provisions. The representatives sought ITI to take up the activities again, and the ITI Secretary General Jean Darcante went to Prague in 1967 to discuss with representatives of the Czechoslovak National ITI Center, the Theatre Institute and the Institute of Scenography, the possibility of establishing an organization with a general secretariat in Prague. Czechoslovak scenographers, including professional workers and theatre specialists from both institutions, prepared a draft program for the activity of the new organization with the Czechoslovak Ministry of Culture. Darcante travelled to Prague again in early 1968 to obtain a binding promise from the Czechoslovak official authorities and agree the final details with the Theatre Institute and the Institute of Scenography. 
On 7 and 8 June 1968, the foundation committee met to establish the new International Organization of Scenographers and Theatre Technicians (OISTT). The founding membership consisted of representatives from Czechoslovakia, Canada, Israel, Hungary, the German Democratic Republic, the Federal Republic of Germany and the United States. Organizational structure The World Congress OISTAT is composed of members in regions around the world, known as OISTAT Centres, Associate members and Individual members. As defined in the statutes, "the directing body of OISTAT is the Congress, which is formed when the delegates of the OISTAT Centres, Associate Members and Individual Members are assembled in plenary session". Every four years, the delegates of the World Congress elect the president and the members of the executive committee. They also decide on changes in the statutes and other arising issues of the organization. Executive committee and governing board OISTAT affairs are coordinated by the governing board and its executive committee. The executive committee consists of eight elected members including the president, elected at the congress. The governing board consists of the elected chairs of the six commissions. Headquarters The Headquarters, known as the Secretariat until 2011, are the only constant office of the organization in charge of daily communications, and serve as the central network for its members and commissions. The Headquarters has moved twice since its foundation; it was in Prague from 1968-1993, in the Netherlands from 1993-2005, and since 2006 it is in Taipei, Taiwan. Commissions Six commissions lead various projects in their field of studies: Architecture: The Architecture commission was composed mainly of architects, theatre designers and technicians. The commission initiated the quadrennial Theatre Architecture Competition (TAC) in 1978 and the finalist designs are exhibited at the Prague Quadrennial. 
Education: This commission is the main facilitator of OISTAT Scenofest and International Stage Design Students' Works Exchange (ISDSWE) in Beijing, China. Research: The Research commission focuses on the theoretical and historical aspects of theatre scenography, architecture, and technologies. The Theatre Timeline Sub-commission is a recent establishment by the commission focusing on archiving theatre technologies, recording and preservation. Performance design: Performance Design includes four sub-commissions: Costume Design, Lighting Design, Sound Design and Space Design. Publication and communication: This commission's recent projects include Core Strategy, World Scenography, the Digital Theatre Words and the Bibliography of Theatre Arts. Technology: OISTAT Technology facilitates exchange of ideas in the field of theatre technology, develops research projects, and examines standards and safety issues. The commission offers technical advice and support for theatre technicians worldwide. The Theatre Invention Prize (TIP) is a competition initiated in 2011 by the commission. OISTAT projects Theatre Architecture Competition (TAC) Theatre Architecture Competition is an international ideas competition, aimed at students and emerging practitioners, which is organized every four years by the Architecture commission. The sites of the competition have been in these cities: 1978, Paris, France 1983, Stockholm, Sweden 1990, Moscow, Russia 1999, Prague, Czech Republic 2003, Hengelo, Netherlands 2007, Prague, Czech Republic 2011, Prague, Czech Republic 2015, Berlin, Germany 2017, Hsinchu, Taiwan In 2011, the competition theme was to design a theatre for a specific performance. The theatre could be in 'found' spaces which were not previously or presently performing spaces, or could be a temporary installation. In 2015, the theme was to construct a 'floating theatre' on Spree River in Berlin, Germany. 
The competition has moved from Prague Quadrennial to Berlin's Stage|Set|Scenery. In 2017, the theme was 'Theatre as Public Space'. The aim of the competition was to challenge the conventional typology of the theatre and to focus on the design of a temporary theatre (or theatres) in The Public Activity Center, a disused sports stadium in Hsinchu City in Taiwan. The 25 selected entries of TAC 2017 were exhibited and awarded during 2017 World Stage Design on July 1–9, 2017 in Taipei, Taiwan. World Stage Design World Stage Design is an international exhibition of performance design from theatre, dance, music, and opera, as well as a showcase for cross-disciplinary performances and installations in non-conventional theatre spaces since 2005. It is held every four years on different continents, to purposely show the latest designs in theatre performances. Different from the Prague Quadrennial which presents designs curated by each country, World Stage Design allows every individual designer to submit design works directly to the curatorial panel for the grand exhibition. It has been in these cities: 2005, Toronto, Canada 2009, Seoul, South Korea 2013, Cardiff, UK 2017, Taipei, Taiwan Scenofest OISTAT Scenofest has been held in conjunction with Prague Quadrennial since 2003. It provides seminars, workshops, thematic exhibitions, performances, and presentations in the field of theatre design. Since 2013, Scenofest has formed new connections with World Stage Design. OISTAT publications World Scenography is a three-volume series which aims to gather significant scenographic works of theatre set, costume, and lighting designs from 1975 to 2015. The first volume World Scenography 1975-1990, published in 2012, documented 430 designs from 61 countries and was awarded the United States Institute for Theatre Technology Golden Pen Award in March 2014. The second volume World Scenography 1990-2005 was published in 2014. 
Digital Theatre Words is an ongoing digital publication project launched by OISTAT Publication and Communication to compile the most common theatre terms into a multi-language dictionary. The project comprises 2,000 theatre terms in 24 languages, along with descriptions, photos, and pronunciations. The Digital Theatre Words mobile application was launched on 15 June 2014. New Theatre Words is an illustrated multi-lingual theatre dictionary and reference first published in 1975. There are currently three editions: Northern Europe, Central Europe, and World Edition, comprising 25 languages in total. References Organizations established in 1968 Exhibitions Scenic design Performing arts in the Czech Republic Theatrical organizations Professional associations based in the Czech Republic Arts organizations established in 1968 Cultural organizations based in the Czech Republic 1968 establishments in Czechoslovakia
OISTAT
Engineering
1,591
2,698,920
https://en.wikipedia.org/wiki/Raychaudhuri%20equation
In general relativity, the Raychaudhuri equation, or Landau–Raychaudhuri equation, is a fundamental result describing the motion of nearby bits of matter. The equation is important as a fundamental lemma for the Penrose–Hawking singularity theorems and for the study of exact solutions in general relativity, but has independent interest, since it offers a simple and general validation of our intuitive expectation that gravitation should be a universal attractive force between any two bits of mass–energy in general relativity, as it is in Newton's theory of gravitation. The equation was discovered independently by the Indian physicist Amal Kumar Raychaudhuri and the Soviet physicist Lev Landau. Mathematical statement Given a timelike unit vector field $\vec{X}$ (which can be interpreted as a family or congruence of nonintersecting world lines via the integral curve, not necessarily geodesics), Raychaudhuri's equation can be written $\dot{\theta} = -\frac{1}{3}\theta^2 - 2\sigma^2 + 2\omega^2 - E[\vec{X}]^a{}_a + \dot{X}^a{}_{;a}$, where $\sigma^2 = \frac{1}{2}\sigma_{ab}\sigma^{ab}$ and $\omega^2 = \frac{1}{2}\omega_{ab}\omega^{ab}$ are (non-negative) quadratic invariants of the shear tensor $\sigma_{ab}$ and the vorticity tensor $\omega_{ab}$ respectively. Here, $\theta_{ab}$ is the expansion tensor, $\theta = \theta^a{}_a$ is its trace, called the expansion scalar, and $h_{ab} = g_{ab} + X_a X_b$ is the projection tensor onto the hyperplanes orthogonal to $\vec{X}$. Also, dot denotes differentiation with respect to proper time counted along the world lines in the congruence. Finally, the trace of the tidal tensor can also be written as $E[\vec{X}]^a{}_a = R_{ab} X^a X^b$. This quantity is sometimes called the Raychaudhuri scalar. Intuitive significance The expansion scalar measures the fractional rate at which the volume of a small ball of matter changes with respect to time as measured by a central comoving observer (and so it may take negative values). In other words, the above equation gives us the evolution equation for the expansion of the timelike congruence. 
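For reference, the kinematical quantities entering the equation come from the standard decomposition of the covariant derivative of the unit timelike field (a textbook identity, stated here in the usual notation with signature $(-,+,+,+)$):

```latex
X_{a;b} = \sigma_{ab} + \omega_{ab} + \tfrac{1}{3}\,\theta\, h_{ab} - \dot{X}_a X_b,
\qquad
\theta = {X^a}_{;a},
\qquad
h_{ab} = g_{ab} + X_a X_b,
```

where $\sigma_{ab}$ is the symmetric trace-free part and $\omega_{ab}$ the antisymmetric part of the spatially projected derivative, and $\dot{X}_a$ is the acceleration (which vanishes for geodesics).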
If the derivative (with respect to proper time) of this quantity turns out to be negative along some world line (after a certain event), then any expansion of a small ball of matter (whose center of mass follows the world line in question) must be followed by recollapse. If not, continued expansion is possible. The shear tensor measures any tendency of an initially spherical ball of matter to become distorted into an ellipsoidal shape. The vorticity tensor measures any tendency of nearby world lines to twist about one another (if this happens, our small blob of matter is rotating, as happens to fluid elements in an ordinary fluid flow which exhibits nonzero vorticity). The right hand side of Raychaudhuri's equation consists of two types of terms: terms which promote (re)-collapse: initially nonzero expansion scalar, nonzero shearing, positive trace of the tidal tensor; this is precisely the condition guaranteed by assuming the strong energy condition, which holds for the most important types of solutions, such as physically reasonable fluid solutions; terms which oppose (re)-collapse: nonzero vorticity, corresponding to Newtonian centrifugal forces, positive divergence of the acceleration vector (e.g., outward pointing acceleration due to a spherically symmetric explosion, or more prosaically, due to body forces on fluid elements in a ball of fluid held together by its own self-gravitation). Usually one term will win out. However, there are situations in which a balance can be achieved. This balance may be: stable: in the case of hydrostatic equilibrium of a ball of perfect fluid (e.g. in a model of a stellar interior), the expansion, shear, and vorticity all vanish, and a radial divergence in the acceleration vector (the necessary body force on each blob of fluid being provided by the pressure of surrounding fluid) counteracts the Raychaudhuri scalar, which for a perfect fluid is $4\pi(\mu + 3p)$ in geometrized units. 
In Newtonian gravitation, the trace of the tidal tensor is $4\pi\mu$; in general relativity, the tendency of pressure to oppose gravity is partially offset by this term, which under certain circumstances can become important. unstable: for example, the world lines of the dust particles in the Gödel solution have vanishing shear, expansion, and acceleration, but constant vorticity just balancing a constant Raychaudhuri scalar due to nonzero vacuum energy ("cosmological constant"). Focusing theorem Suppose the strong energy condition holds in some region of our spacetime, and let $\vec{X}$ be a timelike geodesic unit vector field with vanishing vorticity, or equivalently, which is hypersurface orthogonal. For example, this situation can arise in studying the world lines of the dust particles in cosmological models which are exact dust solutions of the Einstein field equation (provided that these world lines are not twisting about one another, in which case the congruence would have nonzero vorticity). Then Raychaudhuri's equation becomes $\dot{\theta} = -\frac{1}{3}\theta^2 - 2\sigma^2 - E[\vec{X}]^a{}_a$. Now the right hand side is always negative or zero, so the expansion scalar never increases in time. Since the last two terms are non-negative, we have $\dot{\theta} \le -\frac{1}{3}\theta^2$. Integrating this inequality with respect to proper time $\tau$ gives $\frac{1}{\theta} \ge \frac{1}{\theta_0} + \frac{\tau}{3}$. If the initial value $\theta_0$ of the expansion scalar is negative, this means that our geodesics must converge in a caustic ($\theta$ goes to minus infinity) within a proper time of at most $3/|\theta_0|$ after the measurement of the initial value of the expansion scalar. This need not signal an encounter with a curvature singularity, but it does signal a breakdown in our mathematical description of the motion of the dust. Optical equations There is also an optical (or null) version of Raychaudhuri's equation for null geodesic congruences, in which the coefficient $\frac{1}{3}$ of the squared expansion is replaced by $\frac{1}{2}$, reflecting the two transverse dimensions. Here, the hats indicate that the expansion, shear and vorticity are only with respect to the transverse directions. 
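The caustic bound quoted in the focusing theorem follows from a one-line integration; the standard argument, reproduced here for clarity, starts from dropping the non-negative shear and tidal terms:

```latex
\dot{\theta} \le -\tfrac{1}{3}\theta^{2}
\;\Longrightarrow\;
\frac{d}{d\tau}\!\left(\theta^{-1}\right) = -\frac{\dot{\theta}}{\theta^{2}} \ge \frac{1}{3}
\;\Longrightarrow\;
\theta(\tau)^{-1} \ge \theta_{0}^{-1} + \frac{\tau}{3}.
```

If $\theta_{0} < 0$, the right-hand side rises from the negative value $\theta_{0}^{-1}$ and reaches zero at $\tau = 3/|\theta_{0}|$, so $\theta^{-1} \to 0^{-}$ and hence $\theta \to -\infty$ (a caustic) no later than that proper time.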
When the vorticity is zero, then assuming the null energy condition, caustics will form before the affine parameter reaches $2/|\hat{\theta}_0|$. Applications The event horizon is defined as the boundary of the causal past of null infinity. Such boundaries are generated by null geodesics. The affine parameter goes to infinity as we approach null infinity, and no caustics form until then. So, the expansion of the event horizon has to be nonnegative. As the expansion gives the rate of change of the logarithm of the area density, this means the event horizon area can never go down, at least classically, assuming the null energy condition. See also Congruence (general relativity), for a derivation of the kinematical decomposition and of Raychaudhuri's equation Gravitational singularity Penrose–Hawking singularity theorems for an application of the focusing theorem Notes References See chapter 2 for an excellent discussion of Raychaudhuri's equation for both timelike and null geodesics, as well as the focusing theorem. See appendix F. See chapter 6 for a very detailed introduction to geodesic congruences, including the general form of Raychaudhuri's equation. See section 4.1 for a discussion of the general form of Raychaudhuri's equation. Raychaudhuri's paper introducing his equation. See section IV for derivation of the general form of Raychaudhuri equations for three kinematical quantities (namely expansion scalar, shear and rotation). See for a review on Raychaudhuri equations. External links The Meaning of Einstein's Field Equation by John C. Baez and Emory F. Bunn. Raychaudhuri's equation takes center stage in this well known (and highly recommended) semi-technical exposition of what Einstein's equation says. General relativity Lev Landau
Raychaudhuri equation
Physics
1,565
55,807,655
https://en.wikipedia.org/wiki/Arrivo
Arrivo Corporation was a startup company in Los Angeles, California, that developed maglev rail. Arrivo initially attempted to commercialize a hyperloop, but abandoned the effort in November 2017 in favor of established transit technologies. In November 2017, Arrivo proposed a plan for a 200 mph (322 km/h) maglev system in Colorado that would transport automobiles to and from Denver International Airport. On December 14, 2018, Arrivo reportedly shut down due to being unable to secure Series A funding. History Arrivo was founded in 2016, after most of its management team acrimoniously departed Hyperloop One. A resulting lawsuit was settled. The company's trademark application described its mission as: "Financial advisory and consultancy services namely, provide expert project analysis in the field of transportation." In a June 2017 interview, founder BamBrogan reported the company had twenty employees. Three months before it ended hyperloop development, USA Today reported Arrivo as one of three top contenders in the hyperloop field. Colorado maglev project Arrivo agreed to lease offices in an unused toll plaza on E-470 in Commerce City, Colorado, intending to employ forty engineers. The second phase would have been the erection of a half-mile maglev test track, but not the evacuated tube that was a big part of Elon Musk's original hyperloop proposal. The state offered $760,000 in tax incentives to lure Arrivo. At a press conference, Brogan BamBrogan described a system that would move automobiles from downtown Denver to the airport at the same price as the tolls on Pena Boulevard, the airport highway. It would, he said, have a payback period of ten years. The company planned to break ground on the first commercial leg, from Aurora to the airport, in 2019, with an opening in 2021. Technology The company described a sled for automobiles; other elements of the technology, with the exception of the tube and vacuum, were likely to be similar to maglev.
In March 2017 the company claimed it could have an operational hyperloop within three years. In November 2017, the company announced that it was no longer developing vacuum tubes and was focused on maglev rail technology. Funding In 2017, Arrivo said it had "initial funding in place" but did not reveal how much capital it had secured or the source of financial support. BamBrogan expected revenue-generating projects within three years, with a "classic infrastructure model". In July 2018, it announced that it had received a $1 billion credit line from Genertec America. Management The lead founder of Arrivo was Brogan BamBrogan, formerly a founder and chief engineer at Hyperloop One and an engineer at SpaceX. References Hyperloop Technology companies based in Greater Los Angeles Transportation in Aurora, Colorado Transportation companies based in California Transportation companies of the United States 2016 establishments in California 2018 disestablishments in California American companies established in 2016 American companies disestablished in 2018
Arrivo
Technology,Engineering
624
45,049,580
https://en.wikipedia.org/wiki/Penicillium%20athertonense
Penicillium athertonense is a fungus species of the genus of Penicillium which is named after Atherton Tablelands where this species was found. See also List of Penicillium species References athertonense Fungi described in 2014 Fungus species
Penicillium athertonense
Biology
54
1,414,279
https://en.wikipedia.org/wiki/Absolute%20neutrophil%20count
Absolute neutrophil count (ANC) is a measure of the number of neutrophil granulocytes (also known as polymorphonuclear cells, PMNs, polys, granulocytes, segmented neutrophils or segs) present in the blood. Neutrophils are a type of white blood cell that fights against infection. The ANC is almost always a part of a larger blood panel called the complete blood count. The ANC is calculated from measurements of the total number of white blood cells (WBC), usually based on the combined percentage of mature neutrophils (sometimes called "segs", or segmented cells) and bands, which are immature neutrophils. Clinical significance The reference range for ANC in adults varies by study, but 1500 to 8000 cells per microliter is typical. An ANC less than 1500 cells/μL is defined as neutropenia and increases risk of infection. Neutropenia is the condition of a low ANC, and the most common condition where an ANC would be measured is in the setting of chemotherapy for cancer. Neutrophilia indicates an elevated count. While many clinicians refer to the presence of neutrophilia as a "left shift", this is imprecise, as a left shift indicates the presence of immature neutrophil forms, whereas neutrophilia refers to the entire mass of neutrophils, both mature and immature. Neutrophilia can be indicative of: Premature release of myeloid cells from the bone marrow. A leukemoid reaction. Calculation ANC = WBC (cells/μL) × (% segs + % bands) / 100, or equivalently ANC = (Absolute-Polys + Absolute-Bands). Ranges Related tests In some cases, a ratio is reported in addition to the sum. This is known as the "I/T ratio" (immature-to-total neutrophil ratio). References External links Absolute Neutrophil Count (ANC) Blood tests
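The calculation above can be sketched in code (a minimal illustration; the function name and the sample values are hypothetical, not from the text):

```python
def absolute_neutrophil_count(wbc_per_ul: float,
                              pct_segs: float,
                              pct_bands: float) -> float:
    """ANC = WBC x (% segmented neutrophils + % bands) / 100, in cells/uL."""
    return wbc_per_ul * (pct_segs + pct_bands) / 100.0

# Example: WBC 6000/uL with 50% segs and 5% bands -> ANC 3300 cells/uL
anc = absolute_neutrophil_count(6000, 50, 5)
print(anc, "neutropenic" if anc < 1500 else "within typical range")
```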
Absolute neutrophil count
Chemistry
411
162,190
https://en.wikipedia.org/wiki/Computer%20telephony%20integration
Computer telephony integration, also called computer–telephone integration or CTI, is a common name for any technology that allows interactions on a telephone and a computer to be coordinated. The term is predominantly used to describe desktop-based interaction for helping users be more efficient, though it can also refer to server-based functionality such as automatic call routing. See also Automatic number identification (ANI) Automatic call distributor Dialed Number Identification Service (DNIS) PhoneValet Message Center Predictive dialer Screen pop Telephony Application Programming Interface (TAPI) Telephony Server Application Programming Interface (TSAPI) Computer-supported telecommunications applications (CSTA) Multi-Vendor Integration Protocol References External links User Agent CSTA (uaCSTA) - TR/87 - ECMA International Telephone service enhanced features
Computer telephony integration
Technology
163
57,553,626
https://en.wikipedia.org/wiki/Glycerol%202-phosphate
Glycerol 2-phosphate is the conjugate base of a phosphoric ester of glycerol. It is commonly known as β-glycerophosphate or BGP. Unlike glycerol 1-phosphate and glycerol 3-phosphate, this isomer is not chiral. It is also less common. Applications β-Glycerophosphate is an inhibitor of the enzyme serine-threonine phosphatase. It is often used in combination with other phosphatase/protease inhibitors for broad-spectrum inhibition. β-Glycerophosphate is also used to drive osteogenic differentiation of bone marrow stem cells in vitro. β-Glycerophosphate is used to buffer M17 media for Lactococcus culture in recombinant protein expression. Notes Organophosphates
Glycerol 2-phosphate
Chemistry,Biology
185
30,659,750
https://en.wikipedia.org/wiki/P-nuclei
p-nuclei (p stands for proton-rich) are certain proton-rich, naturally occurring isotopes of some elements between selenium and mercury inclusive which cannot be produced in either the s- or the r-process. Definition The classical, ground-breaking works of Burbidge, Burbidge, Fowler and Hoyle (1957) and of A. G. W. Cameron (1957) showed how the majority of naturally occurring nuclides beyond the element iron can be made in two kinds of neutron capture processes, the s- and the r-process. Some proton-rich nuclides found in nature are not reached in these processes and therefore at least one additional process is required to synthesize them. These nuclei are called p-nuclei. Since the definition of the p-nuclei depends on the current knowledge of the s- and r-process (see also nucleosynthesis), the original list of 35 p-nuclei may be modified over the years, as indicated in the Table below. For example, it is recognized today that the abundances of 152Gd and 164Er contain at least strong contributions from the s-process. This also seems to apply to those of 113In and 115Sn, which additionally could be made in the r-process in small amounts. Natural occurrence The long-lived radionuclides 92Nb, 97Tc, 98Tc, 146Sm, 150Gd, and 154Dy are not among the classically defined p-nuclei as they no longer occur naturally on Earth. By the above definition, however, they are also p-nuclei because they cannot be made in either the s- or the r-process. From the discovery of their decay products in presolar grains it can be inferred that at least 92Nb and 146Sm were present in the solar nebula. This offers the possibility to estimate the time since the last production of these p-nuclei before the formation of the Solar System. p-nuclei are very rare. Those isotopes of an element which are p-nuclei are less abundant typically by factors of ten to one thousand than the other isotopes of the same element. 
The abundances of p-nuclei can only be determined in geochemical investigations and by analysis of meteoritic material and presolar grains. They cannot be identified in stellar spectra. Therefore, the knowledge of p-abundances is restricted to those of the Solar System and it is unknown whether the solar abundances of p-nuclei are typical for the Milky Way. List of p-nuclei Origin of the p-nuclei The astrophysical production of p-nuclei is not completely understood yet. The favored γ-process (see below) in core-collapse supernovae cannot produce all p-nuclei in sufficient amounts, according to current computer simulations. This is why additional production mechanisms and astrophysical sites are under investigation, as outlined below. It is also conceivable that there is not just a single process responsible for all p-nuclei but that different processes in a number of astrophysical sites produce certain ranges of p-nuclei. In the search for the relevant processes creating p-nuclei, the usual way is to identify the possible production mechanisms (processes) and then to investigate their possible realization in various astrophysical sites. The same logic is applied in the discussion below. Basics of p-nuclide production In principle, there are two ways to produce proton-rich nuclides: by successively adding protons to a nuclide (these are nuclear reactions of type (p,γ)) or by removing neutrons from a nucleus through sequences of photodisintegrations of type (γ,n). Under conditions encountered in astrophysical environments it is difficult to obtain p-nuclei through proton captures because the Coulomb barrier of a nucleus increases with increasing proton number. A proton requires more energy to be incorporated (captured) into an atomic nucleus when the Coulomb barrier is higher. The available average energy of the protons is determined by the temperature of the stellar plasma. 
Increasing the temperature, however, also speeds up the (γ,p) photodisintegrations which counteract the (p,γ) captures. The only alternative avoiding this would be to have a very large number of protons available so that the effective number of captures per second is large even at low temperature. In extreme cases (as discussed below) this leads to the synthesis of extremely short-lived radionuclides which decay to stable nuclides only after the captures cease. Appropriate combinations of temperature and proton density of a stellar plasma have to be explored in the search of possible production mechanisms for p-nuclei. Further parameters are the time available for the nuclear processes, and number and type of initially present nuclides (seed nuclei). Possible processes The p-process In a p-process it is suggested that p-nuclei were made through a few proton captures on stable nuclides. The seed nuclei originate from the s- and r-process and are already present in the stellar plasma. As outlined above, there are serious difficulties explaining all p-nuclei through such a process although it was originally suggested to achieve exactly this. It was shown later that the required conditions are not reached in stars or stellar explosions. Based on its historical meaning, the term p-process is sometimes used for any process synthesizing p-nuclei, even when no proton captures are involved, but this usage is discouraged. The γ-process p-nuclei can also be obtained by photodisintegration of s-process and r-process nuclei. At temperatures around 2–3 gigakelvins (GK) and short process time of a few seconds (this requires an explosive process) photodisintegration of the pre-existing nuclei will remain small, just enough to produce the required tiny abundances of p-nuclei. This is called the γ-process (gamma process) because the photodisintegration proceeds by nuclear reactions of the types (γ,n), (γ,α) and (γ,p), which are caused by highly energetic photons (gamma rays). 
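The Coulomb-barrier argument above can be made semi-quantitative with the standard Gamow-peak estimate for the most effective proton energy (a sketch; the choice of a 96Ru target and a temperature of 3 GK are illustrative assumptions, not values from the text):

```python
import math

def gamow_peak_mev(z_proj: int, z_targ: int, a_targ: int, t9: float) -> float:
    """Most effective energy (MeV) for charged-particle capture at temperature
    t9 (in GK), from maximizing the integrand exp(-E/kT - sqrt(E_G/E))."""
    alpha = 1.0 / 137.036                    # fine-structure constant
    m_p_c2 = 938.272                         # proton rest energy, MeV
    mu_c2 = m_p_c2 * a_targ / (a_targ + 1)   # reduced mass, projectile = proton
    e_gamow = 2.0 * mu_c2 * (math.pi * alpha * z_proj * z_targ) ** 2
    kt = 8.617e-11 * t9 * 1e9                # Boltzmann constant (MeV/K) times T
    return (math.sqrt(e_gamow) * kt / 2.0) ** (2.0 / 3.0)

# Proton capture on 96Ru (Z=44) at 3 GK: effective energies of a few MeV,
# far above thermal kT ~ 0.26 MeV, illustrating why high temperatures are
# needed for proton captures on heavy, high-Z targets.
print(round(gamow_peak_mev(1, 44, 96, 3.0), 2), "MeV")
```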
The ν-process (nu process) If a sufficiently intense source of neutrinos is available, nuclear reactions can directly produce certain nuclides, for example 7Li, 11B, 19F, 138La in core-collapse supernovae. Rapid proton capture processes In a p-process protons are added to stable or weakly radioactive atomic nuclei. If there is a high proton density in the stellar plasma, even short-lived radionuclides can capture one or more protons before they beta decay. This quickly moves the nucleosynthesis path from the region of stable nuclei to the very proton-rich side of the chart of nuclides. This is called rapid proton capture. Here, a series of (p,γ) reactions proceeds until either the beta decay of a nucleus is faster than a further proton capture, or the proton drip line is reached. Both cases lead to one or several sequential beta decays until a nucleus is produced which again can capture protons before it beta decays. Then the proton capture sequences continue. It is possible to cover the region of the lightest nuclei up to 56Ni within a second because both proton captures and beta decays are fast. Starting with 56Ni, however, a number of waiting points are encountered in the reaction path. These are nuclides which both have relatively long half-lives (compared to the process timescale) and can only slowly add another proton (that is, their cross section for (p,γ) reactions is small). Examples of such waiting points are: 56Ni, 60Zn, 64Ge, 68Se. Further waiting points may be important, depending on the detailed conditions and location of the reaction path. It is typical for such waiting points to show half-lives of minutes to days. Thus, they considerably increase the time required to continue the reaction sequences.
If the conditions required for this rapid proton capture are only present for a short time (the timescale of explosive astrophysical events is of the order of seconds), the waiting points limit or hamper the continuation of the reactions to heavier nuclei. In order to produce p-nuclei, the process path has to encompass nuclides bearing the same mass number (but usually containing more protons) as the desired p-nuclei. These nuclides are then converted into p-nuclei through sequences of beta decays after the rapid proton captures ceased. Variations of the main category rapid proton captures are the rp-, pn-, and νp-processes, which will be briefly outlined below. The rp-process The so-called rp-process (rp is for rapid proton capture) is the purest form of the rapid proton capture process described above. At proton densities of more than protons/cm3 and temperatures around , the reaction path is close to the proton drip line. The waiting points can be bridged provided that the process time is 10–600 s. Waiting-point nuclides are produced with larger abundances while the production of nuclei "behind" each waiting point is increasingly suppressed. A definitive endpoint is reached close to 104Te because the reaction path runs into a region of nuclides which decay preferably by alpha decay and thus loop the path back onto itself. Therefore, an rp-process would only be able to produce p-nuclei with mass numbers less than or equal to 104. The pn-process The waiting points in rapid proton capture processes can be avoided by (n,p) reactions which are much faster than proton captures on or beta decays of waiting points nuclei. This results in a considerable reduction of the time required to build heavy elements and allows an efficient production within seconds. This requires, however, a (small) supply of free neutrons which are usually not present in such proton-rich plasmas. 
One way to obtain them is to release them through other reactions occurring simultaneously with the rapid proton captures. This is called neutron-rich rapid proton capture or the pn-process. The νp-process Another possibility to obtain the neutrons required for the accelerating (n,p) reactions in proton-rich environments is to use anti-neutrino capture on protons (p + ν̄e → n + e+), turning a proton and an anti-neutrino into a positron and a neutron. Since (anti-)neutrinos interact only very weakly with protons, a high flux of anti-neutrinos has to act on a plasma with high proton density. This is called the νp-process (nu p process). Possible synthesis sites Core-collapse supernovae Massive stars end their life in a core-collapse supernova. In such a supernova, a shockfront from the explosion runs from the center of the star through its outer layers and ejects them. When the shockfront reaches the O/Ne-shell of the star (see also stellar evolution), the conditions for a γ-process are reached for 1-2 s. Although the majority of p-nuclei can be made in this way, some mass regions of p-nuclei turn out to be problematic in model calculations. It has been known already for decades that p-nuclei with mass numbers cannot be produced in a γ-process. Modern simulations also show problems in the range . The p-nucleus 138La is not produced in the γ-process but it can be made in a ν-process. A hot neutron star is made in the center of such a core-collapse supernova and it radiates neutrinos with high intensity. The neutrinos also interact with the outer layers of the exploding star and cause nuclear reactions which create 138La, among other nuclei. Also 180mTa may receive a contribution from this ν-process. It was suggested to supplement the γ-process in the outer layers of the star by another process, occurring in the deepest layers of the star, close to the neutron star, with the material still being ejected instead of falling onto the neutron star surface.
Due to the initially high flow of neutrinos from the forming neutron star, these layers become extremely proton-rich through the reaction νe + n → p + e−. Although the anti-neutrino flux is initially weaker, a few neutrons will nevertheless be created because of the large number of protons. This allows a νp-process in these deep layers. Because of the short timescale of the explosion and the high Coulomb barrier of the heavier nuclei, such a νp-process could possibly only produce the lightest p-nuclei. Which nuclei are made, and how much of them, depends sensitively on many details in the simulations and also on the actual explosion mechanism of a core-collapse supernova, which is still not completely understood. Thermonuclear supernovae A thermonuclear supernova is the explosion of a white dwarf in a binary star system, triggered by thermonuclear reactions in matter from a companion star accreted on the surface of the white dwarf. The accreted matter is rich in hydrogen (protons) and helium (α particles) and becomes hot enough to allow nuclear reactions. A number of models for such explosions are discussed in the literature, of which two were explored regarding the prospect of producing p-nuclei. None of these explosions releases neutrinos, rendering the ν- and νp-processes impossible. Conditions required for the rp-process are also not attained. Details of the possible production of p-nuclei in such supernovae depend sensitively on the composition of the matter accreted from the companion star (the seed nuclei for all subsequent processes). Since this can change considerably from star to star, all statements and models of p-production in thermonuclear supernovae are prone to large uncertainties. Type Ia supernovae The consensus model of thermonuclear supernovae postulates that the white dwarf explodes after exceeding the Chandrasekhar limit by the accretion of matter, because the contraction and heating ignite explosive carbon burning under degenerate conditions.
A nuclear burning front runs through the white dwarf from the inside out and tears it apart. Then the outermost layers closely beneath the surface of the white dwarf (containing 0.05 solar masses of matter) exhibit the right conditions for a γ-process. The p-nuclei are made in the same way as in the γ-process in core-collapse supernovae, and the same difficulties are encountered. In addition, 138La and 180mTa are not produced. A variation of the seed abundances by assuming increased s-process abundances only scales the abundances of the resulting p-nuclei without curing the problems of relative underproduction in the nuclear mass ranges given above. subChandrasekhar supernovae In a subclass of type Ia supernovae, the so-called subChandrasekhar supernova, the white dwarf may explode long before it reaches the Chandrasekhar limit because nuclear reactions in the accreted matter can already heat the white dwarf during its accretion phase and trigger explosive carbon burning prematurely. Helium-rich accretion favors this type of explosion. Helium burning ignites under degenerate conditions at the bottom of the accreted helium layer and causes two shockfronts. The one running inwards ignites the carbon explosion. The outwards-moving front heats the outer layers of the white dwarf and ejects them. Again, these outer layers are the site of a γ-process at temperatures of 2-3 GK. Due to the presence of α particles (helium nuclei), however, additional nuclear reactions become possible. Among these are some which release a large number of neutrons, such as 18O(α,n)21Ne, 22Ne(α,n)25Mg, and 26Mg(α,n)29Si. This allows a pn-process in that part of the outer layers which experiences temperatures above 3 GK. Those light p-nuclei which are underproduced in the γ-process can be made so efficiently in the pn-process that they even show much larger abundances than the other p-nuclei.
To obtain the observed solar relative abundances, a strongly enhanced s-process seed (by factors of 100-1000 or more) has to be assumed, which increases the yield of heavy p-nuclei from the γ-process. Neutron stars in binary star systems A neutron star in a binary star system can also accrete matter from the companion star on its surface. Combined hydrogen and helium burning ignites when the accreted layer of degenerate matter reaches a density of and a temperature exceeding . This leads to thermonuclear burning comparable to what happens in the outwards-moving shockfront of subChandrasekhar supernovae. The neutron star itself is not affected by the explosion, and therefore the nuclear reactions in the accreted layer can proceed for longer than in an explosion. This allows an rp-process to be established. It will continue until either all free protons are used up or the burning layer has expanded due to the increase in temperature, so that its density falls below that required for the nuclear reactions. It was shown that the properties of X-ray bursts in the Milky Way can be explained by an rp-process on the surface of accreting neutron stars. It remains unclear, however, whether matter (and if so, how much matter) can be ejected and escape the gravitational field of the neutron star. Only if this is the case can such objects be considered as possible sources of p-nuclei. Even if this is corroborated, the demonstrated endpoint of the rp-process limits the production to the light p-nuclei (which are underproduced in core-collapse supernovae). See also List of unsolved problems in physics References Nuclear physics Astrophysics Nucleosynthesis Supernovae
P-nuclei
Physics,Chemistry,Astronomy
3,675
1,407,978
https://en.wikipedia.org/wiki/PTCRB
The PTCRB was established in 1997 as a certification forum by select North American cellular operators. Now a pseudo-acronym, it no longer stands for its original meaning of PCS Type Certification Review Board (then named after the GSM 1900 MHz band in North America). The purpose of the PTCRB is to provide the framework within which device certification can take place for members of the PTCRB. This includes, but is not limited to, determination of the test specifications and methods necessary to support the certification process for 5G NR and 4G LTE wireless devices. The group is also responsible for providing input on device testing to standards development organizations. Certified device types PTCRB operates a certification program for smartphones, feature phones, tablets, Internet of Things (IoT) devices, notebook computers and modules. Certification standards PTCRB certification is based on standards developed by the 3rd Generation Partnership Project (3GPP), the Open Mobile Alliance (OMA) and other standards-developing organizations (SDOs) recognized by the PTCRB. In some cases, PTCRB certification may accommodate North American standards and additional requirements from the U.S. Federal Communications Commission (FCC), Innovation, Science and Economic Development Canada (ISED) or any other government agency that may have jurisdiction and/or competence in the matter. The plenary meetings, held during a quarterly "PTCRB Super Week", publish updates to the North America Permanent Reference Document NAPRD03 and the PTCRB Program Management Document (PPMD). The PTCRB Editorial Chair is David Trevayne-Smith of Eurofins E&E. PTCRB certification of a mobile device ensures compliance with cellular network standards within the PTCRB Operators' networks. Consequently, PTCRB Operators may block devices from their networks if they are not PTCRB certified.
The CTIA – The Wireless Association has been assigned as the administrator for the PTCRB certification process and is also responsible for the administration of PTCRB-issued IMEIs. The PTCRB Validation Group (PVG) is a group of test laboratory organizations working in the field of PTCRB-associated technologies. The PVG meets on a regular basis to discuss technical issues and the resolution of problems in a harmonized way. PVG works on open PTCRB Operators' Requests for Tests (RfTs) and the validation of test platforms and test cases for the relevant PTCRB operating frequencies. Full and Observer membership categories reflect the status and scopes of organizations working within the group. The Chair of the PVG is Muharrem Gedikoglu of Cetecom, with Graham Harvey of Sporton as Vice Chair. The PTCRB Marketing Working Group Co-Chairs are Johee Chang of Sporton and Amber Neves of Rohde & Schwarz. The Working Group meets regularly to produce marketing materials that help manufacturers understand the certification process, supports Test Laboratories, and assists the CTIA with attendance and support materials for exhibitions. Some mobile network operators may require further certification before a device is allowed to be placed on a network. See also CTIA – The Wireless Association CTIA - IoT Network Certified Federal Communications Commission Global Certification Forum IMEI References External links PTCRB: official website for the PTCRB. Membership application required to view information Mobile telecommunications standards Mobile phone standards Telecommunications organizations Standards of the United States
PTCRB
Technology
691
14,896,333
https://en.wikipedia.org/wiki/Citric%20acid/potassium-sodium%20citrate
Citric acid/potassium-sodium citrate is a drug used in the treatment of metabolic acidosis (a disorder in which the blood is too acidic). It is made up of citrate (the conjugate base of citric acid), a sodium cation and a potassium cation. It can also be used for the treatment of kidney stones by treating hypocitraturia. It does this by lowering the amount of acid in the urine, a process known as alkalinization. Increasing the amount of citrate in the urine is also important for kidney stone prevention because citrate creates chemical complexes with calcium, preventing the nucleation and agglomeration with oxalate that leads to kidney stones. Because of these two mechanisms of treatment, it can be used to treat both calcium oxalate and uric acid kidney stones. References Citric acid/potassium-sodium citrate entry in the public domain NCI Dictionary of Cancer Terms Ludwig, Wesley W.; Matlaga, Brian R. (2018-03-01). "Urinary Stone Disease: Diagnosis, Medical Therapy, and Surgical Management". Medical Clinics of North America. Urology. 102 (2): 265–277. doi:10.1016/j.mcna.2017.10.004. ISSN 0025-7125. Sorokin, Igor; Pearle, Margaret S. (2018-10-01). "Medical therapy for nephrolithiasis: State of the art". Asian Journal of Urology. Medical and surgical management of urolithiasis. 5 (4): 243–255. doi:10.1016/j.ajur.2018.08.005. ISSN 2214-3882. Kamatani, Naoyuki; Jinnah, H. A.; Hennekam, Raoul C. M.; van Kuilenburg, André B. P. (2021-01-01), Pyeritz, Reed E.; Korf, Bruce R.; Grody, Wayne W. (eds.), "6 - Purine and Pyrimidine Metabolism", Emery and Rimoin's Principles and Practice of Medical Genetics and Genomics (Seventh Edition), Academic Press, pp. 183–234, ISBN 978-0-12-812535-9, retrieved 2024-10-31 "CITRATE TO PREVENT CALCIUM AND URIC ACID STONES | Kidney Stone Program". kidneystones.uchicago.edu. Retrieved 2024-11-30. Acid–base disturbances Citric acid cycle compounds Potassium compounds Combination drugs
Citric acid/potassium-sodium citrate
Chemistry
550
30,747,790
https://en.wikipedia.org/wiki/Cationic%20polymerization
In polymer chemistry, cationic polymerization is a type of chain growth polymerization in which a cationic initiator transfers charge to a monomer, which then becomes reactive. This reactive monomer goes on to react similarly with other monomers to form a polymer. The types of monomers necessary for cationic polymerization are limited to alkenes with electron-donating substituents and heterocycles. Similar to anionic polymerization reactions, cationic polymerization reactions are very sensitive to the type of solvent used. Specifically, the ability of a solvent to form free ions will dictate the reactivity of the propagating cationic chain. Cationic polymerization is used in the production of polyisobutylene (used in inner tubes) and poly(N-vinylcarbazole) (PVK). Monomers Monomer scope for cationic polymerization is limited to two main types: alkene and heterocyclic monomers. Cationic polymerization of both types of monomers occurs only if the overall reaction is thermally favorable. In the case of alkenes, this is due to isomerization of the monomer double bond; for heterocycles, this is due to release of monomer ring strain and, in some cases, isomerization of repeating units. Monomers for cationic polymerization are nucleophilic and form a stable cation upon polymerization. Alkenes Cationic polymerization of olefin monomers occurs with olefins that contain electron-donating substituents. These electron-donating groups make the olefin nucleophilic enough to attack electrophilic initiators or growing polymer chains. At the same time, these electron-donating groups attached to the monomer must be able to stabilize the resulting cationic charge for further polymerization. Some reactive olefin monomers are shown below in order of decreasing reactivity, with heteroatom groups being more reactive than alkyl or aryl groups. Note, however, that the reactivity of the carbenium ion formed is the opposite of the monomer reactivity. 
Heterocyclic monomers Heterocyclic monomers that are cationically polymerized are lactones, lactams and cyclic amines. Upon addition of an initiator, cyclic monomers go on to form linear polymers. The reactivity of heterocyclic monomers depends on their ring strain. Monomers with large ring strain, such as oxirane, are more reactive than 1,3-dioxepane which has considerably less ring strain. Rings that are six-membered and larger are less likely to polymerize due to lower ring strain. Synthesis Initiation Initiation is the first step in cationic polymerization. During initiation, a carbenium ion is generated from which the polymer chain is made. The counterion should be non-nucleophilic, otherwise the reaction is terminated instantaneously. There are a variety of initiators available for cationic polymerization, and some of them require a coinitiator to generate the needed cationic species. Classical protic acids Strong protic acids can be used to form a cationic initiating species. High concentrations of the acid are needed in order to produce sufficient quantities of the cationic species. The counterion (A−) produced must be weakly nucleophilic so as to prevent early termination due to combination with the protonated alkene. Common acids used are phosphoric, sulfuric, fluoro-, and triflic acids. Only low molecular weight polymers are formed with these initiators. Lewis acids/Friedel-Crafts catalysts Lewis acids are the most common compounds used for initiation of cationic polymerization. The more popular Lewis acids are SnCl4, AlCl3, BF3, and TiCl4. Although these Lewis acids alone are able to induce polymerization, the reaction occurs much faster with a suitable cation source. The cation source can be water, alcohols, or even a carbocation donor such as an ester or an anhydride. In these systems the Lewis acid is referred to as a coinitiator while the cation source is the initiator. 
Upon reaction of the initiator with the coinitiator, an intermediate complex is formed which then goes on to react with the monomer unit. The counterion produced by the initiator-coinitiator complex is less nucleophilic than the A− counterion of a Brønsted acid. Halogens, such as chlorine and bromine, can also initiate cationic polymerization upon addition of the more active Lewis acids. Carbenium ion salts Stable carbenium ions are used to initiate chain growth of only the most reactive alkenes and are known to give well-defined structures. These initiators are most often used in kinetic studies due to the ease of measuring the disappearance of the carbenium ion absorbance. Common carbenium ions are trityl and tropylium cations. Ionizing radiation Ionizing radiation can form a radical-cation pair that can then react with a monomer to start cationic polymerization. Control of the radical-cation pairs is difficult and often depends on the monomer and reaction conditions. Formation of radical and anionic species is often observed. Propagation Propagation proceeds by addition of monomer to the active species, i.e. the carbenium ion. The monomer is added to the growing chain in a head-to-tail fashion; in the process, the cationic end group is regenerated to allow for the next round of monomer addition. Effect of temperature The temperature of the reaction has an effect on the rate of propagation. The overall activation energy for the polymerization (ER) is based upon the activation energies for the initiation (Ei), propagation (Ep), and termination (Et) steps: ER = Ei + Ep − Et. Generally, Et is larger than the sum of Ei and Ep, meaning the overall activation energy is negative. When this is the case, a decrease in temperature leads to an increase in the rate of propagation. The converse is true when the overall activation energy is positive. Chain length is also affected by temperature. Low reaction temperatures, in the range of 170–190 K, are preferred for producing longer chains. 
This comes as a result of the activation energy for termination and other side reactions being larger than the activation energy for propagation. As the temperature is raised, the energy barrier for the termination reaction is overcome, causing shorter chains to be produced during the polymerization process. Effect of solvent and counterion The solvent and the counterion (the gegenion) have a significant effect on the rate of propagation. According to intimate ion pair theory, the counterion and the carbenium ion can show different degrees of association, ranging from a covalent bond through a tight (unseparated) ion pair and a solvent-separated (partially separated) ion pair to free (completely dissociated) ions. The association is strongest as a covalent bond and weakest when the pair exists as free ions. In cationic polymerization, the ions tend to be in equilibrium between an ion pair (either tight or solvent-separated) and free ions. The more polar the solvent used in the reaction, the better the solvation and separation of the ions. Since free ions are more reactive than ion pairs, the rate of propagation is faster in more polar solvents. The size of the counterion is also a factor. A smaller counterion, with a higher charge density, will have stronger electrostatic interactions with the carbenium ion than will a larger counterion, which has a lower charge density. Further, a smaller counterion is more easily solvated by a polar solvent than a counterion with low charge density. The result is an increased propagation rate with increased solvating capability of the solvent. Termination Termination generally occurs by unimolecular rearrangement with the counterion. In this process, an anionic fragment of the counterion combines with the propagating chain end. This not only inactivates the growing chain, but it also terminates the kinetic chain by reducing the concentration of the initiator-coinitiator complex. Chain transfer Chain transfer can take place in two ways. 
One method of chain transfer is hydrogen abstraction from the active chain end to the counterion. In this process, the growing chain is terminated, but the initiator-coinitiator complex is regenerated to initiate more chains. The second method involves hydrogen abstraction from the active chain end to the monomer. This terminates the growing chain and also forms a new active carbenium ion-counterion complex which can continue to propagate, thus keeping the kinetic chain intact. Cationic ring-opening polymerization Cationic ring-opening polymerization follows the same mechanistic steps of initiation, propagation, and termination. However, in this polymerization reaction, the monomer units are cyclic, whereas the resulting polymer chains are linear. The linear polymers produced can have low ceiling temperatures, hence end-capping of the polymer chains is often necessary to prevent depolymerization. Kinetics The rate of propagation and the degree of polymerization can be determined from an analysis of the kinetics of the polymerization. The reaction equations for initiation, propagation, termination, and chain transfer can be written in a general form: I+ + M → M+ (initiation); M+ + M → M+ (propagation); M+ → dead polymer (termination); M+ + M → dead polymer + M+ (chain transfer to monomer). In these equations I+ is the initiator, M is the monomer, M+ is the propagating center, and ki, kp, kt, and ktr are the rate constants for initiation, propagation, termination, and chain transfer, respectively. For simplicity, counterions are not shown in the above reaction equations and only chain transfer to monomer is considered. The resulting rate equations are as follows, where brackets denote concentrations: Ri = ki[I+][M], Rp = kp[M+][M], Rt = kt[M+], and Rtr = ktr[M+][M]. Assuming steady-state conditions, i.e. the rate of initiation = rate of termination, ki[I+][M] = kt[M+], so that [M+] = ki[I+][M]/kt. This equation for [M+] can then be used in the equation for the rate of propagation: Rp = kp[M+][M] = kpki[I+][M]^2/kt. From this equation, it is seen that propagation rate increases with increasing monomer and initiator concentration. 
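As a numerical illustration of the steady-state kinetics above, the short Python sketch below computes the propagating-center concentration [M+] and the propagation rate Rp; all rate constants and concentrations are hypothetical, chosen only to show the scaling (Rp proportional to [I+] and to the square of [M]).

```python
# Steady-state kinetics of cationic polymerization (all numbers hypothetical).
#   Initiation:   I+ + M -> M+         Ri = ki[I+][M]
#   Propagation:  M+ + M -> M+         Rp = kp[M+][M]
#   Termination:  M+ -> dead polymer   Rt = kt[M+]

def propagation_rate(ki, kp, kt, initiator, monomer):
    """Return ([M+], Rp) under the steady-state assumption Ri = Rt."""
    active = ki * initiator * monomer / kt   # [M+] from ki[I+][M] = kt[M+]
    rp = kp * active * monomer               # Rp = kp[M+][M] = kp*ki*[I+][M]^2/kt
    return active, rp

# Illustrative rate constants and concentrations (not measured data):
m_plus, rp = propagation_rate(ki=1e-3, kp=1e2, kt=1e-1,
                              initiator=1e-3, monomer=1.0)
print(m_plus)  # steady-state [M+], ~1e-5 here
print(rp)      # Rp, ~1e-3 here; doubling [M] quadruples Rp
```

Doubling the monomer concentration in this sketch quadruples Rp, matching the [M]^2 dependence derived above.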
The degree of polymerization, Xn, can be determined from the rates of propagation and termination: Xn = Rp/Rt = kp[M]/kt. If chain transfer rather than termination is dominant, the equation for Xn becomes Xn = Rp/Rtr = kp/ktr. Living polymerization In 1984, Higashimura and Sawamoto reported the first living cationic polymerization for alkyl vinyl ethers. This type of polymerization has allowed for the synthesis of well-defined polymers. A key characteristic of living cationic polymerization is that termination is essentially eliminated, thus the cationic chain growth continues until all monomer is consumed. Commercial applications The largest commercial application of cationic polymerization is in the production of polyisobutylene (PIB) products which include polybutene and butyl rubber. These polymers have a variety of applications from adhesives and sealants to protective gloves and pharmaceutical stoppers. The reaction conditions for the synthesis of each type of isobutylene product vary depending on the desired molecular weight and which monomers are used. The conditions most commonly used to form low molecular weight (5–10 × 10^4 Da) polyisobutylene are initiation with AlCl3, BF3, or TiCl4 at a temperature range of −40 to 10 °C. These low molecular weight polyisobutylene polymers are used for caulking and as sealants. High molecular weight PIBs are synthesized at much lower temperatures of −100 to −90 °C and in a polar medium of methylene chloride. These polymers are used to make uncrosslinked rubber products and are additives for certain thermoplastics. Another characteristic of high molecular weight PIB is its low toxicity which allows it to be used as a base for chewing gum. The main chemical companies that produce polyisobutylene are Esso, ExxonMobil, and BASF. Butyl rubber, in contrast to PIB, is a copolymer in which the monomers isobutylene (~98%) and isoprene (2%) are polymerized in a process similar to high molecular weight PIBs. 
Butyl rubber polymerization is carried out as a continuous process with AlCl3 as the initiator. Its low gas permeability and good resistance to chemicals and aging make it useful for a variety of applications such as protective gloves, electrical cable insulation, and even basketballs. Large scale production of butyl rubber started during World War II, and roughly 1 billion pounds/year are produced in the U.S. today. Polybutene is another copolymer, containing roughly 80% isobutylene and 20% other butenes (usually 1-butene). The production of these low molecular weight polymers (300–2500 Da) is done within a large range of temperatures (−45 to 80 °C) with AlCl3 or BF3. Depending on the molecular weight of these polymers, they can be used as adhesives, sealants, plasticizers, additives for transmission fluids, and a variety of other applications. These materials are low-cost and are made by a variety of different companies including BP Chemicals, Esso, and BASF. Other polymers formed by cationic polymerization are homopolymers and copolymers of polyterpenes, such as pinenes (plant-derived products), that are used as tackifiers. In the field of heterocycles, 1,3,5-trioxane is copolymerized with small amounts of ethylene oxide to form the highly crystalline polyoxymethylene plastic. Also, the homopolymerization of alkyl vinyl ethers is achieved only by cationic polymerization. References Polymerization reactions
Cationic polymerization
Chemistry,Materials_science
2,870
32,753,316
https://en.wikipedia.org/wiki/Ferredoxin-thioredoxin%20reductase
Ferredoxin-thioredoxin reductase, systematic name ferredoxin:thioredoxin disulfide oxidoreductase, is a [4Fe-4S] protein that plays an important role in the ferredoxin/thioredoxin regulatory chain. It catalyzes the following reaction: 2 reduced ferredoxin + thioredoxin disulfide ⇌ 2 oxidized ferredoxin + thioredoxin thiols + 2 H+ Ferredoxin-thioredoxin reductase (FTR) converts an electron signal (photoreduced ferredoxin) to a thiol signal (reduced thioredoxin), regulating enzymes by reduction of specific disulfide groups. It catalyzes the light-dependent activation of several photosynthesis enzymes and constitutes the first historical example of a thiol/disulfide exchange cascade for enzyme regulation. It is a heterodimer of subunit alpha and subunit beta. Subunit alpha is the variable subunit, and beta is the catalytic chain. The structure of the beta subunit has been determined and found to fold around the FeS cluster. Biological Function Major groups of oxygen-producing, photosynthetic organisms such as cyanobacteria, algae, C4, C3, and crassulacean acid metabolism (CAM) plants use ferredoxin-thioredoxin reductase for carbon fixation regulation. FTR, as part of a greater ferredoxin-thioredoxin system, allows plants to change their metabolism based on light intensity. Specifically, the ferredoxin-thioredoxin system controls enzymes in the Calvin cycle and pentose phosphate pathway, allowing plants to balance carbohydrate synthesis and degradation based on the availability of light. In the light, photosynthesis harnesses light energy and reduces ferredoxin. Using FTR, reduced ferredoxin then reduces thioredoxin. Thioredoxin, through thiol/disulfide exchange, then activates carbohydrate synthesis enzymes such as chloroplast fructose-1,6-bisphosphatase, sedoheptulose-bisphosphatase, and phosphoribulokinase. As a result, light uses FTR to activate carbohydrate biosynthesis. In the dark, ferredoxin remains oxidized. 
This leaves thioredoxin inactive and allows carbohydrate breakdown to dominate metabolism. Structure Ferredoxin-thioredoxin reductase is an α-β heterodimer of approximately 30 kDa. The FTR structure across different plant species includes a conserved catalytic β subunit and a variable α subunit. The structure of FTR from Synechocystis sp. PCC6803 has been studied in detail and resolved at 1.6 Å. FTR resembles a thin concave disc, 10 Å across the center, where a [4Fe-4S] cluster resides. One side of the cluster center contains redox-active disulfide bonds that reduce thioredoxin, while the opposite side docks with reduced ferredoxin. This two-sided disc structure allows FTR to simultaneously interact with thioredoxin and ferredoxin. The variable α subunit has an open β barrel structure made of five antiparallel β strands. Its interaction with the catalytic subunit occurs mainly with two loops between β strands. The residues in these two loops are mostly conserved and are thought to stabilize the [4Fe-4S] cluster in the catalytic subunit. Structurally, the α subunit is very similar to the PsaE protein, a subunit of Photosystem I, though the similarity is not seen in their sequences or functions. The catalytic β subunit has a generally α-helical structure with a [4Fe-4S] center. The FeS center and redox-active cysteine residues are located within the loops of these helices. Cysteine-55, 74, 76, and 85 are coordinated to the iron atoms of the cubane-type cluster. Enzymatic Mechanism FTR is unique among thioredoxin reductases because it uses an Fe-S cluster cofactor rather than flavoproteins to reduce disulfide bonds. FTR catalysis begins with its interaction with reduced ferredoxin. This proceeds with the attraction between FTR Lys-47 and ferredoxin Glu-92. One electron from ferredoxin and one electron from the Fe-S center is abstracted to break FTR's Cys-87 and Cys-57 disulfide bond, create a nucleophilic Cys-57, and oxidize the Fe-S center from [4Fe-4S]2+ to [4Fe-4S]3+. 
The structure of this one-electron (from ferredoxin) intermediate is contested: Staples et al. suggest Cys-87 is coordinated to a sulfur in the Fe-S center, while Dai et al. argue Cys-87 is coordinated to an iron. Next, the nucleophilic Cys-57, encouraged by an adjacent histidine residue, attacks a disulfide bridge on thioredoxin, creating a hetero-disulfide thioredoxin intermediate. Lastly, a newly docked ferredoxin molecule delivers the final electron to the FeS center, reducing it to its original 2+ state, re-forming the disulfide between Cys-87 and Cys-57, and fully reducing thioredoxin to two thiols. References External links Protein domains EC 1.8.7
Ferredoxin-thioredoxin reductase
Biology
1,189
19,189,578
https://en.wikipedia.org/wiki/TransferJet
TransferJet is a close proximity wireless transfer technology initially proposed by Sony and demonstrated publicly in early 2008. By touching (or bringing very close together) two electronic devices, TransferJet allows high speed exchange of data. TransferJet is conceived as a touch-activated interface for applications requiring high-speed, peer-to-peer data transfer between two devices without the need for external physical connectors. TransferJet's maximum physical layer transmission rate is 560 Mbit/s. After allowing for error correction and other protocol overhead, the effective maximum throughput is 375 Mbit/s. TransferJet adjusts the data rate downward according to the wireless environment, thereby maintaining a robust link even when the surrounding wireless condition fluctuates. TransferJet has the capability of identifying the unique MAC addresses of individual devices, enabling users to choose which devices can establish a connection. By allowing only devices inside the household, for example, one can prevent data theft by strangers while riding a crowded train. If, on the other hand, one wishes to connect the device with any other device at a party, this can be done by simply disabling the filtering function. TransferJet uses the same frequency spectrum as UWB, but occupies only a section of this band available as a common worldwide channel. Since the RF power is kept under −70 dBm/MHz, it can operate in the same manner as that of UWB devices equipped with DAA functionality. In addition, this low power level also ensures that there will be no interference to other wireless systems, including other TransferJet systems, operating nearby. 
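The rate figures quoted above imply a protocol efficiency of roughly two thirds; the following Python sketch works through that arithmetic, with the 1 GB file size chosen purely for illustration.

```python
# Back-of-the-envelope figures from the TransferJet numbers quoted above:
# 560 Mbit/s maximum physical-layer rate, 375 Mbit/s effective throughput.

PHY_RATE = 560e6        # bit/s
EFFECTIVE_RATE = 375e6  # bit/s, after error correction and protocol overhead

efficiency = EFFECTIVE_RATE / PHY_RATE   # fraction of the PHY rate left for payload
seconds_per_gb = 8e9 / EFFECTIVE_RATE    # time to move a hypothetical 1 GB file

print(f"protocol efficiency: {efficiency:.0%}")       # ~67%
print(f"1 GB transfer time: {seconds_per_gb:.1f} s")  # ~21.3 s
```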
By reducing the RF power and spatial reach down to a few centimeters (about an inch or less), a TransferJet connection in its most basic mode does not require any initial setup procedure by the user for either device, and the action of spontaneously touching one device with another will automatically trigger the data transfer. More complex usage scenarios will require various means to select the specific data to send as well as the location to store (or method to process) the received data. TransferJet utilizes a newly developed TransferJet Coupler based on the principle of electric induction field as opposed to radiation field for conventional antennas. The functional elements of a generic TransferJet Coupler consist of a coupling electrode or plate, a resonant stub and ground. Compared to conventional radiating antennas, the TransferJet Coupler achieves higher transmission gain and more efficient coupling in the near-field while providing sharp attenuation at longer distances. Because the Coupler generates longitudinal electric fields, there is no polarization and the devices can be aligned at any angle. Although sometimes confused with Near Field Communication, TransferJet depends on an entirely different technology and is also generally targeted for different usage scenarios focusing on high-speed data transfer. Thus these two systems will not interfere with each other and can even co-exist in the same location, as already implemented in certain products. Other recent products combine TransferJet with wireless power to allow both data transfer and wireless charging capability simultaneously in the same location. TransferJet, NFC and wireless power are the three major near-field (contact-less) technologies that are expected to eliminate the physical connections and cables currently required to interface devices with each other. 
The TransferJet Consortium was established in July 2008 to advance and promote the TransferJet Format, by developing the technical specifications and compliance testing procedures as well as creating a market for TransferJet-compliant, interoperable products. In September 2011, the consortium was registered as an independent non-profit industry association. As of June 2015, the Consortium is led by five Promoter companies, consisting of: JRC, NTT, Olympus, Sony (consortium administrator), and Toshiba. The Consortium currently also has around thirty Adopter companies. The TransferJet regular typeface and TransferJet logos are trademarks managed and licensed by the TransferJet Consortium. Commercial products have been introduced since January 2010 and the initial product categories include digital cameras, laptop PCs, USB cradle accessories, USB dongle accessories and office/business equipment. Compliance testing equipment is provided by Agilent Technologies and certification services are offered by Allion Test Labs. The first commercially available TransferJet development platform for embedded systems was launched by Icoteq Ltd in February 2015. Smartphones with integrated TransferJet functionality were launched in June 2015 from Fujitsu and Bkav. Other product vendors include Buffalo and E-Globaledge. TransferJet X is a new second-generation TransferJet specification capable of data transfer speeds of 13.1 Gbit/s and above, or about 20 times the speed of current TransferJet. This specification uses the 60 GHz band and requires only 2 ms or less to establish a connection prior to the actual data transfer, thereby enabling the exchange of large content files even in the short amount of time it takes, for example, for a person to walk through a wicket gate. The TransferJet Consortium is currently defining the details of the TransferJet X ecosystem, based on the IEEE 802.15.3e standard completed and published in June 2017. 
The HRCP Research and Development Partnership, established in 2016, is developing an SoC solution for implementing TransferJet X in a variety of products and services to be released starting around 2020. References External links Transferjet.org: TransferJet Consortium official website Interfaces Electrical connectors Network protocols Sony hardware Computer-related introductions in 2008
TransferJet
Technology
1,096
38,303,683
https://en.wikipedia.org/wiki/Divided%20domain
In algebra, a divided domain is an integral domain R in which every prime ideal p satisfies p = pR_p, where R_p denotes the localization of R at p. A locally divided domain is an integral domain whose localization at every maximal ideal is a divided domain. A Prüfer domain is a basic example of a locally divided domain. Divided domains were introduced by who called them AV-domains. References External links Commutative algebra
Divided domain
Mathematics
74
553,683
https://en.wikipedia.org/wiki/Drug%20holiday
A drug holiday (sometimes also called a drug vacation, medication vacation, structured treatment interruption, tolerance break, treatment break or strategic treatment interruption) is a period in which a patient stops taking one or more medications, anywhere from a few days to many months or even years, when the doctor or medical provider feels it is best for the patient. It is recommended not to discontinue any medication without the close supervision of the prescribing party. Planned drug holidays are used in numerous fields of medicine. They are perhaps best known in HIV therapy, after a study showed that stopping medication may stimulate the immune system to attack the virus. Another reason for drug holidays is to permit a drug to regain effectiveness after a period of continuous use, and to reduce the tolerance effect that may require increased dosages. In addition to drug holidays that are intended for therapeutic effect, they are sometimes used to reduce drug side effects so that patients may enjoy a more normal life for a period of time such as a weekend or holiday, or engage in a particular activity. For example, it is common for patients using SSRI anti-depressant therapies to take a drug holiday to reduce or avoid side effects associated with sexual dysfunction. In the treatment of mental illness, a drug holiday may be part of a progression toward treatment cessation. The holiday is also a tool to assess a drug's benefits against unwanted side effects, assuming that both will dissipate after an extended vacation. Evolution of the practice As a treatment for bipolar disorder One-day drug holidays in the lithium treatment of bipolar disorder, known as "lithium-free days", have been in use since the pioneering work of Noack and Trautner in 1951. This was found to reduce toxic buildup of the drug in some patients. As a treatment for Parkinson's disease Drug holidays from L-dopa found use in the early 1970s when Sweet et al. 
reported they were beneficial in terms of restoring the effectiveness of the treatment after adaptation by the brain had diminished its effectiveness. However, later studies revealed that such drug holidays conferred only temporary benefits to L-dopa responsiveness. Furthermore, there was an increased risk of death from associated complications, namely aspiration pneumonia, depression, and thromboembolic disease. L-dopa drug holidays are thus no longer recommended. As a treatment for schizophrenia Drug holidays from antipsychotic medication such as chlorpromazine have been used since the early 1980s to alleviate adverse reactions associated with long-term treatment. According to Ann Mortimer, it is acknowledged that established guidelines require long-term treatment in established schizophrenia, because the vast majority of evidence from discontinuation, "drug holiday", and ultra-low-dose studies conducted over many years points to significantly higher relapse rates when compared to maintenance treatment. If antipsychotics cannot be avoided in the near term, there is no reason to question their long-term usefulness. The same might be said of insulin in diabetes. As a treatment for HIV HIV selectively targets activated helper T-cells. Thus, over time, HIV will tend to selectively destroy those helper T-cells most capable of fighting the HIV infection off, effectively desensitizing the immune system to the infection. The purpose of a structured treatment interruption is to create a short interval in which the virus becomes common enough to stimulate reproduction of T-cells capable of fighting the virus. A 2006 HIV literature review noted that "two studies suggested that so-called drug holidays were of no benefit and might actually harm patients, while a third study suggested that the idea might still have value and should be revisited." See also Downregulation and upregulation Rebound effect Tachyphylaxis References Clinical pharmacology
Drug holiday
Chemistry
770
46,995,434
https://en.wikipedia.org/wiki/Swedish%20Chemical%20Society
The Swedish Chemical Society was established in 1883 and is a nonprofit organisation that promotes the development of chemistry in Sweden. The society is based on Wallingatan, Stockholm. The society publishes a monthly magazine. It also awards the annual Arrhenius Plaque for contributions in the field of science or to the society. References Learned societies of Sweden Chemistry societies Scientific organizations established in 1883 1883 establishments in Sweden
Swedish Chemical Society
Chemistry
88
2,955,843
https://en.wikipedia.org/wiki/Satplan
Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem, which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT. Given a problem instance in planning, with a given initial state, a given set of actions, a goal, and a horizon length, a formula is generated so that the formula is satisfiable if and only if there is a plan with the given horizon length. This is similar to simulation of Turing machines with the satisfiability problem in the proof of Cook's theorem. A plan can be found by testing the satisfiability of the formulas for different horizon lengths. The simplest way of doing this is to go through horizon lengths sequentially, 0, 1, 2, and so on. See also Graphplan References H. A. Kautz and B. Selman (1992). Planning as satisfiability. In Proceedings of the Tenth European Conference on Artificial Intelligence (ECAI'92), pages 359–363. H. A. Kautz and B. Selman (1996). Pushing the envelope: planning, propositional logic, and stochastic search. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI'96), pages 1194–1201. J. Rintanen (2009). Planning and SAT. In A. Biere, H. van Maaren, M. Heule and Toby Walsh, Eds., Handbook of Satisfiability, pages 483–504, IOS Press. Automated planning and scheduling
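The reduction described in the article can be illustrated with a deliberately tiny example: one boolean fluent (a light that starts off and must end on) and one action (toggle). The Python sketch below builds the horizon-bounded propositional formula (initial state, goal, effect axioms, frame axioms) and checks satisfiability by brute-force enumeration; a real Satplan system would instead emit CNF for a DPLL or WalkSAT solver, and all names here are illustrative.

```python
from itertools import product

# Toy "planning as satisfiability": a single boolean fluent `light`
# and one action `toggle`. Models of the formula for a given horizon
# correspond exactly to plans of that length.

def plan(horizon):
    fluents = [f"light_{t}" for t in range(horizon + 1)]
    actions = [f"toggle_{t}" for t in range(horizon)]
    names = fluents + actions

    def formula(v):
        if v["light_0"]:                    # initial state: light is off
            return False
        if not v[f"light_{horizon}"]:       # goal: light is on at the horizon
            return False
        for t in range(horizon):
            before, after = v[f"light_{t}"], v[f"light_{t + 1}"]
            if v[f"toggle_{t}"]:
                if after != (not before):   # effect axiom: toggle flips the light
                    return False
            elif after != before:           # frame axiom: otherwise nothing changes
                return False
        return True

    # Brute-force satisfiability check (stand-in for DPLL/WalkSAT):
    for bits in product([False, True], repeat=len(names)):
        v = dict(zip(names, bits))
        if formula(v):
            return [a for a in actions if v[a]]  # extract the plan from the model
    return None

print(plan(0))  # None: the formula for horizon 0 is unsatisfiable
print(plan(1))  # ['toggle_0']: a one-step plan exists
```

As the article notes, one searches horizon lengths 0, 1, 2, ... until the formula becomes satisfiable; here that first happens at horizon 1.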
Satplan
Technology
354
15,352,502
https://en.wikipedia.org/wiki/NXF2
Nuclear RNA export factor 2 is a protein that in humans is encoded by the NXF2 gene. Function This gene is one of a family of nuclear RNA export factor genes. It encodes a protein that is involved in mRNA export, is located in the nucleoplasm, and is associated with the nuclear envelope. Alternative splicing seems to be a common mechanism in this gene family. Two variants have been found for this gene. Interactions NXF2 has been shown to interact with NUP214. References Further reading
NXF2
Chemistry
111
9,296,327
https://en.wikipedia.org/wiki/Digital%20era%20governance
The first idea of a digital administrative law was proposed in Italy in 1978 by Giovanni Duni and was developed in 1991 under the name teleadministration. In the public administration debate about new public management (NPM), the concept of digital era governance (or DEG) is claimed by Patrick Dunleavy, Helen Margetts and their co-authors to have been replacing NPM since around 2000 to 2005. DEG has three key elements: reintegration (bringing issues back into government control, like US airport security after 9/11); needs-based holism (reorganizing government around distinct client groups); and digitization (fully exploiting the potential of digital storage and Internet communications to transform governance). Digital era governance implies that public sector organizations are facing new challenges and rapidly changing information technologies and information systems. Since the popularization of the theory, it has been applied and enriched through empirical work, such as a case study of Brunei's Information Department. The case study demonstrated the digital dividends that can be secured through the effective application of new technology in the digital governance process. Management approaches for digital era governance To create a better government by means of ICT, public sector organizations cannot rely on their traditional methods. Firstly, traditional public services are often fragmented, duplicative, and inconsistent across government. Secondly, silo-like organizational management then leads to more individual government offices that are less effective regarding the creation of public values. A reorientation towards more innovative approaches is necessary. The mere implementation of technological instruments, however, does not necessarily require a change of management. For an improved government it is necessary to go from traditional management approaches towards innovative approaches. 
Collaborative management One approach is a highly collaborative way of managing future policy implementations such as the development of proactive public services. Such proactive services require little to no action by the users and eliminate the "burden and confusion for citizens and businesses, who can now obtain services without dealing with bureaucracy". This new course of action on the one hand entails a significant transformation of government practices while on the other hand proactive or new services (e.g., automated assistants) might rely on third parties who are leveraging government information. This shift from government-provided towards third party services represents a distinct approach and forces public management to transfer some of their control by forming policy knowledge and resource networks. Problem oriented governance Another approach refers to a change of mindset. This means that management should consider the transformation from service-oriented governance towards problem-oriented governance which is an "approach to policy design and implementation that emphasizes the need for organizations to adapt their form and functioning to the nature of the public problems they seek to address". This will create a more holistic and efficient way of tackling citizens’ needs and future technological advances. Since the digital era management challenges are also about harmonizing "delivery-first, user-centric, agile work models while also satisfying, or alternatively, challenging, onerous hierarchical accountability requirements", public sector organizations require fundamental change of the inflexible culture of bureaucratic organizations, e.g., by establishing cross-functional problem-oriented teams. See also Cyberocracy E-governance E-government Government by algorithm References Political science Public administration Public policy Digital technology
Digital era governance
Technology
666
6,804,356
https://en.wikipedia.org/wiki/Swedish%20bitters
Swedish bitters, also called Swedish tincture, is a bitter and a traditional herbal tonic, the use of which dates back to the 15th century. Origins Swedish bitters is said to have been formulated in a similar way to ancient bitters by Paracelsus and rediscovered by 18th-century Swedish medics Dr. Klaus Samst and Dr. Urban Hjärne, though this appears to be mistaking the latter for his son, Kristian Henrik Hjärne, who himself invented a bitter. In modern times, Swedish bitters have been popularized by Maria Treben, an Austrian herbalist. The tonic is claimed to cure a large number of ailments, and to aid digestion. These claims are presented with little in the way of scientific evidence to support them, resting instead largely on anecdotal reports of positive results. Components The alcoholic Swedish bitters is purported to have a similar flavor to Angostura bitters, though perhaps drier. Nowadays, it is more common to prepare Swedish bitters from a dry herb mixture. Ingredients The following herbs are added to alcohol to make Swedish bitters: aloe as active ingredient water extract of the following herbs: angelica root (Angelica archangelica) camphor (Cinnamomum camphora) carline thistle root (Carlina acaulis) manna (Fraxinus ornus) myrrh rhubarb root (Rheum palmatum) saffron senna (Senna alexandrina) Venetian theriac (a mixture of many herbs and other substances) zedoary root (Curcuma zedoaria) There are variations on this recipe and herbal shops supply alcoholic and non-alcoholic versions of the drink. Maria Treben's book contains nine pages on this bitter, with a description of many ailments and their cures. References Herbalism Bitters Pharmacognosy
Swedish bitters
Chemistry
388
237,495
https://en.wikipedia.org/wiki/Information%20system
An information system (IS) is a formal, sociotechnical, organizational system designed to collect, process, store, and distribute information. From a sociotechnical perspective, information systems comprise four components: task, people, structure (or roles), and technology. Information systems can be defined as an integration of components for the collection, storage, and processing of data, comprising digital products that process data to facilitate decision making, with the data used to provide information and contribute to knowledge. A computer information system is a system that consists of people and computers processing or interpreting information. The term is also sometimes used to refer simply to a computer system with software installed. "Information systems" is also an academic field of study about systems with a specific reference to information and the complementary networks of computer hardware and software that people and organizations use to collect, filter, process, create and distribute data. An emphasis is placed on an information system having a definitive boundary, users, processors, storage, inputs, outputs and the aforementioned communication networks. In many organizations, the department or unit responsible for information systems and data processing is known as "information services". Any specific information system aims to support operations, management and decision-making. An information system is the information and communication technology (ICT) that an organization uses, and also the way in which people interact with this technology in support of business processes. Some authors make a clear distinction between information systems, computer systems, and business processes. Information systems typically include an ICT component but are not purely concerned with ICT, focusing instead on the end-use of information technology. Information systems are also different from business processes. 
Information systems help to control the performance of business processes. Alter argues that viewing an information system as a special type of work system has its advantages. A work system is a system in which humans or machines perform processes and activities using resources to produce specific products or services for customers. An information system is a work system in which activities are devoted to capturing, transmitting, storing, retrieving, manipulating and displaying information. As such, information systems inter-relate with data systems on the one hand and activity systems on the other. An information system is a form of communication system in which data represent and are processed as a form of social memory. An information system can also be considered a semi-formal language which supports human decision making and action. Information systems are the primary focus of study for organizational informatics. Overview Silver et al. (1995) provided two views on IS that include software, hardware, data, people, and procedures. The Association for Computing Machinery defines "Information systems specialists [as] focus[ing] on integrating information technology solutions and business processes to meet the information needs of businesses and other enterprises." There are various types of information systems, including transaction processing systems, decision support systems, knowledge management systems, learning management systems, database management systems, and office information systems. Critical to most information systems are information technologies, which are typically designed to enable humans to perform tasks for which the human brain is not well suited, such as handling large amounts of information, performing complex calculations, and controlling many simultaneous processes. Information technologies are a very important and malleable resource available to executives. 
Many companies have created a position of chief information officer (CIO) that sits on the executive board with the chief executive officer (CEO), chief financial officer (CFO), chief operating officer (COO), and chief technical officer (CTO). The CTO may also serve as CIO, and vice versa. The chief information security officer (CISO) focuses on information security management. Six components The six components that must come together in order to produce an information system are: Hardware: The term hardware refers to machinery and equipment. In a modern information system, this category includes the computer itself and all of its support equipment. The support equipment includes input and output devices, storage devices and communications devices. In pre-computer information systems, the hardware might include ledger books and ink. Software: The term software refers to computer programs and the manuals (if any) that support them. Computer programs are machine-readable instructions that direct the circuitry within the hardware parts of the system to function in ways that produce useful information from data. Programs are generally stored on some input/output medium, often a disk or tape. The "software" for pre-computer information systems included how the hardware was prepared for use (e.g., column headings in the ledger book) and instructions for using them (the guidebook for a card catalog). Data: Data are facts that are used by systems to produce useful information. In modern information systems, data are generally stored in machine-readable form on disk or tape until the computer needs them. In pre-computer information systems, the data were generally stored in human-readable form. Procedures: Procedures are the policies that govern the operation of an information system. "Procedures are to people what software is to hardware" is a common analogy that is used to illustrate the role of procedures in a system. 
People: Every system needs people if it is to be useful. Often the most overlooked element of the system, people are probably the component that most influences the success or failure of information systems. This includes "not only the users, but those who operate and service the computers, those who maintain the data, and those who support the network of computers". Internet: The internet is a combination of data and people (although this component is not necessary for functionality). Data is the bridge between hardware and people. This means that the data we collect is only data until we involve people. At that point, data becomes information. Types The "classic" view of information systems found in textbooks in the 1980s was a pyramid of systems that reflected the hierarchy of the organization, usually with transaction processing systems at the bottom of the pyramid, followed by management information systems, decision support systems, and ending with executive information systems at the top. Although the pyramid model remains useful since it was first formulated, a number of new technologies have been developed and new categories of information systems have emerged, some of which no longer fit easily into the original pyramid model. Some examples of such systems are: Artificial intelligence system Computing platform Data warehouses Decision support system Enterprise resource planning Enterprise systems Expert systems Geographic information system Global information system Management information system Multimedia information system Office automation Process control system Search engines Social information systems A computer(-based) information system is essentially an IS using computer technology to carry out some or all of its planned tasks. The basic components of computer-based information systems are: Hardware are the devices like the monitor, processor, printer, and keyboard, all of which work together to accept, process, and display data and information. 
Software are the programs that allow the hardware to process the data. Databases are the gathering of associated files or tables containing related data. Networks are a connecting system that allows diverse computers to distribute resources. Procedures are the commands for combining the components above to process information and produce the preferred output. The first four components (hardware, software, database, and network) make up what is known as the information technology platform. Information technology workers can then use these components to create information systems that watch over safety measures, risk and the management of data. These actions are known as information technology services. Certain information systems support parts of organizations, others support entire organizations, and still others support groups of organizations. Each department or functional area within an organization has its own collection of application programs or information systems. These functional area information systems (FAIS) are supporting pillars for more general IS, namely business intelligence systems and dashboards. As the name suggests, each FAIS supports a particular function within the organization, e.g.: accounting IS, finance IS, production-operation management (POM) IS, marketing IS, and human resources IS. In finance and accounting, managers use IT systems to forecast revenues and business activity, to determine the best sources and uses of funds, and to perform audits to ensure that the organization is fundamentally sound and that all financial reports and documents are accurate. Other types of organizational information systems are FAIS, transaction processing systems, enterprise resource planning, office automation systems, management information systems, decision support systems, expert systems, executive dashboards, supply chain management systems, and electronic commerce systems. 
Dashboards are a special form of IS that support all managers of the organization. They provide rapid access to timely information and direct access to structured information in the form of reports. Expert systems attempt to duplicate the work of human experts by applying reasoning capabilities, knowledge, and expertise within a specific domain. Development Information technology departments in larger organizations tend to strongly influence the development, use, and application of information technology in the business. A series of methodologies and processes can be used to develop and use an information system. Many developers use a systems engineering approach such as the system development life cycle (SDLC) to systematically develop an information system in stages. The stages of the system development life cycle are planning, system analysis and requirements, system design, development, integration and testing, implementation, and operations and maintenance. Recent research aims at enabling and measuring the ongoing, collective development of such systems within an organization by the entirety of human actors themselves. An information system can be developed in house (within the organization) or outsourced. This can be accomplished by outsourcing certain components or the entire system. A specific case is the geographical distribution of the development team (offshoring, global information system). A computer-based information system, following a definition of Langefors, is a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for drawing conclusions from such expressions. Geographic information systems, land information systems, and disaster information systems are examples of emerging information systems, but they can be broadly considered as spatial information systems. 
System development is done in stages which include: Problem recognition and specification Information gathering Requirements specification for the new system System design System construction System implementation Review and maintenance As an academic discipline The field of study called information systems encompasses a variety of topics including systems analysis and design, computer networking, information security, database management, and decision support systems. Information management deals with the practical and theoretical problems of collecting and analyzing information in a business function area, including business productivity tools, applications programming and implementation, electronic commerce, digital media production, data mining, and decision support. Communications and networking deals with telecommunication technologies. Information systems bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes for building IT systems within a computer science discipline. Computer information systems (CIS) is a field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society, whereas IS emphasizes functionality over design. Several IS scholars have debated the nature and foundations of information systems, which has its roots in other reference disciplines such as computer science, engineering, mathematics, management science, cybernetics, and others. Information systems can also be defined as a collection of hardware, software, data, people, and procedures that work together to produce quality information. Related terms Similar to computer science, other disciplines can be seen as both related and foundation disciplines of IS. 
The domain of study of IS involves the study of theories and practices related to the social and technological phenomena which determine the development, use, and effects of information systems in organizations and society. But, while there may be considerable overlap of the disciplines at the boundaries, the disciplines are still differentiated by the focus, purpose, and orientation of their activities. In a broad scope, information systems is a scientific field of study that addresses the range of strategic, managerial, and operational activities involved in the gathering, processing, storing, distributing, and use of information and its associated technologies in society and organizations. The term information systems is also used to describe an organizational function that applies IS knowledge in industry, government agencies, and not-for-profit organizations. Information systems often refers to the interaction between algorithmic processes and technology. This interaction can occur within or across organizational boundaries. An information system is a technology an organization uses, and also the way in which the organization interacts with the technology and the way in which the technology works with the organization's business processes. Information systems are distinct from information technology (IT) in that an information system has an information technology component that interacts with the processes' components. One problem with that approach is that it prevents the IS field from being interested in non-organizational use of ICT, such as in social networking, computer gaming, mobile personal usage, etc. A different way of differentiating the IS field from its neighbours is to ask, "Which aspects of reality are most meaningful in the IS field and other fields?" This approach, based on philosophy, helps to define not just the focus, purpose, and orientation, but also the dignity, destiny, and responsibility of the field among other fields. 
Business informatics is a related discipline that is well-established in several countries, especially in Europe. While information systems has been said to have an "explanation-oriented" focus, business informatics has a more "solution-oriented" focus and includes information technology elements and construction and implementation-oriented elements. Career pathways Information systems workers enter a number of different careers: Information system strategy Management information systems – A management information system (MIS) is an information system used for decision-making, and for the coordination, control, analysis, and visualization of information in an organization. Project management – Project management is the practice of initiating, planning, executing, controlling, and closing the work of a team to achieve specific goals and meet specific success criteria at the specified time. Enterprise architecture – A well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a comprehensive approach at all times, for the successful development and execution of strategy. IS development IS organization IS consulting IS security IS auditing There is a wide variety of career paths in the information systems discipline. "Workers with specialized technical knowledge and strong communications skills will have the best prospects. Workers with management skills and an understanding of business practices and principles will have excellent opportunities, as companies are increasingly looking to technology to drive their revenue." Because information technology is important to the operation of contemporary businesses, it offers many employment opportunities. The information systems field includes the people in organizations who design and build information systems, the people who use those systems, and the people responsible for managing those systems. 
The demand for traditional IT staff such as programmers, business analysts, systems analysts, and designers is significant. Many well-paid jobs exist in areas of information technology. At the top of the list is the chief information officer (CIO). The CIO is the executive who is in charge of the IS function. In most organizations, the CIO works with the chief executive officer (CEO), the chief financial officer (CFO), and other senior executives. Therefore, he or she actively participates in the organization's strategic planning process. Bachelor of Business Information Systems Research Information systems research is generally interdisciplinary, concerned with the study of the effects of information systems on the behaviour of individuals, groups, and organizations. Hevner et al. (2004) categorized research in IS into two scientific paradigms: behavioural science, which seeks to develop and verify theories that explain or predict human or organizational behavior, and design science, which extends the boundaries of human and organizational capabilities by creating new and innovative artifacts. Salvatore March and Gerald Smith proposed a framework for researching different aspects of information technology including outputs of the research (research outputs) and activities to carry out this research (research activities). They identified research outputs as follows: Constructs, which are concepts that form the vocabulary of a domain. They constitute a conceptualization used to describe problems within the domain and to specify their solutions. A model, which is a set of propositions or statements expressing relationships among constructs. A method, which is a set of steps (an algorithm or guideline) used to perform a task. Methods are based on a set of underlying constructs and a representation (model) of the solution space. An instantiation, which is the realization of an artefact in its environment. 
Research activities include the following: Build an artefact to perform a specific task. Evaluate the artefact to determine if any progress has been achieved. Given an artefact whose performance has been evaluated, determine why and how the artefact worked or did not work within its environment; therefore, theorize and justify theories about IT artefacts. Although information systems as a discipline has been evolving for over 30 years now, the core focus or identity of IS research is still subject to debate among scholars. There are two main views around this debate: a narrow view focusing on the IT artifact as the core subject matter of IS research, and a broad view that focuses on the interplay between social and technical aspects of IT that is embedded into a dynamic evolving context. A third view calls on IS scholars to pay balanced attention to both the IT artifact and its context. Since the study of information systems is an applied field, industry practitioners expect information systems research to generate findings that are immediately applicable in practice. This is not always the case, however, as information systems researchers often explore behavioral issues in much more depth than practitioners would expect them to do. This may render information systems research results difficult to understand, and has led to criticism. In the last ten years, the business trend has been marked by a considerable increase in the role of the Information Systems Function (ISF), especially with regard to supporting enterprise strategies and operations. It has become a key factor in increasing productivity and supporting value creation. To study an information system itself, rather than its effects, information systems models are used, such as EATPUT. 
The international body of Information Systems researchers, the Association for Information Systems (AIS), and its Senior Scholars Forum Subcommittee on Journals (202), proposed a list of 11 journals that the AIS deems as 'excellent'. According to the AIS, this list of journals recognizes topical, methodological, and geographical diversity. The review processes are stringent, editorial board members are widely-respected and recognized, and there is international readership and contribution. The list is (or should be) used, along with others, as a point of reference for promotion and tenure and, more generally, to evaluate scholarly excellence. A number of annual information systems conferences are run in various parts of the world, the majority of which are peer reviewed. The AIS directly runs the International Conference on Information Systems (ICIS) and the Americas Conference on Information Systems (AMCIS), while AIS affiliated conferences include the Pacific Asia Conference on Information Systems (PACIS), European Conference on Information Systems (ECIS), the Mediterranean Conference on Information Systems (MCIS), the International Conference on Information Resources Management (Conf-IRM) and the Wuhan International Conference on E-Business (WHICEB). AIS chapter conferences include Australasian Conference on Information Systems (ACIS), Scandinavian Conference on Information Systems (SCIS), Information Systems International Conference (ISICO), Conference of the Italian Chapter of AIS (itAIS), Annual Mid-Western AIS Conference (MWAIS) and Annual Conference of the Southern AIS (SAIS). EDSIG, which is the special interest group on education of the AITP, organizes the Conference on Information Systems and Computing Education and the Conference on Information Systems Applied Research which are both held annually in November. 
See also Related subjects Formative context Human–computer interaction Informatics Bioinformatics Business informatics Cheminformatics Disaster informatics Geoinformatics Health informatics Information science Library science Web sciences Components Data architect Data modeling Data processing system Data Reference Model Database EATPUT Metadata Predictive Model Markup Language Semantic translation Three schema approach Implementation Enterprise information system Environmental Modeling Center Institute for Operations Research and the Management Sciences (INFORMS) References Further reading Rainer, R. Kelly and Cegielski, Casey G. (2009). "Introduction to Information Systems: Enabling and Transforming Business, 3rd Edition" Kroenke, David (2008). Using MIS – 2nd Edition. Lindsay, John (2000). Information Systems – Fundamentals and Issues. Kingston University, School of Information Systems Dostal, J. School information systems (Skolni informacni systemy). In Infotech 2007 – modern information and communication technology in education. Olomouc, EU: Votobia, 2007. s. 540 – 546. . O'Leary, Timothy and Linda. (2008). Computing Essentials Introductory 2008. McGraw-Hill on Computing2008.com Sage, S.M. "Information Systems: A brief look into history", Datamation, 63–69, Nov. 1968. – Overview of the early history of IS. External links Association for Information Systems (AIS) IS History website by AIS Center for Information Systems Research – Massachusetts Institute of Technology European Research Center for Information Systems Systems
Information system
Technology
4,291
13,464,844
https://en.wikipedia.org/wiki/Bell%20Laboratories%20Layered%20Space-Time
Bell Laboratories Layered Space-Time (BLAST) is a transceiver architecture for offering spatial multiplexing over multiple-antenna wireless communication systems. Such systems have multiple antennas at both the transmitter and the receiver in an effort to exploit the many different paths between the two in a highly scattering wireless environment. BLAST was developed by Gerard Foschini at Lucent Technologies' Bell Laboratories (now Nokia Bell Labs). By careful allocation of the data to be transmitted to the transmitting antennas, multiple data streams can be transmitted simultaneously within a single frequency band — the data capacity of the system then grows directly in line with the number of antennas (subject to certain assumptions). This represents a significant advance on single-antenna systems. V-BLAST V-BLAST (Vertical-Bell Laboratories Layered Space-Time) is a detection algorithm for the receiver of multi-antenna MIMO systems, first proposed in 1996 at Bell Laboratories in New Jersey, United States, by Gerard J. Foschini. It eliminates the interference caused by the transmitters successively. Its principle is straightforward: first detect the most powerful signal, then regenerate that transmitter's contribution to the received signal from this decision. The regenerated signal is subtracted from the received signal and, from this new signal, the receiver proceeds to detect the second most powerful transmitter, since the first has already been removed, and so forth. This yields a received vector containing less interference. The complete detection algorithm can be summarized recursively as follows: Initialize: Recursive: See also Space–time code — a means for using multiple antennas to improve reliability rather than data-rate. Telecommunication References Further reading External links http://www.alcatel-lucent.com/wps/portal/BellLabs Antennas Detection theory
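The successive-cancellation procedure described in the V-BLAST section can be sketched in a few lines of NumPy. This is a minimal textbook-style zero-forcing V-BLAST detector, not Bell Labs code: the function name, the QPSK alphabet, the channel, and the row-norm ordering rule are all illustrative assumptions.

```python
import numpy as np

def vblast_zf_sic(H, y, constellation):
    """Zero-forcing V-BLAST: detect streams one at a time, strongest first,
    cancelling each detected stream from the received vector y = H x + n."""
    y = y.copy()
    n_tx = H.shape[1]
    remaining = list(range(n_tx))        # indices of undetected streams
    decisions = np.zeros(n_tx, dtype=complex)
    for _ in range(n_tx):
        # Pseudo-inverse of the channel restricted to undetected streams
        W = np.linalg.pinv(H[:, remaining])
        # Choose the stream with the least noise amplification
        # (smallest row norm of W) -> "most powerful" detected first
        k = int(np.argmin(np.sum(np.abs(W) ** 2, axis=1)))
        z = W[k] @ y
        # Hard decision: slice to the nearest constellation point
        s_hat = constellation[np.argmin(np.abs(constellation - z))]
        idx = remaining[k]
        decisions[idx] = s_hat
        # Regenerate and subtract this stream's contribution
        y = y - H[:, idx] * s_hat
        remaining.pop(k)
    return decisions

# Noise-free example: 2x2 complex channel, QPSK symbols
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
x = qpsk[[0, 3]]
y = H @ x
print(np.allclose(vblast_zf_sic(H, y, qpsk), x))  # noise-free: recovers x
```

With noise added, detection ordering matters because early decision errors propagate into every later cancellation step, which is why the "most powerful first" rule is used.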
Bell Laboratories Layered Space-Time
Engineering
377
65,596,698
https://en.wikipedia.org/wiki/AM-0902
AM-0902 is a drug which acts as a potent and selective antagonist for the TRPA1 receptor, and has analgesic and antiinflammatory effects. References Purines Oxadiazoles 4-Chlorophenyl compounds Phenyl compounds
AM-0902
Chemistry
59
301,135
https://en.wikipedia.org/wiki/Tacoma%20Narrows%20Bridge%20%281940%29
The 1940 Tacoma Narrows Bridge, the first bridge at this location, was a suspension bridge in the U.S. state of Washington that spanned the Tacoma Narrows strait of Puget Sound between Tacoma and the Kitsap Peninsula. It opened to traffic on July 1, 1940, and dramatically collapsed into Puget Sound on November 7 of the same year. The bridge's collapse has been described as "spectacular" and in subsequent decades "has attracted the attention of engineers, physicists, and mathematicians". Throughout its short existence, it was the world's third-longest suspension bridge by main span, behind the Golden Gate Bridge and the George Washington Bridge. Construction began in September 1938. From the time the deck was built, it began to move vertically in windy conditions, so construction workers nicknamed the bridge "Galloping Gertie". The motion continued after the bridge opened to the public, despite several damping measures. The bridge's main span finally collapsed in winds on the morning of November 7, 1940, as the deck oscillated in an alternating twisting motion that gradually increased in amplitude until the deck tore apart. The violent swaying and eventual collapse resulted in the death of a cocker spaniel named "Tubby", as well as inflicting injuries on people fleeing the disintegrating bridge or attempting to rescue the stranded dog. Efforts to replace the bridge were delayed by US involvement in World War II, as well as engineering and finance issues, but in 1950, a new Tacoma Narrows Bridge opened in the same location, using the original bridge's tower pedestals and cable anchorages. The portion of the bridge that fell into the water now serves as an artificial reef. The bridge's collapse had a lasting effect on science and engineering. 
In many physics textbooks, the event is presented as an example of elementary forced mechanical resonance, but it was more complicated in reality; the bridge collapsed because moderate winds produced aeroelastic flutter that was self-exciting and unbounded: for any constant sustained wind speed above about , the amplitude of the (torsional) flutter oscillation would continuously increase, with a negative damping factor, i.e., a reinforcing effect, opposite to damping. The collapse boosted research into bridge aerodynamics-aeroelastics, which has influenced the designs of all later long-span bridges. Design and construction Proposals for a bridge between Tacoma and the Kitsap Peninsula date at least to the Northern Pacific Railway's 1889 trestle proposal, but concerted efforts began in the mid-1920s. The Tacoma Chamber of Commerce began campaigning and funding studies in 1923. Several noted bridge engineers were consulted, including Joseph B. Strauss, who went on to be chief engineer of the Golden Gate Bridge, and David B. Steinman, later the designer of the Mackinac Bridge. Steinman made several Chamber-funded visits and presented a preliminary proposal in 1929, but by 1931 the Chamber had cancelled the agreement because Steinman was not working hard enough to obtain financing. At the 1938 meeting of the structural division of the American Society of Civil Engineers, during the construction of the bridge, with its designer in the audience, Steinman predicted its failure. In 1937, the Washington State legislature created the Washington State Toll Bridge Authority and appropriated $5,000 (equivalent to $ today) to study the request by Tacoma and Pierce County for a bridge over the Narrows. From the start, financing of the bridge was a problem: Revenue from the proposed tolls would not be enough to cover construction costs; another expense was buying out the ferry contract from a private firm running services on the Narrows at the time. 
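The "negative damping" in the flutter explanation above can be illustrated with a toy linear oscillator. This is purely a sketch of the mathematical effect — the frequency, damping ratio, and initial twist below are made-up numbers, not measured properties of the bridge:

```python
import numpy as np

# Toy torsional oscillator: theta'' + 2*zeta*omega*theta' + omega^2*theta = 0.
# A negative damping ratio (zeta < 0), as in self-excited flutter, feeds
# energy into the motion, so the amplitude grows instead of decaying.
omega = 2 * np.pi * 0.2   # ~0.2 Hz mode frequency (illustrative)
zeta = -0.01              # negative damping ratio (illustrative)
dt, steps = 0.01, 60000   # simulate 600 s with semi-implicit Euler
theta, vel = 0.01, 0.0    # small initial twist (radians)
peak_early = peak_late = 0.0
for i in range(steps):
    acc = -2 * zeta * omega * vel - omega**2 * theta
    vel += acc * dt
    theta += vel * dt
    if i < steps // 10:                  # first 60 s
        peak_early = max(peak_early, abs(theta))
    elif i > 9 * steps // 10:            # last 60 s
        peak_late = max(peak_late, abs(theta))
print(peak_late > peak_early)  # True: the oscillation amplitude has grown
```

Flipping the sign of `zeta` to a positive value turns this into an ordinary damped oscillator whose amplitude decays, which is the behaviour the bridge's designers implicitly assumed.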
Nonetheless, there was strong support for the bridge from the United States Navy, which operated the Puget Sound Naval Shipyard in Bremerton, and from the United States Army, which ran McChord Field and Fort Lewis near Tacoma. Washington State engineer Clark Eldridge produced a preliminary tried-and-true conventional suspension bridge design, and the Washington State Toll Bridge Authority requested $11 million (equivalent to $ million today) from the federal Public Works Administration (PWA). Preliminary construction plans by the Washington Department of Highways had called for a set of trusses to sit beneath the roadway and stiffen it. However, "Eastern consulting engineers" — by which Eldridge meant Leon Moisseiff, the noted New York bridge engineer who served as designer and consultant engineer for the Golden Gate Bridge — petitioned the PWA and the Reconstruction Finance Corporation (RFC) to build the bridge for less. Moisseiff and Frederick Lienhard, the latter an engineer with what was then known in New York as the Port Authority, had published a paper that was probably the most important theoretical advance in the bridge engineering field of the decade. Their theory of elastic distribution extended the deflection theory that was originally devised by the Austrian engineer Josef Melan to horizontal bending under static wind load. They showed that the stiffness of the main cables (via the suspenders) would absorb up to one-half of the static wind pressure pushing a suspended structure laterally. This energy would then be transmitted to the anchorages and towers. Using this theory, Moisseiff argued for stiffening the bridge with a set of plate girders rather than the trusses proposed by the Washington State Toll Bridge Authority. This approach meant a slimmer, more elegant design, and also reduced the construction costs as compared with the Highway Department's design proposed by Eldridge. 
Moisseiff's design won out, inasmuch as the other proposal was considered to be too expensive. On June 23, 1938, the PWA approved nearly $6 million (equivalent to $ million today) for the Tacoma Narrows Bridge. Another $1.6 million ($ million today) was to be collected from tolls to cover the estimated total $8 million cost ($ million today). Following Moisseiff's design, bridge construction began on November 23, 1938. Construction took only nineteen months, at a cost of $6.4 million ($ million today), which was financed by the grant from the PWA and a loan from the RFC. The Tacoma Narrows Bridge, with a main span of , was the third-longest suspension bridge in the world at that time, following the George Washington Bridge between New Jersey and New York City, and the Golden Gate Bridge, connecting San Francisco with Marin County to its north. Because planners expected fairly light traffic volumes, the bridge was designed with two lanes, and it was just wide. This was quite narrow, especially in comparison with its length. With only the plate girders providing additional depth, the bridge's roadway section was also shallow. The decision to use such shallow and narrow girders proved the bridge's undoing. With such minimal girders, the deck of the bridge was insufficiently rigid and was easily moved about by winds; from the start, the bridge became infamous for its movement. A mild to moderate wind could cause alternate halves of the centre span to visibly rise and fall several feet over four- to five-second intervals. This flexibility was experienced by the builders and workmen during construction, which led some of the workers to christen the bridge "Galloping Gertie". The nickname soon stuck, and even the public (when the toll-paid traffic started) felt these motions on the day that the bridge opened on July 1, 1940. 
Attempt to control structural vibration

Since the structure experienced considerable vertical oscillations while it was still under construction, several strategies were used to reduce the motion of the bridge. They included:

Attachment of tie-down cables to the plate girders, which were anchored to 50-ton concrete blocks on the shore. This measure proved ineffective, as the cables snapped shortly after installation.
Addition of a pair of inclined cable stays that connected the main cables to the bridge deck at mid-span. These remained in place until the collapse but were also ineffective at reducing the oscillations.
Finally, the structure was equipped with hydraulic buffers installed between the towers and the floor system of the deck to damp longitudinal motion of the main span. The effectiveness of the hydraulic dampers was nullified, however, because the seals of the units were damaged when the bridge was sand-blasted before being painted.

The Washington State Toll Bridge Authority hired Frederick Burt Farquharson, an engineering professor at the University of Washington, to make wind tunnel tests and recommend solutions to reduce the oscillations of the bridge. Farquharson and his students built a 1:200-scale model of the bridge and a 1:20-scale model of a section of the deck. The first studies concluded on November 2, 1940—five days before the bridge collapse on November 7. He proposed two solutions:

To drill holes in the lateral girders and along the deck so that the airflow could circulate through them (in this way reducing lift forces).
To give a more aerodynamic shape to the transverse section of the deck by adding fairings or deflector vanes along the deck, attached to the girder fascia.

The first option was not favored, because of its irreversible nature. The second option was the chosen one, but it was not carried out, because the bridge collapsed five days after the studies were concluded.

Collapse

On November 7, 1940, at around 9:45 a.m.
PST, especially strong winds caused the bridge to sway wildly from side to side. At least two vehicles were on the bridge at the time – a delivery truck driven by Ruby Jacox and Arthur Hagen, employees of Rapid Transfer Company, and a vehicle driven by Leonard Coatsworth, editor at The News Tribune. The truck tipped over due to the swaying, while the car lost control and began to slide from side to side. Jacox, Hagen, and Coatsworth exited their respective vehicles and got off of the bridge on foot. Coatsworth's daughter's dog Tubby was left inside the car. Coatsworth later described his experience. Traffic was stopped to prevent additional vehicles from entering the bridge. Howard Clifford, a photographer for the Tacoma News Tribune, walked onto the bridge to try to save Tubby, but was forced to turn back when the span began to break apart in the center. At approximately 11:00 a.m., the bridge collapsed into the strait. Coatsworth received $814.40 (equivalent to $ today) in reimbursement from the Washington State Toll Bridge Authority for his car and its contents, including Tubby the cocker spaniel.

Film of collapse

The collapse was filmed with two cameras by Barney Elliott and by Harbine Monroe, owners of The Camera Shop in Tacoma, including the unsuccessful attempt to rescue the dog. Their footage was subsequently sold to Paramount Pictures, which duplicated it for newsreels in black-and-white and distributed it worldwide to movie theaters. Castle Films also received distribution rights for 8 mm home video. In 1998, The Tacoma Narrows Bridge Collapse was selected for preservation in the United States National Film Registry by the Library of Congress as being culturally, historically, or aesthetically significant. This footage is still shown to engineering, architecture, and physics students as a cautionary tale.
Elliott and Monroe's footage of the construction and collapse was shot on 16 mm Kodachrome film, but most copies in circulation are in black and white because newsreels of the day copied the film onto 35 mm black-and-white stock. There were also film-speed discrepancies between Monroe's and Elliott's footage, with Monroe filming at 24 frames per second and Elliott at 16 frames per second. As a result, most copies in circulation also show the bridge oscillating approximately 50% faster than real time, due to an assumption during conversion that the film was shot at 24 frames per second rather than the actual 16 fps. Another reel of film emerged in February 2019, taken by Arthur Leach from the Gig Harbor (westward) side of the bridge; it is one of the few known images of the collapse from that side. Leach was a civil engineer who served as toll collector for the bridge, and is believed to have been the last person to cross the bridge to the west before its collapse, trying to prevent further crossings from that side as the bridge became unstable. Leach's footage (originally on black-and-white film but then recorded to video cassette by filming the projection) also includes Leach's commentary at the time of the collapse.

Inquiry

Theodore von Kármán, the director of the Guggenheim Aeronautical Laboratory and a world-renowned aerodynamicist, was a member of the board of inquiry into the collapse. He reported that the State of Washington was unable to collect on one of the insurance policies for the bridge because its insurance agent had fraudulently pocketed the insurance premiums. The agent, Hallett R. French, who represented the Merchant's Fire Assurance Company, was charged and tried for grand larceny for withholding the premiums for $800,000 worth of insurance (equivalent to $ million today). The bridge was insured by many other policies that covered 80% of the $5.2 million structure's value (equivalent to $ million today).
Most of these were collected without incident. On November 28, 1940, the U.S. Navy's Hydrographic Office reported that the remains of the bridge were located at geographical coordinates , at a depth of .

Federal Works Agency Commission

A commission formed by the Federal Works Agency studied the collapse of the bridge. The board of engineers responsible for the report were Othmar Ammann, Theodore von Kármán, and Glenn B. Woodruff. Without drawing any definitive conclusions, the commission explored three possible failure causes:

Aerodynamic instability by self-induced vibrations in the structure
Eddy formations that might be periodic
Random effects of turbulence, that is, the random fluctuations in the velocity of the wind

Cause of the collapse

The original Tacoma Narrows Bridge was the first to be built with girders of carbon steel anchored in concrete blocks; preceding designs typically had open lattice beam trusses underneath the roadbed. This bridge was the first of its type to employ plate girders (pairs of deep I-beams) to support the roadbed. With the earlier designs, any wind would pass through the truss, but in the new design, the wind would be diverted above and below the structure. Shortly after construction finished at the end of June (opened to traffic on July 1, 1940), it was discovered that the bridge would sway and buckle dangerously in relatively mild windy conditions that are common for the area, and worse during severe winds. This vibration was transverse, one-half of the central span rising while the other lowered. Drivers would see cars approaching from the other direction rise and fall, riding the violent energy wave through the bridge. However, at that time the mass of the bridge was considered sufficient to keep it structurally sound. The failure of the bridge occurred when a never-before-seen twisting mode occurred, from winds at .
This is a so-called torsional vibration mode (which is different from the transversal or longitudinal vibration mode), whereby when the left side of the roadway went down, the right side would rise, and vice versa, i.e., the two halves of the bridge twisted in opposite directions, with the centre line of the road remaining still (motionless). This vibration was caused by aeroelastic fluttering. Fluttering is a physical phenomenon in which several degrees of freedom of a structure become coupled in an unstable oscillation driven by the wind. Here, unstable means that the forces and effects that cause the oscillation are not checked by forces and effects that limit the oscillation, so it does not self-limit but grows without bound. Eventually, the amplitude of the motion produced by the fluttering increased beyond the strength of a vital part, in this case the suspender cables. As several cables failed, the weight of the deck transferred to the adjacent cables, which became overloaded and broke in turn until almost all of the central deck fell into the water below the span.

Resonance (due to Von Kármán vortex street) hypothesis

The bridge's spectacular destruction is often used as an object lesson in the necessity to consider both aerodynamics and resonance effects in civil and structural engineering. Billah and Scanlan (1991) reported that, in fact, many physics textbooks (for example Resnick et al. and Tipler et al.) wrongly explain that the cause of the failure of the Tacoma Narrows bridge was externally forced mechanical resonance. Resonance is the tendency of a system to oscillate at larger amplitudes at certain frequencies, known as the system's natural frequencies. At these frequencies, even relatively small periodic driving forces can produce large amplitude vibrations, because the system stores energy. For example, a child using a swing realizes that if the pushes are properly timed, the swing can move with a very large amplitude.
The driving force, in this case the child pushing the swing, exactly replenishes the energy that the system loses if its frequency equals the natural frequency of the system. Usually, the approach taken by those physics textbooks is to introduce a forced, damped harmonic oscillator, defined by the second-order differential equation

m·x″(t) + c·x′(t) + k·x(t) = F₀·sin(ωt)

where m, c and k stand for the mass, damping coefficient and stiffness of the linear system and F₀ and ω represent the amplitude and the angular frequency of the exciting force. The solution of such an ordinary differential equation as a function of time t represents the displacement response of the system (given appropriate initial conditions). In the above system, resonance happens when ω is approximately ω₁ = √(k/m), i.e., ω₁ is the natural (resonant) frequency of the system. The actual vibration analysis of a more complicated mechanical system — such as an airplane, a building or a bridge — is based on the linearization of the equation of motion for the system, which is a multidimensional version of the equation above. The analysis requires eigenvalue analysis, whereby the natural frequencies of the structure are found, together with the so-called fundamental modes of the system, which are a set of independent displacements and/or rotations that specify completely the displaced or deformed position and orientation of the body or system, i.e., the bridge moves as a (linear) combination of those basic deformed positions. Each structure has natural frequencies. For resonance to occur, it is necessary to have also periodicity in the excitation force. The most tempting candidate for the periodicity in the wind force was assumed to be so-called vortex shedding. This is because bluff (non-streamlined) bodies — like bridge decks — in a fluid stream produce (or "shed") wakes, whose characteristics depend on the size and shape of the body and the properties of the fluid.
These wakes are accompanied by alternating low-pressure vortices on the downwind side of the body, the so-called Kármán vortex street or von Kármán vortex street. The body will in consequence try to move toward the low-pressure zone, in an oscillating movement called vortex-induced vibration. Eventually, if the frequency of vortex shedding matches the natural frequency of the structure, the structure will begin to resonate and the structure's movement can become self-sustaining. The frequency of the vortices in the von Kármán vortex street is called the Strouhal frequency f_s, and is given by

f_s = S·U/D

Here, U stands for the flow velocity, D is a characteristic length of the bluff body and S is the dimensionless Strouhal number, which depends on the body in question. For Reynolds numbers greater than 1000, the Strouhal number is approximately equal to 0.21. In the case of the Tacoma Narrows, D was approximately and S was 0.20. It was thought that the Strouhal frequency was close enough to one of the natural vibration frequencies of the bridge to cause resonance and therefore vortex-induced vibration. In the case of the Tacoma Narrows Bridge, this appears not to have been the cause of the catastrophic damage. According to Farquharson, the wind was steady at and the frequency of the destructive mode was 12 cycles/minute (0.2 Hz). This frequency was neither a natural mode of the isolated structure nor the frequency of blunt-body vortex shedding of the bridge at that wind speed, which was approximately 1 Hz. It can be concluded, therefore, that vortex shedding was not the cause of the bridge collapse. The event can be understood only by considering the coupled aerodynamic and structural system, which requires rigorous mathematical analysis to reveal all the degrees of freedom of the particular structure and the set of design loads imposed.
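The frequency mismatch above, and the contrast between ordinary resonance and the negative damping described earlier, can be checked with a few lines of arithmetic. This is a minimal sketch: the 19 m/s wind speed and the oscillator parameters (m, c, k) are illustrative assumptions (the actual figures are elided in this excerpt), while the Strouhal number S = 0.20 and the 0.2 Hz / ~1 Hz frequencies come from the text.

```python
import math

# --- Strouhal check: f_s = S * U / D ---
S = 0.20                  # Strouhal number for the deck section (from the text)
U = 19.0                  # assumed wind speed in m/s (~42 mph; the value is elided above)
f_shedding = 1.0          # Hz, blunt-body shedding frequency quoted in the text
f_torsional = 12 / 60.0   # Hz, the observed destructive torsional mode (12 cycles/min)

D_implied = S * U / f_shedding       # characteristic depth implied by these figures, in m
mismatch = f_shedding / f_torsional  # shedding runs 5x faster than the destructive mode

# --- Linear resonance: steady-state amplitude of m*x'' + c*x' + k*x = F0*sin(w*t) ---
def amplitude(w, m=1.0, c=0.05, k=1.0, F0=1.0):
    """|X(w)| = F0 / sqrt((k - m*w^2)^2 + (c*w)^2)."""
    return F0 / math.hypot(k - m * w * w, c * w)

w_n = math.sqrt(1.0)  # natural frequency sqrt(k/m) for the illustrative m = k = 1
peak_ratio = amplitude(w_n) / amplitude(5 * w_n)  # driving far off-resonance does little

# --- Negative damping: the envelope exp(-c/(2m)*t) grows instead of decaying ---
t = 60.0
decaying = math.exp(-0.05 / 2 * t)  # ordinary damping (c > 0): motion dies out
growing = math.exp(+0.05 / 2 * t)   # negative damping (c < 0): unbounded growth

print(D_implied, mismatch, decaying < 1 < growing)
```

Under these assumptions the shedding frequency sits a factor of five above the 0.2 Hz torsional mode, so the sharp linear-resonance peak is never excited; it is the growth term, not a resonant driving force, that matches the flutter mechanism the text describes.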
Vortex-induced vibration is a far more complex process that involves both the external wind-initiated forces and internal self-excited forces that lock on to the motion of the structure. During lock-on, the wind forces drive the structure at or near one of its natural frequencies, but as the amplitude increases this has the effect of changing the local fluid boundary conditions, so that this induces compensating, self-limiting forces, which restrict the motion to relatively benign amplitudes. This is clearly not a linear resonance phenomenon, even if the bluff body has linear behaviour, since the exciting force amplitude is a nonlinear function of the structural response.

Resonance vs. non-resonance explanations

Billah and Scanlan state that Lee Edson in his biography of Theodore von Kármán is a source of misinformation: "The culprit in the Tacoma disaster was the Karman vortex street." However, the Federal Works Agency report of the investigation, of which von Kármán was part, concluded that

A group of physicists cited "wind-driven amplification of the torsional oscillation" as distinct from resonance:

To some degree the debate is due to the lack of a commonly accepted precise definition of resonance. Billah and Scanlan provide the following definition of resonance: "In general, whenever a system capable of oscillation is acted on by a periodic series of impulses having a frequency equal to or nearly equal to one of the natural frequencies of the oscillation of the system, the system is set into oscillation with a relatively large amplitude." They then state later in their paper: "Could this be called a resonant phenomenon? It would appear not to contradict the qualitative definition of resonance quoted earlier, if we now identify the source of the periodic impulses as self-induced, the wind supplying the power, and the motion supplying the power-tapping mechanism.
If one wishes to argue, however, that it was a case of externally forced linear resonance, the mathematical distinction ... is quite clear, self-exciting systems differing strongly enough from ordinary linear resonant ones."

Link to the Armistice Day blizzard

The weather system that caused the bridge collapse went on to cause the 1940 Armistice Day Blizzard that killed 145 people in the Midwestern United States:

Fate of the collapsed superstructure

Efforts to salvage the bridge began almost immediately after its collapse and continued into May 1943. Two review boards, one appointed by the federal government and one appointed by the state of Washington, concluded that repair of the bridge was impossible, and the entire bridge would have to be dismantled and an entirely new bridge superstructure built. With steel being a valuable commodity because of the involvement of the United States in World War II, steel from the bridge cables and the suspension span was sold as scrap metal to be melted down. The salvage operation cost the state more than was returned from the sale of the material, a net loss of over $350,000 (). The cable anchorages, tower pedestals and most of the remaining substructure were relatively undamaged in the collapse, and were reused during construction of the replacement span that opened in 1950. The towers, which supported the main cables and road deck, suffered major damage at their bases from being deflected towards shore as a result of the collapse of the mainspan and the sagging of the sidespans. They were dismantled, and the steel sent to recyclers.

Preservation of the collapsed roadway

The underwater remains of the highway deck of the old suspension bridge act as a large artificial reef, and these are listed on the National Register of Historic Places with reference number 92001068. The Harbor History Museum has a display in its main gallery regarding the 1940 bridge, its collapse, and the subsequent two bridges.
A lesson for history

Othmar Ammann, a leading bridge designer and member of the Federal Works Agency Commission investigating the collapse of the Tacoma Narrows Bridge, wrote:

Following the incident, engineers took extra caution to incorporate aerodynamics into their designs, and wind tunnel testing of designs was eventually made mandatory. The Bronx–Whitestone Bridge, which is of similar design to the 1940 Tacoma Narrows Bridge, was reinforced shortly after the collapse. Fourteen-foot-high (4.3 m) steel trusses were installed on both sides of the deck in 1943 to weigh down and stiffen the bridge in an effort to reduce oscillation. In 2003, the stiffening trusses were removed and aerodynamic fiberglass fairings were installed along both sides of the road deck. A key consequence was that suspension bridges reverted to a deeper and heavier truss design, including the replacement Tacoma Narrows Bridge (1950), until the development in the 1960s of box girder bridges with an airfoil shape, such as the Severn Bridge, which gave the necessary stiffness together with reduced torsional forces.

Replacement bridge

Because of shortages in materials and labor as a result of the involvement of the United States in World War II, it took 10 years before a replacement bridge was opened to traffic. This replacement bridge was opened to traffic on October 14, 1950, and is long, longer than the original bridge. The replacement bridge also has more lanes than the original bridge, which only had two traffic lanes, plus shoulders on both sides. Half a century later, the replacement bridge exceeded its traffic capacity, and a second, parallel suspension bridge was constructed to carry eastbound traffic. The suspension bridge that was completed in 1950 was reconfigured to carry only westbound traffic. The new parallel bridge opened to traffic in July 2007.
See also

Engineering disasters
Humen Pearl River Bridge, suspension bridge that shook violently until weight limits were implemented
List of bridge failures
List of structural failures and collapses
Millennium Bridge, London, for an engineering error
Silver Bridge, a bridge that collapsed in 1967 on the West Virginia–Ohio border
Volgograd Bridge, a bridge in Russia that experienced similar problems with the wind
Kutai Kartanegara Bridge, a suspension bridge that collapsed in Indonesia

External links

Tacoma Narrows Bridge at the Gig Harbor Peninsula Historical Society & Museum
Tacoma Narrows Bridge (1940)
https://en.wikipedia.org/wiki/Open%20platform
In computing, an open platform describes a software system which is based on open standards, such as published and fully documented external application programming interfaces (APIs) that allow the software to function in ways other than the original programmer intended, without requiring modification of the source code. Using these interfaces, a third party could integrate with the platform to add functionality. The opposite is a closed platform. An open platform does not mean it is open source; however, most open platforms have multiple implementations of APIs. For example, the Common Gateway Interface (CGI) is implemented by open source web servers as well as Microsoft Internet Information Server (IIS). An open platform can consist of software components or modules that are either proprietary or open source or both. It can also exist as a part of a closed platform: CGI is an open platform, while many servers that implement CGI also have other proprietary parts that are not part of the open platform. An open platform implies that the vendor allows, and perhaps supports, the ability to do this. Using an open platform, a developer could add features or functionality that the platform vendor had not completed or had not conceived of. An open platform allows the developer to change existing functionality, as the specifications are publicly available open standards. A service-oriented architecture allows applications, running as services, to be accessed in a distributed computing environment, such as between multiple systems or across the Internet. A major focus of Web services is to make functional building blocks accessible over standard Internet protocols that are independent from platforms and programming languages. An open SOA platform would allow anyone to access and interact with these building blocks.
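CGI is a concrete example of such an open interface: the specification only says that the server hands the request to an external program through environment variables and reads the response (headers, blank line, body) from its standard output. A minimal sketch in Python; the `name` parameter and greeting are hypothetical, not part of the specification:

```python
#!/usr/bin/env python3
# Minimal CGI program. The "open platform" here is just the documented contract:
# request data arrives in environment variables (QUERY_STRING, REQUEST_METHOD, ...)
# and the response is written to stdout. Any third-party program honouring this
# contract can plug into any CGI-capable server without source changes to either.
import os
from urllib.parse import parse_qs

def handle(environ):
    """Build a complete CGI response string from a request environment."""
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    body = f"Hello, {name}!\n"
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    # The web server sets os.environ before invoking the script.
    print(handle(os.environ), end="")
```

Because only the interface is specified, the same script runs unmodified under an open source server such as Apache httpd or under Microsoft IIS, which is exactly the multiple-implementation property described above.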
A 2008 Harvard Business School working paper, titled "Opening Platforms: How, When and Why?", differentiated a platform's openness in four aspects and gave example platforms.

See also

Application programming interface
Open standard
Open architecture
Service-oriented architecture
Open platform
https://en.wikipedia.org/wiki/Department%20of%20Computer%20Science%2C%20University%20of%20Bristol
The Department of Computer Science of the University of Bristol is the computer science department of the University of Bristol and is based in the Merchant Venturers building on Woodland Road, close to Bristol city centre. The department is home to 145 academic staff, researchers, and PhD students.

Research

Research in the department is organised around 10 research groups, which focus on cryptography, algorithms, human–computer interaction (HCI), computer vision, artificial intelligence (AI), verification, computational neuroscience, cybersecurity, robotics, high-performance computing, and programming languages.

History

The Department of Computer Science was formally established around 1984. Its heads of department include:

Professor Mike Rogers (1984–1995)
Professor David May (1995–2006)
Professor Nigel Smart (2006–2008)
Professor Nishan Canagarajah (2008–?)
Dr Neill Campbell (?–2011)
Dr Ian Holyer (2011–?)
Professor Andrew Calway (?–2016)
Professor Seth Bullock (2016–2020)
Professor Christian Allen (2020–2021)
Dr Aisling O'Kane (2021–)

Notable faculty members

The department employs fourteen professors:

Professor Awais Rashid
Professor Dave Cliff
Professor Peter Flach
Professor Majid Mirmehdi
Professor Seth Bullock
Professor Kerstin Eder
Professor Walterio Mayol-Cuevas
Professor Simon McIntosh-Smith
Professor Andrew Calway
Professor Kirsten Cater
Professor Ian Nabney
Professor Chris Preist
Professor Bogdan Warinschi
Professor Dima Damen
Department of Computer Science, University of Bristol
https://en.wikipedia.org/wiki/KOI-81
KOI-81 is an eclipsing binary star in the constellation of Cygnus. The primary star is a late B-type or early A-type main-sequence star with a temperature of . It lies in the field of view of the Kepler Mission and was determined to have an object in orbit around it which is smaller and hotter than the main star.

KOI-81b

KOI-81b is a hot compact object orbiting KOI-81. It was discovered in 2010 by the Kepler Mission and came to attention because of its small size and high temperature of . The orbit of KOI-81b around the main star takes 23.8776 days to complete. Analysis of relativistic effects in the Kepler light curve suggests that it is a low-mass white dwarf of approximately 0.3 solar masses, produced by a previous stage of mass transfer during the object's giant phase.

See also

KOI-74, a similar system also discovered by the Kepler Mission
Kepler Object of Interest, stars observed to have transits by the Kepler Mission
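Given the 23.8776-day period and the roughly 0.3 solar-mass companion quoted above, Kepler's third law gives a rough orbital separation. The primary's mass is not stated in this excerpt; 2.7 solar masses is an assumed, illustrative value for a late B/early A main-sequence star:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

P = 23.8776 * 86400.0          # orbital period in seconds (from the text)
M_total = (2.7 + 0.3) * M_SUN  # primary mass is an assumption; companion ~0.3 (text)

# Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
a = (G * M_total * P**2 / (4 * math.pi**2)) ** (1 / 3)
a_au = a / AU

print(round(a_au, 2))  # semi-major axis in AU
```

Under these assumptions the separation comes out near a quarter of an astronomical unit, a tight orbit consistent with the past mass-transfer phase the text describes.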
KOI-81
https://en.wikipedia.org/wiki/NGC%205665
NGC 5665 is a spiral galaxy in the northern constellation of Boötes. It was discovered on January 30, 1784 by German-British astronomer William Herschel. This galaxy is located at a distance of , and is receding with a heliocentric radial velocity of . It is cataloged in Halton Arp's Atlas of Peculiar Galaxies as object number 49. The morphological classification of NGC 5665 is unclear and differs by author. In the De Vaucouleurs system it was classified as , which indicates a weakly-barred spiral galaxy (SAB) with a transitional inner ring structure (rs), loosely wound spiral arms (c), and suspected peculiarities (pec?). The galactic plane is inclined at an angle of to the plane of the sky, with the major axis aligned along a position angle of 145°. Evidence suggests that NGC 5665 underwent a gravitational interaction with another galaxy some 500 million years ago, swallowing a smaller companion. It is somewhat asymmetrical in appearance, retaining a single main spiral arm and the remains of several others. The galaxy is rich in dust and gas with a small bar at the center. There are numerous sites of star formation in the arm that match the age of the interaction. The spectrum of the core is a blend between a LINER and an H II region.
NGC 5665
https://en.wikipedia.org/wiki/Benzalkonium%20chloride
Benzalkonium chloride (BZK, BKC, BAK, BAC), also known as alkyldimethylbenzylammonium chloride (ADBAC) and by the trade name Zephiran, is a type of cationic surfactant. It is an organic salt classified as a quaternary ammonium compound. ADBACs have three main categories of use: as a biocide, a cationic surfactant, and a phase transfer agent. ADBACs are a mixture of alkylbenzyldimethylammonium chlorides, in which the alkyl group has various even-numbered alkyl chain lengths.

Solubility and physical properties

Depending on purity, benzalkonium chloride ranges from colourless to a pale yellow (impure). Benzalkonium chloride is readily soluble in ethanol and acetone. Dissolution in water is ready, upon agitation. Aqueous solutions should be neutral to slightly alkaline. Solutions foam when shaken. Concentrated solutions have a bitter taste and a faint almond-like odour. Standard concentrates are manufactured as 50% and 80% w/w solutions, and sold under trade names such as BC50, BC80, BAC50, BAC80, etc. The 50% solution is purely aqueous, while more concentrated solutions require incorporation of rheology modifiers (alcohols, polyethylene glycols, etc.) to prevent increases in viscosity or gel formation under low temperature conditions.

Cationic surfactant

Benzalkonium chloride possesses surfactant properties, dissolving the lipid phase of the tear film and increasing drug penetration, making it a useful excipient, but at the risk of causing damage to the surface of the eye. Other surfactant uses include:

Laundry detergents and treatments.
Softeners for textiles.

Phase transfer agent

Benzalkonium chloride is a mainstay of phase-transfer catalysis, an important technology in the synthesis of organic compounds, including drugs.

Bioactive agents

Especially for its antimicrobial activity, benzalkonium chloride is an active ingredient in many consumer products:

Pharmaceutical products such as eye, ear and nasal drops or sprays, as a preservative.
Personal care products such as hand sanitizers, wet wipes, shampoos, soaps, deodorants and cosmetics.
Skin antiseptics and wound wash sprays, such as Bactine.
Throat lozenges and mouthwashes, as a biocide.
Spermicidal creams.
Cleaners for floor and hard surfaces as a disinfectant, such as Lysol and Dettol antibacterial spray and wipes.
Algaecides for clearing of algae, moss, lichens from paths, roof tiles, swimming pools, masonry, etc.

Benzalkonium chloride is also used in many non-consumer processes and products, including as an active ingredient in surgical disinfection. A comprehensive list of uses includes industrial applications. During the course of the COVID-19 pandemic, from time to time there have been shortages of hand cleaner containing ethanol or isopropanol as active ingredients. The FDA has stated that benzalkonium chloride is eligible as an alternative for use in the formulation of healthcare personnel hand rubs. However, in reference to the FDA rule, the CDC states that it does not have a recommended alternative to ethanol or isopropanol as active ingredients, and adds that "available evidence indicates benzalkonium chloride has less reliable activity against certain bacteria and viruses than either of the alcohols." In November 2020 the Journal of Hospital Infection published a study on benzalkonium chloride formulations; it was found that laboratory and commercial disinfectants with as little as 0.13% benzalkonium chloride inactivated the SARS-CoV-2 virus within 15 seconds of contact, even in the presence of a soil load or hard water. This resulted in a growing consensus that BZK sanitizers are just as effective as alcohol-based sanitizers despite the CDC guidelines. As a hand sanitizer, use of BZK may be advantageous over ethanol in some situations because it has significantly more residual antibacterial action on the skin after initial application.
Benzalkonium chloride has demonstrated persistent antimicrobial activity for up to four hours after contact, whereas ethanol-based sanitizers demonstrate skin protection for only 10 minutes post-application.

Medicine
Benzalkonium chloride is a frequently used preservative in eye drops; typical concentrations range from 0.004% to 0.01%. Stronger concentrations can be caustic and cause irreversible damage to the corneal endothelium. Avoiding the use of benzalkonium chloride solutions while contact lenses are in place is discussed in the literature. Due to its antimicrobial activity when applied to skin, some topical medications for acne vulgaris have benzalkonium chloride added to increase the product's efficacy or shelf-life. Benzalkonium chloride has also been shown to be a spermicide. In Russia and China, it is used as a contraceptive: tablets are inserted vaginally, or a gel is applied, resulting in local spermicidal contraception. It is not a wholly reliable method and can cause irritation.

Beekeeping
This chemical is used in beekeeping for the treatment of foulbrood diseases of the brood.

Adverse effects
Although historically benzalkonium chloride has been ubiquitous as a preservative in ophthalmic preparations, its ocular toxicity and irritant properties, in conjunction with consumer demand, have led pharmaceutical companies to increase production of preservative-free preparations or to replace benzalkonium chloride with less harmful preservatives. Many mass-marketed inhaler and nasal spray formulations contain benzalkonium chloride as a preservative, despite substantial evidence that it can adversely affect ciliary motion, mucociliary clearance, nasal mucosal histology, human neutrophil function, and leukocyte response to local inflammation.
Although some studies have found no correlation between the use of benzalkonium chloride in concentrations at or below 0.1% in nasal sprays and drug-induced rhinitis, others have recommended that benzalkonium chloride in nasal sprays be avoided. In the United States, nasal steroid preparations free of benzalkonium chloride include budesonide, triamcinolone acetonide, dexamethasone, and the Beconase and Vancenase aerosol inhalers. Benzalkonium chloride is an irritant to middle ear tissues at typically used concentrations, and inner ear toxicity has been demonstrated. Occupational exposure to benzalkonium chloride has been linked to the development of asthma. In 2011, a large clinical trial designed to evaluate the efficacy of hand sanitizers based on different active ingredients in preventing virus transmission among schoolchildren was redesigned to exclude sanitizers based on benzalkonium chloride, due to safety concerns. Benzalkonium chloride has been in common use as a pharmaceutical preservative and antimicrobial since the 1940s. While early studies confirmed its corrosive and irritant properties, investigations into the adverse effects of, and disease states linked to, benzalkonium chloride have only surfaced during the past 30 years.

Toxicology
RTECS lists the following acute toxicity data: Benzalkonium chloride is a human skin and severe eye irritant. It is a respiratory toxicant, immunotoxicant, gastrointestinal toxicant, and neurotoxicant. Benzalkonium chloride formulations for consumer use are dilute solutions. Concentrated solutions are toxic to humans, causing corrosion/irritation to the skin and mucosa, and death if taken internally in sufficient volumes. 0.1% is the maximum concentration of benzalkonium chloride that does not produce primary irritation on intact skin or act as a sensitizer. Poisoning by benzalkonium chloride is recognised in the literature.
A 2014 case study detailing the fatal ingestion of up to 8.1 oz (240 ml) of 10% benzalkonium chloride in a 78-year-old male also includes a summary of the published case reports of benzalkonium chloride ingestion. While the majority of cases were caused by confusion about the contents of containers, one case cites incorrect pharmacy dilution of benzalkonium chloride as the cause of poisoning of two infants. In 2018 a Japanese nurse was arrested and admitted to having murdered approximately 20 patients at a hospital in Yokohama by injecting benzalkonium chloride into their intravenous drip bags. Benzalkonium chloride poisoning of domestic pets has been recognised as a result of direct contact with surfaces cleaned with disinfectants using benzalkonium chloride as an active ingredient.

Biological activity
The antimicrobial activity is dependent on the alkyl chain length: yeasts and fungi are most affected by C12, gram-positive bacteria by C14, and gram-negative bacteria by C16. The greatest biocidal activity is associated with the C12 dodecyl and C14 myristyl derivatives. The mechanism of bactericidal/microbicidal action is thought to be disruption of intermolecular interactions. This can cause dissociation of cellular membrane lipid bilayers, which compromises cellular permeability controls and induces leakage of cellular contents. Other biomolecular complexes within the bacterial cell can also undergo dissociation. Enzymes, which finely control a wide range of respiratory and metabolic cellular activities, are particularly susceptible to deactivation: critical intermolecular interactions and tertiary structures in such highly specific biochemical systems can be readily disrupted by cationic surfactants. Benzalkonium chloride solutions are fast-acting biocidal agents with a moderately long duration of action. They are active against bacteria and some viruses, fungi, and protozoa. Bacterial spores are considered to be resistant.
Solutions are bacteriostatic or bactericidal according to their concentration. Gram-positive bacteria are generally more susceptible than gram-negative bacteria. Activity depends on the surfactant concentration and also on the bacterial concentration (inoculum) at the moment of treatment. Activity is not greatly affected by pH, but increases substantially at higher temperatures and with prolonged exposure times. In a 1998 study using the FDA protocol, a non-alcohol sanitizer with benzalkonium chloride as the active ingredient met the FDA performance standards, while Purell, a popular alcohol-based sanitizer, did not. The study, which was undertaken and reported by a leading US developer, manufacturer and marketer of topical antimicrobial pharmaceuticals based on quaternary ammonium compounds, found that their own benzalkonium chloride-based sanitizer performed better than alcohol-based hand sanitizer after repeated use. Newer formulations blending benzalkonium with various quaternary ammonium derivatives can extend the biocidal spectrum and enhance the efficacy of benzalkonium-based disinfection products. Formulation techniques have been used to great effect in enhancing the virucidal activity of quaternary ammonium-based disinfectants such as Virucide 100 against typical healthcare infection hazards such as hepatitis and HIV. The use of appropriate excipients can also greatly enhance the spectrum, performance and detergency, and prevent deactivation under use conditions. Formulation can also help minimise deactivation of benzalkonium solutions in the presence of organic and inorganic contamination. However, recent studies have demonstrated the capacity of environmental microorganisms to develop reduced susceptibility to benzalkonium chloride by employing strategies such as modifying bacterial membranes, increasing pump activity, and reducing the expression of certain porins.
Degradation
Benzalkonium chloride degradation follows consecutive debenzylation, dealkylation, and demethylation steps, producing benzyl chloride, an alkyl dimethyl amine, dimethylamine, a long-chain alkane, and ammonia. The intermediates and the major and minor products can then be broken down into CO2, H2O, NH3, and Cl−. The first step in the biodegradation of BAC is the fission, or splitting, of the alkyl chain from the quaternary nitrogen, as shown in the diagram. This is done by abstracting a hydrogen from the alkyl chain using a hydroxyl radical, leading to a carbon-centered radical. This results in dimethylbenzylamine as the first intermediate and dodecanal as the major product. From here, dimethylbenzylamine can be oxidized to benzoic acid using the Fenton process. The amine group in dimethylbenzylamine can be cleaved to form a benzyl that can be further oxidized to benzoic acid. Benzoic acid undergoes hydroxylation (addition of a hydroxyl group) to form p-hydroxybenzoic acid. Dimethylbenzylamine can then be converted into ammonia by performing demethylation twice, which removes both methyl groups, followed by debenzylation, removing the benzyl group using hydrogenation. The diagram represents suggested pathways for the biodegradation of BAC for both the hydrophobic and the hydrophilic regions of the surfactant. Since stearalkonium chloride is a type of BAC, its biodegradation should proceed in the same manner.

Regulation
Benzalkonium chloride is classed as a Category III antiseptic active ingredient by the United States Food and Drug Administration (FDA). Ingredients are categorized as Category III when "available data are insufficient to classify as safe and effective, and further testing is required". In September 2016, the FDA announced a ban on nineteen ingredients in consumer antibacterial soaps, citing a lack of evidence for safety and effectiveness.
A ban on three additional ingredients, including benzalkonium chloride, was deferred at that time to allow ongoing studies to be completed. Benzalkonium chloride was deferred from further rulemaking in the 2019 FDA Final Rule on the safety and effectiveness of consumer hand sanitizers, "to allow for the ongoing study and submission of additional safety and effectiveness data necessary to make a determination" on whether it met these criteria for use in OTC hand sanitizers, but the agency indicated it did not intend to take action to remove benzalkonium chloride-based hand sanitizers from the market. There is acknowledgement that more data are required on its safety, efficacy, and effectiveness, especially in relation to:
Human pharmacokinetic studies, including information on its metabolites
Studies on animal absorption, distribution, metabolism, and excretion
Data to help define the effect of formulation on dermal absorption
Carcinogenicity
Studies on developmental and reproductive toxicology
Potential hormonal effects
Assessment of the potential for development of bacterial resistance
Risks of using it as a contraceptive method
See also
– an alternative preservative for contact lens solutions

References

Further reading
Thorup I: Evaluation of health hazards by exposure to quaternary ammonium compounds, The Institute of Food Safety and Toxicology, Danish Veterinary and Food Administration

External links
International Programme on Chemical Safety, International Chemical Safety Card (ICSC) - Benzalkonium Chloride
National Institute for Occupational Safety and Health (NIOSH), International Chemical Safety Card (ICSC) - Benzalkonium Chloride
International Programme on Chemical Safety, Poisons Information Monograph (PIM) - Benzalkonium Chloride
Haz-Map Category Details - Benzalkonium Chloride
Recognition and Management of Pesticide Poisonings, United States Environmental Protection Agency, Office of Pesticide Programs, Sixth Edition, 2013
CDC Healthcare Infection Control Practices Advisory Committee (HICPAC), Guideline for Disinfection and Sterilization in Healthcare Facilities, 2008
Santa Cruz Biotechnology, Inc. MSDS
Spectrum Labs "Clear Bath" Algae Inhibitor MSDS
Nile Chemicals MSDS
TCI America MSDS
Sciencelab.com, Inc. MSDS
Nasal Saline Sprays - The Additives May Be the Problem

Algaecides Antiseptics Benzyl compounds Cationic surfactants Chlorides Disinfectants Quaternary ammonium compounds
https://en.wikipedia.org/wiki/Aluminylene
Aluminylenes are a sub-class of aluminium(I) compounds that feature singly coordinated aluminium atoms with a lone pair of electrons. As aluminylenes exhibit two unoccupied orbitals, they are not strictly aluminium analogues of carbenes until stabilized by a Lewis base to form aluminium(I) nucleophiles. The lone pair and two empty orbitals on the aluminium allow for ambiphilic bonding, in which the aluminylene can act as both an electrophile and a nucleophile. Aluminylenes have also been reported under the names alumylenes and alanediyls. The +1 oxidation state is less stable for aluminium than for the heavier group 13 elements, but the lower stability and higher reactivity of aluminium(I) compounds make for interesting chemistry. The first aluminium(I) compound to be isolated was Dohmeier's (AlCp*)4, which exists as a tetrameric solid but dissociates in solution to the monomer. This was followed by Roesky's synthesis of a doubly coordinated aluminium(I) and nitrogen heterocycle analogous to an aluminium Arduengo carbene. Despite some rich aluminium(I) chemistry following those discoveries, it was not until 2020 that a free (not Lewis base stabilized) aluminylene was synthesized.

Free aluminylenes
Simple aluminylenes have been studied but are highly reactive and only exist in the gas phase under extreme conditions. The first free aluminylene came from Tuononen and Power, who used bulky terphenyl ligands to stabilize the product of reduction of the aluminium(III) diiodide. The isolated arylaluminylene formed thermally stable yellow-orange crystals that were characterized via X-ray crystallography and NMR spectroscopy. The aluminylene demonstrated more reactivity than its gallium analogue and quickly formed an aluminium hydride upon reaction with hydrogen gas. Soon after, Liu and coworkers as well as Hinz and coworkers separately synthesized free nitrogen-bound aluminylenes that were stabilized with the use of bulky carbazolyl ligands.
While also thermally stable, the N-aluminylene was extremely sensitive to air and water. Part of the stability of the N-aluminylene is based on slight pi-donation from the nitrogen atom, facilitated by the planar nature of the molecule. This conclusion is supported by electronic structure calculations and a slightly shorter N-Al bond distance than would be expected for an N-Al single bond. Both free aluminylenes largely depend on the steric bulk of their ligands for kinetic protection, a common motif in stabilizing reactive main group complexes.

Reactivity
The ambiphilic nature of aluminylenes, as well as the reactivity of aluminium(I) complexes more generally, allows aluminylenes to participate in a diverse range of reactions. Natural Bond Orbital (NBO) calculations showed that the frontier orbitals of these aluminylenes matched expectations, with the aluminium lone pair as the HOMO and a largely aluminium p-orbital based LUMO.

Redox reactions
Power's aluminylene was shown to react with organic azides to create aluminium(III) imides. In a reaction with ArMe6N3, the terphenyl aluminylene was able to form an Al-N triple bond, a conclusion supported by the shortest reported Al-N bond distances (1.625 Å). This aluminylene also reacted with less bulky azides, but the lack of steric protection meant that a second equivalent of azide reacted to give a multiply coordinated aluminium(III) compound. The N-aluminylene reported by Liu and coworkers was shown to undergo an oxidative insertion reaction when mixed with IDippCuCl (IDipp = 1,3-bis(2,6-diisopropylphenyl)imidazol-2-ylidene) to form a terminal copper-alumanyl complex. Liu also demonstrated that the N-aluminylene could act as an important precursor to organoaluminium compounds. In these reactions, the aluminylene performs cycloaddition with unsaturated hydrocarbons to create aluminium heterocycles.
Subsequently, the Al-N bond can be cleaved using a nucleophilic salt to free the newly formed organoaluminium compound. In 2023, Liu and coworkers published further examples of the reactivity of their N-aluminylene as they reacted the compound with various boron-based Lewis acids. Upon reaction with Ph2BOBPh2, the aluminylene formed a tricoordinate species featuring new aluminium-boron and aluminium-oxygen bonds. This free alumaborane was characterized via 11B NMR, which showed two three-coordinate boron atoms, an observation further supported by X-ray crystallography data. The formation of Lewis adducts was also observed when the aluminylene was mixed with strong Lewis acids such as BCF (tris(pentafluorophenyl)borane) and Piers' borane (HB(C6F5)2).

Lewis base stabilized aluminylenes
In addition to free aluminylenes, there have been several attempts to further stabilize these reactive species through coordination of another Lewis base. Transient versions of these compounds have been reported en route to other products via coordination with N-heterocyclic carbenes (NHCs) and amidophosphines. However, in 2022 Liu and coworkers were able to form an adduct between their N-aluminylene and an NHC, a combination that demonstrated increased reactivity compared to the free aluminylene. They explained this with Density Functional Theory calculations at the M06-2X/def2-SVP level, which showed that NHC coordination narrowed the HOMO-LUMO gap by raising the energy of the aluminium lone pair (the HOMO). This aluminylene-NHC adduct was then shown to activate otherwise unreactive arene species to initiate ring expansions.

Aluminylene coordination chemistry
Aluminylenes have also demonstrated the ability to act as ligands and coordinate to transition metal centers. Tokitoh demonstrated multiple methods for using dialumene starting materials to create arylaluminylene platinum complexes.
NBO calculations showed that the Al-Pt bond has a large degree of electrostatic interaction, supplemented by sigma donation from the aluminium and pi-backbonding from the platinum. The N-aluminylene reported by Liu also demonstrated an ability to coordinate to metal atoms. UV irradiation of tungsten hexacarbonyl in the presence of the N-aluminylene created an aluminylene-W(CO)5 compound. Furthermore, treatment of the N-aluminylene with W(CO)6 and Cr(CO)6 in coordinating solvents such as THF and DMAP also formed the aluminylene-transition metal complexes. In these cases, the aluminylene was stabilized by having a THF molecule or two DMAP molecules donate their lone pairs into the aluminylene's empty orbitals. Intrinsic Bond Orbital calculations showed a significant degree of pi-backbonding from the aluminylene in the tungsten and chromium complexes, which added further stabilization.

References

Aluminium(I) compounds Organoaluminium compounds Coordination complexes
https://en.wikipedia.org/wiki/Boletinellus%20monticola
Boletinellus monticola, previously known as Gyrodon monticola, is a bolete fungus in the family Boletinellaceae with a pored hymenium rather than gills. This species can be identified by its ectomycorrhizal association with, and therefore close proximity to, alder trees (Alnus acuminata). B. monticola is most commonly found near the equator, specifically in southern Mexico and stretching into northern South America.

Taxonomy
B. monticola was originally placed in the genus Gyrodon by Rolf Singer in 1957. However, it was reassigned to the genus Boletinellus due to its closer genetic relation. This reassignment was reinforced by R. Watling in 1997, who analyzed Rolf Singer's observations in Argentina and concluded the species to be more closely related to Boletinellus. This change in taxonomy also moved Gyrodon exiguus and Gyrodon rompelii to Boletinellus.

Description
Boletinellus monticola has a yellow-brown cap with a yellow or orange fertile layer. The stalk extends a few centimeters from the ground and is commonly brown. The fertile layer is made up of large, yellow pores and tubes. B. monticola is also known to produce brown sclerotia in soil, enabling the fungus to survive extreme environmental conditions. The flesh of the bolete is soft and often moist or even wet, owing to its favored climate of warm tropical areas. The species also produces highly differentiated rhizomorphs with brown dolipore hyphae. B. monticola bruises blue, then fades to reddish brown and finally to dark brown.

Edibility
Boletinellus monticola is considered likely edible; however, there is no record of it being eaten. Boletes are generally known to be edible and reasonably safe for human consumption. Some closely related species, such as B. merulioides, have been described as tasting "acidic and unpleasant" while offering very little nutritional value.
Habitat and distribution
This species is a terrestrial fungus which grows in topsoil. B. monticola grows in warm tropical climates and at high elevations ranging from 1,000 m to 3,800 m above sea level. Due to its ectomycorrhizal association with alder trees (Alnus acuminata), the fungus is restricted to the range where these trees grow. Since most alder trees grow north of the equator and B. monticola is only found in warm tropical climates, the fungus is generally rare, owing to this small geographical region. More specifically, Alnus acuminata is found from the highlands of Mexico to the Andes mountains. In North America, the fungus can be found in southern Mexico. Stretching into South America, B. monticola has been found in Argentina, Bolivia, Colombia, Costa Rica, and Ecuador.

Ectomycorrhizae
Most species in the genus Boletinellus are ectomycorrhizal, and B. monticola is no exception. The species has an ectomycorrhizal relationship with alder trees (Alnus acuminata). According to a 2021 study by Pablo Alvarado, this relationship could have evolved independently from a common ancestor of the family Paxillaceae nearly 98 million years ago. Using sulpho-vanillin, the root and Hartig net stain reddish and bleach with NH4OH and lactic acid, while no reaction occurs on exposure to 15% KOH, Melzer's reagent or 70% ethanol. As a result of ITS PCR/RFLP analysis, B. monticola has been identified molecularly and morphologically as a symbiont of A. acuminata in native Argentinean forests.

Genome
Using the Internal Transcribed Spacer (ITS), Alejandra Becerra et al. identified an 895-base-pair sequence in 2003. The genome of B. monticola is what allowed the reassignment of the species from Gyrodon to Boletinellus. Work done at the University of California, Berkeley using PCR (polymerase chain reaction) found that the primer and codon sequences for specific genes did not align with other species in the genus Gyrodon.
This study found the first atp6 (mitochondrial locus) and cox3 sequences in the order Boletales, allowing for the comparison of certain genes and changing the distribution of various families under the order. Although the family Boletinellaceae is in the order Boletales, evidence suggests that it is more closely related to the family Sclerodermataceae, though there appear to be some species exceptions.

References

Boletales Fungi of Mexico Fungi of South America Fungus species
https://en.wikipedia.org/wiki/Sigma%20Tauri
Sigma Tauri (σ Tauri) is the Bayer designation for a pair of white-hued stars in the zodiac constellation of Taurus. The system is a visual double star, whose components are designated σ1 Tauri and σ2 Tauri, the latter being the more northerly star. The two are separated by 7.2 arcminutes on the sky and can be readily split with a pair of binoculars. They have apparent visual magnitudes of +5.07 and +4.70, respectively, which means both are visible to the naked eye. Based upon parallax measurements, σ1 Tauri is about 147 light years from the Sun, while σ2 Tauri is 156 light years distant.

σ1 Tauri is a single-lined spectroscopic binary star system with an orbital period of 38.951 days and an eccentricity of 0.15. The visible component is an Am star with a stellar classification of A4m, indicating it is a chemically peculiar A-type star. It is spinning with a projected rotational velocity of 56.5 km/s. The star has 1.9 times the mass of the Sun and is radiating 14.7 times the Sun's luminosity from its photosphere at an effective temperature of 8,470 K. Although it lies in the general direction of the Hyades cluster, based on parallax measurements it has been excluded from the list of candidate members.

σ2 Tauri is a solitary A-type main sequence star with a stellar classification of A5 Vn. The 'n' suffix indicates the lines are "nebulous" due to rapid rotation, and indeed it is spinning with a projected rotational velocity of 128 km/s. The star is an estimated 258 million years old, with 1.7 times the mass of the Sun. It is radiating 22.5 times the Sun's luminosity from its photosphere at an effective temperature of around 8,165 K. The star is considered a member of the Hyades cluster.

In Chinese astronomy, σ2 Tauri is called 附耳 (Pinyin: Fùěr), meaning Whisper, because this star stands alone in the Whisper asterism of the Net mansion (see Chinese constellation).
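The quoted distances follow from the standard trigonometric-parallax relation d[pc] = 1/p[arcsec], with 1 pc ≈ 3.2616 light-years. As a sketch of the arithmetic, the parallax values below are back-computed from the distances in the text, not the measured catalogue figures:

```python
LY_PER_PC = 3.2616  # light-years per parsec

def distance_ly(parallax_mas):
    """Distance in light-years from a parallax given in milliarcseconds:
    d[pc] = 1000 / p[mas], then convert parsecs to light-years."""
    return 1000.0 / parallax_mas * LY_PER_PC

print(round(distance_ly(22.2)))  # 147 ly, matching the sigma-1 Tauri figure
print(round(distance_ly(20.9)))  # 156 ly, matching the sigma-2 Tauri figure
```

Note how a roughly 6% difference in parallax translates directly into the ~9 light-year difference between the two components' quoted distances.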
References

Am stars Binary stars A-type main-sequence stars Hyades (star cluster) Taurus (constellation) Durchmusterung objects
https://en.wikipedia.org/wiki/List%20of%20extant%20megaherbivores
Extant megaherbivores are large megafaunal herbivores that can exceed in weight. They include elephants, rhinos, hippos, and giraffes, and are the largest of the land animals. There are nine extant species of megaherbivores, distributed across Africa and Asia. The term "megaherbivore" was coined in 1988 by Owen-Smith to describe large mammals that perform similar ecological functions, such as habitat defoliation and extensive seed dispersal. Animals in this group are K-selected, meaning they have high life expectancies, slow population growth, large offspring, lengthy pregnancies, and low mortality rates. Slow reproduction enhances their survival chances and, as a result, increases their lifespan. Their large size offers protection from predators, but at the same time it decreases the rate at which they reproduce due to restricted food sources. On average, megaherbivores give birth to a single offspring every 1.3 to 4.5 years, depending on the species, and tend to have long lifespans, with giraffes living 25 years, rhinoceroses and hippopotamuses 40 years, and elephants 60 years. The nine megaherbivore species are split into four distinct families: Elephantidae (2 genera and 3 species), Rhinocerotidae (4 genera and 4 species), Hippopotamidae (1 species), and Giraffidae (1 species). These families are polyphyletic and do not share a recent common ancestor, but were instead grouped due to similarities in ecological niches.

List

References

Herbivores Heaviest or most massive organisms Lists of mammals Megafauna
https://en.wikipedia.org/wiki/Topological%20geometry
Topological geometry deals with incidence structures consisting of a point set and a family of subsets of called lines or circles etc. such that both and carry a topology and all geometric operations like joining points by a line or intersecting lines are continuous. As in the case of topological groups, many deeper results require the point space to be (locally) compact and connected. This generalizes the observation that the line joining two distinct points in the Euclidean plane depends continuously on the pair of points, and that the intersection point of two lines is a continuous function of these lines.

Linear geometries
Linear geometries are incidence structures in which any two distinct points and are joined by a unique line . Such geometries are called topological if depends continuously on the pair with respect to given topologies on the point set and the line set. The dual of a linear geometry is obtained by interchanging the roles of points and lines. A survey of linear topological geometries is given in Chapter 23 of the Handbook of incidence geometry. The most extensively investigated topological linear geometries are those which are also dual topological linear geometries. Such geometries are known as topological projective planes.

History
A systematic study of these planes began in 1954 with a paper by Skornyakov. Earlier, the topological properties of the real plane had been introduced via ordering relations on the affine lines; see, e.g., Hilbert, Coxeter, and O. Wyler. The completeness of the ordering is equivalent to local compactness and implies that the affine lines are homeomorphic to and that the point space is connected. Note that the rational numbers do not suffice to describe our intuitive notions of plane geometry and that some extension of the rational field is necessary. In fact, the equation for a circle has no rational solution.
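The continuity requirement on joining points can be written out explicitly. The symbols below (P for the point set, 𝔏 for the line set) are my own choice of notation, since the original inline formulas were lost in extraction:

```latex
% Joining map of a topological linear geometry, defined off the
% diagonal \Delta = \{(p,p) : p \in P\} and required to be continuous:
\[
  \vee \colon (P \times P) \setminus \Delta \;\longrightarrow\; \mathfrak{L},
  \qquad (p,q) \longmapsto p \vee q ,
\]
% where p \vee q denotes the unique line through the distinct points p, q.
```

For a topological projective plane one additionally requires the dual map, sending a pair of distinct lines to their intersection point, to be continuous.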
Topological projective planes
The approach to the topological properties of projective planes via ordering relations is not possible, however, for the planes coordinatized by the complex numbers, the quaternions or the octonion algebra. The point spaces as well as the line spaces of these classical planes (over the real numbers, the complex numbers, the quaternions, and the octonions) are compact manifolds of dimension .

Topological dimension
The notion of the dimension of a topological space plays a prominent rôle in the study of topological, in particular of compact connected, planes. For a normal space , the dimension can be characterized as follows: If denotes the -sphere, then if, and only if, for every closed subspace each continuous map has a continuous extension . For details and other definitions of a dimension see and the references given there, in particular Engelking or Fedorchuk.

2-dimensional planes
The lines of a compact topological plane with a 2-dimensional point space form a family of curves homeomorphic to a circle, and this fact characterizes these planes among the topological projective planes. Equivalently, the point space is a surface. Early examples not isomorphic to the classical real plane have been given by Hilbert and Moulton. The continuity properties of these examples were not considered explicitly at the time; they may have been taken for granted. Hilbert's construction can be modified to obtain uncountably many pairwise non-isomorphic -dimensional compact planes. The traditional way to distinguish from the other -dimensional planes is by the validity of Desargues's theorem or the theorem of Pappos (see, e.g., Pickert for a discussion of these two configuration theorems). The latter is known to imply the former (Hessenberg). The theorem of Desargues expresses a kind of homogeneity of the plane.
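The covering-dimension characterization quoted above can be restated with the symbols restored; the notation (X for the normal space, S^n for the n-sphere) is my own reconstruction of the lost markup:

```latex
\[
  \dim X \le n
  \;\iff\;
  \text{every continuous map } f \colon A \to \mathbb{S}^{n},
  \ A \subseteq X \text{ closed,}
\]
\[
  \text{admits a continuous extension } \bar f \colon X \to \mathbb{S}^{n}.
\]
```

This extension property is the form of the dimension concept most convenient for the compact connected planes discussed here.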
In general, it holds in a projective plane if, and only if, the plane can be coordinatized by a (not necessarily commutative) field; hence it implies that the group of automorphisms is transitive on the set of quadrangles ( points no of which are collinear). In the present setting, a much weaker homogeneity condition characterizes :

Theorem. If the automorphism group of a -dimensional compact plane is transitive on the point set (or the line set), then has a compact subgroup which is even transitive on the set of flags (= incident point-line pairs), and is classical.

The automorphism group of a -dimensional compact plane , taken with the topology of uniform convergence on the point space, is a locally compact group of dimension at most , in fact even a Lie group. All -dimensional planes such that can be described explicitly; those with are exactly the Moulton planes, and the classical plane is the only -dimensional plane with ; see also.

Compact connected planes
The results on -dimensional planes have been extended to compact planes of dimension . This is possible due to the following basic theorem:

Topology of compact planes. If the dimension of the point space of a compact connected projective plane is finite, then with . Moreover, each line is a homotopy sphere of dimension , see or.

Special aspects of 4-dimensional planes are treated in, more recent results can be found in. The lines of a -dimensional compact plane are homeomorphic to the -sphere; in the cases the lines are not known to be manifolds, but in all examples which have been found so far the lines are spheres. A subplane of a projective plane is said to be a Baer subplane if each point of is incident with a line of and each line of contains a point of . A closed subplane is a Baer subplane of a compact connected plane if, and only if, the point space of and a line of have the same dimension. Hence the lines of an 8-dimensional plane are homeomorphic to a sphere if has a closed Baer subplane.
Homogeneous planes. If is a compact connected projective plane and if is transitive on the point set of , then has a flag-transitive compact subgroup and is classical, see or. In fact, is an elliptic motion group. Let be a compact plane of dimension , and write . If , then is classical, and is a simple Lie group of dimension respectively. All planes with are known explicitly. The planes with are exactly the projective closures of the affine planes coordinatized by a so-called mutation of the octonion algebra , where the new multiplication is defined as follows: choose a real number with and put . Vast families of planes with a group of large dimension have been discovered systematically starting from assumptions about their automorphism groups, see, e.g.,. Many of them are projective closures of translation planes (affine planes admitting a sharply transitive group of automorphisms mapping each line to a parallel), cf.; see also for more recent results in the case and for . Compact projective spaces Subplanes of projective spaces of geometrical dimension at least 3 are necessarily Desarguesian, see §1 or §16 or. Therefore, all compact connected projective spaces can be coordinatized by the real or complex numbers or the quaternion field. Stable planes The classical non-euclidean hyperbolic plane can be represented by the intersections of the straight lines in the real plane with an open circular disk. More generally, open (convex) parts of the classical affine planes are typical stable planes. A survey of these geometries can be found in, for the -dimensional case see also. Precisely, a stable plane is a topological linear geometry such that is a locally compact space of positive finite dimension, each line is a closed subset of , and is a Hausdorff space, the set is an open subspace ( stability), the map is continuous. Note that stability excludes geometries like the -dimensional affine space over or . 
A stable plane is a projective plane if, and only if, is compact. As in the case of projective planes, line pencils are compact and homotopy equivalent to a sphere of dimension , and with , see or. Moreover, the point space is locally contractible. Compact groups of (proper) stable planes are rather small. Let denote a maximal compact subgroup of the automorphism group of the classical -dimensional projective plane . Then the following theorem holds: If a -dimensional stable plane admits a compact group of automorphisms such that , then , see. Flag-homogeneous stable planes. Let be a stable plane. If the automorphism group is flag-transitive, then is a classical projective or affine plane, or is isomorphic to the interior of the absolute sphere of the hyperbolic polarity of a classical plane; see. In contrast to the projective case, there is an abundance of point-homogeneous stable planes, among them vast classes of translation planes, see and. Symmetric planes Affine translation planes have the following property: There exists a point transitive closed subgroup of the automorphism group which contains a unique reflection at some and hence at each point. More generally, a symmetric plane is a stable plane satisfying the aforementioned condition; see, cf. for a survey of these geometries. By Corollary 5.5, the group is a Lie group and the point space is a manifold. It follows that is a symmetric space. By means of the Lie theory of symmetric spaces, all symmetric planes with a point set of dimension or have been classified. They are either translation planes or they are determined by a Hermitian form. An easy example is the real hyperbolic plane. Circle geometries Classical models are given by the plane sections of a quadratic surface in real projective -space; if is a sphere, the geometry is called a Möbius plane. The plane sections of a ruled surface (one-sheeted hyperboloid) yield the classical Minkowski plane, cf. for generalizations. 
If is an elliptic cone without its vertex, the geometry is called a Laguerre plane. Collectively these planes are sometimes referred to as Benz planes. A topological Benz plane is classical, if each point has a neighbourhood which is isomorphic to some open piece of the corresponding classical Benz plane. Möbius planes Möbius planes consist of a family of circles, which are topological 1-spheres, on the -sphere such that for each point the derived structure is a topological affine plane. In particular, any 3 distinct points are joined by a unique circle. The circle space is then homeomorphic to real projective -space with one point deleted. A large class of examples is given by the plane sections of an egg-like surface in real -space. Homogeneous Möbius planes If the automorphism group of a Möbius plane is transitive on the point set or on the set of circles, or if , then is classical and , see. In contrast to compact projective planes there are no topological Möbius planes with circles of dimension , in particular no compact Möbius planes with a -dimensional point space. All 2-dimensional Möbius planes such that can be described explicitly. Laguerre planes The classical model of a Laguerre plane consists of a circular cylindrical surface in real -space as point set and the compact plane sections of as circles. Pairs of points which are not joined by a circle are called parallel. Let denote a class of parallel points. Then is a plane , the circles can be represented in this plane by parabolas of the form . In an analogous way, the classical -dimensional Laguerre plane is related to the geometry of complex quadratic polynomials. In general, the axioms of a locally compact connected Laguerre plane require that the derived planes embed into compact projective planes of finite dimension. A circle not passing through the point of derivation induces an oval in the derived projective plane. By or, circles are homeomorphic to spheres of dimension or . 
Hence the point space of a locally compact connected Laguerre plane is homeomorphic to the cylinder or it is a -dimensional manifold, cf. A large class of -dimensional examples, called ovoidal Laguerre planes, is given by the plane sections of a cylinder in real 3-space whose base is an oval in . The automorphism group of a -dimensional Laguerre plane () is a Lie group with respect to the topology of uniform convergence on compact subsets of the point space; furthermore, this group has dimension at most . All automorphisms of a Laguerre plane which fix each parallel class form a normal subgroup, the kernel of the full automorphism group. The -dimensional Laguerre planes with are exactly the ovoidal planes over proper skew parabolae. The classical -dimensional Laguerre planes are the only ones such that , see, cf. also. Homogeneous Laguerre planes If the automorphism group of a -dimensional Laguerre plane is transitive on the set of parallel classes, and if the kernel is transitive on the set of circles, then is classical, see 2.1,2. However, transitivity of the automorphism group on the set of circles does not suffice to characterize the classical model among the -dimensional Laguerre planes. Minkowski planes The classical model of a Minkowski plane has the torus as point space, circles are the graphs of real fractional linear maps on . As with Laguerre planes, the point space of a locally compact connected Minkowski plane is - or -dimensional; the point space is then homeomorphic to a torus or to , see. Homogeneous Minkowski planes If the automorphism group of a Minkowski plane of dimension is flag-transitive, then is classical. The automorphism group of a -dimensional Minkowski plane is a Lie group of dimension at most . All -dimensional Minkowski planes such that can be described explicitly. The classical -dimensional Minkowski plane is the only one with , see. 
https://en.wikipedia.org/wiki/Non-homologous%20end%20joining
Non-homologous end joining (NHEJ) is a pathway that repairs double-strand breaks in DNA. It is called "non-homologous" because the break ends are directly ligated without the need for a homologous template, in contrast to homology directed repair (HDR), which requires a homologous sequence to guide repair. NHEJ is active in both non-dividing and proliferating cells, while HDR is not readily accessible in non-dividing cells. The term "non-homologous end joining" was coined in 1996 by Moore and Haber. NHEJ is typically guided by short homologous DNA sequences called microhomologies. These microhomologies are often present in single-stranded overhangs on the ends of double-strand breaks. When the overhangs are perfectly compatible, NHEJ usually repairs the break accurately. Imprecise repair leading to loss of nucleotides can also occur, but is much more common when the overhangs are not compatible. Inappropriate NHEJ can lead to translocations and telomere fusion, hallmarks of tumor cells. NHEJ is conserved throughout nearly all biological systems and is the predominant double-strand break repair pathway in mammalian cells. In budding yeast (Saccharomyces cerevisiae), however, homologous recombination dominates when the organism is grown under common laboratory conditions. When the NHEJ pathway is inactivated, double-strand breaks can be repaired by a more error-prone pathway called microhomology-mediated end joining (MMEJ). In this pathway, end resection reveals short microhomologies on either side of the break, which are then aligned to guide repair. This contrasts with classical NHEJ, which typically uses microhomologies already exposed in single-stranded overhangs on the DSB ends. Repair by MMEJ therefore leads to deletion of the DNA sequence between the microhomologies. 
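The microhomology-guided joining and the resulting deletion described above can be illustrated with a toy sketch (the function name and sequences are invented for illustration; this is a string-matching caricature, not a biochemical model):

```python
def mmej_join(left, right, min_mh=2):
    """Toy sketch of microhomology-mediated end joining (MMEJ):
    find the longest microhomology shared between the end of the left
    fragment and the right fragment, join through one copy of it, and
    delete the sequence that lay between the two copies."""
    for k in range(min(len(left), len(right)), min_mh - 1, -1):
        mh = left[-k:]                 # candidate microhomology at the break end
        idx = right.find(mh)
        if idx != -1:
            # the sequence right[:idx] between the microhomologies is lost
            return left + right[idx + k:]
    return None                        # no usable microhomology found

# The "TT" lying between the two GCTG microhomologies is deleted:
print(mmej_join("AAAGCTG", "TTGCTGCCA", min_mh=3))  # AAAGCTGCCA
```

Note how the repair product keeps only one copy of the microhomology, which is why MMEJ is intrinsically deletion-prone.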
In bacteria and archaea Many species of bacteria, including Escherichia coli, lack an end joining pathway and thus rely completely on homologous recombination to repair double-strand breaks. NHEJ proteins have been identified in a number of bacteria, including Bacillus subtilis, Mycobacterium tuberculosis, and Mycobacterium smegmatis. Bacteria utilize a remarkably compact version of NHEJ in which all of the required activities are contained in only two proteins: a Ku homodimer and the multifunctional ligase/polymerase/nuclease LigD. In mycobacteria, NHEJ is much more error prone than in yeast, with bases often added to and deleted from the ends of double-strand breaks during repair. Many of the bacteria that possess NHEJ proteins spend a significant portion of their life cycle in a stationary haploid phase, in which a template for recombination is not available. NHEJ may have evolved to help these organisms survive DSBs induced during desiccation. It preferentially uses rNTPs (RNA nucleotides), which may be advantageous in dormant cells. The archaeal NHEJ system in Methanocella paludicola has a homodimeric Ku, but the three functions of LigD are broken up into three single-domain proteins sharing an operon. All three genes retain substantial homology with their LigD counterparts, and the polymerase retains the preference for rNTPs. NHEJ has been lost and acquired multiple times in bacteria and archaea, with a significant amount of horizontal gene transfer shuffling the system around taxa. Corndog and Omega, two related mycobacteriophages of Mycobacterium smegmatis, also encode Ku homologs and exploit the NHEJ pathway to recircularize their genomes during infection. Unlike homologous recombination, which has been studied extensively in bacteria, NHEJ was originally discovered in eukaryotes and was only identified in prokaryotes in the past decade. 
In eukaryotes In contrast to bacteria, NHEJ in eukaryotes utilizes a number of proteins, which participate in the following steps: End binding and tethering In yeast, the Mre11-Rad50-Xrs2 (MRX) complex is recruited to DSBs early and is thought to promote bridging of the DNA ends. The corresponding mammalian complex of Mre11-Rad50-Nbs1 (MRN) is also involved in NHEJ, but it may function at multiple steps in the pathway beyond simply holding the ends in proximity. DNA-PKcs is also thought to participate in end bridging during mammalian NHEJ. Eukaryotic Ku is a heterodimer consisting of Ku70 and Ku80, and forms a complex with DNA-PKcs, which is present in mammals but absent in yeast. Ku is a basket-shaped molecule that slides onto the DNA end and translocates inward. Ku may function as a docking site for other NHEJ proteins, and is known to interact with the DNA ligase IV complex and XLF. End processing End processing involves removal of damaged or mismatched nucleotides by nucleases and resynthesis by DNA polymerases. This step is not necessary if the ends are already compatible and have 3' hydroxyl and 5' phosphate termini. Little is known about the function of nucleases in NHEJ. Artemis is required for opening the hairpins that are formed on DNA ends during V(D)J recombination, a specific type of NHEJ, and may also participate in end trimming during general NHEJ. Mre11 has nuclease activity, but it seems to be involved in homologous recombination, not NHEJ. The X family DNA polymerases Pol λ and Pol μ (Pol4 in yeast) fill gaps during NHEJ. Yeast lacking Pol4 are unable to join 3' overhangs that require gap filling, but remain proficient for gap filling at 5' overhangs. This is because the primer terminus used to initiate DNA synthesis is less stable at 3' overhangs, necessitating a specialized NHEJ polymerase. 
Ligation The DNA ligase IV complex, consisting of the catalytic subunit DNA ligase IV and its cofactor XRCC4 (Dnl4 and Lif1 in yeast), performs the ligation step of repair. XLF, also known as Cernunnos, is homologous to yeast Nej1 and is also required for NHEJ. While the precise role of XLF is unknown, it interacts with the XRCC4/DNA ligase IV complex and likely participates in the ligation step. Recent evidence suggests that XLF promotes re-adenylation of DNA ligase IV after ligation, recharging the ligase and allowing it to catalyze a second ligation. Other In yeast, Sir2 was originally identified as an NHEJ protein, but is now known to be required for NHEJ only because it is required for the transcription of Nej1. NHEJ and heat-labile sites Induction of heat-labile sites (HLS) is a signature of ionizing radiation. The DNA clustered damage sites consist of different types of DNA lesions. Some of these lesions are not prompt DSBs but convert to DSBs after heating. HLS do not convert to DSBs at physiological temperature (37 °C). Also, the interaction of HLS with other lesions and their role in living cells remain elusive. The repair mechanisms of these sites are not fully revealed. NHEJ is the dominant DNA repair pathway throughout the cell cycle, and the DNA-PKcs protein is the critical element at the center of NHEJ. However, using DNA-PKcs knockout cell lines or inhibiting DNA-PKcs does not affect the repair capacity of HLS. Likewise, blocking both the HR and NHEJ repair pathways with the dactolisib (NVP-BEZ235) inhibitor showed that repair of HLS is not dependent on HR and NHEJ. These results indicate that the repair mechanism of HLS is independent of the NHEJ and HR pathways. Regulation The choice between NHEJ and homologous recombination for repair of a double-strand break is regulated at the initial step in recombination, 5' end resection. In this step, the 5' strand of the break is degraded by nucleases to create long 3' single-stranded tails. 
DSBs that have not been resected can be rejoined by NHEJ, but resection of even a few nucleotides strongly inhibits NHEJ and effectively commits the break to repair by recombination. NHEJ is active throughout the cell cycle, but is most important during G1 when no homologous template for recombination is available. This regulation is accomplished by the cyclin-dependent kinase Cdk1 (Cdc28 in yeast), which is turned off in G1 and expressed in S and G2. Cdk1 phosphorylates the nuclease Sae2, allowing resection to initiate. V(D)J recombination NHEJ plays a critical role in V(D)J recombination, the process by which B-cell and T-cell receptor diversity is generated in the vertebrate immune system. In V(D)J recombination, hairpin-capped double-strand breaks are created by the RAG1/RAG2 nuclease, which cleaves the DNA at recombination signal sequences. These hairpins are then opened by the Artemis nuclease and joined by NHEJ. A specialized DNA polymerase called terminal deoxynucleotidyl transferase (TdT), which is only expressed in lymph tissue, adds nontemplated nucleotides to the ends before the break is joined. This process couples "variable" (V), "diversity" (D), and "joining" (J) regions, which when assembled together create the variable region of a B-cell or T-cell receptor gene. Unlike typical cellular NHEJ, in which accurate repair is the most favorable outcome, error-prone repair in V(D)J recombination is beneficial in that it maximizes diversity in the coding sequence of these genes. Patients with mutations in NHEJ genes are unable to produce functional B cells and T cells and suffer from severe combined immunodeficiency (SCID). At telomeres Telomeres are normally protected by a "cap" that prevents them from being recognized as double-strand breaks. Loss of capping proteins causes telomere shortening and inappropriate joining by NHEJ, producing dicentric chromosomes which are then pulled apart during mitosis. 
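The junctional diversity produced during V(D)J recombination described above — imprecise trimming of segment ends plus nontemplated nucleotides added by TdT — can be sketched in a toy simulation (segment sequences and trimming/addition limits are invented for illustration, not real gene segments):

```python
import random

def vdj_join(v, d, j, rng):
    """Toy sketch of coding-joint formation in V(D)J recombination:
    error-prone joining trims a few nucleotides from the joined segment
    ends, and TdT adds random nontemplated nucleotides at each junction."""
    trim = lambda s, a, b: s[a:len(s) - b]   # cut a nt from the left, b from the right
    n_nt = lambda: "".join(rng.choice("ACGT") for _ in range(rng.randint(0, 3)))
    return (trim(v, 0, rng.randint(0, 2)) + n_nt()                    # V, 3' end trimmed
            + trim(d, rng.randint(0, 2), rng.randint(0, 2)) + n_nt()  # D, both ends trimmed
            + trim(j, rng.randint(0, 2), 0))                          # J, 5' end trimmed

# The same three segments yield many distinct junctions - the diversity
# that error-prone joining provides:
joints = {vdj_join("CAGGTT", "GGTAC", "TTCAGG", random.Random(i)) for i in range(50)}
```

Running the same join fifty times produces a set of many different coding joints, mirroring how one V/D/J combination still yields many receptor sequences.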
Paradoxically, some NHEJ proteins are involved in telomere capping. For example, Ku localizes to telomeres and its deletion leads to shortened telomeres. Ku is also required for subtelomeric silencing, the process by which genes located near telomeres are turned off. Consequences of dysfunction Several human syndromes are associated with dysfunctional NHEJ. Hypomorphic mutations in LIG4 and XLF cause LIG4 syndrome and XLF-SCID, respectively. These syndromes share many features including cellular radiosensitivity, microcephaly and severe combined immunodeficiency (SCID) due to defective V(D)J recombination. Loss-of-function mutations in Artemis also cause SCID, but these patients do not show the neurological defects associated with LIG4 or XLF mutations. The difference in severity may be explained by the roles of the mutated proteins. Artemis is a nuclease and is thought to be required only for repair of DSBs with damaged ends, whereas DNA Ligase IV and XLF are required for all NHEJ events. Mutations in genes that participate in non-homologous end joining lead to ataxia-telangiectasia (ATM gene), Fanconi anemia (multiple genes), as well as hereditary breast and ovarian cancers (BRCA1 gene). Many NHEJ genes have been knocked out in mice. Deletion of XRCC4 or LIG4 causes embryonic lethality in mice, indicating that NHEJ is essential for viability in mammals. In contrast, mice lacking Ku or DNA-PKcs are viable, probably because low levels of end joining can still occur in the absence of these components. All NHEJ mutant mice show a SCID phenotype, sensitivity to ionizing radiation, and neuronal apoptosis. Aging A system was developed for measuring NHEJ efficiency in the mouse. NHEJ efficiency could be compared across tissues of the same mouse and in mice of different age. Efficiency was higher in the skin, lung and kidney fibroblasts, and lower in heart fibroblasts and brain astrocytes. Furthermore, NHEJ efficiency declined with age. 
The decline was 1.8 to 3.8-fold, depending on the tissue, in the 24-month-old compared to the 5-month-old mice. Reduced capability for NHEJ can lead to an increase in the number of unrepaired or faultily repaired DNA double-strand breaks that may then contribute to aging. An analysis of the level of NHEJ protein Ku80 in human, cow, and mouse indicated that Ku80 levels vary dramatically between species, and that these levels are strongly correlated with species longevity. List of proteins involved in NHEJ in human cells: Ku70/80, DNA-PKcs, DNA Ligase IV, XRCC4, XLF, Artemis, DNA polymerase mu, DNA polymerase lambda, PNKP, Aprataxin, APLF, BRCA1, BRCA2, and CYREN. 
https://en.wikipedia.org/wiki/Algebra%20of%20physical%20space
In physics, the algebra of physical space (APS) is the use of the Clifford or geometric algebra Cl3,0(R) of the three-dimensional Euclidean space as a model for (3+1)-dimensional spacetime, representing a point in spacetime via a paravector (3-dimensional vector plus a 1-dimensional scalar). The Clifford algebra Cl3,0(R) has a faithful representation, generated by Pauli matrices, on the spin representation C2; further, Cl3,0(R) is isomorphic to the even subalgebra Cl(R) of the Clifford algebra Cl3,1(R). APS can be used to construct a compact, unified and geometrical formalism for both classical and quantum mechanics. APS should not be confused with spacetime algebra (STA), which concerns the Clifford algebra Cl1,3(R) of the four-dimensional Minkowski spacetime. Special relativity Spacetime position paravector In APS, the spacetime position is represented as the paravector where the time is given by the scalar part , and e1, e2, e3 is a basis for position space. Throughout, units such that are used, called natural units. In the Pauli matrix representation, the unit basis vectors are replaced by the Pauli matrices and the scalar part by the identity matrix. This means that the Pauli matrix representation of the space-time position is Lorentz transformations and rotors The restricted Lorentz transformations that preserve the direction of time and include rotations and boosts can be performed by an exponentiation of the spacetime rotation biparavector W. In the matrix representation, the Lorentz rotor is seen to form an instance of the group SL(2,C) (the special linear group of degree 2 over the complex numbers), which is the double cover of the Lorentz group. 
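The Pauli-matrix representation of the spacetime position described above can be checked numerically: in this representation the determinant of the 2×2 matrix reproduces the Minkowski interval. A small sketch with numpy (the numerical values are arbitrary):

```python
import numpy as np

# Pauli matrices standing in for the basis vectors e1, e2, e3
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def paravector(t, x, y, z):
    """Spacetime position paravector t + x e1 + y e2 + z e3 in the
    Pauli-matrix representation (scalar part -> identity matrix)."""
    return t * I2 + x * s1 + y * s2 + z * s3

X = paravector(2.0, 0.3, -0.4, 1.2)
interval = np.linalg.det(X).real   # equals t**2 - x**2 - y**2 - z**2
```

Here `interval` evaluates to 2.0² − 0.3² − 0.4² − 1.2² = 2.31, showing how the Lorentz-invariant interval is encoded algebraically rather than through an explicit metric tensor.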
The unimodularity of the Lorentz rotor is translated in the following condition in terms of the product of the Lorentz rotor with its Clifford conjugation This Lorentz rotor can always be decomposed into two factors, one Hermitian , and the other unitary , such that The unitary element R is called a rotor because this encodes rotations, and the Hermitian element B encodes boosts. Four-velocity paravector The four-velocity, also called proper velocity, is defined as the derivative of the spacetime position paravector with respect to proper time τ: This expression can be brought to a more compact form by defining the ordinary velocity as and recalling the definition of the gamma factor: so that the proper velocity is more compactly: The proper velocity is a positive unimodular paravector, which implies the following condition in terms of the Clifford conjugation The proper velocity transforms under the action of the Lorentz rotor L as Four-momentum paravector The four-momentum in APS can be obtained by multiplying the proper velocity with the mass as with the mass shell condition translated into Classical electrodynamics Electromagnetic field, potential, and current The electromagnetic field is represented as a bi-paravector F: with the Hermitian part representing the electric field E and the anti-Hermitian part representing the magnetic field B. In the standard Pauli matrix representation, the electromagnetic field is: The source of the field F is the electromagnetic four-current: where the scalar part equals the electric charge density ρ, and the vector part the electric current density j. Introducing the electromagnetic potential paravector defined as: in which the scalar part equals the electric potential ϕ, and the vector part the magnetic potential A. The electromagnetic field is then also: The field can be split into electric and magnetic components. Here, and F is invariant under a gauge transformation of the form where is a scalar field. 
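The Hermitian boost factor of the rotor decomposition described earlier can likewise be sketched numerically. A boost of rapidity w along e3 is B = exp((w/2) e3); acting on the rest velocity it reproduces the gamma factor, and the unimodularity condition det B = 1 is preserved (a hedged sketch in the Pauli-matrix representation; the rapidity value is arbitrary):

```python
import numpy as np

s3 = np.array([[1, 0], [0, -1]], dtype=complex)   # e3 as a Pauli matrix
I2 = np.eye(2, dtype=complex)

def boost(w):
    """Hermitian Lorentz rotor B = exp((w/2) e3) for a boost of rapidity w
    along e3, written in closed form since e3 squares to the identity."""
    return np.cosh(w / 2) * I2 + np.sinh(w / 2) * s3

w = 0.5
B = boost(w)
u_rest = I2                       # proper velocity at rest: u = 1
u = B @ u_rest @ B.conj().T       # transformed velocity u' = L u L(dagger)

gamma, v = np.cosh(w), np.tanh(w) # u' should equal gamma * (1 + v e3)
# unimodularity: det B = 1, hence det u' = 1 as well
```

The transformed velocity matches γ(1 + v e3) with γ = cosh w and v = tanh w, and both B and u' have unit determinant, the matrix form of the unimodularity condition.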
The electromagnetic field is covariant under Lorentz transformations according to the law Maxwell's equations and the Lorentz force The Maxwell equations can be expressed in a single equation: where the overbar represents the Clifford conjugation. The Lorentz force equation takes the form Electromagnetic Lagrangian The electromagnetic Lagrangian is which is a real scalar invariant. Relativistic quantum mechanics The Dirac equation, for an electrically charged particle of mass m and charge e, takes the form: where e3 is an arbitrary unitary vector, and A is the electromagnetic paravector potential as above. The electromagnetic interaction has been included via minimal coupling in terms of the potential A. Classical spinor The differential equation of the Lorentz rotor that is consistent with the Lorentz force is such that the proper velocity is calculated as the Lorentz transformation of the proper velocity at rest which can be integrated to find the space-time trajectory with the additional use of See also Paravector Multivector wikibooks:Physics Using Geometric Algebra Dirac equation in the algebra of physical space Algebra
https://en.wikipedia.org/wiki/Dissimilar%20friction%20stir%20welding
Dissimilar friction stir welding (DFSW) is the application of friction stir welding (FSW), invented at The Welding Institute (TWI) in 1991, to join different base metals including aluminum, copper, steel, titanium, magnesium and other materials. It is a solid-state welding process, which means there is no melting. DFSW relies on frictional heat generated by a simple tool to soften the materials and stir them together using both the tool's rotational and traverse movements. Initially, FSW was mainly used for joining aluminum base metals, because fusion welding methods produce solidification defects in such joints, including porosity and thick intermetallic compounds. Over the last decade, DFSW has come to be regarded as an efficient method to join dissimilar materials. DFSW has many advantages compared with other welding methods, including low cost, user-friendliness, and an easy operation procedure, which has led to widespread use of friction stir welding for dissimilar joints. A welding tool, the base materials, a backing plate (fixture), and a milling machine are the materials and equipment required for DFSW. In contrast, other welding methods, such as Shielded Metal Arc Welding (SMAW), typically need a highly skilled operator as well as quite expensive equipment. Principle of operation The mechanism of DFSW is very simple. A rotating tool plunges into the interface of the parent metals, and the heat input generated by friction between the tool shoulder surface and the top surface of the base metals softens the base materials. In other words, the rotational movement of the tool mixes and stirs the parent metals and creates a softened pasty mixture. Afterwards, the tool's traverse movement along the interface creates a joint. This results in a final bond that combines both mechanical and metallurgical bonding at the interface. These two bondings are critical in order to achieve proper mechanical properties. 
Butt and lap designs are the most common joint types in dissimilar friction stir welding (DFSW). Likewise, one material is generally harder than the other. In general, the hard and soft materials are placed on the advancing and retreating sides respectively during welding. Tool Geometry Tool configuration is an important factor in achieving a sound joint. The tool consists of two parts, a tool shoulder and a tool pin. The tool shoulder generates frictional heat, while the tool pin stirs the softened materials. Various pin and shoulder configurations may be used for DFSW. "Cylindrical", "rectangular", "triangular" and "threaded-cylindrical" are the most common tool pin profiles, while "featureless" and "scrolled" are the most common tool shoulder configurations. Tool material selection depends on the base materials to be joined. For example, for aluminum/copper joints, hot working alloy steel is generally used, while for harder metals such as titanium/aluminum joints, tungsten carbide is common. Welding Parameters In DFSW, mechanical properties mainly include tensile strength, hardness, yield strength, and elongation. Selecting optimum welding parameters results in proper mechanical properties of the joint. Tool rotational speed (rpm), tool traverse speed (mm/min), tool tilt angle (degrees), tool offset (mm), tool penetration (mm), and tool geometry are the most important welding parameters in DFSW. The tool center is typically placed on the centerline of the joint for similar joints such as aluminum/aluminum or copper/copper joints; in contrast, in DFSW it is shifted towards the softer material, a shift called tool offset. It is a significant factor in achieving a joint with smaller welding defects and higher mechanical properties. Generally, the harder and softer materials are placed on the Advancing Side (AS) and Retreating Side (RS) respectively. 
Besides the tool geometry, which plays a critical role in the final mechanical and metallurgical properties of the weldment, the tool rotational speed and tool offset are considered the most important welding parameters during DFSW. Heat Generation A non-consumable rotating tool is plunged into the interface of the parent materials. Frictional heat arising from the tool shoulder throughout welding plasticizes the parent materials, leading to local plastic deformation. The localized heat generated by the tool results from the following process. At the initial stage, it arises primarily from friction between the plunged pin and the parent materials. Afterwards, it is mainly produced by friction between the shoulder surface and the top surface of the base metals once the shoulder touches the top surface. Subsequently, the softened materials are stirred together by the rotating pin, resulting in a solid-state bond. Frigaard et al. showed that tool rotational speed and tool shoulder diameter are the main contributing factors in heat generation. Material Flow The mechanism of bonding in DFSW is based on two simple concepts. First, the stirred materials, a mixed flow of soft and hard metals, are forged into the interface of the harder material, leading to a strong mechanical bond at the interface. Furthermore, a complementary metallurgical bond is formed at the interface, enhancing and improving the mechanical properties of the joint. Material flow throughout DFSW depends on various parameters including welding process parameters, tool geometry, and base materials. Tool geometry is the most important factor in achieving appropriate material flow. Defects Welding defects are quite common in DFSW. They include tunneling defects, fragment defects, cracks, voids, surface cavities or grooves, and excessive flash formation. 
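The shoulder's dominant contribution to heat generation described above can be illustrated with a back-of-the-envelope estimate. Assuming a flat shoulder, uniform contact pressure and a constant sliding-friction coefficient, integrating friction stress times sliding speed over the contact area gives a cubic dependence on shoulder radius (a simplified textbook sliding-friction model, not Frigaard et al.'s full formulation; all parameter values are illustrative):

```python
import math

def shoulder_heat_rate(mu, p, omega, R):
    """Toy estimate of frictional heat generated under a flat circular
    shoulder: Q = integral of (mu * p) * (omega * r) over the contact
    area = (2/3) * pi * mu * p * omega * R**3 (watts, SI units)."""
    return (2 / 3) * math.pi * mu * p * omega * R**3

# Illustrative values: mu = 0.4, p = 50 MPa, 1200 rpm, shoulder radius 9 mm
omega = 1200 * 2 * math.pi / 60                    # rad/s
Q = shoulder_heat_rate(0.4, 50e6, omega, 0.009)    # a few kilowatts
```

The R³ scaling makes plain why shoulder diameter, alongside rotational speed, dominates the heat input: doubling the shoulder radius multiplies the estimated heat rate by eight.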
Amongst these, the tunneling defect is the most common defect in DFSW, resulting from improper material flow throughout welding. It is mainly attributed to inappropriate selection of welding parameters, particularly welding speed, rotational speed, tool design and tool penetration, leading to either abnormal stirring or insufficient heat input. Formation of coarse fragments of the harder material within the matrix of the softer material is another typical defect observed only in DFSW. Generally, during DFSW, the pasty materials behave like a metal matrix composite in which the softer and harder materials act as the matrix and the reinforcement respectively. In fact, it is quite important to keep the harder material in relatively small pieces in order to achieve the best flow of materials. Therefore, any factor that causes formation of large pieces of the harder material leads to the appearance of fragment defects. Tool offset and tool pin design are considered the most significant contributing factors in the formation of fragment defects in DFSW: both can disturb the flow of material by allowing large pieces of the harder material to form within the matrix of the softer material, since it is quite difficult to stir and mix the pasty materials when one of them is not relatively fine. In addition, fragment defects are usually accompanied by other defects such as voids and cracks. Typical Characteristics DFSW shows various characteristics in terms of hardness distribution, tensile strength, microstructure, formation of intermetallic compounds, and formation of a composite structure within the stir zone. The majority of dissimilar joints fabricated by FSW demonstrate similar results. Hardness Since the base materials have different mechanical properties, the hardness distribution is not homogeneous, which can be attributed to two different reasons. 
First, the different mechanical properties of the base materials, including hardness, cause inhomogeneity in the weldment. Second, the different microstructures and grain sizes of the welding zones, including the stir zone, TMAZ, and HAZ, result in varying hardness. Moreover, the hardness in the nugget (stir) zone is highly inhomogeneous because of the formation of onion rings (a composite structure) and IMCs. As a result, dissimilar joints show an inhomogeneous hardness distribution in the nugget zone.

Microstructure
Four welding zones are typically observed in dissimilar joints made by FSW: the Stir Zone (SZ) or nugget zone, the Thermo-Mechanically Affected Zone (TMAZ), the Heat-Affected Zone (HAZ), and the Base Metals (BM). The microstructure of the weldment demonstrates remarkable grain refinement in the stir zone along with elongation of the grains in the TMAZ. Intense plastic deformation caused by the tool's rotational and traverse movements accounts for the notable grain refinement in the stir zone. Moreover, the HAZ presents relatively coarser grains, which can be attributed to its lower cooling rate in comparison with the other welding zones. Some phenomena are typical of dissimilar friction stir welding, including the formation of Intermetallic Compounds (IMCs) and the appearance of a Composite-like Structure (CS), which appears in various patterns, most notably onion rings, as shown in the figure below. IMCs and CS can enhance the mechanical behavior of the joints depending on their condition, such as the thickness of the IMCs and the distribution pattern of the composite-like structure. Proper selection of welding parameters optimizes the formation of IMCs and CS, resulting in the highest mechanical properties. As pointed out before, rotational speed, welding speed, and tool offset, along with tool pin design, are the most important factors affecting the mechanical and metallurgical properties during DFSW.
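The dependence of heat input on rotational speed and shoulder diameter noted in the Heat Generation section is often captured by a simplified sliding-friction model, Q = (2/3)·π·μ·p·ω·R³ for a flat shoulder. The sketch below is illustrative only: the friction coefficient, axial pressure, rotational speed and shoulder radius are assumed values, not data from this article.

```python
import math

def shoulder_heat_rate(mu, pressure, omega, radius):
    """Frictional heat rate (W) under a flat circular shoulder, assuming
    pure sliding: integrating tau * omega * r (with tau = mu * pressure)
    over the shoulder area gives Q = (2/3) * pi * mu * p * omega * R^3."""
    return (2.0 / 3.0) * math.pi * mu * pressure * omega * radius ** 3

# Assumed illustrative parameters (not measurements from this article):
mu = 0.4                          # sliding friction coefficient
pressure = 50e6                   # axial pressure under the shoulder, Pa
omega = 2 * math.pi * 1000 / 60   # 1000 rpm converted to rad/s
radius = 0.009                    # shoulder radius, m

q1 = shoulder_heat_rate(mu, pressure, omega, radius)
q2 = shoulder_heat_rate(mu, pressure, omega, 2 * radius)
print(f"heat rate at R = 9 mm: {q1:.0f} W")
print(f"doubling the shoulder radius multiplies the heat rate by {q2 / q1:.0f}")
```

The cubic dependence on shoulder radius in this model is consistent with the observation above that shoulder diameter and rotational speed dominate heat generation.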
Unlike conventional fusion welding methods, which are accompanied by substantially thick interfacial IMCs, forming an interfacial metallurgical bond during DFSW is essential to achieving a sound joint. However, the IMC layer should be kept at an optimum condition to enhance mechanical properties, i.e. it should be thin, uniform and continuous.

IMCs
IMCs are another typical phenomenon in DFSW. Several criteria apply to IMCs in order to achieve a sound joint, including thickness, uniformity and continuity. The most common IMCs appearing in aluminum/copper joints are Al4Cu9, Al2Cu3 and Al2Cu. The interface and the outer edges of the particles dispersed in the nugget zone are the two main places where IMCs form. Depending on the size of the harder-material particles dispersed in the softer matrix, coarse particles transform only partially to IMCs, mostly around their outer edges, while fine particles transform completely. It is worth noting that the average thickness of the IMC layer is less than 2 micrometres; particles smaller than about 2 micrometres therefore transform completely to IMCs, enhancing the mechanical properties of the nugget zone.

Tensile Strength
Another important characteristic in DFSW is the final tensile strength. The majority of dissimilar weldments present a similar trend in tensile strength. There are two different materials in DFSW, one softer than the other; for example, in an aluminum-to-copper joint, aluminum is the softer material. What, then, is the tensile strength of the joint? Is it more than both base materials? Less than both? What is required for a sound joint? The answer is that the tensile strength of joints in DFSW is a fraction of the tensile strength of the softer material.
Therefore, the final tensile strength of the weldment is usually less than that of both base materials; however, to be acceptable in industry, it should generally exceed 70 percent of the tensile strength of the softer material. The fracture behavior of tensile specimens shows that the majority of joints fail at the interface in a brittle manner, which can be attributed to the IMCs developed at the interface. Although IMCs can successfully improve tensile strength, the resulting brittle fracture is one of the existing challenges in dissimilar joints fabricated by FSW.

Formation of composite structure
Because there are two different materials in DFSW, formation of a composite structure within the nugget zone is inevitable. Typically, it appears in the form of onion rings in the nugget (stir) zone of the softer matrix, as shown in the figure below. That is, fine particles of the advancing-side material (the harder material) disperse throughout the stir zone of the retreating-side material (the softer material). This is the main reason for the inhomogeneous hardness distribution in the stir zone.

Challenge
FSW can be an efficient method for joining dissimilar materials, and the outcomes in terms of tensile strength, shear strength, and hardness distribution are promising. However, most joints fracture at the interface. Moreover, even those that rupture in the base metal show brittle behavior, i.e. low elongation, which can be attributed to the formation of IMCs. There must be a balance between the tensile strength and the ductility of the weldment in order to safely use dissimilar weldments in industrial applications. In other words, proper ductility and toughness are required for some industrial applications, since the joints should possess adequate resistance to impact and shock loading. The majority of the fabricated weldments are not sufficiently strong to be used for such applications.
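The 70 percent joint-efficiency criterion described under Tensile Strength can be written as a one-line check. The strength values below are assumed for illustration (they are not measurements from this article):

```python
def joint_efficiency(joint_uts, softer_uts):
    """Ratio of the joint's ultimate tensile strength to that of the
    softer base material (both in the same units, e.g. MPa)."""
    return joint_uts / softer_uts

def acceptable(joint_uts, softer_uts, threshold=0.70):
    """Rule of thumb from the text: the joint is industrially acceptable
    if it retains at least 70% of the softer material's strength."""
    return joint_efficiency(joint_uts, softer_uts) >= threshold

# Illustrative aluminium-to-copper case; the UTS values are assumptions.
softer_uts = 110.0  # MPa, softer side (aluminium)
for joint_uts in (60.0, 85.0):
    eff = joint_efficiency(joint_uts, softer_uts)
    print(f"joint UTS {joint_uts:.0f} MPa -> efficiency {eff:.0%}, "
          f"acceptable: {acceptable(joint_uts, softer_uts)}")
```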
Therefore, it is worthwhile to focus current and future work on improving the toughness of the weldments while keeping tensile strength at a proper value.

References

Welding
Friction
Friction stir welding
Dissimilar friction stir welding
https://en.wikipedia.org/wiki/Spantax%20Flight%20995
Spantax Flight 995 was a charter flight from Madrid–Barajas Airport to New York via Málaga Airport on September 13, 1982. As the DC-10 was rolling for take-off from Málaga, the pilot felt a strong and worsening vibration and aborted the take-off. The flight crew lost control of the aircraft and were unable to stop it within the available runway; the aircraft overran the runway, hit an airfield aerial installation (losing an engine), crossed the Málaga–Torremolinos Highway, striking a number of vehicles, and finally hit a railway embankment and burst into flames. An emergency evacuation was carried out, but 50 people on board died from burns and other injuries. A further 110 people were hospitalized.

Aircraft
The aircraft involved in the accident was a five-year-old McDonnell Douglas DC-10-30CF. It was delivered to Overseas National Airways on June 6, 1977. The aircraft was leased by Spantax in October 1978 and bought in December of that year. At the time of the accident the aircraft had accumulated 15,364 flight hours.

Passengers
The aircraft was carrying 381 passengers and 13 crew in two cabins of service. The 381 passengers were mostly American tourists who had booked a tour of the Spanish coast.

Crew
The captain was 55-year-old Juan Pérez, who had logged almost 16,129 flight hours (including 2,119 hours on the DC-10). The first officer was 33-year-old Carlos Ramírez, who had logged almost 6,489 flight hours, 2,165 of them on the DC-10. The flight engineer was 33-year-old Teodoro Cabejas Barúque, who had logged 19,427 flight hours, including 2,116 on the DC-10. All 13 crew members (aircrew and cabin crew) were Spanish citizens.

Accident
The flight had begun in Palma de Mallorca earlier that morning and made a routine stopover at Madrid–Barajas Airport before arriving in Málaga. The aircraft was almost at full weight, with every seat booked.
One additional passenger, an infant, was also listed. At 9:58:50 UTC, the aircraft was cleared for take-off on Runway 14. The crew and passengers noticed vibration during the take-off roll prior to V1, but the crew chose to continue the take-off. After reaching VR, the nose was raised and the vibrations worsened. Captain Pérez immediately slammed the nose gear back down and applied full reverse thrust and brakes. The aircraft overran the runway, hitting an ILS facility and a metal fence, passing over the M-21 Highway, striking a truck, and colliding with a farm building. The collision ripped off three quarters of the right wing and the right horizontal stabilizer. The aircraft came to rest in a field about 450 m (1,475 ft) past the runway threshold. The initial impact killed 8 passengers. The evacuation was chaotic, as passengers rushed toward the exits, many of them taking their bags and personal belongings with them. One flight attendant tried to open door 4L at the rear left, but was overcome by smoke before she could do so. The right rear door was also jammed by deformation from the impact, so the rear section was evacuated through doors 3L and 3R. Due to the slow evacuation, 42 people died of smoke inhalation. In all, 110 people sustained injuries, including the truck driver on the highway, who was severely injured.

Investigation
An investigation team from the Spanish Civil Aviation Accident and Incident Investigation Commission (CIAIAC) and the American National Transportation Safety Board (NTSB) was assembled to investigate the accident. The flight recorders were retrieved and sent to the manufacturer Sundstrand in Charlotte, North Carolina. The reconstructed data showed a power cutout for engine number 3 on the right side, caused by the captain's finger slipping on the throttle lever. It was determined that the vibrations had been caused by the separation of the tread of a newly replaced tire.
The investigation found that a maintenance error had left weak glue on the newly replaced tire, causing its tread to separate during the take-off roll, most likely aggravated by the heavy payload. Although this was determined to be the main cause, interviews with the cockpit crew found that crews were not trained for anything other than engine problems during the take-off roll. This led the pilots to continue the take-off before ultimately deeming the condition uncontrollable and aborting the take-off at , above V1, with only to spare. The CIAIAC determined that the captain's actions were reasonable and recommended that crews be trained for failures other than engine malfunctions on take-off. The committee also called for passengers to be briefed about the dangers of taking their bags with them and for crews to be within close reach of safety equipment such as megaphones and flashlights.

Trivia
An audio-visual specialist at Pace University, Carlton Maloney, was recording audiotape during the accident as part of a series of recordings of airplane takeoffs and landings. As it became clear that something was going wrong, he began to report on the incident and its immediate aftermath. In 2001, about 19 years after the crash, Binter Mediterráneo Flight 8261 crashed in almost the same spot as Flight 995.

References

External links
Accident Report (Archive)
Accident report (Archive)

Aviation accidents and incidents in 1982
Aviation accidents and incidents in Spain
Airliner accidents and incidents caused by mechanical failure
Accidents and incidents involving the McDonnell Douglas DC-10
Spantax accidents and incidents
1982 in Spain
September 1982 events in Europe
Aviation accidents and incidents involving runway overruns
https://en.wikipedia.org/wiki/Puccinellia
Puccinellia is a genus of plants in the grass family, known as alkali grass or salt grass. These grasses grow in wet environments, often in saline or alkaline conditions. They are native to temperate to Arctic regions of the Northern and Southern Hemispheres.

Selected species
Puccinellia agrostidea Sorensen - Bent alkali grass or tundra alkali grass
Puccinellia ambigua Sorensen - Alberta alkali grass
Puccinellia americana Sorensen - American alkali grass
Puccinellia andersonii Swallen - Anderson's alkali grass
Puccinellia angustata (R.Br.) Rand & Redf. - Narrow alkali grass
Puccinellia arctica (Hook.) Fern. & Weath. - Arctic alkali grass
Puccinellia bruggemannii Sorensen - Prince Patrick alkali grass
Puccinellia convoluta (Hornem.) Hayek
Puccinellia coreensis Honda - Korean alkaligrass
Puccinellia deschampsioides Sorensen - Polar alkali grass
Puccinellia distans (Jacq.) Parl. - Spreading alkali grass, weeping alkali grass or reflexed saltmarsh-grass
Puccinellia fasciculata (Torr.) E.P.Bicknell - Torrey alkali grass or Borrer's saltmarsh-grass
Puccinellia fernaldii (A.Hitchc.) E.G.Voss = Torreyochloa pallida var. fernaldii
Puccinellia festuciformis (Host) Parl.
Puccinellia groenlandica Sorensen - Greenland alkali grass
Puccinellia howellii J.I.Davis - Howell's alkali grass
Puccinellia hultenii Swallen - Hulten's alkali grass
Puccinellia interior Sorensen - Interior alkali grass
Puccinellia kamtschatica Holmb. - Alaska alkali grass
Puccinellia kurilensis (Takeda) Honda - Dwarf alkali grass
Puccinellia langeana (Berlin) T.J.Sorensen ex Hultén
Puccinellia laurentiana Fern. & Weath. - Tracadigash Mountain alkali grass
Puccinellia lemmonii (Vasey) Scribn. - Lemmon's alkali grass
Puccinellia limosa (Schur) Holmb.
Puccinellia lucida Fern. & Weath. - Shining alkali grass
Puccinellia macquariensis (Cheeseman) Allan & Jansen
Puccinellia macra Fern. & Weath. - Bonaventure Island alkali grass
Puccinellia maritima (Huds.) Parl. - Seaside alkali grass or common saltmarsh-grass
Puccinellia nutkaensis (J.Presl) Fern. & Weath. - Nootka alkali grass
Puccinellia nuttalliana (J.A.Schultes) A.S.Hitchc. - Nuttall's alkali grass
Puccinellia parishii A.S.Hitchc. - Bog alkali grass or Parish's alkali grass
Puccinellia perlaxa (N.G.Walsh) N.G.Walsh & A.R.Williams - Plains saltmarsh-grass
Puccinellia phryganodes (Trin.) Scribn. & Merr. - Creeping alkali grass
Puccinellia poacea Sorensen - Floodplain alkali grass
Puccinellia porsildii Sorensen - Porsild's alkali grass
Puccinellia pumila (Vasey) A.S.Hitchc. - Dwarf alkali grass
Puccinellia pungens (Pau) Paunero
Puccinellia rosenkrantzii Sorensen - Rosenkrantz's alkali grass
Puccinellia rupestris (With.) Fern. & Weath. - British alkali grass or stiff saltmarsh-grass
Puccinellia simplex Scribn. - California alkali grass
Puccinellia stricta (Hook.f.) C.Blom - Australian saltmarsh-grass
Puccinellia sublaevis (Holmb.) Tzvelev - Smooth alkali grass
Puccinellia tenella Holmb. ex Porsild - Tundra alkali grass
Puccinellia tenuiflora (Griesb.) Scribn. & Merr.
Puccinellia vaginata (Lange) Fern. & Weath. - Sheathed alkali grass
Puccinellia vahliana (Liebm.) Scribn. & Merr. - Vahl's alkali grass
Puccinellia wrightii (Scribn. & Merr.) Tzvelev - Wright's alkali grass

List sources:

References

External links
Jepson Manual Treatment
USDA Plants Profile

Poaceae genera
Halophytes
https://en.wikipedia.org/wiki/Furniture%20Style
Furniture Style (magazine) was a monthly business-to-business magazine and Web site serving home furnishings retailers, specifically furniture retailers, and interior designers. Owned by William C. Vance's Vance Publishing Corp., the magazine was BPA-audited and reached 25,000 furniture retail professionals in the United States and Canada. The magazine was based in Lincolnshire, Illinois, at Vance Publishing's corporate headquarters; it was founded in October 1996. Key members of the editorial staff included Julie M. Smith, publisher; Romy Schafer, editor; Thomas A. Prais and Sara Sandock, managing editors; and Nancy Robinson, senior contributing editor. It ceased publication in 2009. The next year, Vance sold the Furniture Style assets to Scranton Gillette Communications. Furniture Style is also the name of an online furniture retailer based in the UK, founded in 2011 by Nicola Morgan.

Content and coverage
Furniture Style presented content in a concise, highly visual format that put product trends at center stage. Topics included merchandising advice, consumer shopping trends and timely news about home furnishings retailers' most profitable product categories, such as bedroom, dining room, entertainment, youth, accent, area rugs, mattresses and upholstery.

Other publications and properties
A.D.I. Awards — Advancing Design & Innovation - An annual home furnishings awards program for Las Vegas Market exhibitors produced by Furniture Style magazine and the World Market Center.
www.furniturestyle.com - Furniture Style launched a new Web site in May 2007 that included multimedia pods, Style File trend slide shows, Web-only articles, breaking news and editors' blogs, as well as current and archived articles from the print edition.
Home Fashion Forecast - A quarterly fashion supplement that showcased new products for the whole home, presented color forecasts and interviews with design talent, and offered timely merchandising advice.
Previously distributed only at the High Point and Las Vegas markets, Home Fashion Forecast was later made available online.
Home Point — Where Furniture Retailers Click - An online focus group of home furnishings retailers that provided the data for Furniture Style's monthly "Retail Matters" column.
The Retail Experience - An annual supplement that provided extensive, exclusive research about home furnishings consumers and their purchase decisions.

References

External links
A.D.I. Awards — Advancing Design & Innovation
Furniture Style
Home Fashion Forecast
The Retail Experience
Home Accents Blog

Furniture Style
Business magazines published in the United States
Defunct magazines published in the United States
Design magazines
Magazines established in 1996
Magazines disestablished in 2009
Magazines published in Illinois
Monthly magazines published in the United States
Professional and trade magazines
https://en.wikipedia.org/wiki/Grothendieck%20inequality
In mathematics, the Grothendieck inequality states that there is a universal constant K_G with the following property. If (M_ij) is an n × n (real or complex) matrix with

|∑_{i,j} M_ij s_i t_j| ≤ 1

for all (real or complex) numbers s_i, t_j of absolute value at most 1, then

|∑_{i,j} M_ij ⟨S_i, T_j⟩| ≤ K_G

for all vectors S_i, T_j in the unit ball B(H) of a (real or complex) Hilbert space H, the constant K_G being independent of n. For a fixed Hilbert space of dimension d, the smallest constant that satisfies this property for all n × n matrices is called a Grothendieck constant and denoted K_G(d). In fact, there are two Grothendieck constants, K_G^R(d) and K_G^C(d), depending on whether one works with real or complex numbers, respectively. The Grothendieck inequality and Grothendieck constants are named after Alexander Grothendieck, who proved the existence of the constants in a paper published in 1953.

Motivation and the operator formulation
Let A = (a_ij) be an m × n matrix. Then A defines a linear operator from (R^n, ‖·‖_p) to (R^m, ‖·‖_q) for 1 ≤ p, q ≤ ∞. The (p → q)-norm of A is the quantity

‖A‖_{p→q} = max { ‖Ax‖_q : x ∈ R^n, ‖x‖_p = 1 }.

If p = q, we denote the norm by ‖A‖_p. One can consider the following question: for what values of p and q is ‖A‖_{p→q} maximized? Since A is linear, it suffices to consider p such that the unit ball of ‖·‖_p contains as many points as possible, and q such that ‖Ax‖_q is as large as possible. By comparing the norms ‖·‖_p for 1 ≤ p ≤ ∞, one sees that ‖A‖_{∞→1} ≥ ‖A‖_{p→q} for all 1 ≤ p, q ≤ ∞. One way to compute ‖A‖_{∞→1} is by solving the following quadratic integer program:

maximize ∑_{i,j} a_ij x_i y_j subject to x_i, y_j ∈ {−1, 1}.

To see this, note that ∑_{i,j} a_ij x_i y_j = ∑_i x_i (Ay)_i, and taking the maximum over x ∈ {−1, 1}^m gives ‖Ay‖_1. Then taking the maximum over y ∈ {−1, 1}^n gives ‖A‖_{∞→1}, by the convexity of the unit ball of ℓ_∞^n and by the triangle inequality. This quadratic integer program can be relaxed to the following semidefinite program:

maximize ∑_{i,j} a_ij ⟨x_i, y_j⟩ subject to x_1, …, x_m, y_1, …, y_n being unit vectors in a Hilbert space.

It is known that exactly computing ‖A‖_{p→q} for 1 ≤ q < p ≤ ∞ is NP-hard, while exactly computing ‖A‖_p is NP-hard for p ∉ {1, 2, ∞}. One can then ask the following natural question: how well does an optimal solution to the semidefinite program approximate ‖A‖_{∞→1}?
The Grothendieck inequality provides an answer to this question: there exists a fixed constant K_G such that, for any m, n ≥ 1, for any m × n matrix A, and for any Hilbert space H,

max { ∑_{i,j} a_ij ⟨x_i, y_j⟩ : x_i, y_j unit vectors in H } ≤ K_G ‖A‖_{∞→1}.

Bounds on the constants
The sequences K_G^R(d) and K_G^C(d) are easily seen to be increasing, and Grothendieck's result states that they are bounded, so they have limits. Grothendieck proved that π/2 ≤ K_G^R ≤ sinh(π/2) ≈ 2.301, where K_G^R is defined to be sup_d K_G^R(d). Krivine (1979) improved the result by proving that K_G^R ≤ π/(2 ln(1 + √2)) ≈ 1.7822, conjecturing that the upper bound is tight. However, this conjecture was disproved by Braverman, Makarychev, Makarychev and Naor (2011).

Grothendieck constant of order d
Boris Tsirelson showed that the Grothendieck constants K_G(d) play an essential role in the problem of quantum nonlocality: the Tsirelson bound of any full correlation bipartite Bell inequality for a quantum system of dimension d is upper-bounded by K_G(2d²).

Lower bounds
Some historical data on the best known lower bounds is summarized in the following table.

Upper bounds
Some historical data on the best known upper bounds:

Applications

Cut norm estimation
Given an m × n real matrix A = (a_ij), the cut norm of A is defined by

‖A‖_□ = max_{S ⊆ [m], T ⊆ [n]} | ∑_{i ∈ S, j ∈ T} a_ij |.

The notion of cut norm is essential in designing efficient approximation algorithms for dense graphs and matrices. More generally, the definition of cut norm can be generalized for symmetric measurable functions W : [0, 1]² → R, so that the cut norm of W is defined by

‖W‖_□ = sup_{S, T ⊆ [0, 1]} | ∫_{S×T} W |.

This generalized definition of cut norm is crucial in the study of the space of graphons, and the two definitions of cut norm can be linked via the adjacency matrix of a graph. An application of the Grothendieck inequality is to give an efficient algorithm for approximating the cut norm of a given real matrix A; specifically, given an m × n real matrix, one can find a number α such that

‖A‖_□ ≤ α ≤ C ‖A‖_□,

where C is an absolute constant. This approximation algorithm uses semidefinite programming. We give a sketch of this approximation algorithm. Let B = (b_ij) be the (m + 1) × (n + 1) matrix defined by b_ij = a_ij for i ≤ m, j ≤ n, b_{i,n+1} = −∑_{j=1}^n a_ij, b_{m+1,j} = −∑_{i=1}^m a_ij, and b_{m+1,n+1} = ∑_{i,j} a_ij. One can verify that ‖A‖_□ = ‖B‖_□ by observing that, if (S, T) form a maximizer for the cut norm of A, then a corresponding pair of subsets forms a maximizer for the cut norm of B.
Next, one can verify that ‖B‖_□ = ‖B‖_{∞→1}/4, where

‖B‖_{∞→1} = max { ∑_{i,j} b_ij x_i y_j : x_i, y_j ∈ {−1, 1} }.

Although not important in this proof, ‖B‖_{∞→1} can be interpreted as the norm of B when viewed as a linear operator from ℓ_∞ to ℓ_1. Now it suffices to design an efficient algorithm for approximating ‖B‖_{∞→1}. We consider the following semidefinite program:

SDP(B) = max { ∑_{i,j} b_ij ⟨x_i, y_j⟩ : x_i, y_j unit vectors }.

Then SDP(B) ≥ ‖B‖_{∞→1}. The Grothendieck inequality implies that SDP(B) ≤ K_G ‖B‖_{∞→1}. Many algorithms (such as interior-point methods, first-order methods, the bundle method, the augmented Lagrangian method) are known to output the value of a semidefinite program up to an additive error ε in time that is polynomial in the program description size and log(1/ε). Therefore, one can output α = SDP(B)/4, which satisfies ‖A‖_□ ≤ α ≤ K_G ‖A‖_□ up to the additive error.

Szemerédi's regularity lemma
Szemerédi's regularity lemma is a useful tool in graph theory, asserting (informally) that any graph can be partitioned into a controlled number of pieces that interact with each other in a pseudorandom way. Another application of the Grothendieck inequality is to produce a partition of the vertex set that satisfies the conclusion of Szemerédi's regularity lemma, via the cut norm estimation algorithm, in time that is polynomial in the upper bound of Szemerédi's regular partition size but independent of the number of vertices in the graph. It turns out that the main "bottleneck" of constructing a Szemerédi regular partition in polynomial time is to determine in polynomial time whether or not a given pair (X, Y) is close to being ε-regular, meaning that for all S ⊆ X, T ⊆ Y with |S| ≥ ε|X| and |T| ≥ ε|Y|, we have

| e(S, T)/(|S||T|) − e(X, Y)/(|X||Y|) | ≤ ε,

where e(S, T) denotes the number of edges between S and T, and V and E are the vertex and edge sets of the graph, respectively. To that end, we construct an n × n matrix A, where n = |V|, whose entries on X × Y compare each potential edge with the overall edge density of the pair, so that for all S ⊆ X, T ⊆ Y the sum of the entries over S × T equals e(S, T) − |S||T| · e(X, Y)/(|X||Y|). Hence, if (X, Y) is not ε-regular, then the cut norm of A is at least ε³|X||Y|. It follows that, using the cut norm approximation algorithm together with the rounding technique, one can find in polynomial time sets S ⊆ X and T ⊆ Y witnessing the irregularity. Then the algorithm for producing a Szemerédi regular partition follows from the constructive argument of Alon et al.
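For intuition, the cut norm defined in the Cut norm estimation section can be computed exactly by brute force on small matrices; the exponential cost in the number of rows is precisely why the semidefinite-programming approach matters. A minimal sketch (the test matrices are arbitrary illustrative examples):

```python
from itertools import product

def cut_norm(a):
    """Exact cut norm of a real matrix: the maximum over row subsets S and
    column subsets T of |sum_{i in S, j in T} a[i][j]|.  Runs in
    O(2^m * m * n) time, so it is only usable for small matrices."""
    m, n = len(a), len(a[0])
    best = 0.0
    for s in product((0, 1), repeat=m):  # indicator vector of the row set S
        # For a fixed S, the optimal T takes either all columns with a
        # positive partial sum or all columns with a negative one.
        col = [sum(a[i][j] for i in range(m) if s[i]) for j in range(n)]
        pos = sum(c for c in col if c > 0)
        neg = -sum(c for c in col if c < 0)
        best = max(best, pos, neg)
    return best

print(cut_norm([[1.0, -1.0], [-1.0, 1.0]]))  # prints 1.0
print(cut_norm([[1.0, 1.0], [1.0, 1.0]]))    # prints 4.0
```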
Variants of the Grothendieck inequality

Grothendieck inequality of a graph
The Grothendieck inequality of a graph states that for each n and for each n-vertex graph G = (V, E) without self loops, there exists a universal constant K > 0 such that every n × n matrix A = (a_uv) satisfies

max { ∑_{{u,v} ∈ E} a_uv ⟨x_u, x_v⟩ : x_u unit vectors in a Hilbert space } ≤ K · max { ∑_{{u,v} ∈ E} a_uv ε_u ε_v : ε_u ∈ {−1, 1} }.

The Grothendieck constant of a graph G, denoted K(G), is defined to be the smallest constant K that satisfies the above property. The Grothendieck inequality of a graph is an extension of the Grothendieck inequality, because the former is the special case of the latter when G is a bipartite graph with two copies of {1, …, n} as its bipartition classes. Thus, K_G = sup_n { K(G) : G an n-vertex bipartite graph }. For K_n, the n-vertex complete graph, the Grothendieck inequality of G becomes an inequality over sums ranging over all pairs of distinct vertices. It turns out that K(K_n) = Θ(log n). On the one hand, K(K_n) ≲ log n; indeed, the corresponding inequality holds for any n × n matrix A, which implies the upper bound via the Cauchy–Schwarz inequality. On the other hand, the matching lower bound K(K_n) ≳ log n is due to Alon, Makarychev, Makarychev and Naor in 2006. The Grothendieck inequality of a graph depends upon the structure of G. It is known that

log ω ≲ K(G) ≲ log ϑ(Ḡ),

where ω is the clique number of G, i.e., the largest k such that there exists S ⊆ V with |S| = k and {u, v} ∈ E for all distinct u, v ∈ S, and ϑ(Ḡ) is the parameter known as the Lovász theta function of the complement of G.

L^p Grothendieck inequality
In the application of the Grothendieck inequality for approximating the cut norm, we have seen that the Grothendieck inequality answers the following question: how well does an optimal solution to the semidefinite program approximate ‖A‖_{∞→1}, which can be viewed as an optimization problem over the unit cube? More generally, we can ask similar questions over convex bodies other than the unit cube. For instance, the following inequality is due to Naor and Schechtman and independently to Guruswami et al.: for every n × n matrix A and every p ≥ 2,

max { ∑_{i,j} a_ij ⟨x_i, x_j⟩ : ∑_i ‖x_i‖^p ≤ 1 } ≤ γ_p² · max { ∑_{i,j} a_ij t_i t_j : ∑_i |t_i|^p ≤ 1 },

where γ_p = (E|g|^p)^{1/p} for a standard Gaussian g. The constant γ_p² is sharp in the inequality. Stirling's formula implies that γ_p² = p/e + o(p) as p → ∞.

See also
Pisier–Ringrose inequality

References

External links
(NB: the historical part is not exact there.)
Theorems in functional analysis
Inequalities
https://en.wikipedia.org/wiki/Himanthalia%20elongata
Himanthalia elongata is a brown alga in the order Fucales, also known by the common names thongweed, sea thong and sea spaghetti. It is found in the north east Atlantic Ocean and the North Sea. According to the World Register of Marine Species, Himanthalia elongata is the only member of its genus, Himanthalia Lyngbye, 1819, and the only member of its family, Himanthaliaceae (Kjellman) De Toni, 1891.

Description
Himanthalia elongata is a common brown alga of the lower shore. The thallus is at first a small flattened or saucer-shaped disc up to three centimetres wide with a short stalk. In the autumn or winter, long thongs grow from the centre of this, branching dichotomously a number of times. They grow fast and can reach up to two metres by the following summer, when they become mature. They bear the conceptacles, the reproductive organs, and begin to decay once the gametes have been released into the water. The discs live for two or three years.

Distribution and habitat
Himanthalia elongata is found in the Baltic Sea, the North Sea and the north east Atlantic Ocean from Scandinavia, through Ireland, and south to Portugal. It is found on gently shelving rocky shores in the lower littoral zone and the sublittoral zone, particularly on shores with moderate wave exposure. It is sometimes abundant and forms a distinct zone just below the Fucus serratus zone.

References

Fucales
Edible algae
https://en.wikipedia.org/wiki/Aspergillus%20unilateralis
Aspergillus unilateralis is a species of fungus in the genus Aspergillus. It is from the Fumigati section. Several fungi from this section produce heat-resistant ascospores, and isolates from this section are frequently obtained from locations where natural fires have previously occurred. The species was first described in 1954. It has been reported to produce aszonapyrones and mycophenolic acid.

Growth and morphology
A. unilateralis has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below.

References

unilateralis
Fungi described in 1954
Fungus species
https://en.wikipedia.org/wiki/Penicillium%20chermesinum
Penicillium chermesinum is an anamorph fungus species of the genus Penicillium which was isolated from soil in Nova Scotia, Canada. Penicillium chermesinum produces plastatin, luteosporin, xanthomegnin, azaphilones, p-terphenyls and costaclavine.

See also
List of Penicillium species

Further reading

References

chermesinum
Fungi described in 1923
Fungus species
https://en.wikipedia.org/wiki/Jewish%20customs%20of%20etiquette
Jewish customs of etiquette, known simply as Derekh Eretz (, ), a Hebrew idiom used to describe etiquette, are understood as the order and manner of a man's conduct in the presence of other men: a set of social norms drawn from the world of human interactions. The Talmud and Midrashic literature contain many teachings on this subject, some of which carry the same rigid application as the Torah itself, while others pertain to customs in the synagogue or at the dinner table. Jewish etiquette is a complex system of mores and manners agreed upon by the community, which seeks to delineate an acceptable standard of social laws governing the expectations of personal conduct with respect to one's fellow Jew and/or Gentile, or environment. Ancient Jewish communities throughout the world preserved a well-documented system of etiquette, believed to mimic the social order once universally practised by all Jews in former times. Today, however, many of these social norms are being lost to the community, owing to the mixing of the larger community of Jewish immigrants and the coalescing of these diverse ethnic groups.

History
Jews in ancient times adhered to strict codes of conduct, where custom played an important role in the way they interacted with one another, with an emphasis on decorum (good manners), respect and politeness. The precursor of Jewish social etiquette dates back to antiquity and is documented in one of the Minor Tractates, known as Derech Ereẓ (Manners), the name of a treatise attached to Talmud editions, divided into Rabba (Large) and Zuta (Small). This early rabbinic work testifies to how Jewish etiquette has maintained its own unique, strict code of customs throughout the centuries, although in some cases (e.g. Jewish etiquette in the bath-house) such rules can be traced back to ancient Roman practices.
In fact, some ancient practices were so widespread that a Jewish dictum is typically cited in their recognition: "Decorum came before the giving of the Law," meaning that one cannot personify Torah until he demonstrates common courtesy and decorum (derech ereẓ) in everything that he does. In the Talmud and Midrash there are approximately 200 teachings concerning derech eretz as decent, polite, respectful, thoughtful and civilized behavior, as well as a Minor Tractate (Derech Ereẓ) specifically treating these issues. They define and delineate the difference between conduct and behavior that is malum in se, malum prohibitum, and good practice. Sometimes ethical codes include sections that are meant to give firm rules, but some offer only general guidance, while at other times the words are merely aspirational. Jewish etiquette can easily be divided into sub-categories: table etiquette, dress etiquette, speech etiquette, writing etiquette, etc. A collection known as Hilkot Derekh Ereẓ existed even in the school of Rabbi Akiva (Berakhot 22a); but, as the term "Hilkot" indicates, it was composed entirely of short sentences and rules of behavior and custom, without any references to Jewish Scripture and tradition. Some rules of etiquette are supposed by the Rabbis to have been laid down by the Bible itself. Moses modestly uses the plural in saying to Joshua, "Choose for us men and go fight with Amalek" (Exodus 17:9), though he referred only to himself. From this, the rabbis learnt that whenever a wise man speaks to his congregation, he does not speak in the language of "I" but in the language of "we", so as not to be condescending. This is also the practice of authors or newspaper editors when writing lead articles, speaking in the language of "we." The most basic of the biblical tenets touching on good manners is the command to stand up before an old man (Leviticus 19:32), particularly before one who is learned in the Torah.
Later, in rabbinic tradition, proper etiquette extended even to the place one takes when walking, in relation to one's superiors. For example, if three people were walking together, the rabbi takes up the middle position, he that is esteemed greater than one's self (second in rank) takes up the right-most position, and the person who is least amongst them takes up the left-most position. Another of the rabbinic teachings is the importance given to the right-hand side; for instance, at every turn that a man turns, he should strive only to make a right turn. Moreover, whenever one is scheduled to meet with a great and respectable man, he is supposed to change his clothes beforehand and dress appropriately for the occasion.

Jewish mannerisms

As in other societies, the social structure and conduct of Jews were reinforced by the element of shame. The Talmud names three characteristic traits that are exemplary of the Jewish people as a whole, saying that they are distinguished by being 'merciful, shamefaced and benevolent'. In general, the principle of a shame culture (the fear of being brought to shame, of gaining a social stigma resulting in social alienation by one's peers, or, in extreme cases, family estrangement if caught being disrespectful or engaging in any misconduct) was the chief factor that preserved social order and conduct. At the same time, the people of Israel are admonished not to bring their fellow Jew to public shame, but to safeguard his personal dignity.

Common greetings

One's manner of speech has always played a major role in Jewish etiquette, a manner of speech that is meant to highlight one's more refined and urbane character traits, and which is guided mainly by respect, humility and modesty. In some cultures this is known as "beautified speech," or "elegant speech."
(צפרך טוב = ṣafrakh ṭoḇ), the common greeting said when one greets his neighbor after rising from sleep in the morning, literally meaning, "May your morning be good" (Good morning!). The response to the greeting is (צפרך טוב ומבורך = ṣafrakh ṭoḇ ū-meḇorakh), "May your morning be good and blessed". (שלום עליכם = shalom ʿaleikhem), literally meaning, "Peace be unto you," is said whenever a man meets his neighbor, whether on a weekday or on a Sabbath day, being the customary words of greeting. The expression is always used in the plural form, even if only one man is being greeted. The traditional reply given in return is, (עליכם שלום וברכה = ʿaleikhem shalom u'ḇrakhah), meaning, "Unto you may there be peace and a blessing." (מרי שלום עליכם = mori, shalom ʿaleikhem), meaning, "Mori (Rabbi), peace be unto you," is said when a man greets his Rabbi. The greeter, in this case, will place the palm of his right hand over his own heart and make a slight bow of courtesy, out of respect for his Rabbi. The Rabbi, in turn, will usually answer him in kind, by responding: (עליכם שלום וברכה = ʿaleikhem shalom u'ḇrakha), "May peace and a blessing be upon you." (חייך לפניך = ḥayekha lefanekha), meaning, "Your life is before you!", said whenever a person sees another person studying Torah. The person studying will duly respond by saying, in respect, (כי הוא חייך ואורך ימיך = kī hū ḥayekha we-orekh yamekha), meaning, "For it is your life and the prolonging of your days," an allusion to the biblical verse in Deuteronomy 30:20.

Holiday greetings

On any of the three major Jewish holidays (Passover, Shavu'ot and Sukkot), the common greeting between a man and his neighbour is to say: (תזכה לשנים רבות ומועדים טובים = "May you be merited with many more years and with good holidays"). The response to the same greeting is: (בחייך ובימיך הטובים = "During your lifetime and in your own good days").
On the Jewish New Year (Rosh Hashanah), the common blessing said to one's neighbour in greeting him is: (תיכתב בספר החיים ובספר הזיכרון = "May you be inscribed in the Book of Life and in the Book of Remembrance"). The response to the same greeting is: (ואתה תיכתב בספר החיים ובספר הזיכרון = "And may you be inscribed in the Book of Life and in the Book of Remembrance"). On the night of the Sabbath, the common greeting to one's neighbour is to say: (שבת שלום = shabbat shalom), which has the meaning of "A Sabbath of peace!" The response to the same greeting is: (עליך ועל כל ישראל = ʻalekha we-ʻal kol yisrael), meaning, "Upon you and upon all of Israel."

Euphemisms, nicknames and flowery speech

A sign of Hebrew literary excellence is the ability of speakers to interweave into their daily conversation verses taken from the Hebrew Bible, such that "the language is rife with biblical allusions, that is, the insertion of verses and parts of verses into their speech, a phenomenon that is common and seen as a humorous rhetorical device." These interjections are usually "in response to an existing situation, or they would say a verse with deliberate distortion to suit a specific event." Typically, religious Jews will not make use of vulgar language. This was seen as essential in adding refinement to one's manner of speech. If, in a conversation, there was a need to mention one's privy place, they would seek the least offensive way of saying so. The vestiges of ancient etiquette have also revealed themselves in their manner of expressions or utterances. A few examples follow: If someone needed to mention the membrum virile, he would say for that organ (בְּרִית = bǝrīth), a reference to the "covenant" of circumcision. The Yemenite Jew, for example, did not call a donkey by its name, but rather gave it a euphemism, "beast of burden" (נושא אדם = lit. carrier of man). The scribe, Rabbi Zechariah al-Dhahiri (16th century), coined the phrase "lance" (Heb. רומח) for it.
Similarly, we find that the elders who procured a Greek translation of the Hebrew Bible for Ptolemy II gave a euphemism for the donkey, rather than call it by its name. Thus, the Midrash says: "And he put them on a donkey – This is one of eighteen places where the Sages changed [the literal translation] for Ptolemy the king." Instead of saying "toilet," a word that carries certain negative connotations, Jews in Yemen would say (בֵּית הַכָּבוֹד = bayth ha-koḇodh), a euphemism for "outhouse" or "toilet facilities," literally meaning "the house of glory," so as not to accustom oneself to speaking vulgar words. A word more commonly used to denote the same is (בית הכסא = bayth ha-kisei), literally, "house of the stool." The word "lewd woman" ("harlot" or "whore") was much too harsh a word to say; therefore the euphemism (מוכנת = mukhanath) was used for her, literally meaning "she that is ready." In other places, they made use of the word (יצאנית = yeṣ’ānīth) for her, meaning "she that goes out." A cemetery or graveyard was not called by its Modern Hebrew expression, bayt ha-keḇorot, but rather by its euphemism, (בית החיים = bayth ha-ḥayyim), meaning, "the house of the living." Instead of saying "so-and-so has died," or "he is dead," words that were seen as too harsh to say, they would say "so-and-so has passed on" or "he passed on" (הוא עבר = hū ʿaḇar), or what is also in today's Modern Hebrew, (הוא נפטר = hū nifṭar), "he has been dismissed; sent off [into the other world]."

Language of appeasement

(חוץ מכבודך = ḥūṣ mikǝḇodakh), a flowery way of saying "I beg your pardon," or "Forgive me [for saying]," is often used in the Hebrew register. It is considered a very respectful way of indicating that you disagree with someone's opinion or action, and is usually followed by the matter with which you disagree.
It is often used as a preface whenever one wishes to mention a matter that is very sensitive, such as when there is an element in the statement that may be offensive to the listener (e.g. a toilet; a dog; feces; a donkey; a harlot; shoes; a heathen, etc.), and the speaker, by preemptively warning the listener that he is about to say something offensive, does not want to come across as offensive himself. (בעונות = be-ʿawonot), meaning, "On account of [our] iniquities," is said by the person who has just heard bad news (disaster, destruction, divorce and broken homes, etc.), and by saying so he is meant to show regret, on the one hand, yet justify God's judgments and dealings with man, on the other. In Jewish etiquette, invectives are never used against the people of Israel; but if anyone wished to denounce something done by the nation of Israel, he alters his speech and says, "The enemies of Israel (soneihen shel yisrael) have done so-and-so," without specifically railing against Israel.

Terms of endearment

In most Jewish communities, a man did not call his spouse by her personal name, but rather coined a term of endearment for that spouse, such as with the Jews of Yemen who would use the phrase (יַא-הֵי = Ya-He), lit. "O, you!". This was also done out of respect. The practice was used by both Jewish men and women when addressing one another, without mentioning the other's name. A man might also call his wife Imma (אמא = "mother"), while the woman may call her husband Abba (אבא = "father"). Sometimes a wife would simply call her husband by his family name, such as (יא כהן = "Ya Cohen!"). If her husband was a Rabbi, she would often call her husband by the epithet (יא מורי = "Ya mori!") (lit. O, Rabbi!). The general rule of practice was that it was always held as improper to call one's spouse by his or her first name.
Coining a phrase or nickname for one's spouse was also meant to instruct children not to call their parents by their first names, out of respect and awe for their parents. In western societies, "honey" and "babe" are commonly used to address one's spouse. Similarly, one does not say to a rabbi or to a superior, "You said such and such" (in the second person), as this is seen as being too direct, or might sound confrontational. Rather, one says, "The rabbi has said" (third person), or "his honor has said," etc. The Bach (Yoreh De'ah 242:6) seems to believe that while such a practice (referring to one's teacher in the third person) is appropriate, it is not an absolute requirement; therefore, if one wishes to greet his rabbi, he may say, "Shalom to you, my Rabbi"; or, if one is having an extended conversation with his teacher and a younger person wishes to correct his teacher or some older person, the younger person can say to the older person, "But did you not teach us such-and-such?" (again, in the second person, without using strong and harsh words of renunciation).

Intimations and facial expressions

The colloquy used by religious Jews in their everyday speech is rich in various body gestures and nuances, each with an intimation and meaning of its own. Some of them are very old, and mention of them appears in early literary sources: Taking hold of another's right earlobe and flexing it inwards. This movement, initiated by an older man towards someone younger, hints at the threat of punishment, and is most commonly found in relations between father and son, hinting at the punishment awaiting him by pinching his ear. The basis for this hand gesture can be traced to the words of Israel's Sages, in Tractate Semaḥot.
After a child had gone off and committed suicide because of his fear of being punished, it was declared: "Let no man show to a small child [his displeasure] by holding his ear, but rather spank him immediately, or else let him remain silent, and not say anything to him." Rabbeinu Asher explained: "His father frightened him, by threatening that he would not go unpunished for his mishap, in that he took hold of his ear, in the same way that they beat small children and drag them by their ears." The story, as related there, tells of how two fathers had shown their displeasure towards their sons by holding on to their ears. One had broken a vial on the Sabbath and the other ran away from school. Both boys, being frightened by what they had done, went off and committed suicide by falling into cisterns. Among Yemenite Jews, the practice of bending a child's ear was still prevalent as late as the 20th century, and was called by them in the Yemenite-Arabic dialect "chabzeh." Resting the cheeks on the palm of the hand symbolized mourning and sorrow. A person who makes this gesture without realizing it on the Sabbath day or a holiday is scorned by his onlookers. A mention of this gesture is found in the lamentation Ašer tešeḇ, composed by Solomon ibn Gabirol. In the lament's introduction, the poet describes a young woman who grieves heavily over the parting of her loved one, as if she had been bereaved of her firstborn son. "She then sighed and put a hand to her cheek, and she was bitter, as those who weep bitterly for their firstborn."

Use of honorifics

(מורי ורבי = morī we-rebbī), literally meaning, "My lord and my rabbi," the honorific titles commonly given to a rabbi when addressing him. The order may also be reversed, rebbī we-morī. One uses such honorifics when talking directly to one's interlocutor, or even when referring to an unrelated third party in speech.
The word (אֲדוֹנִי = adhonī) is used as an honorific title to show respect to one's elders; literally meaning "my lord" (in the lower case), it was often used when addressing one's grandfather, meaning either "my grandfather" or, in some cases, "my [maternal] uncle". (דוד = dod), often used as a title of respect for any elderly man unrelated to oneself; literally meaning one's "paternal uncle," and used for the same. (דודה = dodah), same as above, meaning "auntie," used as a title of respect for any elderly woman unrelated to oneself. (תלמיד = talmīd), a title which, in Yemen, was applied to any older person who took upon himself to study the more arcane religious topics of ritual slaughter, etc. Today, the word in Modern Hebrew has come to mean any pupil, or "student," even small children.

Common respect for parents, teachers and elders

A child does not sit in a special chair or seat reserved for his father (an exercise of filial piety). This applies also to a student: a student will never sit in any seat used strictly by his teacher, even jokingly, as it is seen as a show of disrespect for either one's parent or teacher. When an elderly man enters a house or a room, all those present who are younger quickly greet him by standing up on their feet (had they been sitting), until he passes them or sits down in his place. If the man sitting were actually older than the person entering, it is not necessary that he stand up, but often he will nod as a show of respect.

Conduct in places of worship

The Jewish custom of old was to take off one's shoes immediately prior to entering the synagogue, a custom that has only recently disappeared among some communities following their immigration to the land of Israel after 1948.
When a man rises up to the reading dais on the Sabbath day, to read from the Torah scroll during the weekly lection, all of his sons, grandchildren and younger brothers in the synagogue remain standing upon their feet, each man in his place, until the reader completes his appointed reading. When he begins to say the final blessing after reading the parasha, they all sit down again. If the person going up to read from the Torah was a talmid hacham (disciple of the Sages), his sons-in-law would also remain standing until he concludes his reading.

Table etiquette

Although tables were not in common use in ancient Israel, as most families gathered to eat while reclining on the floor and eating from a common dish or bowl, table manners were nevertheless observed. The Shulchan Arukh codifies the requirement to wash one's hands with water before eating bread. If the supper was a ceremonial meal with many gathered to eat together, the host who serves them will make rounds, going from guest to guest with a water jug, basin and hand towel, starting with the most distinguished of the guests, so that each can wash his hands without the necessity of having to get up from his place. The common response said after one has been given water to wash his hands is (יעבדוך עמים = yaʻaḇdūkha ʻamīm), "May the peoples serve you," to which the one who administered the water will answer, (ישתחוו לך לאומים = yištaḥawwū lekha leʾūmīm), "May the nations bow down before you." Observant Jews, while eating, will keep communication around the table to a bare minimum, almost maintaining complete silence, in keeping with the rabbinic dictum, "The hour of eating is an hour of warfare," explained as "lest the windpipe precede the esophagus" (i.e. the intake of food is inadvertently channeled down the windpipe, instead of the gullet, and he chokes thereby). People who converse while eating are more prone to have this happen to them.
Flatbread, pita bread, bread rolls, slices of bread, etc., no matter how thin, are not held up and placed into the mouth for eating, nor chomped upon. Rather, the refined person, while sitting at his supper, will break off a small portion of that slice of bread with his hand, sufficient to be consumed at one time in his mouth, and only then will he proceed to eat it. Unlike the custom of Ashkenaz, whose practice is to cut the Sabbath Challah with a knife, the Yemenite Jewish custom strictly avoids laying a knife to bread; instead, the bread is broken by hand. Those reclining to eat food together are always calm and relaxed, the proper dining etiquette being to chew one's food slowly and in a prolonged manner. Because of this, the quantities of food they consumed were much smaller than what is currently accepted in the Western world. (Formerly, the entire Jewish family would lounge together around a low-lying table, eating usually from a common dish. In Yemenite Jewish dining etiquette, cutlery was not used at the dinner table; each man ate with his fingers and a sop. For this reason, diners took extra care to ensure that their fingernails were cropped and hands clean.) Diners are careful (for aesthetic reasons), when eating and dipping their sop into a bowl of soup, that the tips of their fingers do not touch the soup itself. When meat is served at the table and is placed in a common dish, no person puts forth his hand to take a portion of the meat until the host or master of the house has first done so, and this, too, is done by him only near the conclusion of the meal. If a man were being poured a drink, it is customary for the person being served to say to the one serving him, (ברוך מי שהכוס מידו = Borūkh mī šǝ-hakōs mi-yadō), meaning, "Blessed is he from whose hand is the cup." The response by the one who pours the drink is (ברוך שותהו = Borūkh šothehū), meaning, "Blessed is he that drinks it."
If there was a large piece of meat set at the table, no person will take it up and bite a piece from it. Rather, he that takes it will slice away a smaller portion with a knife, or else break a piece off with his hand. A rule of practice is never to eat or drink while standing. In Yemenite Jewish culture this is reinforced with the dictum: "No one drinks while standing, except the donkey." (תזכו לחיים טובים = tizkū le-ḥayyim toḇīm), literally, "May you be merited with a good life," is said whenever a person enters a house and finds his hosts seated and drinking arrack or other alcoholic beverages (whether on a Sabbath day or a weekday). To him that belches after a meal, they say: (יהנה בטוב = "Enjoy the good"). The same blessing is also used instead of "Bon appétit!", or what has now been replaced in Modern Hebrew with: Be-te'aḇon (בתיאבון). It is considered uncouth to eat in public places, such as in the marketplace; rather, one eats only in the confines of his own house or in the house of his host. Those who paid little regard to this rabbinic stricture and who would eat unabashedly in the marketplace were held incompetent to bear witness in a Jewish court of law, since such people were generally seen as shameless. (Modern-day inns, hotels and restaurants are generally thought not to be under such strictures.) Common courtesy after one's meal is to include the Birkat ha-Oreaḥ (Heb. ברכת האורח) in the Grace said over the meal. The common expressions used for showing one's gratitude to the host are to say either (תזכו = tizkū), meaning "Be merited," or (ברוכים תהיו = berūkhīm tehiyū), meaning, "May you be blessed," or (לעולם תחיו = leʿolam teḥiyū), meaning "May you live forever."

Inviting guests

In ordinary Jewish law, it is considered bad manners if a guest who is invited to dine with a person invites another guest: a guest inviting a guest.
The homeowner is entitled to invite as many guests as he pleases, but a guest should never invite another guest, as the host will, in most cases, consider this an imposition.

Personal hygiene and conduct in the toilet

After relieving oneself in the toilet, it is a well-known Jewish custom to use one's left hand when wiping oneself, and even this is done with water. The reason is that the right hand is used for writing the Torah, while water is known to thoroughly cleanse the place. Jewish men traditionally urinated in a sitting position. This may have been because Jewish men in Yemen traditionally wore tunics and long, dress-like vestments, and because of the impracticality of urinating while standing without revealing one's buttocks and privy place. The etiquette of sitting while urinating was reinforced with a local dictum: "No one urinates while standing, except the donkey." The old Jewish practice of sitting while urinating is also alluded to in the Babylonian Talmud (Berakhot 40a). Today, this old etiquette is nearly obsolete because of the western-style trousers with zippers that are worn by Jewish men. In Jewish orthodox law, for reasons of modesty, whenever a person uses the toilet facility, or bathes himself in a public bath, he does not engage in conversation on matters related to Torah with any person, whether those sitting in the water-closet (toilet) with him, or those waiting outside. (In former times, when outhouses were removed at a distance from the house, such as in an open field, young women and girls would go out accompanied by one of their female companions for reasons of personal safety, such as the prevention of mishaps, the one waiting outside the door of the outhouse while the other relieved herself, the two conversing with each other all the while, until they returned to their respective places.)
One who undresses in the public bath-house covers up his naked body with a sheet or a towel while he is in the dressing chamber, until he reaches the actual place of bathing. So, too, after he concludes his bath, he covers his nakedness with his sheet or towel until he reaches the dressing chamber, where he puts his clothes back on. Jewish women who are married will shave their pubic hair, including the hair beneath their armpits.

Dress codes

Clothing, as anchored in Jewish law, is often a sign of one's identity, and plays an important role in preserving a social hierarchy, as well as in distinguishing between religious groups, age, gender, and more. For women and girls, in particular, it has the additional function of instilling in them the rule of discipline and the principle of restraint, of modesty and submission to authority. Every Jewish man or boy dons a hat (not necessarily a brimmed hat), or else a skull-cap (yarmulke), at all times, except when bathing or sleeping. This is done to show his humility towards heaven. In Arab lands, the Jewish custom was for unmarried men and boys to wear a large felt-like hat without a brim, which covered the greater part of their head. The majority of Israel made it an obligation, rather than a "measure of piety," to wear hats or kippot at all times. When a man married, he also wore a habit (now obsolete). In Jewish etiquette, Jewish women will not wear any predominantly red-colored accoutrement, as it attracts undue attention to themselves. The same was the rule of practice in most places throughout Yemen. Modesty was the guiding factor here, so that a woman would not make herself conspicuous to others.

Interpersonal relations

In the language register employed by the Jews of Yemen there are preserved ancient linguistic patterns, especially in the field of blessings and good wishes.
These expressions are mostly in Hebrew, since the well-wishers hoped to add some degree of sanctity to their words, for which cause they drew such words from the vocabulary of their ancestors and repeated them in the holy tongue. The most ancient of these can be found in the corpus of Midrashic literature, while the most recent date back to the period of the Middle Ages and to the Cairo Geniza fragments. The language of thanks in Yemenite Jewish communities has not come down to us in the form of "thank you" (Modern Hebrew: תודה), or "thank you very much," but rather as "may you be blessed" (ברוך תהיה = borūkh tehiyeh), or "may he be blessed," or "may they be blessed." Such expressions are used in the writings of the Geonim. The Gaon, Rabbi Samuel ben ʿAli, in one of his letters, says about those communities who lend support to the Babylonian academies, "And concerning those communities, may they be blessed." The rabbi and ethnographer Jacob Sapir (1822–1886), who visited Yemen's Jewish community in 1859, noted certain expressions in widespread use among the Jews there, and recorded his impressions as follows: "They are very well accustomed, whenever a man tells his friend [about] his troubles or his aspirations, to reply back unto him in consolation (אהיה אשר אהיה = Eheyeh Asher Eheyeh), literally meaning, "I shall be what I shall be", an allusion to God's ability to effect change, or (אל שדי = El Shaddai), meaning, "God Almighty", while this [expression] does not cease from their mouths... Over every speech or statement made, they will say, (ברוך תהיה = Borūkh tehiyeh!), meaning, "May you be blessed", or the phrase (ברוך אתה לאדוני = Borūkh attah laadhonai), meaning, "Blessed are you unto God", and they are not scrupulous about [infringing upon the commandment that says], Thou shalt not take the Lord's name in vain."
While Sapir thought it inappropriate or excessive conduct to mention God's name or one of his attributes in greeting, the Yemenite Jews held that the practice was completely valid, based on a teaching in the Talmud (Berakhoth 54a), which says: "They made it an enactment that a man greet his neighbor by employing God's name, etc." The expression (בָּרוּךְ תִּהְיֶה = Borūkh tehiyeh), or in the plural form (ברוכים תהיו = Berūkhīm tehiyū), was often said when leaving someone's house in the daytime, or after listening to a certain statement made by one's friend or friends. The reply given in return was: (אתה ברוך אדוני = Attah borūkh adhonai), meaning, "You are blessed of God". Another common expression used when leaving one's neighbor's house is to say, (ואתה ברוך = We-attah borūkh). Whenever a person sought forgiveness, the Yemenite custom was not to say, "I'm sorry" (Modern Hebrew: סליחה), but rather, "I beg your forgiveness" (מחילה = meḥīlah), to which plea the reply was given, "you are forgiven" (בִּמְחִיל = bimḥīl), or (אתה במחיל = attah bimḥīl). This word במחיל, whose form looks strange to the Hebrew reader, is in fact found in the Genizah manuscripts. Shelomo Dov Goitein, a researcher of both Yemenite Jewry and the Cairo Geniza manuscripts, has already made mention of it. Following his examination of these manuscripts, he reached the conclusion that many of the linguistic forms common to the Yemenite Jews can be found in the Geniza fragments. (ברוך הבא = Borūkh haba), the traditional words said when welcoming a person into one's house, literally meaning, "Blessed is he that comes" (Welcome!). The response by the guest is traditionally (ברוך הנמצא = Borūkh hannimṣa), "Blessed is he that is present." Whenever a person takes leave of his friend at night (such as when he retires to sleep, or leaves his neighbor's house at night), the host says to his departing friend or guest, (תלין בטוב = Talīn bǝṭoḇ) (sing.)
or if there were two or more persons, (תלינו בטוב = Talīnū bǝṭoḇ) (plural), literally meaning "rest well." To this, the departing guest replies to his host, (תקיץ ברחמים = Takīṣ bǝraḥamīm) (sing.), meaning, "may you rise in mercy." If the hosts were more than one person, the guest would answer, (תקיצו ברחמים = Takīṣū bǝraḥamīm) (pl.). (תזכו = Tizkū), literally, "May you be merited," the common blessing said after a man has been shown an act of kindness, or has heard of a man's good deeds. If a man departed another's house during the day, the one leaving would say to his host (שלום עליכם = Shalom ʻaleikhem), "May peace be upon you!" The response given in return by one's host is (לֵך לשלום = Lekh le-shalom), literally, "Go in peace." If a poor man or beggar came to a man's house asking alms, and the owner of the house had nothing to give him, he would not say, "I do not have anything to give you," but rather would say: (אדוני יתן לך = Adhonai yitten lekha), meaning, "May God provide for you". In public events, Jewish men and women, including boys and girls, sat in separate company; the sexes did not mingle together, as a display of modesty. Even in houses of merriment, women sat separately from the men. In Yemen, it was considered a "misconduct of social norms" for Jewish men and women to sing together and to dance together. However, in the confines of a man's house, where the proprietor of the house sat at the dinner table with his wife and children on the Sabbath day and holidays, they were permitted to sing hymns and para-liturgical songs together.

Writing etiquette

Formal writing is a prominent feature of early Jewish letters of communication, in which the opening lines are usually styled in a rhymed, flowery speech, one that usually praises the recipient.
A few of the more common forms of rhymed addresses in a letter's opening are as follows: (A principal person's greeting): "An abundance of peace, even a thousand-fold and ten thousand-fold, from He that dwells in the heavens; may they reach and come before my beloved, he that is the delight of mine eye, like the valleys of brooks that are spread out, even unto him that is near to my heart, but far from mine eye, [he that is like] an ornament of grace upon my neck; he that is of a good name, who is like unto a green tree, God will also provide for him what is good, even our honorable and dear [so-and-so], etc." "May your peace be multiplied always" (שלמכון יסגא לעלמין = šelomkhōn yisğei le-ʻolǝmīn), a form of formal address often used in letter writing, written in Aramaic. The phrase is often abbreviated in letter-writing, שי"ל. Between the formal opening lines of the letter and its main content, there is an intermediate statement found in most letters of etiquette, namely, "After having sought your peace and well-being" (אחרי דרישת שלומך וטובתך = aḥǝrei dǝrišath šǝlomkha we-ṭoḇathkha), often abbreviated אחדש"ו. Those who wrote letters in Yemen, whenever they came to express their longing for the recipient of the letter, made note of it with the stock phrase, "I lack naught except to see your dear face." This style of language is a legacy from the Middle Ages. Rabbi Meir Abulafia (13th century) writes in a letter to Rabbi Yehuda b. Mattithiah: "And I have nothing new to inform you; the king's daughter is all glorious within, her clothing is of wrought gold; there is naught that she lacks, except to see your face." Some see this as authentic Jewish mannerisms of speech preserved by the Jews of Yemen, a manner of speech also discovered in letters of communication found in the Cairo Geniza. This, too, is a testament to the antiquity underlying their culture.
Valedictions and terms of contrition Such expressions, mostly used as valedictions in letters of communication before signing one's name, are common with the Jewish nation. The idea behind such words is to show humility, and to always bear in mind the rabbinic admonition: "Be exceedingly lowly in spirit." The most typical of these expressions are as follows: (הצעיר = ha-ṣaʿīr), meaning "The Younger", written before signing one's name; (סְיָן טִין = siyan ṭīn), Aramaic for "He that is but mire and clay", often only abbreviated in letters (ס"ט). The expression is an allusion to Jonathan ben Uzziel's Aramaic translation of Isaiah 57:20, and is usually written after signing one's name; (הקטן = ha-qaṭan), or sometimes (הקל = ha-qal), meaning, "He that is least", written before signing one's name; (שפל מאד = šǝfal mǝʾod), meaning, "A man of very low stature." The common practice is to sign one's name, "so-and-so" the son of "so-and-so." Occasionally, the signatories will make use of the abbreviated expression, (יצ"ו = yišmǝro ṣūro wiyoṣǝro), meaning, "May his Rock and Creator preserve him," instead of the typical ending, "He that is but mire and clay." On other occasions, especially in court documents (e.g. title-deeds), one's deceased father's name is signed with the addition of (יש"ל = yǝḥī šǝmo le-ʻolam), meaning, "May his name live forever." When speaking to others about one's own accomplishments, one does not say of himself, "I did such-and-such a thing," but will say rather, "We did such-and-such a thing," or "We wrote such-and-such a thing," or "We gave orders that such-and-such a thing be done," or "We spoke to so-and-so," so as not to draw undue attention to one's own self, nor to make himself appear to be condescending. The Evil eye Avoiding the effects of the "evil eye" was part and parcel of Jewish etiquette.
The superstitious belief in the effects of the "evil eye" was so pervasive in many Jewish cultures that they would say of a beautiful maiden that she was (בְּלָאָה = belo'oh), literally meaning "rag," rather than say she was a beauty, so that she would not be ill-affected by the evil eye. The word used here is Arabic, equivalent to the Hebrew סְחָבָה. (The same idea is used in the Scripture when referring to Moses having taken an "Ethiopian woman," whom Rashi in his commentary on Numbers 12:1 says, by way of an exegesis, was actually a very beautiful woman.) See also Etiquette in the Middle East Mussar literature Norm (social) Shame society Torah im Derech Eretz Yetzer hara Notes References Bibliography (reprinted from 1922 and 1938 editions of the Hebrew Publishing Co., New York) External links Minor Tractate: Derech Eretz Rabbah Wisdom literature Jewish law and rituals Minhagim Etiquette Habits Popular culture Social concepts Jews and Judaism Hebrew language Jewish ethics Etiquette by region
Jewish customs of etiquette
Biology
9,850
1,356,350
https://en.wikipedia.org/wiki/Keepalive
A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent the link from being broken. Description Once a TCP connection has been established, that connection is defined to be valid until one side closes it. In principle, once the connection has entered the connected state, it remains connected indefinitely; in reality, it does not. Many firewall or NAT systems will close a connection if there has been no activity in some time period. A keepalive signal can be used to persuade such intermediate hosts not to close the connection due to inactivity. It is also possible that one host is no longer listening (e.g. after an application or system crash). In that case the connection is effectively dead, but no FIN was ever sent; a keepalive packet can then be used to probe the connection and check whether it is still intact. A keepalive signal is often sent at predefined intervals, and plays an important role on the Internet. After a signal is sent, if no reply is received, the link is assumed to be down and future data will be routed via another path until the link is up again. A keepalive signal can also be used to indicate to Internet infrastructure that the connection should be preserved. Without a keepalive signal, intermediate NAT-enabled routers can drop the connection after a timeout. Since the only purpose is to find links that do not work or to indicate connections that should be preserved, keepalive messages tend to be short and not take much bandwidth. However, their precise format and usage terms depend on the communication protocol. TCP keepalive Transmission Control Protocol (TCP) keepalives are an optional feature, and if included must default to off. The keepalive packet contains no data. In an Ethernet network, this results in frames of minimum size (64 bytes).
There are three parameters related to keepalive: Keepalive time is the duration between two keepalive transmissions in idle condition. The TCP keepalive period is required to be configurable and by default must be set to no less than 2 hours. Keepalive interval is the duration between two successive keepalive retransmissions, if acknowledgement of the previous keepalive transmission is not received. Keepalive retry is the number of retransmissions to be carried out before declaring that the remote end is not available. When two hosts are connected over a network via TCP/IP, TCP keepalive packets can be used to determine if the connection is still valid, and terminate it if needed. Most hosts that support TCP also support TCP keepalive. Each host (or peer) periodically sends a TCP packet to its peer which solicits a response. If a certain number of keepalives are sent and no response (ACK) is received, the sending host will terminate the connection from its end. If a connection has been terminated due to a TCP keepalive timeout and the other host eventually sends a packet for the old connection, the host that terminated the connection will send a packet with the RST flag set to signal the other host that the old connection is no longer active. This will force the other host to terminate its end of the connection so a new connection can be established. Typically, TCP keepalives are sent every 45 or 60 seconds on an idle TCP connection, and the connection is dropped after 3 sequential ACKs are missed. This varies by host; e.g. by default, Windows PCs send the first TCP keepalive packet after 7,200,000 ms (2 hours), then send 5 keepalives at 1,000 ms intervals, dropping the connection if there is no response to any of the keepalive packets.
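As an illustration of how the three parameters above map onto the sockets API, a minimal sketch in Python (the TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT option names are Linux-specific, and the values chosen here are arbitrary examples, not system defaults):

```python
import socket

# Minimal sketch (Linux-specific option names; other platforms expose
# different names, if any). Values are illustrative, not defaults.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Enable keepalive probing on this socket.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Keepalive time: seconds of idleness before the first probe is sent.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)

# Keepalive interval: seconds between successive unacknowledged probes.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)

# Keepalive retry: unanswered probes before the connection is declared dead.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
```

With these example settings, an idle connection whose peer has vanished would be declared dead after roughly 60 + 10 × 5 = 110 seconds.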
Linux hosts send the first TCP keepalive packet after 2 hours (default since Linux 2.2), then send 9 keepalive probes (default since Linux 2.2) at 75-second intervals (default since Linux 2.4), dropping the connection if there is no response to any of the keepalive packets. Keepalive on higher layers Since TCP keepalive is optional, various protocols (e.g. SMB and TLS) implement their own keepalive feature on top of TCP. It is also common for protocols which maintain a session over a connectionless protocol, e.g. OpenVPN over UDP, to implement their own keepalive. Other uses HTTP keepalive The Hypertext Transfer Protocol uses the keyword "Keep-Alive" in the "Connection" header to signal that the connection should be kept open for further messages (this is the default in HTTP 1.1, but in HTTP 1.0 the default was to use a new connection for each request/reply pair). Despite the similar name, this function is entirely unrelated to TCP keepalive. See also Switch Watchdog timer Hole punching UDP hole punching Warrant canary Ping test References Computer networking
Keepalive
Technology,Engineering
1,038
70,747,050
https://en.wikipedia.org/wiki/Autosomal%20dominant%20cerebellar%20ataxia%2C%20deafness%2C%20and%20narcolepsy
Autosomal dominant cerebellar ataxia, deafness, and narcolepsy (ADCADN) is a rare progressive genetic disorder that primarily affects the nervous system and is characterized by sensorineural hearing loss, narcolepsy with cataplexy, and dementia later in life. People with this disorder usually start showing symptoms in early to mid adulthood. It is a type of autosomal dominant cerebellar ataxia. Presentation Usually, people with this disorder have ataxia, mild–moderate sensorineural hearing loss, narcolepsy, and cataplexy. These symptoms typically begin when an affected person is about 30 years old. Later in life, people with ADCADN start showing a decline in executive function, progressing to dementia. Degeneration of the optic nerves, cataracts, sensory neuropathy, lymphedema of the arms and legs, urinary incontinence, depression, uncontrollable and inappropriate laughing or crying (e.g. sudden uncontrollable laughing during a funeral), and psychosis are features that typically accompany it. People with this disorder usually live only to 40–50 years of age. Other features of the disorder that may or may not occur in all patients include diabetes mellitus, spasticity, nystagmus, tremors, dilatation of the right ventricle, cerebral atrophy, and other generalized brain abnormalities. Complications Genetics This condition is caused by mutations in exons 20–21 of the DNMT1 gene, located on chromosome 19. These mutations are inherited in an autosomal dominant manner, meaning that a single copy of the mutation is sufficient for someone to show symptoms of the condition. The mutation can be inherited from a parent or can arise as a spontaneous (de novo) error. This gene plays a role in the production of an enzyme called DNA methyltransferase 1, which is involved in DNA methylation. This enzyme is essential for the regulation of neuron maturation, differentiation, migration, and most importantly, survival.
The mutations involved in ADCADN alter a region of the enzyme produced by the gene that participates in DNA methylation, disrupting that process and thereby affecting the expression of various genes. This also disrupts neuron maintenance, leading to the characteristic psychiatric and cognitive symptoms of this condition. Diagnosis This condition can be diagnosed by using methods such as whole exome sequencing and examination of the patient's symptoms. Treatment Prevalence More than 80 cases from families around the world have been described in the medical literature. The following list comprises all countries of origin (according to OrphaNet): Sweden United States Italy Brazil China New Zealand Belgium United Kingdom Canada Germany Taiwan History This condition was first described in 1995 by Melberg et al., who reported 5 members of a 4-generation Swedish family in which cerebellar ataxia and sensorineural deafness presented as an autosomal dominant trait; 4 of them had narcolepsy and 2 had diabetes mellitus. The oldest members had psychiatric symptoms, neurological anomalies, and optic atrophy, showing the progressive nature of the condition. References Genetic diseases and disorders Nervous system Hearing loss Narcolepsy Dementia
Autosomal dominant cerebellar ataxia, deafness, and narcolepsy
Biology
678
11,245,168
https://en.wikipedia.org/wiki/Allan%20Rechtschaffen
Allan Rechtschaffen (December 8, 1927 – November 29, 2021) was a noted pioneer in the field of sleep research whose work includes some of the first laboratory studies of insomnia, narcolepsy, sleep apnea, and napping. He received his PhD from Northwestern University in 1956. He researched the effects of exercise, mental work, stimulation, stress, and metabolism on sleep, as well as the effects of sleep deprivation. He also looked at sleep in reptiles and rats. Dr. Rechtschaffen and Gerry Vogel, working with colleagues at Mt. Sinai Hospital in New York including Dr. William Dement, described narcolepsy—the first scientifically demonstrated sleep disorder—in a landmark paper in 1963. Dr. Rechtschaffen went on to perform experiments in rats that demonstrated the lethal consequences of long-term (two weeks or more) sleep deprivation and REM sleep deprivation. He worked with Anthony Kales in developing the criteria, still used by sleep laboratories, for scoring and reporting human sleep stage data. The system is commonly called R&K or Rechtschaffen and Kales, named after its key developers. R&K was used from 1968 to 2007, when The AASM Manual for the Scoring of Sleep and Associated Events was published by the American Academy of Sleep Medicine (AASM). At the time of his death, Rechtschaffen, who was born in the Bronx, was Professor Emeritus in the Department of Psychiatry and Psychology at the University of Chicago. His family name means "upright" in German. He is the uncle of the author Daniel Mendelsohn and filmmaker Eric Mendelsohn. References A manual of standardized terminology, techniques and scoring system for sleep stages of human subjects, edited by Allan Rechtschaffen and Anthony Kales, National Institutes of Health, Publication no. 204, Neurological Information Network (1968) External links The Secret of Sleep 1927 births 2021 deaths Sleep researchers University of Chicago faculty
Allan Rechtschaffen
Biology
405
615,421
https://en.wikipedia.org/wiki/Lombard%20rhythm
The Lombard rhythm or Scotch snap is a syncopated musical rhythm in which a short, accented note is followed by a longer one. This reverses the pattern normally associated with dotted notes or notes inégales, in which the longer value precedes the shorter. In Baroque music, a Lombard rhythm consists of a stressed sixteenth note, or semiquaver, followed by a dotted eighth note, or dotted quaver. Baroque composers often employed these rhythms. For instance, Johann Georg Pisendel used Lombard rhythms in the largo and allegro sections of his Sonata for Violin Solo in A Minor, and Carl Philipp Emanuel Bach included dotted rhythms in certain excerpts of his concerto for flute, cello, and keyboard. Baroque performers and composers such as Johann Joachim Quantz introduced these uneven rhythms into their studies and pedagogy, and similar uneven rhythms are also of the very essence of the jazz style. In Scottish country dances, the Scotch snap (or Scots snap) is a prominent feature of the strathspey. Through the immigration of Scots to Appalachia, elements of Scottish music such as the Lombard rhythm found their way into popular music forms of the 20th and 21st centuries. In modern North American pop and rap music, the Lombard rhythm is very common; recent releases by Post Malone, Cardi B, and Ariana Grande feature the Scotch snap. Grande's song "7 Rings" was the subject of controversy surrounding this rhythm: several hip-hop artists (Princess Nokia and Soulja Boy) who had used the rhythm in an iconic fashion accused her of plagiarism. References Babitz, Sol. "A Problem of Rhythm in Baroque Music." The Musical Quarterly 38, no. 4 (October 1952): 533–565. https://www.jstor.org/stable/740138 Fuller, David. "Notes inégales (Fr.: 'unequal notes')," Grove Music Online (January 2001) https://doi.org/10.1093/gmo/9781561592630.article.20126 Gábor, Elod and Ignác-Csaba FILIP.
“Johann Georg Pisendel: Sonata for Violin Solo in A Minor.” Series VIII: Performing Arts 12, no. 61 (2019): pp. 72–76. https://doi.org/10.31926/but.pa.2019.12.61.30 Miller, Leta. “C.P.E. Bach’s Instrumental ‘Recompositions’: Revisions or Alternatives?” Current Musicology 59, (1995) p. 29. Further reading Baroque music Rhythm and meter Scottish country dance Scottish fiddling Scottish folk music
Lombard rhythm
Physics
566
49,778,963
https://en.wikipedia.org/wiki/National%20Tile%20Contractors%20Association
The National Tile Contractors Association (NTCA) is a nonprofit trade association dedicated to the professional installation of ceramic tile and natural stone, established in 1947. The NTCA aims to improve the industry through education and training, and through participation in the development of standards and methods. The current president of the association is Martin Howard. In February 2016, Bart Bettiga, executive director of the NTCA, officially announced that the association had joined forces with the Contractors Association of America (CCA) Global Partners to bring education and technical expertise to its members. NTCA University The NTCA University is an online training and learning system developed to create awareness of industry standards, improve product knowledge, and train tile installers. Programs available and in development include a contractor apprenticeship and continuing education units (CEUs) that unite sales and product training with business education. Certification Program The Certified Tile Installer Evaluation is a certification program introduced by the NTCA in alliance with the Ceramic Tile Education Foundation (CTEF) and the Tile Council of North America (TCNA). The program was developed in response to the lack of a mechanism for consumers to know the level of proficiency of prospective installers. The evaluation itself is a validation of the skills and knowledge of tile installers which includes a multiple-choice exam and a hands-on test. Both are based on current industry standards and best practices for producing professional installations that exhibit good workmanship. Between its inception in 2008 and 2014, the CTI Evaluation certified over 1,000 tile installers, with the aim of reaching 1,500 by the end of 2014. Board of directors The NTCA Board of Directors is elected by members at their annual meeting.
It is composed of the Executive Officers of the Association, representatives of 12 regions, and members from distribution and tile and installation material manufacturers. Terms for the Board of Directors are for two years. Committees References External links Official Website Stoneware Tile Tiling Construction organizations Organizations established in 1947 Trade associations based in the United States
National Tile Contractors Association
Engineering
395
432,624
https://en.wikipedia.org/wiki/U-duality
In physics, U-duality (short for unified duality) is a symmetry of string theory or M-theory combining S-duality and T-duality transformations. The term is most often met in the context of the "U-duality (symmetry) group" of M-theory as defined on a particular background space (topological manifold). This is the union of all the S-dualities and T-dualities available in that topology. In its narrow sense, "U-duality" refers to a duality that can be classified neither as an S-duality nor as a T-duality: for example, a transformation that exchanges a large geometry of one theory for the strong coupling of another theory. References String theory
U-duality
Astronomy
158
41,823
https://en.wikipedia.org/wiki/Tropospheric%20wave
In telecommunications, a tropospheric wave is a radio wave that travels via reflection in the troposphere. Tropospheric waves are propagated by reflection from a place of abrupt change in the dielectric constant, or its gradient, in the troposphere. In some cases, a ground wave may be so altered that new components appear to arise from reflection in regions of rapidly changing dielectric constant. When these components are distinguishable from the other components, they are called "tropospheric waves." References Radio frequency propagation
Tropospheric wave
Physics,Materials_science
109
39,094,287
https://en.wikipedia.org/wiki/Stimulus%E2%80%93response%20compatibility
Stimulus–response (S–R) compatibility is the degree to which a person's perception of the world is compatible with the required action. S–R compatibility has been described as the "naturalness" of the association between a stimulus and its response, such as a left-oriented stimulus requiring a response from the left side of the body. A high level of S–R compatibility is typically associated with a shorter reaction time, whereas a low level of S–R compatibility tends to result in a longer reaction time, a phenomenon known as the Simon effect. The term "stimulus–response compatibility" was coined by Arnold Small in a presentation in 1951. Determinants of reaction time Visual location S–R compatibility can be seen in the variation in the amount of time taken to respond to a visual stimulus, given the similarity of the event that prompts the action, and the action itself. For example, a visual stimulus in the left of a person's field of vision is more compatible with a response involving the left hand than with a response involving the right hand. Evidence In 1953, Paul Fitts and C. M. Seeger ran the first experiment conclusively demonstrating that certain responses are more compatible with certain stimuli, in which subjects were instructed to press buttons on their left or right in response to lights which could appear in either the left or right corner of their field of vision. The study found that subjects took longer when the stimulus and response were incompatible. This was not in and of itself evidence for a relationship between S–R compatibility and reaction time; an alternate hypothesis posited that the delay was simply the result of the sensory information taking longer to reach neural processing centers when hemispheres are crossed.
This alternate hypothesis was disproven by a follow-up trial in which Fitts and Seeger had subjects cross their arms, so that the left hand would press the right button and vice versa; the difference between reaction times of subjects in the standard and crossed-arms trials was statistically insignificant, even though the neural signal traveled a greater distance. Refinements and improvements The reverse scenario was tested in a 1954 experiment by Richard L. Deninger and Paul Fitts, in which it was demonstrated that subjects responded more quickly when the stimulus and response were compatible. Solid evidence that S-R compatibility impacted the response planning phase was not found until 1995, when Bernhard Hommel demonstrated that modifying stimuli in ways unrelated to S-R compatibility, such as the size of the objects on the computer screen, did not increase reaction time. Auditory location This phenomenon also applies to auditory stimuli. For example, hearing a tone in one ear prepares that side of the body to respond, and the reaction time will be longer if one is required to perform an action with the opposite side of the body as the side where the tone was heard, or vice versa. Evidence In 2000, T. E. Roswarski and Robert Proctor conducted a variation of the original Fitts and Seeger experiment involving auditory tones in each ear instead of lights. The experiment showed that the reaction time for auditory signals is also influenced by S-R compatibility. Motion Another determinant of S-R compatibility is the destination of a moving stimulus. For example, an object moving towards the right hand is more compatible with a right-hand response than an object moving towards the left hand, even if the object is closer to the left hand when the stimulus is perceived. Evidence An experiment by Claire Michaels in 1988 demonstrated the role of motion in determining S–R compatibility. 
In this experiment, subjects were presented with a computer display with their hands extended, and a square on the screen would appear at some random location and move towards either the right or left hand. Choice reaction time was faster when subjects responded with the same hand the square was moving towards. Reaction time was even shorter when the square started in the middle of the screen than when it started close to the destination hand, showing that reaction time was affected more by the destination of the square than by its current location relative to the hand. Affordance Also important to S–R compatibility is the type of stimulus; familiar objects tend to invite specific responses. As one example, if an object is perceived as more easily (or more typically) manipulable with one hand than the other, any response requiring use of the other hand will tend to have a long reaction time. Evidence In 1998, Mike Tucker and Rob Ellis conducted an experiment at the University of Plymouth which expanded the concept of S–R compatibility to higher-order cognition. In their experiment, subjects were given two buttons, one on their left and one on their right, and shown a series of pictures of familiar objects like frying pans and teacups. For each image, they were asked to press the left button if the object in the image was upright and the right button if the object was inverted. However, the objects also varied in their rotation, such that the handles faced either left or right. The experiment revealed that seeing the handle pointing in one direction primed subjects to reach with the corresponding hand, which caused discrepancies in S–R compatibility that affected reaction time; for example, a subject seeing an inverted teapot with a handle pointing left took longer to press the button on the right than a subject who saw the same teapot pointing right. Expectations Prior knowledge and stereotyping play a role in S–R compatibility.
If a required response is inconsistent with a person's stereotyped knowledge of a stimulus and its "typical" reactions, even if the person is aware of the necessary response in the new situation, compatibility will be low. For example, light switches in the United Kingdom are "on" when toggled down, but light switches in the United States are "on" when toggled up; a native of one country visiting the other will demonstrate low S–R compatibility when turning the lights on or off. As another example, red lights are universally associated with "stop" and green with "go", and a reversed configuration will result in a longer reaction time. Applications S–R compatibility is an important consideration in the field of human–computer interaction, and in software engineering. Programs are easier and more intuitive to use when the input of the user and the output of the program are S–R-compatible. This is also an important consideration in the physical design of objects; for instance, an electrical appliance with an on/off switch will be most intuitive if it is designed to conform to cultural expectations. Additionally, principles of S–R compatibility are important considerations for psychology researchers; experiments may need to be controlled for the phenomenon. For example, behavioral neuroscience researchers should make sure that a task does not inadvertently vary along dimensions of S–R compatibility. See also Stroop effect Mental chronometry Hick's law Simon effect Priming Further reading Bächtold, Daniel, Martin Baumüller, & Peter Brugger. "Stimulus-response compatibility in representational space". Neuropsychologia, Volume 36, Issue 8, 1 August 1998, Pages 731–735 References External links Usabilityfirst.com Psycnet.apa.org Books.google.com Books.google.com Experimental psychology Human–computer interaction Cognitive science Cognitive psychology 1950s neologisms
Stimulus–response compatibility
Engineering,Biology
1,499
189,654
https://en.wikipedia.org/wiki/Patrick%20Moore
Sir Patrick Alfred Caldwell-Moore (4 March 1923 – 9 December 2012) was an English amateur astronomer who attained prominence in that field as a writer, researcher, radio commentator and television presenter. Moore's early interest in astronomy led him to join the British Astronomical Association at the age of 11. He served in the Royal Air Force during World War II and briefly taught before publishing his first book on lunar observation in 1953. Renowned for his expertise in Moon observation and the creation of the Caldwell catalogue, Moore authored more than seventy astronomy books. He presented BBC's The Sky at Night from 1957 until his death in 2012, making it the world's longest-running television series with the same original presenter. Idiosyncrasies such as his rapid diction and monocle made him a popular and instantly recognisable figure on British television. Moore was co-founder and president of the Society for Popular Astronomy. Outside his field of astronomy, Moore appeared in the video game television show GamesMaster. Moore was also a self-taught xylophonist and pianist, as well as an accomplished composer. He was an amateur cricketer, golfer and chess player. In addition to many popular science books, he wrote numerous works of fiction. He was an opponent of fox hunting, an outspoken critic of the European Union and a supporter of the UK Independence Party, and he served as chairman of the short-lived anti-immigration United Country Party. He was knighted in 2001. Early life Moore was born in Pinner, Middlesex, on 4 March 1923 to Capt. Charles Trachsel Caldwell-Moore MC (died 1947) and Gertrude (née White) (died 1981). His family moved to Bognor Regis, and subsequently to East Grinstead where he spent his childhood. His youth was marked by heart problems, which left him in poor health, and he was educated at home by private tutors. He developed an interest in astronomy at the age of six and joined the British Astronomical Association at the age of 11.
He was invited to run a small observatory in East Grinstead at the age of 14, after his mentor, William Sadler Franks – who ran the observatory – was killed in a road accident. At the age of 16, he began wearing a monocle after an oculist told him his right eye was weaker than his left. During World War II, Moore joined the Home Guard in East Grinstead, where his father had been elected platoon commander. Records show that he enlisted in the Royal Air Force Volunteer Reserve in December 1941 at age 18 and was not called up for service until July 1942 as an Aircraftman, 2nd Class. After basic training at various RAF bases in England, he went to Canada under the British Commonwealth Air Training Plan. He completed training at RAF Moncton in New Brunswick as a navigator and pilot. Returning to England in June 1944, he was commissioned as a pilot officer and was posted to RAF Millom in Cumberland, where he claimed to have been a navigator in the crew of a Vickers Wellington bomber, engaged in maritime patrolling and bombing missions to mainland Europe, though in fact he was still in training at Millom. He was only posted to Bomber Command five days before the end of the war in Europe. After the end of hostilities, Moore became an adjutant and then an Area Meteorological Officer, demobilising in October 1945 with the rank of flying officer. Career in astronomy After the war, Moore rejected a grant to study at the University of Cambridge, citing a wish to "stand on my own two feet". He wrote his first book, Guide to the Moon (later retitled Patrick Moore on the Moon) in 1952, and it was published a year later. He was a teacher in Woking and at Holmewood House School in Langton Green in Kent from 1945 to 1953. His second book was a translation of a work of French astronomer Gérard de Vaucouleurs (Moore spoke fluent French). 
After his second original science book, Guide to the Planets, he wrote his first work of fiction, The Master of the Moon, the first of numerous young adult fiction space adventure books (including the late 1970s series the Scott Saunders Space Adventure); he also wrote a more adult novel and a farce titled Ancient Lights, though he did not wish either to be published. Moore also translated the book Quanta by J Lochak and Andrade E Silva, published in 1969, from the French. While teaching at Holmewood, he set up a 12½ inch reflector telescope at his home, which he kept into his old age. He developed a particular interest in the far side of the Moon, a small part of which is visible from Earth as a result of the Moon's libration; the Moon was his specialist subject throughout his life. Moore described the short-lived glowing areas on the lunar surface and gave them the name transient lunar phenomena in 1968. His first television appearance was in a debate about the existence of flying saucers following a spate of reported sightings in the 1950s; Moore argued against Lord Dowding and other UFO proponents. He was invited to present a live astronomy programme and said the greatest difficulty was finding an appropriate theme tune; the opening of Jean Sibelius's Pelléas et Mélisande was chosen and used throughout the programme's existence. The programme was originally named Star Map before The Sky at Night was chosen in the Radio Times. On 24 April 1957, at 10:30 pm, Moore presented the first episode, about the Comet Arend–Roland. The programme was pitched at audiences ranging from casual viewers to professional astronomers, in a format which remained consistent from its inception. Moore presented every monthly episode except for one in July 2004, when he suffered a near-fatal bout of food poisoning caused by eating a contaminated goose egg and was replaced for that episode by Chris Lintott.
Moore appears in the Guinness World Records book as the world's longest-serving TV presenter, having presented the programme since 1957. From 2004 to 2012, the programme was broadcast from Moore's home, when arthritis prevented him from travelling to the studios. Over the years, he received many lucrative offers to take his programme onto other networks but rejected them because he held a 'gentlemen's agreement' with the BBC. In 1959, the Russians allowed Moore to be the first Westerner to see the photographic results of the Luna 3 probe and to show them live on air. Less successful was the transmission of the Luna 4 probe, which ran into technical difficulties; around this time, Moore also famously swallowed a large fly on air. Both episodes were live, and Moore had to continue regardless. He was invited to visit the Soviet Union, where he met Yuri Gagarin, the first man to journey into outer space. For the fiftieth episode of The Sky at Night, in September 1961, Moore's attempt to be the first to broadcast a live direct telescopic view of a planet resulted in another unintended 'comedy episode', as cloud obscured the sky. In 1965, he was appointed director of the newly constructed Armagh Planetarium in Northern Ireland, a post he held until 1968. His stay outside England was short partly because of the beginning of The Troubles, a dispute Moore wanted no involvement in. He was appointed Armagh County secretary of the Scout movement but resigned after being informed that Catholics could not be admitted. In developing the Planetarium, Moore travelled to Japan to secure a Goto Mars projector. He helped with the redevelopment of the Birr Telescope in the Republic of Ireland, and was a key figure in the development of the Herschel Museum of Astronomy in Bath. In June 1968, he returned to England, settling in Selsey after resigning from his post in Armagh.
During the NASA Apollo programme, presenting on the Apollo 8 mission, he declared that "this is one of the great moments of human history", only to have his broadcast interrupted by the children's programme Jackanory. He was a presenter for the Apollo 9 and Apollo 10 missions, and a commentator, with Cliff Michelmore and James Burke, for BBC television's coverage of the Moon landing missions. Moore could not remember his words at the "Eagle has landed" moment, and the BBC lost the tapes of the broadcast. A homemade recording reveals that the studio team was very quiet during the landing sequence, leaving the NASA commentary clear of interruptions. Some 14 seconds after "contact", Burke says "They've touched". At 36 seconds, he says, "Eagle has landed". Between 53 and 62 seconds, he explains the upcoming stay/no-stay decision, and NASA announces the T1 stay at 90 seconds after contact. At 100 seconds, the recorded sequence ends. Thus, any real-time comment Moore made was not broadcast live, and the recording ends before Burke polls the studio team for comment and reaction. Moore participated in TV coverage of Apollo missions 12 to 17. He was elected a member of the International Astronomical Union in 1966, having twice edited the Union's General Assembly newsletters. He attempted to establish an International Union of Amateur Astronomers, which failed due to lack of interest. During the 1970s and 80s, he reported on the Voyager and Pioneer programmes, often from NASA headquarters. At this time he became increasingly annoyed by conspiracy theorists and reporters who asked him questions such as "Why waste money on space research when there is so much to be done here?". He said that when asked this type of question, "I know that I'm dealing with an idiot." Another question that annoyed him was "what is the difference between astronomy and astrology?"
Despite this, he made a point of responding to all letters delivered to his house, and sent a variety of standard replies to letters asking basic questions, as well as to those from conspiracy theorists, proponents of hunting and 'cranks'. Despite his fame, his telephone number was always listed in the telephone directory, and he was happy to show members of the public his observatory. He compiled the Caldwell catalogue of 109 star clusters, nebulae, and galaxies for observation by amateur astronomers. In 1982, asteroid 2602 Moore was named in his honour. In February 1986, he presented a special episode of The Sky at Night on the approach of Halley's Comet. However, he later said the BBC's better-funded Horizon team "made a complete hash of the programme." In January 1998, a tornado destroyed part of Moore's garden observatory; it was subsequently rebuilt. Moore campaigned unsuccessfully against the closure of the Royal Observatory, Greenwich in 1998. Among Moore's favourite episodes of The Sky at Night were those that dealt with eclipses, and he said, "there is nothing in nature to match the glory of a total eclipse of the Sun." Moore was a BBC presenter for the total eclipse in England in 1999, though the view he and his team had from Cornwall was obscured by cloud. Moore was the patron of the South Downs Planetarium and Science Centre, and he attended its official opening in 2001. On 1 April 2007, a 50th-anniversary semi-spoof edition of the programme was broadcast on BBC One, with Moore depicted as a Time Lord. It featured special guests, the amateur astronomers Jon Culshaw (impersonating Moore presenting the first The Sky at Night) and Brian May. On 6 May 2007, a special edition of The Sky at Night was broadcast on BBC One to commemorate the programme's 50th anniversary, with a party in Moore's garden at Selsey attended by amateur and professional astronomers.
Moore celebrated the record-breaking 700th episode of The Sky at Night at his home in Sussex on 6 March 2011. He presented it with the help of special guests Professor Brian Cox, Jon Culshaw and Lord Rees, the Astronomer Royal. It was reported in January 2012 that, because of arthritis and the effects of an old spinal injury, he was no longer able to operate a telescope. However, he was still able to present The Sky at Night from his home.

Activism and political beliefs

Moore briefly supported the Liberal Party in the 1950s, though he later condemned the Liberal Democrats, saying he believed they could alter their position radically and that they "would happily join up with the BNP or the Socialist Workers Party ... if [by doing so] they could win a few extra votes." In the 1970s, he was chairman of the anti-immigration United Country Party, a position he held until the party was absorbed by the New Britain Party in 1980. He campaigned for the politician Edmund Iremonger at the 1979 general election, as the two men agreed the French and Germans were not to be trusted. Iremonger and Moore gave up political campaigning after deciding they were Thatcherites. He also admired the Official Monster Raving Loony Party and was briefly their financial adviser. A Eurosceptic, he was a supporter and patron of the UK Independence Party, and campaigned on behalf of Douglas Denny, the UKIP candidate for the Chichester constituency in 2001. Moore was known for his conservative political views. Proudly declaring himself to be English (rather than British) with "not the slightest wish to integrate with anybody", he stated his admiration for the British politician Enoch Powell. Moore devoted an entire chapter ("The Weak Arm of the Law") of his autobiography to denouncing modern British society, particularly "motorist-hunting" policemen, sentencing policy, the Race Relations Act, the Sex Discrimination Act and the "Thought Police/Politically Correct Brigade".
He wrote that "homosexuals are mainly responsible for the spreading of AIDS (the Garden of Eden is home of Adam and Eve, not Adam and Steve)". In 2007, in an interview with Radio Times, he said the BBC was being "ruined by women", commenting: "The trouble is that the BBC now is run by women and it shows: soap operas, cooking, quizzes, kitchen-sink plays. You wouldn't have had that in the golden days." In response, a BBC spokeswoman described Moore as one of TV's best-loved figures and remarked that his "forthright" views were "what we all love about him". During his June 2002 appearance on Room 101, he banished female newsreaders. He wrote in his autobiography that Liechtenstein – a constitutional monarchy headed by a prince – had the best political system in the world. Moore was a critic of the Iraq War, and said "the world was a safer place when Ronald Reagan was in the White House". Moore cited his opposition to fox hunting, blood sports and capital punishment to rebut claims that he had ultra-right-wing views. Though not a vegetarian, he held "a deep contempt for people who go out to kill merely to amuse themselves." He was an animal lover, supporting many animal welfare charities (particularly Cats Protection). He had a particular affinity for cats and stated that "a catless house is a soulless house". Moore was also opposed to astronomy being taught in schools, a view he expressed in interviews.

Other interests and popular culture

Because of his long-running television career and eccentric demeanour, Moore was widely recognised and became a popular public figure. In 1976 this fame was put to good effect for an April Fools' Day spoof on BBC Radio 2, when Moore announced a once-in-a-lifetime astronomical event that meant that if listeners jumped at one exact moment, 9.47 a.m., they would experience a temporary sensation of weightlessness. The BBC received many telephone calls from listeners claiming they had experienced the sensation.
He was a key figure in the establishment of the International Birdman event in Bognor Regis, which was initially held in Selsey. Moore appeared in other television and radio shows, including the BBC Radio 4 panel show Just a Minute. From 1992 until 1998, he played the role of GamesMaster, a character who knew everything about video games, in the Channel 4 television series GamesMaster. GamesMaster issued video game challenges and answered questions about cheats and tips. The show's host, Dominik Diamond, said that Moore did not understand anything he said on the show but recorded his contributions in single takes. Moore was a keen amateur actor, appearing in local plays. He appeared in self-parodying roles in several episodes of The Goodies and on the Morecambe and Wise show, and broadcast with Kenneth Horne only a few days before Horne's death. He had a minor role in the fourth radio series of The Hitchhiker's Guide to the Galaxy, and a lead role in the BBC Radio 1 sci-fi play Independence Day UK, in which, amongst other things, Moore fills in as a navigator. Among other shows, he appeared in It's a Celebrity Knockout, Blankety Blank and Face the Music, and in the Q.E.D. episode "Round Britain Whizz". Moore expressed appreciation for the science fiction television series Doctor Who and Star Trek, but stated that he had stopped watching when "they went PC - making women commanders, that kind of thing". Despite this, he made a cameo appearance in the Doctor Who episode "The Eleventh Hour" in 2010, which was Matt Smith's debut as the Eleventh Doctor. In the 1960s, Moore had been approached by the Doctor Who story editor Gerry Davis to act as a scientific advisor on the series to help with the accuracy of stories, a position ultimately taken by Kit Pedler. A keen amateur chess player, Moore carried a pocket set and was vice president of the Sussex Junior Chess Association.
In 2003, he presented the Sussex junior David Howell with the best young chess player award on Carlton Television's Britain's Brilliant Prodigies show. Moore had represented Sussex at chess in his youth. He was also an enthusiastic amateur cricketer, playing for Selsey Cricket Club well into his seventies. He played for the Lord's Taverners, a cricketing charity team, as a bowler with an unorthodox action. Though an accomplished leg spin bowler, he was a number 11 batsman and a poor fielder. The jacket notes to his book Suns, Myths and Men (1968) said his hobbies included "chess, which he plays with a peculiar leg-spin, and cricket." He played golf and won a Pro-Am competition in Southampton in 1975. Until forced to give up because of arthritis, Moore was a keen pianist and an accomplished xylophone player, having first played the instrument at the age of 13. He composed a substantial corpus of works, including two operettas, and the ballet Lyra's Dream was written to his music. He performed at a Royal Command Performance and performed a duet with Evelyn Glennie. In 1998, as a guest on Have I Got News for You, he accompanied the show's closing theme tune on the xylophone. As a pianist, he once accompanied Albert Einstein playing The Swan by Camille Saint-Saëns on the violin (no recording was made). In 1981 he performed a solo xylophone rendition of the Sex Pistols' "Anarchy in the U.K." at a Royal Variety Performance. He did not enjoy most popular music: when played ten modern rock songs by artists such as Hawkwind, Muse and Pink Floyd for a 2009 interview with the journalist Joel McIver, he explained, "To my ear, all these songs are universally awful." Before encountering health problems, he was an extensive traveller and had visited all seven continents, including Antarctica; he said his two favourite countries were Iceland and Norway. On 7 March 2006 he was hospitalised and fitted with a pacemaker because of cardiac dysrhythmia.
Moore was a friend of the Queen guitarist and astrophysicist Brian May, who was an occasional guest on The Sky at Night. May bought Moore's Selsey home in 2008, leasing it back to him the same day for a peppercorn rent to provide financial security. May, Moore and Chris Lintott co-wrote the book Bang! The Complete History of the Universe. In February 2011, Moore completed (with Robin Rees and Iain Nicolson) his comprehensive Patrick Moore's Data Book of Astronomy for Cambridge University Press. In 1986, he was identified as the co-author of a book published in 1954 called Flying Saucer from Mars, attributed to Cedric Allingham, which was intended as a money-making venture and practical joke on UFO believers; Moore never admitted his involvement. Moore believed himself to be the only person to have met the first aviator, Orville Wright, the first man in space, Yuri Gagarin, and the first man on the Moon, Neil Armstrong. In March 2015, BBC Radio 4 broadcast a 45-minute play based on the life of Moore, The Far Side of the Moore by Sean Grundy, starring Tom Hollander as Moore and Patricia Hodge as his mother. Moore is portrayed by Daniel Beales in the Netflix series The Crown.

Honours and appointments

In 1945, Moore was elected a Fellow of the Royal Astronomical Society (FRAS), and in 1977 he was awarded the society's Jackson-Gwilt Medal. He was also a long-time Fellow of the British Interplanetary Society and a member of its Council; he was the founding editor of the Society's monthly magazine Spaceflight, first published in 1956. The Sir Patrick Moore Medal was created to recognise outstanding contributions to the Society. In 1968, he was appointed an Officer of the Order of the British Empire (OBE) and promoted to Commander (CBE) in 1988. In 1999, he became the Honorary President of the East Sussex Astronomical Society, a position he held until his death. Moore was knighted for "services to the popularisation of science and to broadcasting" in the 2001 New Year Honours.
In 2001, he was appointed an Honorary Fellow of the Royal Society (HonFRS), the only amateur astronomer ever to achieve the distinction. In June 2002, he was appointed Honorary Vice-President of the Society for the History of Astronomy. Also in 2002, Buzz Aldrin presented him with a British Academy of Film and Television Arts (BAFTA) award for services to television. He was patron of Torquay Boys' Grammar School in south Devon. Moore had a long association with the University of Leicester and its Department of Physics and Astronomy, and was awarded an Honorary Doctor of Science (HonDSc) degree in 1996 and a Distinguished Honorary Fellowship in 2008, the highest award that the university can bestow.

Personal life and death

World War II had a significant influence on Moore's life – he said his only romance ended when his fiancée Lorna, a nurse, was killed in London in 1943 by a bomb which struck her ambulance. Moore subsequently remarked that he never married because "there was no one else for me ... second best is no good for me ... I would have liked a wife and family, but it was not to be." In his biography of Moore, Martin Mobberley expressed doubts over this account, as he was unable to identify Lorna and found that Moore told varying stories about her. In his autobiography, Moore said that after sixty years he still thought about her, and that because of her death, "if I saw the entire German nation sinking into the sea, I could be relied upon to help push it down." In May 2012, Moore told the Radio Times magazine, "We must take care. There may be another war. The Germans will try again, given another chance." He also said, in the same interview, that "the only good Kraut is a dead Kraut".
Moore said he was "exceptionally close" to his mother Gertrude, a talented artist who shared his home at Selsey, West Sussex, which was decorated with her paintings of "bogeys" – little friendly aliens – that she produced and sent out annually as the Moores' Christmas cards. Moore wrote the foreword for his mother's 1974 book, Mrs Moore in Space. On 9 December 2012, Moore died of sepsis and heart failure at his home in Selsey, aged 89. On 9 December 2014, it was reported that the Science Museum, London had acquired a large collection of his objects, manuscripts and memorabilia, including The Sky at Night scripts, about 70 of his observation books spanning more than 60 years, manuscripts for astronomy and fiction books, and a 12.5-inch reflecting telescope.

Bibliography

Moore wrote many popular books. From 1962 to 2011, he also edited the long-running annual Yearbook of Astronomy and was editor for many other science books in that period. He also wrote science fiction novels for children, and humorous works under the pen-name R. T. Fishall. The list below is therefore not exhaustive.

A Guide to the Moon, 1953
Mission to Mars, 1955
The Planet Venus, 1956
The Domes of Mars, 1956
The Voices of Mars, 1957
Peril on Mars, 1958
Raiders of Mars, 1959
A Guide to the Planets, 1960
Stars and Space, 1960
A Guide to the Stars, 1960 (Library of Congress Catalog Card No. 60-7584)
Oxford Children's Reference Library Book 2: Exploring the World, 1966
The Amateur Astronomer's Glossary, 1966 (reprinted as The A-Z of Astronomy)
Moon Flight Atlas, 1969
Observer's Book of Astronomy, 1971
Challenge of the Stars, 1972
Can You Speak Venusian?, 1972
How Britain Won the Space Race, 1972 (with Desmond Leslie)
The Southern Stars, 1972
Mastermind (Book 1), edited by Boswell Taylor, the sections on astronomy, 1973, republished 1984
Watchers of the Stars: The Scientific Revolution, 1974
Next Fifty Years in Space, 1976
Astronomy Quiz Book, 1978
The Scott Saunders series (six juvenile science fiction novels), late 1970s
Bureaucrats: How to Annoy Them (humour, writing as R. T. Fishall), 1982
New Observer's Book of Astronomy, 1983
Armchair Astronomy, 1984
Travellers in Space and Time, 1984
Stargazing: Astronomy Without A Telescope, 1985
Explorers of Space, 1986
Astronomy for the Under Tens, 1986
The Astronomy Encyclopaedia, 1987
Astronomers' Stars, 1987
Television Astronomer: Thirty Years of the "Sky at Night", 1987
Exploring the Night Sky with Binoculars, 1988
Space Travel for the Under Tens, 1988
The Universe for the Under Tens, 1990
Mission to the Planets, 1991
New Guide to the Planets, 1993
The Guinness Book of Astronomy, 1995
The Sun and the Moon (Starry Sky), 1996
The Stars (Starry Sky), 1996
The Planets (Starry Sky), 1996
Eyes on the Universe: Story of the Telescope, 1997
Exploring the Earth and Moon, 1997
Philip's Guide to Stars and Planets, 1997
Brilliant Stars, 1997
Patrick Moore on Mars, 1998
Patrick Moore's Guide to the 1999 Total Eclipse, 1999
Countdown!, or, How nigh is the end?, 1999
Exploring the Night Sky with Binoculars, 2000
The Star of Bethlehem, 2001
80 Not Out: The Autobiography, 2003
The 2004 Yearbook of Astronomy, 2003 (editor)
Voyage to Mars, 2003
Our Universe: Facts, Figures and Fun, 2007
Patrick Moore's Data Book of Astronomy, 2011, Cambridge University Press

See also

Jack Horkheimer, host of the astronomy show Jack Horkheimer: Star Gazer (American counterpart)

External links

Bang! The Complete History of the Universe by Brian May, Patrick Moore and Chris Lintott
Patrick Moore
Astronomy
https://en.wikipedia.org/wiki/Steve%20Wittman
Sylvester Joseph "Steve" Wittman (April 5, 1904 – April 27, 1995) was an American air-racer and aircraft engineer. An illness in Wittman's infancy claimed most of the vision in one eye, which convinced him from an early age that his dream of flying was unattainable. However, he learned how to fly in 1924 in a Standard J-1 and built his first aircraft, the Harley-powered "Hardly Abelson", in late 1924. From 1925 to 1927, he ran his own flying service, offering joyrides, and during this time also became a demonstration and test pilot for the Pheasant Aircraft Company and the Dayton Aircraft Company, flying the Pheasant H-10 in multiple events. He also began his air-racing career, flying his first race in 1926 at a Milwaukee event in his J-1. After competing in his first transcontinental air race from New York to Los Angeles in 1928, he attained a medical waiver on his eyesight and received his pilot's certificate soon after (signed by Orville Wright). He then went on to design, build and pilot his own aircraft, including "Chief Oshkosh" in 1931 and "Bonzo" in 1934. Wittman's first race in an aircraft of his own design was in "Bonzo", in the 1935 Thompson Trophy race, where he placed second. In 1937, piloting his second homebuilt, "Chief Oshkosh", Wittman placed second in the Greve Trophy Race. That year he also flew "Bonzo" in the Thompson Trophy race, leading for the first 18 laps of the 20-lap race at an average speed of over 275 mph (442.57 km/h). Suddenly his engine began to run rough, and Wittman was forced to throttle back to remain in the race, finishing in 5th place. In 1938, he was awarded the Louis Blériot medal by the Fédération Aéronautique Internationale (FAI). Also in 1937, Wittman designed and built "Buttercup", a high-wing design built to outperform the Cubs, Chiefs, T-Crafts, and Luscombes of the day. Based on that aircraft, he built the Wittman Big X in 1945, and the popular Wittman Tailwind series of homebuilts.
During World War II, his Wittman Flying Service was part of the Civilian Pilot Training Program, training pilots for the Army Air Corps. After the war, Wittman finished eighth in the 1946 Thompson Trophy race with a clipped-wing Bell P-63 Kingcobra fighter. In 1947, Bill Brennand won the inaugural Goodyear class race at the National Air Races piloting Wittman's "Buster". "Buster", a rebuild of the pre-war "Chief Oshkosh", went on to win many more Goodyear/Continental Trophy races and was retired after the 1954 Dansville, New York air races. It is now on display at the National Air and Space Museum in Washington, D.C. Wittman built an entirely new "Bonzo" for the 1948 National Air Races, where he flew it to a third-place finish. Wittman raced "Bonzo" through the 1950s and 1960s, including the first few Reno National Championship air races, before retiring from Formula One competition in 1973. "Bonzo" is now displayed next to Wittman's prewar "Bonzo" in the EAA Aviation Museum, along with several other Wittman airplanes. Wittman was manager of the Oshkosh, Wisconsin, airport from 1931 to 1969; it is now named after him (Wittman Regional Airport). Wittman became involved in the newly formed Experimental Aircraft Association in 1953 and was instrumental in bringing the EAA's annual fly-in to the Oshkosh airport in 1970. He designed and built the Wittman V-Witt to compete in the new Formula V Air Racing class and competed in races with that aircraft until 1979. Winners of the Formula V National Championship are presented with the Steve Wittman Trophy. Wittman remained active in aviation his entire life. For his 90th birthday celebration, Wittman demonstrated aerobatic maneuvers in his V-Witt and Oldsmobile-powered Tailwind. He also used "Buttercup" to give Young Eagles flights. Letters of appreciation were sent by US President Bill Clinton and Wisconsin Governor Tommy Thompson. Steve married Dorothy Rady in 1941.
He taught her to fly, and she accompanied him to most of his races. Dorothy died in 1991, and Wittman married Paula Muir in 1992. On April 27, 1995, Wittman and Muir took off for a routine cross-country flight from their winter residence in Ocala, Florida, to their summer residence in Oshkosh, Wisconsin. The Wittman "O&O" N41SW (41 for 1941, the year of his first marriage, plus SW, his initials) crashed five miles south of Stevenson, Alabama, killing both Wittman and Muir. The cause was improper installation of the wing fabric, which debonded in flight, resulting in aileron/wing flutter. Wittman was posthumously inducted into the Motorsports Hall of Fame of America in 1998 and the National Aviation Hall of Fame in 2014.

Wittman-designed aircraft

Wittman Hardley Ableson
Wittman Chief Oshkosh
Wittman D-12 Bonzo
Wittman DFA "Little Bonzo"
Wittman Buttercup
Wittman Big X
Wittman Tailwind
Wittman V-Witt

External links

Wisconsin Aviation Hall of Fame website
National Aviation Hall of Fame website
Steve Wittman
Engineering
https://en.wikipedia.org/wiki/Saltire%20Prize
The Saltire Prize, named after the flag of Scotland, was a national award for advances in the commercial development of marine energy. To be considered for the £10 million award, teams had to demonstrate, in Scottish waters, a commercially viable wave or tidal stream energy technology "that achieves the greatest volume of electrical output over the set minimum hurdle of 100 GWh over a continuous 2-year period using only the power of the sea." The Saltire Prize was open to any individual, team or organisation from across the world who believed they had wave or tidal energy technology capable of fulfilling the challenge. Applications could be submitted between March 2010 and January 2015. The funding was later allocated to the Saltire Tidal Energy Challenge Fund, as there were no eligible entries for the original prize.

Additional prizes

The Saltire Prize Lecture — delivered at the Scottish Renewables Marine Conference every September, it focused on the challenges in converting Scotland's world lead in wave and tidal energy into an industry of commercial scale, and in securing the economic, environmental and social benefits that this industry can bring. The lecture was designed to promote knowledge exchange between academics, industry, financiers and government.

The Saltire Prize Medal — created to recognise outstanding contributions to the development of marine renewable energy. The Medal was awarded every March at the Scottish Renewables Annual Conference, Exhibition and Dinner.

The Junior Saltire Prize — launched in 2011, this was aimed at primary and secondary school pupils and was designed to help raise awareness of the opportunities that Scotland has to exploit its marine renewables potential. It was sponsored by Skills Development Scotland, and awards were presented to teams in three age groups: Primary 5-7 (age 8-12), Secondary 1-3 (age 11-15), and Secondary 4-6 (age 14-18).
A Saltire Prize-sponsored doctorate — announced in August 2012, in collaboration with the Energy Technology Partnership (ETP). The research would consider how marine energy projects can be designed to maximise economic energy production while protecting the environment.

Power of the Sea — a one-off junior photography competition sponsored by the Saltire Prize, aimed at raising awareness of the natural environment and its potential for marine energy. In December 2012, four young photographers from Scottish primary schools were selected by the renowned Scottish photographer David Eustace as the national winners.

The Junior Saltire Prize and the sponsored doctorate were discontinued in 2016, having cost £60,000 and £48,418 respectively.

Saltire Prize Medal

In 2011 the inaugural Saltire Prize Medal was awarded to Professor Stephen Salter, who led the team which designed the Salter's Duck device in the 1970s. Richard Yemm, inventor of the Pelamis Wave Energy Converter, was awarded the medal in 2012. Professor Peter Fraenkel, MBE, a pioneer of marine turbine development, won the 2013 medal. The 2014 medal went to Allan Thomson, founder of Aquamarine Power. No further medals have been awarded.

History

When it was first announced in 2008 by the then First Minister of Scotland, Alex Salmond, it was the world's largest ever single prize for innovation in marine renewable energy. The prize was overseen by the Challenge Committee, while Saltire Prize policy was the responsibility of the Offshore Renewables Policy Team in the Scottish Government's Energy and Climate Change Directorate. When it launched, the criteria included:

Open to any individual, team, or organisation from anywhere in the world; however, projects had to be located in Scottish waters.
Using the energy from waves and/or tidal streams to provide electrical output. Tidal barrages, offshore wind, osmotic power, ocean thermal energy conversion, and marine biomass were all excluded.
Individual devices or arrays of multiple devices (comprising one or more technologies) could be used, provided they were part of a discrete project with a single electricity connection point.
Registration was open between June 2012 and January 2015.
The winner would be whoever generated the most electricity within a continuous 2-year period before the deadline of June 2017, subject to a minimum hurdle of 100 GWh.

The winner was to be announced in July 2017.

Competitors

There were five entrants for the Saltire Prize in the phase of the contest that ran until 2017, two wave energy and three tidal-stream:

Pelamis Wave Power, although the company went into administration in November 2014.
Aquamarine Power, which secured a 40 MW lease off the north-west coast of Lewis for its Oyster wave energy device, although this company also went into administration, in 2015, before deploying any devices there.
ScottishPower Renewables, which planned to deploy a 95 MW tidal array at the Ness of Duncansby site in the Pentland Firth; this project never progressed.
West Islay Tidal, a proposed 30 MW project by DP Energy in the Sound of Islay; this project also never progressed.
The MeyGen tidal array developed by Atlantis Resources (now SAE Renewables), which successfully installed phase 1a, comprising four 1.5 MW turbines, by February 2017 and was operational by April 2018.

By March 2015, it was clear that the prize was not going to be claimed; however, the Saltire Prize Challenge Committee considered other options to drive innovation in the wave and tidal power sectors in Scotland. In February 2015, the Saltire Tidal Energy Challenge Fund was announced.

Saltire Tidal Energy Challenge Fund

The Saltire Tidal Energy Challenge Fund was set up in February 2015 to provide support to the Scottish tidal power sector, complementing the funding for Wave Energy Scotland.
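The original prize's winning criterion, the greatest output over any continuous two-year window subject to the 100 GWh hurdle, amounts to a rolling-window maximum over a generation record. A minimal sketch of that check; the function name, the 40% capacity factor, and all generation figures are illustrative assumptions, not real project data:

```python
# Illustrative sketch of the Saltire Prize winning criterion: find the
# best continuous 24-month generation window and test it against the
# 100 GWh minimum hurdle. All data below are made up for illustration.

HURDLE_GWH = 100.0

def best_two_year_output(monthly_gwh):
    """Largest total over any continuous 24-month window, in GWh."""
    window = 24
    if len(monthly_gwh) < window:
        return 0.0
    current = sum(monthly_gwh[:window])
    best = current
    for i in range(window, len(monthly_gwh)):
        # Slide the window forward one month at a time.
        current += monthly_gwh[i] - monthly_gwh[i - window]
        best = max(best, current)
    return best

# A hypothetical 6 MW array (e.g. four 1.5 MW turbines, the capacity of
# MeyGen phase 1a) at an assumed 40% capacity factor produces roughly
# 6 MW * 0.40 * 730 h / 1000 ≈ 1.75 GWh per month.
monthly = [1.75] * 36
total = best_two_year_output(monthly)
print(total, total >= HURDLE_GWH)  # 42.0 False: well short of the hurdle
```

Under these assumptions even a fully deployed 6 MW array falls well short of 100 GWh in two years, which is consistent with the prize going unclaimed.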
The fund was to support the capital costs of developing innovations to reduce the cost of tidal energy, for projects to be deployed in Scotland before March 2020. These had to demonstrate value and the potential for positive social and economic benefit to Scotland. In August 2019, Orbital Marine Power was the first recipient of the fund, and was awarded £3.4 million towards developing the Orbital O2 turbine. In March 2020, SIMEC Atlantis Energy (now SAE Renewables) was awarded £1.5 million towards developing a sub-sea hub to connect multiple turbines at the MeyGen project. See also List of engineering awards Crown Estate Marine Scotland Renewables Obligation Scottish Adjacent Waters Boundaries Order 1999 Tidal stream generator Wave farm References External links Official website 2007 establishments in Scotland 2007 in science Awards established in 2007 British science and technology awards Business and industry awards Electrical engineering awards Renewable energy in Scotland Renewable energy technology Science and technology in Scotland Scottish awards Scottish coast Scottish Government Sustainability in Scotland Sustainable development Tidal power Wave power
Saltire Prize
Engineering
1,309
5,828,258
https://en.wikipedia.org/wiki/Hepatizon
Hepatizon (Greek etymology: , English translation: "liver"), also known as black Corinthian bronze, was a highly valuable metal alloy in classical antiquity. It is thought to be an alloy of copper with the addition of a small proportion of gold and silver (perhaps as little as 8% of each), mixed and treated to produce a material with a dark purplish patina, similar to the colour of liver. It is referred to in various ancient texts, but few known examples of hepatizon exist today. Of the known types of bronze or brass in classical antiquity (known in Latin as aes and in Greek as χαλκός), hepatizon was the second most valuable. Pliny the Elder mentions it in his Natural History, stating that it is less valuable than Corinthian bronze, which contained a greater proportion of gold or silver and as a result resembled the precious metals, but was esteemed before bronze from Delos and Aegina. As a result of its dark colour, it was particularly valued for statues. According to Pliny, the method of making it, like that for Corinthian bronze, had been lost for a long time. Similar alloys are found outside Europe. For example, shakudō is a Japanese billon of gold and copper with a characteristic dark blue-purple patina. See also Metallurgy References Sources New Scientist, 22 January 1994, "Secret of Achilles' Shield" Further reading Craddock, Paul and Giumlia-Mair, Allessandra, "Hsmn-Km, Corinthian bronze, Shakudo: black patinated bronze in the ancient world", Chapter 9 in Metal Plating and Patination: Cultural, technical and historical developments, Ed. Susan La-Niece, 2013, Elsevier, ISBN 9781483292069 (https://books.google.co.uk/books?id=XgshBQAAQBAJ&pg=PA114). Craddock, P. T., "Metal" V. 4 and 5, Grove Art Online, Oxford Art Online. Oxford University Press. Web. 1 Oct. 2017, Subscription required Copper alloys Precious metal alloys Sculpture materials Ancient Greek metalwork
Hepatizon
Chemistry
467
22,467,708
https://en.wikipedia.org/wiki/K.%20G.%20Corfield%20Ltd
K. G. Corfield Ltd was an innovative camera and lens manufacturing company based in Wolverhampton. The company produced high quality cameras and lenses, basing many design features on the Leica range of 35mm cameras. One unique design was employed in the Periflex series of cameras, which utilised a novel periscope viewing system to achieve fine focus. A miniature periscope descended into the optical path of the lens when the film was advanced. This provided a much enlarged view of the central area of the film frame. When the shutter was released, the periscope sprang vertically up out of the optical path and then the horizontal cloth focal plane shutter operated. The company also produced other photographic equipment including the Lumimeter, an exposure meter for use in the darkroom based on the comparison between reflected light from the enlarger and a variable transmitted light from the meter observed in a split screen. When both sides were equally lit, the dial controlling the variable light showed the correct exposure. The driving force behind the company and its products was Sir Kenneth Corfield. In January 1959, the camera manufacturer moved from Wolverhampton to Ballymoney, becoming the only camera manufacturer on the island of Ireland. In the face of Japanese and German competition, the enterprise failed. The company ceased trading in 1971. Subsequently, Sir Kenneth Corfield resurrected the firm to build the Architect camera and to become involved in the production of Gandolfi cameras. Products Periflex Periflex 2 Periflex 3 Goldstar Interplan Architect Lumimeter Distributors for Prestolite Electric alternators Shirley Wellard Universal Cassettes References Cameras Photography equipment manufacturers of the United Kingdom Lens manufacturers Manufacturing companies based in Wolverhampton Defunct manufacturing companies of the United Kingdom Photography in the United Kingdom
K. G. Corfield Ltd
Technology
360
17,482,772
https://en.wikipedia.org/wiki/U.S.%20Bank%20Building%20%28Chicago%29
U.S. Bank Building, formerly 190 South LaSalle Street, is a tall skyscraper in Chicago, Illinois. History It was completed in 1987 and has 40 floors. Johnson/Burgee Architects designed the building, which is the 57th tallest building in Chicago. From 1988 to 2016, the lobby of the building featured a tapestry by Helena Hernmarck titled "The 1909 Plan of Chicago" depicting the Civic Center Plaza proposed in the Burnham Plan of Chicago. The tapestry is now in the collection of the Art Institute of Chicago. In May 2013, U.S. Bank announced it agreed to increase its leased space in the structure from to . The terms of the lease also gave the bank naming rights for the building through 2026. Gallery See also List of tallest buildings in Chicago References Notes External links Skyscraper office buildings in Chicago Office buildings completed in 1987 Philip Johnson buildings Leadership in Energy and Environmental Design certified buildings 1987 establishments in Illinois
U.S. Bank Building (Chicago)
Engineering
187
41,552,685
https://en.wikipedia.org/wiki/Saproamanita%20nauseosa
Saproamanita nauseosa is a species of agaric fungus in the family Amanitaceae. First described by English mycologist Elsie Maud Wakefield in 1918 as a species of Lepiota, it was named for its nauseating odor. The type specimen was found growing on soil in the Nepenthes greenhouse at Kew Gardens. Derek Reid transferred the species to Amanita in 1966, and then in 2016 the separate genus Saproamanita was created by Redhead et al. for saprophytic Amanitas and it was transferred to this new genus. The fungus is found in Australia and the Caribbean region of North America. See also List of Amanita species References External links Amanitaceae Fungi of Australia Fungi of North America Fungi described in 1918 Fungus species
Saproamanita nauseosa
Biology
162
26,077,794
https://en.wikipedia.org/wiki/EQANIE
EQANIE (European Quality Assurance Network for Informatics Education e.V.) is a non-profit association seeking to enhance evaluation and quality assurance of informatics study programmes and education in Europe. It was founded on January 9, 2009 in Düsseldorf, Germany. EQANIE develops criteria and procedures for the evaluation and quality assurance of informatics study programmes and education. EQANIE awards the so-called Euro-Inf Quality Label to degree programmes that comply with the Euro-Inf Framework Standards and Accreditation Criteria. As of 2021, informatics study programmes from 21 different countries have been accredited. Background EQANIE's founding is best understood against the background of the Bologna Process, which aims at the creation of a European Higher Education Area. The association emanated from the informal network of stakeholders involved in the Euro-Inf Project co-financed by the European Union under the Socrates-Programme from 2006 until 2008. The Euro-Inf consortium comprised the German Accreditation Agency ASIIN, Hamburg UAS, the University of Paderborn and the Council of European Professional Informatics Societies (CEPIS). The project consortium established and tested the so-called Euro-Inf Framework Standards and Accreditation Criteria for Informatics Programmes in Europe. The rights of ownership and copyright on the assessment tools developed by the Euro-Inf Project are held by EQANIE. 
Main objectives of EQANIE in the area of accreditation and quality assessment are: Improving the quality of educational programmes in informatics; providing the Euro-Inf Quality Label for accredited educational programmes in informatics; facilitating mutual transnational recognition by programme validation and certification; facilitating recognition by the competent authorities, in accord with the EU directives and other agreements; increasing mobility of graduates as recommended by the Lisbon Strategy. The key principle of Euro-Inf accreditation is that all graduates of a Euro-Inf accredited degree should have undertaken a defined set of learning activities and should have achieved a broadly defined set of learning outcomes. The Framework represents a quality threshold; those degree programmes that have demonstrated compliance are awarded the Euro-Inf Bachelor / Euro-Inf Master Label. The Accreditation Process An institution wishing to have one or more of its degrees accredited has to choose between two different paths: The institution may submit an application to the General Secretary of EQANIE that includes a self-assessment report, compiled in accordance with the Euro-Inf guidelines, a matrix showing how the modules that make up each degree programme satisfy the Euro-Inf expected Learning Outcomes, and supporting documentation that includes the module descriptors, short CVs of academic staff, etc. An audit team studies the documentation and visits the institution. After the visit, the Secretariat prepares a report which is sent to the institution to be checked for factual accuracy. The auditors then submit their assessment to the Accreditation Committee which decides on the outcome. For a positive decision, the degree is added to the list of accredited degrees on the EQANIE website (www.eqanie.eu). 
EQANIE has also granted three national agencies the authority to award the Euro-Inf labels on its behalf, namely ASIIN in Germany, BCS in the UK, and ANECA in Spain. Structure The General Assembly is the highest decision-making body of EQANIE. It is composed of one delegate per EQANIE member-organization. The General Assembly meets at least once a year. The Executive Board is appointed by the General Assembly for a period of three years. Members of the Executive Board may be re-elected once. Following Prof. Dr. Hans-Ulrich Heiß and Prof. Dr. Eduardo Vendrell, Prof. Dr. Liz Bacon is the current President of EQANIE. References External links www.eqanie.eu Euro-Inf Framework Standards ISO Online Training Higher education accreditation Higher education organisations based in Europe Information technology organizations based in Europe Professional titles and certifications Professional certification in computing Information technology education
EQANIE
Technology
805
7,955,869
https://en.wikipedia.org/wiki/Left%20corner
In formal language theory, the left corner of a production rule in a context-free grammar is the left-most symbol on the right side of the rule. For example, in the rule A→Xα, X is the left corner. The left corner table associates to a symbol all possible left corners for that symbol, and the left corners of those symbols, etc. Given the grammar S → VP S → NP VP VP → V NP NP → DET N the left corner table is as follows: the left corners of S are VP, NP, V, and DET; the left corner of VP is V; and the left corner of NP is DET. Left corners are used to add bottom-up filtering to a top-down parser, or top-down filtering to a bottom-up parser. References Parsing
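The left corner table is the transitive closure of the direct left-corner relation, and can be computed mechanically. A minimal sketch in Python (the function name and grammar encoding are illustrative, not taken from any particular parsing library):

```python
from collections import defaultdict

def left_corner_table(rules):
    """Compute the transitive left-corner table of a context-free grammar.

    rules: iterable of (lhs, rhs) pairs, where rhs is a sequence of symbols.
    Returns a dict mapping each left-hand-side symbol to the set of all its
    left corners (direct left corners, plus the left corners of those, etc.).
    """
    # Direct left corners: the left-most symbol on the right side of each rule.
    table = defaultdict(set)
    for lhs, rhs in rules:
        if rhs:
            table[lhs].add(rhs[0])

    # Transitive closure: a left corner of a left corner is also a left corner.
    changed = True
    while changed:
        changed = False
        for sym in list(table):
            for corner in list(table[sym]):
                new = table.get(corner, set()) - table[sym]
                if new:
                    table[sym] |= new
                    changed = True
    return dict(table)

# The grammar from the text: S → VP, S → NP VP, VP → V NP, NP → DET N.
rules = [("S", ["VP"]), ("S", ["NP", "VP"]),
         ("VP", ["V", "NP"]), ("NP", ["DET", "N"])]
table = left_corner_table(rules)
print(table["S"])  # {'VP', 'NP', 'V', 'DET'} (in some order)
```

A top-down parser can consult such a table to discard a predicted category whenever the next input symbol is not among its left corners.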
Left corner
Technology
137
21,110,451
https://en.wikipedia.org/wiki/NetResult
NetResult is the name of several United Kingdom-based companies. First company The first company, Net Result.uk.com Ltd (trading as NetResult), is an IT company formed in 1996 that specialises in managed IT support for business, consultancy, cyber security, hosting and other IT services. Headquartered in Mold, Flintshire, Wales, it was founded in 1996 by Nick Bell. Second company The second is London-based, formed in 2000 and incorporated at Companies House in December 2002. It specialised in the protection of intellectual property (such as trademarks and copyrights) on the internet. Operations NetResult was hired by UEFA to deal with problems with copyright infringement, specifically online postings of highlights and live web streams of Premier League matches. They have issued takedown notices to many Internet sites hosting and linking to highlights from Premier League telecasts, including YouTube. Third company A third company, describing itself as Netresult Ltd, trades as Netresult Training. It is associated with another trading name, Netresult Web Design. Fourth company The fourth company, NetResult Ltd., was an early producer of websites and related digital marketing materials, founded in Brighton in 1995. Their clients included major companies such as Schering-Plough (for a website about the Clarityn antihistamine). They were located directly above the premises of a pioneering internet service provider, Pavilion Internet, which was later acquired by Easynet. This NetResult ceased business by 1997. References External links NetResult ("second company", above) Netresult Training Netresult Web Design NetResult UK Copyright enforcement companies
NetResult
Technology
337
69,030,017
https://en.wikipedia.org/wiki/Potassium%20hypochromate
Potassium hypochromate is a chemical compound with the formula K3CrO4, containing the unusual Cr5+ ion. This compound is unstable in water but stable in alkaline solution, and was found to have a crystal structure similar to that of potassium hypomanganate. Preparation This compound is commonly prepared by reacting chromium(III) oxide and potassium hydroxide at 850 °C under argon: Cr2O3 + 6 KOH → 2 K3CrO4 + H2O + 2 H2 This compound can also be prepared in other ways, such as by replacing the chromium oxide with potassium chromate. It is important that no Fe2+ ions are present, because they would reduce the Cr(V) ions to Cr(III) ions. Reactions Potassium hypochromate decomposes in water to form chromium(III) oxide and potassium chromate when alkali is absent or present only in low concentration. Potassium hypochromate also reacts with acids such as hydrochloric acid to form chromium(III) oxide, potassium chromate, and potassium chloride: 6 K3CrO4 + 10 HCl → 4 K2CrO4 + Cr2O3 + 5 H2O + 10 KCl Oxidizing agents such as hydroperoxides can oxidize the hypochromate ion into chromate ions. At extremely high temperatures, it decomposes into potassium chromate and potassium metal. This compound is used to synthesize other compounds; for example, reacting it with chlorosulfuric acid gives chromyl chlorosulfate. References Potassium compounds Chromates
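The stoichiometry of the equations above can be checked mechanically by counting atoms on each side. A small sketch, taking the hydrochloric acid reaction as the example (the formula compositions are written out by hand):

```python
from collections import Counter

def atom_totals(side):
    """Sum element counts over (coefficient, composition) pairs for one side."""
    totals = Counter()
    for coeff, composition in side:
        for element, count in composition.items():
            totals[element] += coeff * count
    return totals

# Compositions of the species in the reaction with hydrochloric acid.
K3CrO4 = {"K": 3, "Cr": 1, "O": 4}
HCl    = {"H": 1, "Cl": 1}
K2CrO4 = {"K": 2, "Cr": 1, "O": 4}
Cr2O3  = {"Cr": 2, "O": 3}
H2O    = {"H": 2, "O": 1}
KCl    = {"K": 1, "Cl": 1}

# 6 K3CrO4 + 10 HCl → 4 K2CrO4 + Cr2O3 + 5 H2O + 10 KCl
reactants = atom_totals([(6, K3CrO4), (10, HCl)])
products  = atom_totals([(4, K2CrO4), (1, Cr2O3), (5, H2O), (10, KCl)])
print(reactants == products)  # True: every element balances
```

The same check applied to the preparation equation (Cr2O3 + 6 KOH → 2 K3CrO4 + H2O + 2 H2) also balances, with 9 oxygen and 6 hydrogen atoms on each side.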
Potassium hypochromate
Chemistry
341
43,016,573
https://en.wikipedia.org/wiki/Tom%20Clancy%27s%20Rainbow%20Six%20Siege
Tom Clancy's Rainbow Six Siege is a 2015 online tactical shooter video game developed by Ubisoft Montreal and published by Ubisoft. The game puts heavy emphasis on environmental destruction and cooperation between players. Each player assumes control of an attacker or a defender in different gameplay modes such as rescuing a hostage, defusing a bomb, or taking control of an objective within a room. The title has no campaign but features offline training modes that can be played solo. Siege is an entry in the Rainbow Six series and the successor to Tom Clancy's Rainbow 6: Patriots, a tactical shooter that had a larger focus on narrative. After Patriots was eventually cancelled due to its technical shortcomings, Ubisoft decided to reboot the franchise. The team evaluated the core of the Rainbow Six franchise and believed that letting players impersonate the top counter-terrorist operatives around the world suited the game most. To create authentic siege situations, the team consulted actual counter-terrorism units and looked at real-life examples of sieges such as the 1980 Iranian Embassy siege. Powered by AnvilNext 2.0, the game also utilizes Ubisoft's RealBlast technology to create destructible environments. It was released worldwide for PlayStation 4, Windows, and Xbox One on December 1, 2015, and for PlayStation 5 and Xbox Series X/S exactly five years later on December 1, 2020. The game received an overall positive reception from critics, with praise mostly directed to the game's tense multiplayer and focus on tactics. However, the game was criticized for its progression system and its lack of content. Initial sales were weak, but the game's player base increased significantly as Ubisoft adopted a "games as a service" model for the game and subsequently released several packages of free downloadable content. 
Several years after the game's release, some critics regarded Siege as one of the best multiplayer games in the modern market due to the improvements brought by the post-launch updates. The company partnered with ESL to make Siege an esports game. In December 2020, the game surpassed 70 million registered players across all platforms. Rainbow Six Extraction, a spin-off game featuring Siege characters, was released in January 2022. Gameplay Tom Clancy's Rainbow Six Siege is a first-person shooter game, in which players utilize many different operators from the Rainbow team. Different operators have different nationalities, weapons, and gadgets. The game features an asymmetrical structure whereby the teams are not always balanced in their choices of abilities. The base Counter-Terrorism Units (CTUs) available for play are the American Hostage Rescue Team (referred to in-game as the FBI SWAT), the British SAS, the German GSG-9, the Russian Spetsnaz and the French GIGN, each of which has four operators per unit split between attackers and defenders (other units were later added through downloadable content, see below). Players also have access to a "Recruit" operator who can choose from a more flexible assortment of equipment at the expense of having a unique gadget. Players can pick any operator from any unit that is defending or attacking before a round starts, choosing spawn points as well as attachments on their guns but are not allowed to change their choices once the round has started. An in-game shop allows players to purchase operators or cosmetics using the in-game currency, "Renown", which is earned at the end of matches from actions performed in-game. Different gameplay modes award renown at different rates, with ranked matches offering the largest renown multiplier potential per match. Players can also complete in-game "challenges" to get a small amount of renown. 
Renown gain rate can also be increased by using in-game "boosters", which give the player an increase in all renown earned for a set duration, starting at 24 hours. A premium currency known as "R6 credits" can also be purchased using real-world currency to unlock operators more quickly or to buy other cosmetic items, such as weapon or operator skins. When the round begins in an online match, the attackers choose one of several spawn points from which to launch their attack, while the defenders choose where to mount their defence. A 45-second preparatory period then commences, wherein the attackers are given control over mecanum-wheeled drones to scout the map in search of enemy operators, traps and defensive set-ups in addition to the target objective(s), while the defenders establish their defences and are encouraged to do so without having details of the defences and target objective(s) discovered, chiefly by destroying the drones. If a player dies, they cannot respawn until the end of a round. Players who were killed by opponents can enter "Support Mode", which allows them to gain access to the drones' cameras and security cameras so that they can continue to contribute to their team by informing them of opponent locations and activities. Matches last only three minutes and 30 seconds in casual play and three minutes in ranked. Teamwork and cooperation are encouraged in Siege, and players need to take advantage of their different abilities in order to complete the objective and defeat the enemy team. Communication between players is also heavily encouraged. The game also has a spectator mode, which allows players to observe a match from different angles. The game features a heavy emphasis on environmental destruction using a procedural destruction system. Players can break structures by planting explosives on them, or shoot walls with a shotgun to make a hole. 
Players may gain tactical advantages through environmental destruction, and the system aims to encourage players to utilize creativity and strategy. A bullet-penetration system is featured, in which bullets that pass through structures deal less damage to enemies. In addition to destruction, players on the defending team can also set up a limited number of heavy-duty fortifications on walls and deployable shields around them for protection; these can be destroyed through breaching devices, explosives, or by utilizing operator-specific gadgets in the case of the former. In order to stop the attackers' advance, defenders can place traps like barbed wire and explosive entry denial devices around the maps. Vertical space is a key gameplay element in the game's maps: players can destroy ceilings and floors using breach charges and can ambush enemies by rappelling through windows. Powerful weapons like grenades and breach charges are valuable, as only a limited amount can be used in a round. Modes At launch, the game featured 11 maps and 5 different gameplay modes spanning both PVE and PVP. With the downloadable content (DLC) released post-launch, including an additional four maps from year one and three from year two, there are currently 26 playable maps. The gameplay modes featured include: Hostage: a non-competitive multiplayer mode, in which the attackers must extract the hostage from the defenders, while the defenders must prevent that from happening either by eliminating all of the attacking team or successfully defending the hostage until the time expires. A secondary manner of winning can occur if the attacking or defending team accidentally damages the hostage, causing the hostage to "down"; if the opposing team can prevent the revival of the hostage, and the hostage bleeds out and dies, they will win the round. Bomb: a competitive multiplayer mode, in which the attackers are tasked with locating and defusing one of two bombs. 
The defenders must stop the attackers by killing all of them or destroying the defuser. If all attackers are killed after the defuser is planted, the defuser must still be destroyed for a defending victory. Secure Area: a non-competitive multiplayer mode, in which the defenders must protect a room with a biohazard container, while the attackers must fight their way in and secure it. The match ends when all players from one team are killed or the biohazard container is secured by the attackers when there are no defenders in the room. Tactical Realism: a variation of the standard competitive multiplayer modes, added with the release of the Operation Skull Rain DLC. The game mode features a heavier emphasis on realism and teamwork, removing most of the heads-up display (HUD) elements, the ability to mark opponents, and the ability to see teammates' contours through walls, while also featuring the addition of a realistic ammo management system. This mode is no longer in the game, but some of its aspects appear in the other multiplayer modes. Training grounds: a solo or cooperative multiplayer mode for up to five players. Players take on the role of either attackers or defenders, and must fight against waves of enemies controlled by artificial intelligence across various modes like Bomb, Hostage, or Elimination. Situations: a single-player series of 10 solo and 1 co-op multiplayer missions that attempt to serve as introductory and interactive tutorials to the game's mechanics. Outbreak: a limited-time event exclusive to Operation Chimera, Outbreak pits a three-player team in a co-op PVE environment against an extraterrestrial biohazard threat, namely AI-controlled heavily mutated forms of humans infected with said alien parasite. Two difficulties exist for this mode, the chief difference being the inclusion of friendly fire on the harder one. 
Arcade: random limited-time events which modify elements of existing modes on a smaller scale than seasonal game modes; these include the Golden Gun event. Seasonal Events: limited-time events which are available for one season. These are normally large-scale game modes which are entirely distinct from the regular Bomb, Secure Area, or Hostage modes. Setting Three years after the Rainbow Program's deactivation, there is a resurgence of terrorist activities, with the White Masks being the most prominent. The terrorists' goals are unknown, yet they are causing chaos across the world. To counter this rising threat, the program is reactivated by a new leader, Aurelia Arnot (Angela Bassett). Arnot, operating under the codename "Six", assembles a group of special forces operatives from different countries to face and combat the White Masks. Recruits go through multiple exercises to prepare them for future encounters with the White Masks, training to perform hostage rescue and bomb disposal. Eventually, the White Masks launch a chemical attack on Bartlett University, and the recruits are sent to disarm the bombs and eliminate the enemy presence. The operation is a massive success, though there are casualties. Arnot affirms that the reactivation of Team Rainbow is the best and only choice in a time filled with risks and uncertainties, and that Team Rainbow is ready for their next mission – to hunt down the leader of their enemy – and they stand prepared to protect and defend their nation from terrorists. In 2019, Arnot resigns from her position to become the Secretary of State and recommends her advisor, Doctor Harishva "Harry" Pandey (Andy McQueen), to take her place as "Six". Harry seeks to improve the synergies of Team Rainbow members; thus he develops The Program, a global training regimen to help Rainbow operators understand each other better, and to expand their operations. 
To that end, Harry also organizes annual tournaments for Team Rainbow, which are broadcast and viewed globally as public training exercises. Some time later, Harry invites members of a private military company, Nighthaven, to join Team Rainbow, in order to observe their skills and prevent competitors from also hiring them. Nighthaven members regularly clash with existing Rainbow operators due to their difference in battle tactics; Nighthaven's is considered to be more brutal and reckless, with a notable disregard for their own members' safety. This eventually leads to a public dispute between Rainbow operative Eliza "Ash" Cohen (Patricia Summersett) and Nighthaven's founder, Jaimini "Kali" Shah (Yasmine Aker), after Ash was injured during the 2021 tournament finals due to Kali's actions. By 2022, Kali has successfully recruited several Rainbow operators to join Nighthaven, much to Harry's dismay. With a surge in global criminal activities, Harry begins reorganizing the remaining Rainbow operators into several smaller teams, and has them focus on new operations, while also looking into Nighthaven and their activities. Following the assassination of a high-profile tech company CEO, Masayuki Yahata, Team Rainbow investigates his death and finds evidence implicating Nighthaven's involvement. Rainbow squad Ghosteyes, led by Taina "Caveira" Pereira (Renata Eastlick), conducts a strike on Nighthaven amidst one of their operations and apprehends Kali, who denies involvement in Yahata's assassination while also claiming Nighthaven's weaponry had been stolen beforehand. Some time later, Harry is confronted by "Deimos" (Dalias Blake), a former Rainbow operative who orchestrated Yahata's death as well as the conflict between Team Rainbow and Nighthaven. Seeking to eliminate Team Rainbow, Deimos claims that the unit has become a disgrace under Harry's leadership, before killing him. 
Following Deimos' trail over the next year, Team Rainbow finally manages to pinpoint his next target, and sets up an ambush. Rainbow members Gustave "Doc" Kateb (Alex Ivanovici), Elena "Mira" Álvarez (Anahi Bustillos), Julien "Rook" Nizan (Marc-André Brunet) and Sam "Zero" Fisher (Jeff Teravainen) confront Deimos and apprehend him. The team later learns at their headquarters that Deimos' real identity is Gerald Morris, who betrayed and killed his squad mate during a Rainbow operation in 2012 and was declared killed in action. With Deimos' group, the Keres Legion, remaining at large, Team Rainbow recruits retired operative Kure "Skopós" Galanos, who previously worked with Deimos in his Rainbow days. Skopós teams up with Rainbow operative Grace "Dokkaebi" Nam (Christine Lee) to assault a Keres safehouse and retrieve intel pertaining to the organization. Development The game's predecessor was Tom Clancy's Rainbow 6: Patriots, a tactical shooter announced in 2011. It had a focus on narrative, and the story campaign featured many cut-scenes and scripted events. However, the game fell into development hell shortly after its announcement. The game's outdated engine and frequent changes of leadership hindered development progress, and game quality was not up to par. In addition, it was planned to be released on seventh-generation video-game consoles which were not capable of processing certain game mechanics. Seeing the arrival of a new generation of consoles, the team wanted to make use of this opportunity to create a more technologically advanced game. As a result, Ubisoft decided to cancel Patriots and assembled a new team of 25 people to come up with ideas to reboot the series. To make the new game feel refreshing, only certain multiplayer elements were retained as the small team took the game in a different direction. 
They evaluated the core of the Rainbow Six series, which they thought was about being a member of a counter-terrorist team travelling around the globe to deal with dangerous terrorist attacks – operations which are usually intense confrontations between attackers and defenders. However, the team wanted to fit these ideas into a multiplayer format which would increase the game's sustainability. These became the basic concept ideas for the game. As the development team hoped that the game could be replayed frequently, they decided to devote all their resources to developing the game's multiplayer and abandoned the single-player campaign. Development of the game officially began in January 2013. Ubisoft Montreal, the developer of Patriots, handled the game's development, with Ubisoft's offices in Barcelona, Toronto, Kyiv, Shanghai and Chengdu providing assistance. The game was originally called Rainbow Six Unbreakable, a title that reflected not only the game's destruction mechanic but also the mindset of the development team, who had to deliver a game that was once stuck in development hell. According to Alexandre Remy, the brand director, the team was confident in their new vision for the game but very nervous when they revealed it, realizing the change of direction would likely disappoint some fans. Design The 150-person team consisted mainly of first-person shooter veterans or longtime Rainbow Six players. Despite having prior knowledge of how these types of games work, the team decided to study historic examples of counter-terrorist operations, including the 1980 Iranian Embassy siege in London, the 1977 Lufthansa Flight 181 hijacking, and the 2002 Moscow theatre hostage crisis, to ensure that the portrayal of these operations was accurate and appropriate. The team also consulted counter-terrorism units, such as the National Gendarmerie Intervention Group (GIGN), for their opinions on how they would react during a hostage rescue situation. 
According to Xavier Marquis, the game's creative director, having a hostage rescue mission in the game helped create an immersive story. By allowing players to assume control of an operator tasked with saving innocents, it gives them an objective and a priority. They must be careful in dealing with the situation and try their best not to hurt the hostage. This further promotes teamwork between players, prompts them to plan before attacking, and makes the game more realistic, tense and immersive. To heighten that realism, the team introduced a mechanic called "living hostage" to govern the hostage character's behaviour – e.g., coughing if there is dust in the air or shielding themselves from nearby gunfire. The environmental destruction mechanic was one of the game's most important elements. When the game's development began, the developer's in-house team completed their work on RealBlast Destruction, an engine that "procedurally breaks everything down" and remodels the environment. The development team thought that this technology fitted the game's style and gameplay, and decided to use it. This aspect of the game became increasingly important during development, and the team spent an extended period of time making sure that the destruction was authentic. As a result, the team implemented a materials-based tearing system, in which environmental objects of different materials show different reactions to players' attacks. To render the game's textures, the team used physically based rendering, even though it was ineffective during the game's early stage of production due to issues with the game's engine. A material bank and Substance Painter were utilized to create textures for environmental objects when they were damaged or destroyed. The team also implemented subtle visual cues to help players identify whether a structure was destructible or not, as opposed to "distracting" players with more-obvious hints.
The destruction mechanic prompted Ubisoft to change their level-design approach, as they had to ensure that a level was still logical and realistic when parts of the environment were destroyed. According to Ubisoft, "teamwork, tactics, and tension" were the game's three most important pillars. The team initially worked on a respawn feature, allowing players to rejoin a match after being killed. However, following several internal tournaments, the team realized that some of their employees would always win a match. They concluded that the respawn system worked to the benefit of strong players and placed individual skill above teamwork, which did not fit the developer's focus on game tactics. Removing the respawn feature meant greater consequences for taking risks, and players had to rely on their teammates in order to survive and achieve objectives. According to Chris Lee, the game's designer, the team initially worried that the system would only appeal to hardcore players. However, after several rounds of testing, they found that the removal of the respawn system provided new challenges to strong players and forced them to cooperate with their teammates – while it rewarded weaker players who were willing to take their time, plan their actions, and be strategic. The gameplay system was designed to allow players a lot of freedom. As a result, the team implemented the "Golden 3C Rules", which represent Character, Control, and Camera. Players are always controlling their own actions and movements, and the team intentionally avoided any animation that would disrupt the players. As a result, actions such as setting explosives or placing a breach charge can be cancelled immediately so that players can react and shoot. The game's camera only moves when the player moves, as the team feared that changes of camera angle might lead to players' in-game deaths.
A free-lean system was introduced to the game so that players can have more control over their line of sight. According to Ubisoft, this input-driven control mechanism makes the game feel more "natural" and "fluid", because it allows players to concentrate on planning and coordinating rather than worrying whether the camera or environment will interfere with their actions. Several gameplay elements were scrapped or removed from the final game. One of the features of its predecessors, artificial intelligence-controlled squadmates, was removed from single-player missions. This decision was made because the team wanted players to play with a squad controlled by actual players rather than computers. The team once considered adding a map editor so that players could design their own maps, but this plan never came to fruition. Hit markers, which would indicate an injury inflicted on an opponent, were removed because the team feared that players would abuse the system by "peppering the walls with gunfire" and use hit markers to locate enemies. Players cannot jump in the game, as real-life counter-terrorist unit operators do not jump while carrying out their missions. According to Louis Philippe, the game's audio director, the team originally used intense music and sounds to create tension. However, the team scrapped this idea, realizing that the best way to create a tense atmosphere is through the sounds made by other players, which are often unexpected. The team created Navigation Sounds, in which the sound a player makes is determined by their operator's weight, armour, and speed. Deploying gadgets, such as fortifications and breach charges, creates louder sounds that may reveal the player's presence. The team thought that this would be enjoyable for players and influence their gameplay experience. The game's music was composed by Paul Haslinger, who had worked on the scores of the previous Rainbow Six games and the Far Cry series.
His co-composer was Ben Frost, who made his video game soundtrack debut with Siege. Leon Purviance assisted Frost and Haslinger in composing the music.

Release

Ubisoft announced the game at their press conference during Electronic Entertainment Expo 2014. In August 2015, Ubisoft announced that they had delayed the game's release from October 10 to December 1, 2015, in order to give the team additional time to balance the game for cooperative multiplayer play. A closed alpha test was held by Ubisoft on April 7–13, 2015, in which players could play an early version of the game in order to help the development team test their servers and core gameplay loops, and to provide feedback. Ubisoft held a closed beta, starting on September 24, 2015, for further testing. The company originally wanted to hold another round of testing with the release of the game's open beta on November 25, 2015, but delayed it to November 26 due to matchmaking issues. Players who purchased Siege for the Xbox One could download Tom Clancy's Rainbow Six: Vegas and its sequel, Tom Clancy's Rainbow Six: Vegas 2, for free. To launch the game in Asian markets, Ubisoft announced plans to remove graphic imagery from all versions of the game via a patch. The plan was later withdrawn by the developer due to review bombing and negative fan feedback. The game had multiple versions for players to purchase. A season pass was announced on November 12, 2015. Players who bought the pass could gain early access to operators offered in the DLCs and receive several weapon skins. The game was also released alongside its Collector's Edition, which included the game's season pass, a hat, a compass and bottle opener, a backpack, and a 120-page guide.
A Starter Edition was released on PC in June 2016, featuring all content offered in the Standard Edition. It included two operators unlocked from the start, plus enough Rainbow 6 Credits to purchase up to two more of the player's choice; the rest had to be purchased with either Renown, at an increased cost, or additional Rainbow 6 Credits. The Starter Edition was cheaper than the Standard Edition and was initially available for a limited time. In February 2017, the Starter Edition became permanently available via Uplay. According to Ubisoft, the game adopted a "games as a service" approach, as they would provide long-term support for the game and offer post-release content to keep players engaged. The management team initially doubted the idea but eventually decided to approve it. The title was supported with many updates upon launch, with the company introducing fixes to bugs and improvements to both matchmaking and general gameplay mechanics. To enable players' involvement in the game's continued development, Ubisoft introduced the R6Fix programme in 2018, which allows players to submit bug reports to Ubisoft, which fixes the bugs and awards the players in-game items. They also introduced an auto-kick system, which automatically removes players from a match when they kill friendly players, and launched the BattlEye system in August 2016 to punish cheaters. To counter toxicity within the game's community, in mid-2018 Ubisoft began issuing bans to any player who used racist or homophobic slurs. All downloadable content maps were released to all players for free. All downloadable operators can be unlocked using the in-game currency, though purchasing the season pass enables players to gain instant access to them. Players can purchase cosmetic items using real-world money, but the team did not wish to put gameplay content behind a paywall, in order to be more player-friendly.
The team avoided adding more modes to the game because most would not fit well with the game's close-quarters combat. Downloadable content for the game was divided into several seasons, each with a Mid-Season Reinforcement patch which added new weapons and modified some of the operators' core abilities. This post-release content was developed by the Montreal studio in conjunction with Blue Byte in Germany. Ubisoft announced that they would keep supporting the game and adding new playable characters for 10 more years. As a result, no sequel was planned. In January 2018, Ubisoft announced the introduction of 'Outbreak Packs', which are loot boxes that can be unlocked with R6 credits (which can be purchased with microtransactions) to gain character items. The company also announced that the base version of the game would be replaced by a bundle named The Advanced Edition, which includes the base game and a small number of Outbreak Packs and R6 credits. The changes resulted in a backlash from players, as existing players had to pay for new content while new players did not. Ubisoft compensated players by giving them a premium skin for free and announced plans to change the Standard Edition so that players could unlock new operators at a faster pace. In July 2018, Ubisoft announced the introduction of a limited-time pack named 'Sunsplash Packs', which were available to purchase with R6 credits and contained summer-themed cosmetics. In October 2018, Ubisoft unveiled the Crimsonveil Packs, which added a Halloween-themed weapon skin, charm, headgear, and uniform for four operators, plus a seasonal weapon skin and a matching charm.

Esports

Ubisoft also envisioned the game as an esports title. The company had their first meeting with David Hiltscher, vice president of ESL, in late 2013. ESL offered feedback on the game's balancing and helped the developer ensure that the game was suitable for competitive play.
The team focused on introducing new operators to provide variety for esports viewers after the game's release, a decision inspired by modern multiplayer online battle arena games such as Dota 2, as this type of game often has 80–100 playable characters. ESL and Ubisoft officially announced Tom Clancy's Rainbow Six Pro League, a global tournament for Windows and Xbox One players. The competition was held at the Intel Extreme Masters esports tournament on March 4, 2016. A European team, PENTA Sports, became the champion of the first season of the Rainbow Six Pro League after defeating another team, GiFu, in the final of the tournament held in May 2016. In 2017, it was revealed that Pro League Year Two would return, but Xbox One tournaments would not be featured. Ubisoft also held the Six Invitational tournaments in 2017 and 2018, in which top teams competed for the top prize. The 2018 tournament attracted 321,000 viewers on Twitch. Both Nathan Lawrence from Red Bull and Richie Shoemaker from Eurogamer compared the game favourably with Counter-Strike: Global Offensive, with both being hopeful that Siege could dethrone Global Offensive as the most successful competitive esports first-person shooter. Rainbow Six Siege Year 3 Season 4 was announced on November 18 at the Pro League Season 8 Finals in Rio de Janeiro and was set in Morocco. The Six Invitational 2020, held in February 2020, had the highest prize pool in Rainbow Six history, with $3,000,000 split among 16 teams; the victors, Spacestation Gaming from North America, took home the lion's share of $1,000,000. At the Six Invitational 2020, Ubisoft also announced sweeping changes to both the game itself and the competitive scene, including the end of Pro League and a new points-based system. These changes to the competitive scene have been compared to those of Dota 2 and League of Legends.
Downloadable content

Crossovers

Rainbow Six Siege operators have made two appearances in another Clancy series, Ghost Recon, in DLC missions. In Operation Archangel, a summer 2018 DLC mission for Wildlands, Valkyrie and Twitch travel to Bolivia after Caveira has gone AWOL and is suspected of killing several members of the Santa Blanca Mexican drug cartel, which the Ghosts are working to bring down under the leadership of Nomad and their CIA contact Karen Bowman. The mission escalates into an operation to save Caveira's younger brother João, an undercover officer from the Federal Police of Brazil, from the cartel. In Amber Sky, a January 2021 DLC mission for Breakpoint, Ash, Finka and Thatcher, with Lesion as their point of contact, travel to the South Pacific island of Auroa to help Nomad and the Ghosts stop the private military contractor Sentinel, under the command of rogue former Ghost Lieutenant Colonel Cole D. Walker, from manufacturing and selling a chemical weapon called Amber Ruin.

Reception

The pre-release reception of the game was positive, with critics praising the game's design and the tension created during matches. In 2014, the game received four nominations from the Game Critics Awards: Best of Show, Best PC Game, Best Action Game and Best Online Multiplayer Game. The game eventually won the Best PC Game category. Tom Clancy's Rainbow Six Siege received "generally favorable" reviews from critics, according to review aggregator Metacritic. Critics generally praised the game's destructive environment, tactical nature, map design, and its focus on teamwork. However, the lack of content and the game's microtransactions drew criticism. The game's multiplayer was widely praised by critics. Chris Carter from Destructoid praised the game's open-ended nature, which made each match unpredictable and helped the experience stay fresh even after an extended period of playing.
GameSpot's Scott Butterworth appreciated the title for allowing players to make use of their creativity in approaching a mission. James Davenport from PC Gamer echoed this thought, describing Siege as a "psychological race" in which players are constantly trying to outwit their opponents. Ryan McCaffery from IGN also praised the tactical possibilities, which make the game "tense and riveting". The large number of operators available for players to choose from was praised by both Carter and Matt Bertz from Game Informer, who commented that they added depth and variety to the game and that players could experiment to see which pairs of operators complement each other. However, McCaffery was disappointed by the lack of variety in game modes and commented that most players neglect the modes' objectives and opt to simply eliminate their opponents. Terrorist Hunt divided critics. Carter thought that it was more relaxing, and Butterworth thought it was exhilarating. However, Bertz criticized its lack of variety, weak artificial intelligence, and its less-intense nature when compared with the player-versus-player modes. Martin Robinson from Eurogamer also noted that the mode only ran at 30 frames per second, which limited its appeal. The game's focus on tactics was praised. Bertz applauded the tactical nature of the game, as it fostered communication between players. However, he noted that teamwork may not be possible if players do not have a headset and microphone. Arthur Gies from Polygon echoed these comments, stating that the game's over-reliance on teamwork meant that when teammates were not communicating, the game would not be fun to play. The "No Respawn" system was praised by Butterworth for making each match feel intense, as even the best player needs to think tactically in order to win.
Jonathon Leack from Game Revolution enjoyed the scouting phase of a multiplayer match, which encouraged players to communicate with each other and coordinate their attacks. However, Gies noted that the placement of game objectives did not vary much, making the scouting phase meaningless. Both Bertz and Butterworth agreed that the game's competitive nature increased its replay value. Ben Griffin from GamesRadar praised the destruction mechanic for bringing tactical depth to the game. The gameplay received mixed reviews. Both Bertz and Griffin criticized the game's poor hit detection, which made the experience feel unfair. Bertz described the game's gunplay as "serviceable", while Leack noticed a delay in shooting, which dragged the game's pace and contributed to a steep learning curve. However, Leack appreciated the game's map design, which opened up many possibilities. He also praised its attention to detail and sound design, which can often make a multiplayer match feel like a "great action film". Bertz was disappointed by the lack of customization options, which did not offer long-term progression for players. Butterworth similarly criticized the progression system for being slow. As players in a match cannot pick the same operator, he was often forced to play as the generic "Recruit" character while he was still at the beginning stage of the game. He also criticized the limited weapon customization options, which barely affected gameplay. McCaffery described customization as the "least interesting" aspect of the game and claimed that most gameplay features were locked when players started playing. Griffin, Gies, and Steven Burns from VideoGamer.com were annoyed by the microtransactions featured in the game, with Griffin describing them as a greedy attempt by Ubisoft to make more money, though Davenport did not mind these features as they were limited to cosmetic items and could be unlocked by earning Renown.
Bertz was annoyed by the lack of clan infrastructure, which could cause players trouble when finding matches, while Griffin thought that map rotation often felt random and was disappointed that players could not vote on which map they would play next. Single-player was generally considered a disappointment by critics, with Situations receiving mixed reviews. Carter described it as one of his "favorite non-campaign additions", as the mode gave players an incentive to return due to its rating system. Butterworth described it as a "surprisingly robust" mode and thought that its missions were great tutorials that help players understand the gameplay before trying multiplayer. However, Bertz criticized it for its lack of replay value, and Griffin noted the missions' short length. McCaffery thought that it served as a competent tutorial, but its solo nature meant that players could not practice team play and tactics. Davenport criticized the narrative in Situations, which he thought was not meaningful. Many critics were disappointed by the lack of a single-player or cooperative campaign, but Butterworth believed that the strong multiplayer components compensated for this absence. Gies noted that certain network issues also affected the single-player content. Critics generally had a positive opinion of the overall package. Bertz thought that the game's multiplayer design laid a great foundation, but that it was not taken advantage of due to the small number of game modes. Leack felt that Siege's tight focus on tactical gameplay had "provided something unlike any other game on the market". Butterworth found the game unique, writing that there was "nothing else like it" once he put aside the game's minor annoyances. Griffin wrote that the title felt very fresh, as most games on the market did not value tactics.
Davenport similarly praised the game for being very focused and making no compromises on gameplay design, which in turn made the title one of the best tactical multiplayer shooters on the market. Gies recognized the game's potential but thought that it was overshadowed by the game's numerous technical annoyances, frustrating progression system and lack of content. Robinson was impressed by the game's multiplayer mode and felt that the overall package could be considered the year's best multiplayer game. However, he questioned Ubisoft for releasing the game with so little content while still selling it at full price. By December 2020, the game had more than 70 million registered players.

Games as a service

In May 2015, CEO of Ubisoft Yves Guillemot announced that the company expected the game to outsell 's seven million sales over the course of its lifetime because of post-launch support. At the game's launch, it debuted at number six in the UK Software Charts, selling 76,000 retail copies across all three platforms. Critics thought that the launch performance was underwhelming and lacklustre. However, through continued post-release support and updates, the player base had doubled since the game's launch. Following the summer 2016 launch of the third DLC, Skull Rain, the player base grew by 40%, and the title passed 10 million registered players. Two years after launch, the game remained one of the top 40 best-selling retail games in the UK. The strong performance of Siege, along with Tom Clancy's The Division (2016) and Tom Clancy's Ghost Recon Wildlands (2017), boosted the total number of players of the Tom Clancy's franchise to 44 million in 2017. In August 2017, Ubisoft announced that the game had passed 20 million players and was played by 2.3 million players every day. Two years after the game's launch, Ubisoft announced that the game had passed 25 million registered players.
As of February 2019, the game had more than 45 million registered players. Critics agreed that while the game suffered a rocky launch, Ubisoft's efforts in updating the game and fixing bugs had increased its quality and transformed it into a much better experience. IGN and Eurogamer re-reviewed the game in 2018 and both concluded that it had improved significantly since launch. Several years after the game's launch, Siege was regarded by some critics as one of the best multiplayer games released for PlayStation 4 and Xbox One, with praise directed at its distinctiveness and the similarities it shares with multiplayer online battle arena games and hero shooters. According to Remy, the team focused on player retention in the year after the game's launch, but the growth of the game's player base exceeded their expectations. He called the game "a testament" to the games as a service model. Jeff Grubb from VentureBeat attributed Siege's high player retention rate and successful esports events to Ubisoft's continuous and frequent updates to the game. GameSpot described Siege as "one of modern AAA gaming's biggest comebacks" and the best proof that the "games-as-a-service" model works well, attributing its success to Ubisoft's continuous updates and the thriving community. Haydn Taylor from GamesIndustry.biz praised Ubisoft's monetization methods, which were less aggressive than those of other gaming companies like Electronic Arts. Unlike titles such as Star Wars Battlefront II (2017), the game's monetization methods and use of loot boxes generated minimal backlash from players. He added that Ubisoft had shown, with Siege, "the delicate and reasoned approach that's been missing from the industry's clumsy, heavy-handed adoption of the games-as-a-service model". The post-launch success of Siege further solidified Ubisoft's belief in the model.
Future Ubisoft multiplayer-focused titles – such as For Honor – adopted this structure, in which the company would provide free DLC and updates for several years after a game's official release.

Accolades

Controversy

On November 2, 2018, Ubisoft Montreal announced that they were going to make "aesthetic changes" to Tom Clancy's Rainbow Six Siege by removing references to death, sex, and gambling in order to comply with regulations in Asian countries. However, the announcement generated opposition from the gaming community, who believed the changes were being made for the game's upcoming release in China and likened the move to censorship. Because of pressure from the community, Ubisoft Montreal announced on November 21 that they were reversing the decision, stating: "We have been following the conversation with our community closely over the past couple of weeks, alongside regular discussions with our internal Ubisoft team, and we want to ensure that the experience for all our players, especially those that have been with us from the beginning, remains as true to the original artistic intent as possible." Ubisoft filed a lawsuit against the Chinese developer Ejoy, as well as Apple and Google, in May 2020 over Ejoy's mobile game Area F2, which Ubisoft claimed was a clone of Siege. Ubisoft stated that "Virtually every aspect of [Area F2] is copied from [Siege], from the operator selection screen to the final scoring screen and everything in between". Ubisoft claimed they had attempted to have the game removed from Apple's and Google's respective app stores, but the two companies did not grant the removal; and as Area F2 was a free-to-play game with microtransactions, both companies were financially benefiting from the copyright violation and were thus included in the lawsuit.
During the Six Invitational 2022, Ubisoft announced that one of its upcoming majors for the game would take place in the United Arab Emirates, drawing criticism because of the UAE's treatment of LGBTQ+ people and concern for the safety of Ubisoft's LGBTQ+ personnel and players. Days later, Ubisoft announced their "decision to move the Six Major of August 2022 to another Rainbow Six Esports region" due to the response from the fanbase.
Atractyloside (ATR) is a natural, toxic glycoside present in numerous plant species of the daisy family worldwide, including Atractylis gummifera and Callilepis laureola, and it has been used for a variety of therapeutic, religious, and toxic purposes. Exposure to ATR via ingestion or physical contact is toxic and can be fatal for both humans and animals, particularly through kidney and liver failure. ATR acts as an effective inhibitor of the ADP/ATP translocase, eventually halting ADP/ATP exchange so that the cell dies from lack of energy. Historically, atractyloside poisoning has been challenging to verify and quantify toxicologically, though recent literature has described such methods within acceptable standards of forensic science.

Sources

Atractyloside is found in numerous plant species in the daisy family, e.g. Atractylis gummifera, Callilepis laureola, Xanthium strumarium, Iphiona alsoeri, Pascalia glauca, Wedelia glauca, and Iphiona aucheri, among others. It is also found in very low concentrations in Coffea arabica. The wide native ranges of these plants make ATR easily available worldwide. However, the ATR concentration found in plants depends on the species, season, and origin. For example, the ATR content measured in dried Atractylis gummifera from Sardinia and Sicily, Italy, was higher in the Sicilian samples by nearly a factor of five, and higher in colder months in both regions. Additionally, the preparation of ATR-containing plants in some traditional medicines affects the atractyloside content: the preparation technique, such as decoction or infusion, extracts the desired chemical compound, after which the contents may be diluted or concentrated.

History

Atractylosides have been used as poisons since at least 100 AD, though atractyloside was not isolated and characterized until 1868, by LeFranc, who extracted it from Atractylis gummifera.
After high-profile accidental poisonings – children in Italy and Algeria ate parts of the plant in 1955 and 1975, respectively – renewed interest in atractyloside spurred further research. Historically, the ATR plant sources have been used for numerous reasons: for their therapeutic properties, for magico-religious purposes, or for their toxicity. While the therapeutic uses may be due to the coincidental presence of other compounds, some uses of ATR-containing plants include treating sinusitis, headaches, and syphilitic ulcers, and whitening teeth, among other applications. Separately, Atractylis gummifera is a traditional herb used in North Africa, while Callilepis laureola is well known to the Zulu people in South Africa both for therapeutic applications and in a spiritual context, to ward away evil spirits. In high doses, ATR's toxicity has been exploited for suicide and murder, though no especially high-profile incidents have been reported, at least partly because of the difficulty of identifying ATR poisoning. More commonly, ATR poisoning is accidental: grazing can poison livestock, while unintended overdose of or exposure to a plant containing ATR can poison humans. In particular, Atractylis gummifera is easily confused with the wild artichoke and other vegetables, and its sweet-tasting roots facilitate its consumption.

Structure and reactivity

Atractyloside is a hydrophilic glycoside. A modified glucose is linked to the hydrophobic diterpene atractyligenin by a β1-glycosidic bond. A carboxyl group is positioned at the C4 atom in the axial position. The glucose part is esterified with isovaleric acid on the C2' atom, and with sulfuric acid on the C3' and C4' atoms. Hydrolysis thus yields one molecule each of D-(+)-glucose, isovaleric acid, and atractyligenin, and two molecules of sulfuric acid. The two sulfate groups and the carboxyl group of ATR are deprotonated under physiological conditions.
Thus, ATR carries a threefold negative charge. A variant of atractyloside carries an additional carboxyl group at the C4 atom of the atractyligenin; it is then referred to as carboxy-atractyloside (CATR), sometimes called "gummiferin". The chemical structures of ATR and CATR differ only in this group.

Mechanism of action
In biochemical studies of mitochondria, the effect of atractyloside on ADP/ATP transport was recognized even before the transporter itself was identified. ATR and CATR bind to the ADP/ATP translocase, which is located in the inner mitochondrial membrane. ATR binds to the translocase competitively, up to a concentration of 5 mmol/L, while CATR binds in a non-competitive manner. As a result, ADP and ATP are no longer exchanged, and the cell dies from lack of energy. The chemical structure and charge distribution of atractyloside resemble those of ADP: the sulfate groups correspond to the phosphate groups, the glucose part to the ribose part, and the hydrophobic atractyligenin residue to the hydrophobic purine residue of ADP. The carboxyl group on the C4 atom of the atractyligenin is important for toxicity: if it is reduced to a hydroxyl group (atractylitriol), the substance becomes non-toxic. Modification of either of the sulfate groups likewise renders the compound non-toxic. In contrast, the free hydroxy group on the C6 atom of the glucose moiety can be modified without loss of potency.

Poisoning
Symptoms
Plants containing atractyloside (ATR) oftentimes also contain carboxyatractyloside (CATR), a highly toxic glycoside. Ingestion of A. gummifera, C. laureola, Xanthium, or their extracts may result in gastrointestinal pain, nausea, diarrhea, and vomiting. Respiratory depression is also possible; it may cause hypoxemia, leading to tissue hypoxia, spasms, stiffness, and convulsions.
In several cases, these symptoms are followed by coma. Postmortem analysis may indicate hepatocellular damage and renal failure. More recent literature has also described sustained application of ATR to the skin causing the symptoms described above, including hepatorenal injury.

Identification / Quantification
The detection of herbal toxins has generally posed a diagnostic problem because of the wide variety of plants involved and the limits of standard screening. For a long time, the identification of ATR poisoning was limited to postmortem analysis of the victim's kidneys or liver. Subsequent methods developed to identify ATR in bodily fluids (blood or urine) worked only at high ATR concentrations. More recent research has established the sensitivity and specificity required for forensic toxicology. The development of the procedure below relied on findings from earlier, unsuccessful identification methods, whose specificity and sensitivity improved over time. Given the limited research on ATR identification, this literature comprises the primary sources to review:
1999: first quantifiable measurement of atractyloside in whole blood, by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS-MS);
2001: a GC-MS method required derivatization to detect atractyloside fragments;
2004: LC-MS (EI) using a Waters Thermabeam detector resulted in complete fragmentation of the molecule; a gentler ionization technique (ESI) was then used successfully to detect ATR after chromatographic separation;
2006: further development of the procedure with ESI, eluent composition, and other experimental conditions, though still lacking the specificity needed for forensic science.
The procedure by Carlier et al. uses high-performance liquid chromatography coupled with high-resolution tandem mass spectrometry (HPLC-HRMS/MS).
After extraction of ATR and CATR from the blood or urine sample, separation was performed by reverse-phase HPLC. MS detection used a quadrupole-Orbitrap high-resolution detector after heated electrospray ionization in negative mode. The extraction yielded 71.1% of ATR and 48.3% of CATR, and these results met accepted international criteria for forensic science: precision (≤15%, or ≤20% at the LLOQ) and accuracy (between 85 and 115%, or 80–120% at the LLOQ). For reference, additional sources have fully characterized atractyloside by NMR, MS, IR, and other techniques.

Lethality
The mean lethal dose in rats (i.p.) is 143 mg/kg for ATR and 2.9 mg/kg for CATR. At this lethal dose of ATR, acute tubular necrosis occurs approximately 150–180 minutes after injection. The lethal dose varies across species and routes of exposure. For example, the mean lethal dose of ATR in rats (s.c.) is 155 mg/kg. Published mean lethal doses of ATR in other species include 250 mg/kg (s.c.) for the rabbit, 200 mg/kg (i.p.) for the guinea pig, and 15 mg/kg (i.v.) for the dog.

See also
Carboxyatractyloside

References

Poisons Carboxylate esters Diols Carboxylic acids Alkene derivatives Diterpene glycosides Plant toxins ADP/ATP translocase inhibitors Sulfate esters
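The species- and route-dependent mean lethal doses above are quoted per kilogram of body weight; a small sketch can make the scaling arithmetic explicit. This is purely illustrative (the dictionary keys and the helper function are inventions for this example, not anything from the toxicology literature):

```python
# Illustrative arithmetic only: scaling the published mean lethal doses
# (mg per kg body weight, from the text above) to an absolute amount for
# a given body mass. Routes: i.p. = intraperitoneal, s.c. = subcutaneous,
# i.v. = intravenous.

LD50_MG_PER_KG = {
    ("rat", "i.p.", "ATR"): 143,
    ("rat", "i.p.", "CATR"): 2.9,
    ("rat", "s.c.", "ATR"): 155,
    ("rabbit", "s.c.", "ATR"): 250,
    ("guinea pig", "i.p.", "ATR"): 200,
    ("dog", "i.v.", "ATR"): 15,
}

def absolute_ld50_mg(species: str, route: str, compound: str, body_mass_kg: float) -> float:
    """Mean lethal dose in mg for an animal of the given body mass."""
    return LD50_MG_PER_KG[(species, route, compound)] * body_mass_kg

# For a 0.3 kg rat (i.p.), CATR is roughly 50 times more toxic than ATR:
print(absolute_ld50_mg("rat", "i.p.", "ATR", 0.3))   # ≈ 42.9 mg
print(absolute_ld50_mg("rat", "i.p.", "CATR", 0.3))  # ≈ 0.87 mg
```

The ratio 143/2.9 ≈ 49 is what the text means by CATR being far more toxic than ATR by the same route.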
Since haematopoietic stem cells cannot be isolated as a pure population, it is not possible to identify them under a microscope. Therefore, many techniques exist to isolate haematopoietic stem cells (HSCs). HSCs can be identified or isolated by flow cytometry, where a combination of several different cell-surface markers is used to separate the rare HSCs from the surrounding blood cells. HSCs lack expression of mature blood cell markers and are thus called Lin−. Lack of expression of lineage markers is used in combination with detection of several positive cell-surface markers to isolate HSCs. In addition, HSCs are characterized by their small size and low staining with vital dyes such as rhodamine 123 (rhodamine lo) or Hoechst 33342 (side population).

CD34+ cells can be isolated from peripheral blood samples by four different techniques:
by magnetic beads with MACS;
by FACS;
by labelled anti-CD34 antibodies;
manually, by culture: since CD34+ cells remain in suspension culture while almost all other cells in a PBMC sample adhere, CD34+ cells can be isolated through this process.

Cluster of differentiation and other markers
The classical marker of human HSCs is CD34, first described independently by Civin et al. and Tindle et al. It is used to isolate HSCs for reconstitution of patients who are haematologically incompetent as a result of chemotherapy or disease. Many markers belong to the cluster of differentiation series, such as CD34, CD38, CD90, CD133, CD105, and CD45, along with c-kit, the receptor for stem cell factor. There are many differences between the human and murine hematopoietic cell markers for the commonly accepted type of hematopoietic stem cells.
Mouse HSC: EMCN+, CD34lo/−, SCA-1+, Thy1.1+/lo, CD38+, C-kit+, lin−
Human HSC: EMCN+, CD34+, CD59+, Thy1/CD90+, CD38lo/−, C-kit/CD117+, lin−
However, not all stem cells are covered by these combinations, which have nonetheless become popular. In fact, even in humans, there are hematopoietic stem cells that are CD34−/CD38−.
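The human HSC marker combination above can be expressed as a simple gating predicate. This is a minimal sketch, not a lab protocol: the gate dictionary and the simplified "+/lo/−" marker values are assumptions for illustration, whereas real FACS gating works on continuous fluorescence intensities:

```python
# Sketch: encoding the human HSC surface-marker combination from the text
# (EMCN+, CD34+, CD59+, Thy1/CD90+, CD38lo/-, C-kit/CD117+, lin-) as a
# predicate over per-cell marker classifications. Marker values are
# simplified here to "+", "lo", or "-".

HUMAN_HSC_GATE = {
    "EMCN": {"+"},
    "CD34": {"+"},
    "CD59": {"+"},
    "CD90": {"+"},          # Thy1
    "CD38": {"lo", "-"},
    "CD117": {"+"},         # c-kit
    "lin": {"-"},           # negative for mature lineage markers
}

def is_candidate_hsc(cell: dict) -> bool:
    """True if every marker of the cell matches the human HSC gate."""
    return all(cell.get(marker) in allowed
               for marker, allowed in HUMAN_HSC_GATE.items())

cell = {"EMCN": "+", "CD34": "+", "CD59": "+", "CD90": "+",
        "CD38": "lo", "CD117": "+", "lin": "-"}
print(is_candidate_hsc(cell))  # True
```

As the text notes, such a gate is a surrogate: a CD34−/CD38− stem cell would be rejected by this predicate even though it is a genuine HSC.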
Also, some later studies suggested that the earliest stem cells may lack c-kit on the cell surface. For human HSCs, the use of CD133 was a step ahead, as both CD34+ and CD34− HSCs are CD133+. Traditional purification methods used to yield a reasonable purity level of mouse hematopoietic stem cells generally require a large (~10–12) battery of markers, most of which were surrogate markers with little functional significance, and thus only partially overlap with the stem cell populations and sometimes with other closely related cells that are not stem cells. Also, some of these markers (e.g., Thy1) are not conserved across mouse species, and the use of markers like CD34− for HSC purification requires mice to be at least 8 weeks old.

SLAM code
Alternative methods that could give rise to a similar or better harvest of stem cells are an active area of research and are presently emerging. One such method uses a signature of SLAM-family cell-surface molecules. The SLAM (signaling lymphocyte activation molecule) family is a group of more than 10 molecules whose genes are located mostly in tandem in a single locus on chromosome 1 (mouse), all belonging to a subset of the immunoglobulin gene superfamily, and originally thought to be involved in T-cell stimulation. This family includes CD48, CD150, CD244, and others; CD150 is the founding member and is thus also known as SLAMF1, i.e., SLAM family member 1.
The signature SLAM codes for the hemopoietic hierarchy are:
Hematopoietic stem cells (HSC): CD150+CD48−CD244−
Multipotent progenitor cells (MPP): CD150−CD48−CD244+
Lineage-restricted progenitor cells (LRP): CD150−CD48+CD244+
Common myeloid progenitor (CMP): lin−SCA-1−c-kit+CD34+CD16/32mid
Granulocyte-macrophage progenitor (GMP): lin−SCA-1−c-kit+CD34+CD16/32hi
Megakaryocyte-erythroid progenitor (MEP): lin−SCA-1−c-kit+CD34−CD16/32low
For HSCs, CD150+CD48− was sufficient instead of CD150+CD48−CD244−, because CD48 is a ligand for CD244, and both would be positive only in activated lineage-restricted progenitors. This code appeared more efficient than the more tedious earlier sets of large numbers of markers, and it is also conserved across mouse strains; however, recent work has shown that this method excludes a large number of HSCs and includes an equally large number of non-stem cells. CD150+CD48− gave stem cell purity comparable to Thy1loSCA-1+lin−c-kit+ in mice.

LT-HSC/ST-HSC/early MPP/late MPP
Irving Weissman's group at Stanford University was the first to isolate mouse hematopoietic stem cells, in 1986, and was also the first to work out the markers distinguishing mouse long-term (LT-HSC) and short-term (ST-HSC) hematopoietic stem cells (both self-renewal-capable) from multipotent progenitors (MPP, with low or no self-renewal capability; the later the developmental stage of the MPP, the lower the self-renewal ability and the higher the expression of markers like CD4 and CD135):
LT-HSC: CD34−, CD38−, SCA-1+, Thy1.1+/lo, C-kit+, lin−, CD135−, Slamf1/CD150+
ST-HSC: CD34+, CD38+, SCA-1+, Thy1.1+/lo, C-kit+, lin−, CD135−, Slamf1/CD150+, Mac-1 (CD11b)lo
Early MPP: CD34+, SCA-1+, Thy1.1−, C-kit+, lin−, CD135+, Slamf1/CD150−, Mac-1 (CD11b)lo, CD4lo
Late MPP: CD34+, SCA-1+, Thy1.1−, C-kit+, lin−, CD135high, Slamf1/CD150−, Mac-1 (CD11b)lo, CD4lo

References

Stem cell research
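The SLAM code described above is, in effect, a lookup table from a cell's CD150/CD48/CD244 status to its place in the hemopoietic hierarchy. A minimal sketch (the table and function names are illustrative, not standard nomenclature):

```python
# Sketch of the SLAM-code lookup from the text: CD150/CD48/CD244 status
# mapped to hematopoietic stem cells (HSC), multipotent progenitors (MPP),
# and lineage-restricted progenitors (LRP). Purely illustrative.

SLAM_CODES = {
    ("+", "-", "-"): "HSC",   # CD150+ CD48- CD244-
    ("-", "-", "+"): "MPP",   # CD150- CD48- CD244+
    ("-", "+", "+"): "LRP",   # CD150- CD48+ CD244+
}

def classify_by_slam(cd150: str, cd48: str, cd244: str) -> str:
    """Return the SLAM-code class, or 'unclassified' for other combinations."""
    return SLAM_CODES.get((cd150, cd48, cd244), "unclassified")

print(classify_by_slam("+", "-", "-"))  # HSC
```

Note that, as the text explains, CD150+CD48− alone already suffices for the HSC gate, since CD244 positivity only co-occurs with CD48 positivity in activated lineage-restricted progenitors.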
The Heidelberg Research Architecture (HRA) is the Digital Humanities unit of the Cluster of Excellence "Asia and Europe in a Global Context" at Heidelberg University. It embraces digital resource development, project consultation, and training for researchers and students. The HRA's specialists work with researchers and students at the Cluster to form an integrated digital humanities environment for transcultural studies. Its agenda ranges from developing the Tamboti ecosystem, which allows transcultural relations to be traced and analysed across diverse media types (texts, images, films, audio) and disparate objects (texts, concepts, social networks), to offering hands-on workshops on digital humanities tools for researchers working at the Cluster and Heidelberg University. Furthermore, digital resources and their production are increasingly integrated into teaching environments. The HRA advises researchers on how to conceptualize and implement digital resources to structure their research. As a cooperation partner at the local, national, and international levels, the HRA works with partners from both the public and the private sector.

The Tamboti ecosystem
At the core of the HRA's work is the development of the Tamboti ecosystem. Tamboti is modular and currently (November 2014) features these modules:
MODS editor
Ziziphus VRA Core 4 editor
Atomic Wiki
Tamboti serves as a central access point to data from several different collections, where users can store and share data, but also edit and comment on it. The ecosystem facilitates research collaboration and academic exchange in the field of Digital Humanities. Tamboti contains multilingual bibliographical information on books and articles as well as research-based metadata on image collections, videos, and various forms of digital material. It is possible to search the different types of sources and establish semantically qualified relations between them.
Tamboti's modular design makes it flexible to use and adapt for research. On the back end, Tamboti uses eXist-db. Data is stored in XML using international metadata schemas, such as MODS, MADS, VRA Core 4, or TEI, which allows for both interchangeability and sustainability of the knowledge. Tamboti is open-source software. International metadata standards and terminologies are used to ensure sustainability.

References

Heidelberg University Digital humanities
In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful in purely mathematical contexts such as monstrous moonshine and the geometric Langlands correspondence. The related notion of vertex algebra was introduced by Richard Borcherds in 1986, motivated by a construction of an infinite-dimensional Lie algebra due to Igor Frenkel. In the course of this construction, one employs a Fock space that admits an action of vertex operators attached to elements of a lattice. Borcherds formulated the notion of vertex algebra by axiomatizing the relations between the lattice vertex operators, producing an algebraic structure that allows one to construct new Lie algebras by following Frenkel's method. The notion of vertex operator algebra was introduced as a modification of the notion of vertex algebra, by Frenkel, James Lepowsky, and Arne Meurman in 1988, as part of their project to construct the moonshine module. They observed that many vertex algebras that appear 'in nature' carry an action of the Virasoro algebra, and satisfy a bounded-below property with respect to an energy operator. Motivated by this observation, they added the Virasoro action and bounded-below property as axioms. We now have post-hoc motivation for these notions from physics, together with several interpretations of the axioms that were not initially known. Physically, the vertex operators arising from holomorphic field insertions at points in two-dimensional conformal field theory admit operator product expansions when insertions collide, and these satisfy precisely the relations specified in the definition of vertex operator algebra. 
Indeed, the axioms of a vertex operator algebra are a formal algebraic interpretation of what physicists call chiral algebras (not to be confused with the more precise notion with the same name in mathematics) or "algebras of chiral symmetries", where these symmetries describe the Ward identities satisfied by a given conformal field theory, including conformal invariance. Other formulations of the vertex algebra axioms include Borcherds's later work on singular commutative rings, algebras over certain operads on curves introduced by Huang, Kriz, and others, D-module-theoretic objects called chiral algebras introduced by Alexander Beilinson and Vladimir Drinfeld, and factorization algebras, also introduced by Beilinson and Drinfeld. Important basic examples of vertex operator algebras include the lattice VOAs (modeling lattice conformal field theories), VOAs given by representations of affine Kac–Moody algebras (from the WZW model), the Virasoro VOAs, which are VOAs corresponding to representations of the Virasoro algebra, and the moonshine module V♮, which is distinguished by its monster symmetry. More sophisticated examples such as affine W-algebras and the chiral de Rham complex on a complex manifold arise in geometric representation theory and mathematical physics.

Formal definition
Vertex algebra
A vertex algebra is a collection of data that satisfy certain axioms.

Data
a vector space V, called the space of states. The underlying field is typically taken to be the complex numbers, although Borcherds's original formulation allowed for an arbitrary commutative ring.
an identity element 1 ∈ V, sometimes written |0⟩ to indicate a vacuum state.
an endomorphism T : V → V, called "translation". (Borcherds's original formulation included a system of divided powers of T, because he did not assume the ground ring was divisible.)
a linear multiplication map Y : V ⊗ V → V((z)), where V((z)) is the space of all formal Laurent series with coefficients in V.
This structure has some alternative presentations:
as an infinite collection of bilinear products u_n v, where u, v ∈ V and n ∈ Z, such that for each pair u, v there is an N with u_n v = 0 for n ≥ N.
as a left-multiplication map V → End(V)[[z, z^−1]]. This is the 'state-to-field' map of the so-called state-field correspondence. For each u ∈ V, the endomorphism-valued formal distribution Y(u, z) is called a vertex operator or a field, and the coefficient of z^(−n−1) is the operator u_n. In the context of vertex algebras, a field is more precisely an element of End(V)[[z, z^−1]], which can be written A(z) = Σ_n A_n z^(−n−1) such that A_n v = 0 for any v ∈ V for n sufficiently large (which may depend on v). The standard notation for the multiplication is
Y(u, z)v = Σ_{n ∈ Z} u_n v z^(−n−1).

Axioms
These data are required to satisfy the following axioms:
Identity. For any u ∈ V, Y(1, z)u = u and Y(u, z)1 ∈ u + zV[[z]].
Translation. T(1) = 0, and for any u, v ∈ V, [T, Y(u, z)]v = ∂_z Y(u, z)v.
Locality (Jacobi identity, or Borcherds identity). For any u, v ∈ V, there exists a positive integer N such that:
(z − x)^N Y(u, z)Y(v, x) = (z − x)^N Y(v, x)Y(u, z).

Equivalent formulations of locality axiom
The locality axiom has several equivalent formulations in the literature, e.g., Frenkel–Lepowsky–Meurman introduced the Jacobi identity:
z_0^−1 δ((z_1 − z_2)/z_0) Y(u, z_1)Y(v, z_2) − z_0^−1 δ((z_2 − z_1)/(−z_0)) Y(v, z_2)Y(u, z_1) = z_2^−1 δ((z_1 − z_0)/z_2) Y(Y(u, z_0)v, z_2),
where we define the formal delta series by:
δ(z) = Σ_{n ∈ Z} z^n.
Borcherds initially used two identities, holding for any vectors u, v, and w and integers m and n, relating iterated products of the operators u_m; he later gave a more expansive version, for any vectors u, v, and w and integers m, n, and q, that is equivalent but easier to use. Finally, there is a formal function version of locality: for any u, v, w ∈ V, there is an element X(u, v, w; z, x) ∈ V[[z, x]][z^−1, x^−1, (z − x)^−1] such that Y(u, z)Y(v, x)w and Y(v, x)Y(u, z)w are the corresponding expansions of X(u, v, w; z, x) in V((z))((x)) and V((x))((z)).

Vertex operator algebra
A vertex operator algebra is a vertex algebra equipped with a conformal element ω ∈ V, such that the vertex operator Y(ω, z) is the weight-two Virasoro field L(z):
Y(ω, z) = Σ_{n ∈ Z} L_n z^(−n−2),
and satisfies the following properties:
[L_m, L_n] = (m − n)L_{m+n} + (c/12)(m^3 − m)δ_{m+n,0} Id_V, where c is a constant called the central charge, or rank of V. In particular, the coefficients of this vertex operator endow V with an action of the Virasoro algebra with central charge c.
L_0 acts semisimply on V with integer eigenvalues that are bounded below.
Under the grading provided by the eigenvalues of L_0, the multiplication on V is homogeneous in the sense that if u and v are homogeneous, then u_n v is homogeneous of degree deg(u) + deg(v) − n − 1.
The identity 1 has degree 0, and the conformal element ω has degree 2.
L_−1 = T.
A homomorphism of vertex algebras is a map of the underlying vector spaces that respects the additional identity, translation, and multiplication structure. Homomorphisms of vertex operator algebras have "weak" and "strong" forms, depending on whether they respect conformal vectors.

Commutative vertex algebras
A vertex algebra V is commutative if all vertex operators commute with each other. This is equivalent to the property that all products Y(u, z)v lie in V[[z]], or that u_n v = 0 for n ≥ 0. Thus, an alternative definition for a commutative vertex algebra is one in which all vertex operators Y(u, z) are regular at z = 0. Given a commutative vertex algebra, the constant terms of the multiplication endow the vector space with a commutative and associative ring structure, the vacuum vector 1 is a unit, and T is a derivation. Hence the commutative vertex algebra equips V with the structure of a commutative unital algebra with derivation. Conversely, any commutative ring V with derivation T has a canonical vertex algebra structure, where we set Y(u, z)v = (e^(zT)u)v, so that Y restricted to z = 0 is the multiplication map with the algebra product. If the derivation T vanishes, we may set ω = 0 to obtain a vertex operator algebra concentrated in degree zero.
Any finite-dimensional vertex algebra is commutative. Thus even the smallest examples of noncommutative vertex algebras require significant introduction.

Basic properties
The translation operator T in a vertex algebra induces infinitesimal symmetries on the product structure and satisfies the following properties:
Y(u, z)1 = e^(zT)u, so T is determined by Y.
(skew-symmetry) Y(u, z)v = e^(zT)Y(v, −z)u.
For a vertex operator algebra, the other Virasoro operators satisfy similar properties:
(quasi-conformality) for all .
(Associativity, or Cousin property): For any u, v, w ∈ V, the element X(u, v, w; z, x) given in the definition also expands to Y(Y(u, z − x)v, x)w in V((x))((z − x)).
The associativity property of a vertex algebra follows from the fact that the commutator of Y(u, z) and Y(v, x) is annihilated by a finite power of z − x, i.e., one can expand it as a finite linear combination of derivatives of the formal delta function δ(z − x), with coefficients in End(V).
Reconstruction: Let V be a vertex algebra, and let {u^a} be a set of vectors, with corresponding fields u^a(z). If V is spanned by monomials in the positive-weight coefficients of the fields (i.e., finite products of operators u^a_n applied to 1, where n is negative), then we may write the operator product of such a monomial as a normally ordered product of divided-power derivatives of fields (here, normal ordering means polar terms on the left are moved to the right). More generally, if one is given a vector space V with an endomorphism T and vector 1, and one assigns to a set of vectors a set of fields that are mutually local, whose positive-weight coefficients generate V, and that satisfy the identity and translation conditions, then this normally ordered product formula describes a vertex algebra structure.

Operator product expansion
In vertex algebra theory, due to associativity, we can abuse notation to write, for u, v, w ∈ V:
Y(u, z)Y(v, x)w = Σ_{n ∈ Z} Y(u_n v, x)w (z − x)^(−n−1).
This is the operator product expansion. Equivalently,
Y(u, z)Y(v, x) = Σ_{n ≥ 0} Y(u_n v, x)(z − x)^(−n−1) + :Y(u, z)Y(v, x):.
Since the normal-ordered part is regular in z and x, this can be written more in line with physics conventions as
Y(u, z)Y(v, x) ~ Σ_{n ≥ 0} Y(u_n v, x)(z − x)^(−n−1),
where the equivalence relation ~ denotes equivalence up to regular terms.

Commonly used OPEs
Here some OPEs frequently found in conformal field theory are recorded.

Examples from Lie algebras
The basic examples come from infinite-dimensional Lie algebras.

Heisenberg vertex operator algebra
A basic example of a noncommutative vertex algebra is the rank 1 free boson, also called the Heisenberg vertex operator algebra. It is "generated" by a single vector b, in the sense that by applying the coefficients of the field b(z) := Y(b, z) to the vector 1, we obtain a spanning set.
The underlying vector space is the infinite-variable polynomial ring C[x_1, x_2, ...], where, for positive n, b_−n acts by multiplication by x_n, and b_n acts as n ∂/∂x_n. The action of b_0 is multiplication by zero, producing the "momentum zero" Fock representation V_0 of the Heisenberg Lie algebra (generated by b_n for integers n, with commutation relations [b_n, b_m] = n δ_{n,−m}), induced by the trivial representation of the subalgebra spanned by b_n, n ≥ 0.
The Fock space V_0 can be made into a vertex algebra by the following definition of the state-operator map on a basis, with each
Y(b_{−n_1} b_{−n_2} ··· b_{−n_k} 1, z) = :(∂^{n_1−1} b(z)/(n_1 − 1)!) ··· (∂^{n_k−1} b(z)/(n_k − 1)!):,
where :O: denotes normal ordering of an operator O. The vertex operators may also be written as a functional of a multivariable function f as:
Y[f, z] := :f(b(z), b′(z)/1!, b′′(z)/2!, ...):,
if we understand that each term in the expansion of f is normal ordered.
The rank n free boson is given by taking an n-fold tensor product of the rank 1 free boson. For any vector b in n-dimensional space, one has a field b(z) whose coefficients are elements of the rank n Heisenberg algebra, whose commutation relations have an extra inner-product term: [b_n, c_m] = n (b, c) δ_{n,−m}.
The Heisenberg vertex operator algebra has a one-parameter family of conformal vectors, with parameter s, given by
ω_s = (1/2) x_1^2 + s x_2,
with central charge c = 1 − 12 s^2. When s = 0, there is the following formula for the Virasoro character:
tr_{V_0} q^{L_0} = ∏_{n ≥ 1} (1 − q^n)^{−1}.
This is the generating function for partitions, and is also written as q^{1/24} times the weight −1/2 modular form 1/η (the reciprocal of the Dedekind eta function). The rank n free boson then has an n-parameter family of Virasoro vectors, and when those parameters are zero, the character is q^{n/24} times the weight −n/2 modular form η^{−n}.

Virasoro vertex operator algebra
Virasoro vertex operator algebras are important for two reasons: First, the conformal element in a vertex operator algebra canonically induces a homomorphism from a Virasoro vertex operator algebra, so they play a universal role in the theory.
Second, they are intimately connected to the theory of unitary representations of the Virasoro algebra, and these play a major role in conformal field theory. In particular, the unitary Virasoro minimal models are simple quotients of these vertex algebras, and their tensor products provide a way to combinatorially construct more complicated vertex operator algebras.
The Virasoro vertex operator algebra is defined as an induced representation of the Virasoro algebra: if we choose a central charge c, there is a unique one-dimensional module for the subalgebra C[z]∂z + K for which K acts by c·Id and C[z]∂z acts trivially, and the corresponding induced module is spanned by polynomials in L_−n = −z^(−n+1)∂z as n ranges over integers greater than 1. The module then has partition function
tr q^{L_0} = ∏_{n ≥ 2} (1 − q^n)^{−1}.
This space has a vertex operator algebra structure, where the vertex operators are defined by the conformal element ω = L_−2 1 and the field Y(ω, z) = L(z) = Σ_{n ∈ Z} L_n z^(−n−2). The fact that the Virasoro field L(z) is local with respect to itself can be deduced from the formula for its self-commutator:
[L(z), L(x)] = (c/12) ∂_x^3 δ(z − x) + 2 L(x) ∂_x δ(z − x) + ∂_x L(x) δ(z − x),
where c is the central charge.
Given a vertex algebra homomorphism from a Virasoro vertex algebra of central charge c to any other vertex algebra, the vertex operator attached to the image of ω automatically satisfies the Virasoro relations, i.e., the image of ω is a conformal vector. Conversely, any conformal vector in a vertex algebra induces a distinguished vertex algebra homomorphism from some Virasoro vertex operator algebra.
The Virasoro vertex operator algebras are simple, except when c has the form 1 − 6(p − q)^2/pq for coprime integers p, q strictly greater than 1 – this follows from Kac's determinant formula. In these exceptional cases, one has a unique maximal ideal, and the corresponding quotient is called a minimal model. When p = q + 1, the vertex algebras are unitary representations of Virasoro, and their modules are known as discrete series representations.
They play an important role in conformal field theory in part because they are unusually tractable, and for small p, they correspond to well-known statistical mechanics systems at criticality, e.g., the Ising model, the tricritical Ising model, the three-state Potts model, etc. By work of Weiqiang Wang concerning fusion rules, we have a full description of the tensor categories of unitary minimal models. For example, when c = 1/2 (Ising), there are three irreducible modules with lowest L_0-weight 0, 1/2, and 1/16, and the fusion ring is Z[x, y]/(x^2 − 1, y^2 − x − 1, xy − y).

Affine vertex algebra
By replacing the Heisenberg Lie algebra with an untwisted affine Kac–Moody Lie algebra (i.e., the universal central extension of the loop algebra on a finite-dimensional simple Lie algebra), one may construct the vacuum representation in much the same way as the free boson vertex algebra is constructed. This algebra arises as the current algebra of the Wess–Zumino–Witten model, which produces the anomaly that is interpreted as the central extension.
Concretely, pulling back the central extension along the inclusion yields a split extension, and the vacuum module is induced from the one-dimensional representation of the latter on which a central basis element acts by some chosen constant called the "level". Since central elements can be identified with invariant inner products on the finite-type Lie algebra, one typically normalizes the level so that the Killing form has level twice the dual Coxeter number. Equivalently, level one gives the inner product for which the longest root has norm 2. This matches the loop algebra convention, where levels are discretized by the third cohomology of simply connected compact Lie groups.
By choosing a basis J^a of the finite-type Lie algebra, one may form a basis of the affine Lie algebra using J^a_n = J^a t^n together with a central element K.
By reconstruction, we can describe the vertex operators by normally ordered products of derivatives of the fields
J^a(z) = Σ_{n ∈ Z} J^a_n z^(−n−1).
When the level is non-critical, i.e., the inner product is not minus one half of the Killing form, the vacuum representation has a conformal element, given by the Sugawara construction. For any choice of dual bases J^a, J_a with respect to the level 1 inner product, the conformal element is
ω = (1/(2(k + h^∨))) Σ_a J_{a,−1} J^a_{−1} 1
and yields a vertex operator algebra whose central charge is k·dim(g)/(k + h^∨). At critical level, the conformal structure is destroyed, since the denominator is zero, but one may produce operators L_n for n ≥ −1 by taking a limit as k approaches criticality.

Modules
Much like ordinary rings, vertex algebras admit a notion of module, or representation. Modules play an important role in conformal field theory, where they are often called sectors. A standard assumption in the physics literature is that the full Hilbert space of a conformal field theory decomposes into a sum of tensor products of left-moving and right-moving sectors. That is, a conformal field theory has a vertex operator algebra of left-moving chiral symmetries, a vertex operator algebra of right-moving chiral symmetries, and the sectors moving in a given direction are modules for the corresponding vertex operator algebra.

Definition
Given a vertex algebra V with multiplication Y, a V-module is a vector space M equipped with an action Y^M : V ⊗ M → M((z)), satisfying the following conditions:
(Identity) Y^M(1, z) = Id_M.
(Associativity, or Jacobi identity) For any u, v ∈ V, w ∈ M, there is an element X(u, v, w; z, x) ∈ M[[z, x]][z^−1, x^−1, (z − x)^−1] such that Y^M(u, z)Y^M(v, x)w and Y^M(Y(u, z − x)v, x)w are the corresponding expansions of X(u, v, w; z, x) in M((z))((x)) and M((x))((z − x)). Equivalently, a "Jacobi identity" analogous to that of Frenkel–Lepowsky–Meurman holds.
The modules of a vertex algebra form an abelian category. When working with vertex operator algebras, the previous definition is sometimes given the name weak V-module, and genuine V-modules must respect the conformal structure given by the conformal vector ω.
More precisely, they are required to satisfy the additional condition that L_0 acts semisimply with finite-dimensional eigenspaces and eigenvalues bounded below in each coset of Z. Work of Huang, Lepowsky, Miyamoto, and Zhang has shown at various levels of generality that modules of a vertex operator algebra admit a fusion tensor product operation and form a braided tensor category.
When the category of V-modules is semisimple with finitely many irreducible objects, the vertex operator algebra V is called rational. Rational vertex operator algebras satisfying an additional finiteness hypothesis (known as Zhu's C2-cofiniteness condition) are known to be particularly well-behaved, and are called regular. For example, Zhu's 1996 modular invariance theorem asserts that the characters of modules of a regular VOA form a vector-valued representation of SL(2, Z). In particular, if a VOA is holomorphic, that is, its representation category is equivalent to that of vector spaces, then its partition function is SL(2, Z)-invariant up to a constant. Huang showed that the category of modules of a regular VOA is a modular tensor category, and its fusion rules satisfy the Verlinde formula.

Heisenberg algebra modules
Modules of the Heisenberg algebra can be constructed as Fock spaces V_λ for λ ∈ C, which are induced representations of the Heisenberg Lie algebra, given by a vacuum vector v_λ satisfying b_n v_λ = 0 for n > 0 and b_0 v_λ = λ v_λ, and being acted on freely by the negative modes b_−n for n > 0. The space can be written as C[b_−1, b_−2, ...] v_λ. Every irreducible, Z-graded Heisenberg algebra module with gradation bounded below is of this form.
These are used to construct lattice vertex algebras, which as vector spaces are direct sums of Heisenberg modules, when the image of Y is extended appropriately to module elements.
The module category is not semisimple, since one may induce a representation of the abelian Lie algebra where b_0 acts by a nontrivial Jordan block.
For the rank n free boson, one has an irreducible module Vλ for each vector λ in complex n-dimensional space. Each vector b ∈ Cn yields the operator b0, and the Fock space Vλ is distinguished by the property that each such b0 acts as scalar multiplication by the inner product (b, λ). Twisted modules Unlike ordinary rings, vertex algebras admit a notion of twisted module attached to an automorphism. For an automorphism σ of order N, the action has the form V ⊗ M → M((z1/N)), with the following monodromy condition: if u ∈ V satisfies σ u = exp(2πik/N)u, then un = 0 unless n satisfies n+k/N ∈ Z (there is some disagreement about signs among specialists). Geometrically, twisted modules can be attached to branch points on an algebraic curve with a ramified Galois cover. In the conformal field theory literature, twisted modules are called twisted sectors, and are intimately connected with string theory on orbifolds. Additional examples Vertex operator algebra defined by an even lattice The lattice vertex algebra construction was the original motivation for defining vertex algebras. It is constructed by taking a sum of irreducible modules for the Heisenberg algebra corresponding to lattice vectors, and defining a multiplication operation by specifying intertwining operators between them. That is, if is an even lattice (if the lattice is not even, the structure obtained is instead a vertex superalgebra), the lattice vertex algebra decomposes into free bosonic modules as: Lattice vertex algebras are canonically attached to double covers of even integral lattices, rather than the lattices themselves. While each such lattice has a unique lattice vertex algebra up to isomorphism, the vertex algebra construction is not functorial, because lattice automorphisms have an ambiguity in lifting. 
The double covers in question are uniquely determined up to isomorphism by the following rule: elements have the form ±eα for lattice vectors α (i.e., there is a map to Λ sending ±eα to α that forgets signs), and multiplication satisfies the relations eαeβ = (–1)^(α,β) eβeα. Another way to describe this is that given an even lattice Λ, there is a unique (up to coboundary) normalised cocycle ε(α, β) with values ±1 such that ε(α, β)ε(β, α) = (–1)^(α,β), where the normalization condition is that ε(α, 0) = ε(0, α) = 1 for all α ∈ Λ. This cocycle induces a central extension of Λ by a group of order 2, and we obtain a twisted group ring with basis eα (α ∈ Λ), and multiplication rule eαeβ = ε(α, β)eα+β – the cocycle condition on ε ensures associativity of the ring. The vertex operator attached to lowest weight vector in the Fock space is where eα is a shorthand for the linear map that takes any element of the α-Fock space to the monomial eα. The vertex operators for other elements of the Fock space are then determined by reconstruction. As in the case of the free boson, one has a choice of conformal vector, given by an element s of the vector space Λ ⊗ C, but the condition that the extra Fock spaces have integer L0 eigenvalues constrains the choice of s: for an orthonormal basis xi, the vector (1/2) Σi xi,1² + s2 must satisfy (s, λ) ∈ Z for all λ ∈ Λ, i.e., s lies in the dual lattice. If the even lattice Λ is generated by its "root vectors" (those satisfying (α, α) = 2), and any two root vectors are joined by a chain of root vectors with consecutive inner products non-zero, then the vertex operator algebra is the unique simple quotient of the vacuum module of the affine Kac–Moody algebra of the corresponding simply laced simple Lie algebra at level one. This is known as the Frenkel–Kac (or Frenkel–Kac–Segal) construction, and is based on the earlier construction by Sergio Fubini and Gabriele Veneziano of the tachyonic vertex operator in the dual resonance model.
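The cocycle condition and the commutator relation described above can be verified numerically for a small example. This is an illustrative sketch: the A2 root lattice and the particular bimultiplicative normalization below are assumptions chosen for the demonstration, not taken from the article.

```python
# Sketch: a standard bimultiplicative 2-cocycle on an even lattice,
# here the A2 root lattice with the Gram matrix below.

GRAM = [[2, -1], [-1, 2]]          # Gram matrix of the A2 root lattice (even)

def inner(a, b):
    """Lattice inner product (a, b) with respect to GRAM."""
    return sum(a[i] * GRAM[i][j] * b[j] for i in range(2) for j in range(2))

def eps(a, b):
    """Bimultiplicative cocycle: eps(e_i, e_j) = (-1)^(e_i, e_j) for i > j,
    and 1 for i <= j, extended bimultiplicatively to the whole lattice."""
    exponent = sum(a[i] * b[j] * GRAM[i][j]
                   for i in range(2) for j in range(2) if i > j)
    return (-1) ** (exponent % 2)

vectors = [(1, 0), (0, 1), (1, 1), (2, -1), (-1, 2)]
for a in vectors:
    for b in vectors:
        # commutator relation eps(a,b) * eps(b,a) = (-1)^(a,b) on an even lattice
        assert eps(a, b) * eps(b, a) == (-1) ** (inner(a, b) % 2)
        for c in vectors:
            # 2-cocycle identity ensuring associativity of the twisted group ring
            ab = tuple(x + y for x, y in zip(a, b))
            bc = tuple(x + y for x, y in zip(b, c))
            assert eps(a, b) * eps(ab, c) == eps(b, c) * eps(a, bc)
```

Because the exponent is bilinear, the cocycle identity holds automatically, and evenness of the diagonal Gram entries gives the sign rule eαeβ = (–1)^(α,β) eβeα.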
Among other features, the zero modes of the vertex operators corresponding to root vectors give a construction of the underlying simple Lie algebra, related to a presentation originally due to Jacques Tits. In particular, one obtains a construction of all ADE type Lie groups directly from their root lattices. This is commonly considered the simplest way to construct the 248-dimensional group E8. Monster vertex algebra The monster vertex algebra (also called the "moonshine module") is the key to Borcherds's proof of the Monstrous moonshine conjectures. It was constructed by Frenkel, Lepowsky, and Meurman in 1988. It is notable because its character is the j-invariant with no constant term, j – 744 = q⁻¹ + 196884q + 21493760q² + ..., and its automorphism group is the monster group. It is constructed by orbifolding the lattice vertex algebra constructed from the Leech lattice by the order 2 automorphism induced by reflecting the Leech lattice in the origin. That is, one forms the direct sum of the Leech lattice VOA with the twisted module, and takes the fixed points under an induced involution. Frenkel, Lepowsky, and Meurman conjectured in 1988 that it is the unique holomorphic vertex operator algebra with central charge 24 and partition function j – 744. This conjecture is still open. Chiral de Rham complex Malikov, Schechtman, and Vaintrob showed that by a method of localization, one may canonically attach a bcβγ (boson–fermion superfield) system to a smooth complex manifold. This complex of sheaves has a distinguished differential, and the global cohomology is a vertex superalgebra. Ben-Zvi, Heluani, and Szczesny showed that a Riemannian metric on the manifold induces an N=1 superconformal structure, which is promoted to an N=2 structure if the metric is Kähler and Ricci-flat, and a hyperkähler structure induces an N=4 structure. Borisov and Libgober showed that one may obtain the two-variable elliptic genus of a compact complex manifold from the cohomology of the Chiral de Rham complex.
If the manifold is Calabi–Yau, then this genus is a weak Jacobi form. Vertex algebra associated to a surface defect A vertex algebra can arise as a subsector of a higher dimensional quantum field theory which localizes to a two real-dimensional submanifold of the space on which the higher dimensional theory is defined. A prototypical example is the construction of Beem, Lemos, Liendo, Peelaers, Rastelli, and van Rees, which associates a vertex algebra to any 4d N=2 superconformal field theory. This vertex algebra has the property that its character coincides with the Schur index of the 4d superconformal theory. When the theory admits a weak coupling limit, the vertex algebra has an explicit description as a BRST reduction of a bcβγ system. Vertex operator superalgebras By allowing the underlying vector space to be a superspace (i.e., a Z/2Z-graded vector space V = V+ ⊕ V–), one can define a vertex superalgebra by the same data as a vertex algebra, with 1 in V+ and T an even operator. The axioms are essentially the same, but one must incorporate suitable signs into the locality axiom, or one of the equivalent formulations. That is, if a and b are homogeneous, one compares Y(a,z)Y(b,w) with εY(b,w)Y(a,z), where ε is –1 if both a and b are odd and 1 otherwise. If in addition there is a Virasoro element ω in the even part of V2, and the usual grading restrictions are satisfied, then V is called a vertex operator superalgebra. One of the simplest examples is the vertex operator superalgebra generated by a single free fermion ψ. As a Virasoro representation, it has central charge 1/2, and decomposes as a direct sum of Ising modules of lowest weight 0 and 1/2. One may also describe it as a spin representation of the Clifford algebra on the quadratic space t1/2C[t,t−1](dt)1/2 with residue pairing. The vertex operator superalgebra is holomorphic, in the sense that all modules are direct sums of itself, i.e., the module category is equivalent to the category of vector spaces.
The tensor square of the free fermion is called the free charged fermion, and by boson–fermion correspondence, it is isomorphic to the lattice vertex superalgebra attached to the odd lattice Z. This correspondence has been used by Date–Jimbo–Kashiwara–Miwa to construct soliton solutions to the KP hierarchy of nonlinear PDEs. Superconformal structures The Virasoro algebra has some supersymmetric extensions that naturally appear in superconformal field theory and superstring theory. The N=1, 2, and 4 superconformal algebras are of particular importance. Infinitesimal holomorphic superconformal transformations of a supercurve (with one even local coordinate z and N odd local coordinates θ1,...,θN) are generated by the coefficients of a super-stress–energy tensor T(z, θ1, ..., θN). When N=1, T has even part given by a Virasoro field L(z), and odd part given by a field G(z), subject to commutation relations. By examining the symmetry of the operator products, one finds that there are two possibilities for the field G: the indices n are either all integers, yielding the Ramond algebra, or all half-integers, yielding the Neveu–Schwarz algebra. These algebras have unitary discrete series representations at central charge c = (3/2)(1 – 8/(m(m+2))) for integers m ≥ 3, and unitary representations for all c greater than 3/2, with lowest weight h only constrained by h ≥ 0 for Neveu–Schwarz and h ≥ c/24 for Ramond. An N=1 superconformal vector in a vertex operator algebra V of central charge c is an odd element τ ∈ V of weight 3/2, such that G−1/2τ = ω, and the coefficients of G(z) yield an action of the N=1 Neveu–Schwarz algebra at central charge c. For N=2 supersymmetry, one obtains even fields L(z) and J(z), and odd fields G+(z) and G−(z). The field J(z) generates an action of the Heisenberg algebra (described by physicists as a U(1) current). There are both Ramond and Neveu–Schwarz N=2 superconformal algebras, depending on whether the indexing on the G fields is integral or half-integral.
However, the U(1) current gives rise to a one-parameter family of isomorphic superconformal algebras interpolating between Ramond and Neveu–Schwarz, and this deformation of structure is known as spectral flow. The unitary representations are given by discrete series with central charge c = 3 – 6/m for integers m at least 3, and a continuum of lowest weights for c > 3. An N=2 superconformal structure on a vertex operator algebra is a pair of odd elements τ+, τ− of weight 3/2, and an even element μ of weight 1 such that τ± generate G±(z), and μ generates J(z). For N=3 and 4, unitary representations only have central charges in a discrete family, with c = 3k/2 and 6k, respectively, as k ranges over positive integers. Additional constructions Fixed point subalgebras: Given an action of a symmetry group on a vertex operator algebra, the subalgebra of fixed vectors is also a vertex operator algebra. In 2013, Miyamoto proved that two important finiteness properties, namely Zhu's condition C2 and regularity, are preserved when taking fixed points under finite solvable group actions. Current extensions: Given a vertex operator algebra and some modules of integral conformal weight, one may under favorable circumstances describe a vertex operator algebra structure on the direct sum. Lattice vertex algebras are a standard example of this. Another family of examples are framed VOAs, which start with tensor products of Ising models, and add modules that correspond to suitably even codes. Orbifolds: Given a finite cyclic group acting on a holomorphic VOA, it is conjectured that one may construct a second holomorphic VOA by adjoining irreducible twisted modules and taking fixed points under an induced automorphism, as long as those twisted modules have suitable conformal weight. This is known to be true in special cases, e.g., groups of order at most 3 acting on lattice VOAs.
The coset construction (due to Goddard, Kent, and Olive): Given a vertex operator algebra V of central charge c and a set S of vectors, one may define the commutant C(V,S) to be the subspace of vectors v that strictly commute with all fields coming from S, i.e., such that Y(s,z)v ∈ V[[z]] for all s ∈ S. This turns out to be a vertex subalgebra, with Y, T, and identity inherited from V. If S is a VOA of central charge cS, the commutant is a VOA of central charge c–cS. For example, the embedding of SU(2) at level k+1 into the tensor product of two SU(2) algebras at levels k and 1 yields the Virasoro discrete series with p=k+2, q=k+3, and this was used to prove their existence in the 1980s. Again with SU(2), the embedding of level k+2 into the tensor product of level k and level 2 yields the N=1 superconformal discrete series. BRST reduction: For any degree 1 vector v whose zero mode satisfies v0² = 0, the cohomology of the operator v0 has a graded vertex superalgebra structure. More generally, one may use any weight 1 field whose residue has square zero. The usual method is to tensor with fermions, as one then has a canonical differential. An important special case is quantum Drinfeld–Sokolov reduction applied to affine Kac–Moody algebras to obtain affine W-algebras as degree 0 cohomology. These W-algebras also admit constructions as vertex subalgebras of free bosons given by kernels of screening operators. Related algebraic structures If one considers only the singular part of the OPE in a vertex algebra, one arrives at the definition of a Lie conformal algebra. Since one is often only concerned with the singular part of the OPE, this makes Lie conformal algebras a natural object to study. There is a functor from vertex algebras to Lie conformal algebras that forgets the regular part of OPEs, and it has a left adjoint, called the "universal vertex algebra" functor.
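The central charge bookkeeping in the GKO coset example above is easy to check numerically. The sketch below relies on two standard background facts not spelled out in the text: the Sugawara central charge of affine su(2) at level k is 3k/(k + 2), and the unitary Virasoro discrete series with q = p + 1 has c = 1 − 6/(pq).

```python
# Numerical check: the coset su(2)_k x su(2)_1 / su(2)_{k+1} has central
# charge c(k) + c(1) - c(k+1), which should match the Virasoro minimal
# series value 1 - 6/(pq) with p = k+2, q = k+3.

def sugawara_su2(k):
    """Central charge of the affine su(2) vacuum module at level k."""
    return 3.0 * k / (k + 2)

def minimal_series(p, q):
    """Central charge of the (p, q) unitary Virasoro minimal model, q = p+1."""
    return 1.0 - 6.0 / (p * q)

for k in range(1, 20):
    coset_charge = sugawara_su2(k) + sugawara_su2(1) - sugawara_su2(k + 1)
    assert abs(coset_charge - minimal_series(k + 2, k + 3)) < 1e-9

# k = 1 recovers the Ising model, c = 1/2
print(sugawara_su2(1) + sugawara_su2(1) - sugawara_su2(2))
```

For k = 1 the coset gives c = 1 + 1 − 3/2 = 1/2, the Ising model, consistent with p = 3, q = 4 in the discrete series.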
Vacuum modules of affine Kac–Moody algebras and Virasoro vertex algebras are universal vertex algebras, and in particular, they can be described very concisely once the background theory is developed. There are several generalizations of the notion of vertex algebra in the literature. Some mild generalizations involve a weakening of the locality axiom to allow monodromy, e.g., the abelian intertwining algebras of Dong and Lepowsky. One may view these roughly as vertex algebra objects in a braided tensor category of graded vector spaces, in much the same way that a vertex superalgebra is such an object in the category of super vector spaces. More complicated generalizations relate to q-deformations and representations of quantum groups, such as in work of Frenkel–Reshetikhin, Etingof–Kazhdan, and Li. Beilinson and Drinfeld introduced a sheaf-theoretic notion of chiral algebra that is closely related to the notion of vertex algebra, but is defined without using any visible power series. Given an algebraic curve X, a chiral algebra on X is a DX-module A equipped with a multiplication operation on X×X that satisfies an associativity condition. They also introduced an equivalent notion of factorization algebra that is a system of quasicoherent sheaves on all finite products of the curve, together with a compatibility condition involving pullbacks to the complement of various diagonals. Any translation-equivariant chiral algebra on the affine line can be identified with a vertex algebra by taking the fiber at a point, and there is a natural way to attach a chiral algebra on a smooth algebraic curve to any vertex operator algebra. See also Operator algebra Zhu algebra Notes Citations Sources Conformal field theory Lie algebras Non-associative algebra
Vertex operator algebra
Mathematics
7,763
893,433
https://en.wikipedia.org/wiki/Lightweight%20Extensible%20Authentication%20Protocol
Lightweight Extensible Authentication Protocol (LEAP) is a proprietary wireless LAN authentication method developed by Cisco Systems. Important features of LEAP are dynamic WEP keys and mutual authentication (between a wireless client and a RADIUS server). LEAP allows clients to re-authenticate frequently; upon each successful authentication, the clients acquire a new WEP key (with the hope that the WEP keys don't live long enough to be cracked). LEAP may be configured to use TKIP instead of dynamic WEP. Some third-party vendors also support LEAP through the Cisco Compatible Extensions Program. An unofficial description of the protocol is available. Security considerations Cisco LEAP, similar to WEP, has had well-known security weaknesses since 2003 involving offline password cracking. LEAP uses a modified version of MS-CHAP, an authentication protocol in which user credentials are not strongly protected. Stronger authentication protocols employ a salt to strengthen the credentials against eavesdropping during the authentication process. Cisco's response to the weaknesses of LEAP suggests that network administrators either force users to have stronger, more complicated passwords or move to another authentication protocol also developed by Cisco, EAP-FAST, to ensure security. Automated tools like ASLEAP demonstrate the simplicity of getting unauthorized access in networks protected by LEAP implementations. References Cisco protocols Wireless networking
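The point about salting can be illustrated with generic password hashing. This is a sketch only: it is not LEAP's actual MS-CHAP exchange, and SHA-256 here is just a stand-in digest function to show why unsalted credentials invite precomputed dictionary attacks.

```python
# Sketch: without a salt, two users with the same password produce identical
# digests, so one precomputed dictionary cracks every account at once.
# A per-user salt forces the attacker to redo the work for each account.

import hashlib
import os

def unsalted(password):
    return hashlib.sha256(password.encode()).hexdigest()

def salted(password, salt):
    return hashlib.sha256(salt + password.encode()).hexdigest()

alice = unsalted("hunter2")
bob = unsalted("hunter2")
assert alice == bob            # identical digests: precomputation pays off

salt_a, salt_b = os.urandom(16), os.urandom(16)
assert salted("hunter2", salt_a) != salted("hunter2", salt_b)  # per-user digests
```

The same contrast explains why ASLEAP-style offline cracking is effective against LEAP's unsalted MS-CHAP credentials.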
Lightweight Extensible Authentication Protocol
Technology,Engineering
265
68,453,203
https://en.wikipedia.org/wiki/Avy%20B.V.
Avy B.V. is a Dutch technology company that develops and operates drones and aerial networks for long-range missions. Avy B.V.'s drones can take off and land vertically like a helicopter and fly longer distances than a quadcopter because of their fixed-wing configuration. Its second drone, the Avy Aera, is a VTOL fixed-wing drone and was released at Amsterdam Drone Week in 2019 (Dec 4th - 6th). History Avy B.V. was founded in 2016 by Patrique Zaman in Amsterdam, Netherlands.
2015–2017: European Space Agency (ESA) Incubator.
2017: UAE Drones For Good Award; Avy competed in Dubai as one of the ten finalists.
2017: Avy exhibited in the Stedelijk Museum as part of the Design for Refugees exhibition.
2017: First BVLOS missions in three national parks in South Africa (Hluhluwe, Adventures with elephants, Leshiba).
2018: Move to new HQ in Amsterdam.
2018: Seed investment from Orange Wings.
2019: Release of the Avy Aera at Amsterdam Drone Week.
2019: Foundation of the Medical Drone Service consortium.
2020: Avy receives 1.4 million euros in subsidy grant from EU Horizon 2020.
2020: Avy takes part in the Lake Kivu Challenge, a VTOL drone competition hosted by the African Drone Forum in Rwanda. The company competed in the "Emergency Delivery" category and won a safety award.
2020: The company wins a Blue Tulip Award in the category of "Best Mobility Innovation".
2021: Launch of the "Drones for Health" project in partnership with the Botswana International University of Science and Technology (BIUST), the United Nations Population Fund (UNFPA) and the Botswana Ministry of Health and Wellness.
2021: Won an Airwards award in the "Emergency Response and SAR" category.
Products Avy Aera The Avy Aera was launched on Dec 4th, 2019 at Amsterdam Drone Week in the RAI. The Aera has external dimensions of 2400 mm x 1300 mm x 500 mm, and carries a maximum payload of up to 1.5 kg. A VTOL drone is a combination of a helicopter and a plane: it can take off and land vertically, while its wings extend its flight endurance.
This drone can cover up to 85 km and has one hour of flight time. The long-range drone can fly beyond visual line-of-sight (BVLOS) missions, and it has a modular payload, making it suitable for different applications. It can be equipped with a stabilized gimbal that has an RGB and a thermal camera for wildfire detection and monitoring. For medical deliveries, this model can transport a medical (cooled) cargo box, which is able to keep medical commodities such as blood, samples and vaccines in a temperature-controlled state between 2 and 8 °C. The Avy Aera is certified to fly BVLOS in compliance with the new EU drone regulations. Docking station The Avy Aera can be remotely and autonomously operated from the docking station, a locally placed and secured drone station where the drone can autonomously take off and land for checks and charging. The drone and the station are connected through software and are remotely operated from the network control center. This center can be separate or integrated inside the control room of emergency services. This whole system forms the infrastructure for an aerial drone network. Projects Healthcare Logistics The Medical Drones Service consortium was launched in late 2019. It consists of ANWB MAA (flight operator), PostNL (logistics provider), Erasmus MC (hospital), Isala (hospitals), Sanquin (blood bank), KPN (telecom), Certe (lab), and Avy, which collaboratively joined a three-year pilot to research and test how drones can contribute to delivering healthcare in the Netherlands and keep healthcare accessible in the future. The medical partners are important to develop the right kind of emergency service. Avy and KPN are the two technology partners. Halfway through the project, the first BVLOS flights were performed by ANWB MAA on different routes between hospitals and a blood bank in the Netherlands. Emergency Services With climate change increasing the risk of wildfires, rapid detection of early-stage wildfires becomes important.
Avy partnered with CHC Helicopters and the Safety Region of North Holland to research the use of drones for detection of early-stage wildfires. In February 2021, the Avy Aera (equipped with a stabilized gimbal camera with RGB and thermal functionality) performed several test flights in National Park the Hoge Veluwe for the security regions VNOG and Gelderland Midden. In September 2021, phases 2 and 3 of this project will start with more test flights above the Veluwe. Last-mile Medical Delivery In April 2021, Avy partnered with the Botswana International University of Science and Technology (BIUST), UNFPA and the Botswana Ministry of Health and Wellness to start the Drones for Health project. This aims to reduce the number of maternal deaths by using drones to deliver health supplies and emergency commodities. The Avy Aera was 65% faster than common road transport to reach certain communities. The Drones for Health project was officially launched on May 7, 2021, as it was initiated by BIUST, UNFPA, and the Ministry of Health & Wellness of Botswana. Drone Specifications Avy Aera
Dimensions: Wingspan: 2400 mm; Length: 1300 mm; Height: 500 mm; Transport case: 2000 x 600 x 600 mm; Weight: 12 kg
Payloads: Maximum payload weight: 1.5 kg; Payload volume: 200 x 275 x 135 mm (L x W x H); Cargo module: Default; Medical payload: Insulated for cooled transport; First response payload: Nighthawk 2
Flight performance: Flight time: 55 minutes; Range: 60 km; Cruise speed: 40 kt (74 km/h)
Awards In 2020, Avy won a Blue Tulip Award (organized by Accenture) in the Best Mobility Innovation category. In 2021, the company, in partnership with the Dutch fire brigade, won an Airwards award, the global award recognizing positive drone use cases, in the "Emergency Response and SAR" category. The project aims to build early wildfire warning systems with daily drone flights.
References Unmanned aerial vehicle manufacturers Companies based in Amsterdam Sustainable transport 2016 establishments in the Netherlands Technology companies of the Netherlands Companies of the Netherlands Privately held companies of the Netherlands Multinational companies headquartered in the Netherlands Dutch brands
Avy B.V.
Physics
1,312
54,285,105
https://en.wikipedia.org/wiki/Animal%20Cognition
Animal Cognition is a peer-reviewed scientific journal published by Springer Science+Business Media. It covers research in ethology, behavioral ecology, animal behavior, cognitive sciences, and all aspects of human and animal cognition. According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.084. References External links Ethology journals English-language journals Academic journals established in 1998 Springer Science+Business Media academic journals Cognitive science journals Animal cognition
Animal Cognition
Biology
93
5,791,910
https://en.wikipedia.org/wiki/Grosshans%20subgroup
In mathematics, in the representation theory of algebraic groups, a Grosshans subgroup, named after Frank Grosshans, is an algebraic subgroup of an algebraic group that is an observable subgroup for which the ring of functions on the quotient variety is finitely generated. References External links Invariants of Unipotent subgroups Representation theory of algebraic groups
Grosshans subgroup
Mathematics
75
3,640,804
https://en.wikipedia.org/wiki/Johan%20August%20Brinell
Johan August Brinell (10 October 1849 – 17 November 1925) was a Swedish metallurgical engineer. Brinell is noted as the creator of a method for quantifying the surface hardness of materials, now known as the Brinell hardness test. His name is also commemorated in the description of a failure mechanism of material surfaces known as Brinelling. Biography Brinell was born in Bringetofta, Nässjö Kommun, Sweden. He began his career as an engineer at the Lesjöfors Ironworks and in 1882 became chief engineer at the Fagersta Ironworks. In 1903 he became chief engineer at Jernkontoret, the Swedish Ironmasters' Association. He remained at that post until 1914. Brinell was elected a member of the Royal Swedish Academy of Sciences in 1902, and of the Royal Swedish Academy of Engineering Sciences in 1919. He was awarded the Bessemer Gold Medal of the Iron and Steel Institute in 1907. He died of pneumonia in 1925 in Stockholm. Legacy Brinell is best known today for the Brinell hardness test, which he proposed in 1900. In this test a 10-millimetre diameter hardened steel or carbide ball is pushed into the surface of the material being tested, under a 3000 kg imposed load. The size of the indentation the ball leaves in the material surface determines the Brinell Hardness Number, which is calculated as follows: BHN = load in kilograms divided by the spherical area of the indentation in square millimetres; this area is a function of the ball diameter and the depth of the indentation (refer to Brinell scale for the method of calculation). It is a rapid, non-destructive (except at the surface being tested) means of determining the hardness of metals. With minor variations, his test still remains in wide use. This method is best for measuring the macro-hardness of a material, particularly materials with a heterogeneous structure. The high school (gymnasium) in Nässjö is named after him. External links Complete Dictionary of Scientific Biography Svenskt porträttgalleri. XVII.
- Stockholm, 1905, Brinell, Johan August References Swedish metallurgists Members of the Royal Swedish Academy of Sciences Members of the Royal Swedish Academy of Engineering Sciences 1849 births 1925 deaths Bessemer Gold Medal
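The calculation described above, load divided by the spherical cap area of the indentation, has the standard closed form BHN = 2P/(πD(D − √(D² − d²))), where P is the load in kgf and D and d are the ball and indentation diameters in millimetres (see the Brinell scale article). A minimal sketch:

```python
# Brinell Hardness Number: load in kgf divided by the spherical cap area
# of the indentation in mm^2, BHN = 2P / (pi * D * (D - sqrt(D^2 - d^2))).

import math

def brinell_hardness(load_kgf, ball_diameter_mm, indentation_diameter_mm):
    D, d = ball_diameter_mm, indentation_diameter_mm
    cap_area = math.pi * D * (D - math.sqrt(D * D - d * d)) / 2.0  # mm^2
    return load_kgf / cap_area

# The standard 3000 kgf load and 10 mm ball, with a measured 4 mm indentation:
print(round(brinell_hardness(3000, 10, 4), 1))   # about 228.8 HB
```

A larger indentation diameter means a softer material and therefore a lower Brinell number.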
Johan August Brinell
Chemistry
476
17,671,142
https://en.wikipedia.org/wiki/SCSI%20RDMA%20Protocol
In computing, the SCSI RDMA Protocol (SRP) is a protocol that allows one computer to access SCSI devices attached to another computer via remote direct memory access (RDMA). The SRP protocol is also known as the SCSI Remote Protocol. The use of RDMA makes higher throughput and lower latency possible than what is generally available through e.g. the TCP/IP communication protocol. Though the SRP protocol has been designed to use RDMA networks efficiently, it is also possible to implement the SRP protocol over networks that do not support RDMA. History SRP was published as an ANSI standard (ANSI INCITS 365-2002) in 2002 and renewed in 2007 and 2019. Related Protocols As with the iSCSI Extensions for RDMA (iSER) communication protocol, there is the notion of a target (a system that stores the data) and an initiator (a client accessing the target), with the target initiating data transfers. In other words, when an initiator writes data to a target, the target executes an RDMA read to fetch the data from the initiator, and when an initiator issues a SCSI read command, the target sends an RDMA write to the initiator. While the SRP protocol is easier to implement than the iSER protocol, iSER offers more management functionality, e.g. the target discovery infrastructure enabled by the iSCSI protocol. Performance Bandwidth and latency of storage targets supporting the SRP or the iSER protocol should be similar. On Linux, there are two SRP and two iSER storage target implementations available that run inside the kernel (SCST and LIO) and an iSER storage target implementation that runs in user space (STGT). Measurements have shown that the SCST SRP target has a lower latency and a higher bandwidth than the STGT iSER target. This is probably because the RDMA communication overhead is lower for a component implemented in the Linux kernel than for a user space Linux process, and not because of protocol differences.
Implementations In order to use the SRP protocol, an SRP initiator implementation, an SRP target implementation and networking hardware supported by the initiator and target are needed. The following software SRP initiator implementations exist: Linux SRP initiator, available since November 2005 (kernel version 2.6.15). Windows SRP initiator, available through the winOFED InfiniBand stack. VMWare SRP initiator, available since January 2008 through Mellanox' OFED Drivers for VMware Infrastructure 3 and vSphere 4. Solaris 10 SRP initiator, available through Sun's download page. Solaris 11 and OpenSolaris SRP initiator, integrated as a component of project COMSTAR. The IBM POWER virtual SCSI client driver for Linux (ibmvscsi), available since January 2008 (kernel version 2.6.24). Virtual SCSI allows client logical partitions to access I/O devices (disk, CD, and tape) that are owned by another logical partition. The following SRP target implementations exist: The SCST SRP target implementation. This is a mature SRP target implementation available since 2008 via both SCST and OFED. Linux LIO SRP target, available since January 2012 (kernel version 3.3), based on the SCST SRP target. The IBM POWER virtual SCSI target driver (ibmvstgt), available since January 2008 (kernel version 2.6.24). DataDirect Network's (DDN) disk subsystems such as the S2A9900 and SFA10000, which use the SRP target implementation in the disk subsystem's controllers to present LUNs to servers (the servers act as SRP initiators). IBM's FlashSystem. The Solaris COMSTAR target, available since early 2009 in OpenSolaris and Solaris 11. See also iSCSI Extensions for RDMA (iSER) References Computer networking SCSI
SCSI RDMA Protocol
Technology,Engineering
846
52,526,855
https://en.wikipedia.org/wiki/Amanita%20proxima
Amanita proxima is a species of Amanita from France, Italy, and Spain. It is poisonous. References External links proxima Fungus species
Amanita proxima
Biology
35
9,126,301
https://en.wikipedia.org/wiki/Play%20date
A play date or playdate is an arranged appointment for children to meet and play. References External links Parenting Play (activity)
Play date
Biology
27
38,848,434
https://en.wikipedia.org/wiki/Biosensors%20and%20Bioelectronics
Biosensors and Bioelectronics is a peer-reviewed scientific journal published by Elsevier. It covers research on biosensors and bioelectronics. The journal was established in 1985 as Biosensors and obtained its current name in 1991. The journal was established by I. John Higgins (Cranfield University), W. Geoff Potter (Science and Engineering Research Council) and Anthony P.F. Turner (Cranfield University, later Linköping University), who became editor-in-chief and served until his retirement in 2019. The current editors-in-chief are Chenzhong Li (Tulane University), Arben Merkoçi (Catalan Institute of Nanoscience and Nanotechnology), and Man Bock Gu (Korea University). In 1990, the journal was complemented with an associated conference, Biosensors 90. The World Congress on Biosensors continues today. According to the Journal Citation Reports, the journal has a 2023 impact factor of 10.7 and a 5-year impact factor of 9.323. Biosensors & Bioelectronics is the principal international journal devoted to research, design, development, and application of biosensors and bioelectronics. It is an interdisciplinary journal serving professionals with an interest in the exploitation of biological materials in novel diagnostic and electronic devices. Biosensors are defined as analytical devices incorporating a biological material (e.g. tissue, microorganisms, organelles, cell receptors, enzymes, antibodies, nucleic acids, etc.), a biologically derived material, or a biomimetic intimately associated with or integrated within a physicochemical transducer or transducing microsystem, which may be optical, electrochemical, thermometric, piezoelectric or magnetic. Biosensors usually yield a digital electronic signal which is proportional to the concentration of a specific analyte or group of analytes. While the signal may in principle be continuous, devices can be configured to yield single measurements to meet specific market requirements.
Biosensors have been applied to a wide variety of analytical problems including in medicine, the environment, food, process industries, security, and defense. The emerging field of Bioelectronics seeks to exploit biology in conjunction with electronics in a wider context encompassing, for example, biomaterials for information processing, information storage, and actuators. A key aspect is the interface between biological materials and electronics. While endeavoring to maintain coherence in the scope of the journal, the editors will accept reviews and papers of obvious relevance to the community, which describe important new concepts, underpin an understanding of the field or provide important insights into the practical application of biosensors and bioelectronics. Abstracting and indexing The journal is abstracted and indexed in: PubMed, Current Contents, BIOSIS Previews, AGRICOLA, Cambridge Scientific Abstracts, Embase, Chemical Abstracts Service, Science Citation Index, INSPEC, and Scopus. References External links World Congress on Biosensors Biosensors Elsevier academic journals English-language journals Academic journals established in 1985
Biosensors and Bioelectronics
Biology
642
35,467,600
https://en.wikipedia.org/wiki/Morchella%20sextelata
Morchella sextelata is a species of ascomycete fungus in the family Morchellaceae. Described as new to science in 2012, it is found in North America (in Washington, Idaho, Montana, Wyoming, New Mexico, and Yukon Territory). It has also been found in China, although it is not known if this is a result of an accidental introduction or natural dispersion. The fruit bodies have a roughly conical cap up to tall and wide, with a surface of mostly vertically arranged pits. The cap is initially yellowish to brownish, but it darkens to become almost black in maturity. The stipe is white and hollow, measuring high by wide. Morchella sextelata is one of four species of wildfire-adapted morels in western North America, the others being M. capitata, M. septimelata, and M. tomentosa. M. sextelata cannot be reliably distinguished from M. septimelata without the use of DNA analysis. Taxonomy Morchella sextelata was originally identified as phylogenetic species "Mel-6" in the species-rich Elata clade (brown morels) elucidated by microbiologist Kerry O'Donnell and colleagues in a 2011 publication. The specific epithet sextelata alludes to this preliminary name. Although M. sextelata is not distinguishable from Morchella septimelata on physical or ecological characteristics, they are clearly genetically distinct species, and can be differentiated by comparing DNA sequences or with restriction fragment length polymorphism analysis. Allopatric speciation is thought to have been the driving evolutionary force that caused M. sextelata to diverge from its ancestors roughly 25 million years ago. The original specimens collected were obtained as part of the Morel Data Collection Project, a research effort designed to improve the understanding of North American morels. Description The fruit bodies of Morchella sextelata are high with a conical cap that is high and wide at the widest point. 
The cap surface features pits and ridges, formed by the intersection of 12–20 primary vertical ridges and frequent shorter, secondary vertical ridges, with occasional sunken, horizontal ridges. The cap is attached to the stipe with a sinus about 2–4 mm deep and 2–4 mm wide. The ridges are smooth or very finely tomentose (covered with densely matted filaments). They are initially colorless, becoming pale tan, then dark grayish brown in maturity, eventually darkening to nearly black. They are flattened when young but sometimes become sharpened or eroded in maturity. The pits are somewhat elongated vertically. They are smooth, brownish to yellowish tan to pinkish to buff. The whitish to pale brownish stipe is long by wide and is roughly equal in width throughout its length, or sometimes slightly club-shaped near the base. Its surface is either smooth or covered with whitish granules. The flesh is whitish, measuring 1–2 mm thick in the hollow cap; it may become layered and chambered in the base of the stipe. The sterile inner surface of the cap is whitish and pubescent (covered with short, soft "hair"). The ascospores of M. sextelata are elliptical and smooth, typically measuring 18–25 by 10–16 μm. Asci (spore-bearing cells) are eight-spored, hyaline (translucent), cylindrical, and measure 200–325 by 5–25 μm. Paraphyses are cylindrical, septate, and measure 175–300 by 2–15 μm. Their tips are variably shaped, from rounded, to club-shaped, to fuse-shaped. The contents of the paraphyses are hyaline (translucent) to faintly brownish in dilute potassium hydroxide (KOH). Hyphae on the sterile cap ridges are septate and measure 50–180 by 5–25 μm. The terminal cells are variably shaped (similar to the paraphyses), and have brownish contents in KOH. North American Morchella are generally considered choice edibles, but the edibility of M. sextelata was not mentioned in its original description. 
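The micromorphological ranges above lend themselves to a simple membership check. The sketch below is hypothetical (the helper names are invented; only the 18–25 × 10–16 μm ascospore range comes from the description): it tests whether a measured spore is consistent with the published range, while noting that such a match is not diagnostic on its own.

```python
# Published ascospore dimensions for Morchella sextelata (from the description above).
SPORE_LENGTH_UM = (18, 25)  # typical length range, micrometres
SPORE_WIDTH_UM = (10, 16)   # typical width range, micrometres

def within_range(value, bounds):
    low, high = bounds
    return low <= value <= high

def spore_matches(length_um, width_um):
    """Return True if a measured spore falls inside both published ranges.

    A match is merely consistent with M. sextelata, not diagnostic: the
    species cannot be reliably separated from M. septimelata without
    DNA analysis."""
    return (within_range(length_um, SPORE_LENGTH_UM)
            and within_range(width_um, SPORE_WIDTH_UM))

print(spore_matches(21.5, 12.0))  # → True (inside both ranges)
print(spore_matches(30.0, 12.0))  # → False (length outside range)
```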
Similar species Morchella sextelata is morphologically indistinguishable from several other morel species in the M. elata clade, including M. septimelata, M. brunnea, M. angusticeps, and M. septentrionalis. M. sextelata can be distinguished from the latter three lookalikes by habitat or distribution: M. brunnea is found in non-burned forests of western North America; M. angusticeps is found east of the Rocky Mountains; and M. septentrionalis is restricted to a northern distribution (about 44°N northward) in eastern North America. M. septimelata, however, also grows in burn sites and so is both morphologically and ecologically indistinguishable from M. sextelata. Although there are subtle differences in the structure of the sterile ridges between the species, the authors were not confident that enough specimens had been examined to establish that these differences were consistent. Habitat, distribution, and ecology Morchella sextelata may be either saprobic or mycorrhizal at different times in its life cycle. Its fruit bodies grow in partially burned conifer forests, particularly those dominated by Douglas fir (Pseudotsuga menziesii) and ponderosa pine (Pinus ponderosa). They tend to appear in great numbers the year immediately following fire and appear in decreasing frequency in successive years. Fruiting occurs from April through July, at elevations between . The distribution includes Washington, Idaho, Montana, Wyoming, and Yukon Territory. M. sextelata has also been found in China, but it remains unclear whether dispersal between these distant locations occurred naturally or through accidental introduction by humans. Morchella sextelata, identified as phylogenetic species "Mel-6", has been shown to colonize the non-native species Bromus tectorum (cheatgrass) as an endophyte, increasing the overall growth of the grass, as well as the abundance of seeds and their tolerance to extreme heat. 
This has been hypothesized to be a contributing factor in the success of cheatgrass as an invasive species in western North America. References External links sextelata Edible fungi Fungi described in 2012 Fungi of Asia Fungi of North America Fungus species
Morchella sextelata
Biology
1,334
35,468,052
https://en.wikipedia.org/wiki/Open-crotch%20pants
Open-crotch pants (Chinese: 开裆裤; pinyin: kāidāngkù), also known as open-crotch trousers or split pants, are worn by toddlers throughout mainland China. Often made of thick fabric, they are designed with either an unsewn seam over the buttocks and crotch or a hole over the central buttocks. Both allow children to urinate and defecate without the pants being lowered. The child simply squats, or is held by the parent, eliminating the need for diapers. The sight of the partially exposed buttocks of kaidangku-clad children in public places frequently astonishes foreign visitors, who often photograph them. They have been described as being "as much a sign of China as Chairman Mao's portrait looming over Tiananmen Square." In China they are often seen as a relic of the country's rural past, with younger mothers, particularly in cities, preferring to diaper their children instead. However, Western advocates of the elimination communication method of toilet training have pointed to the advantages of their use, specifically that children complete their toilet training more quickly and at an earlier age. Other benefits claimed include the elimination of diaper rash and reduction of the environmental problems caused by disposable diapers. Use Toilet training begins very early in China, sometimes within days of birth and usually no later than a month. Frequently babies are held closely by parents, grandparents or other extended family members caring for them, sensitive to when they need to relieve themselves. A child who appears ready to urinate or defecate is held over the toilet or any other receptacle available if a commode cannot be reached in time. The adult makes a soft, high-pitched whistle imitating the sound of running water or urine while holding the child in a bǎ (把), or bunched-up position (a term sometimes used for the whole process), to get the child to relax the appropriate muscles. Open-crotch pants are worn when the child is first taken out of the house. 
Mostly male children wear them; girls (and occasionally some boys) are put in infant-size sundresses. Their use continues even after wearers have gained some control over their bodily functions, since they may not have yet gained the stature or motor skills necessary to use a toilet. Instead, when outdoors, they use a wastebasket or large potted plant. If neither of those is available, caretakers often let the children use the sidewalk or any other available uncovered surface and clean it up themselves afterwards. History In 2003 The New York Times described open-crotch pants as having been in use in China for "decades". Seven years earlier, in her memoir Red China Blues, Chinese Canadian journalist Jan Wong speculated that their use evolved from chronic shortages of cloth, soap and water. While those items were in short supply, "people weren't," she wrote. "Someone was always available to ba a Chinese baby." Their use continued during the 20th century as China modernized in other ways. During the later years of Mao Zedong's rule, brightly colored kaidangku on the streets of Beijing offered a sharp contrast to the austere blue and gray tones of adult clothing prescribed by the Cultural Revolution. Even after the economic liberalizations promoted by Deng Xiaoping in the subsequent decades and the ensuing introduction of more Western culture and ideas, they remained in use for the vast majority of children in the People's Republic of China. When Wong, then a Chinese correspondent for Toronto's The Globe and Mail, bore a son in Beijing in 1990, only one hotel in the city sold disposable diapers. Since the hotel charged US$1 apiece, she decided to toilet-train him the traditional Chinese way. Western manufacturers of consumer products, including disposable diapers, continued to push for entry into the vast Chinese market. In 1998 the American company Procter & Gamble (P&G) was able to introduce its popular Pampers brand to China; competitors soon followed. 
However, Chinese parents at first saw no reason to abandon kaidangku for diapers, and the company struggled to gain market share. After re-engineering its diapers to be softer and selling them at a lower price than in the U.S., P&G launched its "Golden Sleep" campaign in 2007, informed by its market research, with advertisements claiming that babies slept better in diapers, which could in turn be better for their cognitive development. Even before that, attitudes had begun to change. Within five years of Pampers' introduction, about $200 million in disposable diapers were being sold in China annually, and many manufacturers reported their sales were growing by double-digit percentages. One of the foreign manufacturers, Japan's Unicharm, said in 2002 that its MamyPoko brand was so popular it was planning to build a plant in China to make them. The shift in attitudes had drastically reduced the use of open-crotch pants—upscale retailers no longer carried them, and Chinese parenting magazines depicted babies wearing diapers exclusively. Attitudes among Chinese had changed, as well. Mothers the Times talked to in 2003 dismissed kaidangku as out of step with the values of China's growing middle class. "Split pants? That's so old-fashioned!" one Shanghai woman said. "It's not hygienic. It's bad for the environment. Only poor people who live on farms wear them." A Guangzhou woman quoted in China Daily a year later agreed, calling them "uncivilized". People who could afford to buy diapers for their children did so, she asserted, and a Beijing post-natal care center advised mothers to use diapers no matter what the cost. Zhao Zhongxin, an education professor at Beijing Normal University, said open-crotch pants had become an indicator of socioeconomic status in the new China. "Children in the cities do not wear kaidangku anymore. But children in the countryside still do," he told China Daily. 
"This is the difference between the minds and living conditions of rural people and urban people," who, the paper added, might also be more mindful of city-government campaigns for cleaner public spaces overall, especially prior to the 2008 Summer Olympics in Beijing, which included exhortations to parents to diaper their children at least for the duration of the Games. A spokeswoman for domestic diaper maker Goodbaby admitted to the newspaper that it was harder to overcome resistance to diaper use outside cities. "Some people, especially farmers, may think they are too wasteful." Other mothers used both open-crotch pants and diapers depending on the situation. In 2003 the Times reported that they were still frequently seen on hot days in Shanghai, although they were no longer ubiquitous in those conditions. A Zhejiang woman who ran a fruit stand in the city told the newspaper that she dressed her son in them only in that weather, since it was more comfortable for him and reduced the risk of diaper rash. And one Beijing mother whom China Daily spoke to while she watched her kaidangku-clad son at a Beijing playground dismissed opposition to the pants. "Even if people don't think it looks good, that's a minority opinion," she said. "This is a Chinese tradition." By the end of the decade Pampers had become the top-selling diaper brand in China. Foreign and domestic observers alike confidently predicted the demise of kaidangku. In 2010 Brandchannel called them "a fading memory." Yet reports from China early in the next decade suggested their use continued. Advantages and disadvantages Despite the increasing prevalence of diaper use, which became a $3-billion industry in the country by 2010, enough Chinese parents still use open-crotch pants, or consider doing so, for parenting websites in that language to list their benefits and detriments to better help parents make an informed decision. 
Among the former are that their use offsets the infant's inability to communicate, eliminates the need for scheduled toileting times and greatly reduces the need to wash soiled clothing. Most frequently cited is the ability to begin and finish toilet training earlier. It is not uncommon for infants in kaidangku to begin being toilet trained before their first birthday and be fully trained around that milestone or shortly afterwards, before most of their Western counterparts have even begun. During a 1981 visit to a Beijing preschool, Fox Butterfield, then a Chinese correspondent for The New York Times, reported that he expressed skepticism from his own contemporaneous parenting experience over the possibility that children that young could be successfully toilet trained, only to have it immediately dispelled by a 14-month-old girl's timely use of the spittoon provided for her. However, parents are cautioned that kaidangku can be dirtier, leading to a higher risk of problems like urethritis, cystitis and other complications of urinary tract infections. Children in them are also believed to face a higher risk of frostbite in winter, and 163.com warns that boys with easy access to their exposed genitals "can easily develop bad habits." Goodbaby, the Shanghai-based diaper maker, lists some other problems with open-crotch pants on its website. In addition to the medical, sanitary and environmental drawbacks, it says that they show no respect for the child's privacy and that he may in the future be embarrassed by photographs of himself wearing them, particularly as they become less common. While it admits that kaidangku use results merely from different cultural values and not ignorance, it counsels, "we must admit foreign practices are more rigorous and show more respect for the child." Wong, in her memoir, describes another negative side effect. In the early 1990s, she reported on China's leading penis-enlargement surgeon. 
Many of his patients were men who, as children on farms, suffered serious injury to their organs when they squatted in their open-crotch pants in areas where dogs or pigs ate their own feces and the animals bit the boys' penises in the confusion. Some had never married because of the injury. "China desperately needed a Pampers factory, or at least a dog-food industry," she wrote. Lastly, in his 1996 memoir The Attic, artist Guanlong Cao recalled an incidental benefit of kaidangku to his parents. Use in the West As Chinese parents were migrating from kaidangku to diapers, some Western parents were going in the opposite direction, concerned about the environmental impact of used disposable diapers and the health effects on the child. In her 2006 book Diaper Free, Ingrid Bauer bemoaned the marketing success of diaper manufacturers in China. "The traditional kaidangku have rapidly disappeared from the major cities in the last half-decade and are rapidly being replaced by diapers ... Aggressive advertisers create an impression that consumer products are vastly superior to what mothers have practiced for eons and urge parents to buy what they can barely afford," she wrote. Around that same time, inspired by the Chinese example, parents in the U.S. and other Western countries began forming "diaper-free" support groups and practicing elimination communication toilet training on younger babies, using the ba whistling sound to incite urination. Some whom The New York Times talked to in 2005 said they had gone to that city's Chinatown to purchase open-crotch pants for their own children. Western parents working in China also saw the use of kaidangku up close, and in some cases decided to emulate Chinese methods in toilet training their own children. See also Infant clothing Swaddling, a type of infant clothing once dismissed as old-fashioned but increasingly used References External links Trousers and shorts Infants' clothing Chinese clothing Toilet training
Open-crotch pants
Biology
2,427
33,230,216
https://en.wikipedia.org/wiki/C20H34O
The molecular formula C20H34O (molar mass: 290.48 g/mol, exact mass: 290.2610 u) may refer to: Cembratrienol (CBTol), Geranylgeraniol, and Isotuberculosinol (also known as nosyberkol or edaxadiene). Molecular formulas
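The molar and exact masses quoted above follow directly from the formula. This short Python sketch recomputes both from per-element values (standard atomic weights and principal-isotope masses as commonly tabulated; treat the constants as approximate reference values, not authoritative data).

```python
# Recompute the masses of C20H34O from per-element values.
# Standard atomic weights (g/mol) and principal-isotope exact masses (u),
# as commonly tabulated.
ATOMIC_WEIGHT = {"C": 12.0107, "H": 1.00794, "O": 15.9994}
EXACT_MASS = {"C": 12.000000, "H": 1.007825, "O": 15.994915}

FORMULA = {"C": 20, "H": 34, "O": 1}  # C20H34O

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
exact_mass = sum(EXACT_MASS[el] * n for el, n in FORMULA.items())

print(f"{molar_mass:.2f} g/mol")  # → 290.48 g/mol
print(f"{exact_mass:.4f} u")      # → 290.2610 u
```

The molar mass uses isotope-abundance-weighted atomic weights, while the exact (monoisotopic) mass uses only the most abundant isotope of each element, which is why the two figures differ.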
C20H34O
Physics,Chemistry
88