Dataset schema (each record below lists: id, url, text, source, categories, token_count):
id: int64 (values 580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (values 3 to 51.8k)
10,922,177
https://en.wikipedia.org/wiki/British%20Museum%20leather%20dressing
British Museum leather dressing has been used by many conservators since its publication to protect and conserve leather.

Formulation
The basic formulation combines lanolin, beeswax, and cedarwood oil in a hexane solvent. The first three ingredients are heated together, then added to the cold solvent and allowed to cool while constantly stirring. Care should be exercised, as the solvents are highly flammable and have low boiling points.

Variations
There are several variations in the formulation. Sometimes 60% of the lanolin was replaced by neatsfoot oil. One disadvantage of the solvent hexane is its tendency to evaporate rapidly: before the fat/hexane mixture has been able to penetrate deep into the leather, the hexane migrates to and evaporates at the surface of the leather, carrying most of the fat with it. While beeswax prevents air pollutants from penetrating the leather, it does this by sealing the leather off, thus disturbing the water balance and causing the leather to dry out. The cedarwood oil acts as a fungicide to further protect the leather.

In use
The British Museum leather dressing was part of an elaborate leather conservation programme. Other steps entailed cleaning the leather, if necessary with soap and water, and applying an aqueous solution of 7% potassium lactate as a buffer. A warning was given about the dangers of using too much lactate, which made books sticky and could cause fungal growth. The books had to be absolutely dry when the leather dressing was applied. The dressing would be applied sparingly and rubbed into the leather. After two days, the treated leather was polished with a soft cloth. Hard leathers can be soaked in a solution of one part British Museum leather dressing to three parts Stoddard solvent. British Museum leather dressing darkens leather, but it is a treatment with a good success record.
British Museum leather dressing
Physics
373
17,886,112
https://en.wikipedia.org/wiki/Synchronous%20virtual%20pipe
In pipeline forwarding, a predefined schedule for forwarding a pre-allocated number of bytes during one or more time frames (TFs) along a path of subsequent switches establishes a synchronous virtual pipe (SVP). The SVP capacity is determined by the total number of bits allocated in every time cycle for the SVP. For example, for a 10 ms time cycle, if 20,000 bits are allocated during each of 2 time frames, the SVP capacity is 4 Mbit/s. Pipeline forwarding guarantees that reserved traffic, i.e., traffic traveling on an SVP, experiences bounded end-to-end delay, delay jitter lower than two TFs, and no congestion with its resulting losses. Two implementations of pipeline forwarding have been proposed, time-driven switching (TDS) and time-driven priority (TDP), which could be used to create a pipeline-forwarding parallel network in the future Internet.
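The capacity arithmetic above is easy to make concrete. The following Python sketch reproduces the article's example; the function and variable names are illustrative, not taken from any pipeline-forwarding implementation:

```python
# Minimal sketch: capacity of a synchronous virtual pipe (SVP).
# Capacity = total bits allocated per time cycle / cycle duration.

def svp_capacity_bits_per_sec(bits_per_time_frame: int,
                              time_frames_per_cycle: int,
                              time_cycle_seconds: float) -> float:
    return bits_per_time_frame * time_frames_per_cycle / time_cycle_seconds

# The article's example: 20,000 bits in each of 2 time frames per 10 ms cycle.
capacity = svp_capacity_bits_per_sec(20_000, 2, 0.010)
print(f"{capacity / 1e6:.1f} Mbit/s")  # -> 4.0 Mbit/s
```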
Synchronous virtual pipe
Technology,Engineering
196
74,967,374
https://en.wikipedia.org/wiki/Seismic%20velocity%20structure
Seismic velocity structure is the distribution and variation of seismic wave speeds within Earth's and other planetary bodies' subsurface. It reflects subsurface properties such as material composition, density, porosity, and temperature. Geophysicists rely on the analysis and interpretation of the velocity structure to develop refined models of the subsurface geology, which are essential in resource exploration, earthquake seismology, and advancing our understanding of Earth's geological development.

History
The understanding of the Earth's seismic velocity structure has developed significantly since the advent of modern seismology. The invention of the seismograph in the 19th century catalyzed the systematic study of seismic velocity structure by enabling the recording and analysis of seismic waves.

20th century
The field of seismology achieved significant breakthroughs in the 20th century. In 1909, Andrija Mohorovičić identified a significant boundary within the Earth known as the Mohorovičić discontinuity, which demarcates the transition between the Earth's crust and mantle with a notable increase in seismic wave speeds. This work was furthered by Beno Gutenberg, who identified the boundary at the core-mantle layer in the early to mid-20th century. The 1960s introduction of the World Wide Standardized Seismograph Network (WWSSN) dramatically improved the collection and understanding of seismic data, contributing to the broader acceptance of plate tectonics theory by illustrating variations in seismic velocities. Later, seismic tomography, a technique used to create detailed images of the Earth's interior by analyzing seismic waves, was propelled by the contributions of Keiiti Aki and Adam Dziewonski in the 1970s and 1980s, enabling a deeper understanding of the Earth's velocity structure. Their work laid the foundation for the Preliminary Reference Earth Model in 1981, a significant step toward modeling the Earth's internal velocities. The establishment of the Global Seismic Network in 1984 by Incorporated Research Institutions for Seismology further enhanced seismic monitoring capabilities, continuing the legacy of the WWSSN.

21st century
Advances in seismic tomography and the expansion of the Global Seismic Network, alongside greater computational power, have enabled more accurate modeling of the Earth's internal velocity structure. Recent progress focuses on the inner core's velocity features and on applying methods like ambient noise tomography for improved imaging.

Principle of seismic velocity structure
The study of seismic velocity structure, using the principles of seismic wave propagation, offers critical insights into the Earth's internal structure, material composition, and physical states. Variations in wave speed, influenced by differences in material density and state (solid, liquid, or gas), alter wave paths through refraction and reflection, as described by Snell's law. P-waves, which can move through all states of matter and provide data across a range of depths, change speed based on a material's properties, such as type, density, and temperature. S-waves, in contrast, are constrained to solids and reveal information about the Earth's rigidity and internal composition; their inability to pass through the outer core led to the discovery of its liquid state. The study of these waves' travel times and reflections offers a reconstructive view of the Earth's layered velocity structure.
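As a hedged illustration of the refraction behavior just described, the short Python sketch below applies Snell's law to a P-wave crossing a boundary where velocity increases. The 6.5 and 8.0 km/s values are illustrative round numbers consistent with the crust and upper-mantle ranges quoted in the next section, not measurements:

```python
# Snell's law for a seismic wave at a velocity boundary:
# sin(i1)/v1 = sin(i2)/v2  ->  i2 = asin(sin(i1) * v2 / v1)
import math

def refraction_angle_deg(incidence_deg: float, v1: float, v2: float) -> float:
    s = math.sin(math.radians(incidence_deg)) * v2 / v1
    if s > 1.0:
        raise ValueError("beyond the critical angle: total internal reflection")
    return math.degrees(math.asin(s))

# A P-wave passing from crust (6.5 km/s) into upper mantle (8.0 km/s)
# bends away from the normal because it speeds up.
print(refraction_angle_deg(30.0, 6.5, 8.0))    # ~38 degrees
print(math.degrees(math.asin(6.5 / 8.0)))      # critical angle, ~54.3 degrees
```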
Average velocity structure of planetary bodies

Velocity structure of Earth
Seismic waves traverse the Earth's layers at speeds that differ according to each layer's unique properties, with their velocities shaped by the respective temperature, composition, and pressure. The Earth's structure features distinct seismic discontinuities where these velocities shift abruptly, signifying changes in mineral composition or physical state.

Crust
Average P-wave velocity: 6.0–7.0 km/s (continental); 5.0–7.0 km/s (oceanic)
Average S-wave velocity: 3.5–4.0 km/s
Within the Earth's crust, seismic velocities increase with depth, mainly due to rising pressure, which makes materials denser. The relationship between crustal depth and pressure is direct: as the overlying rock exerts weight, it compacts underlying layers, reduces rock porosity, increases density, and can alter crystalline structures, thus accelerating seismic waves. Crustal composition varies, affecting seismic velocities. The upper crust typically contains sedimentary rocks with lower velocities (2.0–5.5 km/s), while the lower crust consists of denser basaltic and gabbroic rocks, leading to higher velocities. Although the geothermal gradient, the increase in temperature with depth in the Earth's interior, can decrease seismic velocities, this effect is usually outweighed by the velocity-boosting impact of increased pressure.

Upper mantle
Average P-wave velocity: 7.5–8.5 km/s
Average S-wave velocity: 4.5–5.0 km/s
Seismic velocity in the upper mantle rises primarily due to increased pressure, similar to the crust but with a more pronounced effect on velocity. Additionally, pressure-induced mineral phase changes, in which minerals rearrange their structures, contribute to this acceleration. For example, olivine transforms into its denser polymorphs, wadsleyite and ringwoodite, at depths of approximately 410 km and 660 km respectively, resulting in a more compact structure that facilitates faster seismic wave propagation in the transition zone.

Lower mantle
Average P-wave velocity: 10–13 km/s
Average S-wave velocity: 5.5–7.0 km/s
In the lower mantle, the rise in seismic velocity is driven by increasing pressure, which is greater here than in the upper layers, resulting in denser rock and faster seismic wave travel. Although thermal effects may lessen seismic velocity by softening the rock, the predominant factor in the lower mantle remains the increase in pressure.

Outer core
Average P-wave velocity: 8.0–10 km/s
S-waves: do not propagate, as the outer core is liquid
In the outer core, seismic velocity significantly decreases due to the liquid state, which impedes seismic waves despite the high pressure. This sharp decline is observed at the core-mantle boundary, also referred to as the D'' region or Gutenberg discontinuity. Furthermore, the reduction in seismic velocity in the outer core suggests the presence of lighter elements like oxygen, silicon, sulfur, and hydrogen, which lower the density of the outer core.

Inner core
Average P-wave velocity: ~11 km/s
Average S-wave velocity: ~3.5 km/s
The solid, high-density composition of the inner core, predominantly iron and nickel, results in increased seismic velocity compared to the liquid outer core. While light elements also present in the inner core modulate this velocity, their impact is relatively contained.

Anisotropy of the inner core
The inner core is anisotropic, causing seismic waves to vary in speed depending on their direction of travel.
P-waves, in particular, move more quickly along the inner core's rotational axis than across the equatorial plane. This suggests that Earth's rotation affects the alignment of iron crystals during the core's solidification. There is also evidence for a distinct "inner" inner core, with a hypothesized transition zone some 250 to 400 km beneath the inner core boundary (ICB). This is inferred from anomalies in the travel times of P-waves that travel through the inner core. This transition zone, perhaps 100 to 200 km thick, may provide insights into the alignment of iron crystals, the distribution of light elements, or Earth's accretion history. Studying the inner core poses significant challenges for seismologists and geophysicists, given that it accounts for less than 1% of Earth's volume and is difficult for seismic waves to penetrate. Moreover, S-wave detection is challenging due to minimal compressional-shear wave conversion at the boundary and substantial attenuation within the inner core, leaving the S-wave velocity uncertain and an area for future research.

Lateral variation of velocity structure
Lateral variation in seismic velocity is a horizontal change in seismic wave speeds across the Earth's crust due to differences in geological structures such as rock types, temperature, and the presence of fluids, which affect seismic wave travel speed. This variation helps delineate tectonic plates and geological features and is key to resource exploration and understanding the Earth's internal heat flow.

Discontinuities
Discontinuities are zones or surfaces within the Earth that lead to abrupt changes in seismic velocity, revealing composition and demarcating the boundaries between the Earth's layers. The following are key discontinuities within the Earth:
Mohorovičić discontinuity: the boundary between the crust and the mantle, located approximately 30–50 km below the continental surface and 5–10 km beneath the oceanic crust.
410 km discontinuity: a phase transition where olivine becomes wadsleyite.
520 km discontinuity: a phase transition where wadsleyite becomes ringwoodite.
660 km discontinuity: a phase transition where ringwoodite becomes bridgmanite and ferropericlase.
Gutenberg discontinuity: the core-mantle boundary, at approximately 2890 km depth.
Lehmann discontinuity: marking the inner core boundary (ICB), at approximately 5150 km depth.

Velocity structure of the Moon
Knowledge of the Moon's seismic velocity primarily stems from seismic records obtained by the Apollo missions' Passive Seismic Experiment (PSE) stations. Between 1969 and 1972, five PSE stations were deployed on the lunar surface, with four operational until 1977. These four stations created a network on the near side of the Moon, configured as an equilateral triangle with two stations at one vertex. This network recorded over 13,000 seismic events, and the gathered data remains a subject of ongoing study. The analysis has revealed four moonquake mechanisms: shallow, deep, thermal, and those caused by meteoroid impacts.

Crust
Average P-wave velocity: 5.1–6.8 km/s
Average S-wave velocity: 2.96–3.9 km/s
Seismic velocity on the Moon varies within its roughly 60 km thick crust, with low seismic velocity at the surface. Velocity readings increase from 100 m/s near the surface to 4 km/s at a depth of 5 km and rise to 6 km/s at 25 km depth. At 25 km depth a discontinuity is present, at which the seismic velocity increases abruptly to 7 km/s.
This velocity then stabilizes, reflecting the consistent composition and hydrostatic pressure conditions at greater depths. Surface velocities are low due to the loose, porous nature of the regolith. Deeper down, compaction increases velocities, with the region beyond 25 km depth characterized by dense, sealed anorthosite and gabbro layers, suggesting a crust under hydrostatic pressure. The Moon's geothermal gradient minimally reduces velocities, by 0.1–0.2 km/s.

Mantle
Average P-wave velocity: 7.7 km/s
Average S-wave velocity: 4.5 km/s
Research into the seismic structure of the Moon's mantle is hampered by the scarcity of data. Analysis of moonquake waveforms suggests that seismic wave velocities in the upper mantle (ranging from 60 to 400 km in depth) exhibit a minor negative gradient, with S-wave speeds decreasing at rates between -6×10−4 and -13×10−4 km/s per kilometer. A decrease in P-wave velocities has also been postulated. The data delineate a transition zone between 400 km and 480 km depth, where a noticeable decrement in the velocities of both P- and S-waves occurs. Uncertainty grows when probing the lower mantle, extending from 480 km to 1100 km beneath the lunar surface. Some studies detect a consistent decline in S-wave transmission, suggesting absorption or scattering phenomena, while other findings indicate that velocities for P- and S-waves may in fact rise. Temperature increases with depth are believed to be the primary influence behind the observed drop in velocities within the upper mantle, suggesting a mantle heavily regulated by thermal gradients rather than compositional changes. The delineated transition zone implies a division between the chemically distinct upper and lower mantles, possibly explained by an uptick in iron concentration due to high pressure and thermal conditions at depth. Deeper into the lower mantle, the debate over seismic characteristics continues, with theories of partial melting around the 1000 km depth mark invoked to explain the attenuation of S-wave velocities. This molten state may cause a segregation of materials, resulting in a concentration of magnesium-rich olivine in the lower regions and potentially affecting seismic speeds.

Core
Understanding the seismic velocities within the Moon's core presents challenges due to the limited data available.
Outer core:
Average P-wave velocity: 4 km/s
S-waves: do not propagate, as the outer core is liquid
Inner core:
Average P-wave velocity: 4.4 km/s
Average S-wave velocity: 2.4 km/s
The sharp decline in P-wave velocity at the mantle-core boundary suggests a liquid outer core, transitioning from 7.7 km/s in the mantle to 4 km/s in the outer core. The inability of S-waves to traverse this zone further confirms its fluid nature, consistent with molten iron sulphide. An increase in seismic velocities upon reaching the inner core indicates a transition to a solid phase. The presence of solid iron-nickel alloys, potentially alloyed with lighter elements, is deduced from this increase. Current geophysical models posit a relatively diminutive lunar core, with the liquid outer core accounting for 1–3% of the Moon's total mass and the entire core constituting about 15–25% of the lunar mass.
While some lunar models suggest the possibility of a core, its existence and characteristics are not unequivocally required by the observed data.

Lateral variation of seismic velocity structure
Lateral variations in the Moon's seismic velocity structure are marked by differences in the crust's physical properties, especially within impact basins, where meteoroid collisions have compacted the lunar substrate, increasing its density and reducing porosity, and thereby raising velocities. This phenomenon has been studied using seismic data from lunar missions, which show that the Moon's crustal structure varies significantly with location, reflecting its complex impact history and internal processes.

Velocity structure of Mars
The investigation into Mars's seismic velocity has primarily relied on models and the data gathered by the InSight mission, which landed on the planet in 2018. By September 30, 2019, InSight had detected 174 seismic events. Before InSight, the Viking 2 lander attempted to collect seismic data in the 1970s, but it captured only a limited number of local events, which did not yield conclusive insights.

Crust
Average P-wave velocity: 3.5–5 km/s
Average S-wave velocity: 2–3 km/s
The crust of Mars, ranging from 10 to 50 km in thickness, exhibits increasing seismic velocity with depth, attributable to rising pressure. The upper crust is characterized by low density and high porosity, leading to reduced seismic velocity. Two key discontinuities have been observed: one within the crust at a depth of 5 to 10 km, and another, likely the crust-mantle boundary, at a depth of 30 to 50 km.

Mantle
Upper mantle:
Average P-wave velocity: 8 km/s
Average S-wave velocity: 4.5 km/s
Lower mantle:
P-wave velocity: 5.5 km/s
S-waves: not applicable (liquid)
The Martian mantle, composed of iron-rich rocks, transmits seismic waves at high speeds. Research indicates a variation in seismic velocities between depths of 400 and 600 km, where S-wave speeds decrease while P-wave speeds remain constant or increase slightly. This region is known as the low velocity zone (LVZ) of the Martian upper mantle and may be caused by a static layer overlying a convective mantle. The reduction in velocity at the LVZ is likely due to high temperatures and moderate pressures. Martian mantle research has also identified two discontinuities, at depths of approximately 1100 km and 1400 km. These discontinuities suggest phase transitions from olivine to wadsleyite and from wadsleyite to ringwoodite, analogous to the Earth's mantle phase changes at depths of 410 km and 660 km. However, Mars's mantle composition differs from Earth's in that it lacks a lower mantle dominated by bridgmanite. A recent study suggested the presence of a molten lower-mantle layer on Mars, which could significantly affect the interpretation of seismic data and our understanding of the planet's thermal history.

Core
Average P-wave velocity: 5 km/s
S-waves: do not propagate, as the core is liquid
Scientific evidence suggests that Mars has a substantial liquid core, inferred from S-wave transmission patterns indicating that these waves do not pass through liquid.
The core is likely composed of iron and nickel with a significant proportion of lighter elements, inferred from its lower-than-expected density. The presence of a solid inner core on Mars, comparable to Earth's, is currently the subject of scientific debate. No definitive evidence has yet confirmed the nature of the inner core, leaving its existence and characteristics as topics for further research.

Lateral variation of velocity structure
Lateral variations in the seismic velocity structure of Mars have been revealed by data from the InSight mission, indicating an intricately layered subsurface. InSight's seismic experiments suggest that these variations reflect differences in crustal thickness and composition, potentially caused by volcanic and tectonic processes unique to Mars. Such variations also provide evidence for the presence of a liquid layer above the core, suggesting a complex interplay of thermal and compositional factors affecting the planet's evolution. Further analysis of marsquake data may illuminate the relationship between these lateral variations and the Martian mantle's convective dynamics.

Velocity structure of Enceladus
Research on Enceladus's subsurface composition has provided theoretical velocity profiles in anticipation of future exploratory missions. While Enceladus's interior is poorly understood, scientists agree on a general structure consisting of an outer icy shell, a subsurface ocean, and a rocky core. In a recent study, three models (single core, thick ice, and layered core) were proposed to delineate Enceladus's internal characteristics. According to these models, seismic velocities are expected to decrease from the ice shell to the ocean, reflecting transitions from porous, fractured ice to a more fluid state. Conversely, velocities are predicted to rise within the solid silicate core, illustrating the stark contrast between the moon's various layers.

Future plans
Seismic exploration of celestial bodies has so far been limited to the Moon and Mars. However, future space missions are set to extend seismic studies to other bodies in the Solar System. The proposed Europa Lander mission, slated for a launch window between 2025 and 2030, would investigate the seismic activity of Jupiter's moon Europa. This mission plans to deploy the Seismometer to Investigate Ice and Ocean Structure (SIIOS), an instrument designed by the University of Arizona to withstand Europa's harsh, cold, and radiation-heavy environment. SIIOS's goal is to provide insight into Europa's icy crust and subsurface ocean. In conjunction with its Artemis program to the Moon, NASA has also funded initiatives under the Development and Advancement of Lunar Instrumentation (DALI) program. Among these, the Seismometer for a Lunar Network (SLN) project stands out. The SLN aims to facilitate the creation of a lunar seismometer network by integrating seismometers into future lunar landers or rovers. This initiative is part of NASA's broader effort to prepare for continued exploration of the Moon's geology.

Methods
The study of seismic velocity structure is typically conducted through the observation of seismic data coupled with inverse modeling, which involves adjusting a model based on observed data to infer the properties of the Earth's interior, as the sketch below illustrates.
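As a toy illustration of this inverse-modeling idea, and of why solutions can be non-unique when rays sample layers unevenly, the following synthetic Python example recovers two layer velocities from noisy travel times by linear least squares. All numbers are invented; real seismic inversions involve ray tracing and regularization:

```python
# Forward model: travel time = sum over layers of (path length / velocity).
# With unknown slownesses s_i = 1/v_i, travel times are linear: t = L @ s.
import numpy as np

L = np.array([[10.0,  0.0],    # ray 1 runs 10 km in layer 1 only
              [10.0,  5.0],    # ray 2 adds 5 km in layer 2
              [10.0, 15.0]])   # ray 3 adds 15 km in layer 2

true_s = np.array([1 / 6.0, 1 / 8.0])  # slownesses for 6 and 8 km/s layers
t_obs = L @ true_s + np.random.default_rng(0).normal(0, 0.01, 3)  # noisy data

s_est, *_ = np.linalg.lstsq(L, t_obs, rcond=None)  # least-squares inversion
print(1 / s_est)  # recovered velocities, close to [6, 8] km/s
```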
A range of observational methods is used to study seismic velocity structure, including body-wave travel-time analysis, seismic tomography, and ambient noise tomography.

Applications of velocity structure
Applications of seismic velocity structure span fields where understanding the Earth's subsurface is crucial, such as resource exploration and earthquake seismology.

Limitations and uncertainty

S-wave velocity of the Earth's inner core
Investigating Earth's inner core through seismic waves presents significant challenges. Directly observing seismic waves that traverse the inner core is difficult due to weak signal conversion at the core boundaries and high attenuation within the core. Recent techniques like earthquake late-coda correlation, which utilizes the later part of a seismogram, provide estimates for the inner core's shear-wave velocity but are not without challenges.

Isotropic assumptions
Seismic velocity studies often assume isotropy, treating Earth's subsurface as having uniform properties in all directions. This simplification is practical for analysis but may not be accurate. The inner core and mantle, for example, likely exhibit anisotropic, or directionally dependent, properties, which can affect the accuracy of seismic interpretations.

Dimensional considerations
Seismic models are frequently one-dimensional, considering changes in Earth's properties with depth but neglecting lateral variations. Although this method eases computation, it fails to account for the planet's complex three-dimensional structure, potentially misleading our understanding of subsurface characteristics.

Non-uniqueness of inverse modeling
Seismic velocity structures are inferred through inverse modeling, fitting theoretical models to observed data. However, different models can often explain the same data, leading to non-unique solutions. This issue is compounded when inverse problems are poorly conditioned, where small variations in the data can suggest drastically different subsurface structures.

Data limitations for Moon and Mars seismic studies
In contrast to Earth, the seismic datasets for the Moon and Mars are sparse. The Apollo missions deployed a handful of seismometers across the Moon, and Mars's seismic data is limited to the InSight mission's findings. This scarcity restricts the resolution of velocity models for these bodies and introduces greater uncertainty in interpreting their internal structures.

See also
Seismic wave
Seismic tomography
Low-velocity zone
Seismic velocity structure
Physics
4,762
54,330,143
https://en.wikipedia.org/wiki/Dichomitus%20hubeiensis
Dichomitus hubeiensis is a crust fungus that was described as a new species in 2013. The fungus is characterized by the cream to straw-yellow pore surface and large pores numbering 1–2 per millimetre. Microscopic features include both inamyloid and indextrinoid skeletal hyphae, the presence of cystidioles and dendrohyphidia in the hymenium, and roughly ellipsoid spores that measure 10–14 by 5.6–7.0 μm. The specific epithet refers to the type locality in Hubei, central China.
Dichomitus hubeiensis
Biology
145
6,715,646
https://en.wikipedia.org/wiki/Smithfield%20Foods
Smithfield Foods, Inc., is an American pork producer and food-processing company based in Smithfield, Virginia. It operates as an independent subsidiary of the Chinese multinational conglomerate WH Group. Founded in 1936 as the Smithfield Packing Company by Joseph W. Luter and his son, the company is the largest pig and pork producer in the world. In addition to owning over 500 farms in the US, Smithfield contracts with another 2,000 independent farms around the country to raise Smithfield's pigs. Outside the US, the company has facilities in Mexico, Poland, Romania, Germany, Slovakia and the United Kingdom. Globally the company employed 50,200 people in 2016 and reported an annual revenue of $14 billion. Its 973,000-square-foot meat-processing plant in Tar Heel, North Carolina, was said in 2000 to be the world's largest, slaughtering 32,000 pigs a day.

Then known as Shuanghui Group, WH Group purchased Smithfield Foods in 2013 for $4.72 billion. It was the largest Chinese acquisition of an American company to date. The acquisition of Smithfield's 146,000 acres of land made WH Group, headquartered in Luohe, Henan province, one of the largest overseas owners of American farmland.

Smithfield Foods began its growth in 1981 with the purchase of Gwaltney of Smithfield, followed by the acquisition of nearly 40 companies between then and 2008, including:
Eckrich
Farmland Foods of Kansas City
John Morrell
Murphy Family Farms of North Carolina
Circle Four Farms of Utah
Premium Standard Farms
Nathan's Famous
Healthy Ones

The company was able to grow as a result of its highly industrialized pig production, confining thousands of pigs in large barns known as concentrated animal feeding operations, and controlling the animals' development from conception to packing. As of 2006 Smithfield raised 15 million pigs a year and processed 27 million, producing over six billion pounds of pork and, in 2012, 4.7 billion gallons of manure. Killing 114,300 pigs a day, it was the top pig-slaughter operation in the United States in 2007; along with three other companies, it also slaughtered 56 percent of the cattle processed there until it sold its beef group in 2008. The company has sold its products under several brand names, including Cook's, Eckrich, Gwaltney, John Morrell, Krakus, and Smithfield. Shane Smith has been the president and chief executive officer of Smithfield Foods since July 2021.

History

Founding and early history
The company traces its history to 1936, when Joseph W. Luter Sr. and his son, Joseph W. Luter Jr., opened the Smithfield Packing Company in Smithfield, Virginia. The men were working for the meatpacker P. D. Gwaltney, Jr. & Co. when they set up the company; Joseph W. Luter Sr. was a salesman and Joseph W. Luter Jr. the general manager. Financing for the new company came from Peter Pruden of Suffolk and John S. Martin of Richmond. In an interview in 2009, Joseph W. Luter III described how the Luters would buy 15 hog carcasses a day, cut them up, box them, and sell them to small stores in Newport News and Norfolk. They built the Smithfield Packing Company plant in 1946 on Highway 10. By 1959 they had a workforce of 650. Joseph W. Luter Jr. served as Smithfield's chief executive officer (CEO) until his death in 1962. He owned 42 percent of the company when he died. His son, Joseph W. Luter III, was at Wake Forest University at the time and joined Smithfield that year.
Working in sales, he borrowed enough to buy a further eight-and-a-half percent of the shares, and in 1966 he became chairman and CEO. He told Virginia Living that the company was killing around 3,000 hogs a day when he took over, and 5,000 by the time he left in January 1970, while the number of employees rose from 800 to 1,400. In July 1969 he sold Smithfield to Liberty Equities for $20 million; they asked him to stay on, but in January 1970 they fired him. From then until 1975 he developed a ski resort, Bryce Mountain, in Virginia.

At the recommendation of its banks, Smithfield re-hired Joseph W. Luter III as CEO in April 1975 when it found itself in financial difficulties. At the time, according to Luter, the company had a net worth of under $1 million, debt of $17 million, and losses of $2 million a year. He said it even lost money in December 1974, holiday-ham season, which was "like Budweiser losing money in July". Luter's restructuring of the company is credited with its improved performance. He remained as CEO until 2006 and as chairman until the company was sold to WH Group in 2013. His son, Joseph W. Luter IV, became an executive vice-president of Smithfield Foods in 2008 and president of the Smithfield Packing Company, by then the parent company's largest subsidiary. He resigned in October 2013. At that point his stock was valued at $21.1 million and Joseph W. Luter III's at $30 million.

Mergers and acquisitions (1981–2007)
Joseph W. Luter III began his expansion of Smithfield in 1981 with the purchase of its main competitor, Gwaltney of Smithfield, for $42 million. This was followed by the acquisition of almost 40 companies in the pork, beef, and livestock industries between 1981 and around 2008, including Esskay Meats/Schluderberg-Kurdle in Baltimore, Valley Dale in Roanoke, and Patrick Cudahy in Milwaukee in 1984. In 1992, Smithfield opened the world's largest processing plant, a 973,000-square-foot facility in Tar Heel, North Carolina, which by 2000 could process 32,000 pigs a day. Smithfield purchased John Morrell & Co in Sioux Falls, South Dakota, in 1995 and Circle Four Farms in 1998. In 1999 it bought two of the largest pig producers in the United States: Carroll's Foods for around $500 million and Murphy Family Farms of North Carolina; the latter was at that point the largest producer. Smithfield settled the acquisition with 3.3 million shares of Smithfield Foods stock, $178 million in cash, and the assumption of about $216 million of debt. Farmland Foods of Kansas City was added in 2003, as were Sara Lee's European Meats, ConAgra Foods Refrigerated Meats, Butterball (the poultry producer), Brown's of Carolina, and Premium Standard Farms in 2007. Smithfield sold its 49 percent share in Butterball in 2008 for an estimated $175 million.

In 2009 Smithfield was assessed a $900,000 penalty by the U.S. Justice Department to settle charges that the company had engaged in illegal merger activity during its takeover of Premium Standard Farms. The acquisitions caused concern among regulators in the United States regarding the company's control of the food supply. After Smithfield's purchase of Murphy Family Farms in 1999, the Agriculture Department described it as "absurdly big". According to agricultural researchers Jill Hobbs and Linda Young, writing in 2001, the acquisitions constituted a "major structural change" in the hog industry in the United States, leaving Smithfield in control of 10–15 percent of the country's hog production.
As of 2006 four companies, Smithfield, Tyson Foods, Swift & Company, and Cargill, were responsible for the production of 70 percent of pork in the United States.

2013 purchase by Shuanghui Group
On May 29, 2013, WH Group, then known as Shuanghui Group and sometimes also Shineway Group, the largest meat producer in China, announced the purchase of Smithfield Foods for $4.72 billion, a sale first suggested in 2009. At the time of the deal China already had roughly 475 million pigs of its own, about 60 percent of the global total. According to Lynn Waltz, the Chinese ate 85.3 pounds of pork per person in 2012, compared to 59.3 in the US. Shuanghui said it would list Smithfield on the Hong Kong Stock Exchange after completing the takeover.

On September 6, 2013, the US government approved Shuanghui International Holding's purchase of Smithfield Foods, Inc. The deal was valued at approximately $7.1 billion including debt. It was the largest stock acquisition by a Chinese company of an American company. The deal included Smithfield's 146,000 acres of land, which made WH Group one of the largest overseas owners of American farmland.

For decades Smithfield had run its acquisitions as independent operating companies, but in 2015 it set up the "One Smithfield" initiative to unify them; Circle Four Farms in Milford, Utah, for example, became Smithfield Hog Production-Rocky Mountain Region. Ken Sullivan said in 2017 that he saw the company's future as a "consumer-packaged goods business".

Mergers and acquisitions (2016–)
In 2016, Smithfield purchased the Californian pork processor Clougherty Packing PLC for $145 million, along with its Farmer John and Saag's Specialty Meats brands. Smithfield also acquired PFFJ (Pigs for Farmer John) LLC and three of its farms from Hormel Foods Corporation. In August 2017 Smithfield acquired Pini Polska, Hamburger Pini, and Royal Chicken of Poland, and in September that year it announced that it would purchase two Romanian packaged-meat suppliers, Elit and Vericom. In 2019 it acquired Maier Com in Romania.

Operations

Employees, brands
In 2016, Smithfield had 50,200 employees in the United States, Mexico and Europe, and an annual revenue of $14 billion. In 2012 it opened a restaurant, Taste of Smithfield, in Smithfield, Virginia, located in the same Main Street building as its retail store, The Genuine Smithfield Ham Shoppe. As of July 2017, the company's brands included Armour, Berlinki, Carando, Cook's, Curly's, Eckrich, Farmland, Gwaltney, Healthy Ones, John Morrell, Krakus, Kretschmar, Margherita, Morliny, Nathan's Famous, and Smithfield. In 2019 it introduced Pure Farmland, a plant-based brand of soy burgers and meatballs.

In early 2019 Smithfield re-branded its food-service business, Smithfield Farmland, as "Smithfield Culinary". The company created advisory boards composed of chefs, established partnerships with culinary schools, and engaged in substantial research and development to improve its products. Smithfield Culinary uses the Carando, Curly's, Eckrich, Farmland, Margherita, and Smithfield brand names.

Vertical integration, contract farms
Smithfield began buying hog-farming operations in 1990, making it a vertically integrated company. As a result, it was able to expand by over 1,000 percent between 1990 and 2005. Vertical integration allows Smithfield to control every stage of pig production, from conception and birth to slaughter, processing and packing, a system known as "from squeal to meal" or "from birth to bacon".
The company contracted farmers who had moved out of tobacco farming, and sent them piglets between eight and ten weeks old to be brought to market weight on diets controlled by Smithfield. Smithfield retained ownership of the pigs. Only farmers able to handle thousands of pigs were contracted, which meant that smaller farms went out of business. In North Carolina, Smithfield's expansion mirrored hog farmers' decline; there were 667,000 hog farms there in 1980 and 67,000 in 2005. When the US government placed restrictions on the company, it moved into Eastern Europe. As a result, in Romania there were 477,030 hog farms in 2003 and 52,100 in 2007. There was a similar decline, by 56 percent between 1996 and 2008, in Poland. Joseph W. Luter III said that vertical integration produces "high quality, consistent products with consistent genetics". The company obtained 2,000 pigs and the rights to their genetic lines from Britain's National Pig Development Company in 1990, and used them to create Smithfield Lean Generation Pork, which the American Heart Association certified for its low fat, salt, and cholesterol content. According to Luter, it was vertical integration that enabled this.

Housing and lagoons
The pigs are housed together in their thousands in identical barns with metal roofs, known as concentrated animal feeding operations (CAFOs). The floors of the buildings are slatted, allowing waste to be flushed into 30-foot-deep "open-air pits the size of two football fields", according to The Washington Post. These are referred to within the industry as anaerobic lagoons. They dispose of effluent at a low cost, but they require large areas and release odors and methane, a greenhouse gas. Smithfield Foods states that the lagoons contain an impervious liner made to withstand leakage. According to Jeff Tietz in Rolling Stone, the waste, a mixture of excrement, urine, blood, afterbirths, stillborn pigs, drugs and other chemicals, overflows when it rains, and the liners can be punctured by rocks. Smithfield attributes the pink color of the waste to the health of the lagoons, and states that the color is "a sign of bacteria doing what it should be doing. It's indicative of lower odor and lower nutrient content."

In 2018 it announced an "animal waste-to-energy" plan; the company said it would spend $125 million over ten years, along with Dominion Energy, to cover the lagoons in North Carolina, Utah and Virginia with "high-density plastic and digesters" to capture the methane gas and direct it into a local pipeline.

Pregnant sows
Smithfield said in 2007 that it would phase out its use of gestation crates by 2017. Pregnant sows spend most of their lives in these stalls, which are too small to allow them to turn around. Pregnancies last about 115 days; the average productive lifespan of a sow in the United States is 4.2 litters. When they give birth, sows are moved to a farrowing crate for three weeks, then artificially inseminated again and moved back to a gestation crate. The practice has been criticized by animal-welfare groups, supermarket chains, and McDonald's. Smithfield did not commit to requiring its contract farms to phase out the crates. Almost half the company's sows in the United States live on its roughly 2,000 contract farms. In 2009, Smithfield said it would not meet the deadline because of the recession, but in 2011 it returned to its commitment, and to doing the same in Europe and Mexico by 2022.
In January 2017 the company said that 87 percent of sows on company-owned farms were no longer in crates, and that it would require its contract farms to phase out crates by 2022. As of January 2018, on company-owned farms in the United States, Smithfield confines pregnant sows in gestation crates for six weeks during the impregnation process. When pregnancy is confirmed, they are moved to pens within a group-housing system for about 10 weeks, then to a farrowing crate, then back to a gestation crate to be impregnated again. It uses two forms of group housing: in one system, 30–40 sows are kept in a pen with access to individual gestation crates; in the other, five or six sows are housed together in a pen. In July 2017 Direct Action Everywhere filmed the gestation crates at Smithfield's Circle Four Farms in Milford, Utah. The FBI subsequently raided two animal sanctuaries searching for two piglets removed by the activists. In January 2018 Smithfield released a video of the gestation and farrowing areas on one of its farms.

California closures
In 2020, Smithfield announced the closure of its plant in San Jose, California, and the layoff of 139 workers from the site. Smithfield said it closed the plant because its lease expired and its landlord decided to sell. The local union that represented the plant's workers publicly questioned Smithfield's explanation. In June 2022, Smithfield announced the closure of its plant in Vernon, California, by early 2023, and also stated that it was "exploring strategic options to exit its farms in Arizona and California". The company cited the high costs of conducting business within the state of California.

Operations in Mexico
The earliest confirmed case of the H1N1 virus (swine flu) during the 2009 flu pandemic was in a five-year-old boy in La Gloria, Mexico, near several facilities operated by Granjas Carroll de Mexico, a Smithfield Foods subsidiary that processes 1.2 million pigs a year and employs 907 people. This, together with tension between the company and the local community over Smithfield's environmental record, prompted several newspapers to link the outbreak to Smithfield's farming practices. According to The Washington Post, over 600 other residents of La Gloria became ill from a respiratory disease in March that year (later thought to be seasonal flu). The Post writes that health officials found no link between the farms and the H1N1 outbreak. Smithfield said that it had found no clinical signs of swine flu in its pigs or employees in Mexico, and had no reason to believe that the outbreak was connected to its Mexican facilities. The company said it routinely administers flu vaccine to its swine herds in Mexico and conducts monthly tests to detect the virus.

Residents alleged that the company regularly violates local environmental regulations. According to The Washington Post, local farmers had complained for years about headaches from the smell of the pig farms and said that wild dogs had been eating discarded pig carcasses. Smithfield was using biodigesters to convert dead pigs into renewable energy, but residents alleged that they regularly overflowed. Residents also feared that the waste stored in the lagoons would leak into the groundwater.

Exports
Since its acquisition by what would become WH Group, Smithfield has partially retooled its plants to export meat for consumption in China.
This effort has been at least partially driven by the epidemic of swine fever in China, which has resulted in massive reductions in that country's pig population and pork production. One plant in Smithfield, Virginia, slaughters about 10,000 pigs per day for export. Smithfield's ramp-up of exports to China has occurred in the face of headwinds in the form of 62% tariffs designed to protect China's hog farmers, who largely have small operations. Pork industry trade groups claim that the United States could export twice as much pork to China if the tariffs were lifted.

Lawsuits
In 2010, a jury in Jackson County, Missouri, awarded 13 plaintiffs $825,000 each against a Smithfield subsidiary, Premium Standard, and two other plaintiffs $250,000 and $75,000. The plaintiffs argued that they were unable to enjoy their property because of the smell coming from the Smithfield facilities. In 2017, in Wake County, North Carolina, nearly 500 residents sued a Smithfield subsidiary, Murphy-Brown, in 26 lawsuits, alleging nuisance and ill health caused by smells, open-air lagoons, and pig carcasses. Residents said their outdoor activities were limited as a consequence, and that they were unable to invite visitors to their homes. Smithfield said the complaints were without merit.

On August 3, 2018, a federal jury awarded six North Carolina residents $470 million in damages against Murphy-Brown LLC. The verdict included $75 million each in punitive damages, plus $3–5 million in compensatory damages for loss of enjoyment of their properties. A state law capping punitive damages lowered that amount to $94 million. The plaintiffs had filed suit over "stench odor, truck noise and flies generated near their homes on Kinlaw Farm in Bladen County." In December 2018, several plaintiffs living near a Smithfield contract farm in Sampson County received compensatory damages ranging from $100 to $75,000. In March 2019, 10 plaintiffs were awarded $420,000 for nuisance by a jury in North Carolina.

State representatives of agriculture in North Carolina accused lawyers and their plaintiffs of attempting to put farmers out of business. Steve Troxler, North Carolina's agricultural commissioner, said the litigation could harm farm production across the country; he argued that legal abuse of the word nuisance is a mounting concern. As a result of the cases, legislators in Georgia, Nebraska, North Carolina, Oklahoma, Utah, and West Virginia passed or proposed changes to right-to-farm laws that reduce either the right to sue or potential damages.

Environmental impact

Emissions
Smithfield has come under criticism for the millions of gallons of untreated fecal matter it produces and stores in its lagoons. In 2012 it produced at least 4.7 billion gallons of manure in the United States; during its lifetime, every pig will produce 1,100–1,300 liters. In a four-year period in North Carolina in the 1990s, 4.7 million gallons of hog fecal matter were released into the state's rivers. Workers and residents near Smithfield plants reported health problems and complained about the stench.
The company was fined $12.6 million in 1997 by the Environmental Protection Agency (EPA) for 6,900 violations of the Clean Water Act after discharging illegal levels of slaughterhouse waste into the Pagan River in Virginia, the largest penalty levied under the Clean Water Act at that time. Its facilities in North Carolina came under scrutiny in 1999 when Hurricane Floyd flooded lagoons holding fecal matter; many of Smithfield's contract farms were accused of polluting the rivers. Smithfield reached a settlement in 2000 with the state of North Carolina, agreeing to pay the state $50 million over 25 years.

According to Ralph Deptolla, writing for Smithfield Foods, the company created new executive positions to monitor environmental issues. In 2001 it created an environmental management system and the following year hired Dennis Treacy, director of the Virginia Department of Environmental Quality since 1998, as executive vice-president and chief sustainability officer. Treacy had previously been involved in enforcement efforts against Smithfield. In 2005 the company received ISO 14001 certification for its hog production and processing facilities in the US, with the exception of new acquisitions, and, in 2009, 14 plants in the US and 21 in Romania received certification. By 2011, 578 Smithfield facilities (95 percent of the company's global operations) were ISO 14001-certified. Smithfield subsidiary Murphy-Brown reached an agreement in 2006 with the Waterkeeper Alliance, once one of Smithfield's biggest critics, to enhance environmental protection at Murphy-Brown's facilities in North Carolina. In 2009 the company said it had reduced its emissions since 2007, including its greenhouse-gas emissions by four percent; it attributed this to the divestiture of the beef group. In 2010 it released its ninth annual Corporate Social Responsibility report, and announced the creation of two sustainability committees.

In 2018, Smithfield Foods faced criticism for widely publicized failures of its hog waste lagoons, this time in the wake of Hurricane Florence. Despite statewide attempts to modernize facilities after Hurricane Floyd, more than 130 of North Carolina's hog waste lagoons were compromised by floodwaters during Hurricane Florence. Thirty-three lagoons overflowed entirely, discharging their contents into the Cape Fear River watershed.

Packaging reduction
In 2009 Armour-Eckrich introduced smaller crescent-style packaging for its smoked sausages, which reduced the plastic film and corrugated cardboard the company used by over 840,000 pounds per year. In 2010 the John Morrell plant in Sioux Falls, South Dakota, reduced its use of plastic by 40,600 pounds a year, and Farmland Foods reduced the corrugated packaging entering waste streams by over five million pounds a year. Smithfield Packing used 17 percent less plastic for deli meat. The company also eliminated 20,000 pounds of corrugated material a year by using smaller boxes to transport chicken frankfurters to its largest customer.

Smithfield Renewables
Smithfield and Dominion Energy formed a joint venture, Align Renewable Natural Gas, in 2018 to make and distribute renewable natural gas from biological sources. The two companies said they will invest $500 million by 2028. Align harvests methane from Smithfield's farms; the gas can be mixed and used entirely interchangeably with conventionally produced natural gas. Align will sell gas collected in Utah into California's low-carbon fuel standard market.
The two companies aspire to produce enough gas through Align to power 70,000 homes by 2028. Align's first project started serving 3,000 homes in Milford, Utah, in 2019. Dominion allows its customers to buy blocks of renewable natural gas from Align in increments of $5 on a voluntary basis; one $5 increment is worth about half a dekatherm of energy.

In 2019, a joint venture in northern Missouri with Roeslein Alternative Energy, called Monarch Bioenergy, constructed a "low-pressure natural gas transmission line" between a Smithfield farm and the city of Milan, Missouri. The construction was part of Smithfield Renewables' "manure-to-energy" project. In early 2020, Smithfield and Roeslein announced an additional $45 million investment in their joint venture. This investment will fund adding gas-harvesting infrastructure to at least 85% of Smithfield's hog farms in Missouri. Smithfield also has other gas projects in North Carolina, Utah, and Virginia, as well as a deal with Duke Energy to harvest renewable natural gas from its farms in North Carolina. Smithfield, Duke, and OptimaBio have also partnered to harvest renewable natural gas from wastewater at Smithfield's plant in Bladen County, North Carolina. Gas is sent from the plant via Piedmont Natural Gas lines to Duke Energy power plants, where it is used to generate electricity. This project cost $14 million.

Antibiotics
Concerns have been raised about Smithfield's use of low doses of antibiotics to promote the pigs' growth, in addition to its use of antibiotics as part of a treatment regime. The concern was that the antibiotics were harmful to the animals and were contributing to the rise of antibiotic-resistant strains of bacteria. Smithfield said in 2005 that it would administer antibiotics only to animals that were sick themselves, or that were in close proximity to sick animals; however, in CAFOs all pigs are in close proximity to each other. The company introduced an antibiotic-free Pure Farms brand in 2017, promoting it as free of antibiotics, artificial ingredients, hormones, and steroids.

Animal welfare

2006 CIWF investigation
In Poland, Smithfield Foods purchased former state farms for what its CEO said were "small dollars" and turned them into CAFOs using grants from the European Bank for Reconstruction and Development. Compassion in World Farming (CIWF) conducted an undercover investigation into Smithfield CAFOs there in 2006, and found sick and injured animals in the barns, and dead animals left rotting. The CAFOs were run by Animex, a Smithfield subsidiary. In one barn, 26 pigs were reported to have died in a five-week period. The CIWF report said of a Smithfield lagoon in Boszkowo that the surrounding land was littered with waste, including dangerous objects such as needles.

2010 HSUS investigation
In December 2010 the Humane Society of the United States (HSUS) released an undercover video taken by one of its investigators inside a Smithfield Foods facility. The investigator had worked for a month at Murphy-Brown, a Smithfield subsidiary in Waverly, Virginia. The Associated Press (AP) reported that the investigator videotaped 1,000 sows living in gestation crates. According to the AP, the material shows a pig being pulled by the snout, shot in the head with a stun gun, and thrown into a bin while trying to wriggle free.
The investigator said he saw sows biting their crates and bleeding; staff jabbing them to make them move; staff tossing piglets into carts; and piglets born prematurely in gestation crates falling through the slats into the manure pits. In response, Smithfield stated that it does not tolerate abuse or otherwise improper care of animals. The company asked Temple Grandin, a professor of animal husbandry, to review the footage; she recommended an inspection by animal welfare expert Jennifer Woods. Smithfield announced on December 21 that it had fired two workers and their supervisor. At the company's invitation, the Virginia state veterinarian Richard Wilkes visited the facility on December 22. He praised Smithfield for its efforts to improve animal welfare and said he saw no signs of abuse. The Humane Society criticized the tour.

Labor relations

1994–2008 union dispute
The Smithfield Packing plant in Tar Heel, North Carolina, was the site of an almost 15-year dispute between the company and the United Food and Commercial Workers Union (UFCW), which had tried since 1994 to organize the plant's roughly 5,000 hourly workers. Workers voted against the union in 1994 and 1997, but the National Labor Relations Board (NLRB) alleged that unfair election conduct had occurred and ordered a new election. During the 1997 election the company is alleged to have fired workers who supported the union, stationed police at the plant gates, and threatened plant closures. In 2000, according to Human Rights Watch, Smithfield set up its own security force, with "special police" status under North Carolina law, and in 2003 arrested workers who supported the union. Smithfield appealed the NLRB's ruling that the 1997 election was invalid, and, in 2006, the US Circuit Court of Appeals found in favor of the NLRB. After demonstrations, lockouts, and a shareholder meeting that was disrupted by shareholders supporting the union, the union called for a boycott of Smithfield products. In 2007 Smithfield countered by filing a federal RICO Act lawsuit against the union. The following year Smithfield and the union reached an agreement, under which the union agreed to suspend its boycott in return for the company dropping its RICO lawsuit and allowing another election. In December 2008, workers voted 2,041 to 1,879 in favor of joining the union.

Working conditions
Human Rights Watch (HRW) issued a 175-page report in 2005 documenting what it said were unsafe work conditions in the US meat and poultry industry, citing working conditions at Smithfield Foods as an example. In particular, the report said, workers make thousands of repetitive motions with knives during each shift, leading to lacerations and repetitive strain injuries. It also alleged that the workers' immigrant status may be exploited to prevent them from making complaints or forming unions. According to the report, the speed at which the pigs are killed and processed makes the job inherently dangerous for workers. A Smithfield manager testified in 1998, during an unfair labor practices trial, that at the Tar Heel plant in North Carolina it takes 5–10 minutes to slaughter and complete the process of "disassembly" of an animal, including draining, cleaning, and cleaving. One worker told HRW that the disassembly line moves so fast that there is no time to sharpen the knives, which means harder cuts have to be made, with the resultant injuries to workers. Similar criticism was made by other groups about Smithfield facilities in Poland and Romania.
The American Meat Institute, a trade group of which Smithfield is a member, disputed the claims in the report. The United Food and Commercial Workers Union used the report in its appeals to consumers and civil rights groups during its dispute with Smithfield. Coronavirus outbreak Smithfield closed numerous plants in order to help control the spread of the coronavirus. In mid-April 2020 the Smithfield plant in Sioux Falls, South Dakota became a "hotspot" for the COVID-19 pandemic, with 300 of the plant's 3,700 employees testing positive. On April 12 the company announced the indefinite closure of the plant, which processes 4 to 5 percent of the pork products in the United States. Smithfield has stated that plant closures could cause a meat shortage. By April 14, 438 workers in Smithfield's Sioux Falls plant were confirmed to be infected with the coronavirus, with CEO Kenneth Sullivan stating, "We have to operate these processing plants even when we have COVID." On April 15, the company announced the closure of a plant in Cudahy, Wisconsin that makes bacon and sausage, and a plant in Martin City, Missouri that makes hams. Both plants were dependent on the Sioux Falls slaughterhouse. Employees in both facilities had tested positive for coronavirus, and by April 15, 28 workers at the plant in Cudahy had tested positive. By April 17, the Sioux Falls outbreak had grown to 777 cases, of whom 634 were Smithfield employees and 143 were other people who became infected after contact with a Smithfield employee. In 2020, Smithfield was cited by OSHA for violating workplace safety rules relevant to the pandemic. Smithfield says OSHA's accusations are without merit and is disputing the citation. By September 11, 2020, the Sioux Falls plant was tied to nearly 1,300 worker infections and four worker deaths. On December 23, 2020, animal rights activist Matt Johnson of Direct Action Everywhere was interviewed on Fox Business posing as Smithfield Foods CEO Dennis Organ and claimed that the company's factories were petri dishes for the coronavirus. In the interview, he said the meat industry could be "effectively bringing on the next pandemic, with CDC data showing that three of four infectious diseases come from animals and the conditions inside of our farms can sometimes be petri dishes for new diseases". Fox Business later issued an acknowledgement and retraction of the interview with host Maria Bartiromo, admitting they "were punked." Medical supplies Smithfield is a supplier of heparin, which is extracted from pigs' intestines and used as a blood thinner, to the pharmaceutical industry. In 2017 the company opened a bioscience unit and joined a tissue engineering group funded by the United States Department of Defense to the tune of $80 million. According to Reuters, the group included Abbott Laboratories, Medtronic and United Therapeutics. Marketing Sports sponsorships In 2012, Smithfield announced a 15-race sponsorship with Richard Petty Motorsports (RPM) and driver Aric Almirola driving the No. 43 Ford Fusion in the NASCAR Sprint Cup Series. The sponsorship was increased to 30 races beginning in 2014. Smithfield rotates its brands on the car, featuring Smithfield, Eckrich, Farmland, Gwaltney, and Nathan's Famous. Smithfield and RPM parted ways in September 2017, allowing Smithfield to sponsor Stewart-Haas Racing in 2018. As of 2023, Smithfield continues to sponsor Almirola in the NASCAR Cup Series with Stewart-Haas Racing.
Almirola, who was set to retire from racing competition at the conclusion of the 2022 season, was coaxed by Smithfield to continue with his racing career and their partnership for 2023 and beyond in a multi-year agreement with the company. Meat substitutes Smithfield has started marketing meat substitutes similar to those sold by Impossible Foods. Smithfield sells these products under the Pure Farmland brand. Notes References Further reading Eisnitz, Gail A. (2006) [1997]. Slaughterhouse. Prometheus Books. Evans-Hylton, Patrick (2004). Smithfield: Ham Capital of the World. Arcadia Publishing. Hahn Niman, Nicolette (2010). Righteous Porkchop: Finding a Life and Good Food Beyond Factory Farms. HarperCollins. Horowitz, Roger (2005). Putting Meat on the American Table: Taste, Technology, Transformation. Johns Hopkins University Press. Wise, Steven M. (2009). An American Trilogy. Da Capo Press. External links 1936 establishments in Virginia 2013 mergers and acquisitions Companies based in Virginia Companies formerly listed on the New York Stock Exchange Food and drink companies established in 1936 Ham producers Intensive farming Meat companies of the United States Pig farming Sausage companies of the United States American subsidiaries of foreign companies Isle of Wight County, Virginia Meat packers
Smithfield Foods
Chemistry
7,563
38,213,904
https://en.wikipedia.org/wiki/FNSS%20Samur
Samur or SYHK (short for Seyyar Yüzücü Hücum Köprüsü) is a Turkish amphibious armoured vehicle-launched bridge. Samur is the Turkish word for sable. The equipment was developed and produced for the Turkish Armed Forces (TSK) by the Turkish company FNSS Defence Systems. After six years of development work, four units were delivered on September 14, 2011, in Ankara. The SYHK will improve the capability of the Turkish Army during river crossing operations. Characteristics Basic systems Central tire inflation system (CTIS) Traction control system (TC) Recovery crane CBRN and ballistic protected personnel cabin Standard and emergency anchoring systems Radio and intercom Controller area network bus CAN bus Integrated failure detection system Automatic bilge water pumping (manually if needed) Vehicle specifications Power plant: diesel engine Transmission: 6 speeds forward, 1 reverse (fully automatic) Number of axles: 4 (All-wheel drive) Suspension: Double wishbone independent air suspension Electric power system: Battery: 2 x 12 V, 120 Ah (C20) Alternator: 2 x 140 A brushless Brake system: Hydraulic brake and anti-lock braking system (ABS) (all wheels) Parking pawl: Integrated into transmission, spring mechanic and hydraulic controlled Tires: 16.00 R20 solid disc (Run-flat tire type) Max. speed: Land: Water: by two pump-jets Operational range: Max. grade: 50% Max. grade (side slope): 30% Max. steep obstacle height: Max. ditch width: Min. turning radius: Max. payload capacity: Double transport unit: MLC 70 (tracked vehicles) Triple transport unit: MLC 100 (wheeled vehicles) Deployed bridge mode: MLC 70 and MLC 100 General information Crew: 3 Weight: Vehicle class: MLC 35 Length: Width: Height: Ground clearance: (adjustable) References External links FNSS - SAMUR promotional video. Armoured vehicle-launched bridges Wheeled military vehicles Amphibious military vehicles Military recovery vehicles Military engineering vehicles Armoured fighting vehicles of Turkey Post–Cold War military equipment of Turkey Samur
FNSS Samur
Engineering
428
14,570,219
https://en.wikipedia.org/wiki/Adhesive%20bonding%20of%20semiconductor%20wafers
Adhesive bonding (also referred to as gluing or glue bonding) describes a wafer bonding technique in which an intermediate layer is applied to connect substrates of different types of materials. The connections produced can be soluble or insoluble. The commercially available adhesives can be organic or inorganic and are deposited on one or both substrate surfaces. Adhesives, especially SU-8 and benzocyclobutene (BCB), are specialized for the production of MEMS and electronic components. The procedure enables bonding temperatures from 1000 °C down to room temperature. Adhesive bonding has the advantage of a relatively low bonding temperature as well as the absence of electric voltage and current. Because the wafers are not in direct contact, this procedure enables the use of different substrates, such as silicon, glass, metals or other semiconductor materials. A drawback is that small structures become wider during patterning, which hampers the production of an accurate intermediate layer with tight dimension control. Further, the possibility of corrosion due to out-gassed products, thermal instability and penetration of moisture limits the reliability of the bonding process. Another disadvantage is that hermetically sealed encapsulation is not possible, because organic adhesives have a comparatively high permeability to gas and water molecules. Overview Adhesive bonding with organic materials such as BCB or SU-8 offers a simple process and the ability to form high-aspect-ratio microstructures. The bonding procedure is based on the polymerization of organic molecules into long polymer chains during annealing. This cross-linking reaction turns BCB and SU-8 into a solid polymer layer. The intermediate layer is applied by spin-on, spray-on, screen printing, embossing, dispensing or block printing on one or both substrate surfaces. The adhesive layer thickness depends on the viscosity, the rotational speed and the applied tool pressure. The procedural steps of adhesive bonding are divided into the following: cleaning and pre-treatment of the substrate surfaces; application of adhesive, solvent or other intermediate layers; contacting the substrates; and hardening the intermediate layer. The most established adhesives are polymers that enable connections of different materials at temperatures ≤ 200 °C. Due to this low process temperature, metal electrodes, electronics and various microstructures can be integrated on the wafer. The structuring of polymers, as well as the realization of cavities over movable elements, is possible using photolithography or dry etching. The hardening conditions depend on the materials used. Hardening of the adhesives is possible at room temperature, through heating cycles, using UV light, or by applying pressure. Process parameters The most important process parameters for achieving a high bonding strength are: adhesive material, coating thickness, bonding temperature, processing time, chamber pressure and tool pressure. Surface preparation of plastics There are three major requirements for creating a desirable surface for adhesive bonding of plastics: the weak boundary layer of the given material must be removed or chemically modified to create a strong boundary layer; the surface energy of the adherend should be higher than the surface energy of the adhesive, for good wetting; and the surface profile can be improved to provide mechanical interlocking. Meeting one of these major requirements will improve bonding; however, the most desirable surface will incorporate all three, as illustrated by the wetting estimate below.
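The wetting requirement above — the adherend's surface energy should exceed that of the adhesive — can be made quantitative with Young's equation, γ_sv = γ_sl + γ_lv·cos θ, where a smaller contact angle θ means better wetting. The sketch below is illustrative only: the surface-tension values (in mN/m) are assumed round numbers, not measured data for any specific adhesive or plastic.

```python
import numpy as np

# Hedged sketch: Young's equation, gamma_sv = gamma_sl + gamma_lv*cos(theta),
# rearranged for the contact angle theta. All values are illustrative
# assumptions (mN/m), not data for a specific material pair.
def contact_angle_deg(gamma_sv, gamma_sl, gamma_lv):
    cos_theta = (gamma_sv - gamma_sl) / gamma_lv
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# A higher-energy surface (e.g. after a surface treatment) vs. an untreated one:
for gamma_sv in (25.0, 45.0):
    theta = contact_angle_deg(gamma_sv, gamma_sl=5.0, gamma_lv=40.0)
    print(f"surface energy {gamma_sv} mN/m -> contact angle ≈ {theta:.0f}°")
```

With these assumed numbers, raising the surface energy from 25 to 45 mN/m drives the contact angle from about 60° toward 0° (complete spreading), which is why the surface treatments described next aim to increase the adherend's surface energy.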
Numerous techniques are available to help produce a desirable surface for adhesive bonding. Degreasing When preparing a surface for adhesive bonding, all oil and grease contamination must be removed in order to form a strong bond. Although the surface may appear to be clean, it is important to still use the degreasing process. Prior to performing the degreasing process, the compatibility of the solvent used and the adherend must be considered, to prevent irreversible damage to the surface or part. Vapor degreasing One method of degreasing is vapor degreasing, where the adherend is dipped in a solvent. When removed from the solvent, the vapors condense on the surface of the adherend and dissolve any contaminants that had existed. These contaminants then drip off the adherend with the condensed vapors. Instead of vapor degreasing The other method of degreasing requires a cloth or rag soaked in solvent, which can be used to wipe down the surface of the adherend to remove contaminants. It is important that all residue left behind from the solvents be removed, so that there are no detrimental effects on the adhesive bonding. Following degreasing process After degreasing, a good test to determine the cleanliness of the surface is to use a drop of water. If the drop spreads on the surface, a low contact angle and good wettability have been achieved, which indicates the surface is clean and ready for application of the adhesive. If the drop beads up or retains its shape, the degreasing process should be repeated. Abrasion In general, abrasion is superior to other methods of surface preparation because it is simple to perform and does not produce a significant amount of waste. To prepare the adherend for bonding, the surface can be sanded or grit-blasted with an abrasive material to roughen the surface and remove any loose material. Rough surfaces produce stronger bonds because they have an increased surface area for the adhesive to bond to, as compared to a relatively smooth surface. In addition, roughening the surface will also increase mechanical interlocking. Following abrasion, the adherend should always be wiped with solvent or an aqueous detergent solution to clean the surface of any oils and loose material, and then dried. After this process is complete, the adhesive can be applied. Peel ply For a peel ply, a thin, woven piece of material is applied to the adherend during fabrication. Because the material is woven, it will leave a tortuous surface when removed, which will improve bonding by mechanical interlocking. Prior to adhesive bonding, the woven material acts to protect the surface of the adherend from contaminants. When an adhesive is ready to be applied, the material can be peeled off, leaving a rough and clean surface for bonding. Corona discharge treatment Corona discharge treatment (CDT) is typically used to improve adhesion of ink or coatings on plastic films. In CDT, an electrode is connected to a high-voltage source. The film travels on a roller that is covered with a dielectric layer and is grounded. When a voltage is applied, the electrical discharge causes ionization of air, and a plasma is formed. In doing so, the surface of the film is oxidized, thus improving wetting and adhesion. Additionally, the discharge reacts with molecules of the adherend to form free radicals, which react with oxygen and eventually form polar groups that increase the surface energy of the adherend.
Another way CDT improves bonding is that it roughens the adherend by removing the amorphous regions of the surface, which increases the surface area and improves adhesive bonding. Depending on the type of adherend being treated with CDT, the treatment times may differ; some adherends may require longer treatment times to achieve the same surface energy. Flame treatment In flame treatment, a mixture of gas and air is used to produce a flame that is run over the surface of the adherend. The flame that is produced must be oxidizing in order to produce an effective treatment; this means that the flame is blue in color. Flame treatment can be performed using a setup similar to CDT, in which plastic film travels across a roller while the flame contacts it. In addition to more sophisticated methods, flame treatment can also be done by hand with the use of a torch; however, even and steady treatment of the surface is then more difficult to obtain. Once the flame treatment is completed, the part can be gently cleaned with water and air-dried, which will ensure that an excess of oxides is not formed. Control during the flame treatment is critical. Too much treatment will degrade the plastic, which will lead to poor adhesion. Too little treatment will not modify the surface enough and will also lead to poor adhesion. An additional aspect of flame treatment that must be considered is possible deformation of the adherend. Precise control of the flame will prevent this from occurring. Plasma treatment Plasma is a gas excited by electrical energy, and contains approximately equal densities of positively and negatively charged ions. The interaction of the electrons and ions in the plasma with the surface oxidizes the surface and forms free radicals. The oxidation of the surface removes unwanted contaminants and improves adhesion. In addition to removing contaminants, the plasma treatment also introduces polar groups that increase the surface energy of the adherend. Plasma treatment can produce adhesive bonds up to four times stronger than those on chemically or mechanically treated adherends. In general, plasma treatment is not used often in industry because it must be performed below atmospheric pressure, which makes the process expensive and less cost-effective. Chemical treatment Chemical treatments are used to change the composition and structure of the surface of the adherend and are often used in addition to degreasing and abrasion to maximize the strength of the adhesive bond. In addition, they increase the chance of other bonding forces occurring, such as hydrogen, dipole, and van der Waals bonding between the adherend and the adhesive. Chemical solutions can be applied to the surface of an adherend to either clean or alter the surface, depending on the chemical used. Solvents are used simply to clean the surfaces of any contaminants or debris; they do not increase the surface energy of the adherend. To modify the surface of the adherend, acid solutions can be used to etch and oxidize the surface. These solutions must be carefully prepared in order to ensure good bonding strength. These treatments can be made more effective by increasing the time and temperature of the application. However, too long a time can lead to excess reaction products that form and can hinder the bonding performance between the adhesive and adherend.
As with other surface preparation methods, a good test to verify a chemical treatment is to put a drop of water on the surface of the adherend. If the drop flattens or spreads out, the surface of the adherend has good wettability and should allow for good bonding. A final consideration when using chemical treatments is that of safety. The chemicals used in the treatments can be hazardous to human health, and before using any, the material safety data sheet for the particular chemical should be referenced. Ultraviolet radiation treatment Ultraviolet (UV) radiation plays a role in numerous surface treatments, including some of the aforementioned treatments, although it may not be the dominating factor. An example of a UV treatment where UV radiation is the primary factor affecting the surface preparation is the use of excimer lasers. Excimer lasers are extremely high energy and are used to create pulses of radiation. When the laser makes contact with the surface of the adherend, it removes a layer of material, thereby cleaning the surface. In addition, if the UV laser treatment is performed in the presence of air, the surface of the adherend can be oxidized, thus improving the surface energy. Finally, the radiation pulses can be used to create specific surface patterns that will increase surface area and improve bonding. SU-8 photoresist Overview SU-8 is a three-component UV-sensitive negative photoresist based on epoxy resin, gamma-butyrolactone, and a triaryl sulfonium salt. SU-8 polymerizes at approximately 100 °C and is temperature-stable up to 150 °C. It is CMOS- and bio-compatible and has excellent electrical, mechanical and fluidic properties. It also has a high cross-linking density, high chemical resistance, and high thermal stability. The viscosity depends on the mixture with the solvent and determines the achievable layer thickness (1.5 to 500 μm). Using multilayer coating, layer thicknesses up to 1 mm are reachable. The lithographic structuring is based on a triarylsulfonium photoinitiator that releases a Lewis acid during UV exposure. This acid works as a catalyst for the polymerization. The cross-linking of the molecules is activated over several annealing steps, the so-called post-exposure bake (PEB). SU-8 can achieve a high bonding yield. In addition, the substrate flatness, clean-room conditions and the wettability of the surface are important factors in achieving good bonding results. Procedural steps The standard process (compare to figure "Schematic bonding process") consists of applying SU-8 on the top wafer by spin-on or spray-on of thin layers (3 to 100 μm). Subsequently, the structuring of the photoresist using direct UV light exposure is applied, but it can also be achieved through deep reactive-ion etching (DRIE). During coating and structuring of the SU-8, the tempering steps before and after exposure have to be considered. Due to thermal layer stress, there is a risk of crack formation. While coating the photoresist, the formation of voids due to layer thickness inhomogeneity has to be avoided. The adhesive layer thickness should be larger than the flatness imperfection of the wafer to establish good contact.
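The dependence of coated thickness on spin speed mentioned above is often captured, for photoresists in general, by an empirical square-root scaling t ∝ ω^(−1/2). The sketch below applies that scaling to rescale a single calibration point; both the calibration values and the exponent are illustrative assumptions, not SU-8 datasheet numbers.

```python
# Hedged sketch: empirical spin-coating scaling t ~ omega^(-1/2), a rule of
# thumb commonly used for photoresists. The calibration point below is an
# assumed example, not a manufacturer's datasheet value.
t_ref_um = 10.0      # assumed film thickness at the reference spin speed, µm
w_ref_rpm = 3000.0   # reference spin speed, rpm

def thickness(w_rpm, t_ref=t_ref_um, w_ref=w_ref_rpm):
    """Estimate film thickness at spin speed w_rpm from one calibration point."""
    return t_ref * (w_ref / w_rpm) ** 0.5

for w in (1000.0, 2000.0, 3000.0, 4000.0):
    print(f"{w:5.0f} rpm -> ~{thickness(w):.1f} µm")
```

Halving the spin speed thus increases the estimated thickness by roughly 40%, which is why the recipe below fixes the spin parameters together with the bake steps.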
The procedural steps, based on a typical example, are: cleaning; thermal oxidation of the top wafer; dehydration; spin coating the SU-8; softbake (120 s at 65 °C, 300 s at 95 °C, cooling down); exposure with 165 to 200; post-exposure bake (2 to 120 min at 50 to 120 °C); relaxation time to room temperature; development; rinsing and dry spinning; hard bake at 50 to 150 °C for 5 to 120 min. For non-planar wafer surfaces or free-standing structures, spin-coating is not a very successful SU-8 deposition method. As a result, spray-on is mainly used on structured wafers. The bonding takes place at the polymerization temperature of SU-8, at approximately 100 °C. The soft-bake leaves a high residual solvent content, which minimizes intrinsic stress and improves cross-linking. The SU-8 layer is patterned using soft contact exposure followed by a post-exposure bake. The non-exposed SU-8 is removed by immersion in, e.g., propylene glycol methyl ether acetate (PGMEA). To ensure void-free bonding, a homogeneous SU-8 layer thickness over the wafer surface is important (compare to cross-section photo). To ensure good contact of the wafer pair, a constant pressure between 2.5 and 4.5 bar is applied during bonding. The frames should be kept above the non-flatness value of the wafer, since defects are usually caused by the curvature of the wafer. A shear strength of the bonded wafer pair of about 18 to 25 MPa is achievable. Examples Adhesive bonding using SU-8 is applicable to zero-level packaging technology for low-cost MEMS packaging. Metallic feed-throughs can be used for electrical connections to packaged elements through the adhesive layer. Biomedical and microfluidic devices are also fabricated based on SU-8 adhesive layers, as are microfluidic channels, movable micromechanical components, optical waveguides and UV-LIGA components. Benzocyclobutene (BCB) Overview Benzocyclobutene (BCB) is a hydrocarbon that is widely used in electronics manufacturing. BCB exists in a dry-etch and a photosensitive version, each requiring different procedural steps for structuring (compare BCB process flow). It releases only small amounts of by-products during curing, which enables a void-free bond. This polymer ensures very strong bonds and excellent chemical resistance to numerous acids, alkalis and solvents. BCB is over 90% transparent to visible light, which enables its use for optical MEMS applications. Compared to other polymers, BCB has a low dielectric constant and low dielectric loss. The polymerization of BCB takes place at a temperature around 250 to 300 °C, and the material is stable up to 350 °C. Using BCB does not ensure a sufficient hermeticity of sealed cavities for MEMS. Procedural steps The procedural steps for dry-etch BCB are: cleaning; supplying the adhesion promoter; drying of the primer; BCB deposition; for photosensitive BCB, exposure and development; for dry-etch BCB, pre-bake/soft-cure and patterning of the BCB layer by lithography and dry etching; bonding at a specific temperature and ambient pressure for a specific amount of time; and post-bake/hard-cure to form a solid BCB polymer layer. The wafers can be cleaned using H2O2 + H2SO4 or oxygen plasma. The cleaned wafers are rinsed with DI water and dried at elevated temperature, e.g. 100 to 200 °C for 120 min. The adhesion promoter with a specific thickness is deposited, e.g. spin-coated or contact-printed, on the wafer to improve the bonding strength. Spray coating is preferable when the adhesive is deposited on free-standing structures.
Subsequently, the BCB layer is spin- or spray-coated, usually 1 to 50 μm thick, onto the same wafer. To prevent the patterned layer from having a lower bond strength than the unpatterned layer due to cross-linking of the polymer, a soft-curing step is applied before bonding. The pre-curing of the BCB takes place for several minutes on the hot plate at a specific temperature ≤ 300 °C. The soft cure prevents bubble formation and unbonded areas, as well as distortion of the adhesive layer during compression, improving the alignment accuracy. The degree of polymerization should not be over 50%, so that the layer is robust enough to be patterned and still sufficiently adhesive to be bonded. If the BCB is hard-baked (far over 50%), it loses its adhesive properties, resulting in an increased amount of void formation. Likewise, if the soft-curing is above 210 °C, the adhesive cures too much, so that the material is not soft and sticky enough to achieve a high bonding strength. The substrates with the intermediate layer are pressed together; subsequent curing results in a bond. The post-bake process is applied at 180 to 320 °C for 30 to 240 min, usually in a specific atmosphere or vacuum in the bond chamber. This is necessary to hard-cure the BCB. The vacuum prevents air from being trapped at the bond interface and pumps out the gases from out-gassing residual solvents during annealing. The temperature and the curing time are variable, so with a higher temperature the curing time can be reduced, owing to quicker cross-linking. The final bonding layer thickness depends on the thickness of the cured BCB, the spinning speed and the shrink rate. Examples Adhesive bonding using a BCB intermediate layer is a possible method for packaging and sealing of MEMS devices, including structured Si wafers. Its use is specified for applications that do not require hermetic sealing, e.g. MOEMS mirror arrays, RF MEMS switches and tunable capacitors. BCB bonding is used in the fabrication of channels for fluidic devices, for transferring protruding surface structures, as well as for CMOS controller wafers and integrated SMA microactuators. Technical specifications References Electronics manufacturing Packaging (microfabrication) Semiconductor technology Wafer bonding
Adhesive bonding of semiconductor wafers
Materials_science,Engineering
4,021
10,151,726
https://en.wikipedia.org/wiki/Atmospheric%20tide
Atmospheric tides are global-scale periodic oscillations of the atmosphere. In many ways they are analogous to ocean tides. They can be excited by: the regular day-night cycle in the Sun's heating of the atmosphere (insolation); the gravitational pull of the Moon; non-linear interactions between tides and planetary waves; and large-scale latent heat release due to deep convection in the tropics. General characteristics The largest-amplitude atmospheric tides are mostly generated in the troposphere and stratosphere when the atmosphere is periodically heated, as water vapor and ozone absorb solar radiation during the day. These tides propagate away from the source regions and ascend into the mesosphere and thermosphere. Atmospheric tides can be measured as regular fluctuations in wind, temperature, density and pressure. Although atmospheric tides share much in common with ocean tides, they have two key distinguishing features: Atmospheric tides are primarily excited by the Sun's heating of the atmosphere, whereas ocean tides are excited by the Moon's gravitational pull and to a lesser extent by the Sun's gravity. This means that most atmospheric tides have periods of oscillation related to the 24-hour length of the solar day, whereas ocean tides have periods of oscillation related both to the solar day as well as to the longer tidal lunar day (time between successive lunar transits) of about 24 hours 51 minutes. Atmospheric tides propagate in an atmosphere where density varies significantly with height. A consequence of this is that their amplitudes naturally increase exponentially as the tide ascends into progressively more rarefied regions of the atmosphere (for an explanation of this phenomenon, see below). In contrast, the density of the oceans varies only slightly with depth, and so there the tides do not necessarily vary in amplitude with depth. At ground level, atmospheric tides can be detected as regular but small oscillations in surface pressure with periods of 24 and 12 hours. However, at greater heights, the amplitudes of the tides can become very large. In the mesosphere (heights of about 50–100 km) atmospheric tides can reach amplitudes of more than 50 m/s and are often the most significant part of the motion of the atmosphere. The reason for this dramatic growth in amplitude from tiny fluctuations near the ground to oscillations that dominate the motion of the mesosphere lies in the fact that the density of the atmosphere decreases with increasing height. As tides or waves propagate upwards, they move into regions of lower and lower density. If the tide or wave is not dissipating, then its kinetic energy density must be conserved. Since the density is decreasing, the amplitude of the tide or wave increases correspondingly so that energy is conserved. Following this growth with height, atmospheric tides have much larger amplitudes in the middle and upper atmosphere than they do at ground level. Solar atmospheric tides The largest amplitude atmospheric tides are generated by the periodic heating of the atmosphere by the Sun – the atmosphere is heated during the day and not heated at night. This regular diurnal (daily) cycle in heating generates thermal tides that have periods related to the solar day. It might initially be expected that this diurnal heating would give rise to tides with a period of 24 hours, corresponding to the heating's periodicity. However, observations reveal that large amplitude tides are generated with periods of 24 and 12 hours.
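The growth mechanism just described — conserved kinetic energy density in an exponentially thinning atmosphere — can be illustrated with a short numerical sketch. This is a minimal, hedged example: the scale height of 7 km and the ground-level tidal wind of 0.1 m/s are assumed round values, not observations. Since the density falls as exp(−z/H), conserving ρu² implies the velocity amplitude grows as exp(z/2H).

```python
import numpy as np

# Minimal sketch (assumed values): amplitude growth of a non-dissipating
# tide in an isothermal atmosphere with scale height H.
H = 7.0e3                                  # scale height in metres (assumed ~7 km)
z = np.array([0.0, 30e3, 60e3, 90e3])      # altitudes in metres

# Density falls as exp(-z/H); conserving kinetic energy density rho*u^2
# means the velocity amplitude u grows as exp(z/(2H)).
growth = np.exp(z / (2.0 * H))

u0 = 0.1                                   # assumed ground-level tidal wind, m/s
for zi, gi in zip(z, growth):
    print(f"z = {zi/1e3:5.0f} km: growth factor {gi:8.1f}, u ≈ {u0*gi:6.1f} m/s")
```

With these assumptions, a barely measurable 0.1 m/s oscillation at the surface grows to several tens of m/s near 90 km, in line with the mesospheric amplitudes quoted above.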
Tides have also been observed with periods of 8 and 6 hours, although these latter tides generally have smaller amplitudes. This set of periods occurs because the solar heating of the atmosphere occurs in an approximate square wave profile and so is rich in harmonics. When this pattern is decomposed into separate frequency components using a Fourier transform, as well as the mean and daily (24-hour) variation, significant oscillations with periods of 12, 8 and 6 hours are produced. Tides generated by the gravitational effect of the Sun are very much smaller than those generated by solar heating. From this point on, "solar tides" refers only to thermal solar tides. Solar energy is absorbed throughout the atmosphere; some of the most significant absorbers in this context are water vapor (at about 0–15 km, in the troposphere), ozone (at about 30–60 km, in the stratosphere) and molecular oxygen and molecular nitrogen (at about 120–170 km, in the thermosphere). Variations in the global distribution and density of these species result in changes in the amplitude of the solar tides. The tides are also affected by the environment through which they travel. Solar tides can be separated into two components: migrating and non-migrating. Migrating solar tides Migrating tides are Sun synchronous – from the point of view of a stationary observer on the ground they propagate westwards with the apparent motion of the Sun. As the migrating tides stay fixed relative to the Sun, a pattern of excitation is formed that is also fixed relative to the Sun. Changes in the tide observed from a stationary viewpoint on the Earth's surface are caused by the rotation of the Earth with respect to this fixed pattern. Seasonal variations of the tides also occur as the Earth tilts relative to the Sun and so relative to the pattern of excitation. The migrating solar tides have been extensively studied both through observations and mechanistic models. Non-migrating solar tides Non-migrating tides can be thought of as global-scale waves with the same periods as the migrating tides. However, non-migrating tides do not follow the apparent motion of the Sun. Either they do not propagate horizontally, they propagate eastwards, or they propagate westwards at a different speed from the Sun. These non-migrating tides may be generated by differences in topography with longitude, land-sea contrast, and surface interactions. An important source is latent heat release due to deep convection in the tropics. The primary source for the 24-hr tide is in the lower atmosphere, where surface effects are important. This is reflected in a relatively large non-migrating component seen in longitudinal differences in tidal amplitudes. The largest amplitudes have been observed over South America, Africa and Australia. Lunar atmospheric tides Atmospheric tides are also produced through the gravitational effects of the Moon. Lunar (gravitational) tides are much weaker than solar thermal tides and are generated by the motion of the Earth's oceans (caused by the Moon) and, to a lesser extent, by the effect of the Moon's gravitational attraction on the atmosphere. Classical tidal theory The basic characteristics of the atmospheric tides are described by the classical tidal theory. By neglecting mechanical forcing and dissipation, the classical tidal theory assumes that atmospheric wave motions can be considered as linear perturbations of an initially motionless zonal mean state that is horizontally stratified and isothermal.
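Before turning to the results of the classical theory, the harmonic decomposition of the heating described above can be reproduced numerically. The sketch below is illustrative only: it assumes an idealized day-night heating profile (the daytime half of a cosine, zero at night) rather than real insolation. For this particular idealization the 24-hour, 12-hour and 6-hour components are non-zero while the 8-hour harmonic happens to vanish; more square-wave-like realistic profiles excite the 8-hour tide as well.

```python
import numpy as np

# Illustrative sketch (idealized heating, not observational data): the
# day-night cycle approximated by the daytime half of a cosine.
t = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)          # time in hours
heating = np.maximum(0.0, np.cos(2.0 * np.pi * t / 24.0))

# Real FFT: index k corresponds to an oscillation period of 24/k hours.
amps = np.abs(np.fft.rfft(heating)) * 2.0 / len(t)
for k in range(1, 5):
    print(f"period {24 // k:2d} h: amplitude {amps[k]:.3f}")
```

The printed amplitudes (about 0.50, 0.21, 0.00 and 0.04 for 24, 12, 8 and 6 hours) show how a single daily heating cycle nonetheless forces tides at several harmonic periods, which the classical theory below treats as distinct tidal modes.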
The two major results of the classical theory are: atmospheric tides are eigenmodes of the atmosphere, described by Hough functions; and amplitudes grow exponentially with height. Basic equations The primitive equations lead to the linearized equations for perturbations (primed variables) in a spherical isothermal atmosphere, with the definitions: u′ the eastward zonal wind, v′ the northward meridional wind, w′ the upward vertical wind, Φ′ the geopotential, N² the square of the Brunt–Väisälä (buoyancy) frequency, Ω the angular velocity of the Earth, ϱ the density, z the altitude, λ the geographic longitude, φ the geographic latitude, J′ the heating rate per unit mass, a the radius of the Earth, g the gravity acceleration, H the constant scale height, and t the time. Separation of variables The set of equations can be solved for atmospheric tides, i.e., longitudinally propagating waves of zonal wavenumber s and frequency σ. Zonal wavenumber s is a positive integer, so that positive values of σ correspond to eastward propagating tides and negative values to westward propagating tides. A separation approach — writing each perturbation as the product of a latitude-dependent function, a height-dependent function and the wave phase factor — together with some manipulations yields expressions for the latitudinal and vertical structure of the tides. Laplace's tidal equation The latitudinal structure of the tides is described by the horizontal structure equation, which is also called Laplace's tidal equation: LΘₙ + εₙΘₙ = 0, with the Laplace operator L defined using μ = sin φ and ν = σ/(2Ω), and eigenvalue εₙ = (2Ωa)²/(g hₙ). Hence, atmospheric tides are eigenoscillations (eigenmodes) of Earth's atmosphere with eigenfunctions Θₙ, called Hough functions, and eigenvalues εₙ. The latter define the equivalent depth hₙ, which couples the latitudinal structure of the tides with their vertical structure. General solution of Laplace's equation Longuet-Higgins has completely solved Laplace's equations and has discovered tidal modes with negative eigenvalues (Figure 2). There exist two kinds of waves: class 1 waves (sometimes called gravity waves), labelled by positive n, and class 2 waves (sometimes called rotational waves), labelled by negative n. Class 2 waves owe their existence to the Coriolis force and can only exist for periods greater than 12 hours (or |ν| ≤ 1). Tidal waves can be either internal (travelling waves) with positive eigenvalues (or equivalent depth), which have finite vertical wavelengths and can transport wave energy upward, or external (evanescent waves) with negative eigenvalues and infinitely large vertical wavelengths, meaning that their phases remain constant with altitude. These external wave modes cannot transport wave energy, and their amplitudes decrease exponentially with height outside their source regions. Even numbers of n correspond to waves symmetric with respect to the equator, and odd numbers to antisymmetric waves. The transition from internal to external waves appears at the critical eigenvalue, or equivalently at vertical wavenumber kz = 0 and infinite vertical wavelength, respectively. The fundamental solar diurnal tidal mode that optimally matches the solar heat input configuration, and thus is most strongly excited, is the Hough mode (1, −2) (Figure 3). It depends on local time and travels westward with the Sun. It is an external mode of class 2, with a negative eigenvalue. Its maximum pressure amplitude on the ground is about 60 Pa. The largest solar semidiurnal wave is mode (2, 2), with maximum pressure amplitudes at the ground of 120 Pa. It is an internal class 1 wave whose amplitude increases exponentially with altitude. Although its solar excitation is half of that of mode (1, −2), its amplitude on the ground is larger by a factor of two. This indicates the effect of suppression of external waves, in this case by a factor of four.
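To see how an equivalent depth translates into the vertical structure treated in the next section, one can evaluate the standard isothermal-atmosphere relation λz = 2πH/√(κH/hₙ − 1/4) (as given in classical tidal theory texts such as Chapman and Lindzen). The sketch below is illustrative: the scale height H = 7 km and κ = R/c_p ≈ 0.286 are assumed round values, and the equivalent depths used (≈0.69 km for the gravest propagating diurnal mode (1, 1), ≈7.85 km for the semidiurnal mode (2, 2)) are the commonly quoted textbook values.

```python
import numpy as np

# Hedged sketch: vertical wavelength of a tidal mode from its equivalent
# depth h_n, using the standard isothermal-atmosphere relation
#   lambda_z = 2*pi*H / sqrt(kappa*H/h_n - 1/4).
# H and kappa are assumed round values; h_n are textbook equivalent depths.
H = 7.0          # scale height, km (assumed)
kappa = 0.286    # R/c_p for dry air

modes = {"(1, 1) diurnal": 0.69, "(2, 2) semidiurnal": 7.85}   # h_n in km
for name, h in modes.items():
    arg = kappa * H / h - 0.25
    if arg > 0:          # internal (vertically propagating) mode
        lam = 2.0 * np.pi * H / np.sqrt(arg)
        print(f"{name}: h_n = {h} km -> vertical wavelength ≈ {lam:.0f} km")
    else:                # external (evanescent) mode
        print(f"{name}: h_n = {h} km -> evanescent (no real wavelength)")
```

With these assumed numbers the propagating diurnal mode has a vertical wavelength of roughly 27 km, while the (2, 2) mode's wavelength comes out at several hundred kilometres — consistent with its nearly external character discussed above.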
Vertical structure equation For bounded solutions, and at altitudes above the forcing region, the vertical structure equation in its canonical form is a one-dimensional wave equation for the height-dependent amplitude, whose character (oscillatory or evanescent) is set by the sign of the squared vertical wavenumber. Propagating solutions Therefore, each wavenumber/frequency pair (a tidal component) is a superposition of associated Hough functions (often called tidal modes in the literature) of index n. The nomenclature is such that a negative value of n refers to evanescent modes (no vertical propagation) and a positive value to propagating modes. The equivalent depth hₙ is linked to the vertical wavelength λz,n, since kz,n is the vertical wavenumber: λz,n = 2π/kz,n. For propagating solutions, the vertical group velocity becomes positive (upward energy propagation) only for one sign of the vertical wavenumber, the sign depending on whether the wave propagates westward or eastward. At a given height, the wave amplitude maximizes where the phase is stationary; for a fixed longitude, this in turn always results in downward phase progression as time progresses, independent of the propagation direction. This is an important result for the interpretation of observations: downward phase progression in time means an upward propagation of energy and therefore a tidal forcing lower in the atmosphere. Amplitude increases with height proportional to exp(z/2H), as density decreases. Dissipation Damping of the tides occurs primarily in the lower thermosphere region, and may be caused by turbulence from breaking gravity waves. In a phenomenon similar to ocean waves breaking on a beach, the energy dissipates into the background atmosphere. Molecular diffusion also becomes increasingly important at higher levels in the lower thermosphere as the mean free path increases in the rarefied atmosphere. At thermospheric heights, attenuation of atmospheric waves, mainly due to collisions between the neutral gas and the ionospheric plasma, becomes significant, so that above about 150 km altitude all wave modes gradually become external waves, and the Hough functions degenerate to spherical functions; e.g., mode (1, −2) develops to the spherical function P₁¹(θ), mode (2, 2) becomes P₂²(θ), with θ the co-latitude, etc. Within the thermosphere, mode (1, −2) is the predominant mode, reaching diurnal temperature amplitudes at the exosphere of at least 140 K and horizontal winds of the order of 100 m/s and more, increasing with geomagnetic activity. It is responsible for the electric Sq currents within the ionospheric dynamo region between about 100 and 200 km altitude. Both diurnal and semidiurnal tides can be observed across the ionospheric dynamo region with incoherent scatter radars by tracking the tidal motion of ionospheric plasma. Effects of atmospheric tide The tides form an important mechanism for transporting energy from the lower atmosphere into the upper atmosphere, while dominating the dynamics of the mesosphere and lower thermosphere. Therefore, understanding the atmospheric tides is essential in understanding the atmosphere as a whole. Modeling and observations of atmospheric tides are needed in order to monitor and predict changes in the Earth's atmosphere. See also Atmospheric wave Tide Earth tide Mesosphere Thermosphere Ionospheric dynamo region Notes and references Atmospheric dynamics
Atmospheric tide
Chemistry
2,706
68,296,332
https://en.wikipedia.org/wiki/Drug%20Safety%20Research%20Unit
Drug Safety Research Unit (DSRU) is an independent, non-profit organisation in the United Kingdom, in the field of pharmacology. It is an associate college of the University of Portsmouth, offering postgraduate qualifications in pharmacovigilance. The unit is based in Southampton, and was established in 1981 by Bill Inman and David Finney. Its director as of July 2021 is Professor Saad Shakir. It is operated by the Drug Safety Research Trust, a charitable organization registered in England and Wales. References External links 1981 establishments in the United Kingdom Organisations based in Southampton Drug safety University of Portsmouth
Drug Safety Research Unit
Chemistry
125
23,223,675
https://en.wikipedia.org/wiki/Boing%21%20Docomodake%20DS
Boing! Docomodake DS is a puzzle-platform game starring NTT DoCoMo's mascot, Docomodake. Docomodake's rival is NHK's mascot, Domo-Kun, who has appeared in games of his own. It was developed by Suzak and AQ Interactive. Reception Boing! Docomodake DS received "mixed or average" reviews from critics, according to the review aggregation website Metacritic. References External links (archive) IGN review (archive) Boing! Docomodake DS at GameFAQs 2007 video games Advergames AQ Interactive games Multiplayer and single-player video games Nintendo DS games Nintendo DS-only games NTT Docomo Puzzle-platformers Suzak Inc. games UTV Ignition Games games Video games developed in Japan
Boing! Docomodake DS
Technology
154
29,554,254
https://en.wikipedia.org/wiki/Eremobiotus
Eremobiotus is a genus of tardigrade in the class Eutardigrada. Species Eremobiotus alicatai (Binda 1969) Eremobiotus ovezovae Biserov, 1992 References External links Parachela (tardigrade) Tardigrade genera Extremophiles
Eremobiotus
Biology,Environmental_science
71
27,915,973
https://en.wikipedia.org/wiki/GameSpy%20Technology
GameSpy Technology (also known as GameSpy Industries, Inc.), a division of Glu Mobile, was the developer of the GameSpy Technology product, a suite of middleware tools, software, and services for use in the video game industry. GameSpy Technology was acquired by Glu Mobile in 2012. The company and its services were shut down in May 2014, along with GameSpy itself. Technology GameSpy Technology consisted of an array of portable C SDKs that plugged into hosted web services providing the following functionality: Game advertising and player matchmaking Player and team scores and statistics gathering, arbitration, ranking, rules processing, and leaderboards Arbitrary game data storage and retrieval, from files to atomic data Team, guild, and clan services and management NAT negotiation In-game purchases and downloadable content Presence, authentication, game invites, and instant messaging Player chat rooms CD key authentication Voice communication HTTP, XML, and socket data transport Supported platforms Microsoft Windows Mac OS PlayStation 2 PlayStation 3 PlayStation Portable Nintendo Wii Nintendo DS Nintendo DSi Linux iPhone Integrated Technology Partners Game Engines Unreal Engine 3 External links GameSpy Technology's Official Site References Video game engines Middleware
GameSpy Technology
Technology,Engineering
231
20,898,181
https://en.wikipedia.org/wiki/Geometric%20combinatorics
Geometric combinatorics is a branch of mathematics in general and combinatorics in particular. It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. Other important areas include metric geometry of polyhedra, such as the Cauchy theorem on rigidity of convex polytopes. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics. Special polytopes are also considered, such as the permutohedron, associahedron and Birkhoff polytope. See also Topological combinatorics References What is geometric combinatorics?, Ezra Miller and Vic Reiner, 2004 Topics in Geometric Combinatorics Geometric Combinatorics, Edited by: Ezra Miller and Victor Reiner Combinatorics Discrete geometry
Geometric combinatorics
Mathematics
212
25,672,608
https://en.wikipedia.org/wiki/Lyonia%20%28journal%29
Lyonia was an electronic, peer-reviewed interdisciplinary scientific journal published by the Harold L. Lyon Arboretum and stored in the ScholarSpace digital institutional repository of the University of Hawaii at Manoa library. The journal was dedicated to the distribution of original ecological research and how it may be implemented in environmental protection. Papers published in Lyonia covered a range of disciplines including ecology, biology, anthropology, economics, law, etc. that pertain to conservation, management, sustainable development, and education in mountain and island settings. The journal was particularly interested in the mountain forests in tropical areas. Lyonia was first published in March 1974 and continued until December 1989. See also Lyon Arboretum ScholarSpace References External links Lyon Arboretum Botany journals Defunct journals of the United States Sustainability journals
Lyonia (journal)
Environmental_science
152
355,377
https://en.wikipedia.org/wiki/Photonic%20crystal
A photonic crystal is an optical nanostructure in which the refractive index changes periodically. This affects the propagation of light in the same way that the structure of natural crystals gives rise to X-ray diffraction and that the atomic lattices (crystal structure) of semiconductors affect their conductivity of electrons. Photonic crystals occur in nature in the form of structural coloration and animal reflectors, and, as artificially produced, promise to be useful in a range of applications. Photonic crystals can be fabricated for one, two, or three dimensions. One-dimensional photonic crystals can be made of thin film layers deposited on each other. Two-dimensional ones can be made by photolithography, or by drilling holes in a suitable substrate. Fabrication methods for three-dimensional ones include drilling under different angles, stacking multiple 2-D layers on top of each other, direct laser writing, or, for example, instigating self-assembly of spheres in a matrix and dissolving the spheres. Photonic crystals can, in principle, find uses wherever light must be manipulated. For example, dielectric mirrors are one-dimensional photonic crystals which can produce ultra-high reflectivity mirrors at a specified wavelength. Two-dimensional photonic crystals called photonic-crystal fibers are used for fiber-optic communication, among other applications. Three-dimensional crystals may one day be used in optical computers, and could lead to more efficient photovoltaic cells. Although the energy of light (and all electromagnetic radiation) is quantized in units called photons, the analysis of photonic crystals requires only classical physics. "Photonic" in the name is a reference to photonics, a modern designation for the study of light (optics) and optical engineering. Indeed, the first research into what we now call photonic crystals may have been as early as 1887 when the English physicist Lord Rayleigh experimented with periodic multi-layer dielectric stacks, showing they can effect a photonic band-gap in one dimension. Research interest grew with work in 1987 by Eli Yablonovitch and Sajeev John on periodic optical structures with more than one dimension—now called photonic crystals. Introduction Photonic crystals are composed of periodic dielectric, metallo-dielectric—or even superconductor microstructures or nanostructures that affect electromagnetic wave propagation in the same way that the periodic potential in a semiconductor crystal affects the propagation of electrons, determining allowed and forbidden electronic energy bands. Photonic crystals contain regularly repeating regions of high and low refractive index. Light waves may propagate through this structure or propagation may be disallowed, depending on their wavelength. Wavelengths that may propagate in a given direction are called modes, and the ranges of wavelengths which propagate are called bands. Disallowed bands of wavelengths are called photonic band gaps. This gives rise to distinct optical phenomena, such as inhibition of spontaneous emission, high-reflecting omni-directional mirrors, and low-loss-waveguiding. The bandgap of photonic crystals can be understood as the destructive interference of multiple reflections of light propagating in the crystal at each interface between layers of high- and low- refractive index regions, akin to the bandgaps of electrons in solids. There are two strategies for opening up the complete photonic band gap. 
The first is to increase the refractive index contrast, so that the band gap in each direction becomes wider; the second is to make the Brillouin zone more similar to a sphere. However, the former is limited by the available technologies and materials, and the latter is restricted by the crystallographic restriction theorem. For this reason, the photonic crystals with a complete band gap demonstrated to date have a face-centered cubic lattice, with the most spherical Brillouin zone, and are made of high-refractive-index semiconductor materials. Another approach is to exploit quasicrystalline structures with no crystallographic limits. A complete photonic bandgap was reported for low-index polymer quasicrystalline samples manufactured by 3D printing. The periodicity of the photonic crystal structure must be around or greater than half the wavelength (in the medium) of the light waves in order for interference effects to be exhibited. Visible light ranges in wavelength from about 400 nm (violet) to about 700 nm (red), and the resulting wavelength inside a material is obtained by dividing that by the average index of refraction. The repeating regions of high and low dielectric constant must, therefore, be fabricated at this scale. In one dimension, this is routinely accomplished using the techniques of thin-film deposition. History Photonic crystals have been studied in one form or another since 1887, but no one used the term photonic crystal until over 100 years later—after Eli Yablonovitch and Sajeev John published two milestone papers on photonic crystals in 1987. The early history is well-documented in the form of a story when it was identified as one of the landmark developments in physics by the American Physical Society. Before 1987, one-dimensional photonic crystals in the form of periodic multi-layer dielectric stacks (such as the Bragg mirror) were studied extensively. Lord Rayleigh started their study in 1887, by showing that such systems have a one-dimensional photonic band-gap, a spectral range of large reflectivity, known as a stop-band. Today, such structures are used in a diverse range of applications—from reflective coatings to enhancing LED efficiency to highly reflective mirrors in certain laser cavities (see, for example, VCSEL). The pass-bands and stop-bands in photonic crystals were first reduced to practice by Melvin M. Weiner, who called those crystals "discrete phase-ordered media." Weiner achieved those results by extending Darwin's dynamical theory for x-ray Bragg diffraction to arbitrary wavelengths, angles of incidence, and cases where the incident wavefront at a lattice plane is scattered appreciably in the forward-scattered direction. A detailed theoretical study of one-dimensional optical structures was performed by Vladimir P. Bykov, who was the first to investigate the effect of a photonic band-gap on the spontaneous emission from atoms and molecules embedded within the photonic structure. Bykov also speculated as to what could happen if two- or three-dimensional periodic optical structures were used. The concept of three-dimensional photonic crystals was then discussed by Ohtaka in 1979, who also developed a formalism for the calculation of the photonic band structure. However, these ideas did not take off until after the publication of two milestone papers in 1987 by Yablonovitch and John. Both these papers concerned high-dimensional periodic optical structures, i.e., photonic crystals.
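A quick calculation makes the length scales discussed above concrete. In a quarter-wave Bragg mirror — the archetypal one-dimensional photonic crystal — each layer has an optical thickness of a quarter wavelength, i.e. a physical thickness d = λ/(4n). The sketch below evaluates this for a TiO₂/SiO₂ stack centred at 600 nm; the refractive indices (≈2.4 and ≈1.5) are typical assumed values, not taken from the article.

```python
# Minimal sketch (assumed indices): layer thicknesses of a quarter-wave
# Bragg stack, the simplest one-dimensional photonic crystal.
wavelength_nm = 600.0                # design (centre) wavelength in vacuum
layers = {"TiO2": 2.4, "SiO2": 1.5}  # assumed typical refractive indices

for material, n in layers.items():
    d = wavelength_nm / (4.0 * n)    # quarter-wave physical thickness
    print(f"{material} (n = {n}): d = {d:.0f} nm")

# One period of the stack is the sum of the two layer thicknesses, which
# is of the order of half the wavelength inside the medium, as described above.
period = sum(wavelength_nm / (4.0 * n) for n in layers.values())
print(f"period = {period:.0f} nm")
```

The resulting period of roughly 160 nm shows why optical-scale photonic crystals demand nanometre-precision fabrication, a recurring theme in the history that follows.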
Yablonovitch's main goal was to engineer photonic density of states to control the spontaneous emission of materials embedded in the photonic crystal. John's idea was to use photonic crystals to affect localisation and control of light. After 1987, the number of research papers concerning photonic crystals began to grow exponentially. However, due to the difficulty of fabricating these structures at optical scales (see Fabrication challenges), early studies were either theoretical or in the microwave regime, where photonic crystals can be built on the more accessible centimetre scale. (This fact is due to a property of the electromagnetic fields known as scale invariance. In essence, electromagnetic fields, as the solutions to Maxwell's equations, have no natural length scale—so solutions for centimetre scale structures at microwave frequencies are the same as for nanometre scale structures at optical frequencies.) By 1991, Yablonovitch had demonstrated the first three-dimensional photonic band-gap in the microwave regime. The structure that Yablonovitch was able to produce involved drilling an array of holes in a transparent material, where the holes of each layer form an inverse diamond structure – today it is known as Yablonovite. In 1996, Thomas Krauss demonstrated a two-dimensional photonic crystal at optical wavelengths. This opened the way to fabricate photonic crystals in semiconductor materials by borrowing methods from the semiconductor industry. Pavel Cheben demonstrated a new type of photonic crystal waveguide – the subwavelength grating (SWG) waveguide. The SWG waveguide operates in the subwavelength region, away from the bandgap. It allows the waveguide properties to be controlled directly by nanoscale engineering of the resulting metamaterial while mitigating wave interference effects. This provided “a missing degree of freedom in photonics” and resolved an important limitation of silicon photonics: its restricted set of available materials, insufficient to achieve complex optical on-chip functions. Today, such techniques use photonic crystal slabs, which are two-dimensional photonic crystals "etched" into slabs of semiconductor. Total internal reflection confines light to the slab, and allows photonic crystal effects, such as engineering photonic dispersion in the slab. Researchers around the world are looking for ways to use photonic crystal slabs in integrated computer chips, to improve optical processing of communications—both on-chip and between chips. The autocloning fabrication technique, proposed for infrared and visible range photonic crystals by Sato et al. in 2002, uses electron-beam lithography and dry etching: lithographically formed layers of periodic grooves are stacked by regulated sputter deposition and etching, resulting in "stationary corrugations" and periodicity. Titanium dioxide/silica and tantalum pentoxide/silica devices were produced, exploiting their dispersion characteristics and suitability to sputter deposition. Such techniques have yet to mature into commercial applications, but two-dimensional photonic crystals are commercially used in photonic crystal fibres (otherwise known as holey fibres, because of the air holes that run through them). Photonic crystal fibres were first developed by Philip Russell in 1998, and can be designed to possess enhanced properties over (normal) optical fibres. Study has proceeded more slowly in three-dimensional than in two-dimensional photonic crystals. This is because of more difficult fabrication.
Three-dimensional photonic crystal fabrication had no inheritable semiconductor industry techniques to draw on. Attempts have been made, however, to adapt some of the same techniques, and quite advanced examples have been demonstrated, for example in the construction of "woodpile" structures constructed on a planar layer-by-layer basis. Another strand of research has tried to construct three-dimensional photonic structures from self-assembly—essentially letting a mixture of dielectric nano-spheres settle from solution into three-dimensionally periodic structures that have photonic band-gaps. Vasily Astratov's group from the Ioffe Institute realized in 1995 that natural and synthetic opals are photonic crystals with an incomplete bandgap. The first demonstration of an "inverse opal" structure with a complete photonic bandgap came in 2000, from researchers at the University of Toronto, and Institute of Materials Science of Madrid (ICMM-CSIC), Spain. The ever-expanding field of natural photonics, bioinspiration and biomimetics—the study of natural structures to better understand and use them in design—is also helping researchers in photonic crystals. For example, in 2006 a naturally occurring photonic crystal was discovered in the scales of a Brazilian beetle. Analogously, in 2012 a diamond crystal structure was found in a weevil and a gyroid-type architecture in a butterfly. More recently, gyroid photonic crystals have been found in the feather barbs of blue-winged leafbirds and are responsible for the bird's shimmery blue coloration. Some publications suggest the feasibility of the complete photonic band gap in the visible range in photonic crystals with optically saturated media that can be implemented by using laser light as an external optical pump. Construction strategies The fabrication method depends on the number of dimensions that the photonic bandgap must exist in. One-dimensional photonic crystals To produce a one-dimensional photonic crystal, thin film layers of different dielectric constant may be periodically deposited on a surface, which leads to a band gap in a particular propagation direction (such as normal to the surface). A Bragg grating is an example of this type of photonic crystal. One-dimensional photonic crystals can include layers of non-linear optical materials in which the non-linear behaviour is accentuated due to field enhancement at wavelengths near a so-called degenerate band edge. This field enhancement (in terms of intensity) can reach N², where N is the total number of layers. However, by using layers which include an optically anisotropic material, it has been shown that the field enhancement can reach N⁴, which, in conjunction with non-linear optics, has potential applications such as in the development of an all-optical switch. A one-dimensional photonic crystal can be implemented using repeated alternating layers of a metamaterial and vacuum. If the metamaterial is such that the relative permittivity and permeability follow the same wavelength dependence, then the photonic crystal behaves identically for TE and TM modes, that is, for both s and p polarizations of light incident at an angle. Recently, researchers fabricated a graphene-based Bragg grating (one-dimensional photonic crystal) and demonstrated that it supports excitation of surface electromagnetic waves in the periodic structure by using a 633 nm He–Ne laser as the light source. In addition, a novel type of one-dimensional graphene-dielectric photonic crystal has also been proposed.
This structure can act as a far-IR filter and can support low-loss surface plasmons for waveguide and sensing applications. 1D photonic crystals doped with bio-active metals (e.g. silver) have also been proposed as sensing devices for bacterial contaminants. Similar planar 1D photonic crystals made of polymers have been used to detect vapors of volatile organic compounds in the atmosphere. In addition to solid-phase photonic crystals, some liquid crystals with defined ordering can demonstrate photonic color. For example, studies have shown that several liquid crystals with short- or long-range one-dimensional positional ordering can form photonic structures. Two-dimensional photonic crystals In two dimensions, holes may be drilled in a substrate that is transparent to the wavelength of radiation that the bandgap is designed to block. Triangular and square lattices of holes have been successfully employed. The holey fiber or photonic crystal fiber can be made by taking cylindrical rods of glass in a hexagonal lattice and then heating and stretching them; the triangle-like air gaps between the glass rods become the holes that confine the modes. Three-dimensional photonic crystals There are several structure types that have been constructed: Spheres in a diamond lattice Yablonovite The woodpile structure – "rods" are repeatedly etched with beam lithography, filled in, and covered with a layer of new material. As the process repeats, the channels etched in each layer are perpendicular to the layer below, and parallel to and out of phase with the channels two layers below. The process repeats until the structure is of the desired height. The fill-in material is then dissolved using an agent that dissolves the fill-in material but not the deposition material. It is generally hard to introduce defects into this structure. Inverse opals or inverse colloidal crystals – spheres (such as polystyrene or silicon dioxide) can be allowed to deposit into a cubic close packed lattice suspended in a solvent. Then a hardener is introduced that makes a transparent solid out of the volume occupied by the solvent. The spheres are then dissolved with an acid such as hydrochloric acid. The colloids can be either spherical or nonspherical. One such demonstrated structure, a photonic-crystal beam splitter, contains in excess of 750,000 polymer nanorods; light focused on this beam splitter penetrates or is reflected, depending on polarization. Photonic crystal cavities Beyond the band gap, photonic crystals can exhibit another effect if their symmetry is partially broken by the creation of a nanosize cavity. This defect can guide or trap light, acting as a nanophotonic resonator, and it is characterized by the strong dielectric modulation in the photonic crystal. For a waveguide, the propagation of light depends on the in-plane control provided by the photonic band gap and on the long confinement of light induced by the dielectric mismatch. For a light trap, the light is strongly confined in the cavity, resulting in further interactions with the materials. First, if a pulse of light is put inside the cavity, it is delayed by nano- or picoseconds, in proportion to the quality factor of the cavity. Finally, if an emitter is placed inside the cavity, the emitted light can also be enhanced significantly, and the resonant coupling can even undergo Rabi oscillation. This is related to cavity quantum electrodynamics, and the interactions are defined by the weak and strong coupling of the emitter and the cavity.
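The statement above that the delay of a trapped pulse is proportional to the cavity's quality factor can be made concrete with a short calculation. The sketch below is a minimal illustration only: the Q value and wavelength are assumed for the example (a telecom-band cavity with Q of a million, echoing the figures quoted in the next paragraph), not taken from any specific device.

```python
import math

def photon_lifetime(q_factor: float, wavelength_m: float) -> float:
    """Photon storage time tau = Q / omega for a resonator at the given wavelength."""
    c = 2.998e8  # speed of light, m/s
    omega = 2 * math.pi * c / wavelength_m  # angular resonance frequency, rad/s
    return q_factor / omega

# Hypothetical telecom-band photonic crystal cavity: Q = 1e6 at 1550 nm.
tau = photon_lifetime(1e6, 1550e-9)
print(f"Photon lifetime: {tau * 1e9:.2f} ns")  # roughly 0.8 ns, i.e. nanosecond-scale delay
```

This reproduces the nano- to picosecond delays mentioned above: halving Q halves the storage time.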
The first studies of cavities in one-dimensional photonic slabs were usually in grating or distributed feedback structures. Two-dimensional photonic crystal cavities are useful for making efficient photonic devices in telecommunication applications, as they can provide very high quality factors, up to millions, with smaller-than-wavelength mode volumes. For three-dimensional photonic crystal cavities, several methods have been developed, including a lithographic layer-by-layer approach, surface ion beam lithography, and a micromanipulation technique. All of these photonic crystal cavities, which tightly confine light, offer very useful functionality for integrated photonic circuits, but it is challenging to produce them in a manner that allows them to be easily relocated. There is not yet full control over cavity creation, cavity location, and emitter position relative to the maximum field of the cavity, and studies to solve these problems are ongoing. A movable nanowire cavity in a photonic crystal is one solution for tailoring this light–matter interaction. Fabrication challenges Higher-dimensional photonic crystal fabrication faces two major challenges: Making them with enough precision to prevent scattering losses blurring the crystal properties Designing processes that can robustly mass-produce the crystals One promising fabrication method for two-dimensionally periodic photonic crystals is a photonic-crystal fiber, such as a holey fiber. Using fiber draw techniques developed for communications fiber, it meets these two requirements, and photonic crystal fibres are commercially available. Another promising method for developing two-dimensional photonic crystals is the so-called photonic crystal slab. These structures consist of a slab of material—such as silicon—that can be patterned using techniques from the semiconductor industry. Such chips offer the potential to combine photonic processing with electronic processing on a single chip. For three-dimensional photonic crystals, various techniques have been used—including photolithography and etching techniques similar to those used for integrated circuits. Some of these techniques are already commercially available. To avoid the complex machinery of nanotechnological methods, some alternate approaches involve growing photonic crystals from colloidal crystals as self-assembled structures. Mass-scale 3D photonic crystal films and fibres can now be produced using a shear-assembly technique that stacks 200–300 nm colloidal polymer spheres into perfect films with an fcc lattice. Because the particles have a softer transparent rubber coating, the films can be stretched and molded, tuning the photonic bandgaps and producing striking structural color effects. Computing photonic band structure The photonic band gap (PBG) is essentially the gap between the air-line and the dielectric-line in the dispersion relation of the PBG system. To design photonic crystal systems, it is essential to engineer the location and size of the bandgap by computational modeling using any of the following methods: Plane wave expansion method Inverse dispersion method Finite element method Finite difference time domain method Order-n spectral method KKR method Bloch wave – MoM method Essentially, these methods solve for the frequencies (normal modes) of the photonic crystal for each value of the propagation direction given by the wave vector, or vice versa.
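As a minimal illustration of what such solvers compute, the sketch below evaluates the standard transfer-matrix (Bloch) dispersion relation for an infinite 1D two-layer Bragg stack at normal incidence, rather than any of the production methods listed above. The layer parameters (permittivities 1 and 12.25, an 80% air-layer fraction) loosely echo the DBR example discussed in the following paragraph and are assumptions for illustration only.

```python
import math

def bloch_rhs(omega, eps1, eps2, d1, d2):
    """Right-hand side of the 1D Bragg-stack dispersion relation
    cos(K a) = cos(k1 d1) cos(k2 d2)
               - 0.5 (k1/k2 + k2/k1) sin(k1 d1) sin(k2 d2),
    at normal incidence; |rhs| <= 1 marks a propagating band,
    |rhs| > 1 marks a band gap (no real Bloch wave vector K)."""
    k1 = omega * math.sqrt(eps1)  # natural units: c = 1, lengths in units of a
    k2 = omega * math.sqrt(eps2)
    return (math.cos(k1 * d1) * math.cos(k2 * d2)
            - 0.5 * (k1 / k2 + k2 / k1) * math.sin(k1 * d1) * math.sin(k2 * d2))

# Illustrative stack: air (eps = 1) and a dielectric of eps = 12.25, period a = 1.
eps_air, eps_diel, a = 1.0, 12.25, 1.0
d_air = 0.8 * a          # assumed air-layer fraction of the period
d_diel = a - d_air
for i in range(1, 400):
    w = i * 0.005        # frequency omega in units of c/a
    if abs(bloch_rhs(w, eps_air, eps_diel, d_air, d_diel)) > 1.0:
        print(f"first band gap opens near omega a / c = {w:.3f}")
        break
```

Scanning the frequency axis and marking where the magnitude of the right-hand side exceeds 1 is exactly how the band edges of such a stack are located; a full solver additionally inverts the relation to plot K against omega for each band.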
The various lines in the band structure correspond to the different cases of n, the band index. For an introduction to photonic band structure, see the books by K. Sakoda and by Joannopoulos. The plane wave expansion method can be used to calculate the band structure using an eigenvalue formulation of Maxwell's equations, thus solving for the eigenfrequencies for each propagation direction of the wave vector. It directly solves for the dispersion diagram. Electric field strength values can also be calculated over the spatial domain of the problem using the eigenvectors of the same problem. A representative example is the band structure of a 1D distributed Bragg reflector (DBR) with an air core interleaved with a dielectric material of relative permittivity 12.25 and a lattice-period-to-air-core-thickness ratio (d/a) of 0.8, solved using 101 plane waves over the first irreducible Brillouin zone. The inverse dispersion method also exploits plane wave expansion but formulates Maxwell's equations as an eigenproblem for the wave vector $k$, while the frequency $\omega$ is treated as a parameter. Thus, it solves for the dispersion relation $k(\omega)$ instead of $\omega(k)$, which the plane wave method does. The inverse dispersion method makes it possible to find complex values of the wave vector, e.g. in the bandgap, which allows one to distinguish photonic crystals from metamaterials. In addition, the method readily allows the frequency dispersion of the permittivity to be taken into account. To speed calculation of the frequency band structure, the Reduced Bloch Mode Expansion (RBME) method can be used. The RBME method applies "on top" of any of the primary expansion methods mentioned above. For large unit cell models, the RBME method can reduce the time for computing the band structure by up to two orders of magnitude. Applications Photonic crystals are attractive optical materials for controlling and manipulating light flow. One dimensional photonic crystals are already in widespread use, in the form of thin-film optics, with applications from low and high reflection coatings on lenses and mirrors to colour changing paints and inks. Higher-dimensional photonic crystals are of great interest for both fundamental and applied research, and the two dimensional ones are beginning to find commercial applications. The first commercial products involving two-dimensionally periodic photonic crystals are already available in the form of photonic-crystal fibers, which use a microscale structure to confine light with radically different characteristics compared to conventional optical fiber for applications in nonlinear devices and guiding exotic wavelengths. The three-dimensional counterparts are still far from commercialization but may offer additional features such as optical nonlinearity required for the operation of optical transistors used in optical computers, when some technological aspects such as manufacturability and principal difficulties such as disorder are under control. SWG photonic crystal waveguides have facilitated new integrated photonic devices for controlling transmission of light signals in photonic integrated circuits, including fibre-chip couplers, waveguide crossovers, wavelength and mode multiplexers, ultra-fast optical switches, athermal waveguides, biochemical sensors, polarization management circuits, broadband interference couplers, planar waveguide lenses, anisotropic waveguides, nanoantennas and optical phased arrays.
SWG nanophotonic couplers permit highly efficient and polarization-independent coupling between photonic chips and external devices. They have been adopted for fibre-chip coupling in volume optoelectronic chip manufacturing. These coupling interfaces are particularly important because every photonic chip needs to be optically connected with the external world and the chips themselves appear in many established and emerging applications, such as 5G networks, data center interconnects, chip-to-chip interconnects, metro- and long-haul telecommunication systems, and automotive navigation. In addition to the foregoing, photonic crystals have been proposed as platforms for the development of solar cells and optical sensors, including chemical sensors and biosensors. See also References External links Business report on Photonic Crystals in Metamaterials Photonic crystals tutorials by Prof S. Johnson at MIT Photonic crystals: an introduction Invisibility cloak created in 3-D; Photonic crystals (BBC) Condensed matter physics Metamaterials Photonics
Photonic crystal
Physics,Chemistry,Materials_science,Engineering
5,071
6,336,270
https://en.wikipedia.org/wiki/European%20Astronaut%20Corps
The European Astronaut Corps is a unit of the European Space Agency (ESA) that selects, trains, and provides astronauts as crew members on U.S. and Russian space missions. The corps has 13 active members, able to serve on the International Space Station (ISS). The European Astronaut Corps is based at the European Astronaut Centre in Cologne, Germany. They can be assigned to various projects either in Europe (at ESTEC, for instance) or elsewhere in the world, at NASA Johnson Space Center or Star City. History Current members As of 2024 there are eleven active members of the European Astronaut Corps. Five were selected in 2009, one was selected in 2015, and the remaining five were selected in 2022. All of the current members of the corps, other than the 2022 ESA Group, have flown to space and have visited the ISS. French astronaut Thomas Pesquet is the member of the corps who has accumulated the most time in space, with 396 days, 11 hours and 34 minutes. He holds the record among all European astronauts in history. The corps currently includes one woman, Samantha Cristoforetti, who formerly held the record for the longest spaceflight by a woman. Timothy Peake, a member of the 2009 group, retired in 2023. 2009 Group On 3 April 2008, ESA director general Jean-Jacques Dordain announced that recruiting for a new class of European astronauts would start in the near future. The selection program for 4 new astronauts was launched on 19 May 2008, with applications due by 16 June 2008, so that the final selection would be made in spring 2009. Almost 10,000 people registered as astronaut candidates as of 18 June 2008. 8,413 fulfilled the initial application criteria. From these, 918 were chosen to take part in the first stage of psychological testing, which narrowed the field to 192 candidates on 24 September 2008. After two stages of psychological testing, 80 candidates continued on to medical evaluation in January–February 2009. About 40 candidates then proceeded to formal interviews to select four new members of the European Astronaut Corps. 2022 Group Recruitment for the 2022 ESA Astronaut Group took place over 2021–22 and added five "career" astronauts as well as, for the first time, a "reserve pool" of 11 astronaut candidates and a person with a physical disability through the "parastronaut feasibility project". In June 2023, Marcus Wandt, originally a reserve astronaut, was selected for an Axiom Space mission and transitioned to "project" astronaut. The same arrangement was later put in place for Polish reserve astronaut Sławosz Uznański. Funding of the International Space Station by NASA and Russia is currently planned to end in 2030. Through its involvement with NASA's Orion programme, ESA will receive three flight opportunities for European astronauts to the Lunar Gateway. Former members There are 18 former members of the ESA astronaut corps. Some ESA astronauts were selected by other European agencies and then enrolled into the European Astronaut Corps in 1998. European astronauts outside of ESA Interkosmos Ten Europeans became astronauts within the Soviet Union's Interkosmos program, which allowed citizens of allied nations to fly missions to the Salyut 6, Salyut 7 and Mir space stations. Aleksandr Panayotov Aleksandrov Jean-Loup Chrétien Bertalan Farkas Mirosław Hermaszewski Georgi Ivanov Sigmund Jähn Dumitru Prunariu Vladimír Remek Helen Sharman Franz Viehböck Space Shuttle NASA trained and flew astronauts from allied nations on the Space Shuttle, especially as payload specialists for scientific missions such as Spacelab.
Prior to the foundation of the ESA astronaut corps, both the French CNES and the German DLR had selected their own rosters of astronauts, notably in preparation for the introduction of the ISS. The following people flew on various Shuttle missions. Patrick Baudry Jean-Jacques Favier Dirk Frimout Reinhard Furrer Leonid Kadeniuk Franco Malerba Ulrich Walter Russian Mir missions The following people flew on missions to Mir under agreements between their nations and Russia. Ivan Bella Klaus-Dietrich Flade Space Shuttle missions Astronauts from the European Astronaut Corps participated in several NASA Space Shuttle missions before the ISS era, in particular as Spacelab payload specialists. NASA considered the full-time ESA astronauts as payload specialists, but offered some the opportunity to train with its own astronauts and become NASA mission specialists. (This list excludes missions to Mir or the ISS) As Payload Specialists Ulf Merbold – STS-9 (Spacelab), STS-42 (Spacelab) Reinhard Furrer – STS-61-A (Spacelab-D1 Mission) Wubbo Ockels – STS-61-A (Spacelab-D1 Mission) Hans Schlegel – STS-55 (Spacelab-D2 Mission) Ulrich Walter – STS-55 (Spacelab-D2 Mission) As Mission Specialists Claude Nicollier – STS-46, STS-61 (Hubble Space Telescope), STS-75, STS-103 (Hubble) Maurizio Cheli – STS-75 Jean-François Clervoy – STS-66, STS-103 (Hubble) Gerhard Thiele – STS-99 Pedro Duque – STS-95 Missions to the Mir space station Astronauts from Europe have flown to Mir either on board Soyuz vehicles (as part of the Euromir programme) or on board the Space Shuttle. Jean-Loup Chrétien – Aragatz (1988) Helen Sharman – Project Juno (1991) Franz Viehböck – Austromir '91 (1991) Klaus-Dietrich Flade – Mir '92 (1992) Michel Tognini – Antarès (1992) Jean-Pierre Haigneré – Altair (1993) Ulf Merbold – Euromir '94 (1994) Thomas Reiter – Euromir '95 (1995) Claudie Haigneré – Cassiopée (1996) Reinhold Ewald – Mir '97 (1997) Jean-Loup Chrétien – STS-86 (1997) Léopold Eyharts – Pégase (1998) Jean-Pierre Haigneré – Perseus (1999) Ivan Bella – Stefanik (1999) Missions to the International Space Station European astronauts to have visited the ISS are: Future missions to the International Space Station Future European astronauts to the ISS are: See also Other astronaut corps: Canadian Astronaut Corps NASA Astronaut Corps (United States) JAXA Astronaut Corps (Japan) Roscosmos Cosmonaut Corps (Russia) People's Liberation Army Astronaut Corps (China) List of astronauts by selection Human spaceflight History of spaceflight European contribution to the International Space Station References External links The European Astronaut Corps European Space Agency Lists of astronauts European astronauts Human spaceflight programs
European Astronaut Corps
Engineering
1,371
53,772,710
https://en.wikipedia.org/wiki/Karl%20Henrik%20Johansson
Karl Henrik Johansson (born 1967 in Växjö, Sweden) is a Swedish researcher, best known for his pioneering contributions to networked control systems, cyber-physical systems, and hybrid systems. His research has had particular application impact in transportation, automation, and energy networks. He holds a Chaired Professorship in Networked Control at the KTH Royal Institute of Technology in Stockholm, Sweden. He is Director of KTH Digital Futures. Career Karl H. Johansson graduated from Lund University in Sweden with an MSc in 1992 and a PhD in 1997. He did a postdoc at UC Berkeley in 1998–2000 and has since held the positions of Assistant, Associate, and Full Professor at the Department of Automatic Control at KTH Royal Institute of Technology. He directed the ACCESS Linnaeus Centre (2009–2016) and the Strategic Research Area ICT TNG (2013–2020), two of the largest research environments in electrical engineering and computer science in Sweden. He has held visiting positions at UC Berkeley, California Institute of Technology, Nanyang Technological University, Hong Kong University of Science and Technology Institute of Advanced Studies, Norwegian University of Science and Technology, and Zhejiang University. He is the IEEE Control Systems Society Vice President of Diversity, Outreach & Development, an IFAC Council Member, and a past president of the European Control Association. He has served on the Swedish Research Council's Scientific Council for Natural Sciences and Engineering Sciences, the IEEE Control Systems Society Board of Governors, and the IFAC Executive Board. He is past Chair of the IFAC Technical Committee on Networked Systems. He has been on the Editorial Boards of Automatica, IEEE Transactions on Automatic Control, IEEE Transactions on Control of Network Systems, IET Control Theory and Applications, Annual Review of Control, Robotics, and Autonomous Systems, European Journal of Control, and ACM Transactions on Internet of Things, and currently serves on the Editorial Board of ACM Transactions on Cyber-Physical Systems. He was the General Chair of the ACM/IEEE Cyber-Physical Systems Week 2010 and IPC Chair of many conferences. His research focuses on networked control systems, cyber-physical systems, and applications in transportation, energy, and automation networks; areas in which he has co-authored more than 900 journal and conference papers. He has advised more than 60 postdocs and 34 PhD students. Honours Karl H. Johansson has received several best paper awards and other distinctions from IEEE, IFAC, and ACM. He is the recipient of the 2024 IEEE Control Systems Society Hendrik W. Bode Lecture Prize. He received the IFAC Outstanding Service Award 2023. In 2017 he was awarded a Distinguished Professor grant by the Swedish Research Council, and in 2009 he was awarded Wallenberg Scholar, as one of the first ten scholars from all sciences, by the Knut and Alice Wallenberg Foundation. He was awarded Future Research Leader from the Swedish Foundation for Strategic Research in 2005. He received the triennial Young Author Prize from IFAC in 1996 and the Peccei Award from IIASA, Austria, in 1993. He was granted Young Researcher Awards from Scania AB in 1996 and from Ericsson in 1998 and 1999. He is a Fellow of the IEEE and the Royal Swedish Academy of Engineering Sciences, and he is a Distinguished Lecturer with the IEEE Control Systems Society. Works D. V. Dimarogonas, E. Frazzoli, K. H.
Johansson, "Distributed event-triggered control for multi-agent systems," IEEE Transactions on Automatic Control, vol. 57, no. 5, pp. 1291-1297, May 2012. doi: 10.1109/TAC.2011.2174666 K. H. Johansson, "The quadruple-tank process: a multivariable laboratory process with an adjustable zero," in IEEE Transactions on Control Systems Technology, vol. 8, no. 3, pp. 456-465, May 2000. doi: 10.1109/87.845876 J. Lygeros, K. H. Johansson, S. N. Simic, J. Zhang, S. S. Sastry, "Dynamical properties of hybrid automata," in IEEE Transactions on Automatic Control, vol. 48, no. 1, pp. 2-17, Jan. 2003. doi: 10.1109/TAC.2002.806650 A. Teixeira, I. Shames, H. Sandberg, K. H. Johansson, "A secure control framework for resource-limited adversaries," Automatica, Volume 51, 2015, Pages 135-148, ISSN 0005-1098, doi.org/10.1016/j.automatica.2014.10.067 B. Besselink, V. Turri, S. H. van de Hoef, K.-Y. Liang, A. Alam, J. Mårtensson, K. H. Johansson, "Cyber–physical control of road freight transport," in Proceedings of the IEEE, vol. 104, no. 5, pp. 1128-1141, May 2016. doi: 10.1109/JPROC.2015.2511446 A. Keimer, N. Laurent-Brouty, F. Farokhi, H. Signargout, V. Cvetkovic, A. M. Bayen, K. H. Johansson, "Information patterns in the modeling and design of mobility management services," in Proceedings of the IEEE, vol. 106, no. 4, pp. 554-576, April 2018. doi: 10.1109/JPROC.2018.2800001 References External links Personal Website Google Scholar KAW article on energy savings through connected devices and vehicles KAW article on theory of the interconnected society KAW article on coming to terms with self-driving vehicles (in Swedish) Living people 1967 births Fellows of the IEEE Swedish electrical engineers Lund University alumni Control theorists
Karl Henrik Johansson
Engineering
1,207
952,908
https://en.wikipedia.org/wiki/Tooltip
The tooltip, also known as infotip or hint, is a common graphical user interface (GUI) element in which, when hovering over a screen element or component, a text box displays information about that element, such as a description of a button's function, what an abbreviation stands for, or the exact absolute time stamp over a relative time ("… ago"). In common practice, the tooltip is displayed continuously as long as the user hovers over the element or the text box provided by the tool. It is sometimes possible for the mouse to hover within the text box provided to activate a nested tooltip, and this can continue to any depth, often with multiple text boxes overlapped. On desktop, it is used in conjunction with a cursor, usually a pointer, whereby the tooltip appears when a user hovers the pointer over an item without clicking it. On touch-screen devices, a tooltip is displayed upon long-pressing—i.e., tapping and holding—an element. Some smartphones have alternative input methods such as a stylus, which can show tooltips when hovering above the screen. A common variant of tooltips, especially in older software, is displaying a description of the tool in a status bar. Microsoft's tooltips feature found in its end-user documentation is named ScreenTips. Apple's tooltips feature found in its developer documentation is named help tags. The Classic Mac OS used a tooltips feature, though in a slightly different way, known as balloon help. Some software and applications, such as GIMP, provide an option for users to turn off some or all tooltips. However, such options are left to the discretion of the developer, and are often not implemented. Origin The term tooltip originally came from older Microsoft applications (e.g. Microsoft Word 95). These applications had toolbars which, when the mouse moved over the toolbar icons, displayed a short description of the function of the tool in the toolbar. More recently, these tooltips are used in various parts of an interface, not only on toolbars. Examples CSS, HTML, and JavaScript, as well as other coding systems, allow web designers to create customized tooltips. Demonstrations of tooltip usage are prevalent on web pages. Many graphical web browsers display the title attribute of an HTML element as a tooltip when a user hovers the pointer over that element; in such a browser, when hovering over Wikipedia images and hyperlinks, a tooltip will appear. See also Mouseover Hoverbox Infobox Dialog box Status bar References Graphical user interface elements User interface techniques
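The hover behaviour described above (show after a short delay on pointer entry, hide on pointer exit) translates directly to desktop GUI toolkits. Below is a minimal, illustrative Python/Tkinter sketch of that behaviour; the class name, widget labels, and 500 ms delay are arbitrary choices for the example, not part of any cited implementation.

```python
import tkinter as tk

class Tooltip:
    """Minimal hover tooltip: show a borderless text box near the widget
    after a short delay, and hide it when the pointer leaves the widget."""
    def __init__(self, widget, text, delay_ms=500):
        self.widget, self.text, self.delay_ms = widget, text, delay_ms
        self.tip = None
        self.after_id = None
        widget.bind("<Enter>", self._schedule)
        widget.bind("<Leave>", self._hide)

    def _schedule(self, _event):
        # Delay display so brief pointer passes do not flash a tooltip.
        self.after_id = self.widget.after(self.delay_ms, self._show)

    def _show(self):
        x = self.widget.winfo_rootx() + 20
        y = self.widget.winfo_rooty() + self.widget.winfo_height() + 4
        self.tip = tk.Toplevel(self.widget)
        self.tip.wm_overrideredirect(True)   # no title bar or window border
        self.tip.wm_geometry(f"+{x}+{y}")
        tk.Label(self.tip, text=self.text, background="#ffffe0",
                 relief="solid", borderwidth=1).pack()

    def _hide(self, _event):
        if self.after_id:
            self.widget.after_cancel(self.after_id)
            self.after_id = None
        if self.tip:
            self.tip.destroy()
            self.tip = None

root = tk.Tk()
button = tk.Button(root, text="Save")
button.pack(padx=40, pady=40)
Tooltip(button, "Save the current document")  # shown on hover, like HTML's title attribute
root.mainloop()
```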
Tooltip
Technology
548
7,393,754
https://en.wikipedia.org/wiki/Advanced%20CCD%20Imaging%20Spectrometer
The Advanced CCD Imaging Spectrometer (ACIS), formerly the AXAF CCD Imaging Spectrometer, is an instrument built by a team from the Massachusetts Institute of Technology's Center for Space Research and the Pennsylvania State University for the Chandra X-ray Observatory. ACIS is a focal plane instrument that uses an array of charge-coupled devices. It serves as an X-ray integral field spectrograph for Chandra. The instrument is capable of measuring both the position and energy of incoming X-rays. The CCD sensors of ACIS and its filters are each operated at fixed low temperatures. It carries a special heater that allows contamination from Chandra to be baked off; the spacecraft contains lubricants, and the ACIS design took this into account in order to clean its sensors. Contamination buildup can reduce the instrument's sensitivity. Radiation in space is another potential danger to the sensor. As of 2014, after 15 years of operation, there was no indication of a limit to the lifetime of ACIS. Another design feature of the instrument is a calibration source that can be used to assess its health. This allows for a measurement of the level of contamination, if present, as well as any degree of charge transfer inefficiency. References External links ACIS website by the Massachusetts Institute of Technology ACIS website by Pennsylvania State University The Chandra Proposers' Observatory Guide by the Smithsonian Astrophysical Observatory Chandra X-ray Observatory Space telescope sensors
Advanced CCD Imaging Spectrometer
Astronomy
292
3,002,552
https://en.wikipedia.org/wiki/Torelli%20theorem
In mathematics, the Torelli theorem, named after Ruggiero Torelli, is a classical result of algebraic geometry over the complex number field, stating that a non-singular projective algebraic curve (compact Riemann surface) C is determined by its Jacobian variety J(C), when the latter is given in the form of a principally polarized abelian variety. In other words, the complex torus J(C), with certain 'markings', is enough to recover C. The same statement holds over any algebraically closed field. From more precise information on the constructed isomorphism of the curves it follows that if the canonically principally polarized Jacobian varieties of curves of genus are k-isomorphic for k any perfect field, so are the curves. This result has had many important extensions. It can be recast to read that a certain natural morphism, the period mapping, from the moduli space of curves of a fixed genus, to a moduli space of abelian varieties, is injective (on geometric points). Generalizations are in two directions. Firstly, to geometric questions about that morphism, for example the local Torelli theorem. Secondly, to other period mappings. A case that has been investigated deeply is for K3 surfaces (by Viktor S. Kulikov, Ilya Pyatetskii-Shapiro, Igor Shafarevich and Fedor Bogomolov) and hyperkähler manifolds (by Misha Verbitsky, Eyal Markman and Daniel Huybrechts). Notes References Algebraic curves Abelian varieties Moduli theory Theorems in complex geometry Theorems in algebraic geometry
Torelli theorem
Mathematics
339
12,185,751
https://en.wikipedia.org/wiki/International%20Max%20Planck%20Research%20School%20for%20Molecular%20and%20Cellular%20Biology
The International Max Planck Research School for Molecular and Cellular Biology (IMPRS-MCB) is an international PhD program in molecular biology and cellular biology founded in 2006 by the Max Planck Institute of Immunobiology and Epigenetics and the University of Freiburg. In 2000, the Max Planck Society (MPG) started an initiative to attract more international students to Germany to pursue their PhD studies. To this end, International Max Planck Research Schools (IMPRS) were established. The number of IMPRS has been increasing ever since, across all three research sections of the MPG. External links International Max Planck Research School for Molecular and Cellular Biology Molecular Biology and Cellular Biology Molecular biology Cell biology
International Max Planck Research School for Molecular and Cellular Biology
Chemistry,Biology
142
7,704,409
https://en.wikipedia.org/wiki/Swan%20Reach%2C%20South%20Australia
Swan Reach is a river port in South Australia, 127 km north-east of Adelaide on the Murray River between Blanchetown and Mannum. It is on the left bank of the river. The Swan Reach Ferry is a cable ferry crossing operated by the state government as part of the state's road network. Swan Reach, as with all parts of the river below Lock #1, is also one of the lowest parts of the river. In 2009–2010 the river was about 1.5 metres below its normal level. In late 2022 and early 2023 the town experienced a flood with river levels not seen since the devastating flood of 1956. Homes and businesses on Victoria Street were inundated, along with most of the holiday homes at Marks Landing. Widespread damage was caused. At the most recent census, Swan Reach had a population of 283. History Swan Reach was first settled in the 1850s and was originally the largest of five sheep and cattle stations in the area. It soon became one of the first riverboat ports in South Australia and was a loading port for grain and wool. Swan Reach Mission was established by the United Aborigines Mission (UAM) in 1926 to provide a Christian education to Aboriginal children. It was closed in 1946 due to frequent flooding of the area, and the UAM opened the Gerard Mission near Loxton. Some residents were transferred to the new mission, but some, including the parents of singer-songwriter Ruby Hunter, moved elsewhere for work. Children of the mission became part of the Stolen Generation, later provided with some compensation through the National Redress Scheme. Around the town Swan Reach has an area school, hotel and bottle shop, general store and post office, an op shop that opens Mondays to Fridays and Saturday mornings, and a fast food take-away shop near the ferry. The tourist boat Proud Mary and paddle-wheeler PS Murray Princess stop at the town once a week. There is a Lutheran church, with regular services, and a Lutheran pastor in residence. Anglican and Roman Catholic services are held monthly. Tourism, agriculture and irrigated horticulture are the main industries, and there is a large almond processing plant 1.5 km from town on the Stott Highway. River Murray International Dark Sky Reserve The Swan Reach Conservation Park lies in an area which was named the nation's first, and the world's 15th, International Dark Sky Reserve in October 2019, by the International Dark-Sky Association. The "dark sky" title refers to areas where the night sky has a high darkness rating and there are policy controls to ensure light pollution is kept to a minimum, with reserve status only given when both public and private residential land is included. A multi-million-dollar joint project between Silentium Defence and the Western Sydney University to build a space domain awareness observatory to monitor satellites and other objects orbiting the Earth was announced in June 2020. The Murray Mallee location and terrain of the land were considered ideal for the purpose. The Oculus passive radar observatory opened in December 2021. The reserve's official name is the River Murray International Dark Sky Reserve. See also List of crossings of the Murray River Notes and references External links Township of Swan Reach South Australia - swanreach.sa.au Swan Reach Area School Swan Reach Lutheran Parish Towns in South Australia Tourist attractions in South Australia Populated places on the Murray River International Dark Sky Reserves
Swan Reach, South Australia
Astronomy
675
9,236,652
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Tate%20height
In number theory, the Néron–Tate height (or canonical height) is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field. It is named after André Néron and John Tate. Definition and properties Néron defined the Néron–Tate height as a sum of local heights. Although the global Néron–Tate height is quadratic, the constituent local heights are not quite quadratic. Tate (unpublished) defined it globally by observing that the logarithmic height $h_L$ associated to a symmetric invertible sheaf $L$ on an abelian variety $A$ is "almost quadratic," and used this to show that the limit $\hat h_L(P) = \lim_{N\to\infty} h_L(NP)/N^2$ exists, defines a quadratic form on the Mordell–Weil group of rational points, and satisfies $\hat h_L(P) = h_L(P) + O(1)$, where the implied constant is independent of $P$. If $L$ is anti-symmetric, that is $[-1]^*L \cong L^{-1}$, then the analogous limit $\hat h_L(P) = \lim_{N\to\infty} h_L(NP)/N$ converges and satisfies $\hat h_L(P) = h_L(P) + O(1)$, but in this case $\hat h_L$ is a linear function on the Mordell–Weil group. For general invertible sheaves, one writes $L^{\otimes 2} = (L \otimes [-1]^*L) \otimes (L \otimes ([-1]^*L)^{-1})$ as a product of a symmetric sheaf and an anti-symmetric sheaf, and then $\hat h_L(P) = \tfrac12 \hat h_{L\otimes[-1]^*L}(P) + \tfrac12 \hat h_{L\otimes([-1]^*L)^{-1}}(P)$ is the unique quadratic function satisfying $\hat h_L(P) = h_L(P) + O(1)$. The Néron–Tate height depends on the choice of an invertible sheaf on the abelian variety, although the associated bilinear form depends only on the image of $L$ in the Néron–Severi group of $A$. If the abelian variety $A$ is defined over a number field K and the invertible sheaf is symmetric and ample, then the Néron–Tate height is positive definite in the sense that it vanishes only on torsion elements of the Mordell–Weil group $A(K)$. More generally, $\hat h_L$ induces a positive definite quadratic form on the real vector space $A(K) \otimes \mathbb{R}$. On an elliptic curve, the Néron–Severi group is of rank one and has a unique ample generator, so this generator is often used to define the Néron–Tate height, which is denoted $\hat h$ without reference to a particular line bundle. (However, the height that naturally appears in the statement of the Birch and Swinnerton-Dyer conjecture is twice this height.) On abelian varieties of higher dimension, there need not be a particular choice of smallest ample line bundle to be used in defining the Néron–Tate height, and the height used in the statement of the Birch–Swinnerton-Dyer conjecture is the Néron–Tate height associated to the Poincaré line bundle on $A \times \hat{A}$, the product of $A$ with its dual. The elliptic and abelian regulators The bilinear form associated to the canonical height $\hat h$ on an elliptic curve E is $\langle P, Q \rangle = \tfrac12 \bigl( \hat h(P+Q) - \hat h(P) - \hat h(Q) \bigr)$. The elliptic regulator of E/K is $\operatorname{Reg}(E/K) = \det \bigl( \langle P_i, P_j \rangle \bigr)_{1 \le i,j \le r}$, where P1,...,Pr is a basis for the Mordell–Weil group E(K) modulo torsion (cf. Gram determinant). The elliptic regulator does not depend on the choice of basis. More generally, let A/K be an abelian variety, let B ≅ Pic0(A) be the dual abelian variety to A, and let P be the Poincaré line bundle on A × B. Then the abelian regulator of A/K is defined by choosing a basis Q1,...,Qr for the Mordell–Weil group A(K) modulo torsion and a basis η1,...,ηr for the Mordell–Weil group B(K) modulo torsion and setting $\operatorname{Reg}(A/K) = \det \bigl( \langle Q_i, \eta_j \rangle_{\mathcal P} \bigr)_{1 \le i,j \le r}$. (The definitions of elliptic and abelian regulator are not entirely consistent, since if A is an elliptic curve, then the latter is 2r times the former.) The elliptic and abelian regulators appear in the Birch–Swinnerton-Dyer conjecture. Lower bounds for the Néron–Tate height There are two fundamental conjectures that give lower bounds for the Néron–Tate height. In the first, the field K is fixed and the elliptic curve E/K and point P ∈ E(K) vary, while in the second, the elliptic Lehmer conjecture, the curve E/K is fixed while the field of definition of the point P varies.
(Lang) $\hat h(P) \ge c(K)\, h(E)$ for all elliptic curves $E/K$ and all nontorsion $P \in E(K)$, where $h(E)$ is a height of the elliptic curve, such as its Faltings height. (Lehmer) $\hat h(P) \ge c(E/K)/D$ for all nontorsion $P \in E(\bar K)$, where $D = [K(P):K]$. In both conjectures, the constants are positive and depend only on the indicated quantities. (A stronger form of Lang's conjecture asserts that $c$ depends only on the degree $[K:\mathbb{Q}]$.) It is known that the abc conjecture implies Lang's conjecture, and that the analogue of Lang's conjecture over one dimensional characteristic 0 function fields is unconditionally true. The best general result on Lehmer's conjecture is the weaker estimate $\hat h(P) \ge c(E/K)/D^{3+\varepsilon}$ due to Masser. When the elliptic curve has complex multiplication, this has been improved to $\hat h(P) \ge c(E/K)/D^{1+\varepsilon}$ by Laurent. There are analogous conjectures for abelian varieties, with the nontorsion condition replaced by the condition that the multiples of $P$ form a Zariski dense subset of $A$, and the lower bound in Lang's conjecture replaced by $\hat h(P) \ge c(K)\, h(A)$, where $h(A)$ is the Faltings height of $A$. Generalizations A polarized algebraic dynamical system is a triple $(X, f, L)$ consisting of a (smooth projective) algebraic variety $X$, an endomorphism $f : X \to X$, and a line bundle $L$ with the property that $f^*L \cong L^{\otimes d}$ for some integer $d > 1$. The associated canonical height is given by the Tate limit $\hat h(P) = \lim_{n\to\infty} h_L(f^{(n)}(P))/d^n$, where $f^{(n)}$ is the n-fold iteration of $f$. For example, any morphism $f : \mathbb{P}^N \to \mathbb{P}^N$ of degree $d > 1$ yields a canonical height associated to the line bundle relation $f^*\mathcal{O}(1) \cong \mathcal{O}(d)$. If $X$ is defined over a number field and $L$ is ample, then the canonical height is non-negative, and $\hat h(P) = 0$ if and only if $P$ is preperiodic. ($P$ is preperiodic if its forward orbit $P, f(P), f^2(P), \ldots$ contains only finitely many distinct points.) References General references for the theory of canonical heights J. H. Silverman, The Arithmetic of Elliptic Curves, Springer-Verlag, 1986. External links Number theory Algebraic geometry Abc conjecture
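As a worked illustration of the Tate limit in the dynamical setting above (an added sketch, not part of the cited article), consider the simplest polarized dynamical system, the d-th power map on the projective line; under the standard normalizations the canonical height reduces to the ordinary Weil height.

```latex
% Worked example: canonical height of the power map on P^1.
% Take X = P^1, f([x:y]) = [x^d : y^d] with d >= 2, and L = O(1),
% so that f^*L \cong L^{\otimes d}.
% For P = [a : b] with coprime integers a, b, the Weil height is
% h(P) = \log \max(|a|, |b|), and since gcd(a^d, b^d) = 1,
%   h(f(P)) = h([a^d : b^d]) = d \, h(P).
% Hence the Tate limit telescopes:
\hat h(P) \;=\; \lim_{n \to \infty} \frac{h\bigl(f^{(n)}(P)\bigr)}{d^{\,n}}
         \;=\; \lim_{n \to \infty} \frac{d^{\,n}\, h(P)}{d^{\,n}} \;=\; h(P).
% In particular \hat h(P) = 0 exactly when \max(|a|,|b|) = 1, i.e. for
% P \in \{[0:1], [1:0], [\pm 1 : 1]\}, which are precisely the rational
% preperiodic points of f, as the general theory predicts.
```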
Néron–Tate height
Mathematics
1,173
64,341,482
https://en.wikipedia.org/wiki/American%20Artist%20%28artist%29
American Artist (born 1989) is a contemporary artist working in new media, video, installation and writing. They legally changed their name to American Artist in 2013, in order to re-contextualize the definition of the term "American artist"—at once taking on the name of an anonymous term while becoming the embodiment of its meaning. Their work, in Artist's words, focuses on themes surrounding "blackness, being, and resistance in the context of networked virtual life." Career In a 2022 return to Los Angeles, Artist presented newly commissioned work “Shaper of God” in a solo show at the Roy & Edna Disney CalArts Theatre (REDCAT) to great acclaim. In 2020, Artist had a solo exhibition at the Queens Museum entitled “My Blue Window,” which included a multi-media installation and app download that enabled viewers to download information on surveillance and predictive policing. Artist's 2019 solo exhibition "I’m Blue (If I Was █████ I Would Die)" at Koenig & Clinton, New York, transformed the gallery space into a seminar room for six police cadets as a way to simultaneously explore the Blue Lives Matter movement and how this is at odds with black and brown lives. Their work has a history of addressing police brutality and activism, such as their 2016 piece "Sandy Speaks," in which Artist created a chatbot that imagined Sandra Bland had a means to speak from behind bars, thereby fulfilling her wish posthumously to educate black youth on ways to interact with law enforcement. Artist's work has been exhibited at the Queens Museum, New York; the Studio Museum in Harlem, New York; the Museum of Contemporary Art, Chicago; Koenig & Clinton, New York; HOUSING, New York; and The 8th Floor, New York. They have participated in group exhibitions including Marking Time: Art in the Age of Mass Incarceration, MoMA PS1, Queens, NY (2020); Parallels and Peripheries, Museum of Contemporary Art Detroit, MI (2019); ICONICITY, Paul W. Zuccaire Gallery, Stony Brook University, NY (2019); A Wild Ass Beyond: ApocalypseRN, Performance Space New York, NY (2018) (a project in collaboration with artists Sondra Perry, Caitlin Cherry and Nora Khan); Geographies of Imagination, SAVVY Contemporary, Berlin, Germany (2018); I Was Raised on the Internet, Museum of Contemporary Art Chicago, IL (2018); Screenscapes, Postmasters, New York, NY (2018); Lack of Location is My Location, Koenig & Clinton, Brooklyn, NY (2017); and Off Pink, The Kitchen, New York, NY (2015). Artist was named one of the "30 Young Artists to Watch in 2019" by Cultured Mag Education Artist received a B.F.A. in Graphic Design from California State Polytechnic University, Pomona in 2011. Then an M.F.A. in Fine Arts from Parsons School of Design in 2015. In 2017, they participated in the Independent Study Program at the Whitney Museum of American Art. References 1989 births American new media artists Living people 21st-century American artists Multimedia artists People from Altadena, California Artists from California
American Artist (artist)
Technology
662
36,971,117
https://en.wikipedia.org/wiki/Hyperloop
Hyperloop is a proposed high-speed transportation system for both passengers and freight. The concept behind the Hyperloop originated in the late 17th century with the invention of the world's first artificial vacuum, which led to designs for underground rapid transit systems powered by pneumatics in the decades that followed. In 1799, inventor George Medhurst proposed the idea of moving goods through cast-iron pipes using air pressure, and in 1844 a railway station (for passenger carriages) that relied on pneumatics was built in London and operated until 1847. In 2013, entrepreneur Elon Musk published a white paper in which the hyperloop was described as a transportation system using capsules supported by an air-bearing surface within a low-pressure tube. Hyperloop systems have three essential elements: tubes, pods, and terminals. The tube is a large, sealed low-pressure system (typically a long tunnel). The pod is a coach at atmospheric pressure that experiences low air resistance or friction inside the tube using magnetic propulsion (in the initial design, augmented by a ducted fan). The terminal handles pod arrivals and departures. The hyperloop, in the form proposed by Musk, differs from traditional vactrains by relying on residual air pressure inside the tube to provide lift from aerofoils and propulsion by fans; however, many subsequent variants using the name "hyperloop" have remained relatively close to the core principles of vactrains. Hyperloop was teased by Elon Musk at a 2012 speaking event, and described as a "fifth mode of transport". Musk released details of an alpha-version in a white paper on 22 August 2013, in which the hyperloop design incorporated reduced-pressure tubes with pressurized capsules riding on air bearings driven by linear induction motors and axial compressors. The white paper showed an example hyperloop route running from the Los Angeles region to the San Francisco Bay Area, roughly following the Interstate 5 corridor. Some transportation analysts challenged the cost estimates in the white paper, with some predicting that a hyperloop would run several billion dollars higher. The hyperloop concept has been promoted by Musk and SpaceX, and other companies or organizations were encouraged to collaborate in developing the technology. A Technical University of Munich hyperloop set a speed record in July 2019 at the pod design competition hosted by SpaceX in Hawthorne, California. Virgin Hyperloop conducted the first human trial in November 2020 at its test site in Las Vegas. Swisspod Technologies unveiled a 1:12 scale testing facility in a circular shape to simulate an "infinite" hyperloop trajectory in July 2021 on the EPFL campus at Lausanne, Switzerland. In 2023, a new European effort to standardize "hyperloop systems" released a draft standard. Hyperloop One, one of the best-known and best-funded players in the hyperloop space, declared bankruptcy and ceased operations on 31 December 2023. Other companies continue to pursue hyperloop technology development. History Musk first mentioned that he was thinking about a concept for a "fifth mode of transport", calling it the Hyperloop, in July 2012 at a Pando Daily event in Santa Monica, California. This hypothetical high-speed mode of transportation would have the following characteristics: immunity to weather, freedom from collisions, twice the speed of a plane, low power consumption, and energy storage for 24-hour operations. The name Hyperloop was chosen because it would go in a loop.
In May 2013, Musk likened Hyperloop to a "cross between a Concorde and a railgun and an air hockey table". By 2016, Musk envisioned that more advanced versions could potentially be able to go at hypersonic speed. From late 2012 until August 2013, a group of engineers from both Tesla and SpaceX worked on the modeling of Musk's Hyperloop concept. An early conceptual model of the system was published on both the Tesla and SpaceX websites, describing one potential design, function, pathway, and cost of a hyperloop system. In the alpha design, pods were envisioned to accelerate to cruising speeds gradually using linear electric motors and glide above their track on air bearings through tubes above ground on columns or below ground in tunnels to avoid the challenges of grade crossings. An ideal hyperloop system was estimated to be more energy-efficient, quiet, and autonomous than existing modes of mass transit in the 2010s. The Hyperloop Alpha was released as an open source design. Musk invited feedback to "see if the people can find ways to improve it". The trademark "HYPERLOOP", applicable to "high-speed transportation of goods in tubes", was issued to SpaceX on 4 April 2017. On 15 June 2015, SpaceX announced that it would build a Hyperloop test track located next to SpaceX's Hawthorne facility. The track was completed and used to test pod designs supplied by third parties in the competition. By 30 November 2015, with several commercial companies and dozens of student teams pursuing the development of Hyperloop technologies, the Wall Street Journal asserted that "'The Hyperloop Movement', as some of its unaffiliated members refer to themselves, is officially bigger than the man who started it." The Massachusetts Institute of Technology (MIT) hyperloop team developed an early hyperloop pod prototype, which they unveiled at the MIT Museum on 13 May 2016. Their design used electrodynamic suspension for levitating and eddy current braking. An early passenger test of low-speed hyperloop technology was conducted by Virgin Hyperloop with two employees of the company in November 2020. In January 2023, the European Committee for Electrotechnical Standardization released the first technical standard for hyperloop systems. Hardt Hyperloop demonstrated a Hyperloop lane switch without moving components in the infrastructure in June 2019 at its test site in Delft, The Netherlands. As of 21 December 2023, Hyperloop One (the former, rebranded Virgin Hyperloop) had terminated operations. Work in China on a similar project continued. In July 2024, CASIC conducted a test of its low-vacuum rail system. Theory and operation The much-older vactrain concept resembles a high-speed rail system without substantial air resistance by employing magnetically levitating trains in evacuated (airless) or partly evacuated tubes. However, the difficulty of maintaining a vacuum over large distances has prevented this type of system from ever being built. By contrast, the Hyperloop alpha concept was to operate at a small fraction of atmospheric pressure and requires the residual air for levitation. Initial design concept The hyperloop alpha concept envisioned operation by sending specially designed "capsules" or "pods" through a steel tube maintained at a partial vacuum. In Musk's original concept, each capsule would float on a layer of air provided under pressure to air-caster "skis", similar to how pucks are levitated above an air hockey table, while still allowing higher speeds than wheels can sustain.
With rolling resistance eliminated and air resistance greatly reduced, the capsules can glide for the bulk of the journey. In the alpha design concept, an electrically driven inlet fan and axial compressor would be placed at the nose of the capsule to "actively transfer high-pressure air from the front to the rear of the vessel", resolving the problem of air pressure building in front of the vehicle, slowing it down. A fraction of the air was to be shunted to the skis for additional pressure, augmenting that gain passively from lift due to their shape. In the alpha-level concept, passenger-only pods were to be in diameter and were projected to reach a top speed of to maintain aerodynamic efficiency. (Section 4.4) The design proposed passengers experience a maximum inertial acceleration of 0.5 g, about 2 or 3 times that of a commercial airliner on takeoff and landing. Proposed routes Several routes have been proposed that meet the distance conditions for which a hyperloop is hypothesized to provide improved transport times: under approximately . Route proposals range from speculation described in company releases, to business cases, to signed agreements. United States The route suggested in the 2013 alpha-level design document was from the Greater Los Angeles Area to the San Francisco Bay Area. That conceptual system would begin around Sylmar, just south of the Tejon Pass, follow Interstate 5 to the north, and arrive near Hayward on the east side of San Francisco Bay. Proposed branches were shown in the design document, including Sacramento, Anaheim, San Diego, and Las Vegas. No work has been done on the route proposed in Musk's design; one cited reason is that it would terminate on the fringes of two major metropolitan areas, Los Angeles and San Francisco. This would result in significant cost savings in construction, but require passengers traveling to and from Downtown Los Angeles and San Francisco, and any other community beyond Sylmar and Hayward, to transfer to another transportation mode to reach their destination. This would significantly lengthen the total travel time to those destinations. A similar problem already affects present-day air travel, where on short routes (like LAX–SFO) the flight time is only a rather small part of door-to-door travel time. Critics have argued that this would significantly reduce the proposed cost and/or time savings of hyperloop as compared to the proposed California High-Speed Rail project that will serve downtown stations in both San Francisco and Los Angeles. Passengers traveling from financial center to financial center are estimated to save about two hours by taking the Hyperloop instead of driving the whole distance. Others questioned the cost projections for the suggested California route. Some transportation engineers argued in 2013 that they found the alpha-level design cost estimates unrealistically low given the scale of construction and reliance on unproven technology. The technological and economic feasibility of the idea is unproven and a subject of significant debate. In November 2017, Arrivo announced a concept for a maglev automobile transport system from Aurora, Colorado to Denver International Airport, the first leg of a system from downtown Denver. Its contract described potential completion of a first leg in 2021. In February 2018, Hyperloop Transportation Technologies announced a similar plan for a loop connecting Chicago and Cleveland and a loop connecting Washington and New York City. 
In 2018 the Missouri Hyperloop Coalition was formed between Virgin Hyperloop One, the University of Missouri, and engineering firm Black & Veatch to study a proposed route connecting St. Louis, Columbia, and Kansas City. On 19 December 2018, Elon Musk unveiled a tunnel below Los Angeles. In the presentation, a Tesla Model X drove in a tunnel on the predefined track (rather than in a low-pressure tube). According to Musk, the cost of the system was comparatively low. Musk said: "The Loop is a stepping stone toward hyperloop. The Loop is for transport within a city. Hyperloop is for transport between cities, and that would go much faster than 150 mph." The Northeast Ohio Areawide Coordinating Agency, or NOACA, partnered with Hyperloop Transportation Technologies to conduct a $1.3 million feasibility study for developing a hyperloop corridor route from Chicago to Cleveland and Pittsburgh for America's first multistate hyperloop system in the Great Lakes Megaregion. Hundreds of thousands of dollars have already been committed to the project. NOACA's Board of Directors has awarded a $550,029 contract to Transportation Economics & Management Systems, Inc. (TEMS) for the Great Lakes Hyperloop Feasibility Study to evaluate the feasibility of an ultra-high speed hyperloop passenger and freight transport system initially linking Cleveland and Chicago. India Hyperloop Transportation Technologies was in discussions in 2016 with the Indian Government about a proposed route between Chennai and Bengaluru, with a conceptual travel time of 30 minutes. HTT also signed an agreement in 2018 with the Andhra Pradesh government to build India's first hyperloop project connecting Amaravathi to Vijayawada in a 6-minute ride. On 22 February 2018, Hyperloop One entered into a memorandum of understanding with the Government of Maharashtra to build a hyperloop transportation system between Mumbai and Pune that would cut the travel time from the current 180 minutes to 20 minutes. In 2016, Indore-based Dinclix Ground Works' DGW Hyperloop advocated a hyperloop corridor between Mumbai and Delhi, via Indore, Kota, and Jaipur. A worldwide, college-level hyperloop competition is scheduled to take place in India in February 2025 at the Discovery Campus of Thaiyur, IIT Madras. The competition will feature a 410-meter hyperloop vacuum tube. After the expected completion by September 2024, this will be one of the longest hyperloop tunnels in the world. An extended variant of the hyperloop (450 m) will also be constructed. The project was funded by Indian Railways along with the support of L&T Construction, ArcelorMittal and Hindalco Industries. The ultimate target is to initially construct a hyperloop system from Chennai to Bengaluru that can complete the 350 km journey in 15 minutes. This project can be completed in 5 years if enough funding is provided. Railways Minister Ashwini Vaishnaw shared the achievement on December 5 via X, stating, "Bharat's first Hyperloop test track (410 meters) completed". Saudi Arabia On 6 February 2020, the Ministry of Transport in the Kingdom of Saudi Arabia announced a contract agreement with Virgin Hyperloop One (VHO) to conduct a ground-breaking pre-feasibility study on the use of hyperloop technology for the transport of passengers and cargo. The study will serve as a blueprint for future hyperloop projects and build on the developer's long-standing relationship with the kingdom, which peaked when Crown Prince Mohammed bin Salman viewed VHO's passenger pod during a visit to the United States.
Italy In December 2021, the Veneto Regional Council approved a memorandum of understanding with MIMS and CAV for the testing of hyper transfer technology. Canada In 2016, Canadian hyperloop firm TransPod explored the possibility of hyperloop routes which would connect Toronto and Montreal, Toronto to Windsor, and Calgary to Edmonton. Toronto and Montreal, the largest cities in Canada, are connected by Ontario Highway 401, the busiest highway in North America. In March 2019, Transport Canada commissioned a study of hyperloops, so it could be "better informed on the technical, operational, economic, safety, and regulatory aspects of the hyperloop and understand its construction requirements and commercial feasibility." The province of Alberta signed a memorandum of understanding (MOU) to support TransPod for its Calgary to Edmonton hyperloop project. TransPod plans to move forward and has secured in private capital funding for the first phase, which will create an airport link for Edmonton. However, the company will first need to build and test prototypes on test tracks before the project can begin. Elsewhere in the world In 2016, Hyperloop One published the world's first detailed business case for a route between Helsinki and Stockholm, which would tunnel under the Baltic Sea to connect the two capitals in under 30 minutes. Hyperloop One undertook yet another feasibility study in 2016, this time with DP World to move containers from its Port of Jebel Ali in Dubai. In late 2016, Hyperloop One announced a feasibility study with Dubai's Roads and Transport Authority for passenger and freight routes connecting Dubai with the greater United Arab Emirates. Hyperloop One was also considering passenger routes in Moscow during 2016, and a cargo hyperloop to connect Hunchun in north-eastern China to the Port of Zarubino, near Vladivostok and the North Korean border on Russia's Far East. In May 2016, Hyperloop One kicked off their Global Challenge with a call for comprehensive proposals of hyperloop networks around the world. In September 2017, Hyperloop One selected 10 routes from 35 of the strongest proposals: Toronto–Montreal, Cheyenne–Denver–Pueblo, Miami–Orlando, Dallas–Laredo–Houston, Chicago–Columbus–Pittsburgh, Mexico City–Guadalajara, Edinburgh–London, Glasgow–Liverpool, Bengaluru–Chennai, and Mumbai–Chennai. Others put forward European routes, including in 2019 a conceptual route beginning at Amsterdam or Schiphol airport to Frankfurt. In 2016, a Warsaw University of Technology team began evaluating potential routes from Kraków to Gdańsk across Poland proposed by Hyper Poland. Hyperloop Transportation Technologies (HTT) signed an agreement with the government of Slovakia in March 2016 to perform impact studies, with potential links between Bratislava, Vienna, and Budapest, but there have been no further developments. In January 2017, HTT signed an agreement to explore the route Bratislava—Brno—Prague in Central Europe. In 2017, SINTEF, the largest independent research organization in Scandinavia, indicated they were considering building a test lab for hyperloop in Norway. An agreement was signed in June 2017 to co-develop a hyperloop line between Seoul and Busan, South Korea. Mars According to Musk, hyperloop would be useful on Mars as no tubes would be needed because Mars' atmosphere is about 1% the density of the Earth's at sea level. For the hyperloop concept to work on Earth, low-pressure tubes are required to reduce air resistance. 
However, if they were to be built on Mars, the lower air resistance would allow a hyperloop to be created with no tube, only a track, and so would be just a magnetically levitating train. Open-source design evolution In September 2013, Ansys Corporation ran computational fluid dynamics simulations to model the aerodynamics of the alpha concept capsule and the shear stress forces to which the capsule would be subjected. The simulation showed that the capsule design would need to be significantly reshaped to avoid creating supersonic airflow, and that the gap between the tube wall and capsule would need to be larger. Ansys employee Sandeep Sovani said the simulation showed that hyperloop has challenges but that he is convinced it is feasible. In October 2013, the development team of the OpenMDAO software framework released an unfinished, conceptual open-source model of parts of the hyperloop's propulsion system. The team asserted that the model demonstrated the concept's feasibility, although the tube would need to be significantly larger in diameter than originally projected. However, the team's model is not a true working model of the propulsion system, as it did not account for a wide range of technical factors required to physically construct a hyperloop based on Musk's concept, and in particular had no significant estimations of component weight. In November 2013, MathWorks analyzed the alpha proposal's suggested route and concluded that the route was mainly feasible. The analysis focused on the acceleration experienced by passengers and the necessary deviations from public roads in order to keep the accelerations reasonable; it did highlight that maintaining a trajectory along I-580 east of San Francisco at the planned speeds was not possible without significant deviation into heavily populated areas. In January 2015, a paper based on the NASA OpenMDAO open-source model reiterated the need for a larger diameter tube and a reduced cruise speed closer to Mach 0.85. It recommended removing on-board heat exchangers based on thermal models of the interactions between the compressor cycle, tube, and ambient environment. The compression cycle would only contribute 5% of the heat added to the tube, with 95% of the heat attributed to radiation and convection into the tube. The weight and volume penalty of on-board heat exchangers would not be worth the minor benefit, and regardless the steady-state temperature in the tube would remain only modestly above ambient temperature. According to Musk, various aspects of the hyperloop have technology applications to other Musk interests, including surface transportation on Mars and electric jet propulsion. Researchers associated with MIT's Department of Aeronautics and Astronautics published research in June 2017 that verified the challenge of aerodynamic design near the Kantrowitz limit that had been theorized in the original SpaceX Alpha-design concept released in 2013. In 2017, Dr. Richard Geddes and others formed the Hyperloop Advanced Research Partnership to act as a clearinghouse for Hyperloop public-domain reports and data. In February 2020, Hardt Hyperloop, Nevomo (formerly Hyper Poland), TransPod and Zeleros formed a consortium to drive standardization efforts, as part of a joint technical committee (JTC20) set up by European standards bodies CEN and CENELEC to develop common standards aimed at ensuring the safety and interoperability of infrastructure, rolling stock, signaling and other systems. 
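The Kantrowitz limit referenced above sets how much of the tube's cross-section must remain free around a pod before the bypass flow chokes at a given speed. The following is a minimal sketch of that constraint using the standard one-dimensional isentropic-flow relation; the formula is textbook compressible-flow theory, and the constants and function names are this example's own, not taken from the SpaceX alpha document:

```python
# Kantrowitz limit sketch: for a vehicle at Mach M inside a tube, the free
# (bypass) area around it must be at least A*/A of the tube cross-section,
# where A*/A follows from one-dimensional isentropic flow. Illustrative only.
GAMMA = 1.4  # ratio of specific heats for air

def min_bypass_area_ratio(mach: float, gamma: float = GAMMA) -> float:
    """Return the choked area ratio A*/A at the given Mach number."""
    term = (2.0 + (gamma - 1.0) * mach ** 2) / (gamma + 1.0)
    return mach * term ** (-(gamma + 1.0) / (2.0 * (gamma - 1.0)))

for m in (0.5, 0.7, 0.85):
    print(f"Mach {m}: at least {min_bypass_area_ratio(m):.0%} of the tube "
          "cross-section must stay open around the pod")
```

At Mach 0.85 this ratio works out to roughly 98%, which is one way to see why both the OpenMDAO analysis and the MIT work above point toward a much larger tube (or an on-board compressor swallowing part of the oncoming flow).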
Hyperloop Association In December 2022 Hyperloop companies Hardt, Hyperloop One, Hyperloop Transport Technologies, Nevomo, Swisspod Technologies, TransPod, and Zeleros formed the Hyperloop Association. The Association's stated aims are to stimulate the development and growth of this emerging transport market and to support collaboration with government and regulatory agencies on transportation policymaking. The Hyperloop Association is represented by Ben Paczek, CEO and co-founder of Nevomo. Hyperloop research programs Eurotube EuroTube is a non-profit research organization for the development of vacuum transport technology. EuroTube is currently developing a test tube in Collombey-Muraz, Switzerland. The organization was founded in 2017 at ETH Zurich as a Swiss association and became a Swiss foundation in 2019. The test tube is planned on a 2:1 scale with a diameter of 2.2 m. Hyperloop Development Program (HDP) The Hyperloop Development Program is a public-private partnership of public sector partners, industry parties, and research institutions dedicated to proving the feasibility of hyperloop, testing and demonstrating it in the European Hyperloop Center Groningen, and identifying the future prospects and opportunities for industry and stakeholders. The European Hyperloop Center is under construction and will have a 420-meter test facility including a lane switch; it is planned to commence testing in 2024. The total program size is €30 million and it is co-funded with €4.5 million by the Dutch Ministry of Infrastructure and Water Management and Ministry of Economic Affairs and Climate Policy, and €3 million by the Dutch Province of Groningen. Partners in the program include AndAnotherday, ADSE, Royal BAM Group, Berenschot, Busch, Delft Hyperloop, Denys, Dutch Boosting Group, EuroTube, Hardt Hyperloop, the Institute of Hyperloop Technology, Royal IHC, INTIS, Mercon, Nevomo, Nederlandse Spoorwegen, POSCO International, Schiphol Group, Schweizer Design Consulting, Tata Steel, TÜV Rheinland, UNStudio, Vattenfall. TUM Hyperloop (previously WARR Hyperloop) TUM Hyperloop is a research program that emerged in 2019 from the Technical University of Munich's hyperloop pod competition team. The TUM Hyperloop team had won the last three competitions in a row, achieving a world speed record which still stands today. The research program's goals are to investigate the technical feasibility of the hyperloop system by means of a demonstrator, as well as to simulate its economic and technical feasibility. The planned 24 meter demonstrator will consist of a tube and a full-size pod. The next step after completion of the first project phase is an extension to 400 meters to investigate higher speeds. This is planned in the Munich area, in Taufkirchen, Ottobrunn or at the Oberpfaffenhofen airfield. Certification for operation started in Ottobrunn in July 2023. Hyperloop pod competition A number of student and non-student teams participated in a hyperloop pod competition in 2015–16, and at least 22 of them built hardware to compete on a sponsored hyperloop test track in mid-2016. In June 2015, SpaceX announced that they would sponsor a hyperloop pod design competition and would build a subscale test track near SpaceX's headquarters in Hawthorne, California, for the competitive event in 2016. SpaceX stated in their announcement, "Neither SpaceX nor Elon Musk is affiliated with any Hyperloop companies. 
While we are not developing a commercial Hyperloop ourselves, we are interested in helping to accelerate development of a functional Hyperloop prototype." More than 700 teams had submitted preliminary applications by July. A preliminary design briefing was held in November 2015, where more than 120 student engineering teams were selected to submit Final Design Packages due by 13 January 2016. A Design Weekend was held at Texas A&M University 29–30 January 2016, for all invited entrants. Engineers from the Massachusetts Institute of Technology were named the winners of the competition. While the University of Washington team won the Safety Subsystem Award, Delft University won the Pod Innovation Award as well as second place, followed by the University of Wisconsin–Madison, Virginia Tech, and the University of California, Irvine. In the Design Category, the winning team was Hyperloop UPV from Universitat Politècnica de València, Spain. On 29 January 2017, Delft Hyperloop (Delft University of Technology) won the prize for the "best overall design" at the final stage of the SpaceX hyperloop competition, while WARR Hyperloop of the Technical University of Munich won the prize for "fastest pod". The Massachusetts Institute of Technology placed third. The second hyperloop pod competition took place from 25 to 27 August 2017. The only judging criterion was top speed, provided it was followed by successful deceleration. WARR Hyperloop from the Technical University of Munich won the competition by reaching the highest top speed. A third hyperloop pod competition took place in July 2018. The defending champions, the WARR Hyperloop team from the Technical University of Munich, beat their own speed record during their run. The Delft Hyperloop team representing Delft University of Technology landed in second place, while the EPFLoop team from École Polytechnique Fédérale de Lausanne (EPFL) earned the third-place finish. The fourth competition in August 2019 saw the team from the Technical University of Munich, now known as TUM Hyperloop (by NEXT Prototypes e.V.), again winning the competition and beating their own speed record. Criticism Rider experience Some critics of Hyperloop focus on the experience—possibly unpleasant and frightening—of riding in a narrow, sealed, windowless capsule inside a sealed steel tunnel, that is subjected to significant acceleration forces; high noise levels due to air being compressed and ducted around the capsule at near-sonic speeds; and the vibration and jostling. Even if the tube is initially smooth, the ground may shift with seismic activity. At high speeds, even minor deviations from a straight path may add considerable buffeting. This is in addition to practical and logistical questions regarding how to best deal with safety issues such as equipment malfunction, accidents, and emergency evacuations. Design and safety YouTube creator Adam Kovacs has described Hyperloop as a kind of gadgetbahn because it would be an expensive, unproven system that is no better than existing technologies such as traditional high-speed rail. John Hansman, professor of aeronautics and astronautics at MIT, has pointed out potential design problems, such as how a slight misalignment in the tube would be compensated for, and the potential interplay between the air cushion and the low-pressure air. He has also questioned what would happen if the power were to go out when the pod was miles away from a city. 
UC Berkeley physics professor Richard Muller has also expressed concern regarding "[the Hyperloop's] novelty and the vulnerability of its tubes, [which] would be a tempting target for terrorists", and that the system could be disrupted by everyday dirt and grime. The solar panels Musk plans to install along the length of the hyperloop system have been criticized by engineering professor Roger Goodall of Loughborough University, who argues that they could not return enough energy to power the system, since the air pumps and propulsion would require much more power than the solar panels could generate. Costs The alpha proposal projected that cost savings compared with conventional rail would come from a combination of several factors. The small profile and elevated nature of the alpha route would enable Hyperloop to be constructed primarily in the median of Interstate 5. However, whether this would be truly feasible is a matter of debate. The low profile would reduce tunnel boring requirements and the light weight of the capsules is projected to reduce construction costs over conventional passenger rail. It was asserted that there would be less right-of-way opposition and environmental impact as well due to its small, sealed, elevated profile versus that of a rail easement; however, other commentators contend that a smaller footprint does not guarantee less opposition. In criticizing this assumption, mass transportation writer Alon Levy said, "In reality, an all-elevated system (which is what Musk proposes with the Hyperloop) is a bug rather than a feature. Central Valley land is cheap; pylons are expensive, as can be readily seen by the costs of elevated highways and trains all over the world". Michael Anderson, a professor of agricultural and resource economics at UC Berkeley, predicted that the actual costs would be far higher. Projected low ticket prices by Hyperloop developers have been questioned by Dan Sperling, director of the Institute of Transportation Studies at University of California Davis, who stated that "there's no way the economics on that would ever work out." Some critics have argued that, since Hyperloop is designed to carry fewer passengers than typical public train systems, it could be difficult to price tickets to cover the costs of construction and operation. In a study by TU Delft, researchers claim that fares would have to be higher than €0.30 per passenger-kilometer, compared to €0.174/p-km for high-speed rail and €0.183/p-km for air travel. The early cost estimates of the hyperloop are a subject of debate. A number of economists and transportation experts have expressed the belief that the price tag dramatically understates the cost of designing, developing, constructing, and testing an all-new form of transportation. The Economist magazine said that the estimates are unlikely to "be immune to the hypertrophication of cost that every other grand infrastructure project seems doomed to suffer." Hyperloop One estimated that for a loop around the Bay Area the costs were in a range of $9 billion to $13 billion in total, or from $84 million to $121 million per mile. For another project in the United Arab Emirates the company estimated $52 million per mile and for a Stockholm-Helsinki route the company reported a cost of $64 million per mile. 
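As a rough worked comparison of the per-passenger-kilometer figures above (the 300 km trip length here is a hypothetical value chosen purely for illustration, not a figure from the TU Delft study):

```python
# Back-of-the-envelope fare comparison using the per-passenger-km figures
# cited above; the trip length is an invented, illustrative number.
fares_eur_per_pkm = {
    "hyperloop (TU Delft break-even)": 0.30,
    "high-speed rail": 0.174,
    "air travel": 0.183,
}
trip_km = 300  # hypothetical intercity trip length

for mode, rate in fares_eur_per_pkm.items():
    print(f"{mode}: EUR {rate * trip_km:.2f} for a {trip_km} km trip")
```

On these figures a break-even hyperloop ticket would cost roughly two-thirds more than the equivalent rail or air fare over the same distance, which is the core of the TU Delft criticism.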
Political considerations Political impediments to the construction of such a project in California may be large due to the "political and reputation capital" invested in the existing mega-project of California High-Speed Rail. Because replacing that with a different design would not be straightforward given California's political economy, Texas has been suggested as an alternative for its more amenable political and economic environment. Building a successful hyperloop sub-scale demonstration project could reduce the political impediments and improve cost estimates. In 2013, Musk suggested that he might become personally involved in building a demonstration prototype of the hyperloop concept, including funding the development effort. According to The New York Times, "The central impediment" to the Hyperloop is that it "would require creating an entire infrastructure. That means constructing miles-long systems of tubes and stations, acquiring rights of way, adhering to government regulations and standards, and avoiding changes to the ecology along its routes." Hyperloop companies Related concepts Historical related concepts The pneumatic tube, using high pressures behind a capsule to move it forward, was suggested in 1799 by the British mechanical engineer and inventor George Medhurst. In 1812, Medhurst wrote a book detailing his idea of transporting passengers and goods through airtight tubes using air propulsion. Beach Pneumatic Transit was operated from 1870 to 1873 as a one-block-long prototype of an underground tube transport public transit system in New York City, following a concept by Alfred Ely Beach. The system worked at near-atmospheric pressure, with the passenger car moved by means of higher pressure air applied to the back of the car while comparatively lower pressure air was maintained in front of the car. Vactrains were explored in the 1910s, as described by American rocket pioneer Robert Goddard and others. Unlike pneumatic tubes, these do not use pressure for propulsion, but instead utilize a hard vacuum to eliminate drag ahead of the vehicle. The vehicle is both suspended and propelled by magnetic levitation. Swissmetro was a proposal to run a maglev train in a low-pressure environment. Concessions were granted to Swissmetro in the early 2000s to connect the Swiss cities of St. Gallen, Zurich, Basel, and Geneva. Studies of commercial feasibility reached differing conclusions and the vactrain was never built. ET3 Global Alliance (ET3) was founded by Daryl Oster in 1997 with the goal of establishing a global transportation system using passenger capsules in frictionless maglev full-vacuum tubes. Oster received interest from Elon Musk in potentially investing in a prototype of ET3's proposed design. In 2003 Franco Cotana led the development of Pipenet, with a prototype system constructed in Italy in 2005, with a vision to use an evacuated tube for moving freight at high speed using linear synchronous motors and magnetic levitation. However, development stopped after funding ceased. In August 2010, a high-speed vacuum-based maglev train was proposed for China, projected to cost more per kilometer than regular high-speed rail. In 2018 a short loop test track was completed to test some parts of the technology. Vactrains using the moniker 'Hyperloop' In 2018, a concept for creating and using intermodal Hyperloop capsules was presented in an academic journal. 
After detaching the drive elements, capsules could potentially be used in a way similar to traditional containers for fast transport of goods or individuals. It was further proposed that specialized airplanes, dedicated high-speed trains, road tractors or watercraft could perform "last mile" transport for solving the problem of fast transportation to centers where hyperloop terminals are locally unavailable or infeasible to construct. In May 2021, it was reported that a low-vacuum sealed-tube test system designed for high speeds had begun construction in Datong, Shanxi Province. An initial section was completed in 2022 and the full test line is planned to be completed within two years. The line is being constructed by the North University of China and the Third Research Institute of China Aerospace Science and Industry Corporation. In July 2021, work began on an experimental European operational Hyperloop testing facility. The looped test tube, made of an aluminum alloy, was built by the Swiss-American startup Swisspod Technologies and the Distributed Electrical Systems Laboratory (DESL) of École Polytechnique Fédérale de Lausanne. In September 2021, Swisspod Technologies and MxV Rail (formerly TTCI), a subsidiary of the Association of American Railroads (AAR), began collaboration to potentially build a full-scale testing facility for Hyperloop technology on the Pueblo Plex campus in Pueblo, Colorado, US. The primary purpose of this facility would be to conduct research and development activities on Swisspod's proprietary Hyperloop propulsion system. See also Gravity train Gravity-vacuum transit Ground effect train High-speed rail Kantrowitz limit Maglev Pneumatic tube Swissmetro Transatlantic tunnel Vactrain European Hyperloop Week References External links Video of First Successful Test Ride—Wired via YouTube Eurotube.org
Hyperloop
Technology,Engineering
7,432
59,096,122
https://en.wikipedia.org/wiki/Ocean%20acidification%20in%20the%20Arctic%20Ocean
The Arctic Ocean covers an area of 14,056,000 square kilometers, and supports a diverse and important socioeconomic food web of organisms, despite its average water temperature being 32 degrees Fahrenheit. Over the last three decades, the Arctic Ocean has experienced drastic changes due to climate change. One of the changes is in the acidity levels of the ocean, which have been consistently increasing at twice the rate of the Pacific and Atlantic oceans. Arctic Ocean acidification is a result of feedback from climate system mechanisms, and is having negative impacts on Arctic Ocean ecosystems and the organisms that live within them. Process Ocean acidification is caused by the equilibration of the atmosphere with the ocean, a process that occurs worldwide. Carbon dioxide in the atmosphere equilibrates with and dissolves into the ocean. During this reaction, carbon dioxide reacts with water to form carbonic acid. The carbonic acid then dissociates into bicarbonate ions and hydrogen ions. This reaction causes the pH of the water to lower, effectively acidifying it. Ocean acidification is occurring in every ocean across the world. Since the beginning of the Industrial Revolution, the world's oceans have absorbed approximately 525 billion tons of carbon dioxide. During this time, world ocean pH has collectively decreased from 8.2 to 8.1, with climatic modeling predicting a further decrease of pH by 0.3 units by 2100. However, the Arctic Ocean has been affected more due to its cold water temperatures and the increased solubility of gases as water temperature decreases. The cold Arctic water is able to absorb higher amounts of carbon dioxide compared to the warmer Pacific and Atlantic Oceans. The chemical changes caused by the acidification of the Arctic Ocean are having negative ecological and socioeconomic repercussions. With the changes in the chemistry of their environment, arctic organisms are challenged with new stressors. These stressors can have damaging effects on these organisms, with some being affected more than others. Calcifying organisms specifically appear to be the most impacted by this changing water composition, as they rely on carbonate availability to survive. Dissolved carbonate concentrations decrease with increasing carbon dioxide and lowered pH in the water. Ecological food webs are also altered by the acidification. Acidification lowers the ability of many fish to grow, which impacts not only food webs but also the humans who rely on these fisheries. Economic effects result from shifting food webs that decrease popular fish populations. These fish populations provide jobs to people who work in the fisheries industry. As is apparent, ocean acidification lacks any positive benefits, and as a result has been placed high on a priority list within the United States and other organizations such as the Scientific Committee on Oceanic Research, UNESCO's Intergovernmental Oceanographic Commission, the Ocean Carbon and Biogeochemistry Program, the Integrated Marine Biogeochemistry and Ecosystem Research Project, and the Consortium for Ocean Leadership. Causes Decreased sea ice Arctic sea ice has experienced an extreme reduction over the past few decades, with the minimum area of sea ice being 4.32 million km2 in 2019, a sharp 38% decrease from 1980, when the minimum area was 7.01 million km2. Sea ice plays an important role in the health of the Arctic Ocean, and its decline has had detrimental effects on Arctic Ocean chemistry. 
All oceans equilibrate with the atmosphere by pulling carbon dioxide out of the atmosphere and into the ocean, which lowers the pH of the water. Sea ice limits the air-sea gas exchange of carbon dioxide by protecting the water from being completely exposed to the atmosphere. Low carbon dioxide levels are important to the Arctic Ocean due to intense cooling, fresh water runoff, and photosynthesis from marine organisms. Reductions in sea ice have allowed more carbon dioxide to equilibrate with the arctic water, resulting in increased acidification. The decrease in sea ice has also allowed more Pacific Ocean water to flow into the Arctic Ocean during the winter, called Pacific winter water. Pacific Ocean water is high in carbon dioxide, and with decreased amounts of sea ice, more Pacific Ocean water has been able to enter the Arctic Ocean, carrying carbon dioxide with it. This Pacific winter water has further acidified the Arctic Ocean, as well as increased the depth of acidified water. Melting methane hydrates Climate change is causing destabilization of multiple climate systems within the Arctic Ocean. One system that climate change is impacting is methane hydrates. Methane hydrates are located along the continental margins, and are stabilized by high pressure, as well as uniformly low temperatures. Climate change has begun to destabilize these methane hydrates within the Arctic Ocean by decreasing pressure and increasing temperatures, allowing methane hydrates to melt and release methane into the arctic waters. When methane is released into the water, it can either be used via anaerobic or aerobic metabolism by microorganisms in the ocean sediment, or be released from the sea into the atmosphere. Most impactful to ocean acidification is aerobic oxidation by microorganisms in the water column. Carbon dioxide is produced by the reaction of methane and oxygen in water. The carbon dioxide then equilibrates with water, producing carbonic acid, which then dissociates to release hydrogen ions and bicarbonate, further contributing to ocean acidification. Effects on Arctic organisms Organisms in Arctic waters are under high environmental stress, such as extremely cold water. It is believed that this high stress environment will cause ocean acidification factors to have a stronger effect on these organisms. It could also cause these effects to appear in the Arctic before they appear in other parts of the ocean. There is significant variation in the sensitivity of marine organisms to increased ocean acidification. Calcifying organisms generally exhibit larger negative responses to ocean acidification than non-calcifying organisms across numerous response variables, with the exception of crustaceans, which calcify but do not seem to be negatively affected. This is due mainly to the process of marine biogenic calcification that calcifying organisms utilize. Calcifying organisms Carbonate ions (CO32-) are essential to marine calcifying organisms, like plankton and shellfish, as they are required to produce their calcium carbonate (CaCO3) shells and skeletons. As the ocean acidifies, the increased uptake of CO2 by seawater increases the concentration of hydrogen ions, which lowers the pH of the water. This change in the chemical equilibrium of the inorganic carbon system reduces the concentration of these carbonate ions. This reduces the ability of these organisms to create their shells and skeletons. 
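The chain of reactions described above, together with the saturation-state quantity used in the following paragraphs, can be summarized compactly. This is the standard textbook formulation of the seawater inorganic carbon system, not a derivation taken from this article's sources:

```latex
% Uptake of CO2 and stepwise dissociation of carbonic acid:
\mathrm{CO_2(g)} \rightleftharpoons \mathrm{CO_2(aq)}
\mathrm{CO_2(aq)} + \mathrm{H_2O} \rightleftharpoons \mathrm{H_2CO_3}
\mathrm{H_2CO_3} \rightleftharpoons \mathrm{H^+} + \mathrm{HCO_3^-}
\mathrm{HCO_3^-} \rightleftharpoons \mathrm{H^+} + \mathrm{CO_3^{2-}}

% Because pH = -\log_{10}[\mathrm{H^+}], the reported drop from 8.2 to 8.1
% corresponds to [H+] rising by a factor of 10^{0.1} \approx 1.26 (about 26%).

% Saturation state of calcium carbonate (aragonite or calcite):
\Omega = \frac{[\mathrm{Ca^{2+}}]\,[\mathrm{CO_3^{2-}}]}{K_{sp}}
% \Omega > 1: precipitation (shell building) is favored;
% \Omega < 1: seawater is corrosive and shells tend to dissolve.
```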
The two polymorphs of calcium carbonate that are produced by marine organisms are aragonite and calcite. These are the materials that make up most of the shells and skeletons of these calcifying organisms. Aragonite, for example, makes up nearly all mollusc shells, as well as the exoskeleton of corals. The formation of these materials is dependent on the saturation state of CaCO3 in ocean water. Waters which are saturated in CaCO3 are favorable to precipitation and formation of shells and skeletons, but waters which are undersaturated are corrosive to shells. In the absence of protective mechanisms, dissolution of calcium carbonate will occur. As colder arctic water absorbs more CO2, the concentration of CO32- is reduced; therefore the saturation of calcium carbonate is lower in high-latitude oceans than it is in tropical or temperate oceans. The undersaturation of CaCO3 causes the shells of calcifying organisms to dissolve, which can have devastating consequences for the ecosystem. As the shells dissolve, the organisms struggle to maintain proper health, which can lead to mass mortality. The loss of many of these species can lead to intense consequences for the marine food web in the Arctic Ocean, as many of these marine calcifying organisms are keystone species. Laboratory experiments on various marine biota in an elevated CO2 environment show that changes in aragonite saturation cause substantial changes in overall calcification rates for many species of marine organisms, including coccolithophores, foraminifera, pteropods, mussels, and clams. Although the undersaturation of arctic water has been shown to have an effect on the ability of organisms to precipitate their shells, recent studies have shown that the calcification rate of calcifiers, such as corals, coccolithophores, foraminiferans and bivalves, decreases with increasing pCO2, even in seawater supersaturated with respect to CaCO3. Additionally, increased pCO2 has been found to have complex effects on the physiology, growth and reproductive success of various marine calcifiers. Life cycle tolerance seems to differ between various marine organisms, as well as tolerance at different life cycle stages (e.g. larva and adult). The first stage in the life cycle of marine calcifiers at serious risk from high CO2 content is the planktonic larval stage. The larval development of several marine species, primarily sea urchins and bivalves, is highly affected by elevations of seawater pCO2. In laboratory tests, numerous sea urchin embryos were reared under different CO2 concentrations until they developed to the larval stage. It was found that once they reached this stage, larval and arm sizes were significantly smaller, and abnormal skeletal morphology was noted, with increasing pCO2. Similar findings have been reported in CO2-treated mussel larvae, which showed a larval size decrease of about 20% and morphological abnormalities such as convex hinges, weaker and thinner shells and protrusion of the mantle. The larval body size also impacts the encounter and clearance rates of food particles, and if larval shells are smaller or deformed, these larvae are more prone to starvation. CaCO3 structures also serve vital functions for calcified larvae, such as defense against predation, as well as roles in feeding, buoyancy control and pH regulation. Another example of organisms which may be seriously impacted by ocean acidification is pteropods, shelled pelagic molluscs which play an important role in the food webs of various ecosystems. 
Since they harbour an aragonitic shell, they could be very sensitive to ocean acidification driven by the increase of anthropogenic CO2 emissions. Laboratory tests showed that calcification exhibits a 28% decrease at the pH value of the Arctic Ocean expected for the year 2100, compared to the present pH value. This 28% decline of calcification in the lower pH condition is within the range reported for other calcifying organisms such as corals. In contrast with sea urchin and bivalve larvae, corals and marine shrimps are more severely impacted by ocean acidification after settlement, once they have developed into the polyp stage. In laboratory tests, the morphology of the CO2-treated polyp endoskeleton of corals was disturbed and malformed compared to the radial pattern of control polyps. This variability in the impact of ocean acidification on different life cycle stages of different organisms can be partially explained by the fact that most echinoderms and mollusks start shell and skeleton synthesis at their larval stage, while corals start at the settlement stage. Hence, these stages are highly susceptible to the potential effects of ocean acidification. Most calcifiers, such as corals, echinoderms, bivalves and crustaceans, play important roles in coastal ecosystems as keystone species, bioturbators and ecosystem engineers. The food web in the Arctic Ocean is somewhat truncated, meaning it is short and simple. Any impacts to key species in the food web can cause exponentially devastating effects on the rest of the food chain as a whole, as the other species will no longer have a reliable food source. If these larger organisms no longer have any source of nutrients, they too will eventually die off, and the entire Arctic Ocean ecosystem will be affected. This would have a huge impact on the Arctic people who catch fish for a living, along with the economic repercussions which would follow such a major shortage of food and income for these families. Effects on Local Communities Ocean acidification not only has impacts on aquatic life, but also on human communities and the overall livelihood of people living near these waters. For example, as a result of crustaceans being unable to produce their shells and skeletons due to reduced amounts of carbonate ions, populations such as crabs have significantly decreased in some areas of the Northern hemisphere. This has caused numerous fisheries in these areas to close down after multi-million dollar losses. In addition, increased temperatures have caused a swift increase in toxic algal blooms, which are known to produce a neurotoxin called domoic acid that can accumulate inside the bodies of certain shellfish. If ingested by humans, this toxin can cause severe health issues, which has forced many additional fisheries to close down. Methods to Reduce Acidification Since the carbon cycle is tightly connected to the issue of ocean acidification, the most effective method for minimizing the effects of ocean acidification is to slow climate change. Anthropogenic inputs of CO2 can be reduced through methods such as limiting the use of fossil fuels and employing renewable energies. This will ultimately lower the amount of CO2 in the atmosphere and reduce the amount dissolved into the oceans. More intrusive methods to mitigate acidification involve a technique called enhanced weathering, where powdered minerals like silicates are applied to the land or ocean surface. 
The powdered minerals enable accelerated dissolution, releasing cations, converting CO2 to bicarbonate and increasing the pH of the oceans. Other mitigation methods, like ocean iron fertilization, still need more experimentation and evaluation in order to be deemed effective. Ocean iron fertilization in particular has been shown to increase acidification in the deep ocean while only slightly reducing acidification at the surface. References Arctic Ocean Effects of climate change Biological oceanography Chemical oceanography Geochemistry Aquatic ecology
Ocean acidification in the Arctic Ocean
Chemistry,Biology
2,832
576,646
https://en.wikipedia.org/wiki/2.5D
2.5D (pronounced "two-and-a-half dimensional") perspective refers to gameplay or movement in a video game or virtual reality environment that is restricted to a two-dimensional (2D) plane with little to no access to a third dimension, in a space that otherwise appears to be three-dimensional and is often simulated and rendered in a 3D digital environment. This is similar to, but distinct from, pseudo-3D perspective (sometimes called three-quarter view when the environment is portrayed from an angled top-down perspective), which refers to 2D graphical projections and similar techniques used to cause images or scenes to simulate the appearance of being three-dimensional (3D) when in fact they are not. By contrast, games, spaces or perspectives that are simulated and rendered in 3D and used in 3D level design are said to be true 3D, and 2D rendered games made to appear as 2D without approximating a 3D image are said to be true 2D. Common in video games, 2.5D projections have also been useful in geographic visualization (GVIS) to help understand visual-cognitive spatial representations or 3D visualization. The terms three-quarter perspective and three-quarter view trace their origins to the three-quarter profile in portraiture and facial recognition, which depicts a person's face that is partway between a frontal view and a side view. Computer graphics Axonometric and oblique projection In axonometric projection and oblique projection, two forms of parallel projection, the viewpoint is rotated slightly to reveal other facets of the environment than what are visible in a top-down perspective or side view, thereby producing a three-dimensional effect. An object is "considered to be in an inclined position resulting in foreshortening of all three axes", and the image is a "representation on a single plane (as a drawing surface) of a three-dimensional object placed at an angle to the plane of projection." Lines perpendicular to the plane become points, lines parallel to the plane have true length, and lines inclined to the plane are foreshortened. They are popular camera perspectives among 2D video games, most commonly those released for 16-bit or earlier and handheld consoles, as well as in later strategy and role-playing video games. The advantage of these perspectives is that they combine the visibility and mobility of a top-down game with the character recognizability of a side-scrolling game. Thus the player can be presented with an overview of the game world, with the ability to see it from above, more or less, and with additional details in artwork made possible by using an angle: instead of showing a humanoid in top-down perspective, as a head and shoulders seen from above, the entire body can be drawn when using a slanted angle; turning a character around would reveal how it looks from the sides, the front and the back, while the top-down perspective will display the same head and shoulders regardless. There are three main divisions of axonometric projection: isometric (equal measure), dimetric (symmetrical and unsymmetrical), and trimetric (single-view or only two sides). The most common of these drawing types in engineering drawing is isometric projection. This projection is tilted so that all three axes create equal angles at intervals of 120 degrees. The result is that all three axes are equally foreshortened. In video games, a form of dimetric projection with a 2:1 pixel ratio is more common due to the problems of anti-aliasing and square pixels found on most computer monitors. 
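As a concrete sketch of the 2:1 dimetric projection just described (the tile dimensions and function names here are invented for illustration, not taken from any particular engine), world tile coordinates map to screen pixels like this:

```python
# Classic 2:1 "isometric" (dimetric) tile projection: moving one tile in x
# or y shifts the drawing position by half a tile width horizontally and
# half a tile height vertically, producing the familiar diamond grid.
TILE_W, TILE_H = 64, 32  # hypothetical tile size with the 2:1 pixel ratio

def tile_to_screen(tx: int, ty: int) -> tuple[int, int]:
    """Project integer tile coordinates onto 2D screen coordinates."""
    sx = (tx - ty) * (TILE_W // 2)
    sy = (tx + ty) * (TILE_H // 2)
    return sx, sy

if __name__ == "__main__":
    for tile in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        print(tile, "->", tile_to_screen(*tile))
```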
In oblique projection typically all three axes are shown without foreshortening. All lines parallel to the axes are drawn to scale, and diagonals and curved lines are distorted. One tell-tale sign of oblique projection is that the face pointed toward the camera retains its right angles with respect to the image plane. Two examples of oblique projection are Ultima VII: The Black Gate and Paperboy. Examples of axonometric projection include SimCity 2000, and the role-playing games Diablo and Baldur's Gate. Billboarding In three-dimensional scenes, the term billboarding is applied to a technique in which objects are sometimes represented by two-dimensional images applied to a single polygon which is typically kept perpendicular to the line of sight. The name refers to the fact that objects are seen as if drawn on a billboard. This technique was commonly used in early 1990s video games when consoles did not have the hardware power to render fully 3D objects. This is also known as a backdrop. This can be used to good effect for a significant performance boost when the geometry is sufficiently distant that it can be seamlessly replaced with a 2D sprite. In games, this technique is most frequently applied to objects such as particles (smoke, sparks, rain) and low-detail vegetation. It has since become mainstream, and is found in many games such as Rome: Total War, where it is exploited to simultaneously display thousands of individual soldiers on a battlefield. Early examples include first-person shooters like Marathon Trilogy, Wolfenstein 3D, Doom, Hexen and Duke Nukem 3D as well as racing games like Carmageddon and Super Mario Kart and platformers like Super Mario 64. Skyboxes and skydomes Skyboxes and skydomes are methods used to easily create a background to make a game level look bigger than it really is. If the level is enclosed in a cube, the sky, distant mountains, distant buildings, and other unreachable objects are rendered onto the cube's faces using a technique called cube mapping, thus creating the illusion of distant three-dimensional surroundings. A skydome employs the same concept but uses a sphere or hemisphere instead of a cube. As a viewer moves through a 3D scene, it is common for the skybox or skydome to remain stationary with respect to the viewer. This technique gives the skybox the illusion of being very far away, since other objects in the scene appear to move while the skybox does not. This imitates real life, where distant objects such as clouds, stars and even mountains appear to be stationary when the viewpoint is displaced by relatively small distances. Effectively, everything in a skybox will always appear to be infinitely distant from the viewer. This consequence of skyboxes dictates that designers should be careful not to carelessly include images of discrete objects in the textures of a skybox, since the viewer may be able to perceive the inconsistencies of those objects' sizes as the scene is traversed. Scaling along the Z axis In some games, sprites are scaled larger or smaller depending on their distance to the player, producing the illusion of motion along the Z (forward) axis. Sega's 1986 video game Out Run, which runs on the Sega OutRun arcade system board, is a good example of this technique. In Out Run, the player drives a Ferrari into the depth of the game window. The palms on the left and right side of the street are the same bitmap, but have been scaled to different sizes, creating the illusion that some are closer than others. 
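The scaling just described boils down to shrinking a sprite in proportion to its distance along the forward axis. A minimal sketch follows; the focal length, sprite size, and distances are hypothetical values chosen for illustration:

```python
# Perspective sprite scaling: on-screen size falls off with distance z.
# Dividing by (FOCAL + z) rather than z keeps the size finite at z = 0.
FOCAL = 256  # hypothetical focal length, in pixels

def projected_size(base_px: int, z: float) -> int:
    """Scale a sprite's base pixel size by its distance down the road."""
    return max(1, int(base_px * FOCAL / (FOCAL + z)))

# The same 64-pixel palm bitmap drawn at increasing distances:
for z in (0, 200, 600, 1800):
    print(f"z={z:5}: palm drawn {projected_size(64, z)} px tall")
```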
The angles of movement are "left and right" and "into the depth" (though technically capable of doing so, this game did not allow making a U-turn or going into reverse, i.e. moving "out of the depth", as this did not fit the high-speed game play and tense time limit). Note that the view is comparable to that which a driver would have in reality when driving a car. The position and size of any billboard is generated by a (complete 3D) perspective transformation, as are the vertices of the poly-line representing the center of the street. Often the center of the street is stored as a spline and sampled in a way that on straight streets every sampling point corresponds to one scan-line on the screen. Hills and curves lead to multiple points on one line, and one has to be chosen. Or one line is without any point and has to be interpolated linearly from the adjacent lines. Very memory-intensive billboards are used in Out Run to draw corn-fields and water waves which are wider than the screen even at the largest viewing distance, and also in Test Drive to draw trees and cliffs. Drakkhen was notable for being among the first role-playing video games to feature a three-dimensional playing field. However, it did not employ a conventional 3D game engine, instead emulating one using character-scaling algorithms. The player's party travels overland on a flat terrain made up of vectors, on which 2D objects are zoomed. Drakkhen features an animated day-night cycle, and the ability to wander freely about the game world, both rarities for a game of its era. This type of engine was later used in the game Eternam. Some mobile games that were released on the Java ME platform, such as the mobile version of Asphalt: Urban GT and Driver: L.A. Undercover, used this method for rendering the scenery. While the technique is similar to some of Sega's arcade games, such as Thunder Blade and Cool Riders and the 32-bit version of Road Rash, it uses polygons instead of sprite scaling for buildings and certain objects, though these look flat-shaded. Later mobile games (mainly from Gameloft), such as Asphalt 4: Elite Racing and the mobile version of Iron Man 2, use a mix of sprite scaling and texture mapping for some buildings and objects. Parallax scrolling Parallaxing refers to a technique in which a collection of 2D sprites or layers of sprites are made to move independently of each other and/or the background to create a sense of added depth. This depth cue is created by the relative motion of layers. The technique grew out of the multiplane camera technique used in traditional animation since the 1940s. This type of graphical effect was first used in the 1982 arcade game Moon Patrol. Examples include the skies in Rise of the Triad, the arcade version of Rygar, Sonic the Hedgehog, Street Fighter II, Shadow of the Beast and Dracula X Chronicles, as well as Super Mario World. Mode 7 Mode 7, a display system effect that included rotation and scaling, allowed for a 3D effect while moving in any direction without any actual 3D models, and was used to simulate 3D graphics on the SNES. Ray casting Ray casting is a first-person pseudo-3D technique in which a ray for every vertical slice of the screen is sent from the position of the camera. These rays shoot out until they hit an object or wall, and that part of the wall is rendered in that vertical screen slice. Due to the limited camera movement and internally 2D playing field, this is often considered 2.5D. 
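A minimal grid ray-caster in the spirit of the description above, casting one ray per vertical screen slice. The map layout, player position, and constants are invented for the example, and a real engine such as Wolfenstein 3D uses a faster grid-stepping (DDA) traversal rather than this naive march:

```python
import math

# Hypothetical 2D grid map for a Wolfenstein 3D-style renderer: '#' = wall.
MAP = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]
SCREEN_W, SCREEN_H = 40, 16
FOV = math.pi / 3  # 60-degree horizontal field of view

def cast_ray(px: float, py: float, angle: float,
             max_dist: float = 10.0, step: float = 0.01) -> float:
    """March along the ray until a wall cell is hit; return the distance."""
    d = 0.0
    while d < max_dist:
        d += step
        x = px + math.cos(angle) * d
        y = py + math.sin(angle) * d
        if MAP[int(y)][int(x)] == "#":
            return d
    return max_dist

def column_heights(px: float = 1.5, py: float = 1.5,
                   heading: float = 0.0) -> list[int]:
    """One ray per vertical screen slice, as the section describes."""
    heights = []
    for col in range(SCREEN_W):
        ray = heading - FOV / 2 + FOV * col / (SCREEN_W - 1)
        d = cast_ray(px, py, ray) * math.cos(ray - heading)  # fisheye fix
        heights.append(min(SCREEN_H, int(SCREEN_H / max(d, 0.1))))
    return heights

print(column_heights())
```

Each entry in the returned list is the height of the wall slice to draw in that screen column, so nearer walls produce taller slices, which is the entire depth illusion of this class of engine.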
Bump, normal and parallax mapping Bump mapping, normal mapping and parallax mapping are techniques applied to textures in 3D rendering applications such as video games to simulate bumps and wrinkles on the surface of an object without using more polygons. To the end user, this means that textures such as stone walls will have more apparent depth and thus greater realism with less of an influence on the performance of the simulation. Bump mapping is achieved by perturbing the surface normals of an object and using a grayscale image and the perturbed normal during illumination calculations. The result is an apparently bumpy surface rather than a perfectly smooth surface, although the surface of the underlying object is not actually changed. Bump mapping was introduced by Blinn in 1978. In normal mapping, the unit vector from the shading point to the light source is dotted with the unit vector normal to that surface, and the dot product is the intensity of the light on that surface. Imagine a polygonal model of a sphere—you can only approximate the shape of the surface. By using a 3-channel bitmapped image textured across the model, more detailed normal vector information can be encoded. Each channel in the bitmap corresponds to a spatial dimension (x, y and z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques. Parallax mapping (also called offset mapping or virtual displacement mapping) is an enhancement of the bump mapping and normal mapping techniques, implemented by displacing the texture coordinates at a point on the rendered polygon by a function of the view angle in tangent space (the angle relative to the surface normal) and the value of the height map at that point. At steeper view-angles, the texture coordinates are displaced more, giving the illusion of depth due to parallax effects as the view changes. Film and animation techniques The term is also used to describe an animation effect commonly used in music videos and, more frequently, title sequences. Brought to wide attention by the motion picture The Kid Stays in the Picture, an adaptation of film producer Robert Evans's memoir, it involves the layering and animating of two-dimensional pictures in three-dimensional space. Earlier examples of this technique include Liz Phair's music video "Down" (directed by Rodney Ascher) and "A Special Tree" (directed by musician Giorgio Moroder). On a larger scale, the 2018 movie In Saturn's Rings used over 7.5 million separate two-dimensional images, captured in space or by telescopes, which were composited and moved using multi-plane animation techniques. Graphic design The term also refers to an often-used effect in the design of icons and graphical user interfaces (GUIs), where a slight 3D illusion is created by the presence of a virtual light source to the left (or in some cases right) side, and above a person's computer monitor. The light source itself is always invisible, but its effects are seen in the lighter colors for the top and left side, simulating reflection, and the darker colors to the right of and below such objects, simulating shadow. An advanced version of this technique can be found in some specialised graphic design software, such as Pixologic's ZBrush. 
The idea is that the program's canvas represents a normal 2D painting surface, but that the data structure that holds the pixel information is also able to store information with respect to a z-index, as well as material settings, specularity, etc. Again, with this data it is thus possible to simulate lighting, shadows, and so forth. History The first video games that used pseudo-3D were primarily arcade games, the earliest known examples dating back to the mid-1970s, when they began using microprocessors. In 1975, Taito released Interceptor, an early first-person shooter and combat flight simulator that involved piloting a jet fighter, using an eight-way joystick to aim with a crosshair and shoot at enemy aircraft that move in formations of two and increase/decrease in size depending on their distance to the player. In 1976, Sega released Moto-Cross, an early black-and-white motorbike racing video game, based on the motocross competition, that was most notable for introducing an early three-dimensional third-person perspective. Later that year, Sega-Gremlin re-branded the game as Fonz, as a tie-in for the popular sitcom Happy Days. Both versions of the game displayed a constantly changing forward-scrolling road and the player's bike in a third-person perspective where objects nearer to the player are larger than those nearer to the horizon, and the aim was to steer the vehicle across the road, racing against the clock, while avoiding any on-coming motorcycles or driving off the road. That same year also saw the release of two arcade games that extended the car driving subgenre into three dimensions with a first-person perspective: Sega's Road Race, which displayed a constantly changing forward-scrolling S-shaped road with two obstacle race cars moving along the road that the player must avoid crashing into while racing against the clock, and Atari's Night Driver, which presented a series of posts by the edge of the road though there was no view of the road or the player's car. Games using vector graphics had an advantage in creating pseudo-3D effects. 1979's Speed Freak recreated the perspective of Night Driver in greater detail. In 1979, Nintendo debuted Radar Scope, a shoot 'em up that introduced a three-dimensional third-person perspective to the genre, imitated years later by shooters such as Konami's Juno First and Activision's Beamrider. In 1980, Atari's Battlezone was a breakthrough for pseudo-3D gaming, recreating a 3D perspective with unprecedented realism, though the gameplay was still planar. It was followed up that same year by Red Baron, which used scaling vector images to create a forward scrolling rail shooter. Sega's arcade shooter Space Tactics, released in 1980, allowed players to take aim using crosshairs and shoot lasers into the screen at enemies coming towards them, creating an early 3D effect. It was followed by other arcade shooters with a first-person perspective during the early 1980s, including Taito's 1981 release Space Seeker, and Sega's Star Trek in 1982. Sega's SubRoc-3D in 1982 also featured a first-person perspective and introduced the use of stereoscopic 3-D through a special eyepiece. Sega's Astron Belt in 1983 was the first laserdisc video game, using full-motion video to display the graphics from a first-person perspective. 
Third-person rail shooters were also released in arcades at the time, including Sega's Tac/Scan in 1982, Nippon's Ambush in 1983, Nichibutsu's Tube Panic in 1983, and Sega's 1982 release Buck Rogers: Planet of Zoom, notable for its fast pseudo-3D scaling and detailed sprites. In 1981, Sega's Turbo was the first racing game to use sprite scaling with full-colour graphics. Pole Position by Namco is one of the first racing games to use the trailing camera effect that is now so familiar. In this particular example, the effect was produced by linescroll—the practice of scrolling each line independently in order to warp an image. In this case, the warping would simulate curves and steering. To make the road appear to move towards the player, per-line color changes were used, though many console versions opted for palette animation instead. Zaxxon, a shooter introduced by Sega in 1982, was the first game to use isometric axonometric projection, from which its name is derived. Though Zaxxon's playing field is semantically 3D, the game has many constraints which classify it as 2.5D: a fixed point of view, scene composition from sprites, and movements such as bullet shots restricted to straight lines along the axes. It was also one of the first video games to display shadows. The following year, Sega released the first pseudo-3D isometric platformer, Congo Bongo. Another early pseudo-3D platform game released that year was Konami's Antarctic Adventure, where the player controls a penguin in a forward-scrolling third-person perspective while having to jump over pits and obstacles. It was one of the earliest pseudo-3D games available on a computer, released for the MSX in 1983. The previous year, Irem's Moon Patrol was a side-scrolling run & gun platform-shooter that introduced the use of layered parallax scrolling to give a pseudo-3D effect. In 1985, Space Harrier introduced Sega's "Super Scaler" technology that allowed pseudo-3D sprite-scaling at high frame rates, with the ability to scale 32,000 sprites and fill a moving landscape with them. The first original home console game to use pseudo-3D, and also the first to use multiple camera angles mirrored on television sports broadcasts, was Intellivision World Series Baseball (1983) by Don Daglow and Eddie Dombrower, published by Mattel. Its television sports style of display was later adopted by 3D sports games and is now used by virtually all major team sports titles. In 1984, Sega ported several pseudo-3D arcade games to the Sega SG-1000 console, including a smooth conversion of the third-person pseudo-3D rail shooter Buck Rogers: Planet of Zoom. By 1989, 2.5D representations were surfaces drawn with depth cues and a part of graphic libraries like GINO. 2.5D was also used in terrain modeling with software packages such as ISM from Dynamic Graphics, GEOPAK from Uniras and the Intergraph DTM system. 2.5D surface techniques gained popularity within the geography community because of their ability to visualize the normal thickness to area ratio used in many geographic models; this ratio was very small and reflected the thinness of the object in relation to its width, which made the object realistic in a specific plane. These representations were axiomatic in that the entire subsurface domain was not used or the entire domain could not be reconstructed; therefore, it used only a surface, and a surface is one aspect, not the full 3D identity. 
The specific term "two-and-a-half-D" was used as early as 1994 by Warren Spector in an interview in the North American premiere issue of PC Gamer magazine. At the time, the term was understood to refer specifically to first-person shooters like Wolfenstein 3D and Doom, to distinguish them from System Shock's "true" 3D engine. With the advent of consoles and computer systems that were able to handle several thousand polygons (the most basic element of 3D computer graphics) per second and the use of specialized 3D graphics processing units, pseudo-3D became obsolete. But even today, there are computer systems in production, such as cellphones, which are often not powerful enough to display true 3D graphics, and therefore use pseudo-3D for that purpose. Many games from the 1980s' pseudo-3D arcade era and 16-bit console era are ported to these systems, allowing the manufacturers to earn revenue from games that are several decades old. The resurgence of 2.5D, or visual analysis, in natural and earth science has increased the role of computer systems in the creation of spatial information in mapping. GVIS has made real the search for unknowns, real-time interaction with spatial data, and control over map display, and has paid particular attention to three-dimensional representations. Efforts in GVIS have attempted to expand higher dimensions and make them more visible; most efforts have focused on "tricking" vision into seeing three dimensions in a 2D plane, much like 2.5D displays where the surface of a three-dimensional object is represented but locations within the solid are distorted or not accessible. Technical aspects and generalizations The reason for using pseudo-3D instead of "real" 3D computer graphics is that the system that has to simulate a 3D-looking graphic is not powerful enough to handle the calculation-intensive routines of 3D computer graphics, yet is capable of using tricks of modifying 2D graphics like bitmaps. One of these tricks is to stretch a bitmap more and more, therefore making it larger with each step, so as to give the effect of an object coming closer and closer towards the player. Even simple shading and size of an image could be considered pseudo-3D, as shading makes it look more realistic. If the light in a 2D game were 2D, it would only be visible on the outline, and because outlines are often dark, they would not be very clearly visible. However, any visible shading would indicate the usage of pseudo-3D lighting and that the image uses pseudo-3D graphics. Changing the size of an image can cause the image to appear to be moving closer or further away, which could be considered simulating a third dimension. Dimensions are the variables of the data and can be mapped to specific locations in space; 2D data can be given 3D volume by adding a value to the x, y, or z plane. "Assigning height to 2D regions of a topographic map", associating every 2D location with a height/elevation value, creates a 2.5D projection; this is not considered a "true 3D representation", but it is used like a 3D visual representation to "simplify visual processing of imagery and the resulting spatial cognition". See also 3D computer graphics Bas-relief Cel-shaded animation Flash animation Head-coupled perspective Isometric graphics in video games Limited animation List of stereoscopic video games Live2D Ray casting Trompe-l'œil Vector graphics References Video game development Video game graphics Dimension
2.5D
Physics
5,047
21,130,805
https://en.wikipedia.org/wiki/Mark%20Z.%20Jacobson
Mark Zachary Jacobson (born 1965) is a professor of civil and environmental engineering at Stanford University and director of its Atmosphere/Energy Program. He is also a co-founder of the non-profit Solutions Project. Overview Jacobson pursued "better understanding air pollution and global warming problems and developing large-scale clean, renewable energy solutions to them". He has developed computer models to study the effects of fossil fuels, biofuels, and biomass burning on air pollution, weather, and climate. With these models, Jacobson examined the impacts of anthropogenic particles (black carbon and brown carbon) on health and climate. He presented such particles as the second-leading cause of global warming after carbon dioxide. Due to their strong health impacts and their short time in the air, he has also hypothesized that reducing their emissions may improve people's health and rapidly slow down global warming. In a 2009 Scientific American paper, Jacobson and Mark Delucchi proposed that the world should move to 100% clean, renewable energy, namely wind, water, and solar power, across all energy sectors. He discussed and promoted the conversion of worldwide energy infrastructure to "100% wind, water, and sunlight (WWS) for all purposes" in many interviews. Jacobson's 2015 study on transitioning the 50 states to WWS was cited as the scientific basis in House Resolution 540 (2015) and in the 2015 New York Senate Bill S5527 on renewable energy. The Green New Deal appears compatible with Jacobson's scholarship. Jacobson's clean energy solutions exclude nuclear power, carbon capture, and bioenergy, prompting pushback by proponents of these technologies in the form of peer-reviewed letters and journal papers. He has published peer-reviewed responses to these critics. A controversy developed in September 2017 when Jacobson sued the journal and one author of a critique for $10M, for defamation. He voluntarily dismissed his lawsuit without prejudice five months later, but was ordered to reimburse defendants more than $500,000 in legal fees. In June 2022, the California Labor Commissioner ordered Stanford University to pay Jacobson's own legal fees and reserved judgment on the remaining fees Jacobson paid. Stanford appealed. Jacobson has built his own net-zero home to run on renewable energy. He was also an expert witness in Held v. Montana, the first climate trial in U.S. history. Research Jacobson has published research on the role of black carbon and other aerosol chemical components on global and regional climates. Jacobson advocates a speedy transition to 100% renewable energy in order to limit climate change, air pollution damage, and energy security issues. Jacobson co-founded the non-profit Solutions Project in 2011 along with Marco Krapels, Mark Ruffalo, and Josh Fox. The Solutions Project was started to combine science, business, and culture in an effort to educate the public and policymakers about the ability of U.S. states and communities to switch to a "100% renewable world". Soot and aerosol Jacobson, as a PhD student at UCLA under Richard P. Turco, began computer model development in 1990 with algorithms for what is now called GATOR-GCMOM (Gas, Aerosol, Transport, Radiation, General Circulation, Mesoscale, and Ocean Model). This model simulates air pollution, weather, and climate from the local to global scale. Zhang (2008, pp.
2901, 2902) calls Jacobson's model "the first fully-coupled online model in the history that accounts for all major feedbacks among major atmospheric processes based on first principles." Several of the individual computer code solvers Jacobson developed for GATOR-GCMOM include the gas and aqueous chemistry ordinary differential equation solvers SMVGEAR and SMVGEAR II, alongside a number of other related modules. The GATOR-GCMOM model has incorporated these processes and has evolved over several decades. One of the most important fields of research that Jacobson has contributed to, with the aid of GATOR-GCMOM, is re-defining the range of values for exactly how much diffuse tropospheric black carbon from fossil fuel, biofuel, and biomass burning affects the climate. Unlike greenhouse gases, black carbon absorbs solar radiation. It then converts the solar energy to heat, which is re-emitted to the atmosphere. Without such absorption, much of the sunlight would potentially reflect back out to space, since it would have struck a more reflective surface. Therefore, as a whole, soot affects the planet's albedo, a measure of reflectance. Greenhouse gases, by contrast, warm the atmosphere by trapping thermal-infrared radiation emitted by the surface of the Earth. Jacobson found that, as soot particles in the air age, they grow larger due to condensation of gases and collision/coalescence with other particles. He further found that when a soot particle obtains such a coating, more sunlight enters the particle, bounces around, and eventually gets absorbed by the black carbon. On a global scale, this may result in roughly twice the heating from black carbon compared with uncoated particles. Upon detailed calculations, he concluded that black carbon may be the second-leading cause of global warming in terms of radiative forcing. Jacobson further found that soot from diesel engines, coal-fired power plants, and burning wood is a "major cause of the rapid melting of the Arctic's sea ice". Jacobson's refinement of the warming impacts of soot and his conclusion that black carbon may be the second-leading cause of global warming in terms of radiative forcing were affirmed in the comprehensive review of Bond et al. (2013). For this body of work, he received the Henry G. Houghton Award from the American Meteorological Society in 2005 and the American Geophysical Union Ascent Award in 2013. Jacobson has also independently modeled and corroborated the work of World Health Organization researchers, who likewise estimate that soot/particulate matter produced from the burning of fossil fuels and biofuels may cause over 1.5 million premature deaths each year from diseases such as respiratory illness, heart disease, and asthma. These deaths occur mostly in the developing world, where wood, animal dung, kerosene, and coal are used for cooking. Because of the short atmospheric lifetime of black carbon, in 2002 Jacobson concluded that controlling soot is the fastest way to begin to control global warming and that it will likewise improve human health. However, he cautioned that controlling carbon dioxide, the leading cause of global warming, was imperative for stopping warming. 100% renewable energy Jacobson has published papers about transitioning to 100% renewable energy systems, including the grid integration of renewable energy. He has concluded that wind, water, and solar (WWS) power can be scaled up in cost-effective ways to fulfill world energy demands in all energy sectors. In 2009, Jacobson and Mark A.
Delucchi published "A Plan to Power 100 Percent of the Planet with Renewables" in Scientific American. The article addressed several issues related to transitioning to 100% WWS, such as the energy required in a 100% electric world, the worldwide spatial footprint of wind farms, the availability of scarce materials needed to manufacture new systems, and the ability to produce reliable energy on demand. Jacobson has updated and expanded this 2009 paper over the years, including a two-part article in the journal Energy Policy in 2010. Jacobson and his colleague estimated that 3.8 million wind turbines of 5-megawatt (MW) size, 49,000 300-MW concentrated solar power plants, 40,000 300-MW solar PV power plants, 1.7 billion 3-kW rooftop PV systems, 5,350 100-MW geothermal power plants, and some 270 new 1,300-MW hydroelectric power plants would be needed, all of which would require approximately 1% of the world's land.
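To put these installation counts in perspective, the following back-of-the-envelope Python sketch totals the nameplate capacity of the mix quoted above. The counts and unit sizes are taken from the text; the totals are our own arithmetic and ignore capacity factors, so they overstate average delivered power.

```python
# Nameplate-capacity check of the quoted installation mix (count, MW each).
mix = {
    "5-MW wind turbines":            (3_800_000,     5.0),
    "300-MW concentrated solar":     (49_000,      300.0),
    "300-MW utility solar PV":       (40_000,      300.0),
    "3-kW rooftop PV systems":       (1_700_000_000, 0.003),
    "100-MW geothermal plants":      (5_350,       100.0),
    "1,300-MW hydroelectric plants": (270,       1_300.0),
}

total_mw = 0.0
for name, (count, mw_each) in mix.items():
    capacity_mw = count * mw_each
    total_mw += capacity_mw
    print(f"{name:31s} {capacity_mw / 1e6:6.2f} TW")
print(f"{'total nameplate capacity':31s} {total_mw / 1e6:6.2f} TW")  # ~51.7 TW
```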
Jacobson and his colleagues then published papers on transitioning three states to 100% renewable/WWS energy by 2050. In 2015, Jacobson was the lead author of two peer-reviewed papers, one of which examined the feasibility of transitioning each of the 50 United States to a 100% energy system powered exclusively by wind, water, and sunlight (WWS), and the other of which proposed one method to solve the grid reliability problem with high shares of intermittent sources. In 2016, the editorial board of PNAS selected the grid integration study of Jacobson and his co-workers as the best paper in the category "Applied Biological, Agricultural, and Environmental Sciences" and awarded him a Cozzarelli Prize. Jacobson has also published papers on transitioning 139 and 145 countries, as well as cities and 74 metropolitan areas, to 100% WWS renewable energy for all purposes. For his work on solving large-scale air pollution and climate problems, Jacobson was awarded the Judi Friedman Lifetime Achievement award in 2018. Jacobson is co-founder of the non-profit The Solutions Project along with Marco Krapels, Mark Ruffalo, and Josh Fox. This organization "helps to educate the public about science-based 100% renewable energy transition roadmaps and facilitate a transition to a 100% renewable world". Decarbonization assessments Jacobson's 100% renewable world approach is supported by publications from at least 17 international research groups that find 100% renewables possible at low cost throughout the world. It is also supported by the Global 100RE Strategy Group, a coalition of 47 scientists supporting 100% renewable energy to solve the climate problem. His work is also consistent with results from a study out of the U.S. National Renewable Energy Laboratory (NREL), which found that a 100% clean, renewable U.S. electricity grid with no combustion turbines might cost ~4.8 ¢/kWh to keep the grid stable. This is less than the cost of electricity from a new natural gas plant. His work is further supported by a 2016 publication by Mark Cooper, who had previously evaluated the economics of nuclear energy at Vermont Law School. In 2016, Cooper published a comparison of the 100% WWS roadmaps of Jacobson with deep decarbonization proposals that included nuclear power and fossil fuels with carbon capture. Cooper concluded that the 100% WWS pathway was the least-cost option and that “Neither fossil fuels with CCS or nuclear power enters the least-cost, low-carbon portfolio.” Earlier publications from 2011 to 2015, which used different methodologies to analyze various strategies for reaching a global zero- or low-carbon economy by circa 2050, reported that a renewables-alone approach would be "orders of magnitude" more expensive and more difficult to achieve than other energy paths that were assessed. The more recent studies, including the NREL study, dispute these claims. Opinions on nuclear energy Jacobson argues that if the United States wants to reduce global warming, air pollution, and energy instability, it should invest only in the best energy options, and that nuclear power is not one of them. To support his claim, Jacobson provided an analysis in 2009 intended to inform policy makers on which energy sources are best for solving the air pollution, climate, and energy security problems the world faces. He updated this analysis in his 2020 textbook. That analysis accounted for some emission sources not included in previous analyses. The primary emissions due to nuclear energy are called “opportunity-cost emissions.” These are the emissions due to the long time lag between planning and operation of a nuclear plant (10 to 19 years) versus a wind or solar farm (2 to 5 years), for example. Of the total estimated emissions from nuclear in the 2009 study (68–180.1 g/kWh), 59–106 g/kWh was due to opportunity-cost emissions. Most of the rest (9–70 g/kWh) was due to lifecycle emissions, and a small amount (0–4.1 g/kWh) was due to the risk of carbon emissions associated with the burning of cities resulting from a nuclear war aided by the expansion of nuclear energy to countries previously without it, and the subsequent development of weapons in those countries. Jacobson raised this last assumption during a 2010 TED talk, Does the world need nuclear energy?, in which he argued the negative side of the debate. Like his PhD advisor Richard P. Turco, who notably coined the phrase "nuclear winter", Jacobson has taken a similar approach to calculating the hypothetical effects of nuclear wars on the climate, but he has further extended this into an analysis, as of 2009, that intends to inform policy makers on which energy sources to support. Jacobson's analyses suggest that "nuclear power results in up to 25 times more carbon emissions per unit energy than wind energy". This analysis is controversial. Jacobson arrived at this conclusion of "25 times more carbon emissions than wind, per unit of energy generated" (68–180.1 g/kWh) by expanding on some concepts that are highly contested. These include, though are not limited to, the suggestion that emissions associated with civil nuclear energy should, in the upper limit, include the risk of carbon emissions associated with the burning of cities resulting from a nuclear war aided by the expansion of nuclear energy and weapons to countries previously without them, an assumption that Jacobson's debating opponent also raised during the same 2010 TED talk. Jacobson assumes, at the high end (180.1 g/kWh), that 4.1 g/kWh are due to some form of nuclear-induced burning that would occur once every 30 years; at the low end, 0 g/kWh are due to nuclear-induced burning.
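The headline range can be checked against its quoted components with a few lines of arithmetic; the figures below are those reported above, and the code is ours, for illustration only.

```python
# Components of Jacobson's 2009 nuclear emissions estimate, g CO2-eq/kWh,
# as (low, high) pairs quoted in the text.
components = {
    "opportunity-cost emissions": (59.0, 106.0),
    "lifecycle emissions":        (9.0,  70.0),
    "nuclear-war burning risk":   (0.0,  4.1),
}

low = sum(lo for lo, _ in components.values())
high = sum(hi for _, hi in components.values())
print(f"total: {low:.0f}-{high:.1f} g/kWh")  # -> total: 68-180.1 g/kWh
```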
Responding to a commentary on his work in the journal Environmental Science and Technology in 2013, James Hansen characterized Jacobson's analysis of greenhouse gas emissions on this topic as "lack(ing) credibility" and similarly regarded Jacobson's notion of extra "opportunity-cost" emissions as "dubious". Hansen's skepticism was grounded in the French experience, in which roughly 80% of the grid was decarbonized in 15 years through the completion of 56 reactors in that period, demonstrating that, given established regulatory certainty and political conditions, nuclear energy facilities have been accelerated through the licensing/planning phase and have rapidly decarbonized electric grids. The Intergovernmental Panel on Climate Change (IPCC) regards the methodology of Yale University's Warner and Heath, used to determine the life-cycle greenhouse-gas emissions of energy sources, as the most credible, reporting that the conceivable range of total life-cycle nuclear power emission figures is between 4 and 110 g/kWh, with the median value of 12 g/kWh deemed the best supported, compared with 11 g/kWh for wind. Jacobson's lifecycle-only figures of 9–70 g/kWh fall within this IPCC range. The IPCC, however, does not factor Jacobson's "opportunity cost" emissions into any energy source, and it has not provided a detailed explanation for excluding them. Aside from the time required for planning, financing, permitting, and constructing a power plant, which can be analyzed for every energy source, the time required, and therefore Jacobson's "opportunity costs", also depends on political factors, for example hypothetical legal cases that can stall construction and other issues that can arise from site-specific NIMBYism. It is the delay/opportunity-cost emissions that account for the bulk of the difference between Jacobson's overall emissions for nuclear of 68–180.1 g/kWh and the IPCC's lifecycle emissions. Although nuclear advocates have balked at the idea of including even a small risk of emissions, at the high end, from a potential nuclear war arising from the spread of nuclear energy, the IPCC has stated that "Barriers to and risks associated with an increasing use of nuclear energy include operational risks and the associated safety concerns, uranium mining risks, financial and regulatory risks, unresolved waste management issues, nuclear weapons proliferation concerns, and adverse public opinion.” In 2012, Jacobson coauthored a paper estimating the health effects of the Fukushima nuclear disaster. The paper projected approximately 180 "cancer-related morbidities" to eventually occur in the public. Health physicist Kathryn Higley of Oregon State University wrote in 2012, "The methods of the study were solid, and the estimates were reasonable, although there is still uncertainty around them. But given how much cancer already exists in the world, it would be very difficult to prove that anyone's cancer was caused by the incident at Fukushima Daiichi." Burton Richter, tenured at Stanford with Jacobson, who analyzed the use of the disputed linear no-threshold (LNT) model in the paper, similarly stated in his critique, "It is a first rate job and uses sources of radioactivity measurements that have not been used before to get a very good picture of the geographic distribution of radiation, a very good idea".
Richter also noted that "I also think there is too much editorializing about accident potential at Diablo Canyon which makes [Jacobson's] paper sound a bit like an anti-nuclear piece instead of the very good analysis that it is," and "It seems clear that considering only the electricity generated by the Fukushima plant, nuclear is much less damaging to health than coal and somewhat better than gas even after including the accident. If nuclear power had never been deployed in Japan the effects on the public would have [been] much worse." Critiques of 100% renewable papers and court controversy Jacobson's renewable energy solutions exclude nuclear power, carbon capture, and bioenergy. This has resulted in pushback by some scientists. In 2017, 21 researchers published a critique of Jacobson's "100% Renewable" paper on the United States. Jacobson and his coauthors published a response to the critical paper and also requested that the journal and authors either correct "false factual claims" of modeling error or retract the article. After both declined, Jacobson filed a lawsuit in 2017 against the Proceedings of the National Academy of Sciences and Christopher Clack, as the principal author of the paper, for defamation. Jacobson's critics described the lawsuit as an attack on free speech and scientific inquiry; Jacobson disagreed with this characterization. Jacobson voluntarily dismissed his lawsuit without prejudice in February 2018, two days after a court hearing on the defendants' special motion to dismiss pursuant to the D.C. Anti-SLAPP (Strategic Lawsuit Against Public Participation) Act. Jacobson explained his dismissal as follows: "It became clear… that it is possible that there could be no end to this case for years." In 2022, Jacobson appealed a trial court order for him to pay $428K in legal fees incurred by defendants in his lawsuit prior to his voluntary dismissal of it. In February 2024, Jacobson lost the appeal and must pay defendants more than $500,000 in legal fees. On June 26, 2022, the California Labor Commissioner ordered Stanford University to pay nearly $70,000 to Jacobson for legal expenses he incurred in the Washington D.C. case and reserved a decision on indemnifying him for his remaining expenses, reasoning that because the critique in question "tarnished Plaintiff's reputation," "defending his reputation" was necessary for his job. Stanford, which had declined to intervene on behalf of Jacobson, has appealed that ruling. Jacobson was also an expert witness on behalf of 16 youth plaintiffs in Held v. Montana, the first climate trial in U.S. history. Jacobson testified that the state could transition to renewable energy. The judge ruled in favor of the youth plaintiffs. Publications Books Jacobson, M. Z., Fundamentals of Atmospheric Modeling, Cambridge University Press, New York, 656 pp., 1999. Jacobson, M. Z., Atmospheric Pollution: History, Science, and Regulation, Cambridge University Press, New York, 399 pp., 2002. Jacobson, M. Z., Fundamentals of Atmospheric Modeling, Second Edition, Cambridge University Press, New York, 813 pp., 2005. Jacobson, M. Z., Air Pollution and Global Warming: History, Science, and Solutions, Cambridge University Press, New York, 2011. Jacobson, M.Z., 100% Clean, Renewable Energy and Storage for Everything, Cambridge University Press, New York, 427 pp., 2020. Jacobson, M.Z., No Miracles Needed: How Today's Technology Can Save Our Climate and Clean Our Air, Cambridge University Press, New York, 454 pp., 2023.
Selected articles See also Amory Lovins Benjamin K. Sovacool Kick The Fossil Fuel Habit Mark Diesendorf Nuclear power debate Renewable energy commercialization Renewable energy debate Stephen Thomas Vaclav Smil References External links Precourt Institute for Energy "Debate: Does the world need nuclear energy?" (TED2010) 1965 births 20th-century American engineers 20th-century American writers 21st-century American engineers 21st-century American writers Environmental engineers Living people Energy engineers Stanford University School of Engineering faculty American environmentalists
Mark Z. Jacobson
Engineering
4,393
27,364,746
https://en.wikipedia.org/wiki/Gemopatrilat
Gemopatrilat (INN) is an experimental drug that was never marketed. It acts as a vasopeptidase inhibitor. It inhibits both angiotensin-converting enzyme (ACE) and neutral endopeptidase (neprilysin). References ACE inhibitors Acetic acids Azepanes Carboxamides Lactams Thiols Abandoned drugs
Gemopatrilat
Chemistry
79
2,338,947
https://en.wikipedia.org/wiki/Abrin
Abrin is an extremely toxic toxalbumin found in the seeds of the rosary pea (or jequirity pea), Abrus precatorius. It has a median lethal dose of 0.7 micrograms per kilogram of body mass when given to mice intravenously, making it approximately 3.86 times more toxic than ricin, whose corresponding median lethal dose is 2.7 micrograms per kilogram. The median toxic dose for humans ranges from 10 to 1000 micrograms per kilogram when ingested and is 3.3 micrograms per kilogram when inhaled. Abrin is a ribosome-inactivating protein like ricin, a toxin which can be found in the seeds of the castor oil plant, and pulchellin, a toxin which can be found in the seeds of Abrus pulchellus. Abrin is classed as a "select agent" under U.S. law. Occurrence Abrin is only formed in nature by the rosary pea. The brightly coloured seeds of this plant contain about 0.08% abrin. The toxin is found within the seeds, and its release is prevented by the seed coat. If the seed coat is injured or destroyed (by chewing, for example) the toxin may be released. Physical properties Abrin is a water-soluble lectin. Abrin in powdered form is yellowish-white. It is a stable substance and can withstand extreme environmental conditions. Though it is combustible, it does not polymerize easily and is not particularly volatile. Biochemistry Chemically, abrin is a mixture of four isotoxins: abrin-a, -b, -c, and -d. Occasionally, the non-toxic hemagglutinin of Abrus precatorius (AAG) is also included as a fifth protein under the collective name 'abrin'. Abrin-a is the most potent of the four isotoxins, is encoded by an intron-free gene, and consists of two subunits or chains, A and B. The primary product of protein biosynthesis, preproabrin, consists of a signal peptide sequence, the amino acid sequences for subunits A and B, and a linker. A molecule of abrin-a has a total of 528 amino acids and is about 65 kDa in mass. Abrin-a is formed after the cleavage of the signal peptide sequence and post-translational modifications such as glycosylation and disulfide bridge formation in the endoplasmic reticulum (ER). The other three abrins, as well as the agglutinin, have a similar structure. In terms of structure, abrin-a is related to the lectin ricin, produced in the seeds of Ricinus communis. Use Abrin is not known to have been weaponised. However, due to its high toxicity and the possibility of being processed into an aerosol, the use of abrin as a biological weapon is possible in principle. Despite this, the rosary pea yields only small quantities of abrin, which reduces the risk. The rosary pea is common to tropical regions, and is occasionally employed as an herbal remedy for certain conditions. While the outer shell of the seed protects its contents from the stomachs of most mammals, the seed coats are occasionally punctured to make beaded jewelry. This can lead to poisoning if a seed is swallowed, or if such jewelry is worn against damaged skin. Abrin has been shown to act as an immunoadjuvant in the treatment of cancer in mice. Toxicology Symptoms of abrin poisoning include diarrhea, vomiting, colic, tachycardia, and tremors. Death usually occurs after a few days due to kidney failure, heart failure, and/or respiratory paralysis. Toxicity Although there is no consensus on the level of lethal dose in humans after oral intake, it is assumed that the intake of 0.1 to 1 microgram per kilogram of body weight, or the consumption of a single seed of the rosary pea, may be fatal, but this information is insufficiently documented.
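As a quick check of the potency comparison given at the start of this article, the following snippet uses the murine intravenous LD50 values quoted above; the arithmetic is ours and is purely illustrative.

```python
# LD50 (intravenous, mice) in micrograms per kilogram; a lower LD50 means
# more toxic, so relative potency is ricin's LD50 divided by abrin's.
abrin_ld50, ricin_ld50 = 0.7, 2.7
print(f"abrin is ~{ricin_ld50 / abrin_ld50:.2f}x more toxic than ricin")  # ~3.86x
```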
According to other estimates, the LD50 value of abrin is between 10 and 1000 μg/kg and is comparable to that of ricin. The severity of the effects of abrin poisoning varies with the means of exposure to the substance (whether inhaled, ingested, or injected). Exposure to abrin on the skin can cause an allergic reaction, indicated by blisters, redness, irritation, and pain; however, there is no evidence of toxicity after skin contact. Abrin is significantly more toxic following intravenous administration. The LD50 values obtained vary between 0.03 and 0.06 μg/kg in rabbits and between 1.25 and 1.3 μg/kg in dogs. In clinical studies involving cancer patients, up to 0.3 μg/kg of intravenous abrin immunotoxin was tolerated without the development of serious symptoms of toxicity. The toxicity of abrin is increased if it is inhaled: in rats, the LD50 for this route of administration is 3.3 μg/kg. Toxicodynamics Abrin resembles ricin in that it also is a type 2 ribosome-inactivating protein (RIP-II) with a similar mode of action, but the effect of abrin is more potent than that of ricin. The toxic effect of abrin is due to an intracellular, multi-step process. Abrin binds to and penetrates the cells of the body, inhibiting cell protein synthesis after being transported to the endoplasmic reticulum (ER). By attaching its non-specifically binding B chain, which acts as a haptomer, to the carbohydrate chain of a glycoprotein on the cell surface, the abrin molecule anchors itself to the cell and is subsequently engulfed. Both specific and nonspecific binding result in the uptake of abrin via endocytosis, as well as the activation of the A chain, caused by the cleavage of the B chain. The activated A chain of abrin, the effectomer, then enters the inner parts of the cell, where it cleaves an adenine (A4324) nucleobase from the 28S rRNA of the large ribosomal subunit of a ribosome on or near the ER, inhibiting the regular process of cellular protein synthesis. Without these proteins, cells cannot survive. This is harmful to the human body and can be fatal even at small exposures. Additionally, abrin may also bind specifically to cells bearing the mannose receptor on their surface; since this receptor is found in a particularly high density on cells of the reticulohistiocytic system, that system in particular is affected by the toxicity of abrin. Toxicokinetics Information dealing with the toxicokinetics of abrin is limited and debated. Due to its biochemical properties and its similarity to ricin, it is believed that abrin is at least partially degraded in the gastrointestinal tract. The size of the molecule also restricts absorption through the gastrointestinal tract. Nevertheless, the numerous deaths caused by consuming rosary pea seeds confirm that enough of the toxin can be absorbed into the systemic circulation via the gastrointestinal tract to cause death. Murine studies show that after injection there is an accumulation of abrin in the liver, kidneys, spleen, blood cells, lungs, and heart. The molecule is excreted via the kidneys after it undergoes proteolytic cleavage. Signs and symptoms of abrin exposure The major symptoms of abrin poisoning depend on the route of exposure and the dose received, though many organs may be affected in severe cases. In general, symptoms can appear anywhere between several hours and several days after exposure.
Initial symptoms of abrin poisoning by inhalation may occur within 8 hours of exposure but a more typical time course is 18–24 hours; they can prove fatal within 36–72 hours. Following ingestion of abrin, initial symptoms usually occur rapidly, but can take up to five days to appear. The later signs and symptoms of exposure are caused by abrin's cytotoxic effects, killing cells in the kidney, liver, adrenal glands, and central nervous system. Inhalation Within a few hours of inhaling abrin, common symptoms include fever, cough, airway irritation, chest tightness, pulmonary edema (excess fluid accumulated in the lungs), and nausea. This makes breathing difficult (called dyspnea), and the skin might turn blue or black in a condition called cyanosis, which is a symptom of hypoxia. Excess fluid in the lungs can be diagnosed by x-ray or by listening to the chest with a stethoscope. As the effects of abrin progress, a person can become diaphoretic (sweating heavily) and fluid can build up further. Their blood pressure may drop dramatically, keeping oxygen from reaching the brain and other vital organs in a condition called shock, and respiratory failure may occur, which can be fatal within 36 to 72 hours. If an exposure to abrin by inhalation is not fatal, the airway can become sensitized or irritated. Ingestion Swallowing any amount of abrin can lead to a slow-burn process of severe symptoms. Early symptoms include nausea, vomiting, pain in the mouth, throat, and esophagus, diarrhea, dysphagia (trouble swallowing), and abdominal cramps and pain. As the symptoms progress, bleeding and inflammation begins in the gastrointestinal tract. The affected person can vomit up blood (hematemesis), have blood in their feces, which creates a black, tarry stool called melena, and more internal bleeding. Loss of blood volume and water from nausea, vomiting, diarrhea, and bleeding causes blood pressure to drop and organ damage to begin, which can be seen as the person begins to have somnolence/drowsiness, hematuria (blood in the urine), stupor, convulsions, polydipsia (excessive thirst), and oliguria (low urine production). This ultimately results in multi-system organ failure, hypovolemic shock, vascular collapse, and death. Absorption Abrin can be absorbed through broken skin or absorbed through the skin if dissolved in certain solvents. It can also be injected in small pellets and absorbed through contact with the eyes. Abrin in the powder or mist form can cause redness and pain in the eyes (i.e. conjunctivitis) in small doses. Small doses absorbed through the eyes can also cause tearing (lacrimation). Higher doses can cause tissue damage, severe bleeding at the back of the eye (retinal hemorrhage), and vision impairment or blindness. A large enough dose can be absorbed into the bloodstream and lead to systemic toxicity. Treatment Because no antidote exists for abrin, the most important factor is avoiding abrin exposure in the first place. If exposure cannot be avoided, the most important factor is then getting the abrin off or out of the body as quickly as possible. Abrin exposure can be prevented when it is present in large quantities by wearing appropriate personal protective equipment. Abrin poisoning is treated with supportive care to minimize the effects of the poisoning. This care varies based on the route of exposure and the time since exposure. For recent ingestion, administration of activated charcoal and gastric lavage are both options. Using an emetic (vomiting agent) is not a useful treatment. 
In cases of eye exposure, flushing the eye with saline helps to remove abrin. Oxygen therapy, airway management, assisted ventilation, monitoring, IV fluid administration, and electrolyte replacement are also important components of treatment. See also European mistletoe Ricin References External links Biological toxin weapons Lectins Legume lectins Plant toxins Ribosome-inactivating proteins
Abrin
Chemistry
2,515
50,407,105
https://en.wikipedia.org/wiki/Pyramidal%20inversion
In chemistry, pyramidal inversion (also umbrella inversion) is a fluxional process in compounds with a pyramidal molecule, such as ammonia (NH3), whereby the molecule "turns inside out". It is a rapid oscillation of the atom and substituents, the molecule or ion passing through a planar transition state. For a compound that would otherwise be chiral due to a stereocenter, pyramidal inversion allows its enantiomers to racemize. The general phenomenon of pyramidal inversion applies to many types of molecules, including carbanions, amines, phosphines, arsines, stibines, and sulfoxides. Energy barrier The identity of the inverting atom has a dominating influence on the barrier. Inversion of ammonia is rapid at room temperature, occurring 30 billion times per second. Three factors contribute to the rapidity of the inversion: a low energy barrier (24.2 kJ/mol; 5.8 kcal/mol), a narrow barrier width (distance between geometries), and the low mass of hydrogen atoms, which combine to give a further 80-fold rate enhancement due to quantum tunnelling. In contrast, phosphine (PH3) inverts very slowly at room temperature (energy barrier: 132 kJ/mol). Consequently, amines of the type RR′R″N usually are not optically stable (their enantiomers racemize rapidly at room temperature), but P-chiral phosphines are. Appropriately substituted sulfonium salts, sulfoxides, arsines, etc. are also optically stable near room temperature. Steric effects can also influence the barrier. Nitrogen inversion Pyramidal inversion in nitrogen and amines is known as nitrogen inversion. It is a rapid oscillation of the nitrogen atom and substituents, the nitrogen "moving" through the plane formed by the substituents (although the substituents also move, in the other direction); the molecule passes through a planar transition state. For a compound that would otherwise be chiral due to a nitrogen stereocenter, nitrogen inversion provides a low-energy pathway for racemization, usually making chiral resolution impossible. Quantum effects In ammonia, inversion proceeds by quantum tunnelling through the narrow barrier rather than by thermal excitation alone. The superposition of the two inversion states leads to energy-level splitting, which is used in ammonia masers. Examples The inversion of ammonia was first detected by microwave spectroscopy in 1934. In one study, the inversion of an aziridine was slowed by a factor of 50 by placing the nitrogen atom in the vicinity of a phenolic alcohol group, compared to the oxidized hydroquinone form. The system interconverts through oxidation by oxygen and reduction by sodium dithionite. Exceptions Conformational strain and structural rigidity can effectively prevent the inversion of amine groups. Tröger's base analogs (including Hünlich's base) are examples of compounds whose nitrogen atoms are chirally stable stereocenters and which therefore have significant optical activity. References Physical chemistry Stereochemistry Organic chemistry
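The contrast between ammonia and phosphine can be made concrete with a rough transition-state-theory estimate. The Python sketch below is our illustration, not taken from the article's sources: it treats each quoted barrier as a free energy of activation in the Eyring equation and applies the roughly 80-fold tunnelling enhancement mentioned above to ammonia.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314           # gas constant, J/(mol*K)
T  = 298.15          # room temperature, K

def eyring_rate(barrier_kj_per_mol: float) -> float:
    """Classical transition-state-theory rate constant (s^-1), assuming the
    quoted barrier can be used directly as the activation free energy."""
    return (KB * T / H) * math.exp(-barrier_kj_per_mol * 1e3 / (R * T))

nh3 = eyring_rate(24.2)   # ammonia barrier from the text
ph3 = eyring_rate(132.0)  # phosphine barrier from the text

print(f"NH3 classical: {nh3:.1e} /s; with ~80x tunnelling: {80 * nh3:.1e} /s")
print(f"PH3 classical: {ph3:.1e} /s")
```

With these inputs the classical ammonia rate is on the order of 10^8 inversions per second, and the 80-fold tunnelling factor brings it to roughly the 30 billion per second quoted above, while the phosphine rate is vanishingly small at room temperature, consistent with the optical stability of P-chiral phosphines.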
Pyramidal inversion
Physics,Chemistry
649
3,694,654
https://en.wikipedia.org/wiki/Gestell
Gestell (or sometimes Ge-stell) is a German word used by twentieth-century German philosopher Martin Heidegger to describe what lies behind or beneath modern technology. Heidegger introduced the term in 1954 in The Question Concerning Technology, a text based on the lecture "The Framework" ("Das Gestell") first presented on December 1, 1949, in Bremen. It was derived from the root word stellen, which means "to put" or "to place", combined with the German prefix Ge-, which denotes a form of "gathering" or "collection". The term encompasses all types of entities and orders them in a certain way. Heidegger's notion of Gestell Heidegger applied the concept of Gestell to his exposition of the essence of technology. He concluded that technology is fundamentally Enframing (Gestell); as such, the essence of technology is Gestell. Indeed, "Gestell, literally 'framing', is an all-encompassing view of technology, not as a means to an end, but rather a mode of human existence". Heidegger further explained that, in a more comprehensive sense, the concept is the final mode of the historical self-concealment of primordial φύσις. In defining the essence of technology as Gestell, Heidegger indicated that all that has come to presence in the world has been enframed. Such enframing pertains to the manner in which reality appears or unveils itself in the period of modern technology, and people born into this "mode of ordering" are always embedded in the Gestell (enframing). Thus what is revealed in the world, what has shown itself as itself (the truth of itself), first required an Enframing, literally a way to exist in the world, to be able to be seen and understood. Concerning the essence of technology and how we see things in our technological age, the world has been framed as the "standing-reserve". Heidegger writes, Enframing means the gathering together of that setting-upon which sets upon man, i.e., challenges him forth, to reveal the real, in the mode of ordering, as standing-reserve. Enframing means that way of revealing which holds sway in the essence of modern technology and which is itself nothing technological. Furthermore, Heidegger uses the word in a way that is uncommon by giving Gestell an active role. In ordinary usage the word would signify simply a display apparatus of some sort, like a book rack or picture frame; but for Heidegger, Gestell is literally a challenging forth, or performative "gathering together", for the purpose of revealing or presentation. If applied to science and modern technology, the "standing reserve" is active in the case of a river once it generates electricity, or the earth if revealed as a coal-mining district, or the soil as a mineral deposit. For some scholars, Gestell effectively explains the violence of technology. This is attributed to Heidegger's explanation that, when Gestell holds sway, "it drives out every other possibility of revealing" and that it "conceals that revealing which, in the sense of poiesis, lets what presences come forth into appearance." Later uses of the concept Giorgio Agamben drew heavily from Heidegger in his interpretation of Foucault's concept of dispositif (apparatus). In his work What is an Apparatus?, he described the apparatus as the "decisive technical term in the strategy of Foucault's thought". Agamben maintained that Gestell is nothing more than what appears as oikonomia. Agamben cited cinema as an apparatus of Gestell since films capture and record the gestures of human beings.
Albert Borgmann expanded Heidegger's concept of Gestell by offering a more practical conceptualization of the essence of technology. Heidegger's enframing became Borgmann's device paradigm, which explains the intimate relationship between people, things, and technological devices. Claudio Ciborra developed another interpretation, which focused on the analysis of information systems infrastructure using the concept of Gestell. Drawing on the etymology of Gestell, he replaced the original "structural" reading with a "processual" one, so that the term indicates the pervasive process of arranging, regulating, and ordering resources, both human and natural. Ciborra likened information infrastructure to Gestell, and this association was used to philosophically ground many aspects of his work, such as his description of its inherent self-feeding process. References Concepts in metaphysics Concepts in the philosophy of science Existentialist concepts Philosophy of technology Martin Heidegger
Gestell
Technology
994
17,507,934
https://en.wikipedia.org/wiki/Budapest%20Declaration%20on%20Machine%20Readable%20Travel%20Documents
The Budapest Declaration on Machine Readable Travel Documents is a declaration issued by the Future of Identity in the Information Society (FIDIS), a Network of Excellence, to raise public awareness of the risks associated with the security architecture for the management of Machine Readable Travel Documents (MRTDs), whose current implementation in European Union passports creates threats related to identity theft and privacy. The declaration was proclaimed in Budapest in September 2006. References International travel documents Passports Biometrics Data security
Budapest Declaration on Machine Readable Travel Documents
Engineering
107
30,367,887
https://en.wikipedia.org/wiki/Time%20travel%20claims%20and%20urban%20legends
There have been multiple accounts of people who allegedly travelled through time reported by the press or circulated online. These reports have turned out either to be hoaxes or to be based on incorrect assumptions, incomplete information, or interpretation of fiction as fact, many now being recognized as urban legends. Alleged time travelers Charlotte Anne Moberly and Eleanor Jourdain In 1911, Charlotte Anne Moberly (1846–1937) and Eleanor Jourdain (1863–1924) published a book entitled An Adventure, under the names of "Elizabeth Morison" and "Frances Lamont". They described a visit to the Petit Trianon, a small château in the grounds of the Palace of Versailles, where they claimed they saw ghosts including Marie Antoinette and others. Their story caused a sensation, and was subject to much ridicule. "Chaplin's Time Traveller" In October 2010, Northern Irish filmmaker George Clarke uploaded a video clip entitled "Chaplin's Time Traveller" to YouTube. The clip analyzes bonus material in a DVD of the Charlie Chaplin film The Circus. Included in the DVD is footage from the film's Los Angeles premiere at Grauman's Chinese Theatre in 1928. At one point, a woman is seen walking by, holding up an object to her ear. Clarke said that, on closer examination, she was talking into a thin, black device that appeared to be a "phone". Clarke concluded that the woman was possibly a time traveller. The clip received millions of hits and was the subject of televised news stories. Nicholas Jackson, associate editor for The Atlantic, says the most likely answer is that she was using a portable hearing aid, a technology that was just being developed at the time. Philip Skroska, an archivist at the Bernard Becker Medical Library of Washington University School of Medicine, thought that the woman might have been holding a rectangular ear trumpet. New York Daily News writer Michael Sheridan said the device was probably an early hearing aid, perhaps an Acousticon manufactured by Miller Reese Hutchison. Present-day hipster at 1941 bridge opening An authentic photograph from 1941 of the re-opening of the South Fork Bridge in Gold Bridge, British Columbia, is sometimes alleged on the internet to show a time traveler. It was claimed that his clothing and sunglasses were of the present day and not of the styles worn in the 1940s, while his camera was anachronistically small. Further research suggested that the present-day appearance of the man would not necessarily have been out of place in 1941. The style of sunglasses he is wearing first appeared in the 1920s. At first glance the man is taken by many to be wearing a printed T-shirt, but on closer inspection it seems to be a sweater with a sewn-on emblem, the kind of clothing often worn by sports teams of the period. The shirt resembles one used by the Montreal Maroons, an ice hockey team of that era. The remainder of his clothing would appear to have been available at the time, though his clothes are far more casual than those worn by the other individuals in the photograph. His camera is smaller than most of that era, but cameras of that size did exist; while it is unclear what make his camera was, Kodak had manufactured portable cameras of equivalent size since 1938. The "Time Traveling Hipster" became a case study in viral Internet phenomena which was presented at the Museums and the Web 2011 conference in Philadelphia.
Mobile device in 1943 An authentic photograph from 1943, showing a scene of holidaymakers on Towan Beach in Newquay, Cornwall, was uploaded to Twitter in November 2018 by multimedia artist Stuart Humphryes; some viewers alleged that it showed a time traveller operating an anachronistic mobile device, such as a phone. The tweet was picked up by news outlets including Fox News in the US and various tabloid newspapers in the UK, such as The Daily Mirror. Fuelled by media websites such as LADbible, it gained global coverage via news outlets in Russia, Iran, Taiwan, Hungary, China, and Vietnam, amongst others. Humphryes, the original uploader, was quoted in these stories as dismissing the time travel theories, stating that the man in question was probably just rolling a cigarette. Rudolph Fentz The story of Rudolph Fentz is an urban legend from the early 1950s that has since been repeated as fact and presented as evidence for the existence of time travel. The essence of the legend is that in New York City in 1951 a man wearing 19th-century clothes was hit by a car. The subsequent investigation revealed that the man had disappeared without a trace in 1876. The items in his possession suggested that the man had traveled through time directly from 1876 to 1951. The folklorist Chris Aubeck investigated the story and found it originated in a science fiction book, A Voice from the Gallery (1953) by Ralph M. Holland, which had copied the tale from "I'm Scared" (1951), a short story by Jack Finney (1911–1995). Mike "Madman" Marcum In 1995, a caller to Art Bell's syndicated radio show Coast to Coast AM named Mike Marcum claimed to have discovered a means of time travel using a Jacob's ladder. In attempting to build a larger version of this device, Marcum admitted that he had stolen several power transformers from the local power company, St. Joseph Light and Power in King City, Missouri; in using them to power his "time machine" he caused a local blackout that brought him to the attention of authorities. Police records confirmed that he had been arrested for this theft, and sentenced to 60 days in jail plus a suspended sentence. He called the radio show again in 1996, stating he was building a second "time machine" from legally acquired parts, and was 30 days from completing the device. He claimed to have sent around 200 items and small animals through this device, and announced he was going to travel through it himself. Marcum then "disappeared" in 1997. His absence led to a number of theories and gave rise to several urban legends. In 2015 Bell interviewed him again on his radio show, Midnight in the Desert, where Marcum claimed that he had been transported two years into the future and 800 miles away, landing near Fairfield, Ohio, but suffered amnesia. While living in a homeless shelter, Marcum slowly remembered his name, his social security number, and other memories that enabled him to re-enter society. However, the Missouri State Highway Patrol has no record of him being reported as a missing person. John Titor Between 2000 and 2001, an online bulletin board user self-identified as John Titor became popular as he claimed to be a time traveler from 2036 on a military mission. Holding the many-worlds interpretation as correct, and consequently every time travel paradox as impossible, he stated that many events which occurred up to his time would indeed occur in this timeline.
These included a devastating civil war in the US in 2008, followed by a short nuclear World War III in 2015 that would "kill three billion people". In the years following his last posts and disappearance in 2001, the non-fulfilment of his specific predictions caused his popularity to decline. Critics have pointed out flaws in Titor's stories, and investigations have suggested his character may be a hoax and the creation of two siblings from Florida. The story has been retold on numerous web sites, in a book, in the Japanese visual novel/anime Steins;Gate, and in a play. He may also have been discussed occasionally on the radio show Coast to Coast AM. In this respect, the Titor story may be unique in terms of broad appeal from an originally limited medium, an Internet discussion board. Bob White/Tim Jones Similar to John Titor, Bob White or Tim Jones sent an unknown number of spam emails between 2001 and 2003. The subject of the emails was always the same: an individual seeking someone who could supply a "Dimensional Warp Generator". In some instances, he claimed to be a time traveler stuck in 2003, and in others he claimed to be seeking the parts only from other time travelers. Several recipients began to respond in kind, claiming to have equipment such as the requested dimensional warp generator. One recipient, Dave Hill, set up an online shop from which the time traveler purchased the warp generator (formerly a hard drive motor), while another Dave charged thousands of dollars for time-travel "courses" before he would sell the requested hardware. The name "Bob White" was taken from an alias that the second Dave used when responding (a reference to the "Bobwhites" of Trixie Belden fame). Soon afterward, the time traveler was identified as professional spammer Robert J. Todino (known as "Robby"). Todino's attempts to travel in time were a serious belief, and while he believed he was "perfectly mentally stable," his father was concerned that those replying to his emails had been preying on Todino's psychological problems. In his book Spam Kings, journalist Brian S. McWilliams, who had originally uncovered Todino's identity for Wired magazine, revealed that Todino had been previously diagnosed with dissociative disorder and schizophrenia, explaining the psychological problems of which his father had spoken. Todino's time traveller was referenced in the song "Rewind" by jazz trio Groovelily on their 2003 album Are we there yet? The song used phrases taken from Todino's emails within its lyrics. Andrew Carlssin Andrew Carlssin was supposedly arrested in March 2003 for SEC violations after making 126 high-risk stock trades and being successful on every one. As reported, Carlssin started with an initial investment of $800 and ended with over $350,000,000, which drew the attention of the SEC. Later reports suggest that after his arrest, he submitted a four-hour confession wherein he claimed to be a time traveler from 200 years in the future. He offered to tell investigators such things as the whereabouts of Osama bin Laden and the cure for AIDS in return for a lesser punishment and to be allowed to return to his time craft, although he refused to tell investigators the location or workings of his craft. A mysterious man posted his bail and Carlssin was scheduled for a court hearing but was never seen again; records show that he never existed. The Carlssin story likely originated as a fictional piece in Weekly World News, a satirical newspaper, and was later repeated by Yahoo!
News, where its fictitious nature became less apparent. It was soon reported by other newspapers and magazines as fact. This in turn drove word-of-mouth spread through email inboxes and internet forums, leading to far more detailed descriptions of events. Håkan Nordkvist A video uploaded in 2006 shows a Swedish man named Håkan Nordkvist claiming that he had been accidentally transported to 2046 while attempting to fix the sink in his kitchen. There, in the future, he claimed to have immediately met his roughly 70-year-old self, with whom he "had a great time". He filmed short footage of the two smiling and hugging each other and showing the tattoo they had on their right arms. The story was a marketing campaign promoting the pension plans of the insurance company AMF. iPhone in an 1860 painting Some online viewers claimed that an 1860 painting by Austrian artist Ferdinand Georg Waldmüller titled The Expected depicted a woman holding and staring down at a mobile phone while strolling along a path in the countryside. However, art experts debunked these claims and stated that the alleged mobile phone the woman was holding in the painting was actually a prayer book. Alleged time-travel technology Die Glocke Die Glocke ("The Bell") is a purported Nazi time machine that was supposedly part of a flying saucer. The Chronovisor Italian Benedictine monk Pellegrino Ernetti claimed to have used a time viewer called a chronovisor, which could film the past without sound, to obtain a photograph of the Crucifixion of Jesus and view scenes from ancient Rome, including a performance of the lost play Thyestes. According to author Paul J. Nahin, a short story by Horace Gold (using the penname Dudley Dell) called "The Biography Project", published in Galaxy Science Fiction magazine, may have influenced Ernetti's claim. According to Guardian writer Mark Pilkington, "Ernetti's glory was shortlived. Another magazine revealed that Christ was a reversed image of a postcard from the Santuario dell'Amore Misericordioso, in the town of Collevalenza. More recently, doubt has been cast on his "transcription" of Thyestes, and an apparent deathbed confession has also surfaced." Iranian time machine In April 2013, the Iranian news agency Fars carried a story claiming a 27-year-old Iranian scientist had invented a time machine that allowed people to see into the future. A few days later, the story was removed and replaced with a story quoting an Iranian government official that no such device had been registered. Philadelphia Experiment and Montauk Project The Philadelphia Experiment is the name given to a naval military experiment which was supposedly carried out at the Philadelphia Naval Shipyard in Philadelphia, Pennsylvania, USA, sometime around 28 October 1943. It is alleged that the U.S. Navy destroyer escort USS Eldridge was to be rendered invisible (or "cloaked") to enemy devices. The experiment is also referred to as Project Rainbow. Some reports allege that the warship travelled back in time for about 10 seconds; however, popular culture has represented far bigger time jumps. The story is widely regarded as a hoax. The U.S. Navy maintains that no such experiment occurred, and details of the story contradict well-established facts about the Eldridge as well as the known laws of physics. The Montauk Project was alleged to be a series of secret United States government projects conducted at Camp Hero or Montauk Air Force Station on Montauk, Long Island, for the purpose of exotic research, including time travel.
Jacques Vallée describes allegations of the Montauk Project as an outgrowth of stories about the Philadelphia Experiment. References Hoaxes in science Time travel Urban legends
Time travel claims and urban legends
Physics
2,922
77,008,302
https://en.wikipedia.org/wiki/Tara%20Pukala
Tara Louise Pukala is an Australian scientist who is a professor of biological chemistry at the University of Adelaide, a board member of Nature Scientific Reports, a Superstar of STEM for 2023–2024, and director of the Adelaide Proteomics Centre. Education and career Pukala was awarded a PhD from the University of Adelaide in 2006 for her thesis "Structural and mechanistic studies of bioactive peptides". She then moved to the University of Cambridge in a post-doctoral role, researching native mass spectrometry. Since 2017, she has been the director of the Adelaide Proteomics Centre, leading a multidisciplinary group of researchers. Pukala is a member of the editorial board for Analytical Chemistry and an associate editor for Frontiers in Chemistry (Medicinal and Pharmaceutical Chemistry). She is also an associate editor of Rapid Communications in Mass Spectrometry, an editor of the European Journal of Mass Spectrometry, and a board member of Nature Scientific Reports. She was also vice-president of the Australian and New Zealand Society for Mass Spectrometry (ANZSMS). Pukala's research sits at the intersection of chemistry and biology, working with proteins, DNA, and other biomolecules. She is interested in visualising the structures, shapes, and ways that various biomolecules interact with each other. This research helps with understanding the medical and health sciences and the mechanistic biological and chemical processes involved. Awards 2017 – Australian and New Zealand Society for Mass Spectrometry Bowie Medal. 2021 – Faculty of Sciences Mid Career Research Excellence Award. 2022 – Superstar of STEM, Science & Technology Australia. Selected publications References External links Superstars of STEM Science Technology Australia Living people Australian women scientists Year of birth missing (living people) University of Adelaide alumni Academics of the University of Cambridge Academic staff of the University of Adelaide Analytical chemists
Tara Pukala
Chemistry
380
76,367,615
https://en.wikipedia.org/wiki/First%20European%20congress%20of%20astronomers
The first European congress of astronomers took place in August 1798 at the Seeberg Observatory. It lasted around ten days. Invitations The Seeberg Observatory, commissioned in 1790 by Franz Xaver von Zach, quickly became a centre of the European astronomical community. Zach corresponded with almost all colleagues in the field, and the observatory he designed was visited often because of its innovative features. At the beginning of 1798, the French astronomer Jérôme Lalande expressed a desire to visit Gotha Observatory, where he hoped to meet the Berlin astronomer Johann Elert Bode. Zach sent invitations to astronomy-related professionals; the meeting was scheduled for early August. Among the invitees were Taddäus Derfflinger (Kremsmünster), Barry (Mannheim), Rüdiger (Leipzig), M. A. David (Prague) and Strnadt. In most cases, these invitations were received positively and supported by the respective sovereigns. However, some feared the influence of revolutionary French ideas. Jurij Vega from Vienna, who was invited by Lalande, was not allowed to travel to Gotha. Johann Hieronymus Schroeter in Lilienthal and Heinrich Wilhelm Olbers in Bremen stayed away on their own initiative because they suspected that the metric system of units was being propagated. Participants These were the participants at the congress: Johann Elert Bode (1747–1826), Berlin, astronomer George Butler (1774–1853), Cambridge, traveller, student Johannes Feer (1763–1823), Zürich, surveyor Ludwig Wilhelm Gilbert (1769–1824), Halle, professor of physics Johann Kaspar Horner (1774–1834), Gotha, assistant to Zach Johann Jakob Huber (1733–1798), Basel, astronomer Georg Simon Klügel (1739–1812), Halle, professor and optician Johann Gottfried Köhler (1745–1800), Dresden, Mathematical-Physical Salon Jérôme Lalande (1732–1807), Paris, astronomer Marie-Jeanne de Lalande (1768–1832), Paris, astronomical calculator Carl Philipp Heinrich Pistor (1778–1847), Berlin, postal secretary and instrument maker Johann Konrad Schaubach (1764–1849), Meiningen, high school director Karl Felix Seyffer (1762–1822), Göttingen, astronomer Johann Heinrich Seyffert (1751–1817), Dresden, finance secretary Johann Friedrich Wurm (1760–1833), Nürtingen, astronomical calculator, priest Franz Xaver von Zach (1754–1832), Gotha, astronomer possibly also Martin van Marum (1750–1837), Haarlem, physician and chemist Proceedings and results Jérôme Lalande arrived at the Seeberg early, on 25 July, together with his niece, the astronomical calculator Marie-Jeanne de Lalande. Most of the other participants followed between the beginning of August and 9 August, when Bode arrived. Wurm and Huber arrived after Bode; Seyffer may have left before 9 August. Zach could accommodate most of the participants in the observatory buildings, but some had to stay at the inn Zur Schelle on Gotha's Hauptmarkt square. On clear evenings, everyone gathered in the Seeberg Observatory for observations and discussions. The scope of the discussions was broad. It was clear from the outset that only closer cooperation could secure the desired successes. Star atlases and the reduction of star positions for aberration and nutation were discussed. Several participants were working on star catalogues and atlases or contributed data. The comparison of instruments brought along, especially chronometers and sextants, was a topic of discussion. An excursion to the Inselsberg on 14 August 1798 provided an opportunity for practical exercises. 
Duchess Charlotte of Saxe-Gotha-Altenburg also participated in this working trip. The benefit of a common, decimal system of units (the metric system) and of a common time (Central European Time) was evident to those present, and they adopted these for their work. Introducing these more widely, beyond science, was politically difficult, as it was seen as a product of the French Revolution. Proposals for new constellations were controversial among astronomers. Lalande and Bode had designed new constellations before and brought new proposals to the congress. Others, including Olbers, opposed new constellations. Astronomical journals were likely also discussed. Although there was already the Berliner Astronomisches Jahrbuch, edited by Bode, this series of publications took too long to make new research results known. Further, comparatively little space was given to descriptive texts. Von Zach started editing the Allgemeine Geographische Ephemeriden the same year, 1798. Not on the agenda were emerging fields like spectroscopy, or William Herschel's work on stellar statistics and the structure of the Milky Way. The social gathering was also not neglected. As the duke's brother, Prince August, reported, Lalande's niece's name day was celebrated with a banquet, dance and small cannon. Johann Jakob Huber, who travelled from Basel, fell ill shortly after his arrival and died unexpectedly on 21 August. His son Daniel Huber, who was a mathematician and, like his father, an astronomer, arrived in Gotha and made the acquaintance of Lalande and other scholars. By the end of August 1798 all participants had left. Aftermath A second congress was held in 1800 in Lilienthal, with six participants who, apart from von Zach, were not present in 1798. This meeting founded the Vereinigte Astronomische Gesellschaft, better known as the Celestial police. Eventually, European countries followed the scientists' lead and adopted their standards for units and time. New constellations met with gradually increasing opposition among astronomers but were abolished only in 1925 by the International Astronomical Union, when a variation of the spherical rectangles of John Herschel, Airy and Baily was implemented. The Astronomische Gesellschaft was founded in 1863 in Heidelberg. On the occasion of the 200th anniversary of the first European congress of astronomers, the Astronomische Gesellschaft held its 1998 spring meeting in Gotha. More than 120 astronomers from 15 countries attended. In honour of the anniversary, the asteroid (8130) Seeberg was named. A globe and a metric ruler, presented by Lalande, are among the memorabilia in the Gotha museum of regional history. References History of astronomy Gotha 1798 in science
First European congress of astronomers
Astronomy
1,321
77,811,592
https://en.wikipedia.org/wiki/Kaabas
Ka'abas, also spelt Ka'bas (Arabic: الكعبات), is the plural term used to describe houses of worship, mainly located in the Arabian Peninsula, that are cubic in shape and resemble the Kaaba structure from Mecca. They are mainly dedicated to various gods from the Arabian pantheon, although the term has been used to describe some Christian churches built in a similar style in the Arabian Peninsula. Architectural style A typical Kaaba building is shaped like a cube or block and functions as a place for the devotees of a particular god or goddess to worship in. The name "Kaaba" was used by ancient Arabians to describe and label these sites because of their resemblance to the Kaaba at Mecca and because pilgrimages were made to them. They were located throughout the Arabian Peninsula, although some appeared in Persia and the region of Mesopotamia. List of historical Kaabas Here is a list of some of these Kaaba structures that are mentioned in the writings of Muslim scholars and historians. Arabian Peninsula Kaaba of Dushara, worshipped by the Nabataeans Kaaba of Dhu-Ghabat, worshipped by the Banu Lihyan tribe Kaaba of al-Lat, worshipped by the Thaqif tribe Kaaba of Dhu al-Khalasa, worshipped by the Daws tribe Kaaba of Najran, worshipped by the inhabitants of Najran before their conversion to Christianity Yemeni Kaaba, a church built by the Aksumite garrison in Yemen to rival the Kaaba of Mecca Mesopotamia Kaaba of Sindad, used by the migrant Arabs as a place for celebrations rather than as a place of worship. Persia Kaaba of Zoroaster, a place of worship for Zoroastrians. It is unlikely to have been a temple, although it did reportedly contain statues of gods that were destroyed by Bardiya, according to inscriptions and texts from the Achaemenid period. Fate of the Kaabas Most of the Kaabas dedicated to pagan gods in the Arabian Peninsula were destroyed after the advent of Islam. Among the destroyed Kaabas is the Kaaba of al-Lat that was worshipped by the Thaqif. Conversion into other places of worship Some said that the Kaaba of Najran in the ancient city of Al-Okhdood became a church after the Aksumites entered Najran to relieve their Christian brethren, who had been persecuted by Dhu Nuwas. The Kaaba of Najran still survives today, although in ruins, and is part of an archaeological site. The traveller Yaqut al-Hamawi mentions that the Kaaba of Dhu al-Khalasa was converted into a mosque. The site of the Kaaba of al-Lat is also now where the Abd Allah ibn al-Abbas Mosque stands. Notes References Architecture Building Temples
Kaabas
Engineering
579
5,768,086
https://en.wikipedia.org/wiki/Semi-deciduous
Semi-deciduous or semi-evergreen is a botanical term which refers to plants that lose their foliage for a very short period, when old leaves fall off and new foliage growth is starting. This phenomenon occurs in tropical and sub-tropical woody species, for example in Dipteryx odorata. Semi-deciduous or semi-evergreen may also describe some trees, bushes or plants that normally only lose part of their foliage in autumn/winter or during the dry season, but might lose all their leaves in a manner similar to deciduous trees in an especially cold autumn/winter or severe dry season (drought). See also Brevideciduous Evergreen Marcescence Hedera References Botany
Semi-deciduous
Biology
138
33,307,786
https://en.wikipedia.org/wiki/Supercapacitor
A supercapacitor (SC), also called an ultracapacitor, is a high-capacity capacitor, with a capacitance value much higher than that of solid-state capacitors but with lower voltage limits. It bridges the gap between electrolytic capacitors and rechargeable batteries. It typically stores 10 to 100 times more energy per unit volume or mass than electrolytic capacitors, can accept and deliver charge much faster than batteries, and tolerates many more charge and discharge cycles than rechargeable batteries. Unlike ordinary capacitors, supercapacitors do not use the conventional solid dielectric, but rather, they use electrostatic double-layer capacitance and electrochemical pseudocapacitance, both of which contribute to the total energy storage of the capacitor. Supercapacitors are used in applications requiring many rapid charge/discharge cycles, rather than long-term compact energy storage: in automobiles, buses, trains, cranes and elevators, where they are used for regenerative braking, short-term energy storage, or burst-mode power delivery. Smaller units are used as power backup for static random-access memory (SRAM). Background The electrochemical charge storage mechanisms in solid media can be roughly (there is an overlap in some systems) classified into three types: Electrostatic double-layer capacitors (EDLCs) use carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance, achieving separation of charge in a Helmholtz double layer at the interface between the surface of a conductive electrode and an electrolyte. The separation of charge is of the order of a few ångströms (0.3–0.8 nm), much smaller than in a conventional capacitor. The electric charge in EDLCs is stored in a two-dimensional interphase (surface) of an electronic conductor (e.g. carbon particle) and ionic conductor (electrolyte solution). Batteries with solid electroactive materials store charge in bulk solid phases by virtue of redox chemical reactions. Electrochemical supercapacitors (ECSCs) fall in between EDLCs and batteries. ECSCs use metal oxide or conducting polymer electrodes with a high amount of electrochemical pseudocapacitance additional to the double-layer capacitance. Pseudocapacitance is achieved by Faradaic electron charge-transfer with redox reactions, intercalation or electrosorption. In solid-state capacitors, the mobile charges are electrons, and the gap between electrodes is a layer of a dielectric. In electrochemical double-layer capacitors, the mobile charges are solvated ions (cations and anions), and the effective thickness is determined on each of the two electrodes by their electrochemical double layer structure. In batteries the charge is stored in the bulk volume of solid phases, which have both electronic and ionic conductivities. In electrochemical supercapacitors, the charge storage mechanisms either combine the double-layer and battery mechanisms, or are based on mechanisms intermediate between true double layer and true battery. History In the early 1950s, General Electric engineers began experimenting with porous carbon electrodes in the design of capacitors, drawing on the design of fuel cells and rechargeable batteries. Activated charcoal is an electrical conductor that is an extremely porous "spongy" form of carbon with a high specific surface area. In 1957 H. Becker developed a "Low voltage electrolytic capacitor with porous carbon electrodes". 
He believed that the energy was stored as a charge in the carbon pores as in the pores of the etched foils of electrolytic capacitors. Because the double layer mechanism was not known by him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." General Electric did not immediately pursue this work. In 1966 researchers at Standard Oil of Ohio (SOHIO) developed another version of the component as "electrical energy storage apparatus", while working on experimental fuel cell designs. The nature of electrochemical energy storage was not described in this patent. Even in 1970, the electrochemical capacitor patented by Donald L. Boos was registered as an electrolytic capacitor with activated carbon electrodes. Early electrochemical capacitors used two aluminum foils covered with activated carbon (the electrodes) that were soaked in an electrolyte and separated by a thin porous insulator. This design gave a capacitor with a capacitance on the order of one farad, significantly higher than electrolytic capacitors of the same dimensions. This basic mechanical design remains the basis of most electrochemical capacitors. SOHIO did not commercialize their invention, licensing the technology to NEC, who finally marketed the results as "supercapacitors" in 1978, to provide backup power for computer memory. Between 1975 and 1980 Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991 he described the difference between "supercapacitor" and "battery" behaviour in electrochemical energy storage. In 1999 he defined the term "supercapacitor" to make reference to the increase in observed capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions. His "supercapacitor" stored electrical charge partially in the Helmholtz double-layer and partially as a result of faradaic reactions with "pseudocapacitance" charge transfer of electrons and protons between electrode and electrolyte. The working mechanisms of pseudocapacitors are redox reactions, intercalation and electrosorption (adsorption onto a surface). With his research, Conway greatly expanded the knowledge of electrochemical capacitors. The market expanded slowly. That changed around 1978 as Panasonic marketed its Goldcaps brand. This product became a successful energy source for memory backup applications. Competition started only years later. In 1987, ELNA's "Dynacap" capacitors entered the market. First-generation EDLCs had relatively high internal resistance that limited the discharge current. They were used for low-current applications such as powering SRAM chips or for data backup. At the end of the 1980s, improved electrode materials increased capacitance values. At the same time, the development of electrolytes with better conductivity lowered the equivalent series resistance (ESR), increasing charge/discharge currents. The first supercapacitor with low internal resistance was developed in 1982 for military applications through the Pinnacle Research Institute (PRI), and was marketed under the brand name "PRI Ultracapacitor". In 1992, Maxwell Laboratories (later Maxwell Technologies) took over this development. Maxwell adopted the term Ultracapacitor from PRI and called them "Boost Caps" to underline their use for power applications. 
Since capacitors' energy content increases with the square of the voltage, researchers were looking for a way to increase the electrolyte's breakdown voltage. In 1994, using the anode of a 200 V high-voltage tantalum electrolytic capacitor, David A. Evans developed an "Electrolytic-Hybrid Electrochemical Capacitor". These capacitors combine features of electrolytic and electrochemical capacitors. They combine the high dielectric strength of an anode from an electrolytic capacitor with the high capacitance of a pseudocapacitive metal oxide (ruthenium (IV) oxide) cathode from an electrochemical capacitor, yielding a hybrid electrochemical capacitor. Evans' capacitors, coined Capattery, had an energy content about a factor of 5 higher than a comparable tantalum electrolytic capacitor of the same size. Their high costs limited them to specific military applications. Recent developments include lithium-ion capacitors. These hybrid capacitors were pioneered by Fujitsu's FDK in 2007. They combine an electrostatic carbon electrode with a pre-doped lithium-ion electrochemical electrode. This combination increases the capacitance value. Additionally, the pre-doping process lowers the anode potential and results in a high cell output voltage, further increasing specific energy. Research departments active in many companies and universities are working to improve characteristics such as specific energy, specific power, and cycle stability and to reduce production costs. Design Basic design Electrochemical capacitors (supercapacitors) consist of two electrodes separated by an ion-permeable membrane (separator), and an electrolyte ionically connecting both electrodes. When the electrodes are polarized by an applied voltage, ions in the electrolyte form electric double layers of opposite polarity to the electrode's polarity. For example, positively polarized electrodes will have a layer of negative ions at the electrode/electrolyte interface along with a charge-balancing layer of positive ions adsorbing onto the negative layer. The opposite is true for the negatively polarized electrode. Additionally, depending on electrode material and surface shape, some ions may permeate the double layer, becoming specifically adsorbed ions, and contribute pseudocapacitance to the total capacitance of the supercapacitor. Capacitance distribution The two electrodes form a series circuit of two individual capacitors C1 and C2. The total capacitance Ctotal is given by the series formula 1/Ctotal = 1/C1 + 1/C2, that is, Ctotal = (C1 · C2) / (C1 + C2). Supercapacitors may have either symmetric or asymmetric electrodes. Symmetry implies that both electrodes have the same capacitance value, yielding a total capacitance of half the value of each single electrode (if C1 = C2, then Ctotal = ½ C1). For asymmetric capacitors, the total capacitance can be taken as that of the electrode with the smaller capacitance (if C1 >> C2, then Ctotal ≈ C2). Storage principles Electrochemical capacitors use the double-layer effect to store electric energy; however, this double-layer has no conventional solid dielectric to separate the charges. There are two storage principles in the electric double-layer of the electrodes that contribute to the total capacitance of an electrochemical capacitor: Double-layer capacitance, electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer. Pseudocapacitance, electrochemical storage of the electrical energy. The original type uses faradaic redox reactions with charge-transfer. 
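The series-capacitance rule and the quadratic dependence of stored energy on voltage lend themselves to a quick numerical check. The following Python sketch is illustrative only; the capacitance and voltage values are assumptions, not data from any cited device:

```python
# Illustrative sketch (assumed values): total capacitance of the two
# electrode capacitances in series, and stored energy E = 1/2 * C * V^2.

def series_capacitance(c1: float, c2: float) -> float:
    """Two capacitances in series: 1/C_total = 1/C1 + 1/C2."""
    return (c1 * c2) / (c1 + c2)

def stored_energy(c: float, v: float) -> float:
    """Energy in joules of capacitance c (farads) charged to v (volts)."""
    return 0.5 * c * v ** 2

# Symmetric cell: two equal 100 F electrodes give half the single-electrode value.
c_total = series_capacitance(100.0, 100.0)
print(c_total)                        # 50.0 F

# Doubling the voltage quadruples the stored energy (E ~ V^2).
print(stored_energy(c_total, 1.35))   # ~45.6 J
print(stored_energy(c_total, 2.7))    # 182.25 J
```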
Both capacitances are only separable by measurement techniques. The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size, although the amount of capacitance of each storage principle can vary extremely. Electrical double-layer capacitance Every electrochemical capacitor has two electrodes, mechanically separated by a separator, which are ionically connected to each other via the electrolyte. The electrolyte is a mixture of positive and negative ions dissolved in a solvent such as water. At each of the two electrode surfaces originates an area in which the liquid electrolyte contacts the conductive metallic surface of the electrode. This interface forms a common boundary between two different phases of matter, such as an insoluble solid electrode surface and an adjacent liquid electrolyte. At this interface the double-layer effect occurs. Applying a voltage to an electrochemical capacitor causes both electrodes in the capacitor to generate electrical double-layers. These double-layers consist of two layers of charges: one electronic layer is in the surface lattice structure of the electrode, and the other, with opposite polarity, emerges from dissolved and solvated ions in the electrolyte. The two layers are separated by a monolayer of solvent molecules, e.g., for water as solvent by water molecules, called the inner Helmholtz plane (IHP). Solvent molecules adhere by physical adsorption on the surface of the electrode and separate the oppositely polarized ions from each other, and can be idealised as a molecular dielectric. In the process, there is no transfer of charge between electrode and electrolyte, so the forces that cause the adhesion are not chemical bonds, but physical forces, e.g., electrostatic forces. The adsorbed molecules are polarized, but, due to the lack of transfer of charge between electrolyte and electrode, suffer no chemical changes. The amount of charge in the electrode is matched by the magnitude of counter-charges in the outer Helmholtz plane (OHP). This double-layer phenomenon stores electrical charges as in a conventional capacitor. The double-layer charge forms a static electric field in the molecular layer of the solvent molecules in the IHP that corresponds to the strength of the applied voltage. The double-layer serves approximately as the dielectric layer in a conventional capacitor, albeit with the thickness of a single molecule. Thus, the standard formula for conventional plate capacitors can be used to calculate their capacitance: C = ε · A / d. Accordingly, capacitance C is greatest in capacitors made from materials with a high permittivity ε, large electrode plate surface areas A and small distance between plates d. As a result, double-layer capacitors have much higher capacitance values than conventional capacitors, arising from the extremely large surface area of activated carbon electrodes and the extremely thin double-layer distance on the order of a few ångströms (0.3–0.8 nm), of order of the Debye length. Assuming that the minimum distance between the electrode and the charge accumulating region cannot be less than the typical distance between negative and positive charges in atoms of ~0.05 nm, a general capacitance upper limit of ~18 μF/cm2 has been predicted for non-faradaic capacitors. The main drawback of carbon electrodes of double-layer SCs is the small value of quantum capacitance, which acts in series with the capacitance of the ionic space charge. 
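To see why these geometric factors produce farad-scale values, one can plug plausible double-layer numbers into the plate-capacitor formula. The permittivity, area and separation below are assumptions chosen only to illustrate the order of magnitude, not measured values:

```python
# Illustrative sketch: parallel-plate estimate C = eps0 * eps_r * A / d
# applied to a Helmholtz double layer. All input values are assumptions.

EPS0 = 8.854e-12  # vacuum permittivity in F/m

def plate_capacitance(eps_r: float, area_m2: float, d_m: float) -> float:
    """Parallel-plate capacitance in farads."""
    return EPS0 * eps_r * area_m2 / d_m

# Assumed: eps_r ~ 6 for strongly oriented solvent molecules in the layer,
# 1000 m^2 of porous-carbon surface, 0.5 nm charge separation.
c = plate_capacitance(6.0, 1000.0, 0.5e-9)
print(f"{c:.0f} F")  # roughly 106 F -- farad scale from a gram-scale electrode
```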
Therefore, a further increase of capacitance density in SCs can be achieved by increasing the quantum capacitance of carbon electrode nanostructures. The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size. The electrostatic storage of energy in the double-layers is linear with respect to the stored charge, and corresponds to the concentration of the adsorbed ions. Also, while charge in conventional capacitors is transferred via electrons, capacitance in double-layer capacitors is related to the limited moving speed of ions in the electrolyte and the resistive porous structure of the electrodes. Since no chemical changes take place within the electrode or electrolyte, charging and discharging electric double-layers is in principle unlimited. Real supercapacitors' lifetimes are only limited by electrolyte evaporation effects. Electrochemical pseudocapacitance Applying a voltage at the electrochemical capacitor terminals moves electrolyte ions to the oppositely polarized electrode and forms a double-layer in which a single layer of solvent molecules acts as separator. Pseudocapacitance can originate when specifically adsorbed ions out of the electrolyte pervade the double-layer. This pseudocapacitance stores electrical energy by means of reversible faradaic redox reactions on the surface of suitable electrodes in an electrochemical capacitor with an electric double-layer. Pseudocapacitance is accompanied by an electron charge-transfer between electrolyte and electrode coming from a de-solvated and adsorbed ion, whereby only one electron per charge unit participates. This faradaic charge transfer originates from a very fast sequence of reversible redox, intercalation or electrosorption processes. The adsorbed ion has no chemical reaction with the atoms of the electrode (no chemical bonds arise) since only a charge transfer takes place. The electrons involved in the faradaic processes are transferred to or from valence electron states (orbitals) of the redox electrode reagent. They enter the negative electrode and flow through the external circuit to the positive electrode where a second double-layer with an equal number of anions has formed. The electrons reaching the positive electrode are not transferred to the anions forming the double-layer; instead, they remain in the strongly ionized and "electron hungry" transition-metal ions of the electrode's surface. As such, the storage capacity of faradaic pseudocapacitance is limited by the finite quantity of reagent in the available surface. A faradaic pseudocapacitance only occurs together with a static double-layer capacitance, and its magnitude may exceed the value of double-layer capacitance for the same surface area by a factor of 100, depending on the nature and the structure of the electrode, because all the pseudocapacitance reactions take place only with de-solvated ions, which are much smaller than solvated ions with their solvating shells. The amount of pseudocapacitance is a linear function within narrow limits, determined by the potential-dependent degree of surface coverage of the adsorbed anions. The ability of electrodes to accomplish pseudocapacitance effects by redox reactions, intercalation or electrosorption strongly depends on the chemical affinity of electrode materials to the ions adsorbed on the electrode surface as well as on the structure and dimension of the electrode pores. 
Materials exhibiting redox behavior for use as electrodes in pseudocapacitors are transition-metal oxides like RuO2, IrO2, or MnO2 inserted by doping in the conductive electrode material such as active carbon, as well as conducting polymers such as polyaniline or derivatives of polythiophene covering the electrode material. The amount of electric charge stored in a pseudocapacitance is linearly proportional to the applied voltage. The unit of pseudocapacitance is the farad, the same as that of capacitance. Although conventional battery-type electrode materials also use chemical reactions to store charge, they show very different electrical profiles, as the rate of discharge is limited by the speed of diffusion. Grinding those materials down to nanoscale frees them of the diffusion limit and gives them a more pseudocapacitive behavior, making them extrinsic pseudocapacitors. Chodankar et al. (2020, figure 2) show representative voltage-capacity curves for bulk LiCoO2, nano LiCoO2, a redox pseudocapacitor (RuO2), and an intercalation pseudocapacitor (T-Nb2O5). Asymmetric capacitors Supercapacitors can also be made with different materials and principles at the electrodes. If both of those materials use a fast, supercapacitor-type reaction (capacitance or pseudocapacitance), the result is called an asymmetric capacitor. The two electrodes have different electric potentials; when combined with proper balancing, the result is improved energy density with no loss of lifespan or current capacity. Hybrid capacitors A number of newer supercapacitors are "hybrid": only one electrode uses a fast reaction (capacitance or pseudocapacitance), the other using a more "battery-like" (slower but higher-capacity) material. For example, an EDLC anode can be combined with an activated carbon–Ni(OH)2 cathode, the latter being a slow faradaic material. The charge/discharge profiles of a hybrid capacitor have a shape between that of a battery and an SC, more similar to that of an SC. Hybrid capacitors have much higher energy density, but have inferior cycle life and current capacity owing to the slower electrode. Potential distribution Conventional capacitors (also known as electrostatic capacitors), such as ceramic capacitors and film capacitors, consist of two electrodes separated by a dielectric material. When charged, the energy is stored in a static electric field that permeates the dielectric between the electrodes. The total energy increases with the amount of stored charge, which in turn correlates linearly with the potential (voltage) between the plates. The maximum potential difference between the plates (the maximal voltage) is limited by the dielectric's breakdown field strength. The same static storage also applies for electrolytic capacitors, in which most of the potential decreases over the anode's thin oxide layer. The somewhat resistive liquid electrolyte (cathode) accounts for a small decrease of potential for "wet" electrolytic capacitors, while for electrolytic capacitors with solid conductive polymer electrolyte this voltage drop is negligible. In contrast, electrochemical capacitors (supercapacitors) consist of two electrodes separated by an ion-permeable membrane (separator) and electrically connected via an electrolyte. Energy storage occurs within the double-layers of both electrodes as a mixture of a double-layer capacitance and pseudocapacitance. 
When both electrodes have approximately the same resistance (internal resistance), the potential of the capacitor decreases symmetrically over both double-layers, whereby a voltage drop across the equivalent series resistance (ESR) of the electrolyte is achieved. For asymmetrical supercapacitors like hybrid capacitors, the voltage drop between the electrodes could be asymmetrical. The maximum potential across the capacitor (the maximal voltage) is limited by the electrolyte decomposition voltage. Both electrostatic and electrochemical energy storage in supercapacitors are linear with respect to the stored charge, just as in conventional capacitors. The voltage between the capacitor terminals is linear with respect to the amount of stored charge. Such a linear voltage gradient differs from rechargeable electrochemical batteries, in which the voltage between the terminals remains largely independent of the amount of stored charge, providing a relatively constant voltage. Comparison with other storage technologies Supercapacitors compete with electrolytic capacitors and rechargeable batteries, especially lithium-ion batteries. The following table compares the major parameters of the three main supercapacitor families with electrolytic capacitors and batteries. Electrolytic capacitors feature nearly unlimited charge/discharge cycles, high dielectric strength (up to 550 V) and good frequency response as alternating current (AC) reactance in the lower frequency range. Supercapacitors can store 10 to 100 times more energy than electrolytic capacitors, but they do not support AC applications. With regards to rechargeable batteries, supercapacitors feature higher peak currents, low cost per cycle, no danger of overcharging, good reversibility, non-corrosive electrolyte and low material toxicity. Batteries offer lower purchase cost and stable voltage under discharge, but require complex electronic control and switching equipment, with consequent energy loss and spark hazard given a short. Styles Supercapacitors are made in different styles, such as flat with a single pair of electrodes, wound in a cylindrical case, or stacked in a rectangular case. Because they cover a broad range of capacitance values, the size of the cases can vary. Supercapacitors are constructed with two metal foils (current collectors), each coated with an electrode material such as activated carbon, which serve as the power connection between the electrode material and the external terminals of the capacitor. Specific to the electrode material is a very large surface area. In this example the activated carbon is electrochemically etched, so that the surface area of the material is about 100,000 times greater than that of a smooth surface. The electrodes are kept apart by an ion-permeable membrane (separator) used as an insulator to protect the electrodes against short circuits. This construction is subsequently rolled or folded into a cylindrical or rectangular shape and can be stacked in an aluminum can or an adaptable rectangular housing. The cell is then impregnated with a liquid or viscous electrolyte of organic or aqueous type. The electrolyte, an ionic conductor, enters the pores of the electrodes and serves as the conductive connection between the electrodes across the separator. Finally, the housing is hermetically sealed to ensure stable behavior over the specified lifetime. 
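Because the terminal voltage described above falls linearly with extracted charge, the energy delivered between two operating voltages follows directly from E = ½CV². A small sketch with assumed voltages, independent of the actual capacitance:

```python
# Illustrative sketch: fraction of a supercapacitor's full-charge energy that
# is delivered when discharging from v_max down to v_min (from E = 1/2 C V^2).

def usable_energy_fraction(v_max: float, v_min: float) -> float:
    """1 - (v_min / v_max)^2; the capacitance value cancels out."""
    return 1.0 - (v_min / v_max) ** 2

# Assumed 2.7 V cell discharged to half its rated voltage:
print(usable_energy_fraction(2.7, 1.35))  # 0.75 -- 75% of the energy is usable
```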
Types Electrical energy is stored in supercapacitors via two storage principles, static double-layer capacitance and electrochemical pseudocapacitance, and the distribution of the two types of capacitance depends on the material and structure of the electrodes. There are three types of supercapacitors based on storage principle: Double-layer capacitors (EDLCs): with activated carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance Pseudocapacitors: with transition metal oxide or conducting polymer electrodes with a high electrochemical pseudocapacitance Hybrid capacitors: with asymmetric electrodes, one of which exhibits mostly electrostatic and the other mostly electrochemical capacitance, such as lithium-ion capacitors Because double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of an electrochemical capacitor, a correct description of these capacitors can only be given under the generic term. The concepts of supercapattery and supercabattery have been recently proposed to better represent those hybrid devices that behave more like the supercapacitor and the rechargeable battery, respectively. The capacitance value of a supercapacitor is determined by two storage principles: Double-layer capacitance – electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer at the interface between the surface of a conductor electrode and an electrolytic solution (electrolyte). The separation of charge distance in a double-layer is on the order of a few ångströms (0.3–0.8 nm) and is static in origin. Pseudocapacitance – electrochemical storage of the electrical energy, achieved by redox reactions, electrosorption or intercalation on the surface of the electrode by specifically adsorbed ions, that results in a reversible faradaic charge-transfer on the electrode. Double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of a supercapacitor. However, the ratio of the two can vary greatly, depending on the design of the electrodes and the composition of the electrolyte. Pseudocapacitance can increase the capacitance value by as much as a factor of ten over that of the double-layer by itself. Electric double-layer capacitors (EDLC) are electrochemical capacitors in which energy storage is achieved predominantly by double-layer capacitance. In the past, all electrochemical capacitors were called "double-layer capacitors". Contemporary usage sees double-layer capacitors, together with pseudocapacitors, as part of a larger family of electrochemical capacitors called supercapacitors. They are also known as ultracapacitors. Materials The properties of supercapacitors come from the interaction of their internal materials. In particular, the combination of electrode material and type of electrolyte determines the functionality and the thermal and electrical characteristics of the capacitors. Electrodes Supercapacitor electrodes are generally thin coatings applied and electrically connected to a conductive, metallic current collector. Electrodes must have good conductivity, high temperature stability, long-term chemical stability (inertness), high corrosion resistance and high surface areas per unit volume and mass. Other requirements include environmental friendliness and low cost. 
The amount of double-layer as well as pseudocapacitance stored per unit voltage in a supercapacitor is predominantly a function of the electrode surface area. Therefore, supercapacitor electrodes are typically made of porous, spongy material with an extraordinarily high specific surface area, such as activated carbon. Additionally, the ability of the electrode material to perform faradaic charge transfers enhances the total capacitance. Generally, the smaller the electrode's pores, the greater the capacitance and specific energy. However, smaller pores increase equivalent series resistance (ESR) and decrease specific power. Applications with high peak currents require larger pores and low internal losses, while applications requiring high specific energy need small pores. Electrodes for EDLCs The most commonly used electrode material for supercapacitors is carbon in various manifestations such as activated carbon (AC), carbon fibre-cloth (AFC), carbide-derived carbon (CDC), carbon aerogel, graphite (graphene), graphane and carbon nanotubes (CNTs). Carbon-based electrodes exhibit predominantly static double-layer capacitance, even though a small amount of pseudocapacitance may also be present depending on the pore size distribution. Pore sizes in carbons typically range from micropores (less than 2 nm) to mesopores (2-50 nm), but only micropores (<2 nm) contribute to pseudocapacitance. As pore size approaches the solvation shell size, solvent molecules are excluded and only unsolvated ions fill the pores (even for large ions), increasing ionic packing density and storage capability by faradaic intercalation. Activated carbon Activated carbon was the first material chosen for EDLC electrodes. Even though its electrical conductivity is approximately 0.003% that of metals (1,250 to 2,000 S/m), it is sufficient for supercapacitors. Activated carbon is an extremely porous form of carbon with a high specific surface area — a common approximation is that 1 gram (0.035 oz) (a pencil-eraser-sized amount) has a surface area of roughly 1,000 to 3,000 m2 — about the size of 4 to 12 tennis courts. The bulk form used in electrodes is low-density with many pores, giving high double-layer capacitance. Solid activated carbon, also termed consolidated amorphous carbon (CAC), is the most used electrode material for supercapacitors and may be cheaper than other carbon derivatives. It is produced from activated carbon powder pressed into the desired shape, forming a block with a wide distribution of pore sizes. An electrode with a surface area of about 1000 m2/g results in a typical double-layer capacitance of about 10 μF/cm2 and a specific capacitance of 100 F/g. Virtually all commercial supercapacitors use powdered activated carbon made from coconut shells. Coconut shells produce activated carbon with more micropores than does charcoal made from wood. Activated carbon fibres Activated carbon fibres (ACF) are produced from activated carbon and have a typical diameter of 10 μm. They can have micropores with a very narrow pore-size distribution that can be readily controlled. ACF woven into a textile offers a very high surface area. Advantages of ACF electrodes include low electrical resistance along the fibre axis and good contact to the collector. As for activated carbon, ACF electrodes exhibit predominantly double-layer capacitance with a small amount of pseudocapacitance due to their micropores. 
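The quoted figures — about 1000 m2/g of surface and about 10 μF/cm2 of double-layer capacitance giving about 100 F/g — are mutually consistent, as this brief unit-conversion sketch shows:

```python
# Illustrative sketch: specific capacitance from specific surface area and
# area-normalized double-layer capacitance, using the figures quoted above.

surface_area_m2_per_g = 1000.0   # activated carbon, per the text
dl_cap_uF_per_cm2 = 10.0         # typical double-layer capacitance

area_cm2_per_g = surface_area_m2_per_g * 1e4          # 1 m^2 = 10^4 cm^2
c_F_per_g = area_cm2_per_g * dl_cap_uF_per_cm2 * 1e-6  # uF -> F
print(c_F_per_g)                 # 100.0 F/g, matching the quoted value
```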
Carbon aerogel Carbon aerogel is a highly porous, synthetic, ultralight material derived from an organic gel in which the liquid component of the gel has been replaced with a gas. Aerogel electrodes are made via pyrolysis of resorcinol-formaldehyde aerogels and are more conductive than most activated carbons. They enable thin and mechanically stable electrodes with a thickness in the range of several hundred micrometres (μm) and with uniform pore size. Aerogel electrodes also provide mechanical and vibration stability for supercapacitors used in high-vibration environments. Researchers have created a carbon aerogel electrode with specific surface areas of about 400–1200 m2/g and volumetric capacitance of 104 F/cm3, yielding high specific energy and specific power. Standard aerogel electrodes exhibit predominantly double-layer capacitance. Aerogel electrodes that incorporate composite material can add a high amount of pseudocapacitance. Carbide-derived carbon Carbide-derived carbon (CDC), also known as tunable nanoporous carbon, is a family of carbon materials derived from carbide precursors, such as binary silicon carbide and titanium carbide, that are transformed into pure carbon via physical (e.g., thermal decomposition) or chemical (e.g., halogenation) processes. Carbide-derived carbons can exhibit high surface area and tunable pore diameters (from micropores to mesopores) to maximize ion confinement, increasing pseudocapacitance by faradaic adsorption treatment. CDC electrodes with tailored pore design offer as much as 75% greater specific energy than conventional activated carbons. One reported CDC supercapacitor offered a specific energy of 10.1 Wh/kg, 3,500 F capacitance and over one million charge-discharge cycles. Graphene Graphene is a one-atom thick sheet of graphite, with atoms arranged in a regular hexagonal pattern, also called "nanocomposite paper". Graphene has a theoretical specific surface area of 2630 m2/g which can theoretically lead to a capacitance of 550 F/g. In addition, an advantage of graphene over activated carbon is its higher electrical conductivity. In one development, graphene sheets were used directly as electrodes without collectors for portable applications. In one embodiment, a graphene-based supercapacitor uses curved graphene sheets that do not stack face-to-face, forming mesopores that are accessible to and wettable by ionic electrolytes at voltages up to 4 V. A specific energy equaling that of a conventional nickel–metal hydride battery is obtained at room temperature, but with 100–1000 times greater specific power. The two-dimensional structure of graphene improves charging and discharging. Charge carriers in vertically oriented sheets can quickly migrate into or out of the deeper structures of the electrode, thus increasing currents. Such capacitors may be suitable for 100/120 Hz filter applications, which are unreachable for supercapacitors using other carbon materials. Carbon nanotubes Carbon nanotubes (CNTs), also called buckytubes, are carbon molecules with a cylindrical nanostructure. They have a hollow structure with walls formed by one-atom-thick sheets of graphite. These sheets are rolled at specific and discrete ("chiral") angles, and the combination of chiral angle and radius controls properties such as electrical conductivity, electrolyte wettability and ion access. Nanotubes are categorized as single-walled nanotubes (SWNTs) or multi-walled nanotubes (MWNTs). 
The latter have one or more outer tubes successively enveloping a SWNT, much like Russian matryoshka dolls. SWNTs have diameters ranging between 1 and 3 nm. MWNTs have thicker coaxial walls, separated by spacing (0.34 nm) that is close to graphene's interlayer distance. Nanotubes can grow vertically on the collector substrate, such as a silicon wafer. Typical lengths are 20 to 100 μm. Carbon nanotubes can greatly improve capacitor performance, due to the highly wettable surface area and high conductivity. A SWNT-based supercapacitor with aqueous electrolyte was systematically studied at the University of Delaware in Prof. Bingqing Wei's group. Li et al., for the first time, discovered that the ion-size effect and the electrode-electrolyte wettability are the dominant factors affecting the electrochemical behavior of flexible SWCNT supercapacitors in different 1 molar aqueous electrolytes with different anions and cations. The experimental results also suggested that, for flexible supercapacitors, applying sufficient pressure between the two electrodes improves the performance of aqueous-electrolyte CNT supercapacitors. CNTs can store about the same charge as activated carbon per unit surface area, but nanotubes' surface is arranged in a regular pattern, providing greater wettability. SWNTs have a high theoretical specific surface area of 1315 m2/g, while that for MWNTs is lower and is determined by the diameter of the tubes and degree of nesting, compared with a surface area of about 3000 m2/g for activated carbons. Nevertheless, CNTs have higher capacitance than activated carbon electrodes, e.g., 102 F/g for MWNTs and 180 F/g for SWNTs. MWNTs have mesopores that allow for easy access of ions at the electrode–electrolyte interface. As the pore size approaches the size of the ion solvation shell, the solvent molecules are partially stripped, resulting in larger ionic packing density and increased faradaic storage capability. However, the considerable volume change during repeated intercalation and depletion decreases their mechanical stability. To this end, research to increase surface area, mechanical strength, electrical conductivity and chemical stability is ongoing. Electrodes for pseudocapacitors MnO2 and RuO2 are typical materials used as electrodes for pseudocapacitors, since they have the electrochemical signature of a capacitive electrode (linear dependence of the current versus voltage curve) as well as exhibiting faradaic behavior. Additionally, the charge storage originates from electron-transfer mechanisms rather than accumulation of ions in the electrochemical double layer. Pseudocapacitance is created through faradaic redox reactions that occur within the active electrode materials. More research has focused on transition-metal oxides such as MnO2, since transition-metal oxides have a lower cost compared to noble metal oxides such as RuO2. Moreover, the charge storage mechanisms of transition-metal oxides are based predominantly on pseudocapacitance. Two mechanisms of MnO2 charge storage behavior have been introduced. The first mechanism implies the intercalation of protons (H+) or alkali metal cations (C+) in the bulk of the material upon reduction, followed by deintercalation upon oxidation: MnO2 + H+ (C+) + e− ⇌ MnOOH(C) The second mechanism is based on the surface adsorption of electrolyte cations on MnO2. 
(MnO2)surface + C+ + e− ⇌ (MnO2− C+)surface Not every material that exhibits faradaic behavior can be used as an electrode for pseudocapacitors, such as Ni(OH)2, since it is a battery-type electrode (non-linear dependence of the current versus voltage curve). Metal oxides Brian Evans Conway's research described electrodes of transition metal oxides that exhibited high amounts of pseudocapacitance. Oxides of transition metals including ruthenium, iridium, iron and manganese, or sulfides such as titanium sulfide, alone or in combination generate strong faradaic electron-transferring reactions combined with low resistance. Ruthenium dioxide in combination with an electrolyte provides a specific capacitance of 720 F/g and a high specific energy of 26.7 Wh/kg. Charge/discharge takes place over a window of about 1.2 V per electrode. This pseudocapacitance of about 720 F/g is roughly 100 times higher than for double-layer capacitance using activated carbon electrodes. These transition metal electrodes offer excellent reversibility, with several hundred-thousand cycles. However, ruthenium is expensive and the 2.4 V voltage window for this capacitor limits its use to military and space applications. Das et al. reported the highest capacitance value (1715 F/g) for a ruthenium-oxide-based supercapacitor with ruthenium oxide electrodeposited onto a porous single-wall carbon nanotube film electrode, closely approaching the predicted theoretical maximum capacitance of 2000 F/g. In 2014, a supercapacitor anchored on a graphene foam electrode delivered specific capacitance of 502.78 F/g and areal capacitance of 1.11 F/cm2, leading to a specific energy of 39.28 Wh/kg and specific power of 128.01 kW/kg over 8,000 cycles with constant performance. The device was a three-dimensional (3D) sub-5 nm hydrous ruthenium-anchored graphene and carbon nanotube (CNT) hybrid foam (RGM) architecture. The graphene foam was conformally covered with hybrid networks of nanoparticles and anchored CNTs. Less expensive oxides of iron, vanadium, nickel and cobalt have been tested in aqueous electrolytes, but none has been investigated as much as manganese dioxide. However, none of these oxides are in commercial use. Conductive polymers Another approach uses electron-conducting polymers as pseudocapacitive material. Although mechanically weak, conductive polymers have high conductivity, resulting in a low ESR and a relatively high capacitance. Such conducting polymers include polyaniline, polythiophene, polypyrrole and polyacetylene. Such electrodes also employ electrochemical doping or dedoping of the polymers with anions and cations. Electrodes made from, or coated with, conductive polymers have costs comparable to carbon electrodes. Conducting polymer electrodes generally suffer from limited cycling stability. However, polyacene electrodes provide up to 10,000 cycles, much better than batteries. Electrodes for hybrid capacitors All commercial hybrid supercapacitors are asymmetric. They combine an electrode with a high amount of pseudocapacitance with an electrode with a high amount of double-layer capacitance. In such systems the faradaic pseudocapacitance electrode with its higher capacitance provides high specific energy, while the non-faradaic EDLC electrode enables high specific power. 
An advantage of hybrid-type supercapacitors compared with symmetrical EDLCs is their higher specific capacitance value as well as their higher rated voltage and, correspondingly, their higher specific energy. Composite electrodes Composite electrodes for hybrid-type supercapacitors are constructed from carbon-based material with incorporated or deposited pseudocapacitive active materials like metal oxides and conducting polymers. Most research for supercapacitors explores composite electrodes. CNTs give a backbone for a homogeneous distribution of metal oxide or electrically conducting polymers (ECPs), producing good pseudocapacitance and good double-layer capacitance. These electrodes achieve higher capacitances than either pure carbon or pure metal oxide or polymer-based electrodes. This is attributed to the accessibility of the nanotubes' tangled mat structure, which allows a uniform coating of pseudocapacitive materials and three-dimensional charge distribution. Pseudocapacitive materials are usually anchored via a hydrothermal process. However, researchers Li et al. from the University of Delaware found a facile and scalable approach to precipitate MnO2 on a SWNT film to make an organic-electrolyte-based supercapacitor. Another way to enhance CNT electrodes is by doping with a pseudocapacitive dopant as in lithium-ion capacitors. In this case the relatively small lithium atoms intercalate between the layers of carbon. The anode is made of lithium-doped carbon, which enables lower negative potential with a cathode made of activated carbon. This results in a larger voltage of 3.8-4 V that prevents electrolyte oxidation. As of 2007 they had achieved a capacitance of 550 F/g and reached a specific energy of up to 14 Wh/kg. Battery-type electrodes Rechargeable battery electrodes influenced the development of electrodes for new hybrid-type supercapacitors such as lithium-ion capacitors. Together with a carbon EDLC electrode in an asymmetric construction, this configuration offers higher specific energy than typical supercapacitors, with higher specific power, longer cycle life and faster charging and recharging times than batteries. Asymmetric electrodes (pseudo/EDLC) Recently some asymmetric hybrid supercapacitors were developed in which the positive electrode was based on a real pseudocapacitive metal oxide electrode (not a composite electrode), and the negative electrode on an EDLC activated carbon electrode. Asymmetric supercapacitors (ASCs) have shown great potential for high-performance supercapacitors, because their wide operating potential window can remarkably enhance the capacitive behavior. An advantage of this type of supercapacitor is its higher voltage and, correspondingly, its higher specific energy (up to 10-20 Wh/kg (36-72 kJ/kg)). They also have good cycling stability. For example, researchers used novel skutterudite Ni–CoP3 nanosheets as positive electrodes with activated carbon (AC) as negative electrodes to fabricate an asymmetric supercapacitor (ASC). It exhibits a high energy density of 89.6 Wh/kg at 796 W/kg and stability of 93% after 10,000 cycles, making it a promising next-generation electrode candidate. Also, carbon nanofibers/poly(3,4-ethylenedioxythiophene)/manganese oxide (f-CNFs/PEDOT/MnO2) were used as positive electrodes and AC as negative electrodes. It has a high specific energy of 49.4 Wh/kg and good cycling stability (81.06% after cycling 8000 times). 
Besides, many kinds of nanocomposites are being studied as electrodes, like NiCo2S4@NiO, MgCo2O4@MnO2 and so on. For example, an Fe-SnO2@CeO2 nanocomposite used as an electrode can provide a specific energy and specific power of 32.2 Wh/kg and 747 W/kg. The device exhibited capacitance retention of 85.05% over 5000 cycles of operation. As far as is known, no commercially offered supercapacitors with this kind of asymmetric electrodes are on the market. Electrolytes Electrolytes consist of a solvent and dissolved chemicals that dissociate into positive cations and negative anions, making the electrolyte electrically conductive. The more ions the electrolyte contains, the better its conductivity. In supercapacitors, electrolytes are the electrically conductive connection between the two electrodes. Additionally, in supercapacitors the electrolyte provides the molecules for the separating monolayer in the Helmholtz double-layer and delivers the ions for pseudocapacitance. The electrolyte determines the capacitor's characteristics: its operating voltage, temperature range, ESR and capacitance. With the same activated carbon electrode, an aqueous electrolyte achieves capacitance values of 160 F/g, while an organic electrolyte achieves only 100 F/g. The electrolyte must be chemically inert and not chemically attack the other materials in the capacitor to ensure long-term stable behavior of the capacitor's electrical parameters. The electrolyte's viscosity must be low enough to wet the porous, sponge-like structure of the electrodes. An ideal electrolyte does not exist, forcing a compromise between performance and other requirements. Water is a relatively good solvent for inorganic chemicals. Treated with acids such as sulfuric acid (H2SO4), alkalis such as potassium hydroxide (KOH), or salts such as quaternary phosphonium salts, sodium perchlorate (NaClO4), lithium perchlorate (LiClO4) or lithium hexafluoroarsenate (LiAsF6), water offers relatively high conductivity values of about 100 to 1000 mS/cm. Aqueous electrolytes have a dissociation voltage of 1.15 V per electrode (2.3 V capacitor voltage) and a relatively low operating temperature range. They are used in supercapacitors with low specific energy and high specific power. Electrolytes with organic solvents such as acetonitrile, propylene carbonate, tetrahydrofuran, diethyl carbonate, γ-butyrolactone and solutions with quaternary ammonium salts or alkyl ammonium salts such as tetraethylammonium tetrafluoroborate or triethylmethylammonium tetrafluoroborate are more expensive than aqueous electrolytes, but they have a higher dissociation voltage of typically 1.35 V per electrode (2.7 V capacitor voltage), and a higher temperature range. The lower electrical conductivity of organic solvents (10 to 60 mS/cm) leads to a lower specific power, but since the specific energy increases with the square of the voltage, a higher specific energy. Ionic electrolytes consist of liquid salts that can be stable in a wider electrochemical window, enabling capacitor voltages above 3.5 V. Ionic electrolytes typically have an ionic conductivity of a few mS/cm, lower than aqueous or organic electrolytes. Separators Separators have to physically separate the two electrodes to prevent a short circuit by direct contact. A separator can be very thin (a few hundredths of a millimeter) and must be very porous to the conducting ions to minimize ESR. Furthermore, separators must be chemically inert to protect the electrolyte's stability and conductivity. Inexpensive components use open capacitor papers. 
More sophisticated designs use nonwoven porous polymeric films like polyacrylonitrile or Kapton, woven glass fibers or porous woven ceramic fibres. Collectors and housing Current collectors connect the electrodes to the capacitor's terminals. The collector is either sprayed onto the electrode or is a metal foil. They must be able to distribute peak currents of up to 100 A. If the housing is made out of a metal (typically aluminum), the collectors should be made from the same material to avoid forming a corrosive galvanic cell. Electrical parameters Capacitance Capacitance values for commercial capacitors are specified as "rated capacitance CR". This is the value for which the capacitor has been designed. The value for an actual component must be within the limits given by the specified tolerance. Typical values are in the range of farads (F), three to six orders of magnitude larger than those of electrolytic capacitors. The capacitance value results from the energy W (expressed in joules) of a capacitor charged to a DC voltage VDC, via C = 2 • W / VDC². This value is also called the "DC capacitance". Measurement Conventional capacitors are normally measured with a small AC voltage (0.5 V) and a frequency of 100 Hz or 1 kHz, depending on the capacitor type. The AC capacitance measurement offers fast results, important for industrial production lines. The capacitance value of a supercapacitor depends strongly on the measurement frequency, which is related to the porous electrode structure and the electrolyte's limited ion mobility. Even at a low frequency of 10 Hz, the measured capacitance value drops to only 20 percent of the DC capacitance value. This extraordinarily strong frequency dependence can be explained by the different distances the ions have to move in the electrode's pores. The area at the beginning of the pores can easily be accessed by the ions; this short distance is accompanied by low electrical resistance. The greater the distance the ions have to cover, the higher the resistance. This phenomenon can be described with a series circuit of cascaded RC (resistor/capacitor) elements with serial RC time constants. These result in delayed current flow, reducing the total electrode surface area that can be covered with ions if polarity changes – capacitance decreases with increasing AC frequency. Thus, the total capacitance is achieved only after longer measuring times. Because of this very strong frequency dependence of the capacitance, this electrical parameter has to be measured with a special constant-current charge and discharge measurement, defined in IEC standards 62391-1 and -2. Measurement starts with charging the capacitor: the voltage is applied and, after the constant-current/constant-voltage power supply has reached the rated voltage, the capacitor is charged for 30 minutes. Next, the capacitor has to be discharged with a constant discharge current Idischarge. Then the times t1 and t2, at which the voltage has dropped to 80% (V1) and 40% (V2) of the rated voltage, are measured. The capacitance value is calculated as: C = Idischarge • (t2 − t1) / (V1 − V2). The value of the discharge current is determined by the application. The IEC standard defines four classes: memory backup, discharge current in mA = 1 • C (F); energy storage, discharge current in mA = 0.4 • C (F) • V (V); power, discharge current in mA = 4 • C (F) • V (V); instantaneous power, discharge current in mA = 40 • C (F) • V (V). The measurement methods employed by individual manufacturers are mainly comparable to the standardized methods.
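As an illustration of the constant-current evaluation just described, the following minimal Python sketch applies the class factors and the capacitance formula above (function names and the example figures are ours, not from the standard):

```python
# Sketch of the IEC 62391-1 style capacitance evaluation described above:
# discharge at constant current and time the drop from 80% to 40% of the
# rated voltage, then C = I * (t2 - t1) / (V1 - V2).

def discharge_current_ma(iec_class, c_farad, v_rated=None):
    """Class-dependent test current in mA, per the four classes above."""
    if iec_class == "memory_backup":
        return 1 * c_farad
    if iec_class == "energy_storage":
        return 0.4 * c_farad * v_rated
    if iec_class == "power":
        return 4 * c_farad * v_rated
    if iec_class == "instantaneous_power":
        return 40 * c_farad * v_rated
    raise ValueError(iec_class)

def capacitance_from_discharge(i_discharge_a, t1_s, t2_s, v_rated):
    """DC capacitance from the timed 80% -> 40% voltage drop."""
    v1, v2 = 0.8 * v_rated, 0.4 * v_rated
    return i_discharge_a * (t2_s - t1_s) / (v1 - v2)

print(discharge_current_ma("power", 100, 2.7))       # 1080 mA for a 100 F, 2.7 V part
# Example: discharged at 1 A, taking 108 s to fall from 2.16 V to 1.08 V -> 100 F.
print(capacitance_from_discharge(1.0, 0.0, 108.0, 2.7))
```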
The standardized measuring method is too time-consuming for manufacturers to use during production for each individual component. For industrially produced capacitors, the capacitance value is instead measured with a faster, low-frequency AC voltage, and a correlation factor is used to compute the rated capacitance. This frequency dependence affects capacitor operation. Rapid charge and discharge cycles mean that neither the rated capacitance value nor the rated specific energy is available. In this case the rated capacitance value is recalculated for each application condition. The time t for which a supercapacitor can deliver a constant current I can be calculated as: t = C • (Ucharge − Umin) / I, as the capacitor voltage decreases from Ucharge down to Umin. If the application needs a constant power P for a certain time t, this can be calculated as: t = C • (Ucharge² − Umin²) / (2 • P), wherein the capacitor voltage likewise decreases from Ucharge down to Umin. Operating voltage Supercapacitors are low-voltage components. Safe operation requires that the voltage remain within specified limits. The rated voltage UR is the maximum DC voltage or peak pulse voltage that may be applied continuously and remain within the specified temperature range. Capacitors should never be subjected to voltages continuously in excess of the rated voltage. The rated voltage includes a safety margin against the electrolyte's breakdown voltage, at which the electrolyte decomposes. The breakdown voltage decomposes the separating solvent molecules in the Helmholtz double-layer: water, for example, splits into hydrogen and oxygen. The solvent molecules then cannot separate the electrical charges from each other. Voltages higher than the rated voltage cause hydrogen gas formation or a short circuit. Standard supercapacitors with aqueous electrolyte are normally specified with a rated voltage of 2.1 to 2.3 V and capacitors with organic solvents with 2.5 to 2.7 V. Lithium-ion capacitors with doped electrodes may reach a rated voltage of 3.8 to 4 V, but have a lower voltage limit of about 2.2 V. Supercapacitors with ionic electrolytes can exceed an operating voltage of 3.5 V. Operating supercapacitors below the rated voltage improves the long-time behavior of the electrical parameters: capacitance values and internal resistance during cycling are more stable, and lifetime and the number of charge/discharge cycles may be extended. Higher application voltages require connecting cells in series. Since each component has a slight difference in capacitance value and ESR, it is necessary to actively or passively balance them to stabilize the applied voltage. Passive balancing employs resistors in parallel with the supercapacitors. Active balancing may include electronic voltage management above a threshold that varies the current. Internal resistance Charging/discharging a supercapacitor is connected to the movement of charge carriers (ions) in the electrolyte across the separator to the electrodes and into their porous structure. Losses occur during this movement that can be measured as the internal DC resistance. With the electrical model of cascaded, series-connected RC (resistor/capacitor) elements in the electrode pores, the internal resistance increases with the increasing penetration depth of the charge carriers into the pores. The internal DC resistance is time-dependent and increases during charge/discharge. In applications, often only the switch-on and switch-off range is of interest. The internal resistance Ri can be calculated from the voltage drop ΔV2 at the time of discharge, starting with a constant discharge current Idischarge.
ΔV2 is obtained from the intersection of the auxiliary line extended from the straight part of the discharge curve with the time base at the start of discharge. Resistance can be calculated by: Ri = ΔV2 / Idischarge. The discharge current Idischarge for the measurement of internal resistance can be taken from the classification according to IEC 62391-1. This internal DC resistance Ri should not be confused with the internal AC resistance, called equivalent series resistance (ESR), normally specified for capacitors. It is measured at 1 kHz. ESR is much smaller than DC resistance. ESR is not relevant for calculating supercapacitor inrush currents or other peak currents. Ri determines several supercapacitor properties. It limits the charge and discharge peak currents as well as charge/discharge times. Ri and the capacitance C result in the time constant: τ = Ri • C. This time constant determines the charge/discharge time. A 100 F capacitor with an internal resistance of 30 mΩ, for example, has a time constant of 0.03 • 100 = 3 s. After 3 seconds of charging with a current limited only by internal resistance, the capacitor has 63.2% of full charge (or is discharged to 36.8% of full charge). Standard capacitors with constant internal resistance fully charge during about 5 τ. Since internal resistance increases with charge/discharge, actual times cannot be calculated with this formula. Thus, charge/discharge time depends on specific individual construction details. Current load and cycle stability Because supercapacitors operate without forming chemical bonds, current loads, including charge, discharge and peak currents, are not limited by reaction constraints. Current load and cycle stability can be much higher than for rechargeable batteries. Current loads are limited only by internal resistance, which may be substantially lower than for batteries. Internal resistance "Ri" and charge/discharge currents or peak currents "I" generate internal heat losses "Ploss" according to: Ploss = Ri • I². This heat must be released and distributed to the ambient environment to maintain operating temperatures below the specified maximum temperature. Heat generally defines capacitor lifetime due to electrolyte diffusion. The temperature rise caused by current loads should be smaller than 5 to 10 K at maximum ambient temperature (so that it has only minor influence on expected lifetime). For that reason, the specified charge and discharge currents for frequent cycling are determined by the internal resistance. The specified cycle parameters under maximal conditions include charge and discharge current, pulse duration and frequency. They are specified for a defined temperature range and over the full voltage range for a defined lifetime. They can differ enormously depending on the combination of electrode porosity, pore size and electrolyte. Generally a lower current load increases capacitor life and increases the number of cycles. This can be achieved either by a lower voltage range or by slower charging and discharging. Supercapacitors (except those with polymer electrodes) can potentially support more than one million charge/discharge cycles without substantial capacity drops or internal resistance increases. Alongside the higher current load, this is the second great advantage of supercapacitors over batteries. The stability results from the dual electrostatic and electrochemical storage principles. The specified charge and discharge currents can be significantly exceeded by lowering the frequency or by single pulses.
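The relations above (time constant, heat loss, and the run-time formulas from the previous section) can be evaluated directly. A minimal Python sketch, with names and example figures of our choosing:

```python
# Sketch of the quantities discussed above: RC time constant, resistive
# heat loss, and run time under constant current or constant power.
# Formulas as given in the text; the names are ours.

def time_constant_s(r_i_ohm, c_farad):
    return r_i_ohm * c_farad              # tau = Ri * C

def heat_loss_w(r_i_ohm, i_a):
    return r_i_ohm * i_a ** 2             # Ploss = Ri * I^2

def runtime_constant_current_s(c_farad, u_charge, u_min, i_a):
    return c_farad * (u_charge - u_min) / i_a

def runtime_constant_power_s(c_farad, u_charge, u_min, p_w):
    # From the usable energy 0.5 * C * (U1^2 - U2^2) = P * t
    return c_farad * (u_charge ** 2 - u_min ** 2) / (2 * p_w)

print(time_constant_s(0.030, 100))                     # 3.0 s, the example above
print(heat_loss_w(0.030, 50))                          # 75 W at a 50 A load
print(runtime_constant_current_s(100, 2.7, 1.35, 10))  # 13.5 s at 10 A
print(runtime_constant_power_s(100, 2.7, 1.35, 50))    # ~5.5 s at 50 W
```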
Heat generated by a single pulse may be spread over the time until the next pulse occurs, to ensure a relatively small average heat increase. Such a "peak power current" for power applications of supercapacitors of more than 1000 F can provide a maximum peak current of about 1000 A. Such high currents generate high thermal stress and high electromagnetic forces that can damage the electrode-collector connection, requiring robust design and construction of the capacitors. Device capacitance and resistance dependence on operating voltage and temperature Device parameters such as capacitance, initial resistance and steady-state resistance are not constant, but are variable and dependent on the device's operating voltage. Device capacitance will have a measurable increase as the operating voltage increases. For example, a 100 F device can be seen to vary 26% from its maximum capacitance over its entire operational voltage range. Similar dependence on operating voltage is seen in steady-state resistance (Rss) and initial resistance (Ri). Device properties can also be seen to be dependent on device temperature. As the temperature of the device changes, whether through operation or through varying ambient temperature, the internal properties such as capacitance and resistance will vary as well. Device capacitance is seen to increase as the operating temperature increases. Energy capacity Supercapacitors occupy the gap between high power/low energy electrolytic capacitors and low power/high energy rechargeable batteries. The energy Wmax (expressed in joules) that can be stored in a capacitor is given by the formula: Wmax = 1/2 • C • Vmax². This formula describes the amount of energy stored and is often used to describe new research successes. However, only part of the stored energy is available to applications, because the voltage drop and the time constant over the internal resistance mean that some of the stored charge is inaccessible. The effectively realized amount of energy Weff is reduced by the used voltage difference between Vmax and Vmin and can be represented as: Weff = 1/2 • C • (Vmax² − Vmin²). This formula also represents the energy of asymmetric-voltage components such as lithium-ion capacitors. Specific energy and specific power The amount of energy that can be stored in a capacitor per mass of that capacitor is called its specific energy. Specific energy is measured gravimetrically (per unit of mass) in watt-hours per kilogram (Wh/kg). The amount of energy that can be stored in a capacitor per volume of that capacitor is called its energy density (also called volumetric specific energy in some literature). Energy density is measured volumetrically (per unit of volume) in watt-hours per litre (Wh/L). Units of litres and dm3 can be used interchangeably. Commercial energy densities vary widely, but in general range from around 5 to 8 Wh/L. In comparison, petrol fuel has an energy density of 32.4 MJ/L or 9000 Wh/L. Commercial specific energies range from around 0.5 to 15 Wh/kg. For comparison, an aluminum electrolytic capacitor typically stores 0.01 to 0.3 Wh/kg, while a conventional lead–acid battery stores typically 30 to 40 Wh/kg and modern lithium-ion batteries 100 to 265 Wh/kg. Supercapacitors can therefore store 10 to 100 times more energy than electrolytic capacitors, but only one tenth as much as batteries. For reference, petrol fuel has a specific energy of 44.4 MJ/kg or 12,300 Wh/kg. Although the specific energy of supercapacitors compares unfavorably with batteries, capacitors have the important advantage of specific power.
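A short Python sketch of the energy relations above, together with the IEC effective-power definition quoted in the next paragraph (Peff = 1/8 • V²/Ri); names and example figures are ours:

```python
# Sketch of the stored-energy relations above, plus the IEC effective
# power definition given in the following paragraph.

def w_max_j(c_farad, v_max):
    return 0.5 * c_farad * v_max ** 2            # Wmax = 1/2 * C * Vmax^2

def w_eff_j(c_farad, v_max, v_min):
    return 0.5 * c_farad * (v_max ** 2 - v_min ** 2)

def p_eff_w(v, r_i_ohm):
    return v ** 2 / (8 * r_i_ohm)                # Peff = 1/8 * V^2 / Ri

# 100 F cell: an organic electrolyte (2.7 V) stores ~38% more energy than
# an aqueous one (2.3 V) at equal capacitance, since energy scales with V^2.
print(w_max_j(100, 2.7) / w_max_j(100, 2.3))     # ~1.38
# Usable energy when discharging from 2.7 V to half voltage: 75% of Wmax.
print(w_eff_j(100, 2.7, 1.35) / w_max_j(100, 2.7))  # 0.75
print(p_eff_w(2.7, 0.030))                       # ~30 W for a 30 mOhm cell
```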
Specific power describes the speed at which energy can be delivered to the load (or, in charging the device, absorbed from the generator). The maximum power Pmax specifies the power of a theoretical rectangular single maximum current peak of a given voltage: Pmax = 1/4 • V² / Ri. In real circuits the current peak is not rectangular and the voltage is smaller, caused by the voltage drop, so IEC 62391-2 established a more realistic effective power Peff for supercapacitors for power applications, which is half the maximum: Peff = 1/8 • V² / Ri, with V = voltage applied and Ri the internal DC resistance of the capacitor. Just like specific energy, specific power is measured either gravimetrically in kilowatts per kilogram (kW/kg, specific power) or volumetrically in kilowatts per litre (kW/L, power density). Supercapacitor specific power is typically 10 to 100 times greater than for batteries and can reach values up to 15 kW/kg. Ragone charts relate energy to power and are a valuable tool for characterizing and visualizing energy storage components. In such a diagram, the positions of different storage technologies in terms of specific power and specific energy are easy to compare. Lifetime Since supercapacitors do not rely on chemical changes in the electrodes (except for those with polymer electrodes), lifetimes depend mostly on the rate of evaporation of the liquid electrolyte. This evaporation is generally a function of temperature, current load, current cycle frequency and voltage. Current load and cycle frequency generate internal heat, so that the evaporation-determining temperature is the sum of ambient and internal heat. This temperature is measurable as the core temperature in the center of a capacitor body. The higher the core temperature, the faster the evaporation, and the shorter the lifetime. Evaporation generally results in decreasing capacitance and increasing internal resistance. According to IEC/EN 62391-2, capacitance reductions of over 30%, or internal resistance exceeding four times its data sheet specification, are considered "wear-out failures", implying that the component has reached end-of-life. The capacitors remain operable, but with reduced capabilities. Whether the deviation of the parameters has any influence on proper functionality depends on the application of the capacitors. Such large changes of electrical parameters as specified in IEC/EN 62391-2 are usually unacceptable for high-current-load applications. Components that support high current loads use much smaller limits, e.g., 20% loss of capacitance or double the internal resistance. The narrower definition is important for such applications, since heat increases linearly with increasing internal resistance, and the maximum temperature should not be exceeded. Temperatures higher than specified can destroy the capacitor. The real application lifetime of supercapacitors, also called "service life", "life expectancy" or "load life", can reach 10 to 15 years or more at room temperature. Such long periods cannot be tested by manufacturers. Hence, they specify the expected capacitor lifetime at the maximum temperature and voltage conditions. The results are specified in datasheets using the notation "tested time (hours)/max. temperature (°C)", such as "5000 h/65 °C". With this value, and expressions derived from historical data, lifetimes can be estimated for lower temperature conditions.
Datasheet lifetime specifications are tested by the manufacturers using an accelerated aging test called an "endurance test", with maximum temperature and voltage over a specified time. For a "zero defect" product policy, no wear-out or total failure may occur during this test. The lifetime specification from datasheets can be used to estimate the expected lifetime for a given design. The "10-degrees rule" used for electrolytic capacitors with non-solid electrolyte is used in those estimations and can be used for supercapacitors. This rule employs the Arrhenius equation, a simple formula for the temperature dependence of reaction rates: for every 10 °C reduction in operating temperature, the estimated life doubles, i.e. Lx = L0 • 2^((T0 − Tx)/10), with Lx = estimated lifetime, L0 = specified lifetime, T0 = upper specified capacitor temperature and Tx = actual operating temperature of the capacitor cell. Calculated with this formula, capacitors specified with 5000 h at 65 °C have an estimated lifetime of 20,000 h at 45 °C. Lifetimes are also dependent on the operating voltage, because the development of gas in the liquid electrolyte depends on the voltage. The lower the voltage, the smaller the gas development, and the longer the lifetime. No general formula relates voltage to lifetime; published voltage-dependence curves are empirical results from individual manufacturers. Life expectancy for power applications may also be limited by current load or number of cycles. This limitation has to be specified by the relevant manufacturer and is strongly type-dependent. Self-discharge Storing electrical energy in the double-layer separates the charge carriers within the pores by distances in the range of molecules. Irregularities can occur over this short distance, leading to a small exchange of charge carriers and gradual discharge. This self-discharge is called leakage current. Leakage depends on capacitance, voltage, temperature, and the chemical stability of the electrode/electrolyte combination. At room temperature, leakage is so low that it is specified as time to self-discharge in hours, days, or weeks. As an example, a 5.5 V/F Panasonic "Goldcapacitor" specifies a voltage drop at 20 °C from 5.5 V to 3 V in 600 hours (25 days or 3.6 weeks) for a double-cell capacitor. Post-charge voltage relaxation It has been noticed that after an EDLC experiences a charge or discharge, the voltage will drift over time, relaxing toward its previous voltage level. The observed relaxation can occur over several hours and is likely due to the long diffusion time constants of the porous electrodes within the EDLC. Polarity Since the positive and negative electrodes (or simply positrode and negatrode, respectively) of symmetric supercapacitors consist of the same material, supercapacitors theoretically have no true polarity and catastrophic failure does not normally occur. However, reverse-charging a supercapacitor lowers its capacity, so it is recommended practice to maintain the polarity resulting from the formation of the electrodes during production. Asymmetric supercapacitors are inherently polar. Pseudocapacitors and hybrid supercapacitors, which have electrochemical charge properties, may not be operated with reverse polarity, precluding their use in AC operation. However, this limitation does not apply to EDLC supercapacitors. A bar in the insulating sleeve identifies the negative terminal in a polarized component. In some literature, the terms "anode" and "cathode" are used in place of negative electrode and positive electrode.
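A minimal Python sketch of the 10-degrees rule as given above (names are ours); it reproduces the 5000 h/65 °C example:

```python
# Sketch of the "10-degrees rule" lifetime estimate described above:
# Lx = L0 * 2 ** ((T0 - Tx) / 10).

def estimated_lifetime_h(l0_h, t0_c, tx_c):
    """Estimated lifetime at Tx, given the datasheet rating L0 at T0."""
    return l0_h * 2 ** ((t0_c - tx_c) / 10.0)

# Reproduces the example above: 5000 h at 65 C -> 20,000 h at 45 C.
print(estimated_lifetime_h(5000, 65, 45))  # 20000.0
```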
Using "anode" and "cathode" to describe the electrodes in supercapacitors (and also rechargeable batteries, including lithium-ion batteries) can lead to confusion, because the polarity changes depending on whether a component is considered as a generator or as a consumer of current. In electrochemistry, cathode and anode are related to reduction and oxidation reactions, respectively. However, in supercapacitors based on electric double-layer capacitance, there are no oxidation or reduction reactions on either of the two electrodes. Therefore, the concepts of cathode and anode do not apply. Comparison of selected commercial supercapacitors The range of electrodes and electrolytes available yields a variety of components suitable for diverse applications. The development of low-ohmic electrolyte systems, in combination with electrodes with high pseudocapacitance, enables many more technical solutions. Commercial supercapacitors from different manufacturers differ in capacitance range, cell voltage, internal resistance (ESR, DC or AC value) and volumetric and gravimetric specific energy; for each manufacturer, the ESR figure usually refers to the component with the largest capacitance value. Roughly, supercapacitors divide into two groups. The first group offers greater ESR values of about 20 milliohms and relatively small capacitances of 0.1 to 470 F. These are "double-layer capacitors" for memory back-up or similar applications. The second group offers 100 to 10,000 F with a significantly lower ESR value, under 1 milliohm. These components are suitable for power applications. A correlation of some supercapacitor series of different manufacturers to the various construction features is provided in Pandolfo and Hollenkamp. In commercial double-layer capacitors, or, more specifically, EDLCs in which energy storage is predominantly achieved by double-layer capacitance, energy is stored by forming an electrical double layer of electrolyte ions on the surface of conductive electrodes. Since EDLCs are not limited by the electrochemical charge-transfer kinetics of batteries, they can charge and discharge at a much higher rate, with lifetimes of more than 1 million cycles. The EDLC energy density is determined by the operating voltage and the specific capacitance (farad/gram or farad/cm3) of the electrode/electrolyte system. The specific capacitance is related to the specific surface area (SSA) accessible by the electrolyte, its interfacial double-layer capacitance, and the electrode material density. Commercial EDLCs are based on two symmetric electrodes impregnated with electrolytes comprising tetraethylammonium tetrafluoroborate salts in organic solvents. Current EDLCs containing organic electrolytes operate at 2.7 V and reach specific energies around 5-8 Wh/kg and energy densities of 7 to 10 Wh/L. Graphene-based platelets with a mesoporous spacer material are a promising structure for increasing the SSA of the electrolyte. Standards Supercapacitors vary sufficiently that they are rarely interchangeable, especially those with higher specific energy. Applications range from low to high peak currents, requiring standardized test protocols. Test specifications and parameter requirements are specified in the generic specification IEC/EN 62391-1, Fixed electric double-layer capacitors for use in electronic equipment.
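To connect these quantities, the following rough Python sketch converts an electrode-level specific capacitance into a cell-level specific energy. It assumes the textbook symmetric-cell relation (two identical electrodes in series give one quarter of the electrode specific capacitance per total electrode mass), which is not stated explicitly above, and applies the roughly-one-third packaging factor mentioned later in this article:

```python
# Rough link between electrode specific capacitance and cell-level
# specific energy for a symmetric EDLC. Assumption: for two identical
# electrodes in series, cell capacitance per total electrode mass is
# Csp/4 (a textbook relation, not from the text above).

def electrode_level_wh_per_kg(c_sp_f_per_g, v_cell):
    c_cell_f_per_kg = c_sp_f_per_g * 1000.0 / 4.0   # F per kg of electrodes
    return 0.5 * c_cell_f_per_kg * v_cell ** 2 / 3600.0

e_electrode = electrode_level_wh_per_kg(100, 2.7)   # 100 F/g, 2.7 V as quoted
print(round(e_electrode, 1))    # ~25.3 Wh/kg of electrode material
print(round(e_electrode / 3, 1))  # ~8.4 Wh/kg packaged: near the quoted 5-8 Wh/kg
```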
IEC/EN 62391-1 defines four application classes, according to discharge current levels: memory backup; energy storage, mainly used for driving motors requiring short-time operation; power, for higher power demand over long-time operation; and instantaneous power, for applications that require relatively high currents or peak currents ranging up to several hundreds of amperes even with a short operating time. Three further standards describe special applications: IEC 62391-2, Fixed electric double-layer capacitors for use in electronic equipment - Blank detail specification - Electric double-layer capacitors for power application; IEC 62576, Electric double-layer capacitors for use in hybrid electric vehicles. Test methods for electrical characteristics; and BS/EN 61881-3, Railway applications. Rolling stock equipment. Capacitors for power electronics. Electric double-layer capacitors. Applications Supercapacitors have advantages in applications where a large amount of power is needed for a relatively short time, or where a very high number of charge/discharge cycles or a longer lifetime is required. Typical applications range from milliamp currents or milliwatts of power for up to a few minutes to several amps of current or several hundred kilowatts of power for much shorter periods. Supercapacitors do not support alternating current (AC) applications. Consumer electronics In applications with fluctuating loads, such as laptop computers, PDAs, GPS, portable media players, hand-held devices, and photovoltaic systems, supercapacitors can stabilize the power supply. Supercapacitors deliver power for photographic flashes in digital cameras and for LED flashlights that can be charged in much shorter periods of time, e.g., 90 seconds. Some portable speakers are powered by supercapacitors. A cordless electric screwdriver with supercapacitors for energy storage has about half the run time of a comparable battery model, but can be fully charged in 90 seconds. It retains 85% of its charge after three months left idle. Power generation and distribution Grid power buffering Numerous non-linear loads, such as EV chargers, HEVs, air-conditioning systems, and advanced power conversion systems, cause current fluctuations and harmonics. These current differences create unwanted voltage fluctuations and therefore power oscillations on the grid. Power oscillations not only reduce the efficiency of the grid, but can cause voltage drops in the common coupling bus and considerable frequency fluctuations throughout the entire system. To overcome this problem, supercapacitors can be implemented as an interface between the load and the grid to act as a buffer between the grid and the high pulse power drawn from the charging station. Low-power equipment power buffering Supercapacitors provide backup or emergency shutdown power to low-power equipment such as RAM, SRAM, micro-controllers and PC Cards. They are the sole power source for low-energy applications such as automated meter reading (AMR) equipment or for event notification in industrial electronics. Supercapacitors buffer power to and from rechargeable batteries, mitigating the effects of short power interruptions and high current peaks. Batteries kick in only during extended interruptions, e.g., if the mains power or a fuel cell fails, which lengthens battery life. Uninterruptible power supplies (UPS) may be powered by supercapacitors, which can replace much larger banks of electrolytic capacitors.
This combination reduces the cost per cycle, saves on replacement and maintenance costs, enables the battery to be downsized and extends battery life. Supercapacitors provide backup power for actuators in wind turbine pitch systems, so that blade pitch can be adjusted even if the main supply fails. Voltage stabilization Supercapacitors can stabilize voltage fluctuations for power lines by acting as dampers. Wind and photovoltaic systems exhibit fluctuating supply caused by gusts or passing clouds, which supercapacitors can buffer within milliseconds. Micro grids Micro grids are usually powered by clean and renewable energy. Most of this energy generation, however, is not constant throughout the day and does not usually match demand. Supercapacitors can be used for micro grid storage to instantaneously inject power when demand is high and production dips momentarily, and to store energy in the reverse conditions. They are useful in this scenario because micro grids increasingly produce power in DC, and capacitors can be utilized in both DC and AC applications. Supercapacitors work best in conjunction with chemical batteries. They provide an immediate voltage buffer to compensate for quickly changing power loads, due to their high charge and discharge rate, through an active control system. Once the voltage is buffered, it is put through an inverter to supply AC power to the grid. Supercapacitors cannot provide frequency correction in this form directly in the AC grid. Energy harvesting Supercapacitors are suitable temporary energy storage devices for energy-harvesting systems. In energy-harvesting systems, the energy is collected from ambient or renewable sources, e.g., mechanical movement, light or electromagnetic fields, and converted to electrical energy in an energy storage device. For example, it was demonstrated that energy collected from RF (radio frequency) fields (using an RF antenna and an appropriate rectifier circuit) can be stored in a printed supercapacitor. The harvested energy was then used to power an application-specific integrated circuit (ASIC) for over 10 hours. Batteries The UltraBattery is a hybrid rechargeable lead-acid battery and a supercapacitor. Its cell construction contains a standard lead-acid battery positive electrode, standard sulphuric acid electrolyte and a specially prepared negative carbon-based electrode that stores electrical energy with double-layer capacitance. The presence of the supercapacitor electrode alters the chemistry of the battery and affords it significant protection from sulfation in high-rate partial state-of-charge use, which is the typical failure mode of valve-regulated lead-acid cells used this way. The resulting cell performs with characteristics beyond either a lead-acid cell or a supercapacitor, with charge and discharge rates, cycle life, efficiency and performance all enhanced. Medical Supercapacitors are used in defibrillators, where they can deliver 500 joules to shock the heart back into sinus rhythm. Military Supercapacitors' low internal resistance supports applications that require short-term high currents. Among the earliest uses were motor startup (cold engine starts, particularly with diesels) for large engines in tanks and submarines. Supercapacitors buffer the battery, handling short current peaks, reducing cycling and extending battery life.
Further military applications that require high specific power are phased-array radar antennae, laser power supplies, military radio communications, avionics displays and instrumentation, backup power for airbag deployment, and GPS-guided missiles and projectiles. Transport A primary challenge of all transport is reducing energy consumption and reducing emissions. Recovery of braking energy (recuperation or regenerative braking) helps with both. This requires components that can quickly store and release energy over long times with a high cycle rate. Supercapacitors fulfill these requirements and are therefore used in various applications in transportation. Aviation In 2005, aerospace systems and controls company Diehl Luftfahrt Elektronik GmbH chose supercapacitors to power emergency actuators for doors and evacuation slides used in airliners, including the Airbus A380. Cars The Toyota Yaris Hybrid-R concept car uses a supercapacitor to provide bursts of power. PSA Peugeot Citroën started using supercapacitors (circa 2014) as part of its stop-start fuel-saving system, which permits faster initial acceleration. Mazda's i-ELOOP system stores energy in a supercapacitor during deceleration and uses it to power on-board electrical systems while the engine is stopped by the stop-start system. Rail Supercapacitors can be used to supplement batteries in starter systems in diesel railroad locomotives with diesel–electric transmission. The capacitors capture the braking energy of a full stop and deliver the peak current for starting the diesel engine and accelerating the train, and they ensure the stabilization of line voltage. Depending on the driving mode, up to 30% energy saving is possible by recovery of braking energy. Low maintenance and environmentally friendly materials encouraged the choice of supercapacitors. Plant machinery Mobile hybrid diesel–electric rubber-tyred gantry cranes move and stack containers within a terminal. Lifting the boxes requires large amounts of energy. Some of the energy can be recaptured while lowering the load, resulting in improved efficiency. A triple-hybrid forklift truck uses fuel cells and batteries as primary energy storage and supercapacitors to buffer power peaks by storing braking energy. They provide the forklift with peak power of over 30 kW. The triple-hybrid system offers over 50% energy savings compared with diesel or fuel-cell systems. Supercapacitor-powered terminal tractors transport containers to warehouses. They provide an economical, quiet and pollution-free alternative to diesel terminal tractors. Light rail Supercapacitors make it possible not only to reduce energy consumption, but to replace overhead lines in historical city areas, so preserving the city's architectural heritage. This approach may allow many new light-rail city lines to replace overhead wires that are too expensive to fully route. In 2003 Mannheim adopted a prototype light-rail vehicle (LRV) using the MITRAC Energy Saver system from Bombardier Transportation to store mechanical braking energy with a roof-mounted supercapacitor unit. It contains several units, each made of 192 capacitors with 2700 F / 2.7 V interconnected in three parallel lines. This circuit results in a 518 V system with an energy content of 1.5 kWh. For acceleration when starting, this "on-board system" can provide the LRV with 600 kW and can drive the vehicle up to 1 km without overhead line supply, thus better integrating the LRV into the urban environment.
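A back-of-envelope Python check of the MITRAC figures just quoted (names are ours):

```python
# Back-of-envelope check of the MITRAC unit above: 192 series cells of
# 2700 F / 2.7 V, with three such strings in parallel.

N_SERIES, N_PARALLEL = 192, 3
C_CELL_F, V_CELL = 2700.0, 2.7

v_bank = N_SERIES * V_CELL                      # 518.4 V (the "518 V system")
c_bank = C_CELL_F / N_SERIES * N_PARALLEL       # series divides C, parallel adds
e_bank_kwh = 0.5 * c_bank * v_bank ** 2 / 3.6e6 # W = 1/2 * C * V^2, in kWh

print(v_bank, round(c_bank, 1), round(e_bank_kwh, 2))  # 518.4 V, 42.2 F, ~1.57 kWh
```

The result agrees with the stated energy content of about 1.5 kWh.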
Compared to conventional LRVs or metro vehicles that return energy into the grid, onboard energy storage saves up to 30% and reduces peak grid demand by up to 50%. In 2009 supercapacitors enabled LRVs to operate in the historical city area of Heidelberg without overhead wires, thus preserving the city's architectural heritage. The SC equipment cost an additional €270,000 per vehicle, which was expected to be recovered over the first 15 years of operation. The supercapacitors are charged at stop-over stations when the vehicle is at a scheduled stop. In April 2011 the German regional transport operator Rhein-Neckar, responsible for Heidelberg, ordered a further 11 units. In 2009, Alstom and RATP equipped a Citadis tram with an experimental energy recovery system called "STEEM". The system is fitted with 48 roof-mounted supercapacitors to store braking energy, which provides tramways with a high level of energy autonomy by enabling them to run without overhead power lines on parts of their route, recharging at powered stop-over stations. During the tests, which took place between the Porte d'Italie and Porte de Choisy stops on line T3 of the tramway network in Paris, the tramset used an average of approximately 16% less energy. In 2012 tram operator Geneva Public Transport began tests of an LRV equipped with a prototype roof-mounted supercapacitor unit to recover braking energy. Siemens is delivering supercapacitor-enhanced light-rail transport systems that include mobile storage. Hong Kong's South Island metro line is to be equipped with two 2 MW energy storage units that are expected to reduce energy consumption by 10%. In August 2012 the CSR Zhuzhou Electric Locomotive corporation of China presented a prototype two-car light metro train equipped with a roof-mounted supercapacitor unit. The train can travel up to 2 km without wires, recharging in 30 seconds at stations via a ground-mounted pickup. The supplier claimed the trains could be used in 100 small and medium-sized Chinese cities. Seven trams (street cars) powered by supercapacitors were scheduled to go into operation in 2014 in Guangzhou, China. The supercapacitors are recharged in 30 seconds by a device positioned between the rails, which powers the tram for up to 4 km. As of 2017, Zhuzhou's supercapacitor vehicles are also used on the new Nanjing streetcar system and are undergoing trials in Wuhan. In 2012, in Lyon (France), SYTRAL (the Lyon public transportation administration) started experiments with a "wayside regeneration" system built by Adetel Group, which has developed its own energy saver named "NeoGreen" for LRVs, LRTs and metros. In 2014 China began using trams powered with supercapacitors that are recharged in 30 seconds by a device positioned between the rails, storing power to run the tram for up to 4 km, more than enough to reach the next stop, where the cycle can be repeated. In 2015, Alstom announced SRS, an energy storage system that charges supercapacitors on board a tram by means of ground-level conductor rails located at tram stops. This allows trams to operate without overhead lines for short distances. The system has been touted as an alternative to the company's ground-level power supply (APS) system, or can be used in conjunction with it, as in the case of the VLT network in Rio de Janeiro, Brazil, which opened in 2016. CAF also offers supercapacitors on their Urbos 3 trams in the form of their ACR system.
Buses Maxwell Technologies, an American supercapacitor maker, claimed that more than 20,000 hybrid buses use the devices to increase acceleration, particularly in China. The first hybrid electric bus with supercapacitors in Europe came in 2001 in Nuremberg, Germany. It was MAN's so-called "Ultracapbus", which was tested in real operation in 2001/2002. The test vehicle was equipped with a diesel-electric drive in combination with supercapacitors. The system was supplied with 8 Ultracap modules of 80 V, each containing 36 components. The system worked with 640 V and could be charged/discharged at 400 A. Its energy content was 0.4 kWh with a weight of 400 kg. The supercapacitors recaptured braking energy and delivered starting energy. Fuel consumption was reduced by 10 to 15% compared to conventional diesel vehicles. Other advantages included reduction of emissions, quiet and emissions-free engine starts, lower vibration and reduced maintenance costs. In Luzern, Switzerland, an electric bus fleet called TOHYCO-Rider was tested. The supercapacitors could be recharged via an inductive contactless high-speed power charger after every transportation cycle, within 3 to 4 minutes. In early 2005 Shanghai tested a new form of electric bus called capabus that runs without powerlines (catenary-free operation) using large onboard supercapacitors that partially recharge whenever the bus is at a stop (under so-called electric umbrellas) and fully charge in the terminus. In 2006, two commercial bus routes began to use the capabuses; one of them is route 11 in Shanghai. It was estimated that the supercapacitor bus was cheaper than a lithium-ion battery bus, and one of its buses had one-tenth the energy cost of a diesel bus, with lifetime fuel savings of $200,000. A hybrid electric bus called tribrid was unveiled in 2008 by the University of Glamorgan, Wales, for use as student transport. It is powered by hydrogen fuel or solar cells, batteries and ultracapacitors. Motor racing The FIA, a governing body for motor racing events, proposed in the Power-Train Regulation Framework for Formula 1 version 1.3 of 23 May 2007 that a new set of power-train regulations be issued that includes a hybrid drive of up to 200 kW input and output power using "superbatteries" made with batteries and supercapacitors connected in parallel (KERS). About 20% tank-to-wheel efficiency could be reached using the KERS system. The Toyota TS030 Hybrid LMP1 car, a racing car developed under Le Mans Prototype rules, uses a hybrid drivetrain with supercapacitors. In the 2012 24 Hours of Le Mans race a TS030 qualified with a fastest lap only 1.055 seconds slower (3:24.842 versus 3:23.787) than the fastest car, an Audi R18 e-tron quattro with flywheel energy storage. The supercapacitor and flywheel components, whose rapid charge-discharge capabilities help in both braking and acceleration, made the Audi and Toyota hybrids the fastest cars in the race. In the 2012 Le Mans race the two competing TS030s, one of which was in the lead for part of the race, both retired for reasons unrelated to the supercapacitors. The TS030 won three of the eight races in the 2012 FIA World Endurance Championship season. In 2014 the Toyota TS040 Hybrid used a supercapacitor to add 480 horsepower from two electric motors. Hybrid electric vehicles Supercapacitor/battery combinations in electric vehicles (EVs) and hybrid electric vehicles (HEVs) are well investigated. A 20 to 60% fuel reduction has been claimed by recovering brake energy in EVs or HEVs.
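Returning briefly to the Ultracapbus figures above, a back-of-envelope Python check (names are ours; the implied bank capacitance is our inference from W = 1/2 • C • V²):

```python
# Back-of-envelope check of the Ultracapbus figures above: 8 modules of
# 80 V in series (640 V), storing 0.4 kWh in a 400 kg system.

modules, v_module = 8, 80.0
e_kwh, mass_kg = 0.4, 400.0

v_system = modules * v_module                    # 640 V, as stated
specific_energy_wh_kg = e_kwh * 1000 / mass_kg   # 1 Wh/kg at system level
c_implied_f = 2 * e_kwh * 3.6e6 / v_system ** 2  # inferred from W = 0.5*C*V^2

print(v_system, specific_energy_wh_kg, round(c_implied_f, 1))  # 640.0, 1.0, ~7.0 F
```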
Supercapacitors' ability to charge much faster than batteries, their stable electrical properties, broader temperature range and longer lifetime make them well suited to such vehicle applications, but weight, volume and especially cost mitigate those advantages. Supercapacitors' lower specific energy makes them unsuitable for use as a stand-alone energy source for long-distance driving. The fuel economy improvement between a capacitor and a battery solution is about 20% and is available only for shorter trips; for long-distance driving the advantage decreases to 6%. Vehicles combining capacitors and batteries run only as experimental vehicles. All automotive manufacturers of EVs or HEVs have developed prototypes that use supercapacitors instead of batteries to store braking energy in order to improve driveline efficiency. The Mazda 6 is the only production car that uses supercapacitors to recover braking energy. Branded as i-ELOOP, the regenerative braking is claimed to reduce fuel consumption by about 10%. The Russian Yo-cars Ё-mobile series was a concept and crossover hybrid vehicle working with a gasoline-driven rotary vane engine and an electric generator for driving the traction motors. A supercapacitor with relatively low capacitance recovers brake energy to power the electric motor when accelerating from a stop. Toyota's Yaris Hybrid-R concept car uses a supercapacitor to provide quick bursts of power. PSA Peugeot Citroën fits supercapacitors to some of its cars as part of its stop-start fuel-saving system, as this permits faster start-ups when the traffic lights turn green. Gondolas In Zell am See, Austria, an aerial lift connects the city with the Schmittenhöhe mountain. The gondolas sometimes run 24 hours per day, using electricity for lights, door opening and communication. The only available time for recharging batteries at the stations is during the brief intervals of guest loading and unloading, which is too short to recharge batteries. Supercapacitors offer a fast charge, a higher number of cycles and a longer lifetime than batteries. The Emirates Air Line (cable car), also known as the Thames cable car, is a 1-kilometre (0.62 mi) gondola line in London, UK, that crosses the Thames from the Greenwich Peninsula to the Royal Docks. The cabins are equipped with a modern infotainment system, which is powered by supercapacitors. Developments Commercially available lithium-ion supercapacitors have offered the highest gravimetric specific energy to date, reaching 15 Wh/kg (54 kJ/kg). Research focuses on improving specific energy, reducing internal resistance, expanding temperature range, increasing lifetimes and reducing costs. Projects include tailored-pore-size electrodes, pseudocapacitive coating or doping materials and improved electrolytes. Research into electrode materials requires measurement of individual components, such as an electrode or half-cell. By using a counter-electrode that does not affect the measurements, the characteristics of only the electrode of interest can be revealed. The specific energy and power of complete supercapacitors reach only roughly one third of the values measured for the electrodes alone. Market Worldwide sales of supercapacitors are about US$400 million. The market for batteries (estimated by Frost & Sullivan) grew from US$47.5 billion (76.4%, or US$36.3 billion, of which was rechargeable batteries) to US$95 billion. The market for supercapacitors is still a small niche market that is not keeping pace with its larger rival.
In 2016, IDTechEx forecast sales to grow from $240 million to $2 billion by 2026, an annual increase of about 24%. Supercapacitor costs in 2006 were US$0.01 per farad or US$2.85 per kilojoule, moving in 2008 below US$0.01 per farad, and were expected to drop further in the medium term. See also References Further reading External links Supercapacitors: A Brief Overview Capacitors Energy conversion
Supercapacitor
Physics
19,995
3,795,031
https://en.wikipedia.org/wiki/Screw-propelled%20vehicle
A screw-propelled vehicle is a land or amphibious vehicle designed to cope with difficult terrain, such as snow, ice, mud, and swamp. Such vehicles are distinguished by being moved by the rotation of one or more auger-like cylinders fitted with a helical flange that engages with the medium through or over which the vehicle is moving. They have been called Archimedes screw vehicles by the US military, where they are classified as a type of marginal terrain vehicle (MTV). Modern vehicles called Amphirols and other similar vehicles have specialised uses. The weight of the vehicle is typically borne by one or more pairs of large flanged cylinders; sometimes a single flanged cylinder is used with additional stabilising skis. These cylinders each have a helical spiral flange like the thread of a screw. On each matched pair of cylinders, one will have its flange running clockwise and the other counter-clockwise. The flange engages with the surface on which the vehicle rests. Ideally this should be slightly soft material such as snow, sand or mud so that the flange can get a good bite. An engine is used to counter-rotate the cylinders—one cylinder turns clockwise and the other counter-clockwise. The counter-rotations cancel out so that the vehicle moves forwards (or backwards) along the axis of rotation. The principle of the operation is the inverse of the screw conveyor. A screw conveyor uses a helical screw to move semi-solid materials horizontally or at a slight incline; in a screw-propelled vehicle, the semi-solid substrate remains stationary and the machine itself moves. Early developments One of the earliest examples of a screw-propelled vehicle was designed by Jacob Morath, a native of Switzerland who settled in St. Louis, Missouri in the United States in 1868. Morath's machine was designed for agricultural work such as hauling a plough. The augers were designed with cutting edges so that they would break up roots in the ground as the machine moved. One of the first screw-propelled vehicles that was actually built was designed by James and Ira Peavey of Maine. It was patented by Ira Peavey in 1907; the Peavey family has been famous for its contributions to the lumber industry ever since blacksmith Joseph Peavey of Stillwater, Maine, invented the tool known to this day as a Peavey (sometimes "pevy" or "pivie"). The Peavey Manufacturing Co. is still located in Maine. The Peaveys' machine had two pairs of cylinders with an articulation between the pairs to effect steering. At least two prototype vehicles were constructed: one was steam-powered, the other used a petrol engine. The prototypes worked well on hard-packed snow but failed in soft powder because the flanges had nothing to grip into. The machine was designed to haul logs, but its length and rigid construction meant that it had difficulty with the uneven winter roads for which it was intended. Peavey's invention could not compete with the Lombard Steam Log Hauler built by Alvin Lombard and it was not produced commercially. (The Lombard vehicle was an early example of a half-track vehicle; it resembled a railway locomotive with a sled or wheels in front for steering and caterpillar tracks for traction.) Armstead Snow Motor In the 1920s the Armstead Snow Motor was developed. This was used to convert a Fordson tractor into a screw-propelled vehicle with a single pair of cylinders.
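The screw mechanism lends itself to a simple idealized kinematic model, our own simplification rather than anything from the sources: with no slip, each revolution advances the vehicle by one thread pitch along the rotation axis. A minimal Python sketch with hypothetical figures:

```python
# Idealized kinematics of a screw-propelled vehicle (our simplification,
# not from the source): with no slip, each revolution of an auger with
# thread pitch p advances the vehicle by p along the rotation axis.
# Counter-rotating the pair cancels the sideways components; driving both
# the same way (as the Amphirol described later) yields sideways crabbing.

def ideal_speed_m_s(pitch_m, rev_per_s, slip=0.0):
    """Forward speed; slip in [0, 1) models the flange failing to bite."""
    return pitch_m * rev_per_s * (1.0 - slip)

# Hypothetical figures: 0.5 m pitch at 2 rev/s with 30% slip in soft snow.
print(ideal_speed_m_s(0.5, 2.0, slip=0.3))  # 0.7 m/s
```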
A machine used in the Truckee, CA area was referred to by locals as the "Snow Devil", and that name has been erroneously attached to these machines, although no known advertising of the time referred to them as such. A film was made to show the capabilities of the vehicle, as well as a Chevrolet car fitted with an Armstead Snow Motor. The film clearly shows that the vehicle copes well in snow. Steering was effected by having each cylinder receive power from a separate clutch which, depending on the position of the steering gear, engages and disengages; this results in a vehicle that is relatively maneuverable. The promotional film shows the Armstead snow motor hauling 20 tons of logs. In January 1926, Time magazine reported on the machine. An extant example is in the collection of the Hays Antique Truck Museum in Woodland, California. This particular vehicle is said to have been used to haul mail from Truckee to North Lake Tahoe. The Second World War period With the occupation of Norway by Nazi Germany in World War II, the quixotic Geoffrey Pyke considered the problem of transporting soldiers rapidly over snow. He proposed the development of a screw-propelled vehicle based on the Armstead snow motor. Pyke envisaged that the vehicles would be used by a small force of highly mobile soldiers. The damage and casualties that a small force could inflict might be slight, but they would oblige the enemy to keep many men stationed in Norway in order to guard against every possible point of attack. Pyke's ideas were initially rejected, but in October 1941, Louis Mountbatten became Chief of Combined Operations and Pyke's ideas received a more sympathetic hearing. Mountbatten became convinced that Pyke's plan was worthwhile and adopted it. The scheme became Project Plough and many high-level conferences were dedicated to it. The problem of developing a suitable vehicle was passed to the Americans, and Pyke went to the US to oversee the development. However, Pyke, who could be very inflexible, fell out with various individuals on the project, and the Americans moved on to design a more conventional tracked vehicle, the M29 Weasel. In 1944, Johannes Raedel, a soldier of the German Army and veteran of the Eastern Front, invented his Schraubenantrieb-Schneemaschine (screw-propelled snow machine). Raedel had seen the problems of operating tracked vehicles in the deep snows of Russia, where a tank would dig out the snow under the tracks, leaving the tank stuck on the snow compressed under the hull. An account of the machine's development was later given by Siegfried Raedel, son of Johannes. Amphibians The threaded cylinders are necessarily large to ensure a substantial area of contact and buoyancy. Being lightweight, the cylinders may conveniently serve as floats and the arrangement may be used in the design of an amphibious vehicle. During the Vietnam War, the American Waterways Experiment Station (WES) tested the Marsh Screw Amphibian, designed by the Chrysler Corporation. The vehicle's barge-like hull was built of aluminum. It was fitted with vertical supports at the four corners that supported the two rotating, bladed drums. The vehicle weighed under 2,500 pounds and could carry a 1,000 pound load. The Marsh Screw Amphibian proved fastest on packed snow. It could also make progress in marshy conditions and in water, though more slowly. The vehicle "failed miserably on soil surfaces, especially sand", where it could travel only very slowly. Despite such disappointing results, Chrysler produced a much larger vehicle, the Riverine Utility Craft (RUC), for the Navy in 1969.
The RUC travelled on two aluminium rotors. The RUC achieved impressive speeds on water and nearly as fast on marsh. Again, however, speeds on firm soils proved disappointing, and crossing dykes proved difficult – the vehicle would get stuck. It was powered with two Chrysler marine V-8 engines and a pair of two-speed automatic transmissions. The Soviets built a screw-propelled vehicle, the ZIL-2906, specifically for the challenging task of recovering cosmonauts who landed in inaccessible areas. In the 1960s, Joseph Jean de Bakker was the busy owner of the De Bakker machine factory in Hulst in the southwest of the Netherlands. He was also a keen fisherman, but he did not want his fishing time to be constrained by the vagaries of the tide. His solution was the Amphirol, a screw-propelled vehicle superficially similar to the Marsh Screw Amphibian. The Amphirol was able to convey him over the sticky clay revealed by the outgoing tide and to swim in water at high tide. It was powered by two modified DAF 44/55 variomatic transmission units; this made possible the significant innovation that the flanged cylinders could be deliberately driven in the same direction, so that the vehicle could crab sideways on dry land at an alarming speed. Also, when moving sideways, steering is effected by shifting the front of the cylinders so that they are no longer parallel – giving a large minimum turning radius. Amphirols are used for ground surveying, for grooving the surface of newly drained polders to assist drying, and to carry soil-drilling teams. Today modern vehicles, widely known as amphirols, perform specialised tasks such as compacting tailings from industrial processes. The advantage of these machines for tailings densification is that they provide a means to allow water or process liquor to run off without repulping the profile. This approach subsequently largely negates the impact of rainfall on densification and dewatering. However, the lighter, faster machines are better suited to marginal terrain access, but not densification, due to repulping and their limited penetration depth. The process of using these machines specifically for tailings and dredge spoil densification is commonly termed "mud farming" in the mining industry. Recent developments The British Ice Challenger exploration team used a screw drive in their Snowbird 6 vehicle (a modified Bombardier tracked craft) to traverse the ice floes in the Bering Strait. The rotating cylinders allowed Snowbird 6 to move over ice and to propel itself through water, but the screw system was not considered suitable for long distances, and the cylinders could be raised so that the vehicle could also run on conventional caterpillar tracks. The Ice Challenger website says that the design was inspired by a Russian vehicle used to pick up cosmonauts who landed in Siberia (perhaps the ZIL-2906). Russian inventor Alexey Burdin has come up with a screw-propulsion system, "TESH-drive Transformable worms". More recently, mud farming with larger machines capable of deep profile penetration (termed MudMasters by their manufacturer) has proven to be an efficient method for high-intensity tailings management. See also Roller ship Metal Gear Solid 3: Snake Eater, which prominently features a fictitious screw-propelled vehicle called the Shagohod. References Notes General references Patents A screw-propelled sleigh with refinements to keep the screw clear of ice.
A single-screw, low-speed tractor mechanism. A self-propelled sleigh. A self-propelled sleigh with open screws. A self-propelled amphibious vehicle. An adaptation of an automobile to drive on ice and snow. An adaptor that can temporarily adapt an automobile to ice and snow. A hand-propelled boat with emphasis on safety. It is hard to see how these would work! An amphibious vehicle for snow, ice, tundra etc. A screw-driven traction unit used to push or pull a sleigh or skiers. A device that can climb up or down steps. A tractor for swampy or rough terrain. A boat with small screws that allow it to climb onto land. A screw-driven vehicle with the option of controlling the angle of the augers and of driving them in the same direction. Chrysler Corporation design. An amphibious vehicle with non-continuous screws. An unusual arrangement with screws at 90 degrees to each other. A design for traversing the sea bed. An amphibious vehicle able to climb steeply out of the water. A pedal-powered boat with emphasis on safety. External links Flixxy.com video of the Armstead machines Amphibious vehicles Off-road vehicles Vehicle technology Vehicles by type
Screw-propelled vehicle
Engineering
2,419
36,448,616
https://en.wikipedia.org/wiki/Association%20for%20Computers%20and%20the%20Humanities
The Association for Computers and the Humanities (ACH) is the primary international professional society for digital humanities. ACH was founded in 1978. According to the official website, the organization "support[s] and disseminate[s] research and cultivate[s] a vibrant professional community through conferences, publications, and outreach activities." ACH is based in the United States, and has an international membership. ACH is a founding member of the Alliance of Digital Humanities Organizations (ADHO), a co-originator of the Text Encoding Initiative, and a co-sponsor of an annual conference. Conference ACH has been a co-sponsor of the annual Digital Humanities conference (formerly ACH/ALLC, before that International Conference on Computing in the Humanities or ICCH) since 1989. From 2006, when ADHO was founded, the larger umbrella organization has been the conference's official sponsor. Journals Until 2004, Computers and the Humanities was the official journal of ACH. (In 2005 it was renamed Language Resources and Evaluation.) The print journal most closely associated with ACH is Literary and Linguistic Computing (Oxford University Press). The open-access, peer-reviewed journal of ACH is Digital Humanities Quarterly (ADHO). Associated Organizations ACH is joined in ADHO by: Association for Literary and Linguistic Computing (ALLC) Canadian Society for Digital Humanities/Société canadienne des humanités numériques (CSDH-SCHN) Other related organizations: Association for Computational Linguistics Text Encoding Initiative References External links Computing and society Digital humanities Humanities organizations Professional associations based in the United States
Association for Computers and the Humanities
Technology
326
22,012,250
https://en.wikipedia.org/wiki/Watford%20Electronics
Watford Electronics was a British computer electronics company. It was founded in 1972 in a bedroom belonging to brothers Nazir and Raza Jessa, and grew to become one of the best-known suppliers of microcomputers and micro peripherals during the 1980s. In the 1970s Watford Electronics sold components and kits, through advertising in electronics magazines, and a paper catalogue. They had one shop in Watford, but mostly traded as a mail-order company. In the early 1980s Watford Electronics expanded into the home computer market. It was particularly active in the BBC Micro scene, producing a variety of peripherals for the computer, as well as a version of the Disc Filing System. They sold their own hardware under the Aries brand. Watford Electronics gradually moved over to supporting the Wintel market in the 1990s. In the 21st century, the company opened an online store, Savastore, but in 2007 Watford collapsed into administration. Watford Electronics was then bought out by Globally Limited, and in April that year, the website became known as Saverstore. Notes British companies established in 1972 Electronics companies of the United Kingdom Consumer electronics retailers of the United Kingdom Electronics companies established in 1972 Electronics companies disestablished in 2007 Mail-order retailers Computer hardware companies
Watford Electronics
Technology
249
24,205,365
https://en.wikipedia.org/wiki/Winton%20Train
The Winton Train was a private passenger train that travelled from the Czech Republic to Great Britain in September 2009 in tribute to the wartime efforts of Sir Nicholas Winton, described as the 'British Schindler' for his part in saving refugee children from Czechoslovakia. As a result of Sir Nicholas' efforts in the months leading up to the outbreak of World War II in 1939, a total of eight trains transported 669 Czechoslovak children of mainly Jewish heritage from Prague to safety in Great Britain. Sir Nicholas' kindertransport efforts remained largely unrecognised until 1988, when they came to public attention after his wife found a scrapbook in their attic documenting the details. Only then did the individuals whom Winton arranged to have transported to safety as children learn the story of how they survived the Holocaust. As the majority of 'Winton's Children' (as they came to be known) were Jewish, it is believed this saved them from certain death had they stayed in Czechoslovakia. As of 2009, the direct descendants of Winton's Children numbered over 5000 people. The 2009 tribute Winton Train carried some of the individuals Sir Nicholas arranged to have transported to safety in 1939, along with their families, as it retraced the original kindertransport route taken by the trains on which they rode as children to safety in Great Britain 70 years earlier. The Winton Train departed on 1 September 2009, the 70th anniversary of the ninth, and intended last, train arranged by Winton to carry children to safety, which was prevented from departing due to the outbreak of World War II on that very day. It departed from Prague Main railway station and travelled through Germany and the Netherlands. After a transfer by ferry to Harwich, the journey resumed by train again to arrive in London's Liverpool Street station on 4 September, where it was met by the 100-year-old Sir Nicholas himself. For the journey across mainland Europe, the train was formed of period carriages and was hauled by historically authentic steam locomotives, while the British leg was hauled by 60163 Tornado, a brand-new, main-line British steam locomotive completed in 2008, along with carriages that were constructed in the 1950s. The tribute train was the centrepiece of a wider cultural awareness project known as 'Inspiration by Goodness', organised by the Czech government. Background Original Winton trains Between March and September 1939, the months leading up to the outbreak of World War II, Nicholas Winton, a 29-year-old British stockbroker whose parents were of German Jewish descent, organised eight trains to transport mainly Jewish Czech and Slovak refugee children from Czechoslovakia to homes in Britain. In 1939, Winton cancelled a trip to a Swiss holiday resort to go to Prague, having heard from a friend of a growing refugee crisis resulting from the German occupation of Czechoslovakia. His friend was working in the British embassy for the British Committee for Refugees from Czechoslovakia, which was already working to help adults escape from Czechoslovakia. When Winton learned that refugee children could not leave unless accompanied, he decided to arrange their evacuation to Britain. While the Winton evacuations later became known by the collective label of the children's Kindertransports, which were officially being organised elsewhere in other countries, no official Kindertransports had been arranged in Prague at that time.
Later rescues were organised by Gertruida Wijsmuller-Meijer from the Netherlands in cooperation with Jewish committees in Britain, Holland, Nazi-Germany and Nazi-Austria from December 1938 through August 1939, totalling 10,000 children. Winton organised the transfer of the children from the Nazi-appointed Protectorate of Bohemia and Moravia to homes in Britain, in the process arranging the necessary bonds and permits for their departure, and finding the families in Britain who would receive the children. Winton and a team identified those children most at risk from the thousands of refugees driven south following the Nazi invasion of the Sudetenland. For each child to be accepted, British officials required a confirmed foster home and a £50 guarantee. Beginning in March, Winton organised eight trains, which in total transported 669 mainly Jewish children from Nazi occupied Czechoslovakia to Great Britain. A ninth and final train with 250 children on board was stopped at the last minute, due to the outbreak of the war. Only two children on this ninth aborted train survived the war. According to Channel 4 News reporting on the 2009 Winton Train, "Sir Nicholas has said many times that the vision that haunts him most is the families waiting at Liverpool Street for the train that never arrived". The original trains left from Prague Wilson railway station (now Prague Main station). While most of the children were met by their new families at London's Liverpool Street station, some of the children got off the trains at Harwich, where they were placed with local families. Few if any of the Winton children saw their parents again. Winton's efforts did not come to public light until 1988, when his wife discovered papers in their loft, whereupon Winton began to publicly talk about his work, and he came to be known as the 'British Schindler', in comparison to Oskar Schindler. Sir Nicholas himself believed this was undeserved, because unlike Schindler, his life had never been in danger. Inspiration by Goodness project A project to run a train in tribute to the original Winton trains was announced on 21 January 2008 as the Train Prague – London project, and the organisers were negotiating to have the train named after Sir Nicholas. The train was run by Czech Railways and sponsored by the Czech government, with the project being dedicated to their holding of the Presidency of the Council of the European Union from January to June 2009. The train was part of a wider project encompassing social and cultural events along the route to "inspire young people through the deeds of Nicholas Winton", and with the theme of "Inspiration by Goodness", it incorporated art, film, photographic and literary contests by university students and school children. The project was to follow on from the work of documentary film maker Matej Mináč about Sir Nicholas, including his new film project Nicky's Family. While travelling on the train, Mináč filmed scenes for a new version of the Winton story. The Czech Senate President Přemysl Sobotka said of the project that it "should warn against rising extremism and anti-Semitism in Europe and in the world". 2009 Winton Train Journey The motive power for the train journey was provided by six different steam locomotives in total, two as a double-headed train in the Czech Republic due to the terrain, two in Germany, one in the Netherlands and one in Great Britain. 
The entire journey was scheduled to take four days, involving a European train leg, a crossing by the passengers on a ferry and a British train leg. It covered a distance of , of which was the train journey across mainland Europe. On 1 September the train departed Prague Main railway station. On this first day it travelled to the German city of Nuremberg, crossing the Czech–German border at Furth im Wald in the Bavarian Forest. The following day, the train was to travel across Germany to Cologne, via Frankfurt am Main. Instead of Frankfurt, however, it travelled via Wiesbaden and the right bank of the River Rhine. On the third day, the train arrived on the North Sea coast at the Dutch ferry port, Hook of Holland, crossing the Dutch–German border at Emmerich am Rhein and passing through the Netherlands via Rotterdam. The passengers disembarked the train to cross the North Sea to Great Britain overnight on the Stena Line ferry Stena Britannica to Harwich, a port in the East of England on the county boundary between Essex and Suffolk. The British train journey formed the fourth day of the journey, travelling from Harwich to the London terminus of Liverpool Street station. On this final day the train departed Harwich International railway station at 09:12. It travelled via and , arriving at Liverpool Street station at 10:37 on Platform 10. Platform 10 was the platform number that the original Winton trains had used. Sir Nicholas, now 100 years old, met the train at Liverpool Street as guest of honour. Also at Liverpool Street to meet the train was Štefan Füle, the Czech Minister for European Affairs, and a former Czech ambassador to Britain. Motive power and rolling stock Travelling through the Czech Republic from Prague to Furth im Wald, the train was double-headed by locomotives No. 486.007 and 498.022, with 486.007 leading as the train left Prague. No. 486.007, known as the Green Anton, is a preserved steam locomotive built in 1936 and based in Vrútky, Slovakia, owned by Slovak Republic Railways (ŽSR). It is one of the ČSD class 486.0, and has a green livery. No. 498.022, which is one of the ČSD class 498.0, has a blue livery and is owned by Czech Railways, who store it at Libeň in Prague, Czech Republic. Travelling through Germany from Furth im Wald to Emmerich am Rhein, the train was hauled by locomotive No. 41 018. No. 41 018 is a preserved steam locomotive built in 1939 and based at the Augsburg Railway Park railway museum in Augsburg, Bavaria. It is one of the locomotive class DRG Class 41, and it is owned by DG München. As the train travelled through the Netherlands from Emmerich am Rhein to Hook of Holland, it was hauled by locomotive No. 01 1075. No. 01 1075 is a preserved steam locomotive built in 1940 and based at the Stoom Stichting Nederland (SSN) railway museum in Rotterdam. Travelling through England (from Harwich to London), the train was hauled by No. 60163 Tornado, a British mainline steam locomotive built in 2008 by the A1 Steam Locomotive Trust, the construction of which began in 1994 and was completed in 2008. The passenger rolling stock for the European leg from Prague to Hook of Holland comprised nine historic railway carriages of Hungarian and German origin, with a capacity for 240 passengers. The train included the blue-liveried state luxury saloon carriage of Tomáš Garrigue Masaryk, the first president of Czechoslovakia, which entered service on 7 March 1930, Masaryk's 80th birthday.
For the British leg, behind Tornado and her maroon support coach, the train was headed by Pegasus, a cream and brown Pullman Bar Car incorporating the Trianon Bar, followed by the historic red and cream The Royal Scot rake of 1950s-built British Railways Mark 1 passenger coaches from Riviera Trains. Pegasus was built in 1951 for the famous Golden Arrow boat train, and later rebuilt for heritage mainline use. Passengers The 2009 Winton train carried 170 passengers, including 22 of those originally rescued, who came to be known as 'Winton's Children'. Passengers included the first, second, or even third generation of descendants of the original children rescued by Sir Nicholas. The descendants of the children Sir Nicholas rescued had by 2009 grown to number 5,000 people. Passengers on the 2009 train also included Sir Nicholas's daughter Barbara. Other survivors who did not travel on the reunion train instead met it at Liverpool Street. Subsequent projects It was the hope of the project to follow up the 2009 Winton Train to London with other Winton Trains to other European cities, and for it to become a tradition. In May 2011, an exhibition entitled Winton's Trains opened in London at Liverpool Street station. References Further reading Winton's children: Lisa Dasch Profile: Nicholas Winton Sir Nicholas Winton, Schindler of Britain External links Footage of Green Anton and carriages in Prague Footage of the Czech leg Footage of the German leg In pictures: Winton Train departs Harwich Footage of the British leg and Liverpool Street station Lady Milena Grenfell-Baines talks about her memories of being on 'The Last Train Out Of Prague' Commemoration International response to the Holocaust Jewish emigration from Nazi Germany International named passenger trains Rail transport in Europe Rescue of Jews during the Holocaust Kindertransport 2009 in rail transport
Winton Train
Biology
2,513
61,348,137
https://en.wikipedia.org/wiki/Gamma-ray%20burst%20precursor
A gamma-ray burst precursor is a short X-ray outburst that precedes the main outburst of a gamma-ray burst. There is no consensus on the mechanism for this event, although several theories have been suggested. History The first gamma-ray precursor event was from GRB 900126, a long GRB. Immediately, because of the non-thermal nature of the emission, it was recognized that the mechanism of emission for this event was likely internal to the neutron star and not from the accretion disk. Systematic surveys were subsequently carried out to find the percentages of gamma-ray bursts that contained precursor events. Although it was found that 3% of bursts in the BATSE catalogue had a precursor event, a later review found that 20% of long GRBs have a precursor event, though slightly different search criteria were used in that review. That percentage was also found in another study, although others have found percentages hovering around 10%. Properties The precursor event occurs in a wide range of time frames before the main burst. This time can range up to hundreds of seconds. The precursors typically, but not always, show a non-thermal spectrum. Notably, the first gamma-ray precursor to be detected showed a thermal spectrum, with a peak in the X-ray wavelengths. There is no set definition of a precursor. Some allow a broad definition, where the precursor is merely a less-energetic event that happens before the main burst, while some impose additional restrictions, such as the precursor having a longer duration than the actual burst. This is the main reason varying percentages of precursors in samples have been found. Model No consensus model exists for gamma-ray burst precursors. According to the collapsar model, a long GRB results from the collision of jets with the material surrounding a collapsed star. In this model, the precursor could be generated as the jet becomes optically thin. Under this theory, it is difficult to explain the large time gap (hundreds of seconds in some cases) between the precursor and the gamma-ray burst. Various mechanisms for precursors being completely separate phenomena from the main GRB event have also been proposed. In one such scenario, the precursor occurs from the formation of a weak jet during the collapse of the progenitor. This theory explains the time gap between the precursor and burst, although no experimental evidence has differentiated it from others. References Gamma-ray bursts Gamma-ray astronomy
Gamma-ray burst precursor
Physics,Astronomy
500
45,315,837
https://en.wikipedia.org/wiki/Sleep%20in%20space
Sleeping in space is part of space medicine and mission planning, with impacts on the health, capabilities and morale of astronauts. Human spaceflight often requires astronaut crews to endure long periods without rest. Studies have shown that lack of sleep can cause fatigue that leads to errors while performing critical tasks. Also, individuals who are fatigued often cannot determine the degree of their impairment. Astronauts and ground crews frequently suffer from the effects of sleep deprivation and circadian rhythm disruption. Fatigue due to sleep loss, sleep shifting and work overload could cause performance errors that put space flight participants at risk of compromising mission objectives as well as the health and safety of those on board. Description Sleeping in space requires that astronauts sleep in a crew cabin, a small room about the size of a shower stall. They lie in a sleeping bag which is strapped to the wall. Astronauts have reported having nightmares and dreams, and snoring while sleeping in space. Sleeping and crew accommodations need to be well-ventilated. In the early 21st century, crew on the ISS were said to average about six hours of sleep per day. On the ground Chronic sleep loss can impact performance similarly to total sleep loss and recent studies have shown that cognitive impairment after 17 hours of wakefulness is similar to impairment from an elevated blood alcohol level. It has been suggested that work overload and circadian desynchronization may cause performance impairment. Those who perform shift work suffer from increased fatigue because the timing of their sleep/wake schedule is out of sync with natural daylight (see Shift work syndrome). They are more prone to auto and industrial accidents as well as a decreased quality of work and productivity on the job. Ground crews at NASA are also affected by slam shifting (sleep shifting) while supporting critical International Space Station operations during overnight shifts. In space During the Apollo program, it was discovered that adequate sleep in the small volumes available in the command module and Lunar Module was most easily achieved if (1) there was minimum disruption to the pre-flight circadian rhythm of the crew members; (2) all crew members in the spacecraft slept at the same time; (3) crew members were able to doff their suits before sleeping; (4) work schedules were organized – and revised as needed – to provide an undisturbed (radio quiet) 6-8 hour rest period during each 24-hour period; (5) in zero-g, loose restraints were provided to keep the crewmen from drifting; (6) on the lunar surface, a hammock or other form of bed was provided; (7) there was an adequate combination of cabin temperature and sleepwear for comfort; (8) the crew could dim instrument lights and either cover their eyes or exclude sunlight from the cabin; and (9) equipment such as pumps were adequately muffled. NASA management currently has limits in place to restrict the number of hours in which astronauts are to complete tasks and events. This is known as the "Fitness for Duty Standards". Space crews' current nominal number of work hours is 6.5 hours per day, and weekly work time should not exceed 48 hours. NASA defines critical workload overload for a space flight crew as 10-hour work days for 3 days per work week, or more than 60 hours per week (NASA STD-3001, Vol. 1). Astronauts have reported that periods of high-intensity workload can result in mental and physical fatigue. 
Studies from the medical and aviation industries have shown that increased and intense workloads combined with disturbed sleep and fatigue can lead to significant health issues and performance errors. Research suggests that the quality and quantity of astronauts' sleep in space are markedly reduced compared with sleep on Earth. The use of sleep-inducing medication could be indicative of poor sleep due to disturbances. Current space flight data shows that accuracy, response time and recall tasks are all affected by sleep loss, work overload, fatigue and circadian desynchronization. Factors that contribute to sleep loss and fatigue The most common factors that can affect the length and quality of sleep while in space include: noise, physical discomfort, voids, disturbances caused by other crew members, and temperature. An evidence-gathering effort is currently underway to evaluate the impact of these individual, physiological and environmental factors on sleep and fatigue. The effects of work-rest schedules, environmental conditions and flight rules and requirements on sleep, fatigue and performance are also being evaluated. Paul J. Weitz said that on Skylab he could not sleep vertically despite being weightless, so he removed the metal frame in his sleeping bag and slept horizontally on it. Factors that contribute to circadian desynchronization Exposure to light is the largest contributor to circadian desynchronization on board the ISS. Since the ISS orbits the Earth every 1.5 hours, the flight crew experiences 16 sunrises and sunsets per day. Slam shifting (sleep shifting) is also a considerable external factor that causes circadian desynchronization in the current space flight environment. Other factors that may cause circadian desynchronization in space: shift work, extended work hours, timeline changes, slam shifting (sleep shifting), prolonged light of the lunar day, the Mars sol on Earth, the Mars sol on Mars, and abnormal environmental cues (i.e. unnatural light exposure). Sleep loss, genetics, and space Both acute and chronic partial sleep loss occur frequently in space flight due to operational demands and for physiological reasons not yet entirely understood. Some astronauts are affected more than others. Earth-based research has demonstrated that sleep loss poses risks to astronaut performance, and that there are large, highly reliable individual differences in the magnitude of cognitive performance, fatigue and sleepiness, and sleep homeostatic vulnerability to acute total sleep deprivation and to chronic sleep restriction in healthy adults. The stable, trait-like (phenotypic) inter-individual differences observed in response to sleep loss point to an underlying genetic component. Indeed, data suggest that common genetic variations (polymorphisms) involved in sleep-wake, circadian, and cognitive regulation may serve as markers for prediction of inter-individual differences in sleep homeostatic and neurobehavioral vulnerability to sleep restriction in healthy adults. Identification of genetic predictors of differential vulnerability to sleep restriction will help identify astronauts most in need of fatigue countermeasures in space flight and inform medical standards for obtaining adequate sleep in space. Computer-based simulation information Biomathematical models are being developed to instantiate the biological dynamics of sleep need and circadian timing. These models could predict astronaut performance relative to fatigue and circadian desynchronization.
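The models themselves are not described here; as a rough, hedged illustration of how such a biomathematical model can work, the sketch below implements the classic Borbély two-process model (homeostatic sleep pressure S and a circadian drive C), with purely illustrative parameter values rather than those of any operational NASA tool.

import math

# A minimal sketch of a two-process fatigue model (after Borbely).
# All constants are illustrative; real fatigue models are fitted to
# experimental sleep data.

TAU_RISE = 18.2   # h, build-up time constant of sleep pressure while awake
TAU_FALL = 4.2    # h, decay time constant of sleep pressure during sleep
PERIOD = 24.0     # h, circadian period

def homeostatic(s_prev, dt, awake):
    """Process S: pressure rises towards 1.0 while awake, decays asleep."""
    if awake:
        return 1.0 - (1.0 - s_prev) * math.exp(-dt / TAU_RISE)
    return s_prev * math.exp(-dt / TAU_FALL)

def circadian(t_hours):
    """Process C: a simple sinusoidal alertness drive."""
    return math.cos(2.0 * math.pi * t_hours / PERIOD)

def simulate(hours=48.0, dt=0.5, wake=16.0, sleep=8.0):
    """Yield (time, predicted alertness), where alertness ~ C - S."""
    t, s = 0.0, 0.1
    while t < hours:
        awake = (t % (wake + sleep)) < wake
        s = homeostatic(s, dt, awake)
        yield t, circadian(t) - s
        t += dt

for t, alertness in simulate():
    if t % 4 == 0:   # print every 4 simulated hours
        print(f"t = {t:5.1f} h   alertness = {alertness:+.2f}")

Shifting the wake and sleep arguments out of step with the 24-hour circadian term gives a crude picture of the slam-shifting scenarios discussed above.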
See also Effects of sleep deprivation in space Effects of sleep deprivation on cognitive performance Shift work sleep disorder Skylab 4 Sleep deprivation References Sources http://www.asc-csa.gc.ca/eng/astronauts/living-sleeping.asp http://science.howstuffworks.com/sleep-in-space.htm http://www.theatlantic.com/technology/archive/2013/02/what-its-like-for-astronauts-to-sleep-in-space/273146/ NASA.gov Further reading External links NASA - Sleeping in Space The Atlantic - What It's Like for Astronauts to Sleep in Space - February 2013 Sleep Space medicine Human spaceflight
Sleep in space
Biology
1,478
74,905,142
https://en.wikipedia.org/wiki/Novolipetsk%20Metallurgical%20Plant
The Novolipetsk Metallurgical Plant, also known as NLMK, () is a Soviet and Russian metallurgical plant located in the Left Bank district of Lipetsk. It is the largest steel plant in Russia and was the 17th-largest in the world by production in 2018. Its full name is Public Joint Stock Company "Novolipetsk Metallurgical Plant". The Kursk Magnetic Anomaly, the main supplier of raw materials for the enterprise, is located 350 km away. It is part of Novolipetsk Steel. The Novolipetsk Metallurgical Plant was hit by Ukrainian drones on 24 February; the strike caused major damage to the plant. The nature of the enterprise places an increased burden on the environment. An audit initiated in 2006 by the Accounts Chamber found that "NLMK OJSC accounted for 88% of the volume of pollutant emissions in the Lipetsk Region". From 2007 to 2012, the plant implemented a number of investment projects for environmental protection, including in the areas of "Water" and "Air". In 2009, the plant completely stopped the discharge of industrial wastewater into the Voronezh River. According to the plant's environmental representatives, these measures markedly improved the environmental situation in Lipetsk. However, emissions of harmful substances and controversial environmental situations are still regularly observed at the plant. In 2019, the European Court of Human Rights communicated the complaint of 22 residents of Lipetsk, demanding that the Russian government take care of the environmental situation in the city (the complaint was filed in 2009). In 2020, the enterprise's harmful emissions amounted to 270 thousand tons, making it, along with the enterprises of Cherepovets, the leader in carbon monoxide emissions. As of 2021, Lipetsk is still among the ten cities in Russia with the worst air pollution. On the night of 28 June 2024, according to Russian media, seven Ukrainian drones struck the Novolipetsk Metallurgical Plant. No casualties were reported; however, the oxygen station and oxygen separation unit were damaged. Owner Vladimir Sergeevich Lisin is the main owner, holding 79.3% of NLMK shares through the Cyprus-based holding Fletcher Group. The remaining shares are in free circulation. See also Serafim Kolpakov References External links Companies based in Lipetsk Oblast Lipetsk NLMK Group Iron and steel mills
Novolipetsk Metallurgical Plant
Chemistry
542
65,310,418
https://en.wikipedia.org/wiki/Tropical%20Cyclone%20Heat%20Potential
Tropical Cyclone Heat Potential (TCHP) is a non-conventional oceanographic parameter influencing tropical cyclone intensity. The relationship between Sea Surface Temperature (SST) and cyclone intensity has long been studied in statistical intensity prediction schemes such as the National Hurricane Center Statistical Hurricane Intensity Prediction Scheme (SHIPS) and the Statistical Typhoon Intensity Prediction Scheme (STIPS). STIPS is run at the Naval Research Laboratory in Monterey, California, and is provided to the Joint Typhoon Warning Center (JTWC) to make cyclone intensity forecasts in the western North Pacific, South Pacific, and Indian Oceans. In most cyclone models, SST is the only oceanographic parameter representing heat exchange. However, cyclones have long been known to interact with the deeper layers of the ocean rather than the sea surface alone. Using a coupled ocean-atmosphere model, Mao et al. concluded that the rate of intensification and final intensity of a cyclone were sensitive to the initial spatial distribution of the mixed layer rather than to SST alone. Similarly, Namias and Cayan observed patterns of lower-atmospheric anomalies being more consistent with upper-ocean thermal structure variability than with SST. References Oceanography Oceanographic instrumentation
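TCHP is conventionally computed as the heat content of the water column between the sea surface and the depth of the 26 °C isotherm, TCHP = ρ c_p ∫ (T(z) − 26 °C) dz. The sketch below illustrates that integral under stated assumptions: the density and heat capacity are typical seawater values, and the temperature profile is invented for the example, not observational data.

RHO = 1025.0   # kg/m^3, typical seawater density
CP = 4000.0    # J/(kg K), approximate specific heat of seawater
T_REF = 26.0   # degrees C, the conventional TCHP reference isotherm

def tchp_kj_per_cm2(depths_m, temps_c):
    """Trapezoidal integral of rho*cp*(T - 26 C) from the surface down to
    the 26 C isotherm depth, returned in kJ/cm^2 (the unit commonly used
    in cyclone forecasting)."""
    total = 0.0   # running heat content in J/m^2
    for i in range(len(depths_m) - 1):
        z0, t0 = depths_m[i], temps_c[i]
        z1, t1 = depths_m[i + 1], temps_c[i + 1]
        if t0 <= T_REF:
            break
        if t1 < T_REF:   # interpolate the last layer down to the 26 C crossing
            frac = (t0 - T_REF) / (t0 - t1)
            z1, t1 = z0 + frac * (z1 - z0), T_REF
        total += RHO * CP * 0.5 * ((t0 - T_REF) + (t1 - T_REF)) * (z1 - z0)
    return total / 1e7   # 1 J/m^2 = 1e-7 kJ/cm^2

# Invented warm-pool profile: a 29 C mixed layer over a sharp thermocline.
print(tchp_kj_per_cm2([0, 50, 75, 100, 150], [29.0, 28.5, 27.0, 26.0, 24.0]))

For the profile above the result is roughly 80 kJ/cm2, a value typical of warm pools known to favour intensification.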
Tropical Cyclone Heat Potential
Physics,Technology,Engineering,Environmental_science
239
21,378,595
https://en.wikipedia.org/wiki/SeHCAT
SeHCAT (23-seleno-25-homotaurocholic acid, selenium homocholic acid taurine, or tauroselcholic acid) is a drug used in a clinical test to diagnose bile acid malabsorption. Development SeHCAT is a taurine-conjugated bile acid analog which was synthesized for use as a radiopharmaceutical to investigate in vivo the enterohepatic circulation of bile salts. By incorporating the gamma-emitter 75Se into the SeHCAT molecule, the retention in the body or the loss of this compound into the feces could be studied easily using a standard gamma camera, available in most clinical nuclear medicine departments. SeHCAT has been shown to be absorbed from the gut and excreted into the bile at the same rate as cholic acid, one of the major natural bile acids in humans. It undergoes secretion into the biliary tree, gallbladder and intestine in response to food, and is reabsorbed efficiently in the ileum, with kinetics similar to natural bile acids. It was soon shown to be the most convenient and accurate method available to assess and measure bile acid turnover in the intestine. SeHCAT testing was commercially developed by Amersham International Ltd (Amersham plc is now part of GE Healthcare's Medical Diagnostics division) for clinical use to investigate malabsorption in patients with diarrhea. This test has replaced 14C-labeled glycocholic acid (or taurocholic acid) breath tests and fecal bile acid measurements, which now have no place in the routine clinical investigation of malabsorption. Procedure A capsule containing radiolabelled 75SeHCAT (with 370 kBq of selenium-75 and less than 0.1 mg SeHCAT) is taken orally with water, to ensure passage of the capsule into the gastrointestinal tract. The physical half-life of 75Se is approximately 118 days; activity is adjusted to a standard reference date. Patients may be given instructions to fast prior to capsule administration; there is significant variation in clinical practice in this regard. The effective dose of radiation for an adult given 370 kBq of SeHCAT is 0.26 mSv. (For comparison, the radiation exposure from an abdominal CT scan is quoted at 5.3 mSv and annual background exposure in the UK at 1–3 mSv.) Measurements were originally performed with a whole-body counter but are usually performed now with an uncollimated gamma camera. The patient is scanned supine or prone with anterior and posterior acquisition from head to thigh 1 to 3 hours after taking the capsule. Scanning is repeated after 7 days. Background values are subtracted and care must be taken to avoid external sources of radiation in a nuclear medicine department. From these measurements, the percent retention of SeHCAT at 7 days is calculated. A 7-day SeHCAT retention value greater than 15% is considered to be normal, with values less than 15% signifying excessive bile acid loss, as found in bile acid malabsorption. With more frequent measurements, it is possible to calculate SeHCAT retention whole-body half-life; this is not routinely measured in a clinical setting. A half-life of greater than 2.8 days has been quoted as normal. Clinical use The SeHCAT test is used to investigate patients with suspected bile acid malabsorption, who usually experience chronic diarrhea, often passing watery feces 5 to 10 times each day. When the ileum has been removed following surgery, or is inflamed in Crohn's disease, the 7-day SeHCAT retention usually is abnormal, and most of these patients will benefit from treatment with bile acid sequestrants.
The enterohepatic circulation of bile acids is reduced in these patients with ileal abnormalities and, as the normal bile acid retention exceeds 95%, only a small degree of change is needed. Bile acid malabsorption can also be secondary to cholecystectomy, vagotomy and other disorders affecting intestinal motility or digestion such as radiation enteritis, celiac disease, and small intestinal bacterial overgrowth. A similar picture of chronic diarrhea, an abnormal SeHCAT retention and a response to bile acid sequestrants, in the absence of other disorders of the intestine, is characteristic of idiopathic bile acid malabsorption – also called primary bile acid diarrhea. These patients are frequently misdiagnosed as having the irritable bowel syndrome, as clinicians fail to recognize the condition, do not think of performing a SeHCAT test, or do not have it available. There have been at least 18 studies of the use of SeHCAT testing in diarrhea-predominant irritable bowel syndrome patients. When these data were combined, 32% of 1223 patients had a SeHCAT 7-day retention of less than 10%, and 80% of these reported a response to cholestyramine, a bile acid sequestrant. References External links GE Healthcare SeHCAT site Diagnostic gastroenterology Radiopharmaceuticals Gastroenterology Bile acids Cholanes Organoselenium compounds Selenium(−II) compounds Selenoethers
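The 7-day retention calculation described in the Procedure section can be sketched as follows. The counts are invented, and the background subtraction, geometric mean of the anterior and posterior views, and decay correction for the roughly 118-day half-life of 75Se reflect common practice that may differ between local protocols.

import math

HALF_LIFE_DAYS = 118.0   # approximate physical half-life of Se-75

def net_geometric_mean(anterior, posterior, bg_anterior, bg_posterior):
    """Background-subtracted geometric mean of the two gamma-camera views."""
    return math.sqrt(max(anterior - bg_anterior, 0.0)
                     * max(posterior - bg_posterior, 0.0))

def retention_percent(counts_day0, counts_day7, elapsed_days=7.0):
    """Whole-body retention at day 7, corrected for physical decay."""
    decay = 2.0 ** (-elapsed_days / HALF_LIFE_DAYS)
    return 100.0 * counts_day7 / (counts_day0 * decay)

baseline = net_geometric_mean(52000, 48000, 1500, 1400)   # invented counts
day7 = net_geometric_mean(6200, 5800, 1500, 1400)
r = retention_percent(baseline, day7)
print(f"7-day retention = {r:.1f}% "
      f"({'normal' if r > 15.0 else 'consistent with bile acid malabsorption'})")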
SeHCAT
Chemistry
1,102
2,797,809
https://en.wikipedia.org/wiki/Photographic%20magnitude
Photographic magnitude ( or ) is a measure of the relative brightness of a star or other astronomical object as imaged on a photographic film emulsion with a camera attached to a telescope. An object's apparent photographic magnitude depends on its intrinsic luminosity, its distance and any extinction of light by interstellar matter existing along the line of sight to the observer. Photographic observations have now been superseded by electronic photometry, such as charge-coupled device (CCD) cameras that convert the incoming light into an electric current by the photoelectric effect. Determination of magnitude is made using a photometer. Method Prior to photographic methods to determine magnitude, the brightness of celestial objects was determined by visual photometric methods. This was simply achieved with the human eye by comparing the brightness of an astronomical object with other nearby objects of known or fixed magnitude: especially regarding stars, planets and other planetary objects in the Solar System, variable stars and deep-sky objects. By the late 19th century, an improved measure of the apparent magnitude of astronomical objects was obtained by photography, often with a dedicated plate camera attached at the prime focus of the telescope. Images were made on orthochromatic photoemulsive film or plates. These photographs were created by exposing the film over a short or long period of time, whose total exposure length accumulates photons and reveals fainter stars or astronomical objects invisible to the human eye. Although stars viewed in the sky are approximate point sources, the process of collecting their light causes each star to appear as a small round disk, whose brightness is approximately proportional to the disk's diameter or its area. Simple measurement of the disk size can be optically judged by either a microscope or a specially designed astronomical microdensitometer. Early black and white photographic plates used silver halide emulsions that were more sensitive to the blue end of the visual spectrum. This caused bluer stars to have a brighter photographic magnitude than the equivalent visual magnitude: they appear brighter on the photograph than to the human eye or modern electronic photometers. Conversely, redder stars appear dimmer, and have a fainter photographic magnitude than their visual magnitude. For example, the red supergiant star KW Sagittarii has a photographic magnitude range of 11.0p to 13.2p but a visual magnitude range of about 8.5 to 11.0. It is also common for variable star charts to feature several blue magnitude (B) comparison stars, e.g. S Doradus and WZ Sagittae. Photographic photometric methods define magnitudes and colours of astronomical objects using astronomical photographic images as viewed through selected or standard coloured bandpass filters. This differs from other expressions of apparent visual magnitude observed by the human eye or obtained by photography that usually appear in older astronomical texts and catalogues. Early photographic images initially employed yellow coloured filters of inconsistent quality or stability, though later filter systems adopted more standardised bandpass filters which are still used with today's CCD photometers. Magnitudes and colour indices Apparent photographic magnitude is usually given as mpg or mp, or photovisual magnitudes mp or mpv. Absolute photographic magnitude is Mpg. These are different from the commonplace photometric systems (UBV, UBVRI or JHK) that are expressed with a capital letter, e.g.
'V" (mV), "B" (mB), etc. Other visual magnitudes estimated by the human eye are expressed using lower case letters. e.g. "v" or "b", etc. e.g. Visual magnitudes as mv. Hence, a 6th magnitude star might be stated as 6.0V, 6.0B, 6.0v or 6.0p. Because starlight is measured over a different range of wavelengths across the electromagnetic spectrum and are affected by different instrumental photometric sensitivities to light, they are not necessarily equivalent in numerical value. See also Apparent magnitude Absolute magnitude Araucaria Project Magnitude (astronomy) Photometry (astronomy) Surface brightness References Astrophysics
Photographic magnitude
Physics,Astronomy
821
17,327,236
https://en.wikipedia.org/wiki/Secnidazole
Secnidazole (trade names Flagentyl, Sindose, Secnil, Solosec) is a nitroimidazole anti-infective. Structurally, it is a methyl analogue of metronidazole. Effectiveness in the treatment of dientamoebiasis has been reported. It has also been tested against Atopobium vaginae. In the United States, secnidazole is FDA-approved for the treatment of bacterial vaginosis and trichomoniasis in adult women. References Further reading Nitroimidazole antibiotics Antiprotozoal agents
Secnidazole
Biology
125
53,123,871
https://en.wikipedia.org/wiki/Disordered%20Structure%20Refinement
The Disordered Structure Refinement program (DSR), written by Daniel Kratzert, is designed to simplify the modeling of molecular disorder in crystal structures using SHELXL by George M. Sheldrick. It has a database of approximately 120 standard solvent molecules and molecular moieties. These can be inserted into the crystal structure with little effort, while at the same time chemically meaningful bond and angle restraints are set. DSR was developed because the previous description of disorder in crystal structures with SHELXL was very lengthy and error-prone. Instead of editing large text files and defining restraints manually, this process is automated with DSR. Application DSR can be started from the command line. The call has the basic form: dsr [option] (SHELXL file) DSR is controlled with a special command in the corresponding SHELXL file. This has the following syntax: REM DSR PUT/REPLACE "fragment" WITH (atoms) ON (atoms or q-peaks) PART 1 OCC -21 = RESI DFIX The DSR command must always start with REM so that SHELXL does not interpret the line as one of its own commands. Which atom of the molecule fragment from the database corresponds to which atom or q-peak in the crystal structure is specified in the list following WITH and ON. By running dsr -r file.res the fragment fit is performed and the restraints are transferred. Graphical user interface Since 2016 ShelXle has a graphical interface to DSR. Most commands of the command-line version can be executed there. In order to transfer a fragment into a structure, three atoms or q-peaks each have to be selected in ShelXle and in the DSR GUI to specify the position of the fragment. The 3D view of the fragment then shows a preview of the subsequent fragment fit. Programming DSR is written entirely in Python and therefore runs on any operating system that supports Python. It is under the free Beerware license and can be downloaded free of charge and changed as desired. References External links Crystallography software Python (programming language) software
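As a concrete illustration of this syntax, the following hypothetical instruction (the fragment name toluene and all atom and q-peak labels are invented for the example and must match the actual database entry and structure) would fit a fragment onto three residual-density peaks as the second component of a disorder:

REM DSR PUT toluene WITH C1 C2 C3 ON Q1 Q2 Q3 PART 2 OCC -21 RESI DFIX

Running dsr -r structure.res would then perform the fragment fit, place the new atoms in PART 2 with the coded occupancy -21 (in SHELXL convention, 1 minus free variable 2, the usual coding for the second component of a two-part disorder), enclose them in a residue, and write DFIX restraints.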
Disordered Structure Refinement
Chemistry,Materials_science
440
70,202,879
https://en.wikipedia.org/wiki/Gitee
Gitee () is a proprietary online forge that allows software version control using Git and is intended primarily for the hosting of open source software. It is a fork of Gitea and uses a compatible API. It was launched by Shenzhen-based OSChina in 2013. Gitee claims to have more than 10 million repositories and 5 million users. Gitee was chosen by the Ministry of Industry and Information Technology of the Chinese government to build an "independent, open-source code hosting platform for China." Censorship On May 18, 2022, Gitee announced that all code would be manually reviewed before being made publicly available. Gitee did not specify a reason for the change, though there was widespread speculation it was ordered by the Chinese government amid increasing online censorship in China. See also GitHub GitLab Bitbucket References External links 2013 software Git (software) Version control Bug and issue tracking software Computing websites Collaborative projects Project hosting websites Project management software Free and open-source software
Gitee
Technology,Engineering
202
1,908,527
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20spectroscopy
Nuclear magnetic resonance spectroscopy, most commonly known as NMR spectroscopy or magnetic resonance spectroscopy (MRS), is a spectroscopic technique based on re-orientation of atomic nuclei with non-zero nuclear spins in an external magnetic field. This re-orientation occurs with absorption of electromagnetic radiation in the radio frequency region from roughly 4 to 900 MHz, which depends on the isotopic nature of the nucleus and increases in proportion to the strength of the external magnetic field. Notably, the resonance frequency of each NMR-active nucleus depends on its chemical environment. As a result, NMR spectra provide information about individual functional groups present in the sample, as well as about connections between nearby nuclei in the same molecule. As the NMR spectra are unique or highly characteristic to individual compounds and functional groups, NMR spectroscopy is one of the most important methods to identify molecular structures, particularly of organic compounds. The principle of NMR usually involves three sequential steps: The alignment (polarization) of the magnetic nuclear spins in an applied, constant magnetic field B0. The perturbation of this alignment of the nuclear spins by a weak oscillating magnetic field, usually referred to as a radio-frequency (RF) pulse. Detection and analysis of the electromagnetic waves emitted by the nuclei of the sample as a result of this perturbation. Similarly, biochemists use NMR to identify proteins and other complex molecules. Besides identification, NMR spectroscopy provides detailed information about the structure, dynamics, reaction state, and chemical environment of molecules. The most common types of NMR are proton and carbon-13 NMR spectroscopy, but it is applicable to any kind of sample that contains nuclei possessing spin. NMR spectra are unique, well-resolved, analytically tractable and often highly predictable for small molecules. Different functional groups are clearly distinguishable, and identical functional groups with differing neighboring substituents still give distinguishable signals. NMR has largely replaced traditional wet chemistry tests such as color reagents or typical chromatography for identification. The most significant drawback of NMR spectroscopy is its poor sensitivity (compared to other analytical methods, such as mass spectrometry). Typically 2–50 mg of a substance is required to record a decent-quality NMR spectrum. The NMR method is non-destructive, thus the substance may be recovered. To obtain high-resolution NMR spectra, solid substances are usually dissolved to make liquid solutions, although solid-state NMR spectroscopy is also possible. The timescale of NMR is relatively long, and thus it is not suitable for observing fast phenomena, producing only an averaged spectrum. Although large amounts of impurities do show on an NMR spectrum, better methods exist for detecting impurities, as NMR is inherently not very sensitive, though sensitivity is higher at higher frequencies. Correlation spectroscopy is a development of ordinary NMR. In two-dimensional NMR, the emission is centered around a single frequency, and correlated resonances are observed. This allows the neighboring substituents of the observed functional group to be identified, enabling unambiguous identification of the resonances. There are also more complex 3D and 4D methods and a variety of methods designed to suppress or amplify particular types of resonances.
In nuclear Overhauser effect (NOE) spectroscopy, the relaxation of the resonances is observed. As NOE depends on the proximity of the nuclei, quantifying the NOE for each nucleus allows construction of a three-dimensional model of the molecule. NMR spectrometers are relatively expensive; universities usually have them, but they are less common in private companies. Between 2000 and 2015, an NMR spectrometer cost around 0.5–5 million USD. Modern NMR spectrometers have a very strong, large and expensive liquid-helium-cooled superconducting magnet, because resolution directly depends on magnetic field strength. A higher magnetic field also improves the sensitivity of NMR spectroscopy, which depends on the population difference between the two nuclear levels, which increases exponentially with the magnetic field strength. Less expensive machines using permanent magnets and lower resolution are also available, which still give sufficient performance for certain applications such as reaction monitoring and quick checking of samples. There are even benchtop nuclear magnetic resonance spectrometers. NMR spectra of protons (1H nuclei) can be observed even in the Earth's magnetic field. Low-resolution NMR produces broader peaks, which can easily overlap one another, causing issues in resolving complex structures. The use of higher-strength magnetic fields results in better sensitivity and higher resolution of the peaks, and is preferred for research purposes. History Credit for the discovery of NMR goes to Isidor Isaac Rabi, who received the Nobel Prize in Physics in 1944. The Purcell group at Harvard University and the Bloch group at Stanford University independently developed NMR spectroscopy in the late 1940s and early 1950s. Edward Mills Purcell and Felix Bloch shared the 1952 Nobel Prize in Physics for their inventions. NMR-active criteria The key determinant of NMR activity in atomic nuclei is the nuclear spin quantum number (I). This intrinsic quantum property, similar to an atom's "spin", characterizes the angular momentum of the nucleus. To be NMR-active, a nucleus must have a non-zero nuclear spin (I ≠ 0). It is this non-zero spin that enables nuclei to interact with external magnetic fields and show signals in NMR. Atoms with an odd sum of protons and neutrons exhibit half-integer values for the nuclear spin quantum number (I = 1/2, 3/2, 5/2, and so on). These atoms are NMR-active because they possess non-zero nuclear spin. Atoms with an even sum but both an odd number of protons and an odd number of neutrons exhibit integer nuclear spins (I = 1, 2, 3, and so on). Conversely, atoms with an even number of both protons and neutrons have a nuclear spin quantum number of zero (I = 0), and therefore are not NMR-active. NMR-active nuclei, particularly those with a spin quantum number of 1/2, are of great significance in NMR spectroscopy. Examples include 1H, 13C, 15N, and 31P. Some nuclei with very high spin (such as 9/2 for 99Tc) are also extensively studied with NMR spectroscopy. Main aspects of NMR techniques Resonant frequency When placed in a magnetic field, NMR-active nuclei (such as 1H or 13C) absorb electromagnetic radiation at a frequency characteristic of the isotope. The resonant frequency, energy of the radiation absorbed, and the intensity of the signal are proportional to the strength of the magnetic field. For example, in a 21-tesla magnetic field, hydrogen nuclei (protons) resonate at 900 MHz.
It is common to refer to a 21 T magnet as a 900 MHz magnet, since hydrogen is the most common nucleus detected. However, different nuclei will resonate at different frequencies at this field strength in proportion to their nuclear magnetic moments. Sample handling An NMR spectrometer typically consists of a spinning sample-holder inside a very strong magnet, a radio-frequency emitter, and a receiver with a probe (an antenna assembly) that goes inside the magnet to surround the sample, optionally gradient coils for diffusion measurements, and electronics to control the system. Spinning the sample is usually necessary to average out diffusional motion; however, some experiments call for a stationary sample when solution movement is an important variable. For instance, measurements of diffusion constants (diffusion ordered spectroscopy or DOSY) are done using a stationary sample with spinning off, and flow cells can be used for online analysis of process flows. Deuterated solvents The vast majority of molecules in a solution are solvent molecules, and most regular solvents are hydrocarbons and so contain NMR-active hydrogen-1 nuclei. In order to avoid having the signals from solvent hydrogen atoms overwhelm the experiment and interfere in analysis of the dissolved analyte, deuterated solvents are used where >99% of the protons are replaced with deuterium (hydrogen-2). The most widely used deuterated solvent is deuterochloroform (CDCl3), although other solvents may be used for various reasons, such as solubility of a sample, desire to control hydrogen bonding, or melting or boiling points. The chemical shifts of a molecule change slightly between solvents, and therefore the solvent used is almost always reported with chemical shifts. Proton NMR spectra are often calibrated against the known solvent residual proton peak as an internal standard instead of adding tetramethylsilane (TMS), which is conventionally defined as having a chemical shift of zero. Shim and lock To detect the very small frequency shifts due to nuclear magnetic resonance, the applied magnetic field must be extremely uniform throughout the sample volume. High-resolution NMR spectrometers use shims to adjust the homogeneity of the magnetic field to parts per billion (ppb) in a volume of a few cubic centimeters. In order to detect and compensate for inhomogeneity and drift in the magnetic field, the spectrometer maintains a "lock" on the solvent deuterium frequency with a separate lock unit, which is essentially an additional transmitter and RF processor tuned to the lock nucleus (deuterium) rather than the nuclei of the sample of interest. In modern NMR spectrometers shimming is adjusted automatically, though in some cases the operator has to optimize the shim parameters manually to obtain the best possible resolution. Acquisition of spectra Upon excitation of the sample with a radio frequency (60–1000 MHz) pulse, a nuclear magnetic resonance response, a free induction decay (FID), is obtained. It is a very weak signal and requires sensitive radio receivers to pick it up. A Fourier transform is carried out to extract the frequency-domain spectrum from the raw time-domain FID. A spectrum from a single FID has a low signal-to-noise ratio, but it improves readily with averaging of repeated acquisitions. Good 1H NMR spectra can be acquired with 16 repeats, which takes only minutes. However, for elements heavier than hydrogen, the relaxation time is rather long, e.g. around 8 seconds for 13C.
Thus, acquisition of quantitative heavy-element spectra can be time-consuming, taking tens of minutes to hours. Following the pulse, the nuclei are, on average, excited to a certain angle vs. the spectrometer magnetic field. The extent of excitation can be controlled with the pulse width, typically about 3–8 μs for the optimal 90° pulse. The pulse width can be determined by plotting the (signed) intensity as a function of pulse width. It follows a sine curve and, accordingly, changes sign at pulse widths corresponding to 180° and 360° pulses. Decay times of the excitation, typically measured in seconds, depend on the effectiveness of relaxation, which is faster for lighter nuclei and in solids, slower for heavier nuclei and in solutions, and can be very long in gases. If the second excitation pulse is sent prematurely before the relaxation is complete, the average magnetization vector has not decayed to ground state, which affects the strength of the signal in an unpredictable manner. In practice, the peak areas are then not proportional to the stoichiometry; only the presence, but not the amount, of functional groups can be discerned. An inversion recovery experiment can be done to determine the relaxation time and thus the required delay between pulses. A 180° pulse, an adjustable delay, and a 90° pulse are transmitted. When the 90° pulse exactly cancels out the signal, the delay corresponds to the time needed for 90° of relaxation. Inversion recovery is worthwhile for quantitative 13C, 2D and other time-consuming experiments. Spectral interpretation NMR signals are ordinarily characterized by three variables: chemical shift, spin–spin coupling, and relaxation time. Chemical shift The energy difference ΔE between nuclear spin states is proportional to the magnetic field (Zeeman effect). ΔE is also sensitive to the electronic environment of the nucleus, giving rise to what is known as the chemical shift, δ. The simplest types of NMR graphs are plots of the different chemical shifts of the nuclei being studied in the molecule. The value of δ is often expressed in terms of "shielding": shielded nuclei have higher ΔE. The range of δ values is called the dispersion. It is rather small for 1H signals, but much larger for other nuclei. NMR signals are reported relative to a reference signal, usually that of TMS (tetramethylsilane). Additionally, since the distribution of NMR signals is field-dependent, these frequencies are divided by the spectrometer frequency. However, since we are dividing Hz by MHz, the resulting number would be too small, and thus it is multiplied by a million. This operation therefore gives a locator number called the "chemical shift" with units of parts per million. The chemical shift provides structural information. The interpretation of chemical shifts (and J's, see below) is called assigning the spectrum. For diamagnetic organic compounds, assignments of 1H and 13C NMR spectra are extremely sophisticated because of the large databases and easy computational tools. In general, chemical shifts for protons are highly predictable, since the shifts are primarily determined by shielding effects (electron density).
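As a worked illustration of the ppm arithmetic just described (the spectrometer frequency and signal offset are invented for the example):

# Chemical shift: offset from the reference, divided by the spectrometer
# frequency, times 1e6. The numbers below are illustrative.

spectrometer_hz = 400e6    # a "400 MHz" (about 9.4 T) proton spectrometer
signal_offset_hz = 2400.0  # signal observed 2400 Hz downfield of TMS (0 ppm)

delta_ppm = signal_offset_hz / spectrometer_hz * 1e6
print(delta_ppm)   # 6.0; the same signal would sit at 3600 Hz on a 600 MHz magnet

Because the offset in Hz scales with the field while the spectrometer frequency does too, the resulting ppm value is field-independent.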
The chemical shifts of many heavier nuclei are more strongly influenced by other factors, including excited states (the "paramagnetic" contribution to the shielding tensor). This paramagnetic contribution, which is unrelated to paramagnetism, not only disrupts trends in chemical shifts, complicating assignments, but also gives rise to very large chemical shift ranges. For example, most 1H NMR signals of organic compounds fall within 15 ppm, whereas for 31P NMR the range spans hundreds of ppm. In paramagnetic NMR spectroscopy, the samples are paramagnetic, i.e. they contain unpaired electrons. The paramagnetism gives rise to very diverse chemical shifts; in 1H NMR spectroscopy, the chemical shift range can span up to thousands of ppm. J-coupling Some of the most useful information for structure determination in a one-dimensional NMR spectrum comes from J-coupling, or scalar coupling (a special case of spin–spin coupling), between NMR-active nuclei. This coupling arises from the interaction of different spin states through the chemical bonds of a molecule and results in the splitting of NMR signals. For a proton, the local magnetic field is slightly different depending on whether an adjacent nucleus points towards or against the spectrometer magnetic field, which gives rise to two signals per proton instead of one. These splitting patterns can be complex or simple and, likewise, can be straightforwardly interpretable or deceptive. The coupling provides detailed insight into the connectivity of atoms in a molecule. The multiplicity of the splitting depends on the spin of the coupled nuclei and on the number of such nuclei involved in the coupling. Coupling to n equivalent spin-1/2 nuclei splits the signal into an n + 1 multiplet with intensity ratios following Pascal's triangle (1:1, 1:2:1, 1:3:3:1, and so on). Coupling to additional spins leads to further splittings of each component of the multiplet; e.g., coupling to two different spin-1/2 nuclei with significantly different coupling constants leads to a doublet of doublets (abbreviation: dd). Note that coupling between nuclei that are chemically equivalent (that is, have the same chemical shift) has no effect on the NMR spectra, and couplings between nuclei that are distant (usually more than 3 bonds apart for protons in flexible molecules) are usually too small to cause observable splittings. Long-range couplings over more than three bonds can often be observed in cyclic and aromatic compounds, leading to more complex splitting patterns. For example, in the proton spectrum for ethanol, the CH3 group is split into a triplet with an intensity ratio of 1:2:1 by the two neighboring CH2 protons. Similarly, the CH2 is split into a quartet with an intensity ratio of 1:3:3:1 by the three neighboring CH3 protons. In principle, the two CH2 protons would also be split again into a doublet to form a doublet of quartets by the hydroxyl proton, but intermolecular exchange of the acidic hydroxyl proton often results in a loss of coupling information. Coupling to any spin-1/2 nucleus, such as phosphorus-31 or fluorine-19, works in this fashion (although the magnitudes of the coupling constants may be very different). But the splitting patterns differ from those described above for nuclei with spin greater than 1/2, because the spin quantum number has more than two possible values. For instance, coupling to deuterium (a spin-1 nucleus) splits the signal into a 1:1:1 triplet, because spin 1 has three spin states.
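This counting rule can be written out explicitly. The sketch below computes multiplet intensity patterns by convolving the equally likely spin states of each coupled nucleus; it reproduces the Pascal's-triangle ratios for spin-1/2 and the 1:1:1 deuterium triplet:

```python
# Multiplet patterns from coupling to n equivalent nuclei of spin I.
# Each coupled nucleus contributes 2I + 1 equally likely spin states, so
# the line pattern is the n-fold convolution of a uniform distribution;
# for I = 1/2 this reproduces the Pascal's-triangle intensities.

def multiplet(n: int, two_i: int = 1) -> list[int]:
    """Relative line intensities for coupling to n equivalent nuclei.
    two_i is 2*I, e.g. 1 for spin-1/2 (1H), 2 for spin-1 (2H)."""
    pattern = [1]
    states = two_i + 1  # number of spin states per coupled nucleus
    for _ in range(n):
        new = [0] * (len(pattern) + states - 1)
        for i, amp in enumerate(pattern):
            for s in range(states):
                new[i + s] += amp
        pattern = new
    return pattern

print(multiplet(2))           # [1, 2, 1]    triplet (CH3 next to CH2)
print(multiplet(3))           # [1, 3, 3, 1] quartet (CH2 next to CH3)
print(multiplet(1, two_i=2))  # [1, 1, 1]    triplet from one deuterium
```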
Similarly, a spin-3/2 nucleus such as 35Cl splits a signal into a 1:1:1:1 quartet, and so on. Coupling, combined with the chemical shift (and the integration for protons), tells us not only about the chemical environment of the nuclei, but also about the number of neighboring NMR-active nuclei within the molecule. In more complex spectra with multiple peaks at similar chemical shifts, or in spectra of nuclei other than hydrogen, coupling is often the only way to distinguish different nuclei. The magnitude of the coupling (the coupling constant J) reflects how strongly the nuclei are coupled to each other. For simple cases, it depends on the bonding distance between the nuclei, the magnetic moments of the nuclei, and the dihedral angle between them. Second-order (or strong) coupling The above description assumes that the coupling constant is small in comparison with the difference in NMR frequencies between the inequivalent spins. If the shift separation decreases (or the coupling strength increases), the multiplet intensity patterns are first distorted, and then become more complex and less easily analyzed (especially if more than two spins are involved). Some peaks in a multiplet are intensified at the expense of the remainder, which can almost disappear into the background noise, although the integrated area under the peaks remains constant. In most high-field NMR, however, the distortions are usually modest, and the characteristic distortions ("roofing") can in fact help to identify related peaks. Some of these patterns can be analyzed with the method published by John Pople, though it has limited scope. Second-order effects decrease as the frequency difference between multiplets increases, so that high-field (i.e. high-frequency) NMR spectra display less distortion than lower-frequency spectra. Early spectra at 60 MHz were more prone to distortion than spectra from later machines typically operating at frequencies of 200 MHz or above. Furthermore, J-coupling can be used to identify ortho, meta, and para substitution of an aromatic ring: ortho coupling is the strongest, at about 15 Hz; meta coupling follows, with an average of about 2 Hz; and para coupling is usually too small to be useful. Magnetic inequivalence More subtle effects can occur if chemically equivalent spins (i.e., nuclei related by symmetry and so having the same NMR frequency) have different coupling relationships to external spins. Spins that are chemically equivalent but are not indistinguishable (based on their coupling relationships) are termed magnetically inequivalent. For example, the 4 H sites of 1,2-dichlorobenzene divide into two chemically equivalent pairs by symmetry, but an individual member of one of the pairs has different couplings to the spins making up the other pair. Magnetic inequivalence can lead to highly complex spectra, which can only be analyzed by computational modeling. Such effects are more common in NMR spectra of aromatic and other non-flexible systems, while conformational averaging about C−C bonds in flexible molecules tends to equalize the couplings between protons on adjacent carbons, reducing problems with magnetic inequivalence. Correlation spectroscopy Correlation spectroscopy is one of several types of two-dimensional nuclear magnetic resonance (NMR) spectroscopy, or 2D-NMR. This type of NMR experiment is best known by its acronym, COSY.
Other types of two-dimensional NMR include J-spectroscopy, exchange spectroscopy (EXSY), nuclear Overhauser effect spectroscopy (NOESY), total correlation spectroscopy (TOCSY), and heteronuclear correlation experiments such as HSQC, HMQC, and HMBC. In correlation spectroscopy, emission is centered on the peak of an individual nucleus; if its magnetic field is correlated with another nucleus by through-bond (COSY, HSQC, etc.) or through-space (NOE) coupling, a response can also be detected at the frequency of the correlated nucleus. Two-dimensional NMR spectra provide more information about a molecule than one-dimensional NMR spectra and are especially useful in determining the structure of a molecule, particularly for molecules that are too complicated to work with using one-dimensional NMR. The first two-dimensional experiment, COSY, was proposed by Jean Jeener, a professor at Université Libre de Bruxelles, in 1971. This experiment was later implemented by Walter P. Aue, Enrico Bartholdi and Richard R. Ernst, who published their work in 1976. Solid-state nuclear magnetic resonance A variety of physical circumstances do not allow molecules to be studied in solution, while at the same time ruling out atomic-level study by most other spectroscopic techniques. In solid-phase media, such as crystals, microcrystalline powders, gels, and anisotropic solutions, it is in particular the dipolar coupling and the chemical shift anisotropy that become dominant in the behaviour of the nuclear spin systems. In conventional solution-state NMR spectroscopy, these additional interactions would lead to a significant broadening of spectral lines. A variety of techniques allows high-resolution conditions to be established that, at least for 13C spectra, can be comparable to solution-state NMR spectra. Two important concepts for high-resolution solid-state NMR spectroscopy are the limitation of possible molecular orientations by sample orientation, and the reduction of anisotropic nuclear magnetic interactions by sample spinning. Of the latter approach, fast spinning around the magic angle (about 54.74°, the angle at which the orientation-dependent term 3cos²θ − 1 vanishes) is a very prominent method when the system comprises spin-1/2 nuclei. Spinning rates of about 20 kHz are used, which demands special equipment. A number of intermediate techniques, with samples of partial alignment or reduced mobility, are currently used in NMR spectroscopy. Applications in which solid-state NMR effects occur are often related to structure investigations of membrane proteins, protein fibrils, or all kinds of polymers, and to chemical analysis in inorganic chemistry, but also include "exotic" applications like plant leaves and fuel cells. For example, Rahmani et al. studied the effect of pressure and temperature on the self-assembly of bicellar structures using deuterium NMR spectroscopy. Solid-state NMR is also useful for understanding metal structure in X-ray amorphous metal samples (such as nanosized refractory 99Tc metal). Biomolecular NMR spectroscopy Proteins Much of the innovation within NMR spectroscopy has been within the field of protein NMR spectroscopy, an important technique in structural biology. A common goal of these investigations is to obtain high-resolution 3-dimensional structures of the protein, similar to what can be achieved by X-ray crystallography. In contrast to X-ray crystallography, NMR spectroscopy is usually limited to proteins smaller than 35 kDa, although larger structures have been solved.
NMR spectroscopy is often the only way to obtain high-resolution information on partially or wholly intrinsically unstructured proteins. It is now a common tool for the determination of conformation-activity relationships, in which the structure before and after interaction with, for example, a drug candidate is compared with its known biochemical activity. Proteins are orders of magnitude larger than the small organic molecules discussed earlier in this article, but the basic NMR techniques and some of the NMR theory also apply. Because of the much higher number of atoms present in a protein molecule in comparison with a small organic compound, the basic 1D spectra become crowded with overlapping signals to an extent where direct spectral analysis becomes untenable. Therefore, multidimensional (2, 3 or 4D) experiments have been devised to deal with this problem. To facilitate these experiments, it is desirable to isotopically label the protein with 13C and 15N, because the predominant naturally occurring isotope 12C is not NMR-active and the nuclear quadrupole moment of the predominant naturally occurring 14N isotope prevents high-resolution information from being obtained from this nitrogen isotope. The most important method used for structure determination of proteins utilizes NOE experiments to measure distances between atoms within the molecule. Subsequently, the distances obtained are used to generate a 3D structure of the molecule by solving a distance geometry problem. NMR can also be used to obtain information on the dynamics and conformational flexibility of different regions of a protein. Nucleic acids Nucleic acid NMR is the use of NMR spectroscopy to obtain information about the structure and dynamics of polynucleic acids, such as DNA or RNA. Nearly half of all known RNA structures have been determined by NMR spectroscopy. Nucleic acid and protein NMR spectroscopy are similar, but differences exist. Nucleic acids have a smaller percentage of hydrogen atoms, which are the atoms usually observed in NMR spectroscopy, and because nucleic acid double helices are stiff and roughly linear, they do not fold back on themselves to give "long-range" correlations. The types of NMR usually done with nucleic acids are 1H or proton NMR, 13C NMR, 15N NMR, and 31P NMR. Two-dimensional NMR methods are almost always used, such as correlation spectroscopy (COSY) and total correlation spectroscopy (TOCSY) to detect through-bond nuclear couplings, and nuclear Overhauser effect spectroscopy (NOESY) to detect couplings between nuclei that are close to each other in space. Parameters taken from the spectrum, mainly NOESY cross-peaks and coupling constants, can be used to determine local structural features such as glycosidic bond angles, dihedral angles (using the Karplus equation), and sugar pucker conformations. For large-scale structure, these local parameters must be supplemented with other structural assumptions or models, because errors add up as the double helix is traversed, and unlike with proteins, the double helix does not have a compact interior and does not fold back upon itself. NMR is also useful for investigating nonstandard geometries such as bent helices, non-Watson–Crick base pairing, and coaxial stacking. It has been especially useful in probing the structure of natural RNA oligonucleotides, which tend to adopt complex conformations such as stem-loops and pseudoknots.
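The Karplus equation mentioned above relates a three-bond (vicinal) coupling constant to the intervening dihedral angle. The sketch below uses one common simple parameterization, J(φ) = 7 − cos φ + 5 cos 2φ (in Hz); the coefficients vary from system to system and must be calibrated, so they should be treated as illustrative values rather than universal constants:

```python
import math

# Karplus relation, one simple parameterization for vicinal H-H coupling:
#   3J(phi) = 7 - cos(phi) + 5 * cos(2 * phi)   [Hz]
# The coefficients are illustrative; real applications refit them to the
# substituents and nuclei involved.

def karplus_j(phi_degrees: float) -> float:
    """Predicted three-bond coupling constant (Hz) for a dihedral angle."""
    phi = math.radians(phi_degrees)
    return 7.0 - math.cos(phi) + 5.0 * math.cos(2.0 * phi)

for phi in (0, 60, 90, 120, 180):
    print(f"phi = {phi:3d} deg -> 3J = {karplus_j(phi):4.1f} Hz")
# Large couplings near 0 deg and 180 deg, small near 90 deg, as expected.
```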
NMR is also useful for probing the binding of nucleic acid molecules to other molecules, such as proteins or drugs, by seeing which resonances are shifted upon binding of the other molecule. Carbohydrates Carbohydrate NMR spectroscopy addresses questions on the structure and conformation of carbohydrates. The analysis of carbohydrates by 1H NMR is challenging due to the limited variation in functional groups, which leads to 1H resonances concentrated in narrow bands of the NMR spectrum; in other words, there is poor spectral dispersion. The anomeric proton resonances are segregated from the others because the anomeric carbons bear two oxygen atoms. For smaller carbohydrates, the dispersion of the anomeric proton resonances facilitates the use of 1D TOCSY experiments to investigate the entire spin systems of individual carbohydrate residues. Drug discovery Knowledge of energy minima and rotational energy barriers of small molecules in solution can be obtained using NMR, e.g. by looking at free-ligand conformational preferences and conformational dynamics, respectively. This can be used to guide drug-design hypotheses, since experimental and calculated values are comparable. For example, AstraZeneca uses NMR for its oncology research and development. High-pressure NMR spectroscopy One of the first scientific works devoted to the use of pressure as a variable parameter in NMR experiments was that of J. Jonas, published in the journal Annual Review of Biophysics in 1994. The use of high pressures in NMR spectroscopy was primarily driven by the desire to study biochemical systems, where the use of high pressure allows controlled changes in intermolecular interactions without significant perturbations. Early attempts to solve scientific problems using high-pressure NMR spectroscopy were often difficult to reproduce, owing to the difficulty of building and maintaining high-pressure equipment; the most common types of NMR cells used for high-pressure NMR experiments have been described in the literature. High-pressure NMR spectroscopy has been widely used for a variety of applications, mainly related to the characterization of the structure of protein molecules. However, in recent years, software and design solutions have been proposed to characterize the chemical and spatial structures of small molecules in a supercritical fluid environment, using state parameters as a driving force for such changes. See also Quantum mechanics of nuclear magnetic resonance (NMR) spectroscopy Related methods of nuclear spectroscopy: Mössbauer effect Muon spin spectroscopy Perturbed angular correlation References Further reading External links The Basics of NMR - A non-technical overview of NMR theory, equipment, and techniques by Dr. Joseph Hornak, Professor of Chemistry at RIT GAMMA and PyGAMMA Libraries - GAMMA is an open source C++ library written for the simulation of Nuclear Magnetic Resonance Spectroscopy experiments. PyGAMMA is a Python wrapper around GAMMA. relax Software for the analysis of NMR dynamics Vespa - VeSPA (Versatile Simulation, Pulses and Analysis) is a free software suite composed of three Python applications. These GUI based tools are for magnetic resonance (MR) spectral simulation, RF pulse design, and spectral processing and analysis of MR data.
Nuclear magnetic resonance spectroscopy
Physics,Chemistry
6,455
1,628,643
https://en.wikipedia.org/wiki/GM-1
GM-1 (Göring Mischung 1) was a system for injecting nitrous oxide into aircraft engines that was used by the Luftwaffe in World War II. It increased the amount of oxygen in the fuel mixture and thereby improved high-altitude performance. GM-1 was used on several modifications of existing fighter designs to counter the increasing performance of Allied fighters at higher altitudes. A different system for low-altitude boost, known as MW 50, was also used, although GM-1 and MW 50 were rarely used on the same engine. MW 50 injected a methanol-water mixture into the cylinders to cool the charge; cooling makes the air denser, allowing more air into each cylinder for a given volume, which is the same principle that intercoolers use. GM-1 was developed in 1940 by Otto Lutz to improve high-altitude performance. It could be used by fighters, destroyers, bombers and reconnaissance aircraft, though its first use was in the Bf 109E/Z fighter. Originally, the nitrous oxide was liquefied under high pressure and stored in several high-pressure vessels, until it was found that low-temperature liquefied nitrous oxide gave better performance due to improved charge cooling; it could also be stored and handled more conveniently and was less vulnerable to enemy fire. GM-1 was typically sprayed in liquid form directly into the supercharger intake from two jets of different bore while, at the same time, the fuel flow was increased to take advantage of the additional oxygen from the nitrous oxide. The jets could be operated individually or in combination, yielding three steps of power increase, for example 120/240/360 HP at GM-1 flow rates of 60, 100 and 150 grams per second. The development of a continuously variable injection system was considered, but apparently it never saw operational use. GM-1 was initially intended as standard equipment for the Luftwaffe, but in operational service it was found to have some drawbacks: the additional weight of the equipment reduced performance on all missions, while the system was only of use on those missions flown at very high altitude. GM-1 also became less attractive than originally imagined when, in 1943, the previous trend towards ever-increasing combat altitudes ended. While GM-1 saw little use in the second half of the war, the Focke-Wulf Ta 152H, which had been developed as a dedicated high-altitude interceptor, received a GM-1 system to provide it with superior performance at high altitude. The Ta 152H was one of the few designs to support both GM-1 and MW 50. Similar systems have been used in racing cars and hot rods. See also Nitrous oxide engine Citations Bibliography External links Messerschmitt Bf 109 Site - Short Operating Manual for pilots and ground personnel for the GM-1 system in the Bf 109 G (in German). Aircraft engines
GM-1
Technology
590
10,353,431
https://en.wikipedia.org/wiki/Massachusetts%20statistical%20areas
The U.S. state of Massachusetts currently has 12 statistical areas that have been delineated by the Office of Management and Budget (OMB). On July 21, 2023, the OMB delineated two combined statistical areas, seven metropolitan statistical areas, and three micropolitan statistical areas in Massachusetts. As of 2023, the largest of these is the Boston-Worcester-Providence, MA-RI-NH CSA, comprising the area around Massachusetts' capital and largest city of Boston. Primary statistical areas Primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area. Of the 12 statistical areas of Massachusetts, five are PSAs, comprising two combined statistical areas, one metropolitan statistical area, and two micropolitan statistical areas. See also Geography of Massachusetts Demographics of Massachusetts Notes References External links Office of Management and Budget United States Census Bureau United States statistical areas Statistical areas of Massachusetts
Massachusetts statistical areas
Mathematics
195
67,692,128
https://en.wikipedia.org/wiki/Thermohaline%20staircase
Thermohaline staircases are patterns that form in oceans and other bodies of salt water, characterised by step-like structures observed in vertical temperature and salinity profiles; the patterns are formed and maintained by double diffusion of heat and salt. The ocean phenomenon consists of well-mixed layers of ocean water stacked on top of each other. The well-mixed layers are separated by high-gradient interfaces, which can be several meters thick. The total thickness of staircases ranges typically from tens to hundreds of meters. Two types of staircases are distinguished. Salt-fingering staircases can be found at locations where relatively warm, salty water overlies relatively colder, fresher water. Here, large-scale temperature and salinity both increase upward, making the mixing process of salt fingering possible. Such staircases are found, for example, beneath the Mediterranean outflow, in the Tyrrhenian Sea, and in the northeast Caribbean. Diffusive staircases can be found at locations where both temperature and salinity increase downward, for example in the Arctic Ocean and in the Weddell Sea. An important feature of thermohaline staircases is their extreme stability in space and time. They can persist for several years or more and can extend for hundreds of kilometers. The interest in thermohaline staircases is partly due to the fact that the staircases represent mixing hot spots in the main thermocline. Extensive definition and detection To determine the presence of thermohaline staircases, the following steps can be taken, according to the detection algorithm designed by Van der Boog and colleagues. The first step of the algorithm is to identify the mixed layers by locating weak vertical density gradients in conservative temperature and absolute salinity. To do so, the threshold gradient method is used: vertical gradients, taken with respect to pressure and evaluated relative to a reference pressure, are compared against a fixed threshold value. A mixed layer requires that three conditions hold simultaneously: the vertical gradients of conservative temperature (scaled by the thermal expansion coefficient), of absolute salinity (scaled by the haline contraction coefficient), and of potential density (relative to the reference density) must all lie below the threshold value. The second step is to define the interface, which is the part of the water column in the middle of two mixed layers. It is required that the conservative temperature, absolute salinity, and potential density variations in the interface be larger than the variations within each of the two adjacent mixed layers (the mixed layer above and the mixed layer below the interface), to ensure a stepped structure. The third step is to limit the interface height: the interface must be thinner than the mixed layers directly above and below it. This condition ensures that the interface is relatively thin compared to the mixed layers surrounding it. Furthermore, the algorithm removes all interfaces with conservative temperature or absolute salinity inversions, to make sure that it only detects step-like structures that are associated with the presence of thermohaline staircases. The fourth step is to determine the double-diffusive regime (salt-fingering or diffusive) of each interface: when both conservative temperature and absolute salinity of the mixed layers above and below the interface increase downward, the interface belongs to the diffusive regime; when both increase upward, the interface is classified as salt-fingering.
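A minimal sketch of steps one to four, for a single profile sampled on a pressure grid, is given below. The gradient threshold, the scalings, and the layer bookkeeping are user-chosen simplifications here, not the published algorithm's constants, and the potential-density check in step two is omitted for brevity:

```python
import numpy as np

# Sketch of steps 1-4 of a staircase-detection algorithm for one profile.
# p: pressure grid; ct: conservative temperature; sa: absolute salinity;
# sigma: potential density. alpha, beta, rho0 and threshold are
# user-chosen assumptions, not the published constants.

def detect_staircase(p, ct, sa, sigma, alpha, beta, rho0, threshold):
    dct = np.gradient(ct, p)
    dsa = np.gradient(sa, p)
    dsg = np.gradient(sigma, p)

    # Step 1: mixed layers = runs of points where all scaled gradients
    # fall below the threshold simultaneously.
    mixed = (np.abs(alpha * dct) < threshold) \
          & (np.abs(beta * dsa) < threshold) \
          & (np.abs(dsg / rho0) < threshold)

    # Group consecutive mixed points into layers (index ranges).
    layers, start = [], None
    for i, m in enumerate(mixed):
        if m and start is None:
            start = i
        elif not m and start is not None:
            layers.append((start, i - 1))
            start = None
    if start is not None:
        layers.append((start, len(mixed) - 1))

    # Steps 2-4: examine the interface between adjacent mixed layers.
    interfaces = []
    for (a0, a1), (b0, b1) in zip(layers, layers[1:]):
        iface = slice(a1, b0 + 1)
        # Step 2: interface variations exceed those inside either layer.
        if not (np.ptp(ct[iface]) > max(np.ptp(ct[a0:a1 + 1]), np.ptp(ct[b0:b1 + 1]))
                and np.ptp(sa[iface]) > max(np.ptp(sa[a0:a1 + 1]), np.ptp(sa[b0:b1 + 1]))):
            continue
        # Step 3: interface thinner than both neighbouring mixed layers.
        if (p[b0] - p[a1]) >= min(p[a1] - p[a0], p[b1] - p[b0]):
            continue
        # Step 4: classify the regime from the sign of the property jumps
        # (pressure, and hence depth, increases along the profile).
        warming_down = ct[b0:b1 + 1].mean() > ct[a0:a1 + 1].mean()
        salting_down = sa[b0:b1 + 1].mean() > sa[a0:a1 + 1].mean()
        if warming_down and salting_down:
            interfaces.append((a1, b0, "diffusive"))
        elif not warming_down and not salting_down:
            interfaces.append((a1, b0, "salt-finger"))
    return interfaces
```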
Finally, only vertical sequences of at least two interfaces in the same double-diffusive regime are selected, where the interfaces should be separated from each other by only one mixed layer. In this way, most thermohaline intrusions are removed, as these are characterised by alternating mixed layers in the diffusive and salt-finger regimes. Furthermore, the algorithm removes salt-fingering interfaces and diffusive-convective interfaces that fall outside their favourable range of Turner angle, a parameter used to describe the local stability of an inviscid water column. Interfaces with salt-fingering characteristics should correspond to Turner angles between 45° and 90°, and interfaces with diffusive-convective characteristics to Turner angles between −90° and −45°. Staircase origin The origin of thermohaline staircases relies on double diffusive convection, and specifically on the fact that heat diffuses through water much more readily than salt does. However, there is still much debate on which specific mechanism of layering plays a role. Six possible mechanisms are described below. Collective instability mechanism This mechanism, involving collective instability, relies on the idea that layers appear after a period of active internal wave motion. The hypothesis was motivated by laboratory experiments in which staircases formed from initially uniform temperature and salinity gradients. Growing waves might overturn and generate the stepped structure of thermohaline staircases. Thermohaline intrusion mechanism This hypothesis states that staircases represent the final stage in the evolution of thermohaline intrusions. Intrusions can evolve either to a state consisting of alternating salt-finger and diffusive interfaces separated by convecting layers, which is common at high density ratio, or to a series of salt-finger interfaces when the density ratio is low. This proposition relies on the presence of lateral property gradients to drive interleaving. This mechanism, in which thermohaline intrusions are transformed into staircases, is likely to operate at strong temperature-salinity fronts. Metastable equilibria mechanism A different theory states that staircases represent distinct metastable equilibria. It is suggested that finite-amplitude perturbations to the gradient state force the system into a layered regime, where it can remain for long periods of time. Large initial perturbations to the gradient state make the transition to the staircase more likely and accelerate the process. Once the staircase is created, the system becomes resilient to further structural changes. Applied flux mechanism The applied flux mechanism was mainly tested in laboratory experiments, and is most likely at work in cases where layering is caused by geothermal heating. When a stable salinity gradient is heated from below, top-heavy convection will take place in the lower part of the water column. The well-mixed convecting layer is bounded from above by a thin high-gradient interface. By a combination of molecular diffusion and entrainment across the interface, heat is transferred upward from the convecting layer. The molecular transfer of heat exceeds that of salt, resulting in a supply of buoyancy to the region immediately above the interface. This leads to the formation of a second convecting layer.
The process can repeat itself over and over, resulting in a sequence of mixed layers separated by sharp interfaces: a thermohaline staircase. Negative density diffusion In salt-fingering staircases, the vertical temperature and salinity fluxes are downgradient, while the vertical density flux is upgradient. This is explained by the fact that the potential energy released in transporting salt downward must exceed that expended in transporting heat upward, resulting in a net downward transport of mass. This negative diffusion sharpens the fluctuations and therefore suggests a means of generating and maintaining staircases. Instability of flux-gradient laws This mechanism is based on negative density diffusion as well. However, instead of combining temperature and salinity into a single density term, it treats both density components individually. In a publication by Radko, it is shown that the formation of steps in numerical models is caused by the parametric variation of the flux ratio as a function of the density ratio, leading to an instability of the equilibrium with uniform stratification. These unstable perturbations continuously grow in time until well-defined layers are formed. Observations Two types of staircases exist: salt-fingering staircases, where both temperature and salinity of the mixed layers decrease with pressure (and therefore with depth); and diffusive staircases, where both temperature and salinity of the mixed layers increase with pressure (so with depth). Salt-fingering staircases Most observations of salt-fingering staircases have come from three locations: the western Tropical Atlantic, the Tyrrhenian Sea, and the Mediterranean outflow. In these regions the density ratio has a very low value, which appears to be a condition for staircase formation: no staircases have been reported for density ratios above 2, and for values below about 1.7 the step-like structures in vertical temperature and salinity profiles become apparent. Moreover, the spatial pattern of staircases is very sensitive to the density ratio: with decreasing density ratio, the height of the steps sharply increases and the staircases become more pronounced. The importance of the density ratio for their formation is a sign that staircases are a product of double diffusive convection. In the Tyrrhenian Sea, thermohaline staircases due to salt fingers are observed. The step-like shape is visible in the vertical temperature and salinity profiles. Staircases in the Tyrrhenian Sea show a very high stability in space and time. The weak deep circulation in this area might be an explanation for this stability. Diffusive staircases Diffusive staircases are found at higher latitudes. In the Arctic Ocean, warm and salty water from the Atlantic enters the Arctic basin and subducts beneath the colder and fresher waters of the upper Arctic. In some regions, Pacific waters also sit below the mixed layer and above the Atlantic layer. A thermocline is found at the top of the Atlantic Water layer. In that region, temperature and salinity increase with depth, and step-like patterns are observed in vertical temperature and salinity profiles. These staircases mediate the heat transport from the warm water of Atlantic origin to the Arctic halocline and therefore serve as an important process in determining the heat flux from the Atlantic Water upward to the sea ice. Staircases in the Arctic are characterised by much smaller steps than salt-fingering staircases. On a much smaller scale, diffusive staircases have also been observed at low and mid-latitudes.
For example, Lake Kivu and Lake Nyos show characteristic staircase patterns. In these salt-water lakes, geothermal springs supply heat at the bottom, resulting in a diffusive background stratification. See also Salt fingering Double diffusive convection Oceans portal References Patterns Physical oceanography
Thermohaline staircase
Physics
2,142
53,870,076
https://en.wikipedia.org/wiki/Miguel%20Garc%C3%ADa-Garibay
Miguel A. García-Garibay is a professor of chemistry and biochemistry and the dean of physical sciences at the University of California, Los Angeles (UCLA). His research focuses on solid-state organic chemistry, photochemistry and spectroscopy, artificial molecular machines, and mesoscale phenomena. Education García-Garibay received his B.S. from the University of Michoacán, Mexico, in 1982. After completing a combined degree in chemistry, biology, and pharmacy, García-Garibay went on to earn a PhD in chemistry at the University of British Columbia, where he joined the group of John Scheffer. He then joined the group of Nicholas Turro as a postdoctoral fellow at Columbia University. García-Garibay received an Arthur C. Cope Scholar Award in 2015. Awards and positions National Academy of Sciences, 2023 ACS Fellow, 2019 ACS Cope Scholar Award, 2015 Appointment to the Chemical Sciences Roundtable of the NAS Board on Chemical Sciences and Technology, 2012–2018 Associate Editor of the Journal of the ACS, 2009–2016 NSF Creativity Award, 2009–2011 American Competitiveness and Innovation Fellow, 2008 Fellow of the AAAS, 2007 Herbert Newby McCoy Award, UCLA, 1999 Dean's Marshal Award for the Division of Physical Sciences, UCLA, 1997 NSF Career Award 1996–99 References Mexican chemists People from Morelia University of California, Los Angeles faculty 21st-century American chemists Year of birth missing (living people) Living people Universidad Michoacana de San Nicolás de Hidalgo alumni University of British Columbia alumni Solid state chemists Photochemists
Miguel García-Garibay
Chemistry
329
3,748,263
https://en.wikipedia.org/wiki/Alliance%20to%20Rescue%20Civilization
The Alliance to Rescue Civilization was an organization devoted to the establishment of an off-Earth "backup" of human civilization. This facility, or group of facilities, would serve to repopulate the Earth after a worldwide disaster or war, preserving as much as possible both the sciences and the arts. The organization had called for such a backup facility to be built on the Moon in lieu of NASA's plan to return there no earlier than 2026. It was founded by the author and journalist William E. Burrows and the biochemist Robert Shapiro. The organization was absorbed into the Lifeboat Foundation in 2007. References External links The Alliance to Rescue Civilization - An Organizational Framework - Internet Archive ARC website An Alliance to Rescue Civilization, Ad Astra, 1999 emergency organizations space exploration
Alliance to Rescue Civilization
Astronomy
154
1,430,888
https://en.wikipedia.org/wiki/Hypotrachelium
The hypotrachelium is the upper part or groove in the shaft of a Doric column, beneath the trachelium. The Greek form is hypotrakhelion. In classical architecture, it is the space between the annulet of the echinus and the upper bed of the shafts, including, according to C. R. Cockerell, the three grooves or sinkings found in some of the older examples, as in the temple of Neptune at Paestum and the temple of Aphaea at Aegina; there being only one groove in the Parthenon, the Theseum and later examples. In the temple of Ceres and the so-called Basilica at Paestum the hypotrachelium consists of a concave sinking carved with vertical lines suggestive of leaves, the tops of which project forward. A similar decoration is found in the capital of the columns flanking the tomb of Agamemnon at Mycenae, but here the hypotrachelium projects forward with a cavetto moulding, and is carved with triple leaves like the buds of a rose. In the Doric order the term was sometimes applied to that which is generally known as the "necking," the space between the fillet and the annulet. The hypotrachelium was also called a collarino, or colarino, or colarin. References "Collarino". Oxford English Dictionary. Oxford University Press. 2nd ed. 1989. Ancient Greek architecture Ancient Roman architecture Architectural elements
Hypotrachelium
Technology,Engineering
317
35,114,371
https://en.wikipedia.org/wiki/KPD%201930%2B2752
KPD 1930+2752 is a binary star system comprising a subdwarf B star and a probable white dwarf of relatively high mass. The system is regarded as a likely progenitor candidate for a type Ia supernova, a type of supernova which occurs when a white dwarf accretes enough matter to approach the Chandrasekhar limit, the point at which electron degeneracy pressure would no longer be enough to support its mass. In practice, carbon fusion would occur before this limit was reached, releasing enough energy to overcome the force of gravity holding the star together and resulting in a supernova. The total mass of the binary system slightly exceeds the Chandrasekhar limit, which makes the candidacy plausible, although expected future mass loss is likely to reduce the system mass below the threshold. See also IK Pegasi, the nearest supernova progenitor candidate References Cygnus (constellation) Binary stars Cygni, V2214 White dwarfs B-type subdwarfs J19321480+2758354
KPD 1930+2752
Astronomy
224
5,007,538
https://en.wikipedia.org/wiki/Paraboloidal%20coordinates
Paraboloidal coordinates are three-dimensional orthogonal coordinates (λ, μ, ν) that generalize two-dimensional parabolic coordinates. They possess elliptic paraboloids as one-coordinate surfaces. As such, they should be distinguished from parabolic cylindrical coordinates and parabolic rotational coordinates, both of which are also generalizations of two-dimensional parabolic coordinates. The coordinate surfaces of the former are parabolic cylinders, and the coordinate surfaces of the latter are circular paraboloids. Differently from cylindrical and rotational parabolic coordinates, but similarly to the related ellipsoidal coordinates, the coordinate surfaces of the paraboloidal coordinate system are not produced by rotating or projecting any two-dimensional orthogonal coordinate system. Basic formulas The Cartesian coordinates (x, y, z) can be produced from the paraboloidal coordinates (λ, μ, ν) by the equations

x² = (A − λ)(A − μ)(A − ν) / (B − A)
y² = (B − λ)(B − μ)(B − ν) / (A − B)
z = (A + B − λ − μ − ν) / 2

with ν < A < μ < B < λ. Consequently, surfaces of constant ν are downward opening elliptic paraboloids:

x² / (A − ν) + y² / (B − ν) = −(2z + ν).

Similarly, surfaces of constant λ are upward opening elliptic paraboloids,

x² / (λ − A) + y² / (λ − B) = 2z + λ,

whereas surfaces of constant μ are hyperbolic paraboloids:

x² / (μ − A) − y² / (B − μ) = 2z + μ.

Scale factors The scale factors for the paraboloidal coordinates (λ, μ, ν) are

h_λ² = (λ − μ)(λ − ν) / (4(λ − A)(λ − B))
h_μ² = (λ − μ)(μ − ν) / (4(μ − A)(B − μ))
h_ν² = (λ − ν)(μ − ν) / (4(A − ν)(B − ν))

Hence, the infinitesimal volume element is

dV = h_λ h_μ h_ν dλ dμ dν = (λ − μ)(λ − ν)(μ − ν) dλ dμ dν / (8 √[(λ − A)(λ − B)(μ − A)(B − μ)(A − ν)(B − ν)]).

Differential operators Common differential operators can be expressed in the coordinates (λ, μ, ν) by substituting the scale factors into the general formulas for these operators, which are applicable to any three-dimensional orthogonal coordinates. For instance, the gradient operator is

∇Φ = (1/h_λ)(∂Φ/∂λ) ê_λ + (1/h_μ)(∂Φ/∂μ) ê_μ + (1/h_ν)(∂Φ/∂ν) ê_ν,

and the Laplacian is

∇²Φ = 1/(h_λ h_μ h_ν) [ ∂/∂λ((h_μ h_ν / h_λ) ∂Φ/∂λ) + ∂/∂μ((h_λ h_ν / h_μ) ∂Φ/∂μ) + ∂/∂ν((h_λ h_μ / h_ν) ∂Φ/∂ν) ].

Applications Paraboloidal coordinates can be useful for solving certain partial differential equations. For instance, the Laplace equation and Helmholtz equation are both separable in paraboloidal coordinates. Hence, the coordinates can be used to solve these equations in geometries with paraboloidal symmetry, i.e. with boundary conditions specified on sections of paraboloids. The Helmholtz equation is ∇²Φ + k²Φ = 0. Taking Φ = Λ(λ) M(μ) N(ν), the equation separates into three ordinary differential equations, one in each coordinate, linked by two separation constants. Similarly, the separated equations for the Laplace equation can be obtained by setting k = 0 in the above. Each of the separated equations can be cast in the form of the Baer equation. Direct solution of the equations is difficult, however, in part because the two separation constants appear simultaneously in all three equations.

Following the above approach, paraboloidal coordinates have been used to solve for the electric field surrounding a conducting paraboloid.
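The formulas above can be verified numerically. The sketch below evaluates the coordinate transformation at a sample point and checks that the three coordinate directions are mutually orthogonal; the values of A, B, and the test point are arbitrary choices satisfying ν < A < μ < B < λ:

```python
import numpy as np

# Numerical check of the paraboloidal coordinate transformation.
# A, B and the test point (lam, mu, nu) are arbitrary choices
# satisfying nu < A < mu < B < lam.
A, B = 1.0, 2.0
lam, mu, nu = 3.0, 1.5, 0.5

def to_cartesian(lam, mu, nu):
    x = np.sqrt((A - lam) * (A - mu) * (A - nu) / (B - A))
    y = np.sqrt((B - lam) * (B - mu) * (B - nu) / (A - B))
    z = (A + B - lam - mu - nu) / 2.0
    return np.array([x, y, z])

# Tangent vector along coordinate i via central differences.
def tangent(i):
    h = 1e-6
    qp = np.array([lam, mu, nu]); qm = qp.copy()
    qp[i] += h; qm[i] -= h
    return (to_cartesian(*qp) - to_cartesian(*qm)) / (2 * h)

t = [tangent(i) for i in range(3)]
for i in range(3):
    for j in range(i + 1, 3):
        # Dot products vanish (to rounding error): the system is orthogonal.
        print(f"e{i} . e{j} = {np.dot(t[i], t[j]):.2e}")
```

References Bibliography Same as Morse & Feshbach (1953), substituting uk for ξk. External links MathWorld description of confocal paraboloidal coordinates Three-dimensional coordinate systems Orthogonal coordinate systems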
Paraboloidal coordinates
Mathematics
493
74,652,606
https://en.wikipedia.org/wiki/Zhen%20Tou
Parade formations, also known as "Tīn-thâu" in Taiwanese Hokkien or "Zhen Tou" in Mandarin, are a traditional folk art originating from China. As part of worship activities and temple festivals, members of communities express gratitude to the gods by escorting them in a procession. The practice can be performed either while walking or at fixed locations along the streets, incorporating elements of acrobatics and folk drama. The term "Zhen Tou" came from coastal regions of China such as Fuzhou and Minnan. The folk art has flourished in Taiwan, particularly in the southern regions, where the greatest number and variety of Zhen Tou can be found. History and evolution During the Ming and Qing dynasties, when Chinese people immigrated to Taiwan, they brought their folk beliefs and Zhen Tou performances along with them. Over time, the influences of globalization and localization have allowed Zhen Tou to develop in diverse directions. For instance, during the period of Japanese rule in Taiwan, Japanese-style palanquins emerged; in the 1960s to 1970s, under the influence of popular culture, performances such as "Sulan's Wedding Parade" and "Filial Daughter Bai Qin" emerged. In the 1980s, electronic floats became popular, and after 2000, traditional Zhen Tou music gradually gave way to popular music; Electric-Techno Neon Gods performances emerged, and pole dancing gained popularity. While displays of Zhen Tou are gradually declining in the urban areas of northern Taiwan, the tradition is still thriving in the southern regions. As the preservation and transmission of folk culture have come to be valued, Zhen Tou has found its way onto the stages of arts and cultural events, and it is being promoted on campuses as well. Characteristics Zhen Tou performances reflect each locality's distinct social life and cultural traditions, and take place while in procession or at stationary locations such as streets or squares. The performances are relatively short, with simple storylines, basic costumes, props, and music. The format is flexible, allowing for improvised performances in accordance with different venues or circumstances. Categorization There are various types and formats of Zhen Tou in Taiwan, contributing to the intricacy of their categorization. Common categorizations include: Based on performance characteristics: literary parade formations ("Wen Zhen") and martial parade formations ("Wu Zhen"). "Wen Zhen" focuses on singing and dancing, often accompanied by relatively complete musical arrangements; examples include "Niu Li Ge Zhen" (Plowing Ox Song Formation) and "Qian Wang Ge Zhen" (Guiding the Deceased Song Formation). "Wu Zhen" usually involves martial arts performances with relatively basic musical accompaniment; examples include the Song Jiang Battle Array and dragon and lion dances. Based on personnel organization: community-based parade formations ("Zhuang Tou Zhen") and professional parade formations. "Zhuang Tou Zhen" are amateur groups formed spontaneously by community residents on the basis of kinship or geographical ties, often trained by local temples. Because of rural population outflows since the 1980s, forming such Zhen Tou has become more challenging. In response to the demand of temple festivals and celebratory events, professional parade formations, performing groups catering to commercial events, have been on the rise. References Taiwanese folk culture Religious practices
Zhen Tou
Biology
682
37,155,129
https://en.wikipedia.org/wiki/Sporisorium%20ehrenbergii
Sporisorium ehrenbergii (syn. Tolyposporium ehrenbergii) is a species of fungus in the family Ustilaginaceae. It is a plant pathogen, causing long smut of Sorghum spp. References Fungal plant pathogens and diseases Sorghum diseases Ustilaginomycotina Fungi described in 1903 Fungus species
Sporisorium ehrenbergii
Biology
76
61,594,661
https://en.wikipedia.org/wiki/Murid%20betaherpesvirus%202
Murid betaherpesvirus 2 (MuHV-2) is a species of virus in the genus Muromegalovirus, subfamily Betaherpesvirinae, family Herpesviridae, and order Herpesvirales. References External links Betaherpesvirinae
Murid betaherpesvirus 2
Biology
58
53,664
https://en.wikipedia.org/wiki/Kingdom%20%28biology%29
In biology, a kingdom is the second highest taxonomic rank, just below domain. Kingdoms are divided into smaller groups called phyla (singular phylum). Traditionally, textbooks from Canada and the United States have used a system of six kingdoms (Animalia, Plantae, Fungi, Protista, Archaea/Archaebacteria, and Bacteria or Eubacteria), while textbooks in other parts of the world, such as Bangladesh, Brazil, Greece, India, Pakistan, Spain, and the United Kingdom have used five kingdoms (Animalia, Plantae, Fungi, Protista and Monera). Some recent classifications based on modern cladistics have explicitly abandoned the term kingdom, noting that some traditional kingdoms are not monophyletic, meaning that they do not consist of all the descendants of a common ancestor. The terms flora (for plants), fauna (for animals), and, in the 21st century, funga (for fungi) are also used for life present in a particular region or time. Definition and associated terms When Carl Linnaeus introduced the rank-based system of nomenclature into biology in 1735, the highest rank was given the name "kingdom" and was followed by four other main or principal ranks: class, order, genus and species. Later two further main ranks were introduced, making the sequence kingdom, phylum or division, class, order, family, genus and species. In 1990, the rank of domain was introduced above kingdom. Prefixes can be added so subkingdom (subregnum) and infrakingdom (also known as infraregnum) are the two ranks immediately below kingdom. Superkingdom may be considered as an equivalent of domain or empire or as an independent rank between kingdom and domain or subdomain. In some classification systems the additional rank branch (Latin: ramus) can be inserted between subkingdom and infrakingdom, e.g., Protostomia and Deuterostomia in the classification of Cavalier-Smith. History Two kingdoms of life The classification of living things into animals and plants is an ancient one. Aristotle (384–322 BC) classified animal species in his History of Animals, while his pupil Theophrastus (–) wrote a parallel work, the Historia Plantarum, on plants. Carl Linnaeus (1707–1778) laid the foundations for modern biological nomenclature, now regulated by the Nomenclature Codes, in 1735. He distinguished two kingdoms of living things: Regnum Animale ('animal kingdom') and Regnum Vegetabile ('vegetable kingdom', for plants). Linnaeus also included minerals in his classification system, placing them in a third kingdom, Regnum Lapideum. Three kingdoms of life In 1674, Antonie van Leeuwenhoek, often called the "father of microscopy", sent the Royal Society of London a copy of his first observations of microscopic single-celled organisms. Until then, the existence of such microscopic organisms was entirely unknown. Despite this, Linnaeus did not include any microscopic creatures in his original taxonomy. At first, microscopic organisms were classified within the animal and plant kingdoms. However, by the mid–19th century, it had become clear to many that "the existing dichotomy of the plant and animal kingdoms [had become] rapidly blurred at its boundaries and outmoded". In 1860 John Hogg proposed the Protoctista, a third kingdom of life composed of "all the lower creatures, or the primary organic beings"; he retained Regnum Lapideum as a fourth kingdom of minerals. 
In 1866, Ernst Haeckel also proposed a third kingdom of life, the Protista, for "neutral organisms" or "the kingdom of primitive forms", which were neither animal nor plant; he did not include the Regnum Lapideum in his scheme. Haeckel revised the content of this kingdom a number of times before settling on a division based on whether organisms were unicellular (Protista) or multicellular (animals and plants). Four kingdoms The development of microscopy revealed important distinctions between those organisms whose cells do not have a distinct nucleus (prokaryotes) and organisms whose cells do have a distinct nucleus (eukaryotes). In 1937 Édouard Chatton introduced the terms "prokaryote" and "eukaryote" to differentiate these organisms. In 1938, Herbert F. Copeland proposed a four-kingdom classification by creating the novel Kingdom Monera of prokaryotic organisms; as a revised phylum Monera of the Protista, it included organisms now classified as Bacteria and Archaea. Ernst Haeckel, in his 1904 book The Wonders of Life, had placed the blue-green algae (or Phycochromacea) in Monera; this would gradually gain acceptance, and the blue-green algae would become classified as bacteria in the phylum Cyanobacteria. In the 1960s, Roger Stanier and C. B. van Niel promoted and popularized Édouard Chatton's earlier work, particularly in their paper of 1962, "The Concept of a Bacterium"; this created, for the first time, a rank above kingdom—a superkingdom or empire—with the two-empire system of prokaryotes and eukaryotes. The two-empire system would later be expanded to the three-domain system of Archaea, Bacteria, and Eukaryota. Five kingdoms The differences between fungi and other organisms regarded as plants had long been recognised by some; Haeckel had moved the fungi out of Plantae into Protista after his original classification, but was largely ignored in this separation by scientists of his time. Robert Whittaker recognized an additional kingdom for the Fungi. The resulting five-kingdom system, proposed in 1969 by Whittaker, has become a popular standard and with some refinement is still used in many works and forms the basis for new multi-kingdom systems. It is based mainly upon differences in nutrition; his Plantae were mostly multicellular autotrophs, his Animalia multicellular heterotrophs, and his Fungi multicellular saprotrophs. The remaining two kingdoms, Protista and Monera, included unicellular and simple cellular colonies. The five kingdom system may be combined with the two empire system. In the Whittaker system, Plantae included some algae. In other systems, such as Lynn Margulis's system of five kingdoms, the plants included just the land plants (Embryophyta), and Protoctista has a broader definition. Following publication of Whittaker's system, the five-kingdom model began to be commonly used in high school biology textbooks. But despite the development from two kingdoms to five among most scientists, some authors as late as 1975 continued to employ a traditional two-kingdom system of animals and plants, dividing the plant kingdom into subkingdoms Prokaryota (bacteria and cyanobacteria), Mycota (fungi and supposed relatives), and Chlorota (algae and land plants). 
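The succession of schemes described above can be summarized compactly; the later six- and seven-kingdom schemes described below extend the list further. In the sketch below, the groupings follow the systems as described in this article, and the program itself is only an illustration:

```python
# Historical kingdom systems as described in this article; the dictionary
# layout is illustrative, the groupings follow the text above.
KINGDOM_SYSTEMS = {
    "Linnaeus 1735 (2 kingdoms)": ["Vegetabile", "Animale"],
    "Haeckel 1866 (3 kingdoms)": ["Protista", "Plantae", "Animalia"],
    "Copeland 1938 (4 kingdoms)": ["Monera", "Protista",
                                   "Plantae", "Animalia"],
    "Whittaker 1969 (5 kingdoms)": ["Monera", "Protista", "Fungi",
                                    "Plantae", "Animalia"],
}

for system, kingdoms in KINGDOM_SYSTEMS.items():
    print(f"{system}: {', '.join(kingdoms)}")
```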
Six kingdoms In 1977, Carl Woese and colleagues proposed the fundamental subdivision of the prokaryotes into the Eubacteria (later called the Bacteria) and Archaebacteria (later called the Archaea), based on ribosomal RNA structure; this would later lead to the proposal of three "domains" of life: Bacteria, Archaea, and Eukaryota. Combined with the five-kingdom model, this created a six-kingdom model, where the kingdom Monera is replaced by the kingdoms Bacteria and Archaea. This six-kingdom model is commonly used in recent US high school biology textbooks, but has received criticism for compromising the current scientific consensus. The division of prokaryotes into two kingdoms remains in use with the recent seven-kingdom scheme of Thomas Cavalier-Smith, although that scheme primarily differs in that Protista is replaced by Protozoa and Chromista. Eight kingdoms Thomas Cavalier-Smith supported the consensus at that time that the difference between Eubacteria and Archaebacteria was so great (particularly considering the genetic distance of ribosomal genes) that the prokaryotes needed to be separated into two different kingdoms. He then divided Eubacteria into two subkingdoms: Negibacteria (Gram-negative bacteria) and Posibacteria (Gram-positive bacteria). Technological advances in electron microscopy allowed the separation of the Chromista from the Plantae kingdom. Indeed, the chloroplast of the chromists is located in the lumen of the endoplasmic reticulum instead of in the cytosol. Moreover, only chromists contain chlorophyll c. Since then, many non-photosynthetic phyla of protists, thought to have secondarily lost their chloroplasts, were integrated into the kingdom Chromista. Finally, some protists lacking mitochondria were discovered. As mitochondria were known to be the result of the endosymbiosis of a proteobacterium, it was thought that these amitochondriate eukaryotes were primitively so, marking an important step in eukaryogenesis. As a result, these amitochondriate protists were separated from the protist kingdom, giving rise to Archezoa, ranked simultaneously as a superkingdom and a kingdom. This superkingdom was opposed to the Metakaryota superkingdom, which grouped together the five other eukaryotic kingdoms (Animalia, Protozoa, Fungi, Plantae and Chromista). This was known as the Archezoa hypothesis, which has since been abandoned; later schemes did not include the Archezoa–Metakaryota divide. Six kingdoms (1998) In 1998, Cavalier-Smith published a six-kingdom model, which has been revised in subsequent papers; the version published in 2009 is described below. Cavalier-Smith no longer accepted the importance of the fundamental Eubacteria–Archaebacteria divide put forward by Woese and others and supported by recent research. The kingdom Bacteria (sole kingdom of empire Prokaryota) was subdivided into two sub-kingdoms according to their membrane topologies: Unibacteria and Negibacteria. Unibacteria was divided into the phyla Archaebacteria and Posibacteria; the bimembranous-unimembranous transition was thought to be far more fundamental than the long branch of genetic distance of Archaebacteria, viewed as having no particular biological significance. Cavalier-Smith does not accept the requirement for taxa to be monophyletic ("holophyletic" in his terminology) to be valid.
He defines Prokaryota, Bacteria, Negibacteria, Unibacteria, and Posibacteria as valid paraphyletic (therefore "monophyletic" in the sense he uses this term) taxa, marking important innovations of biological significance (in regard to the concept of the biological niche). In the same way, his paraphyletic kingdom Protozoa includes the ancestors of Animalia, Fungi, Plantae, and Chromista. Advances in phylogenetic studies allowed Cavalier-Smith to realize that all the phyla thought to be archezoans (i.e. primitively amitochondriate eukaryotes) had in fact secondarily lost their mitochondria, typically by transforming them into new organelles: hydrogenosomes. This means that all living eukaryotes are in fact metakaryotes, according to the significance of the term given by Cavalier-Smith. Some of the members of the defunct kingdom Archezoa, like the phylum Microsporidia, were reclassified into kingdom Fungi. Others were reclassified in kingdom Protozoa, like Metamonada, which is now part of infrakingdom Excavata. Because Cavalier-Smith allows paraphyly, the diagram below is an 'organization chart', not an 'ancestor chart', and does not represent an evolutionary tree. Seven kingdoms Cavalier-Smith and his collaborators revised their classification in 2015. In this scheme they introduced two superkingdoms, Prokaryota and Eukaryota, and seven kingdoms. Prokaryota have two kingdoms: Bacteria and Archaea. (This was based on the consensus in the Taxonomic Outline of Bacteria and Archaea, and the Catalogue of Life.) The Eukaryota have five kingdoms: Protozoa, Chromista, Plantae, Fungi, and Animalia. In this classification a protist is any of the eukaryotic unicellular organisms. Summary The kingdom-level classification of life is still widely employed as a useful way of grouping organisms, notwithstanding some problems with this approach: Kingdoms such as Protozoa represent grades rather than clades, and so are rejected by phylogenetic classification systems. The most recent research does not support the classification of the eukaryotes into any of the standard systems. In 2009, Andrew Roger and Alastair Simpson emphasized the need for diligence in analyzing new discoveries: "With the current pace of change in our understanding of the eukaryote tree of life, we should proceed with caution." Kingdoms are rarely used in academic phylogeny and are more common in introductory education, where five- or six-kingdom models are preferred. Beyond traditional kingdoms While the concept of kingdoms continues to be used by some taxonomists, there has been a movement away from traditional kingdoms, as they are no longer seen as providing a cladistic classification, in which the emphasis is on arranging organisms into natural groups. Three domains of life Based on RNA studies, Carl Woese thought life could be divided into three large divisions and referred to them as the "three primary kingdom" model or "urkingdom" model. In 1990, the name "domain" was proposed for the highest rank. This term is a synonym for the category of dominion (Latin dominium), introduced by Moore in 1974; unlike Moore, Woese et al. (1990) did not suggest a Latin term for this category, a further argument in favour of the earlier, formally introduced term dominion. Woese divided the prokaryotes (previously classified as the Kingdom Monera) into two groups, called Eubacteria and Archaebacteria, stressing that there was as much genetic difference between these two groups as between either of them and all eukaryotes.
According to genetic data, although eukaryote groups such as plants, fungi, and animals may look different, they are more closely related to each other than they are to either the Eubacteria or Archaea. It was also found that the eukaryotes are more closely related to the Archaea than they are to the Eubacteria. Although the primacy of the Eubacteria–Archaea divide has been questioned, it has been upheld by subsequent research. There is no consensus on how many kingdoms exist in the classification scheme proposed by Woese. Eukaryotic supergroups In 2004, a review article by Simpson and Roger noted that the Protista were "a grab-bag for all eukaryotes that are not animals, plants or fungi". They held that only monophyletic groups should be accepted as formal ranks in a classification and that – while this approach had been impractical previously (necessitating "literally dozens of eukaryotic 'kingdoms'") – it had now become possible to divide the eukaryotes into "just a few major groups that are probably all monophyletic". On this basis, their article presented the real "kingdoms" (their quotation marks) of the eukaryotes. A classification which followed this approach was produced in 2005 for the International Society of Protistologists, by a committee which "worked in collaboration with specialists from many societies". It divided the eukaryotes into the same six "supergroups". The published classification deliberately did not use formal taxonomic ranks, including that of "kingdom". In this system the multicellular animals (Metazoa) are descended from the same ancestor as both the unicellular choanoflagellates and the fungi which form the Opisthokonta. Plants are thought to be more distantly related to animals and fungi. However, in the same year as the International Society of Protistologists' classification was published (2005), doubts were being expressed as to whether some of these supergroups were monophyletic, particularly the Chromalveolata, and a review in 2006 noted the lack of evidence for several of the six proposed supergroups. There is now widespread agreement that the Rhizaria belong with the Stramenopiles and the Alveolata, in a clade dubbed the SAR supergroup, so that Rhizaria is not one of the main eukaryote groups. Comparison of top level classification Some authors have added non-cellular life to their classifications. This can create a "superdomain" called "Acytota", also called "Aphanobionta", of non-cellular life; the other superdomain being "Cytota" or cellular life. The eocyte hypothesis proposes that the eukaryotes emerged from a phylum within the archaea called the Thermoproteota (formerly known as eocytes or Crenarchaeota). Viruses The International Committee on Taxonomy of Viruses uses the taxonomic rank "kingdom" in the classification of viruses (with the suffix -virae); but this is beneath the top level classifications of realm and subrealm. There is ongoing debate as to whether viruses can be included in the tree of life. The arguments against include the fact that they are obligate intracellular parasites that lack metabolism and are not capable of replication outside of a host cell. Another argument is that their placement in the tree would be problematic, since it is suspected that viruses have various evolutionary origins, and they have a penchant for harvesting nucleotide sequences from their hosts. On the other hand, there are arguments in favor of their inclusion. 
One of these comes from the discovery of unusually large and complex viruses, such as Mimivirus, that possess typical cellular genes. See also Cladistics Phylogenetics Systematics Taxonomy Notes References Further reading Pelentier, B. (2007–2015). Empire Biota: a comprehensive taxonomy. [Historical overview.] Peter H. Raven and Helena Curtis (1970), Biology of Plants, New York: Worth Publishers. [Early presentation of five-kingdom system.] External links A Brief History of the Kingdoms of Life at Earthling Nature The five kingdom concept Whittaker's classification Kingdom
Kingdom (biology)
Biology
3,918
12,529,976
https://en.wikipedia.org/wiki/Tubex%20%28syringe%20cartridge%29
The Tubex syringe cartridge was developed in 1943, during World War II, by the Wyeth company. It is a pre-filled glass drug cartridge with an attached sterile needle, which is inserted into a reusable holder, originally stainless steel and now plastic. The product was manufactured for immediate injection once the pre-filled cartridge was attached to the reusable holder and the needle protector was removed. Its development followed the use of several other immediate-use products, such as the Syrette, a flexible tube, not unlike an ophthalmic ointment tube, designed to hold a needle. The Syrette was developed by Squibb and was used for the immediate administration of morphine on the battlefront. However, it fell into disuse because of leakage and sterility problems. Another product, called the Ampin, proved problematic as well. The Tubex system was widely used after World War II and expanded into a system for distributing and administering a large variety of drugs, from antibiotics to vaccines, in a pre-filled glass cartridge syringe with attached sterile needle. It aided in the standardization of immediate-use sterile dosage forms. It was a time saver for nursing administration, as nurses no longer had to draw up an injection. It was conducive to inventory control and accountability for narcotic substances in tamper-proof packaging. It was widely used by doctors, nurses and pharmacists for the administration of drugs. Although a few products are still manufactured in Tubex form, the Wyeth company has discontinued the entire line of products and has licensed its use to other companies. The Carpuject, made by Hospira, has now replaced the Tubex as the sole product in this unitized syringe medication delivery system. References Bibliography Medical equipment
Tubex (syringe cartridge)
Biology
364
22,446,545
https://en.wikipedia.org/wiki/Hebeloma%20dunense
Hebeloma dunense is a species of agaric fungus in the family Hymenogastraceae. References dunense Fungi described in 1929 Fungi of Europe Fungus species
Hebeloma dunense
Biology
37
9,356,096
https://en.wikipedia.org/wiki/Polymeric%20liquid%20crystal
Polymeric liquid crystals are similar to the monomeric liquid crystals used in displays. Both have dielectric anisotropy, the ability to change orientation and absorb or transmit light depending on applied electric fields. Polymeric liquid crystals form long head-to-tail or side-chain polymers, which are woven in thick mats and therefore have high viscosities. The high viscosities allow the polymeric liquid crystals to be used in complex structures, but they are harder to align, limiting their usefulness. The polymers align in microdomains facing many different directions, which ruins the optical effect. One solution to this is to mix in a small amount of photo-curing polymer, which when spin-coated onto a surface can be hardened. The polymeric liquid crystal and photo-curing polymer are aligned in one direction, and the photo-curing polymer is then cured, "freezing" the polymeric liquid crystal in that orientation. References Liquid crystals
Polymeric liquid crystal
Physics,Materials_science
192
418,307
https://en.wikipedia.org/wiki/Weismann%20barrier
The Weismann barrier, proposed by August Weismann, is the strict distinction between the "immortal" germ cell lineages producing gametes and "disposable" somatic cells in animals (but not plants), in contrast to Charles Darwin's proposed pangenesis mechanism for inheritance. In more precise terminology, hereditary information is copied only from germline cells to somatic cells. This means that new information from somatic mutation is not passed on to the germline. This barrier concept implies that somatic mutations are not inherited. Weismann set out the concept in his 1892 book Das Keimplasma: eine Theorie der Vererbung (German for The Germ Plasm: A Theory of Inheritance). The use of this theory, commonly in the context of the germ plasm theory of the late 19th century, before the development of better-founded and more sophisticated concepts of genetics in the early 20th century, is sometimes referred to as Weismannism. Some authors distinguish Weismannist development (either preformistic or epigenetic), in which there is a distinct germline, from somatic embryogenesis. This type of development is correlated with the evolution of death of the somatic line. The Weismann barrier was of great importance in its day and among other influences it effectively banished certain Lamarckian concepts: in particular, it would make Lamarckian inheritance from changes to the body (the soma) difficult or impossible. It remains important, but has required qualification in the light of modern understanding of horizontal gene transfer and some other genetic and histological developments. Immortality of the germline The Russian biologist and historian Zhores A. Medvedev, reviewing Weismann's theory a century later, considered that the accuracy of genome replicative and other synthetic systems alone could not explain the "immortal" germ cell lineages proposed by Weismann. Rather, Medvedev thought that known features of the biochemistry and genetics of sexual reproduction indicated the presence of unique information maintenance and restoration processes at the different stages of gametogenesis. In particular, Medvedev considered that the most important opportunities for information maintenance of germ cells are created by recombination during meiosis and DNA repair; he saw these as processes within the germ cells that were capable of restoring the integrity of DNA and chromosomes from the types of damage that caused irreversible ageing in somatic cells. Basal animals Basal animals such as sponges (Porifera) and corals (Anthozoa) contain multipotent stem cell lineages that give rise to both somatic and reproductive cells. The Weismann barrier thus appears to be of more recent evolutionary origin among animals. Plants In plants, genetic changes in somatic lines can and do result in genetic changes in the germ lines, because the germ cells are produced by somatic cell lineages (vegetative meristems), which may be old enough (many years) to have accumulated multiple mutations since seed germination, some of them subject to natural selection. It is noteworthy in this context that adult, reproducing plants generally produce far more offspring than animals do. See also References Genetics Lamarckism 1892 in science 1892 in Germany
Weismann barrier
Biology
684
30,857,555
https://en.wikipedia.org/wiki/A%20Question%20and%20Answer%20Guide%20to%20Astronomy
A Question and Answer Guide to Astronomy is a book about astronomy and cosmology, and is intended for a general audience. The book was written by Pierre-Yves Bely, Carol Christian, and Jean-Rene Roy, and published in English by Cambridge University Press in 2010. It was originally written in French. The content within the book is written using a question and answer format. It contains some 250 questions, each of which, The Science Teacher states, is answered with a "concise and well-formulated essay that is informative and readable." The Science Teacher review goes on to state that many of the answers given in the book are "little gems of science writing". The Science Teacher summarizes by stating that each question is likely to be thought of by a student, and that "the answers are informative, well constructed, and thorough". The book covers information about the planets, the Earth, the Universe, practical astronomy, history, and awkward questions such as astronomy in the Bible, UFOs, and aliens. Also covered are subjects such as the Big Bang, comprehension of large numbers, and the Moon illusion. See also Bibliography of encyclopedias: astronomy and astronomers References Additional reviews Mutel, R. L. "A question and answer guide to astronomy", Choice: Current Reviews for Academic Libraries, Vol. 48, No. 5 (January 2011), p. 920 Whitt, April S. "A Question and Answer Guide to Astronomy", Planetarian, Vol. 41, No. 3 (September 2012), p. 60 Mizon, Bob "A question and answer guide to astronomy", Journal of the British Astronomical Association, Vol. 120, No. 3 (June 2010), p. 186 Zetie, Ken "A Question and Answer Guide to Astronomy", Contemporary Physics, Vol. 52, No. 5 (September/October 2011), p. 482 "A Question and Answer Guide to Astronomy", MNASSA: Monthly Notes of the Astronomical Society of Southern Africa, Vol. 70, No. 7/8 (August 2011), pp. 159–161 External links Cambridge University Press — A Question and Answer Guide to Astronomy Astronomy books 2010 non-fiction books Cambridge University Press books
A Question and Answer Guide to Astronomy
Astronomy
464
73,081,332
https://en.wikipedia.org/wiki/Aluminium%E2%80%93copper%20alloys
Aluminium–copper alloys (AlCu) are aluminium alloys that consist largely of aluminium (Al) with small amounts of copper (Cu) as the main alloying element. Important grades also contain additives of magnesium, iron, nickel and silicon (AlCu(Mg, Fe, Ni, Si)); often manganese is also included to increase strength (see aluminium–manganese alloys). The main area of application is aircraft construction. The alloys have medium to high strength and can be age hardened. They are used both as wrought alloys and as cast alloys. Their susceptibility to corrosion and their poor weldability are disadvantageous. Duralumin is the oldest variety in this group and goes back to Alfred Wilm, who discovered it in 1903. Aluminium only became a widespread construction material thanks to the aluminium–copper alloys, as pure aluminium is much too soft for this purpose, and other hardenable alloys, such as the aluminium–magnesium–silicon alloys (AlMgSi) or the naturally hard (non-hardenable) alloys, came later. Aluminium–copper alloys were standardised in the 2000 series by the international alloy designation system (IADS), which was originally created in 1970 by The Aluminum Association. The 2000 series includes the 2014 and 2024 alloys used in airframe fabrication. Copper alloys with aluminium as the main alloying metal are known as aluminium bronze; there the amount of aluminium is generally less than 12%. History Duralumin is a trade name for one of the earliest types of age-hardenable aluminium alloys. The term is a combination of Dürener and aluminium. Its use as a trade name is obsolete. Duralumin was developed by the German metallurgist Alfred Wilm at Dürener Metallwerke AG. In 1903, Wilm discovered that after quenching, an aluminium alloy containing 4% copper would harden when left at room temperature for several days. Further improvements led to the introduction of duralumin in 1909. The name is mainly used in popular science to describe the whole Al–Cu alloy system. Pure AlCu wrought alloys All AlCu alloys are based on the system of pure AlCu alloys. Solubility of copper and phases Aluminium forms a eutectic with copper at 547 °C and 33 mass percent copper; the maximum solubility of copper in aluminium, about 5.7%, also occurs at this temperature. At lower temperatures, the solubility drops sharply; at room temperature it is only 0.1%. At higher copper contents, Al2Cu is formed, an intermetallic phase. It is present in a tetragonal structure, which is so different from the cubic crystal system of aluminium that the θ-phase can exist only as an incoherent precipitate. There are also the partially coherent θ′- and θ″-phases. Microstructural transformations After casting, the material is usually a supersaturated α mixed crystal, which at room temperature still contains more copper than could actually be dissolved at this temperature. After that, GP zones (GP(I) zones) form at temperatures below 80 °C, in which increased concentrations of copper are present, but which do not yet have a structure or form their own phases. At somewhat higher temperatures of up to 250 °C, the θ″-phase (also called GP(II) zones) forms, which increases strength. At even higher temperatures, the partially coherent θ′-phase forms. At still higher temperatures of about 300 °C, the incoherent θ-phase forms, at which point the strength decreases again. 
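The equilibrium phase fractions implied by the solubility figures above can be estimated with the lever rule. The following sketch is illustrative only: it assumes full equilibrium (which age-hardened alloys deliberately avoid), takes the room-temperature solubility of 0.1 wt% Cu from the text, derives the θ-phase composition from the Al2Cu stoichiometry, and uses the historical 4 wt% Cu duralumin composition mentioned earlier.

```python
# Illustrative lever-rule estimate for an Al-4 wt% Cu alloy at room temperature.
# Assumptions (not from the article): full equilibrium; theta phase = Al2Cu.

M_AL, M_CU = 26.98, 63.55                   # molar masses, g/mol

# wt% Cu in the theta phase, from the Al2Cu stoichiometry
c_theta = 100 * M_CU / (2 * M_AL + M_CU)    # ~54.1 wt% Cu

c_alpha = 0.1   # wt% Cu soluble in the alpha mixed crystal at room temperature
c_0     = 4.0   # overall alloy composition, wt% Cu

# Lever rule: mass fraction of theta between the two phase-boundary compositions
f_theta = (c_0 - c_alpha) / (c_theta - c_alpha)

print(f"theta composition: {c_theta:.1f} wt% Cu")           # ~54.1 wt% Cu
print(f"equilibrium theta fraction: {100 * f_theta:.1f} wt%")  # ~7.2 wt%
```

In practice, the strengthening comes from the metastable GP zones and the θ″- and θ′-phases discussed below, not from equilibrium θ.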
The individual temperature ranges overlap: even at low temperatures, there is some formation of θ′- or θ-phases, but these form much more slowly than the GP(I/II) zones. Each of the phases forms faster the higher the temperature. GP(I) zones The formation of GP(I) zones is referred to as natural hardening and occurs at temperatures up to 80 °C. They are tiny disc-shaped layers just one atom thick and 2 to 5 nanometers in diameter. With time, the number of zones increases and the copper concentration in them increases, but not their diameter. They are coherent with the aluminium lattice and form on the {100} planes. GP(II) zones The GP(II) zones (θ″-phases) are largely responsible for the increase in strength of the AlCu alloys. They are coherent with the aluminium crystal and consist of alternating layers of aluminium and copper with layer thicknesses of about 10 nanometers and dimensions of up to 150 nanometers. In contrast to the GP(I) zones, these are three-dimensional precipitates. Their layers are parallel to the aluminium {100} plane. The θ′-phase forms from the θ″-phase, but there are overlaps. The GP(II) zones need vacancies for growth, which is why a lack of these (e.g. due to magnesium) leads to delayed growth. Partially coherent phases The θ′-phase is only partially coherent with the aluminium lattice and forms at temperatures from 150 °C to 300 °C. It has the form of platelets and can arise from the GP(II) zones. However, it can also arise directly as a precipitate from the mixed crystal. In the first case, the increasing interfacial energy is reduced by dislocations; in the second case, the precipitates form preferentially at dislocations. Incoherent phases The θ-phase is incoherent with the lattice of the mixed crystal. It forms at temperatures of 300 °C and more. It usually forms larger particles with a larger spacing than the other phases and thus does not lead to any increase in strength, or even causes a drop if its formation takes place at the expense of the other phases. The θ-phase also occurs at temperatures between 150 °C and 250 °C as a precipitate at grain boundaries, as this reduces the interfacial energy. The θ-phase leads to a partially intergranular fracture; however, the fracture behavior remains ductile overall. The change in fracture behavior is caused by precipitation-free zones at the grain boundaries. The θ-phase has a greater electrochemical potential difference relative to the mixed crystal, so that layer corrosion and intergranular corrosion can occur. With longer annealing times, the θ-phases also precipitate in the interior of the grains, and the potential difference is correspondingly lower. Grades, alloying elements and contents As with almost all aluminium alloys, a distinction is made between wrought alloys for rolling and forging and cast alloys for casting. The copper content is usually between 3 and 6%. Between 0.3% and 6% the alloys are regarded as not weldable or very difficult to weld (by fusion welding); with higher copper contents they become weldable again. Most types also contain additives of magnesium, manganese and silicon to increase strength. Lead and bismuth form small inclusions that melt at low temperatures, resulting in better chip formation, similar to free-machining steel. The heat resistance is increased by adding nickel and iron. Iron is found as an impurity in engineering alloys and prevents age hardening, but adding magnesium makes the aforementioned process possible again. Larger amounts of magnesium up to 1.5% increase strength and elongation at break (see aluminium–magnesium alloy). 
Manganese is also used to increase strength (see AlMn). Larger amounts, however, have negative side effects, so the content is limited to around 1% manganese. Smaller additions of silicon are made to bind iron, since iron prefers to form the AlFeSi phase, whereas the formation of Al7Cu2Fe would remove larger amounts of copper from the material, which would then no longer be available to form the phases that are actually desired (especially Al2Cu, copper aluminide). Larger amounts of silicon are alloyed to form Mg2Si (magnesium silicide) with magnesium, which, as in the aluminium–magnesium–silicon alloys, improves strength and hardenability. Lithium is added to some alloys with contents between 1.5% and 2.5%. Due to the very low density of Li (0.53 g/cm³ compared to 2.7 g/cm³ for aluminium), this leads to lighter components, which is particularly advantageous in aviation. See aluminium–lithium alloy for details. Cast alloys Cast alloys contain about 4% copper and small amounts of other additives that improve castability, including titanium and magnesium. The starting material is primary aluminium; in contrast to other cast aluminium alloys, secondary aluminium (made from scrap) is not used because it reduces elongation at break and toughness. The AlCu cast alloys are prone to hot cracking and are used in the T4 and T6 hardening states. The following table shows the composition of some grades according to DIN EN 1706. All data are given in percent by mass; the remainder is aluminium. Wrought alloys AlCuMg(Si,Mn) wrought alloys The AlCuMg alloys represent the most important group of AlCu alloys. Many other phases can form in them: Al8Mg5 (the β-phase; see aluminium–magnesium alloys), Al2CuMg (the S phase) and Al6Mg4Cu (the T phase). The addition of magnesium accelerates the process of cold hardening. Which phases are formed depends primarily on the ratio of copper to magnesium. If the ratio is less than 1/1, clusters containing Cu and Mg are precipitated. At a ratio above 1.5/1, which is the case with most engineering alloys, the S phase (Al2CuMg) forms preferentially. These kinds of alloys have significantly higher hardness and strength. Mechanical properties Conditions: O: soft (soft annealed; also hot-formed with the same strength limit values) T3: solution annealed, quenched, work hardened and naturally aged T4: solution annealed, quenched and naturally aged T6: solution heat treated, quenched and artificially aged T8: solution annealed, cold worked and artificially aged 2000 series The 2000 series was formerly referred to as duralumin. Applications Aluminium–copper alloys are mainly used in aircraft construction, where their low corrosion resistance plays a subordinate role. Corrosion resistance can be greatly enhanced by the metallurgical bonding of a high-purity aluminium surface layer, referred to as Alclad duralumin. To this day Alclad materials are used commonly in the aircraft industry. The alloys are processed by rolling, forging, extrusion and partly by casting. Typical uses for wrought Al–Cu alloys include: 2011: Wire, rod, and bar for automatic lathe products. Applications where good machinability and good strength are required. 2014: Heavy-duty forgings, plate, and extrusions for aircraft fittings, wheels, and major structural components, space booster tankage and structure, truck frame and suspension components. Applications requiring high strength and hardness including service at elevated temperatures. 2017 or Avional (France): Around 1% Si. Good machinability. Acceptable resistance to corrosion in air and mechanical properties. Also called AU4G in France. 
Used for aircraft applications between the wars in France and Italy. Also saw some use in motor-racing applications from the 1960s, as it is a tolerant alloy that could be press-formed with relatively unsophisticated equipment. 2024: Aircraft structures, rivets, hardware, truck wheels, screw machine products, and other structural applications. 2036: Sheet for auto body panels 2048: Sheet and plate in structural components for aerospace application and military equipment Aviation German scientific literature openly published information about duralumin, its composition and heat treatment, before the outbreak of World War I in 1914. Despite this, use of the alloy outside Germany did not occur until after fighting ended in 1918. Reports of German use during World War I, even in technical journals such as Flight International, could still misidentify its key alloying component as magnesium rather than copper. Engineers in the UK showed little interest in duralumin until after the war. The earliest known attempt to use duralumin for a heavier-than-air aircraft structure occurred in 1916, when Hugo Junkers first introduced its use in the airframe of the Junkers J 3, a single-engined monoplane "technology demonstrator" that marked the first use of the Junkers trademark duralumin corrugated skinning. The Junkers company completed only the covered wings and tubular fuselage framework of the J 3 before abandoning its development. The slightly later, solely IdFlieg-designated Junkers J.I armoured sesquiplane of 1917, known to the factory as the Junkers J 4, had its all-metal wings and horizontal stabilizer made in the same manner as the J 3's wings had been, as did the experimental and airworthy all-duralumin Junkers J 7 single-seat fighter design, which led to the Junkers D.I low-wing monoplane fighter, introducing all-duralumin aircraft structural technology to German military aviation in 1918. Its first use in aerostatic airframes came in rigid airship frames, eventually including all those of the "Great Airship" era of the 1920s and 1930s: the British-built R-100, the German passenger Zeppelins LZ 127 Graf Zeppelin, LZ 129 Hindenburg, LZ 130 Graf Zeppelin II, and the U.S. Navy airships USS Los Angeles (ZR-3, ex-LZ 126), USS Akron (ZRS-4) and USS Macon (ZRS-5). The 2000 series alloys were once the most common aerospace alloys, but because they were susceptible to stress corrosion cracking, they are increasingly being replaced by 7000 series alloys in new designs. Bicycle Duralumin was used to manufacture bicycle components and framesets from the 1930s to the 1990s. Several companies in Saint-Étienne, France stood out for their early, innovative adoption of duralumin: in 1932, Verot et Perrin developed the first light alloy crank arms; in 1934, Haubtmann released a complete crankset; from 1935 on, duralumin freewheels, derailleurs, pedals, brakes and handlebars were manufactured by several companies. Complete framesets followed quickly, including those manufactured by: Mercier (and Aviac and other licensees) with their popular Meca Dural family of models, the Pelissier brothers and their race-worthy La Perle models, and Nicolas Barra and his exquisite mid-twentieth century “Barralumin” creations. Other names that come up here also included: Pierre Caminade, with his beautiful Caminargent creations and their exotic octagonal tubing, and also Gnome et Rhône, with its deep heritage as an aircraft engine manufacturer that also diversified into motorcycles, velomotors and bicycles after World War Two. 
Mitsubishi Heavy Industries, which was prohibited from producing aircraft during the American occupation of Japan, manufactured the “cross” bicycle out of surplus wartime duralumin in 1946. The “cross” was designed by Kiro Honjo, a former aircraft designer responsible for the Mitsubishi G4M. Duralumin use in bicycle manufacturing faded in the 1970s and 1980s. Vitus nonetheless released the venerable “979” frameset in 1979, a “Duralinox” model that became an instant classic among cyclists. The Vitus 979 was the first production aluminium frameset whose thin-wall 5083/5086 tubing was slip-fit and then glued together using a dry heat-activated epoxy. The result was an extremely lightweight but very durable frameset. Production of the Vitus 979 continued until 1992. References Works cited Further reading David Laughlin, Kazuhiro Hono: Handbook of Aluminum. Vol. 1: Physical Metallurgy and Processes. Aluminium Paperback, Volume 1, 16th edition, Aluminium-Verlag, Düsseldorf 2002, pp. 101 f., 114–116, 121, 139–141. Friedrich Ostermann: Aluminium Application Technology. 3rd edition, Springer, 2014, pp. 117–124. Aluminium alloys Aluminium–copper alloys
Aluminium–copper alloys
Chemistry
3,301
25,208,066
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20April%2011%2C%202051
A partial solar eclipse will occur at the Moon's descending node of orbit between Monday, April 10 and Tuesday, April 11, 2051, with a magnitude of 0.9849. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. The umbral shadow of the Moon will pass just above the North Pole. It will be the largest partial solar eclipse of the 21st century. The maximal phase of the partial eclipse (0.98) will be recorded in the Barents Sea. The partial solar eclipse will be visible from parts of Asia, Alaska, and western Canada. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the Moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2051 A partial solar eclipse on April 11. A total lunar eclipse on April 26. A partial solar eclipse on October 4. A total lunar eclipse on October 19. Metonic Preceded by: Solar eclipse of June 23, 2047 Followed by: Solar eclipse of January 27, 2055 Tzolkinex Preceded by: Solar eclipse of February 28, 2044 Followed by: Solar eclipse of May 22, 2058 Half-Saros Preceded by: Lunar eclipse of April 5, 2042 Followed by: Lunar eclipse of April 15, 2060 Tritos Preceded by: Solar eclipse of May 11, 2040 Followed by: Solar eclipse of March 11, 2062 Solar Saros 120 Preceded by: Solar eclipse of March 30, 2033 Followed by: Solar eclipse of April 21, 2069 Inex Preceded by: Solar eclipse of April 30, 2022 Followed by: Solar eclipse of March 21, 2080 Triad Preceded by: Solar eclipse of June 10, 1964 Followed by: Solar eclipse of February 9, 2138 Solar eclipses of 2051–2054 Saros 120 Metonic series Tritos series Inex series References External links http://eclipse.gsfc.nasa.gov/SEplot/SEplot2051/SE2051Apr11P.GIF 2051 in science
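The Saros 120 neighbours listed above can be checked with simple date arithmetic, since one saros is about 6,585.32 days. A minimal sketch follows; the 0.32-day fraction, which shifts each successive eclipse by roughly eight hours, is dropped, so results can be off by a day (the 2051 eclipse's greatest phase falls in the early hours of April 11 UT, which is why the computed predecessor lands one day after the listed date).

```python
from datetime import date, timedelta

SAROS = timedelta(days=6585)   # one saros ~ 6585.32 days; fractional day dropped
eclipse = date(2051, 4, 11)    # greatest eclipse, early on April 11 UT

print(eclipse - SAROS)  # 2033-03-31: Saros 120 predecessor (listed as March 30, 2033)
print(eclipse + SAROS)  # 2069-04-21: Saros 120 successor (listed as April 21, 2069)
```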
Solar eclipse of April 11, 2051
Astronomy
587
48,845,429
https://en.wikipedia.org/wiki/Enadenotucirev
Enadenotucirev is an investigational oncolytic virus that is in clinical trials for various cancers. It is an oncolytic Ad11/Ad3 chimeric group B adenovirus, previously described as ColoAd1. Enadenotucirev has also been modified with additional genes using the tumor-specific immuno-gene therapy (T-SIGn) platform to develop novel cancer gene therapy agents. The T-SIGn vectors at the clinical study stage are: NG-350A: This vector contains two transgenes expressing the heavy and light chains of a secreted CD40-agonist monoclonal antibody. NG-641: This vector contains four transgenes expressing secreted interferon alpha, the chemokines CXCL9 and CXCL10, and an anti-FAP/anti-CD3 bispecific T-cell activator. In January 2015 the European Medicines Agency's (EMA) Committee for Orphan Medicinal Products (COMP) designated enadenotucirev as an orphan medicinal product for the treatment of ovarian cancer. Clinical trials Two clinical trials have been completed with enadenotucirev: the EVOLVE study and the MOA study. There are two active phase 1 trials: OCTAVE (in ovarian cancer) and SPICE (in multiple solid tumor indications). Of the T-SIGn viruses, NG-350A has an ongoing clinical study. See also Oncolytic adenovirus Oncolytic adenovirus § Directed evolution References Adenoviridae Biotechnology Experimental cancer treatments Virotherapy
Enadenotucirev
Biology
330
66,114,433
https://en.wikipedia.org/wiki/Bottom%20metal
A bottom metal is a firearm component, typically made of a metallic material (such as aluminium alloy or steel), that serves as the floor of the action and also helps to clamp the receiver onto the stock. The bottom metal also frequently incorporates the trigger guard, for instance on the Mauser 98 and M1 Garand, although a trigger guard by itself is not considered a bottom metal. In repeating firearms with internal magazines, the bottom metal serves as the magazine floorplate and retains the magazine spring and follower; it may be either a fixed solid piece or a hinged floorplate that can be opened like a door. Bottom metals designed to accept detachable magazines are called detachable bottom metals (DBM) and contain a rectangular reception slot called the magazine well, with a latch mechanism that securely holds the inserted magazine in place. Single-shot firearms (e.g. the SIG Sauer 200 STR) typically do not have bottom metals, and modern firearms with metallic chassis (e.g. the SIG Sauer CROSS) do not have separate bottom metals, as the bottom metal's function is already integrated into the chassis. Aftermarket bottom metals are available commercially for various models of modern firearms. It is not uncommon to see a firearm with an internal magazine (e.g. a Remington 700 rifle) modified to accept various models of detachable box magazines (e.g. AICS magazines), simply by replacing the factory bottom metal with an aftermarket one. See also Receiver References Firearm components
Bottom metal
Technology
296
18,832,276
https://en.wikipedia.org/wiki/Septimal%20tritone
A septimal tritone is a tritone (about one half of an octave) that involves the factor seven. There are two septimal tritones, which are inverses of each other. The lesser septimal tritone (also Huygens' tritone) is the musical interval with ratio 7:5 (582.51 cents). The greater septimal tritone (also Euler's tritone) is the interval with ratio 10:7 (617.49 cents). They are also known as the sub-fifth and super-fourth, or subminor fifth and supermajor fourth, respectively. The 7:5 interval (diminished fifth) is equal to a 6:5 minor third plus a 7:6 subminor third. The 10:7 interval (augmented fourth) is equal to a 5:4 major third plus an 8:7 supermajor second, or a 9:7 supermajor third plus a 10:9 major second. The difference between these two is the septimal sixth tone (50:49, 34.98 cents). 12 equal temperament and 22 equal temperament do not distinguish between these tritones; 19 equal temperament does distinguish them but does not match them closely. 31 equal temperament and 41 equal temperament both distinguish between them and closely match them. The lesser septimal tritone is the most consonant tritone when measured by combination tones, harmonic entropy, and period length. Depending on the temperament used, "the" tritone, defined as three whole tones, may be identified as either a lesser septimal tritone (in septimal meantone systems), a greater septimal tritone (when the tempered fifth is around 703 cents), neither (as in 72 equal temperament), or both (in 12 equal temperament only). References Augmented fourths Diminished fifths Tritones 7-limit tuning and intervals
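The cent values quoted above follow directly from the standard definition of the cent, 1200 × log2 of the frequency ratio. A minimal check in Python:

```python
from math import log2

def cents(ratio: float) -> float:
    """Interval size in cents: 1200 * log2(frequency ratio)."""
    return 1200 * log2(ratio)

print(f"{cents(7/5):.2f}")    # 582.51  lesser septimal tritone (Huygens' tritone)
print(f"{cents(10/7):.2f}")   # 617.49  greater septimal tritone (Euler's tritone)
print(f"{cents(50/49):.2f}")  # 34.98   septimal sixth tone, their difference

# The two tritones are inverses: together they span an octave (1200 cents).
print(f"{cents(7/5) + cents(10/7):.1f}")  # 1200.0
```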
Septimal tritone
Physics
389
39,103,747
https://en.wikipedia.org/wiki/KaiC
KaiC is a gene belonging to the KaiABC gene cluster (with KaiA and KaiB) that, together, regulate bacterial circadian rhythms, specifically in cyanobacteria. KaiC encodes the KaiC protein, which interacts with the KaiA and KaiB proteins in a post-translational oscillator (PTO). The PTO is the cyanobacterial master clock and is driven by ordered sequences of phosphorylation of the KaiC protein. Regulation of KaiABC expression and KaiABC phosphorylation is essential for cyanobacterial circadian rhythmicity, and is particularly important for regulating cyanobacterial processes such as nitrogen fixation, photosynthesis, and cell division. Studies have shown similarities to the Drosophila, Neurospora, and mammalian clock models in that the kaiABC regulation of the cyanobacterial slave circadian clock is also based on a transcription–translation feedback loop (TTFL). The KaiC protein has both auto-kinase and auto-phosphatase activity and functions as the circadian regulator in both the PTO and the TTFL. KaiC has been found not only to suppress kaiBC when overexpressed, but also to suppress circadian expression of all genes in the cyanobacterial genome. Evolutionary History Though the KaiABC gene cluster has been found to exist only in cyanobacteria, KaiC has evolutionary homologs in Archaea and Pseudomonadota. It is the oldest circadian gene that has been discovered in prokaryotes. KaiC has a double-domain structure and sequence that classifies it as part of the RecA gene family of ATP-dependent recombinases. Based on a number of single-domain homologous genes in other species, KaiC is hypothesized to have been horizontally transferred from Bacteria to Archaea, eventually forming the double-domain KaiC through duplication and fusion. KaiC's key role in circadian control and homology to RecA suggest its individual evolution before its presence in the KaiABC gene cluster. Discovery Masahiro Ishiura, Takao Kondo, Susan S. Golden, Carl H. Johnson, and their colleagues discovered the gene cluster in 1998 and named it kaiABC, as "kai" means "cycle" in Japanese. They generated 19 different clock mutants that were mapped to the kaiA, kaiB, and kaiC genes, and successfully cloned the gene cluster in the cyanobacterium Synechococcus elongatus. Using a bacterial luciferase reporter to monitor the expression of the clock-controlled gene psbAI in Synechococcus, they investigated and reported the rescue to normal rhythmicity of the long-period clock mutant C44a (with a period of 44 hours) by kaiABC. They inserted wild-type DNA through a pNIBB7942 plasmid vector into the C44a mutant, and generated clones that restored the normal period (a period of 25 hours). They were eventually able to localize the gene region causing this rescue, and observed circadian rhythmicity in the upstream promoter activity of kaiA and kaiB, as well as in the expression of kaiA and kaiBC messenger RNA. They determined that abolishing any of the three kai genes would cause arrhythmicity in the circadian clock and reduce kaiBC promoter activity. KaiC was later found to have both autokinase and autophosphatase activity. These findings suggested that the circadian rhythm was controlled by a TTFL mechanism, which is consistent with other known biological clocks. In 2000, S. elongatus was observed in constant dark (DD) and constant light (LL). In DD, transcription and translation halted due to the absence of light, but the circadian mechanism showed no significant phase shift after transitioning to constant light. 
In 2005, after closer examination of the KaiABC protein interactions, the phosphorylation of KaiC proved to oscillate with daily rhythms in the absence of light. In addition to the TTFL model, the PTO model was hypothesized for the KaiABC phosphorylation cycle. Also in 2005, Nakajima et al. lysed S. elongatus and isolated the KaiABC proteins. In test tubes containing only the KaiABC proteins and ATP, in vitro phosphorylation of KaiC oscillated with a near 24-hour period with a slightly smaller amplitude than the in vivo oscillation, proving that the KaiABC proteins are sufficient to generate a circadian rhythm in the presence of ATP alone. Combined with the TTFL model, KaiABC as a circadian PTO was shown to be the fundamental clock regulator in S. elongatus. Genetics and protein structure On the single circular chromosome of Synechococcus elongatus, the protein-coding gene kaiC is located at position 380696-382255 (its locus tag is syc0334_d). The gene kaiC has paralogs kaiB (located 380338..380646) and kaiA (located 379394..380248). kaiC encodes the protein KaiC (519 amino acids). KaiC acts as a non-specific transcription regulator that represses transcription of the kaiBC promoter. Its crystal structure has been solved at 2.8 Å resolution; it is a homohexameric complex (approximately 360 kDa) with a double-doughnut structure and a central pore which is open at the N-terminal ends and partially sealed at the C-terminal ends due to the presence of six arginine residues. The hexamer has twelve ATP molecules between the N- (CI) and C-terminal (CII) domains, which demonstrate ATPase activity. The CI and CII domains are linked by the N-terminal region of the CII domain. The last 20 residues of the C-terminal of the CII domain protrude from the doughnut to form what is called the A-loop. Interfaces on KaiC's CII domain are sites for both auto-kinase and auto-phosphatase activity, both in vitro and in vivo. KaiC has two P-loops, or Walker A motifs (ATP-/GTP-binding motifs), in the CI and CII domains; the CI domain also contains two DXXG (X represents any amino acid) motifs that are highly conserved among the GTPase superfamily. Evolutionary relationships KaiC shares structural similarities with several other proteins that form hexameric rings, including RecA, DnaB and ATPases. The hexameric rings of KaiC closely resemble those of RecA, with 8 α-helices surrounding a twisted β-sheet made up of 7 strands. This structure favours the binding of a nucleotide at the carboxy-end of the β-sheet. KaiC's structural similarities to these proteins suggest a role for KaiC in transcription regulation. Further, the diameter of the rings in KaiC is suitable to accommodate single-stranded DNA. Additionally, the surface potential at the CII ring and the C-terminal channel opening is mostly positive. The compatibility of the diameter as well as the surface potential charge suggests that DNA may be able to bind to the C-terminal channel opening. Mechanism Regulation of KaiC The Kai proteins regulate genome-wide gene expression. The protein KaiA enhances the phosphorylation of the protein KaiC by binding to the A-loop of the CII domain to promote auto-kinase activity during subjective day. Phosphorylation at subunits occurs in an ordered manner, beginning with phosphorylation of threonine 432 (T432), followed by serine 431 (S431), on the CII domain. This leads to tight stacking of the CII domain against the CI domain. 
KaiB then binds to the exposed B-loop on the CII domain of KaiC and sequesters KaiA from the C-terminals during subjective night, which inhibits phosphorylation and stimulates auto-phosphatase activity. Dephosphorylation of T432 occurs first, followed by S431, returning KaiC to its original state. Disruption of KaiC's CI domain results both in arrhythmia of kaiBC expression and in a reduction of ATP-binding activity; this, along with the in vitro autophosphorylation of KaiC, indicates that ATP binding to KaiC is crucial for Synechococcus circadian oscillation. The phosphorylation status of KaiC has been correlated with Synechococcus clock speed in vivo. Additionally, overexpression of KaiC has been shown to strongly repress the kaiBC promoter, while kaiA overexpression has experimentally enhanced the kaiBC promoter. These positive and negative binding elements mirror a feedback mechanism of rhythm generation conserved across many different species. KaiC phosphorylation oscillates with a period of approximately 24 hours in vitro when the three recombinant Kai proteins are incubated with ATP. The circadian rhythm of KaiC phosphorylation persists in constant darkness, regardless of Synechococcus transcription rates. This oscillation rate is thought to be controlled by the ratio of phosphorylated to unphosphorylated KaiC protein. The KaiC phosphorylation ratio is a main factor in the activation of the kaiBC promoter as well. The kaiBC operon is transcribed in a circadian fashion, and its transcription precedes KaiC buildup by about 6 hours, a lag thought to play a role in the feedback loop. Interdependence of Kai A, B, and C kaiA, kaiB, and kaiC have been shown to be essential genetic components for circadian rhythms in Synechococcus elongatus. Experiments have also shown that KaiC enhances the KaiA-KaiB interaction in yeast cells and in vitro. This implies that a heteromultimeric complex composed of the three Kai proteins may form, with KaiC serving as a bridge between KaiA and KaiB. Alternatively, KaiC may form a heterodimer with KaiA or KaiB to induce a conformational change. Variations in the C-terminal region of each of the proteins suggest functional divergence between the Kai clock proteins; however, there are critical interdependencies between the three paralogs. Function Cyanobacteria are the simplest organisms with a known mechanism for the generation of circadian rhythms. KaiC ATPase activity is temperature compensated from 25 to 50 degrees Celsius and has a Q10 of about 1.1 (Q10 values around 1 indicate temperature compensation). Because the period of KaiC phosphorylation is temperature compensated and agrees with in vivo circadian rhythms, KaiC is thought to be the mechanism for basic circadian timing in Synechococcus. ∆kaiABC mutants, one of the more common mutant types, grow just as well as wild-type individuals but lack rhythmicity. This is evidence that the kaiABC gene cluster is not necessary for growth. KaiC's role in the TTFL In addition to the PTO regulating the autokinase and autophosphatase activities of KaiC, there is also evidence for a TTFL, similar to that of eukaryotes, that governs the circadian rhythm in outputs of the clock. By studying the structure and the activities of KaiC, several roles for KaiC in the TTFL have been suggested. The structural similarity of KaiC to the RecA/DnaB superfamily suggested a possible role for KaiC in direct DNA binding and the promotion of transcription. 
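The ordered cycle described above, phosphorylation of T432 then S431 while KaiA is active, followed by dephosphorylation of T432 then S431 once KaiB sequesters KaiA, can be summarised as a small state machine. The sketch below is purely schematic: it encodes only the ordering of the four phosphoforms, not any reaction kinetics, and the state labels are informal rather than standard nomenclature.

```python
# Schematic of the ordered KaiC phosphorylation cycle (illustrative only).
# Each state names the phosphorylation status of the (T432, S431) pair on CII.
# Kinase branch: KaiA bound to the A-loop promotes autophosphorylation (day).
# Phosphatase branch: KaiB-mediated KaiA sequestration promotes
# dephosphorylation (night).

KINASE_STEP = {
    ("u", "u"): ("pT", "u"),     # T432 is phosphorylated first
    ("pT", "u"): ("pT", "pS"),   # then S431
}
PHOSPHATASE_STEP = {
    ("pT", "pS"): ("u", "pS"),   # T432 is dephosphorylated first
    ("u", "pS"): ("u", "u"),     # then S431, closing the ~24 h cycle
}

state = ("u", "u")
for step in [KINASE_STEP, KINASE_STEP, PHOSPHATASE_STEP, PHOSPHATASE_STEP]:
    state = step[state]
    print(state)
# ('pT', 'u') -> ('pT', 'pS') -> ('u', 'pS') -> ('u', 'u')
```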
KaiC knock-out (KO) experiments determined KaiC to be a negative regulator of the kaiBC promoter sequence, but one acting through a separate SasA/RpaA pathway, as KaiC itself was found not to be a transcription factor. However, elimination of the PTO did not fully eliminate the rhythmicity in kaiBC promoter activity, suggesting that the PTO is not necessary for generating rhythms in the TTFL. The activities of KaiC outside of the PTO are still relatively poorly understood. Circadian Regulation of Cell Division Recent experiments have found that the oscillations of the cell cycle and the circadian rhythms of Synechococcus are linked together through a one-way mechanism. The circadian clock gates cell division, allowing it to proceed only at certain phases. The cell cycle, however, does not appear to have any effect on the circadian clock. When binary fission occurs, the daughter cells inherit the mother cell's circadian clock and are in phase with the mother cell. The circadian gating of cell division may be a protective feature to prevent division at a vulnerable phase. Phases in which KaiC has high ATPase activity do not allow cell division to take place. In mutants with constantly elevated KaiC ATPase activity, the protein CikA is absent. CikA is a major factor in the input pathway, and its absence causes KaiC-dependent cell elongation. Notable research The recreation of a circadian oscillator in vitro in the presence of only KaiA, KaiB, KaiC, and ATP has sparked interest in the relationship between cellular biochemical oscillators and their associated transcription-translation feedback loops (TTFLs). TTFLs have long been assumed to be the core of circadian rhythmicity, but that claim is now being tested again due to the possibility that the biochemical oscillators could constitute the central mechanism of the clock system, regulating and operating within TTFLs that control output and restore proteins essential to the oscillators in organisms, such as the KaiABC system in Synechococcus. Two models have been proposed to describe the relationship between the biochemical and TTFL regulation of circadian rhythms: a master/slave oscillator system, with the TTFL oscillator synchronizing to the biochemical oscillator, and an equally weighted coupled oscillator system, in which both oscillators synchronize with and influence each other. Both are coupled oscillator models that account for the high stability of the timing mechanism within Synechococcus. The biochemical oscillator relies on redundant molecular interactions based on the law of mass action, whereas the TTFL relies on cellular machinery that mediates translation, transcription, and degradation of mRNA and proteins. The different types of interactions driving the two oscillators allow the circadian clock to be resilient to changes within the cell, such as metabolic fluctuation, temperature changes, and cell division. Though the period of the circadian clock is temperature compensated, the phosphorylation of KaiC can be stably entrained to a temperature cycle. The phosphorylation of KaiC was successfully entrained in vitro to temperature cycles with periods between 20 and 28 hours, using temperature steps from 30 °C to 45 °C and vice versa. The results reflect a phase-dependent shift of the KaiC phosphorylation rhythm. The period of the circadian clock was not changed, reinforcing the temperature compensation of the clock mechanism. 
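The significance of the Q10 of about 1.1 quoted above can be made concrete. Q10 relates rates R1 and R2 measured at temperatures T1 and T2 via Q10 = (R2/R1)^(10/(T2 - T1)). The sketch below compares the rate change implied across the 25-50 °C range for the reported Q10 of 1.1 with that for a Q10 of 2, a textbook benchmark for a typical uncompensated enzyme (the benchmark value is an assumption, not from the article):

```python
def rate_ratio(q10: float, t1: float, t2: float) -> float:
    """Overall rate change between temperatures t1 and t2 (deg C) for a given Q10."""
    return q10 ** ((t2 - t1) / 10)

# KaiC ATPase, temperature compensated (Q10 ~ 1.1, per the article):
print(f"{rate_ratio(1.1, 25, 50):.2f}x")  # ~1.27x over 25 degrees
# A typical uncompensated enzyme (Q10 ~ 2, illustrative benchmark):
print(f"{rate_ratio(2.0, 25, 50):.2f}x")  # ~5.66x over the same range
```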
A 2012 study out of Vanderbilt University presents evidence that KaiC acts as a phosphotransferase that hands phosphates from the T432 (threonine residue at position 432) and S431 (serine residue at position 431) sites back to ADP, indicating that KaiC effectively serves as an ATP synthase. Various KaiC mutants have been identified and their phenotypes studied. Many mutants show a change in the period of their circadian rhythms. See also Bacterial circadian rhythms Circadian rhythm Chronobiology Cyanobacteria KaiA KaiB Oscillation Phosphorylation Synechococcus References External links UniProt: Circadian clock protein kinase KaiC Circadian rhythm Gene expression
KaiC
Chemistry,Biology
3,327
302,794
https://en.wikipedia.org/wiki/Depth%20perception
Depth perception is the ability to perceive distance to objects in the world using the visual system and visual perception. It is a major factor in perceiving the world in three dimensions. Depth sensation is the corresponding term for non-human animals, since although it is known that they can sense the distance of an object, it is not known whether they perceive it in the same way that humans do. Depth perception arises from a variety of depth cues. These are typically classified into binocular cues and monocular cues. Binocular cues are based on the receipt of sensory information in three dimensions from both eyes and monocular cues can be observed with just one eye. Binocular cues include retinal disparity, which exploits parallax and vergence. Stereopsis is made possible with binocular vision. Monocular cues include relative size (distant objects subtend smaller visual angles than near objects), texture gradient, occlusion, linear perspective, contrast differences, and motion parallax. Monocular cues Monocular cues provide depth information even when viewing a scene with only one eye. Motion parallax When an observer moves, the apparent relative motion of several stationary objects against a background gives hints about their relative distance. If information about the direction and velocity of movement is known, motion parallax can provide absolute depth information. This effect can be seen clearly when driving in a car. Nearby things pass quickly, while far-off objects appear stationary. Some animals that lack binocular vision due to their eyes having little common field-of-view employ motion parallax more explicitly than humans for depth cueing (for example, some types of birds, which bob their heads to achieve motion parallax, and squirrels, which move in lines orthogonal to an object of interest to do the same). When an object moves toward the observer, the retinal projection of an object expands over a period of time, which leads to the perception of movement in a line toward the observer. Another name for this phenomenon is depth from optical expansion. The dynamic stimulus change enables the observer not only to see the object as moving, but to perceive the distance of the moving object. Thus, in this context, the changing size serves as a distance cue. A related phenomenon is the visual system's capacity to calculate time-to-contact (TTC) of an approaching object from the rate of optical expansion, a useful ability in contexts ranging from driving a car to playing a ball game. However, the calculation of TTC is, strictly speaking, a perception of velocity rather than depth. Kinetic depth effect If a stationary rigid figure (for example, a wire cube) is placed in front of a point source of light so that its shadow falls on a translucent screen, an observer on the other side of the screen will see a two-dimensional pattern of lines. But if the cube rotates, the visual system will extract the necessary information for perception of the third dimension from the movements of the lines, and a cube is seen. This is an example of the kinetic depth effect. The effect also occurs when the rotating object is solid (rather than an outline figure), provided that the projected shadow consists of lines which have definite corners or end points, and that these lines change in both length and orientation during the rotation. 
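The time-to-contact computation mentioned under motion parallax is often formalised as the optical variable tau: for a rigid object approaching at constant speed, TTC is approximately θ / (dθ/dt), where θ is the object's current angular size. A minimal sketch follows; the angle values and sampling interval are invented for illustration.

```python
import math

def time_to_contact(theta: float, d_theta_dt: float) -> float:
    """Tau approximation: TTC ~ angular size / rate of angular expansion.
    Assumes a rigid object approaching at constant speed."""
    return theta / d_theta_dt

# Example: an object growing from 2.0 to 2.1 degrees of visual angle in 0.1 s
theta0, theta1, dt = math.radians(2.0), math.radians(2.1), 0.1
rate = (theta1 - theta0) / dt        # rate of optical expansion, rad/s

print(f"TTC ~ {time_to_contact(theta1, rate):.1f} s")  # ~2.1 s
```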
Perspective The property of parallel lines converging in the distance, at infinity, allows us to reconstruct the relative distance of two parts of an object, or of landscape features. An example would be standing on a straight road, looking down the road, and noticing that the road narrows as it goes off in the distance. Visual perception of perspective in real space, for instance in rooms, in settlements and in nature, is a result of several optical impressions and the interpretation by the visual system. The angle of vision is important for the apparent size. A nearby object is imaged on a larger area on the retina, the same object or an object of the same size further away on a smaller area. The perception of perspective is possible when looking with one eye only, but stereoscopic vision enhances the impression of the spatial. Regardless of whether the light rays entering the eye come from a three-dimensional space or from a two-dimensional image, they hit the inside of the eye on the retina as a surface. What a person sees is based on the reconstruction by their visual system, in which one and the same image on the retina can be interpreted both two-dimensionally and three-dimensionally. If a three-dimensional interpretation has been recognised, it receives a preference and determines the perception. In spatial vision, the horizontal line of sight can play a role. In a view from an upper window of a house, for example, the horizontal line of sight can lie at the level of the second floor. Below this line, the further away objects are, the higher up in the visual field they appear. Above the horizontal line of sight, objects that are further away appear lower than those that are closer. To represent spatial impressions in graphical perspective, one can use a vanishing point. When looking at long geographical distances, perspective effects also partially result from the angle of vision, but not from this alone. Thus, Mont Blanc, the highest mountain in the Alps, can appear from a distance to be lower than a nearer mountain in front of it. Measurements and calculations can be used to determine the proportion contributed by the curvature of the Earth to the subjectively perceived proportions. Relative size If two objects are known to be the same size (for example, two trees) but their absolute size is unknown, relative size cues can provide information about the relative depth of the two objects. If one subtends a larger visual angle on the retina than the other, the object which subtends the larger visual angle appears closer. Familiar size Since the visual angle of an object projected onto the retina decreases with distance, this information can be combined with previous knowledge of the object's size to determine the absolute depth of the object. For example, people are generally familiar with the size of an average automobile. This prior knowledge can be combined with information about the angle it subtends on the retina to determine the absolute depth of an automobile in a scene. Absolute size Even if the actual size of the object is unknown and there is only one object visible, a smaller object seems farther away than a large object that is presented at the same location. Aerial perspective Due to light scattering by the atmosphere, objects that are a great distance away have lower luminance contrast and lower color saturation. Due to this, images seem hazy the farther they are away from a person's point of view. 
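The relative-size and familiar-size cues above share a simple geometric core: an object of physical size s at distance d subtends a visual angle θ = 2·arctan(s / 2d), so a known size plus a measured angle yields distance. A sketch with invented numbers (a car assumed to be 4.5 m long):

```python
import math

def visual_angle(size: float, distance: float) -> float:
    """Visual angle (radians) subtended by an object of given size at a distance."""
    return 2 * math.atan(size / (2 * distance))

def distance_from_angle(size: float, angle: float) -> float:
    """Invert the relation: recover distance from known size and measured angle."""
    return size / (2 * math.tan(angle / 2))

theta = visual_angle(4.5, 50.0)                     # a 4.5 m car seen at 50 m
print(f"{math.degrees(theta):.2f} deg")             # ~5.15 deg
print(f"{distance_from_angle(4.5, theta):.1f} m")   # 50.0 m, recovered
```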
In computer graphics, this is often called "distance fog". The foreground has high contrast; the background has low contrast. Objects differing only in their contrast with a background appear to be at different depths. The color of distant objects is also shifted toward the blue end of the spectrum (for example, distant mountains). Some painters, notably Cézanne, employ "warm" pigments (red, yellow and orange) to bring features forward towards the viewer, and "cool" ones (blue, violet, and blue-green) to indicate the part of a form that curves away from the picture plane. Accommodation Accommodation is an oculomotor cue for depth perception. When humans try to focus on distant objects, the ciliary muscles relax, allowing the eye lens to become thinner, which increases the focal length. The kinesthetic sensations of the contracting and relaxing ciliary muscles (intraocular muscles) are sent to the visual cortex, where they are used for interpreting distance and depth. Accommodation is only effective for distances less than 2 meters; depth perception of more distant objects relies on other cues. Occultation Occultation (also referred to as interposition) happens when near surfaces overlap far surfaces. If one object partially blocks the view of another object, humans perceive it as closer. However, this information only allows the observer to make a "ranking" of relative nearness. Monocular ambient occlusion cues consist of the object's texture and geometry. These phenomena can reduce depth-perception latency in both natural and artificial stimuli. Curvilinear perspective At the outer extremes of the visual field, parallel lines become curved, as in a photo taken through a fisheye lens. This effect, although it is usually eliminated from both art and photos by the cropping or framing of a picture, greatly enhances the viewer's sense of being positioned within a real, three-dimensional space. (Classical perspective has no use for this so-called "distortion", although in fact the "distortions" strictly obey optical laws and provide perfectly valid visual information, just as classical perspective does for the part of the field of vision that falls within its frame.) Texture gradient Fine details on nearby objects can be seen clearly, whereas such details are not visible on faraway objects. Texture gradients are the grain of a surface. For example, on a long gravel road, the shape, size and colour of the gravel near the observer can be clearly seen. In the distance, the road's texture cannot be clearly differentiated. Lighting and shading The way that light falls on an object and reflects off its surfaces, and the shadows that are cast by objects, provide an effective cue for the brain to determine the shape of objects and their position in space. Defocus blur Selective image blurring is very commonly used in photography and video to establish the impression of depth. This can act as a monocular cue even when all other cues are removed. It may contribute to depth perception in natural retinal images, because the depth of focus of the human eye is limited. In addition, there are several depth estimation algorithms based on defocus and blurring. Some jumping spiders are known to use image defocus to judge depth. Elevation When an object is visible relative to the horizon, humans tend to perceive objects which are closer to the horizon as being farther away from them, and objects which are farther from the horizon as being closer to them.
In addition, if an object moves from a position close to the horizon to a position higher or lower than the horizon, it will appear to move closer to the viewer. Ocular parallax Ocular parallax is a perceptual effect where the rotation of the eye causes perspective-dependent image shifts. This happens because the optical center and the rotation center of the eye are not the same. Ocular parallax does not require head movement. It is separate and distinct from motion parallax. Binocular cues Binocular cues provide depth information when viewing a scene with both eyes. Stereopsis, or retinal (binocular) disparity, or binocular parallax Animals that have their eyes placed frontally can also use information derived from the different projections of objects onto each retina to judge depth. By using two images of the same scene obtained from slightly different angles, it is possible to triangulate the distance to an object with a high degree of accuracy. Each eye views the object from a slightly different angle because of the horizontal separation of the eyes. If an object is far away, the disparity of that image falling on both retinas will be small. If the object is close or near, the disparity will be large. It is stereopsis that tricks people into thinking they perceive depth when viewing Magic Eye autostereograms, 3-D movies, and stereoscopic photos. Convergence Convergence is a binocular oculomotor cue for distance and depth perception. When the two eyes fixate on the same object, they converge. The convergence stretches the extraocular muscles; the receptors for this are muscle spindles. As happens with the monocular accommodation cue, kinesthetic sensations from these extraocular muscles also help in distance and depth perception. The angle of convergence is smaller when the eye is fixating on objects which are far away. Convergence is effective for distances less than 10 meters. Shadow stereopsis Antonio Medina Puerta demonstrated that retinal images with no parallax disparity but with different shadows were fused stereoscopically, imparting depth perception to the imaged scene. He named the phenomenon "shadow stereopsis". Shadows are therefore an important stereoscopic cue for depth perception. Of these various cues, only convergence, accommodation and familiar size provide absolute distance information. All other cues are relative (that is, they can only be used to tell which objects are closer relative to others). Stereopsis is merely relative because a greater or lesser disparity for nearby objects could either mean that those objects differ more or less substantially in relative depth or that the foveated object is nearer or further away (the further away a scene is, the smaller the retinal disparity indicating the same depth difference). Theories of evolution The law of Newton–Müller–Gudden Isaac Newton proposed that the optic nerve of humans and other primates has a specific architecture on its way from the eye to the brain. Nearly half of the fibres from the human retina project to the brain hemisphere on the same side as the eye from which they originate. That architecture is labelled hemi-decussation or ipsilateral (same-sided) visual projections (IVP). In most other animals, these nerve fibres cross to the opposite side of the brain.
Bernhard von Gudden showed that the optic chiasm (OC) contains both crossed and uncrossed retinal fibres, and Ramón y Cajal observed that the grade of hemidecussation differs between species. Gordon Lynn Walls formalized a commonly accepted notion into the law of Newton–Müller–Gudden (NGM): the degree of optic-fibre decussation in the optic chiasm is inversely related to the degree of frontal orientation of the optical axes of the eyes. In other words, the number of fibres that do not cross the midline is proportional to the size of the binocular visual field. However, an issue with the Newton–Müller–Gudden law is the considerable interspecific variation in IVP seen in non-mammalian species, variation that is unrelated to mode of life, taxonomic situation, and the overlap of visual fields. Thus, the long-standing general hypothesis was that the arrangement of nerve fibres in the optic chiasm in primates and humans developed primarily to create accurate depth perception, stereopsis; explicitly, that the eyes observe an object from somewhat dissimilar angles and that this difference in angle helps the brain evaluate the distance. The eye-forelimb (EF) hypothesis The eye-forelimb (EF) hypothesis suggests that the need for accurate eye-hand control was key in the evolution of stereopsis. According to the EF hypothesis, stereopsis is an evolutionary spinoff from a more vital process: the construction of the optic chiasm and the position of the eyes (the degree of lateral or frontal direction) are shaped by evolution to help the animal coordinate its limbs (hands, claws, wings or fins). The EF hypothesis postulates that there is selective value in having short neural pathways between the areas of the brain that receive visual information about the hand and the motor nuclei that control the coordination of the hand. The essence of the EF hypothesis is that evolutionary transformation of the OC affects the length, and thereby the speed, of these neural pathways. Having the primate type of OC means that motor neurons controlling, say, right-hand movement, neurons receiving sensory (e.g. tactile) information about the right hand, and neurons obtaining visual information about the right hand will all be situated in the same (left) brain hemisphere. The reverse is true for the left hand: the processing of visual and tactile information and of motor commands all takes place in the right hemisphere. Cats and arboreal (tree-climbing) marsupials have analogous arrangements (between 30 and 45% IVP and forward-directed eyes). The result is that visual information about their forelimbs reaches the proper (executing) hemisphere. Evolution has resulted in small, gradual fluctuations in the direction of the nerve pathways in the OC, and this transformation can go in either direction. Snakes, cyclostomes and other animals that lack extremities have relatively many IVP. Notably, these animals have no limbs (hands, paws, fins or wings) to direct. Besides, the left and right body parts of snakelike animals cannot move independently of each other. For example, if a snake coils clockwise, its left eye sees only the left body-part, and in an anti-clockwise position the same eye will see just the right body-part. For that reason, it is functional for snakes to have some IVP in the OC.
Cyclostome descendants (in other words, most vertebrates) that in the course of evolution ceased to curl and instead developed forelimbs would be favored by achieving completely crossed pathways, as long as the forelimbs were occupied primarily in the lateral visual field. Reptiles such as snakes that lost their limbs would gain by re-acquiring a cluster of uncrossed fibres in their evolution. That seems to have happened, providing further support for the EF hypothesis. The paws of mice are usually busy only in the lateral visual fields, so it is in accordance with the EF hypothesis that mice have laterally situated eyes and very few uncrossed fibres in the OC. The list of animals from across the animal kingdom supporting the EF hypothesis is long. The EF hypothesis applies to essentially all vertebrates, while the NGM law and the stereopsis hypothesis largely apply just to mammals. Even some mammals display important exceptions; for example, dolphins have only crossed pathways although they are predators. It is a common suggestion that predatory animals generally have frontally placed eyes, since that permits them to evaluate the distance to prey, whereas preyed-upon animals have eyes in a lateral position, since that permits them to scan for and detect the enemy in time. However, many predatory animals may also become prey, and several predators, for instance the crocodile, have laterally situated eyes and no IVP at all. That OC architecture provides short nerve connections and optimal eye control of the crocodile's front feet. Birds usually have laterally situated eyes, yet they manage to fly through, for example, a dense wood. In conclusion, the EF hypothesis does not reject a significant role for stereopsis, but proposes that primates' superb depth perception (stereopsis) evolved to be in the service of the hand: the particular architecture of the primate visual system largely evolved to establish rapid neural pathways between neurons involved in hand coordination, assisting the hand in gripping the correct branch. Most open-plain herbivores, especially hoofed grazers, lack binocular vision because they have their eyes on the sides of the head, providing a panoramic, almost 360°, view of the horizon, enabling them to notice the approach of predators from almost any direction. However, most predators have both eyes looking forwards, allowing binocular depth perception and helping them to judge distances when they pounce or swoop down onto their prey. Animals that spend a lot of time in trees take advantage of binocular vision in order to accurately judge distances when rapidly moving from branch to branch. Matt Cartmill, a physical anthropologist and anatomist at Boston University, has criticized this theory, citing other arboreal species which lack binocular vision, such as squirrels and certain birds. Instead, he proposes a "Visual Predation Hypothesis," which argues that ancestral primates were insectivorous predators resembling tarsiers, subject to the same selection pressure for frontal vision as other predatory species. He also uses this hypothesis to account for the specialization of primate hands, which he suggests became adapted for grasping prey, somewhat like the way raptors employ their talons. In art Photographs capturing perspective are two-dimensional images that often illustrate the illusion of depth. Photography utilizes size, environmental context, lighting, texture gradients, and other effects to capture the illusion of depth.
Stereoscopes and Viewmasters, as well as 3D films, employ binocular vision by forcing the viewer to see two images created from slightly different positions (points of view). Charles Wheatstone was the first to discuss binocular disparity as a cue for depth perception. He invented the stereoscope, an instrument with two eyepieces that displays two photographs of the same location or scene taken at slightly different angles. When observed separately by each eye, the pairs of images induce a clear sense of depth. By contrast, a telephoto lens—used in televised sports, for example, to zero in on members of a stadium audience—has the opposite effect. The viewer sees the size and detail of the scene as if it were close enough to touch, but the camera's perspective is still derived from its actual position a hundred meters away, so background faces and objects appear about the same size as those in the foreground. Trained artists are keenly aware of the various methods for indicating spatial depth (color shading, distance fog, perspective and relative size), and take advantage of them to make their works appear "real". The viewer feels it would be possible to reach in and grab the nose of a Rembrandt portrait or an apple in a Cézanne still life—or step inside a landscape and walk around among its trees and rocks. Cubism was based on the idea of incorporating multiple points of view in a painted image, as if to simulate the visual experience of being physically in the presence of the subject, and seeing it from different angles. The radical experiments of Georges Braque, Pablo Picasso, Jean Metzinger's Nu à la cheminée, Albert Gleizes's La Femme aux Phlox, or Robert Delaunay's views of the Eiffel Tower, employ the explosive angularity of Cubism to exaggerate the traditional illusion of three-dimensional space. The subtle use of multiple points of view can be found in the pioneering late work of Cézanne, which both anticipated and inspired the first actual Cubists. Cézanne's landscapes and still lifes powerfully suggest the artist's own highly developed depth perception. At the same time, like the other Post-Impressionists, Cézanne had learned from Japanese art the significance of respecting the flat (two-dimensional) rectangle of the picture itself; Hokusai and Hiroshige ignored or even reversed linear perspective and thereby remind the viewer that a picture can only be "true" when it acknowledges the truth of its own flat surface. By contrast, European "academic" painting was devoted to a sort of Big Lie that the surface of the canvas is only an enchanted doorway to a "real" scene unfolding beyond, and that the artist's main task is to distract the viewer from any disenchanting awareness of the presence of the painted canvas. Cubism, and indeed most modern art, is an attempt to confront, if not resolve, the paradox of suggesting spatial depth on a flat surface, and to explore that inherent contradiction through innovative ways of seeing, as well as new methods of drawing and painting. In robotics and computer vision In robotics and computer vision, depth perception is often achieved using sensors such as RGBD cameras. See also Arboreal theory Cyclopean stimuli Optical illusion Orthoptics Peripheral vision Senses Vision therapy Visual cliff Vista paradox References Notes Bibliography External links Depth perception example | GO Illusions. Monocular Giants What is Binocular (Two-eyed) Depth Perception? Why Some People Can't See in Depth Space perception | Webvision.
Depth perception | Webvision. Make3D. Depth Cues for Film, TV and Photography Vision Stereoscopy Visual perception
Depth perception
Physics
4,940
8,656,824
https://en.wikipedia.org/wiki/Jordanian%20Engineers%20Association
The Jordanian Engineers Association (in Arabic: نقابة المهندسين الأردنيين) was established as a society for engineers in 1948, and was licensed in 1949. The first general assembly of the Engineering Professionals Association was established in 1958. Tawfiq Marar became the first Engineers' Association president. The Association has 11 branches in Jordan. There are two centers, in Amman and Jerusalem. The first law of the Association was enacted in 1972. The Association has an independent legal personality run by a board elected by the general assembly in accordance with the provisions of the association law, and the association president represents it before the courts, administrative entities, and other departments. The Association issues an annual report stating its achievements and clarifying its financial position. Every fund of the association also issues its respective annual report, and the administrative and financial reports are presented to the general assemblies for approval. Membership As of 2017, the JEA had 143,549 registered members, divided into six engineering chapters. Approximately 25% of its members are female, although this is expected to rise to 30% within the next 5 years. References External links JEA website Engineering societies Trade unions in Jordan Organisations based in Jordan
Jordanian Engineers Association
Engineering
256
2,902,912
https://en.wikipedia.org/wiki/57%20Aquilae
57 Aquilae (abbreviated 57 Aql) is a double star in the constellation of Aquila. 57 Aquilae is its Flamsteed designation. The primary star has an apparent visual magnitude of 5.70, while the secondary is magnitude 6.48. The pair have an angular separation of 35.624 arcseconds and probably form a wide binary star system. The two components have differing estimated distances; however, the margins of error for their respective distance estimates overlap, indicating that they may actually be located much closer to each other. Both stars are massive, B-type main sequence stars with rapid rotation rates. References External links CCDM J19546-0814 HR7593 Image 57 Aquilae 188293 Binary stars Aquila (constellation) Double stars B-type main-sequence stars Aquilae, 57 097966 Durchmusterung objects 7593 4
57 Aquilae
Astronomy
201
11,775,351
https://en.wikipedia.org/wiki/Labrella%20coryli
Labrella coryli is an ascomycete fungus. It is a plant pathogen that causes anthracnose on hazelnut. It was not found in North America prior to 1951. References External links Index Fungorum USDA ARS Fungal Database Fungal tree pathogens and diseases Hazelnut tree diseases Enigmatic Ascomycota taxa Fungus species
Labrella coryli
Biology
73
5,172,550
https://en.wikipedia.org/wiki/K%20Centauri
K Centauri is a possible binary star in the southern constellation of Centaurus. It has a white hue and is bright enough to be visible to the naked eye, having an apparent visual magnitude of +5.04. K Centauri is located at a distance of approximately 410 light years from the Sun based on parallax, and it has an absolute magnitude of −0.91. This is an ordinary A-type main-sequence star with a stellar classification of A1V. It is spinning rapidly with a projected rotational velocity of 220 km/s, giving it a pronounced equatorial bulge: the equatorial radius is 25% larger than the polar radius. Analysis of Hipparcos and Gaia astrometry suggests that the relatively large margins of error in the calculated parallax may be due to orbital motion caused by an unseen companion; the companion would be an object in orbit around the primary. References A-type main-sequence stars Centaurus Centauri, K Durchmusterung objects 117150 065810 5071
K Centauri
Astronomy
210
14,021
https://en.wikipedia.org/wiki/History%20of%20astronomy
The history of astronomy focuses on the contributions civilizations have made to further their understanding of the universe beyond Earth's atmosphere. Astronomy is one of the oldest natural sciences, achieving a high level of success in the second half of the first millennium. Astronomy has origins in the religious, mythological, cosmological, calendrical, and astrological beliefs and practices of prehistory. Early astronomical records date back to the Babylonians around 1000 BCE. There is also astronomical evidence of interest from early Chinese, Central American and North European cultures. Astronomy was used by early cultures for a variety of reasons. These include timekeeping, navigation, spiritual and religious practices, and agricultural planning. Ancient astronomers used their observations to chart the skies in an effort to learn about the workings of the universe. During the Renaissance period, revolutionary ideas emerged about astronomy. One such idea was contributed in 1543 by Polish astronomer Nicolaus Copernicus, who developed a heliocentric model that depicted the planets orbiting the Sun. This was the start of the Copernican Revolution. The success of astronomy, compared to other sciences, was achieved for several reasons. Astronomy was the first science to have a mathematical foundation and sophisticated procedures, such as the use of armillary spheres and quadrants. This provided a solid base for collecting and verifying data. Throughout the years, astronomy has broadened into multiple subfields such as astrophysics, observational astronomy, theoretical astronomy, and astrobiology. Early history Early cultures identified celestial objects with gods and spirits. They related these objects (and their movements) to phenomena such as rain, drought, seasons, and tides. It is generally believed that the first astronomers were priests, and that they understood celestial objects and events to be manifestations of the divine, hence early astronomy's connection to what is now called astrology. A 32,500-year-old carved ivory mammoth tusk could contain the oldest known star chart (resembling the constellation Orion). It has also been suggested that drawings on the wall of the Lascaux caves in France dating from 33,000 to 10,000 years ago could be a graphical representation of the Pleiades, the Summer Triangle, and the Northern Crown. Ancient structures with possibly astronomical alignments (such as Stonehenge) probably fulfilled astronomical, religious, and social functions. Calendars of the world have often been set by observations of the Sun and Moon (marking the day, month and year), and were important to agricultural societies, in which the harvest depended on planting at the correct time of year, and for which the nearly full moon was the only lighting for night-time travel into city markets. The common modern calendar is based on the Roman calendar. Although originally a lunar calendar, it broke the traditional link of the month to the phases of the Moon and divided the year into twelve almost-equal months, which mostly alternated between thirty and thirty-one days. Julius Caesar instigated calendar reform in 46 BC and introduced what is now called the Julian calendar, based upon the 365¼-day year length originally proposed by the 4th-century BC Greek astronomer Callippus. Prehistoric Europe Ancient astronomical artifacts have been found throughout Europe.
The artifacts demonstrate that Neolithic and Bronze Age Europeans had a sophisticated knowledge of mathematics and astronomy. Among the discoveries are: Paleolithic archaeologist Alexander Marshack put forward a theory in 1972 that bone sticks from locations like Africa and Europe from possibly as long ago as 35,000 BC could be marked in ways that tracked the Moon's phases, an interpretation that has met with criticism. The Warren Field calendar in the Dee River valley of Scotland's Aberdeenshire. First excavated in 2004 but only in 2013 revealed as a find of huge significance, it is to date the oldest known calendar, created around 8000 BC and predating all other calendars by some 5,000 years. The calendar takes the form of an early Mesolithic monument containing a series of 12 pits which appear to help the observer track lunar months by mimicking the phases of the Moon. It also aligns to sunrise at the winter solstice, thus coordinating the solar year with the lunar cycles. The monument had been maintained and periodically reshaped, perhaps up to hundreds of times, in response to shifting solar/lunar cycles, over the course of 6,000 years, until the calendar fell out of use around 4,000 years ago. Goseck circle is located in Germany and belongs to the linear pottery culture. First discovered in 1991, its significance was only clear after results from archaeological digs became available in 2004. The site is one of hundreds of similar circular enclosures built in a region encompassing Austria, Germany, and the Czech Republic during a 200-year period starting shortly after 5000 BC. The Nebra sky disc is a Bronze Age bronze disc that was buried in Germany, not far from the Goseck circle, around 1600 BC. It displays a blue-green patina (from oxidization) inlaid with gold symbols. Found by archaeological thieves in 1999 and recovered in Switzerland in 2002, it was soon recognized as a spectacular discovery, among the most important of the 20th century. Investigations revealed that the object had been in use around 400 years before burial (2000 BC), but that its use had been forgotten by the time of burial. The inlaid gold depicted the full moon, a crescent moon about 4 or 5 days old, and the Pleiades star cluster in a specific arrangement forming the earliest known depiction of celestial phenomena. Twelve lunar months pass in 354 days, requiring a calendar to insert a leap month every two or three years in order to keep synchronized with the solar year's seasons (making it lunisolar). The earliest known descriptions of this coordination were recorded by the Babylonians in the 6th or 7th century BC, over one thousand years later. Those descriptions verified ancient knowledge of the Nebra sky disc's celestial depiction as the precise arrangement needed to judge when to insert the intercalary month into a lunisolar calendar, making it an astronomical clock for regulating such a calendar a thousand or more years before any other known method. The Kokino site, discovered in 2001, sits atop an extinct volcanic cone, occupying about 0.5 hectares overlooking the surrounding countryside in North Macedonia. A Bronze Age astronomical observatory was constructed there around 1900 BC and continuously served the nearby community that lived there until about 700 BC. The central space was used to observe the rising of the Sun and full moon. Three markings locate sunrise at the summer and winter solstices and at the two equinoxes.
Four more give the minimum and maximum declinations of the full moon in summer and in winter. Two measure the lengths of lunar months. Together, they reconcile solar and lunar cycles in marking the 235 lunations that occur during 19 solar years, regulating a lunar calendar. On a platform separate from the central space, at lower elevation, four stone seats (thrones) were made in north–south alignment, together with a trench marker cut in the eastern wall. This marker allows the rising Sun's light to fall on only the second throne, at midsummer (about July 31). It was used for a ritual ceremony linking the ruler to the local sun god, and also marked the end of the growing season and time for harvest. Golden hats of Germany, France and Switzerland dating from 1400 to 800 BC are associated with the Bronze Age Urnfield culture. The Golden hats are decorated with a spiral motif of the Sun and the Moon. They were probably a kind of calendar used to calibrate between the lunar and solar calendars. Modern scholarship has demonstrated that the ornamentation of the gold leaf cones of the Schifferstadt type, to which the Berlin Gold Hat example belongs, represents systematic sequences in terms of number and types of ornaments per band. A detailed study of the Berlin example, which is the only fully preserved one, showed that the symbols probably represent a lunisolar calendar. The object would have permitted the determination of dates or periods in both lunar and solar calendars. Ancient times Mesopotamia The origins of astronomy can be found in Mesopotamia, the "land between the rivers" Tigris and Euphrates, where the ancient kingdoms of Sumer, Assyria, and Babylonia were located. A form of writing known as cuneiform emerged among the Sumerians around 3500–3000 BC. Our knowledge of Sumerian astronomy is indirect, via the earliest Babylonian star catalogues dating from about 1200 BC. The fact that many star names appear in Sumerian suggests a continuity reaching into the Early Bronze Age. Astral theology, which gave planetary gods an important role in Mesopotamian mythology and religion, began with the Sumerians. They also used a sexagesimal (base 60) place-value number system, which simplified the task of recording very large and very small numbers. The modern practice of dividing a circle into 360 degrees, or an hour into 60 minutes, began with the Sumerians. For more information, see the articles on Babylonian numerals and mathematics. Classical sources frequently use the term Chaldeans for the astronomers of Mesopotamia, who were, in reality, priest-scribes specializing in astrology and other forms of divination. The first evidence of recognition that astronomical phenomena are periodic and of the application of mathematics to their prediction is Babylonian. Tablets dating back to the Old Babylonian period document the application of mathematics to the variation in the length of daylight over a solar year. Centuries of Babylonian observations of celestial phenomena are recorded in the series of cuneiform tablets known as the Enūma Anu Enlil. The oldest significant astronomical text that we possess is Tablet 63 of the Enūma Anu Enlil, the Venus tablet of Ammi-saduqa, which lists the first and last visible risings of Venus over a period of about 21 years and is the earliest evidence that the phenomena of a planet were recognized as periodic.
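The lunisolar reconciliation running through the monuments described above, and through Babylonian intercalation practice, rests on a near-equality between 235 lunations and 19 solar years. A small illustrative check using modern mean values (not figures any ancient culture used) shows how close the match is:

```python
SYNODIC_MONTH = 29.530589  # days, mean time from new moon to new moon
TROPICAL_YEAR = 365.2422   # days, mean solar year

print(235 * SYNODIC_MONTH)  # about 6939.69 days
print(19 * TROPICAL_YEAR)   # about 6939.60 days

# The two cycles drift apart by only about two hours every 19 years,
# which is why inserting 7 leap months over 19 years (235 = 19*12 + 7)
# keeps a lunisolar calendar aligned with the seasons.
```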
The MUL.APIN contains catalogues of stars and constellations as well as schemes for predicting heliacal risings and the settings of the planets, lengths of daylight as measured by a water clock, gnomon, shadows, and intercalations. The Babylonian GU text arranges stars in 'strings' that lie along declination circles and thus measure right-ascensions or time-intervals, and also employs the stars of the zenith, which are also separated by given right-ascensional differences. A significant increase in the quality and frequency of Babylonian observations appeared during the reign of Nabonassar (747–733 BC). The systematic records of ominous phenomena in Babylonian astronomical diaries that began at this time allowed for the discovery of a repeating 18-year cycle of lunar eclipses, for example. The Greek astronomer Ptolemy later used Nabonassar's reign to fix the beginning of an era, since he felt that the earliest usable observations began at this time. The last stages in the development of Babylonian astronomy took place during the time of the Seleucid Empire (323–60 BC). In the 3rd century BC, astronomers began to use "goal-year texts" to predict the motions of the planets. These texts compiled records of past observations to find repeating occurrences of ominous phenomena for each planet. About the same time, or shortly afterwards, astronomers created mathematical models that allowed them to predict these phenomena directly, without consulting records. A notable Babylonian astronomer from this time was Seleucus of Seleucia, who was a supporter of the heliocentric model. Babylonian astronomy was the basis for much of what was done in Greek and Hellenistic astronomy, in classical Indian astronomy, in Sassanian Iran, in Byzantium, in Syria, in Islamic astronomy, in Central Asia, and in Western Europe. India Astronomy in the Indian subcontinent dates back to the period of the Indus Valley Civilisation during the 3rd millennium BC, when it was used to create calendars. As the Indus Valley civilization did not leave behind written documents, the oldest extant Indian astronomical text is the Vedanga Jyotisha, dating from the Vedic period. The Vedanga Jyotisha is attributed to Lagadha and has an internal date of approximately 1350 BC; it describes rules for tracking the motions of the Sun and the Moon for the purposes of ritual. It is available in two recensions, one belonging to the Rig Veda and the other to the Yajur Veda. According to the Vedanga Jyotisha, in a yuga or "era", there are 5 solar years, 67 lunar sidereal cycles, 1,830 days, 1,835 sidereal days and 62 synodic months. During the 6th century, astronomy was influenced by the Greek and Byzantine astronomical traditions. Aryabhata (476–550), in his magnum opus Aryabhatiya (499), propounded a computational system based on a planetary model in which the Earth was taken to be spinning on its axis and the periods of the planets were given with respect to the Sun. He accurately calculated many astronomical constants, such as the periods of the planets, times of the solar and lunar eclipses, and the instantaneous motion of the Moon. Early followers of Aryabhata's model included Varāhamihira, Brahmagupta, and Bhāskara II. Astronomy was advanced during the Shunga Empire and many star catalogues were produced during this time. The Shunga period is known as the "Golden age of astronomy in India". It saw the development of calculations for the motions and places of various planets, their rising and setting, conjunctions, and the calculation of eclipses.
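The internal arithmetic of the Vedanga Jyotisha yuga quoted above can be compared against modern mean month lengths. This is purely an illustrative consistency check, not a reconstruction of the original method:

```python
SYNODIC_MONTH = 29.530589   # days, new moon to new moon
SIDEREAL_MONTH = 27.321661  # days, return to the same stars

print(62 * SYNODIC_MONTH)   # about 1830.9 days for 62 synodic months
print(67 * SIDEREAL_MONTH)  # about 1830.6 days for 67 sidereal cycles
print(1830 / 5)             # 366.0 civil days per solar year in this scheme

# The 1,835 sidereal days against 1,830 solar days likewise reflects one
# extra rotation of the sky per year relative to the count of solar days.
```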
Indian astronomers by the 6th century believed that comets were celestial bodies that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varahamihira and Bhadrabahu, and the 10th-century astronomer Bhattotpala listed the names and estimated periods of certain comets, but it is not known how these figures were calculated or how accurate they were. Greece and Hellenistic world The Ancient Greeks developed astronomy, which they treated as a branch of mathematics, to a highly sophisticated level. The first geometrical, three-dimensional models to explain the apparent motion of the planets were developed in the 4th century BC by Eudoxus of Cnidus and Callippus of Cyzicus. Their models were based on nested homocentric spheres centered upon the Earth. Their younger contemporary Heraclides Ponticus proposed that the Earth rotates around its axis. A different approach to celestial phenomena was taken by natural philosophers such as Plato and Aristotle. They were less concerned with developing mathematical predictive models than with developing an explanation of the reasons for the motions of the Cosmos. In his Timaeus, Plato described the universe as a spherical body divided into circles carrying the planets and governed according to harmonic intervals by a world soul. Aristotle, drawing on the mathematical model of Eudoxus, proposed that the universe was made of a complex system of concentric spheres, whose circular motions combined to carry the planets around the Earth. This basic cosmological model prevailed, in various forms, until the 16th century. In the 3rd century BC Aristarchus of Samos was the first to suggest a heliocentric system, although only fragmentary descriptions of his idea survive. Eratosthenes estimated the circumference of the Earth with great accuracy (see also: history of geodesy). Greek geometrical astronomy developed away from the model of concentric spheres to employ more complex models in which an eccentric circle would carry around a smaller circle, called an epicycle, which in turn carried around a planet. The first such model is attributed to Apollonius of Perga, and further developments in it were carried out in the 2nd century BC by Hipparchus of Nicaea. Hipparchus made a number of other contributions, including the first measurement of precession and the compilation of the first comprehensive star catalog, in which he proposed the modern system of apparent magnitudes. The Antikythera mechanism, an ancient Greek mechanical device for calculating the movements of the Sun and the Moon, and possibly the planets, dates from about 150–100 BC and was the first ancestor of the astronomical computer. It was discovered in an ancient shipwreck off the Greek island of Antikythera, between Kythera and Crete. The device became famous for its use of a differential gear, previously believed to have been invented in the 16th century, and the miniaturization and complexity of its parts, comparable to a clock made in the 18th century. The original mechanism is displayed in the Bronze collection of the National Archaeological Museum of Athens, accompanied by a replica.
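The deferent-and-epicycle construction described above is easy to render as a parametric sketch. The following Python fragment is a modern illustration of the geometry only; the radii and angular speeds are arbitrary assumed values, not parameters from any ancient model:

```python
import math

def epicycle_position(t, R, r, w_def, w_epi):
    """Planet position in a simple deferent-and-epicycle model.

    The epicycle's centre moves on a circle of radius R (the deferent)
    at angular speed w_def; the planet rides a smaller circle of radius r
    at angular speed w_epi around that moving centre.
    """
    cx, cy = R * math.cos(w_def * t), R * math.sin(w_def * t)
    return cx + r * math.cos(w_epi * t), cy + r * math.sin(w_epi * t)

# Sampling the path shows the retrograde loops the construction was
# designed to reproduce when the epicycle turns faster than the deferent.
for t in range(20):
    print(epicycle_position(t, R=10.0, r=3.0, w_def=0.1, w_epi=1.0))
```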
Ptolemaic system Depending on the historian's viewpoint, the acme or corruption of Classical physical astronomy is seen with Ptolemy, a Greco-Roman astronomer from Alexandria of Egypt, who wrote the classic comprehensive presentation of geocentric astronomy, the Megale Syntaxis (Great Synthesis), better known by its Arabic title Almagest, which had a lasting effect on astronomy up to the Renaissance. In his Planetary Hypotheses, Ptolemy ventured into the realm of cosmology, developing a physical model of his geometric system, in a universe many times smaller than the more realistic conception of Aristarchus of Samos four centuries earlier. Egypt The precise orientation of the Egyptian pyramids affords a lasting demonstration of the high degree of technical skill in watching the heavens attained in the 3rd millennium BC. It has been shown that the pyramids were aligned towards the pole star, which, because of the precession of the equinoxes, was at that time Thuban, a faint star in the constellation of Draco. Evaluation of the site of the temple of Amun-Re at Karnak, taking into account the change over time of the obliquity of the ecliptic, has shown that the Great Temple was aligned on the rising of the midwinter Sun. The length of the corridor down which sunlight would travel would have limited illumination at other times of the year. The Egyptians also tracked the position of Sirius (the dog star), which they believed was Anubis, their jackal-headed god, moving through the heavens. Its position was critical to their civilisation, as when it rose heliacally in the east before sunrise it foretold the flooding of the Nile. It is also the origin of the phrase 'dog days of summer'. Astronomy played a considerable part in religious matters for fixing the dates of festivals and determining the hours of the night. The titles of several temple books are preserved recording the movements and phases of the Sun, Moon and stars. The rising of Sirius (Egyptian: Sopdet, Greek: Sothis) at the beginning of the inundation was a particularly important point to fix in the yearly calendar. Writing in the Roman era, Clement of Alexandria gives some idea of the importance of astronomical observations to the sacred rites: And after the Singer advances the Astrologer (ὡροσκόπος), with a horologium (ὡρολόγιον) in his hand, and a palm (φοίνιξ), the symbols of astrology. He must know by heart the Hermetic astrological books, which are four in number. Of these, one is about the arrangement of the fixed stars that are visible; one on the positions of the Sun and Moon and five planets; one on the conjunctions and phases of the Sun and Moon; and one concerns their risings. The Astrologer's instruments (horologium and palm) are a plumb line and sighting instrument. They have been identified with two inscribed objects in the Berlin Museum: a short handle from which a plumb line was hung, and a palm branch with a sight-slit in the broader end. The latter was held close to the eye, the former in the other hand, perhaps at arm's length. The "Hermetic" books which Clement refers to are the Egyptian theological texts, which probably have nothing to do with Hellenistic Hermetism. From the tables of stars on the ceiling of the tombs of Rameses VI and Rameses IX it seems that for fixing the hours of the night a man seated on the ground faced the Astrologer in such a position that the line of observation of the pole star passed over the middle of his head.
On the different days of the year each hour was determined by a fixed star culminating or nearly culminating in it, and the position of these stars at the time is given in the tables as in the centre, on the left eye, on the right shoulder, etc. According to the texts, in founding or rebuilding temples the north axis was determined by the same apparatus, and we may conclude that it was the usual one for astronomical observations. In careful hands it might give results of a high degree of accuracy. China The astronomy of East Asia began in China, where the system of solar terms was completed during the Warring States period. Chinese astronomical knowledge was later introduced throughout East Asia. Astronomy in China has a long history. Detailed records of astronomical observations were kept from about the 6th century BC until the introduction of Western astronomy and the telescope in the 17th century. Chinese astronomers were able to precisely predict eclipses. Much of early Chinese astronomy was for the purpose of timekeeping. The Chinese used a lunisolar calendar, but because the cycles of the Sun and the Moon are different, astronomers often prepared new calendars and made observations for that purpose. Astrological divination was also an important part of astronomy. Astronomers took careful note of "guest stars" which suddenly appeared among the fixed stars. They were the first to record a supernova, in the Astrological Annals of the Houhanshu in 185 AD. Also, the supernova that created the Crab Nebula in 1054 is an example of a "guest star" observed by Chinese astronomers, although it was not recorded by their European contemporaries. Ancient astronomical records of phenomena like supernovae and comets are sometimes used in modern astronomical studies. The world's first star catalogue was made by Gan De, a Chinese astronomer, in the 4th century BC. Mesoamerica Maya astronomical codices include detailed tables for calculating phases of the Moon, the recurrence of eclipses, and the appearance and disappearance of Venus as morning and evening star. The Maya based their calendrics on the carefully calculated cycles of the Pleiades, the Sun, the Moon, Venus, Jupiter, Saturn and Mars; they also had a precise description of eclipses, as depicted in the Dresden Codex, and of the ecliptic or zodiac, and the Milky Way was crucial in their cosmology. A number of important Maya structures are believed to have been oriented toward the extreme risings and settings of Venus. To the ancient Maya, Venus was the patron of war, and many recorded battles are believed to have been timed to the motions of this planet. Mars is also mentioned in preserved astronomical codices and early mythology. Although the Maya calendar was not tied to the Sun, John Teeple has proposed that the Maya calculated the solar year to somewhat greater accuracy than the Gregorian calendar. Both astronomy and an intricate numerological scheme for the measurement of time were vitally important components of Maya religion. The Maya believed that the Earth was the center of all things, and that the stars, moons, and planets were gods. They believed that their movements were the gods traveling between the Earth and other celestial destinations. Many key events in Maya culture were timed around celestial events, in the belief that certain gods would be present.
Middle Ages Middle East The Arabic and the Persian world under Islam had become highly cultured, and many important works of knowledge from Greek astronomy and Indian astronomy and Persian astronomy were translated into Arabic, used and stored in libraries throughout the area. An important contribution by Islamic astronomers was their emphasis on observational astronomy. This led to the emergence of the first astronomical observatories in the Muslim world by the early 9th century. Zij star catalogues were produced at these observatories. In the 9th century, the Persian astrologer Albumasar was regarded as one of the greatest astrologers of his time. His practical manuals for training astrologers profoundly influenced Muslim intellectual history and, through translations, that of western Europe and Byzantium. In the 10th century, Albumasar's "Introduction" was one of the most important sources for the recovery of Aristotle for medieval European scholars. Abd al-Rahman al-Sufi (Azophi) carried out observations on the stars and described their positions, magnitudes, brightness, and colour, and provided drawings for each constellation in his Book of Fixed Stars. He also gave the first descriptions and pictures of "A Little Cloud", now known as the Andromeda Galaxy. He mentions it as lying before the mouth of a Big Fish, an Arabic constellation. This "cloud" was apparently commonly known to the Isfahan astronomers, very probably before 905 AD. The first recorded mention of the Large Magellanic Cloud was also given by al-Sufi. In 1006, Ali ibn Ridwan observed SN 1006, the brightest supernova in recorded history, and left a detailed description of the temporary star. In the late 10th century, a huge observatory was built near Tehran, Iran, by the astronomer Abu-Mahmud al-Khujandi, who observed a series of meridian transits of the Sun, which allowed him to calculate the tilt of the Earth's axis relative to the plane of its orbit around the Sun. He noted that measurements by earlier (Indian, then Greek) astronomers had found higher values for this angle, possible evidence that the axial tilt is not constant but was in fact decreasing. In 11th-century Persia, Omar Khayyám compiled many tables and performed a reformation of the calendar that was more accurate than the Julian and came close to the Gregorian. Other Muslim advances in astronomy included the collection and correction of previous astronomical data, the resolution of significant problems in the Ptolemaic model, the development of the universal latitude-independent astrolabe by Arzachel, the invention of numerous other astronomical instruments, Ja'far Muhammad ibn Mūsā ibn Shākir's belief that the heavenly bodies and celestial spheres were subject to the same physical laws as Earth, and the introduction of empirical testing by Ibn al-Shatir, who produced the first model of lunar motion which matched physical observations. Natural philosophy (particularly Aristotelian physics) was separated from astronomy by Ibn al-Haytham (Alhazen) in the 11th century, by Ibn al-Shatir in the 14th century, and by Qushji in the 15th century. India Bhāskara II (1114–1185) was the head of the astronomical observatory at Ujjain, continuing the mathematical tradition of Brahmagupta. He wrote the Siddhantasiromani, which consists of two parts: Goladhyaya (sphere) and Grahaganita (mathematics of the planets). He also calculated the time taken for the Earth to orbit the Sun to 9 decimal places. The Buddhist University of Nalanda at the time offered formal courses in astronomical studies.
Other important astronomers from India include Madhava of Sangamagrama, Nilakantha Somayaji and Jyeshtadeva, who were members of the Kerala school of astronomy and mathematics from the 14th century to the 16th century. Nilakantha Somayaji, in his Aryabhatiyabhasya, a commentary on Aryabhata's Aryabhatiya, developed his own computational system for a partially heliocentric planetary model, in which Mercury, Venus, Mars, Jupiter and Saturn orbit the Sun, which in turn orbits the Earth, similar to the Tychonic system later proposed by Tycho Brahe in the late 16th century. Nilakantha's system, however, was mathematically more efficient than the Tychonic system, due to correctly taking into account the equation of the centre and latitudinal motion of Mercury and Venus. Most astronomers of the Kerala school of astronomy and mathematics who followed him accepted his planetary model. Western Europe After the significant contributions of Greek scholars to the development of astronomy, it entered a relatively static era in Western Europe from the Roman era through the 12th century. This lack of progress has led some astronomers to assert that nothing happened in Western European astronomy during the Middle Ages. Recent investigations, however, have revealed a more complex picture of the study and teaching of astronomy in the period from the 4th to the 16th centuries. Western Europe entered the Middle Ages with great difficulties that affected the continent's intellectual production. The advanced astronomical treatises of classical antiquity were written in Greek, and with the decline of knowledge of that language, only simplified summaries and practical texts were available for study. The most influential writers to pass on this ancient tradition in Latin were Macrobius, Pliny, Martianus Capella, and Calcidius. In the 6th century Bishop Gregory of Tours noted that he had learned his astronomy from reading Martianus Capella, and went on to employ this rudimentary astronomy to describe a method by which monks could determine the time of prayer at night by watching the stars. In the 7th century the English monk Bede of Jarrow published an influential text, On the Reckoning of Time, providing churchmen with the practical astronomical knowledge needed to compute the proper date of Easter using a procedure called the computus. This text remained an important element of the education of clergy from the 7th century until well after the rise of the Universities in the 12th century. The range of surviving ancient Roman writings on astronomy and the teachings of Bede and his followers began to be studied in earnest during the revival of learning sponsored by the emperor Charlemagne. By the 9th century rudimentary techniques for calculating the position of the planets were circulating in Western Europe; medieval scholars recognized their flaws, but texts describing these techniques continued to be copied, reflecting an interest in the motions of the planets and in their astrological significance. Building on this astronomical background, in the 10th century European scholars such as Gerbert of Aurillac began to travel to Spain and Sicily to seek out learning which they had heard existed in the Arabic-speaking world. There they first encountered various practical astronomical techniques concerning the calendar and timekeeping, most notably those dealing with the astrolabe. 
Soon scholars such as Hermann of Reichenau were writing texts in Latin on the uses and construction of the astrolabe, and others, such as Walcher of Malvern, were using the astrolabe to observe the times of eclipses in order to test the validity of computistical tables. By the 12th century, scholars were traveling to Spain and Sicily to seek out more advanced astronomical and astrological texts, which they translated into Latin from Arabic and Greek to further enrich the astronomical knowledge of Western Europe. The arrival of these new texts coincided with the rise of the universities in medieval Europe, in which they soon found a home. Reflecting the introduction of astronomy into the universities, John of Sacrobosco wrote a series of influential introductory astronomy textbooks: the Sphere, a Computus, a text on the Quadrant, and another on Calculation. In the 14th century, Nicole Oresme, later bishop of Lisieux, showed that neither the scriptural texts nor the physical arguments advanced against the movement of the Earth were demonstrative, and adduced the argument of simplicity for the theory that the Earth moves, and not the heavens. However, he concluded "everyone maintains, and I think myself, that the heavens do move and not the earth: For God hath established the world which shall not be moved." In the 15th century, Cardinal Nicholas of Cusa suggested in some of his scientific writings that the Earth revolved around the Sun, and that each star is itself a distant sun. Renaissance and Early Modern Europe Copernican Revolution During the Renaissance, astronomy began to undergo a revolution in thought known as the Copernican Revolution, which takes its name from the astronomer Nicolaus Copernicus, who proposed a heliocentric system in which the planets revolved around the Sun and not the Earth. His De revolutionibus orbium coelestium was published in 1543. While in the long term this was a very controversial claim, at the very beginning it brought only minor controversy. The theory became the dominant view because many figures, most notably Galileo Galilei, Johannes Kepler and Isaac Newton, championed and improved upon the work. Other figures also aided this new model despite not believing the overall theory, like Tycho Brahe, with his well-known observations. Brahe, a Danish noble, was an essential astronomer in this period. He came on the astronomical scene with the publication of De nova stella, in which he disproved conventional wisdom on the supernova SN 1572 (as bright as Venus at its peak, SN 1572 later became invisible to the naked eye, disproving the Aristotelian doctrine of the immutability of the heavens). He also created the Tychonic system, in which the Sun, Moon, and stars revolve around the Earth, but the other five planets revolve around the Sun. This system blended the mathematical benefits of the Copernican system with the "physical benefits" of the Ptolemaic system. This was one of the systems people believed in when they did not accept heliocentrism but could no longer accept the Ptolemaic system. He is best known for his highly accurate observations of the stars and the Solar System. Later he moved to Prague and continued his work. In Prague he was at work on the Rudolphine Tables, which were not finished until after his death. The Rudolphine Tables were a star map designed to be more accurate than either the Alfonsine Tables, made in the 1300s, or the Prutenic Tables, which had become inaccurate.
He was assisted at this time by Johannes Kepler, who would later use Brahe's observations to finish Brahe's works and to develop his own theories. After the death of Brahe, Kepler was deemed his successor and was given the job of completing Brahe's uncompleted works, like the Rudolphine Tables. He completed the Rudolphine Tables in 1624, although they were not published for several years. Like many other figures of this era, he was subject to religious and political troubles, such as the Thirty Years' War, which led to chaos that almost destroyed some of his works. Kepler was, however, the first to attempt to derive mathematical predictions of celestial motions from assumed physical causes. He discovered the three laws of planetary motion that now carry his name, those laws being as follows: The orbit of a planet is an ellipse with the Sun at one of the two foci. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time. The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit. With these laws, he managed to improve upon the existing heliocentric model. The first two were published in 1609. Kepler's contributions improved upon the overall system, giving it more credibility because it adequately explained events and yielded more reliable predictions. Before this, the Copernican model was just as unreliable as the Ptolemaic model. This improvement came because Kepler realized the orbits were not perfect circles, but ellipses. Galileo Galilei was among the first to use a telescope to observe the sky. After constructing a 20x refractor telescope, he discovered the four largest moons of Jupiter in 1610, which are now collectively known as the Galilean moons in his honor. This discovery was the first known observation of satellites orbiting another planet. He also found that the Moon had craters, observed and correctly explained sunspots, and saw that Venus exhibited a full set of phases resembling lunar phases. Galileo argued that these facts demonstrated incompatibility with the Ptolemaic model, which could not explain these phenomena and was even contradicted by them. The moons demonstrated that not everything has to orbit the Earth, and that other parts of the Solar System could orbit another object, such as the Earth orbiting the Sun. In the Ptolemaic system the celestial bodies were supposed to be perfect, so such objects should not have craters or sunspots. The phases of Venus could only happen if Venus' orbit is inside Earth's orbit, which could not be the case if the Earth were the center. Galileo, as the most famous example, had to face challenges from church officials, more specifically the Roman Inquisition. They accused him of heresy because these beliefs went against the teachings of the Roman Catholic Church and challenged the Catholic Church's authority at a time when it was at its weakest. While he was able to avoid punishment for a time, he was eventually tried and pleaded guilty to heresy in 1633. This came at some expense: his book was banned, and he was put under house arrest until his death in 1642. Sir Isaac Newton developed further ties between physics and astronomy through his law of universal gravitation. Realizing that the same force that attracts objects to the surface of the Earth held the Moon in orbit around the Earth, Newton was able to explain – in one theoretical framework – all known gravitational phenomena.
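Kepler's third law, as listed above and later explained by Newton's gravitation, is straightforward to illustrate numerically: with periods in years and semi-major axes in astronomical units, T²/a³ comes out near 1 for every planet. The orbital elements below are approximate modern values used purely for illustration:

```python
# (orbital period T in years, semi-major axis a in astronomical units)
planets = {
    "Mercury": (0.2408, 0.3871),
    "Earth":   (1.0000, 1.0000),
    "Jupiter": (11.862, 5.2026),
}

for name, (T, a) in planets.items():
    # Kepler's third law: T**2 / a**3 is the same constant for all planets
    print(name, round(T**2 / a**3, 4))
```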
In his Philosophiæ Naturalis Principia Mathematica, he derived Kepler's laws from first principles. Those first principles are as follows: In an inertial frame of reference, an object either remains at rest or continues to move at constant velocity, unless acted upon by a force. In an inertial reference frame, the vector sum of the forces F on an object is equal to the mass m of that object multiplied by the acceleration a of the object: F = ma. (It is assumed here that the mass m is constant.) When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body. Thus, while Kepler explained how the planets moved, Newton managed to explain why they moved the way they do. Newton's theoretical developments laid many of the foundations of modern physics. Completing the Solar System Outside of England, Newton's theory took some time to become established. Descartes' theory of vortices held sway in France, and Huygens, Leibniz and Cassini accepted only parts of Newton's system, preferring their own philosophies. Voltaire published a popular account in 1738. In 1748, the French Academy of Sciences offered a reward for solving the question of the perturbations of Jupiter and Saturn, which was eventually done by Euler and Lagrange. Laplace completed the theory of the planets, publishing from 1798 to 1825. The early origins of the solar nebular model of planetary formation had begun. Edmond Halley succeeded Flamsteed as Astronomer Royal in England and correctly predicted the return, in 1758, of the comet that bears his name. In 1781, Sir William Herschel discovered Uranus, the first new planet observed in modern times. The gap between the planets Mars and Jupiter disclosed by the Titius–Bode law was filled by the discovery of the asteroids Ceres and Pallas in 1801 and 1802, with many more following. At first, astronomical thought in America was based on Aristotelian philosophy, but interest in the new astronomy began to appear in almanacs as early as 1659. Stellar astronomy Cosmic pluralism is the name given to the idea that the stars are distant suns, perhaps with their own planetary systems. Ideas in this direction were expressed in antiquity, by Anaxagoras and by Aristarchus of Samos, but did not find mainstream acceptance. The first astronomer of the European Renaissance to suggest that the stars were distant suns was Giordano Bruno in his De l'infinito universo et mondi (1584). This idea, together with a belief in intelligent extraterrestrial life, was among the charges brought against him by the Inquisition. The idea became mainstream in the later 17th century, especially following the publication of Conversations on the Plurality of Worlds by Bernard Le Bovier de Fontenelle (1686), and by the early 18th century it was the default working assumption in stellar astronomy. The Italian astronomer Geminiano Montanari recorded observing variations in luminosity of the star Algol in 1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars, demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and Hipparchus. William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the 1780s, he established a series of gauges in 600 directions and counted the stars observed along each line of sight.
From this he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding increase in the same direction. In addition to his other accomplishments, William Herschel is noted for his discovery that some stars do not merely lie along the same line of sight, but are physical companions that form binary star systems. Modern astronomy 19th century Before photography, the recording of astronomical data was limited by the human eye. In 1840, John W. Draper, a chemist, created the earliest known astronomical photograph, an image of the Moon. By the late 19th century thousands of photographic plates of images of planets, stars, and galaxies had been created. Most photographic emulsions had lower quantum efficiency (i.e. captured fewer of the incident photons) than the human eye, but had the advantage of long integration times (roughly 100 ms for the human eye compared to hours for photographic plates). This vastly increased the data available to astronomers, which led to the rise of human computers, famously the Harvard Computers, to track and analyze the data. Scientists began discovering forms of light which were invisible to the naked eye: X-rays, gamma rays, radio waves, microwaves, ultraviolet radiation, and infrared radiation. This had a major impact on astronomy, spawning the fields of infrared astronomy, radio astronomy, X-ray astronomy and finally gamma-ray astronomy. With the advent of spectroscopy it was proven that other stars were similar to the Sun, but with a range of temperatures, masses and sizes. The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption lines—the dark lines in stellar spectra caused by the atmosphere's absorption of specific frequencies. In 1865, Secchi began classifying stars into spectral types. The first evidence of helium was observed on August 18, 1868, as a bright yellow spectral line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the heavens. Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich Bessel observed changes in the proper motion of the star Sirius and inferred a hidden companion. Edward Pickering discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the star Mizar in a 104-day period. Detailed observations of many binary star systems were collected by astronomers such as Friedrich Georg Wilhelm von Struve and S. W. Burnham, allowing the masses of stars to be determined from the computation of orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations was made by Felix Savary in 1827. In 1847, Maria Mitchell discovered a comet using a telescope.
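The parallax technique Bessel used reduces to a one-line computation: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. A minimal sketch, taking the approximate modern parallax of 61 Cygni (the exact value used here is an assumption for illustration):

# Distance from annual parallax: d [parsec] = 1 / p [arcsec]
p = 0.287                # arcsec, approximate modern parallax of 61 Cygni
d_pc = 1.0 / p           # about 3.48 parsecs
d_ly = d_pc * 3.2616     # 1 parsec = 3.2616 light-years
print(round(d_ly, 1))    # about 11.4 light-years, the figure quoted above

Bessel's own 1838 measurement (about 0.314 arcseconds) gave a slightly shorter distance of roughly 10.4 light-years; the principle is unchanged.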
20th century With the accumulation of large sets of astronomical data, teams like the Harvard Computers rose to prominence, which led to many female astronomers, previously relegated to assisting male astronomers, gaining recognition in the field. The United States Naval Observatory (USNO) and other astronomy research institutions hired human "computers", who performed the tedious calculations while scientists performed research requiring more background knowledge. A number of discoveries in this period were originally noted by the women "computers" and reported to their supervisors. Henrietta Swan Leavitt discovered the Cepheid variable star period-luminosity relation, which she further developed into a method of measuring distance outside of the Solar System. A veteran of the Harvard Computers, Annie J. Cannon developed the modern version of the stellar classification scheme during the early 1900s (O B A F G K M, based on color and temperature), manually classifying more stars in a lifetime than anyone else (around 350,000). The twentieth century saw increasingly rapid advances in the scientific study of stars. Karl Schwarzschild discovered that the color of a star and, hence, its temperature, could be determined by comparing the visual magnitude against the photographic magnitude. The development of the photoelectric photometer allowed precise measurements of magnitude at multiple wavelength intervals. In 1921 Albert A. Michelson made the first measurements of a stellar diameter using an interferometer on the Hooker telescope at Mount Wilson Observatory. Important theoretical work on the physical structure of stars occurred during the first decades of the twentieth century. In 1913, the Hertzsprung–Russell diagram was developed, propelling the astrophysical study of stars. In Potsdam in 1906, the Danish astronomer Ejnar Hertzsprung had published the first plots of color versus luminosity for stars. These plots showed a prominent and continuous sequence of stars, which he named the Main Sequence. At Princeton University, Henry Norris Russell plotted the spectral types of these stars against their absolute magnitude, and found that dwarf stars followed a distinct relationship. This allowed the real brightness of a dwarf star to be predicted with reasonable accuracy. Successful models were developed to explain the interiors of stars and stellar evolution. Cecilia Payne-Gaposchkin first proposed that stars were made primarily of hydrogen and helium in her 1925 doctoral thesis. The spectra of stars were further understood through advances in quantum physics. This allowed the chemical composition of the stellar atmosphere to be determined. As evolutionary models of stars were developed during the 1930s, Bengt Strömgren introduced the term Hertzsprung–Russell diagram to denote a luminosity-spectral class diagram. A refined scheme for stellar classification was published in 1943 by William Wilson Morgan and Philip Childs Keenan. The existence of our galaxy, the Milky Way, as a separate group of stars was only proven in the 20th century, along with the existence of "external" galaxies, and soon after, the expansion of the universe seen in the recession of most galaxies from us. The "Great Debate" between Harlow Shapley and Heber Curtis, in the 1920s, concerned the nature of the Milky Way, spiral nebulae, and the dimensions of the universe. With the advent of quantum physics, spectroscopy was further refined.
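Leavitt's period-luminosity relation mentioned above turns a Cepheid's pulsation period into a distance. A minimal sketch, assuming one published calibration of the relation (both the coefficients and the example star are illustrative, not values drawn from this article):

import math

P = 10.0                                   # days, pulsation period of a Cepheid
M_V = -2.43 * (math.log10(P) - 1) - 4.05   # absolute magnitude (assumed calibration)
m_V = 12.0                                 # apparent magnitude of a hypothetical star
d_pc = 10 ** ((m_V - M_V + 5) / 5)         # distance modulus: m - M = 5*log10(d) - 5
print(round(d_pc))                         # about 16,000 parsecs for these inputs

This is essentially the chain of reasoning Hubble later applied to Cepheids in the Andromeda nebula to establish its distance, as described below.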
The Sun was found to be part of a galaxy made up of more than 10^10 stars (10 billion stars). The existence of other galaxies, one of the matters of the great debate, was settled by Edwin Hubble, who identified the Andromeda nebula as a separate galaxy, along with many others at large distances, all receding from our galaxy. Physical cosmology, a discipline that has a large intersection with astronomy, made huge advances during the 20th century, with the model of the hot Big Bang heavily supported by the evidence provided by astronomy and physics, such as the redshifts of very distant galaxies and radio sources, the cosmic microwave background radiation, Hubble's law and cosmological abundances of elements. See also Age of the universe Anthropic principle Astrotheology Expansion of the universe Hebrew astronomy History of astrology History of Mars observation History of supernova observation History of the telescope Letters on Sunspots List of astronomers List of French astronomers List of Hungarian astronomers List of Russian astronomers and astrophysicists List of Slovenian astronomers List of women astronomers List of astronomical instrument makers List of astronomical observatories Patronage in astronomy Society for the History of Astronomy Timeline of astronomy Timeline of Solar System astronomy Worship of heavenly bodies References Citations Works cited Further reading External links Astronomy & Empire, BBC Radio 4 discussion with Simon Schaffer, Kristen Lippincott & Allan Chapman (In Our Time, May 4, 2006) Bibliothèque numérique de l'Observatoire de Paris (Digital library of the Paris Observatory) Caelum Antiquum: Ancient Astronomy and Astrology Resources on LacusCurtius Mesoamerican Archaeoastronomy: A Review of Contemporary Understandings of Prehispanic Astronomical Knowledge UNESCO-IAU Portal to the Heritage of Astronomy Astronomy Astronomy
History of astronomy
Astronomy
10,060
182,455
https://en.wikipedia.org/wiki/Corrin
Corrin is a heterocyclic compound. Although not known to exist on its own, the molecule is of interest as the parent macrocycle related to the cofactor and chromophore in vitamin B12. Its name reflects that it is the "core" of vitamin B12 (cobalamins). Compounds with a corrin core are known as "corrins". There are two chiral centres, which in natural compounds like cobalamin have the same stereochemistry. Coordination chemistry Upon deprotonation, the corrinoid ring is capable of binding cobalt. In vitamin B12, the resulting complex also features a benzimidazole-derived ligand, and the sixth site on the octahedron serves as the catalytic center. The corrin ring resembles the porphyrin ring. Both feature four pyrrole-like subunits organized into rings. Corrins have a central 15-membered ring whereas porphyrins have an interior 16-membered ring. All four nitrogen centers are linked by a conjugated structure with alternating double and single bonds. In contrast to porphyrins, corrins lack one of the carbon groups that link the pyrrole-like units into a fully conjugated structure. With a conjugated system that extends only 3/4 of the way around the ring, and does not include any of the outer edge carbons, corrins have a number of non-conjugated sp3 carbons, making them more flexible than porphyrins and not as flat. A third closely related biological structure, the chlorin ring system found in chlorophyll, is intermediate between porphyrin and corrin, having 20 carbons like the porphyrins and a conjugated structure extending all the way around the central atom, but with only 6 of the 8 edge carbons participating. Corroles (octadehydrocorrins) are fully aromatic derivatives of corrins. References Further reading Biomolecules Tetrapyrroles Metabolism Macrocycles Schiff bases
Corrin
Chemistry,Biology
441
69,605,961
https://en.wikipedia.org/wiki/Government%20Chief%20Scientific%20Adviser%20%28Ireland%29
The Irish Chief Scientific Adviser (CSA) is an adviser on science and technology to the Government of Ireland. The role was created in 2004 and was to operate independently of government departments, but report to the Cabinet Committee on Science and Technology. History In 2004, Barry McSweeney was appointed as the first CSA. Following a controversy about the awarding of his PhD, McSweeney resigned in November 2005. In January 2007, Patrick Cunningham was announced as the new CSA. He served a five-year term and hosted the 2012 EuroScience Open Forum meeting in Dublin. In 2012, following the retirement of Cunningham, the separate office of CSA was abolished, and the role was given to Mark Ferguson, then director general of Science Foundation Ireland. Ferguson was reappointed in 2017 for a further five years. During this period, academics, politicians and others highlighted the limitations of having a non-independent science adviser. Following the announcement of Philip Nolan's upcoming appointment as director general of Science Foundation Ireland, Simon Harris announced that the CSA was to be reinstated as an independent role. List of Government Chief Scientific Advisers Barry McSweeney (2004–2005) Patrick Cunningham (2007–2012) Mark Ferguson (2012–2022) Aoife McLysaght (2024–present) References Ireland Science and technology in the Republic of Ireland
Government Chief Scientific Adviser (Ireland)
Technology
271
26,108,747
https://en.wikipedia.org/wiki/Relieving%20tackle
Relieving tackle is tackle employing one or more lines attached to a vessel's steering mechanism, to assist or substitute for the whipstaff or ship's wheel in steering the craft. This enabled the helmsman to maintain control in heavy weather, when the rudder was under more stress and required greater effort to handle, and also to steer the vessel if the helm were damaged or destroyed. In vessels with whipstaffs (long vertical poles extending above deck, acting as a lever to move the tiller below deck), relieving lines were attached to the tiller or directly to the whipstaff. When wheels were introduced, their greater mechanical advantage lessened the need for such assistance, but relieving tackle could still be used on the tiller, located on a deck underneath the wheel. Relieving tackle was also rigged on vessels going into battle, to assist in steering in case the helm was damaged or shot away. When a storm threatened, or battle impended, the tackle would be affixed to the tiller, and hands assigned to man them. Additional tackle was available to attach directly to the rudder as surety against loss of the tiller. The term can also refer to lines or cables attached to a vessel that has been careened (laid over to one side for maintenance). The lines passed under the hull and were secured to the opposite side, to keep the vessel from overturning further, and to aid in righting the ship when the work was finished. References Ships Simple machines
Relieving tackle
Physics,Technology
309
53,730,614
https://en.wikipedia.org/wiki/Family%20symmetries
In particle physics, the family symmetries or horizontal symmetries are various discrete, global, or local symmetries between quark-lepton families or generations. In contrast to the intrafamily or vertical symmetries (collected in the conventional Standard Model and Grand Unified Theories) which operate inside each family, these symmetries presumably underlie the physics of the family flavors. They may be treated as a new set of quantum charges assigned to different families of quarks and leptons. Spontaneous breaking of these symmetries is believed to lead to an adequate description of the flavor mixing of quarks and leptons of different families. This is certainly one of the major problems that presently confront particle physics. Despite its great success in explaining the basic interactions of nature, the Standard Model still lacks the ability to explain the flavor mixing angles, or weak mixing angles (as they are conventionally referred to), whose observed values are collected in the corresponding Cabibbo–Kobayashi–Maskawa matrices. While being conceptually useful and leading in some cases to physically valuable patterns of flavor mixing, the family symmetries are not yet observationally confirmed. Introduction The Standard Model is based on the internal symmetries of the unitary product group SU(3)_c × SU(2)_L × U(1)_Y, the members of which have a quite different nature. The color symmetry SU(3)_c has a vectorlike structure, due to which the left-handed and right-handed quarks transform identically as its fundamental triplets. At the same time, the electroweak symmetry SU(2)_L × U(1)_Y, consisting of the weak isospin and hypercharge, is chiral. So, the left-handed components of all quarks and leptons are SU(2)_L doublets, Q_L^i = (u_L, d_L)^i and L_L^i = (ν_L, e_L)^i, whereas their right-handed components are its singlets, u_R^i, d_R^i, e_R^i and ν_R^i. Here, the quark-lepton families are numbered by the index i = 1, 2, 3, both for the quark and lepton ones. The up and down right-handed quarks and leptons are written separately, and for completeness the right-handed neutrinos are also included. Many attempts have been made to interpret the existence of the quark-lepton families and the pattern of their mixing in terms of various family symmetries – discrete or continuous, global or local. Among them, the abelian U(1) and non-abelian SU(2) and SU(3) family symmetries seem to be the most interesting. They provide some guidance to the mass matrices for families of quarks and leptons, leading to relationships between their masses and mixing parameters. In the framework of the supersymmetric Standard Model, such a family symmetry should at the same time provide an almost uniform mass spectrum for superpartners, with a high degree of family flavor conservation, which makes its existence even more necessary in the SUSY case. The U(1) symmetry case This class of family symmetry models was first studied by Froggatt and Nielsen in 1979 and extended in later work. In this mechanism, one introduces a new complex scalar field called the flavon, whose vacuum expectation value (VEV) presumably breaks an imposed global U(1) family symmetry. Under this symmetry different quark-lepton families carry different charges q_i. Accordingly, the connection between families is provided by bringing into play (via the relevant see-saw mechanism) one or more intermediate heavy fermions properly charged under the family symmetry U(1).
So, the effective Yukawa coupling constants for quark-lepton families are arranged in such a way that they may only appear through the primary couplings of these families with the messenger fermion(s) and the flavon field φ. The hierarchy of these couplings is determined by some small parameter ε, given by the ratio of the flavon VEV to the mass of the intermediate heavy fermion, ε = ⟨φ⟩/M (or ε = ⟨φ⟩/Λ, if the messenger fermions have been integrated out at some high-energy cut-off scale Λ). Since different quark-lepton families carry different charges, the various coupling constants are suppressed by different powers of ε, primarily controlled by the postulated fermion charge assignment. Specifically, for quarks these couplings acquire the form Y_ij^(u,d) ∼ ε^(q_i + q_j), up to order-one coefficients, where the index labels the particular family of the up quarks (u, c, t) and down quarks (d, s, b), including their left-handed and right-handed components, respectively. This hierarchy is then transferred to their mass matrices once the conventional Standard Model Higgs boson develops its own VEV, ⟨H⟩ = v. So, the mass matrices, being proportional to the matrices of Yukawa coupling constants, can generally produce (by an appropriate choice of the family charges) the required patterns for the weak mixing angles, which are in basic conformity with the corresponding Cabibbo–Kobayashi–Maskawa matrices observed. In the same way the appropriate mass matrices can also be arranged for the lepton families. Among other applications of the U(1) family symmetry, the most interesting one could stem from its possible relation to (or even identification with) the Peccei–Quinn symmetry. This may point out some deep connection between the fermion mixing problem and the strong CP problem of the Standard Model, which was also discussed in the literature. The SU(2) family symmetry The SU(2) family symmetry models were first addressed by Wilczek and Zee in 1979, and interest in them was renewed in the 1990s, especially in connection with the Supersymmetric Standard Model. In the original model the quark-lepton families fall into horizontal triplets of the local symmetry taken. Fortunately, this symmetry is generically free from the gauge anomaly problem which may appear for other local family symmetry candidates. Generally, the model contains a set of Higgs boson multiplets that are scalars, vectors and tensors of SU(2), all of which are at the same time doublets of the conventional electroweak symmetry. These scalar multiplets provide the mass matrices for quarks and leptons, eventually giving reasonable weak mixing angles in terms of the fermion mass ratios. In principle, one could hope to reach this in a more economical way, where the heavy family masses appear at tree level, while the light families acquire their masses from radiative corrections at the one-loop level and higher. Another and presumably more realistic way of using the SU(2) family symmetry is based on the picture that, in the absence of flavor mixing, only the particles belonging to the third generation (the top and bottom quarks and the tau lepton) have non-zero masses. The masses and the mixing angles of the light first and second families, being doublets of the SU(2) symmetry, appear then as a result of the tree-level mixings of families, related to spontaneous breaking of this symmetry. The VEV hierarchy of the horizontal scalars is then enhanced by the effective cut-off scale involved.
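As a toy illustration of the Froggatt–Nielsen counting described in the U(1) case above, the following sketch builds the suppression matrix ε^(q_i + q_j) for one assumed charge assignment; both the charges and the value of ε are illustrative choices, not values fixed by the article:

eps = 0.22                  # assumed small parameter, of order the Cabibbo angle
q = [3, 2, 0]               # assumed U(1) family charges for the three families

# Yukawa couplings suppressed as eps^(q_i + q_j), up to order-one coefficients
Y = [[eps ** (qi + qj) for qj in q] for qi in q]
for row in Y:
    print(["%.1e" % y for y in row])

# The diagonal entries eps^(2*q_i) already reproduce a strong mass hierarchy,
# m_1 : m_2 : m_3 ~ eps^6 : eps^4 : 1, between the three families.

Different charge assignments reproduce different textures; the mechanism itself only fixes the powers of ε, not the order-one coefficients.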
Again, as in the above U(1) symmetry case, the family mixings eventually turn out to be proportional to powers of some small parameter, which are determined by the dimensions of the operators allowed by the family symmetry. This finally generates the effective (diagonal and off-diagonal) Yukawa couplings for the light families in the framework of the (ordinary or supersymmetric) Standard Model. In supersymmetric theories there are mass and interaction matrices for the squarks and sleptons, leading to a rich flavor structure. In particular, if fermions and scalars of a given charge have mass matrices which are not diagonalized by the same rotation, new mixing matrices occur at gaugino vertices. This may lead in general to dangerous light-family flavor-changing processes, unless the breaking of the symmetry which controls the light family sector, together with small fermion masses, yields small mass splittings of their scalar superpartners. Apart from all that, there is also the dynamical aspect of the local symmetry, related to its horizontal gauge bosons. The point is, however, that these bosons (as well as the various Higgs bosons involved) have to be several orders of magnitude more massive than the Standard Model W and Z bosons in order to avoid forbidden quark-flavor- and lepton-flavor-changing transitions. Generally, this requires the introduction of additional Higgs bosons to give large masses to the horizontal gauge bosons so as not to disturb the masses of the fermions involved. The chiral SU(3) symmetry alternative It can be generally argued that an adequate family symmetry should presumably be chiral rather than vectorlike, since vectorlike family symmetries do not in general forbid large invariant masses for quark-lepton families. This may lead (without some special fine tuning of parameters) to almost uniform mass spectra for them, which would be natural if the family symmetry were exact rather than broken. Rather intriguingly, both known examples of local vectorlike symmetries, the electromagnetic U(1)_em and the color SU(3)_c, appear to be exact symmetries, while all chiral symmetries, including the conventional electroweak symmetry and the grand unifications SU(5), SO(10) and E(6), appear broken. In this connection, one of the most potentially relevant options considered in the literature may be associated with the local chiral SU(3)_F family symmetry introduced by Chkareuli in 1980 in the framework of the family-unified SU(8) symmetry and further developed on its own. Motivation The choice of the chiral SU(3)_F as the underlying family symmetry beyond the Standard Model appears related to the following issues: (i) It provides a natural explanation of the number three of observed quark-lepton families, correlated with three species of massless or light neutrinos contributing to the invisible Z boson partial decay width; (ii) Its local nature conforms with the other local symmetries of the Standard Model, such as the weak isospin symmetry SU(2)_L or the color symmetry SU(3)_c. This actually leads to the family-unified Standard Model with a total symmetry SU(3)_c × SU(2)_L × U(1)_Y × SU(3)_F, which then breaks at some high family scale down to the conventional SM; (iii) Its chiral nature, according to which the left-handed and right-handed fermions are proposed to be, respectively, the fundamental triplets and antitriplets of the SU(3)_F symmetry.
This means that their masses may only appear as a result of the spontaneous breaking of the SU(3)_F, whose anisotropy in the family flavor space provides the hierarchical mass spectrum of quark-lepton families; (iv) The SU(3)_F invariant Yukawa couplings are always accompanied by an accidental global chiral symmetry which can be identified with the Peccei–Quinn symmetry, thus giving a solution to the strong CP problem; (v) Due to its chiral structure, it admits a natural unification with conventional Grand Unified Theories in a direct product form, such as SU(5) × SU(3)_F or SO(10) × SU(3)_F, and also as a subgroup of extended (family-unified) GUTs such as SU(8); (vi) It has a straightforward extension to the supersymmetric Standard Model and GUTs. With these natural criteria accepted, other family symmetry candidates turn out to be at least partially discriminated. Indeed, the U(1) family symmetry does not satisfy criterion (i) and is in fact applicable to any number of quark-lepton families. Also, the SU(2) family symmetry can contain, besides two light families treated as its doublets, any number of additional (singlet or new doublet) families. All global non-Abelian symmetries are excluded by criterion (ii), while the vectorlike symmetries are excluded by criteria (iii) and (v). Basic applications In the Standard Model and the SU(5) GUT extended by the local chiral SU(3)_F symmetry, quarks and leptons are supposed to be chiral triplets, so that their left-handed (weak-doublet) components – Q_L^α and L_L^α – are taken to be triplets of SU(3)_F, while their right-handed (weak-singlet) components – u_R^α, d_R^α, e_R^α and ν_R^α – are anti-triplets (or vice versa). Here α is the family symmetry index (α = 1, 2, 3), rather than the index i introduced above simply to number all the families involved. The spontaneous breaking of this symmetry gives some understanding of the observed hierarchy between elements of the quark-lepton mass matrices and the presence of texture zeros in them. This breaking is normally provided by some set of horizontal scalar multiplets, symmetrical (sextets) and anti-symmetrical (triplets) under SU(3)_F. When they develop their VEVs, the up and down quark families acquire their effective Yukawa coupling constants, which generally have the form Y_ij^(u,d) ∼ a_ij^(u,d) ⟨χ⟩_ij / M_F, where again the index labels the particular family of the up quarks (u, c, t) and down quarks (d, s, b), respectively (the a_ij^(u,d) are some dimensionless proportionality constants of order one). These coupling constants normally appear via a sort of see-saw mechanism, due to the exchange of a special set of heavy (of order the family symmetry scale M_F) vectorlike fermions. The VEVs of the horizontal scalars, taken in general as large as M_F, are supposed to be hierarchically arranged along the different directions in family flavor space. This hierarchy is then transferred to the quark mass matrices M_u and M_d when the conventional Standard Model Higgs boson develops its own VEV in the corresponding Yukawa couplings. In the minimal case, with one sextet and two triplets developing the basic VEV configuration, one comes to the typical nearest-neighbor family mixing pattern in the mass matrices M_u and M_d, which leads to weak mixing angles generally in approximate conformity with the corresponding Cabibbo–Kobayashi–Maskawa matrices. In the same way, the appropriate mass matrices can also be arranged for the lepton families, leading to a realistic description – both in the Standard Model and the SU(5) GUT – of the lepton masses and mixings, including neutrino masses and oscillations.
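The nearest-neighbor mixing pattern just described has a classic quantitative signature: for a texture whose (1,1) entry vanishes, the 1-2 mixing angle comes out as the square root of the ratio of the two lightest masses. A quick numerical check of that relation in the down-quark sector (the quark masses below are approximate running values and serve only as an assumption for illustration):

import math

m_d, m_s = 4.7, 95.0             # MeV, approximate running masses at ~2 GeV

theta_12 = math.sqrt(m_d / m_s)  # leading prediction of a nearest-neighbor texture
print(round(theta_12, 3))        # about 0.222

That the measured Cabibbo angle (about 0.225) sits so close to the square root of m_d/m_s is the main empirical motivation for textures of this type.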
In the framework of supersymmetric theories, the SU(3)_F family symmetry, hand in hand with hierarchical masses and mixings for quarks and leptons, leads to an almost uniform mass spectrum for their superpartners, with a high degree of flavor conservation. Due to the special relations between the fermion mass matrices and the soft SUSY breaking terms, dangerous supersymmetric contributions to flavor-changing processes can be naturally suppressed. Among other applications of the SU(3)_F symmetry, the most interesting ones are those related to its gauge sector. Generally, the family scale M_F may be located in a wide range, up to the grand unification scale and even higher. For a relatively low family scale, the SU(3)_F gauge bosons will also come into play, so that many flavor-changing rare processes, including some of their astrophysical consequences, may become important. In contrast to the vectorlike family symmetries, the chiral SU(3)_F is not generically free from gauge anomalies. They, however, can be readily cancelled by the introduction of an appropriate set of purely horizontal fermion multiplets. Being sterile with respect to all the other Standard Model interactions, these may be treated as possible candidates for dark matter in the Universe. A special sector of applications is related to a new type of topological defects – flavored cosmic strings and monopoles – which can appear during the spontaneous violation of the SU(3)_F and which may also be considered as possible candidates for the cold dark matter in the Universe. Summary Despite some progress in understanding the family flavor mixing problem, one still has the uneasy feeling that, in many cases, the problem seems just to be transferred from one place to another. The peculiar quark-lepton mass hierarchy is replaced by a peculiar set of flavor charges in the U(1) case, or a peculiar hierarchy of the horizontal Higgs field VEVs in the non-abelian SU(2) or SU(3) symmetry cases. As a result, there are not so many distinctive and testable generic predictions relating the weak mixing angles to the quark-lepton masses that could clearly differentiate one family symmetry model from another. This is indeed related to the fact that the Yukawa sector in the theory is somewhat arbitrary as compared with its gauge sector. Actually, one can always arrange the flavor charges of families or the VEVs of horizontal scalars in these models in a way that yields acceptable hierarchical mass matrices for quarks and relatively smooth ones for leptons. As a matter of fact, one possible way for these models to acquire their own specific predictions might appear if nature favored the local family symmetry case. This would then allow one to completely exclude the global family symmetry case and properly differentiate the non-Abelian SU(2) and SU(3) symmetry cases. All that is possible, of course, provided that the breaking scale of such a family symmetry is not as large as the GUT scale or the Planck scale. Otherwise, all the flavor-changing processes caused by exchanges of the horizontal gauge bosons will be vanishingly suppressed. Another way for these models to be distinguished might appear if they were generically included in some extended GUT. In contrast to many others, such a possibility appears for the chiral SU(3)_F family symmetry (considered in the previous section), which could be incorporated into the family-unified SU(8) symmetry.
Even if this GUT did not provide the comparatively low family symmetry scale, the existence of several multiplets of extra heavy fermions in the original SU(8) matter sector could help with model verification. Some of them, through a natural see-saw mechanism, could provide the physical neutrino masses, which, in contrast to the conventional picture, may appear to follow either the direct or the inverted family hierarchy. Others mix with ordinary quark-lepton families in such a way that a marked violation of unitarity may arise in the CKM matrix. It is also worth pointing out an important aspect related to the family symmetries. As a matter of fact, the existence of three identical quark-lepton families could mean that there exist truly elementary fermions, preons, which are the actual carriers of all the fundamental Standard Model quantum numbers involved and which compose the observed quarks and leptons at larger distances. Generally, certain regularities in replications of particles may signal their composite structure. Indeed, it was precisely regularities in the spectroscopy of hadrons observed in the 1960s that made it possible to discover the constituent quark structure of hadrons. As to the quarks and leptons, it appears that the idea of their composite structure may distinguish the local chiral SU(3)_F family symmetry among other candidates. Namely, the preon model, under certain natural conditions, determines a local "metaflavor" symmetry as a basic internal symmetry of the physical world at small distances. Being exact for preons, it is then broken at large distances down to a conventional SU(5) GUT with an extra local SU(3)_F family symmetry and three standard families of composite quarks and leptons. References Symmetry Physics beyond the Standard Model
Family symmetries
Physics,Mathematics
3,772
14,334,551
https://en.wikipedia.org/wiki/Namak%20Lake
Namak Lake (i.e., salt lake) is a salt lake in Iran. It is located approximately east of the city of Qom and of Aran va Bidgol, at an elevation of above sea level. The lake is a remnant of the Paratethys sea, which started to dry up from the Pleistocene epoch, leaving Lake Urmia, the Caspian Sea and other bodies of water. The lake has a surface area of about , but most of this is dry. Water only covers . The lake only reaches a depth between to . Environmental characteristics The air in this area is very dry and the temperature difference between day and night reaches 70 degrees Celsius. Due to the high rate of evaporation and the very high salinity of the water, Qom's salt lake has a desert-like structure and is covered with thick layers of salt. The lake is also known as the habitat of some special plant and animal species that are able to live in the harsh, salty conditions of this region. Aliabad Caravanserai, Red Castle, Desert National Park, Sefidab Caravanserai and Manzariyeh Caravanserai are some of the sightseeing places in Qom around Namak Lake. References Lakes of Iran Endorheic lakes of Asia Landforms of Qom province Salt flats Salt flats of Iran
Namak Lake
Chemistry
278
25,531,753
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20June%201%2C%202087
A partial solar eclipse will occur at the Moon's descending node of orbit on Sunday, June 1, 2087, with a magnitude of 0.2146. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. The partial solar eclipse will be visible from parts of New Zealand. Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the Moon's penumbra or umbra attains a specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. The first and last eclipse in this sequence is separated by one synodic month. Related eclipses Eclipses in 2087 A partial solar eclipse on May 2. A total lunar eclipse on May 17. A partial solar eclipse on June 1. A partial solar eclipse on October 26. A total lunar eclipse on November 10. Metonic Preceded by: Solar eclipse of August 13, 2083 Tzolkinex Followed by: Solar eclipse of July 12, 2094 Half-Saros Followed by: Lunar eclipse of June 6, 2096 Tritos Preceded by: Solar eclipse of July 1, 2076 Solar Saros 158 Preceded by: Solar eclipse of May 20, 2069 Followed by: Solar eclipse of June 12, 2105 Inex Preceded by: Solar eclipse of June 21, 2058 Triad Preceded by: Solar eclipse of July 31, 2000 Followed by: Solar eclipse of April 1, 2174 Solar eclipses of 2083–2087 Saros 158 This eclipse is part of Saros series 158, repeating every 18 years, 11 days, and containing 70 events. The series will start with a partial solar eclipse on May 20, 2069. It contains total eclipses from August 5, 2195 through August 13, 2808; hybrid eclipses on August 24, 2826 and September 3, 2844; and annular eclipses from September 15, 2862 through February 27, 3133. The series ends at member 70 as a partial eclipse on June 16, 3313. Its eclipses are tabulated in three columns; every third eclipse in the same column is one exeligmos apart, so they all cast shadows over approximately the same parts of the Earth. The longest duration of totality will be produced by member 10 at 4 minutes, 43 seconds on August 28, 2231, and the longest duration of annularity will be produced by member 57 at 6 minutes, 7 seconds on January 25, 3079. All eclipses in this series occur at the Moon's descending node of orbit. Metonic series Tritos series Inex series References External links 2087 6 1 2087 6 1 2087 6 1 2087 in science
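As a quick check of the saros arithmetic quoted above (18 years, 11 days, roughly 8 hours, or about 6585.32 days), one can add a whole saros to this eclipse's date; a minimal Python sketch, with the 8-hour remainder ignored since calendar-date arithmetic works in whole days:

from datetime import date, timedelta

# One saros is ~6585.32 days; only the whole-day part matters for the calendar date
next_member = date(2087, 6, 1) + timedelta(days=6585)
print(next_member)   # 2105-06-12, matching the next member of Saros 158 listed above

The leftover third of a day is what shifts each successive eclipse of a saros series about 120 degrees westward in longitude.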
Solar eclipse of June 1, 2087
Astronomy
693
5,225,618
https://en.wikipedia.org/wiki/CPU%20shim
A CPU shim (also called CPU spacer) is a shim used between the CPU and the heat sink in a computer. Shims make it easier and less risky to mount a heatsink on the processor because they stabilize the heatsink, preventing accidental damage to the fragile CPU packaging. They also help distribute weight evenly over the surface. CPU shims are usually made of thin and very flat aluminium or copper. Copper has good heat dissipation capacity but is electrically conductive. CPU shims should be non-conductive to prevent any accidental short circuiting. Aluminium shims are often anodized, which makes them non-conductive and improves their appearance (see case modding). It is also very important that the shim is the proper thickness. If it is too thick then the heatsink will not make contact with the CPU, resulting in poor cooling and possibly overheating. Most shims are CNC manufactured, often using laser cutting. Cheaper ones may be pressed or stamped, which could make them less accurate. Usage CPU shims are not at all common in OEM computers, but are used by some computer hardware enthusiasts who may use heavier heatsinks because they wish to have a cooler or less noisy system, or perhaps to overclock the CPU for better performance. A heavy heatsink puts more pressure on the CPU and motherboard. Shims are very useful for people who often change CPU, heatsink or cooling solutions, or use a heatsink that is heavier than the CPU manufacturer's recommended weight. CPU shims are nowadays largely obsolete because most modern processors designed for home users since the introduction of the Athlon 64 and Pentium 4 have an Integrated Heat Spreader (IHS), which prevents the CPU core from being accidentally broken. See also Computer fan Computer cooling Thermal grease References Computer hardware cooling
CPU shim
Technology
374
75,211,656
https://en.wikipedia.org/wiki/Big%20dream
In Jungian dream analysis, big dreams are dreams which have a strong impact on the dreamer and contain heavily archetypal imagery. Background According to Carl Jung, these dreams arise from the collective unconscious more than the personal unconscious; that is, their imagery is broadly shared by many people in different cultures. Jung states that these dreams appear more often during critical phases of change in human life: early youth, puberty, middle age, and as one nears death. These dreams primarily express "eternal human problems", rather than personal issues. Despite this, they serve as milestones along the path to individuation, which includes the integration of the personal ego into a sense of becoming a universal human being. Big dreams are connected to the idea of the Hero's Journey, which Jung describes as the "life of the hero", waypoints along a human life understood in mythological terms. Examples Jung gives the example of a man who dreamt of a great snake that guarded a golden bowl in an underground vault. He explains that this image was not based directly on the dreamer's personal experience (although he had once seen a large snake at the zoo), but on archetypal imagery and collective emotion. References Analytical psychology Dream
Big dream
Biology
254
14,596,562
https://en.wikipedia.org/wiki/Nyctinasty
In plant biology, nyctinasty is the circadian rhythm-based nastic movement of higher plants in response to the onset of darkness, or a plant "sleeping". Nyctinastic movements are associated with diurnal light and temperature changes and controlled by the circadian clock. It has been argued that for plants that display foliar nyctinasty, it is a crucial mechanism for survival; however, most plants do not exhibit any nyctinastic movements. Nyctinasty is found in a range of plant species and across xeric, mesic, and aquatic environments, suggesting that this singular behavior may serve a variety of evolutionary benefits. Examples are the closing of the petals of a flower at dusk and the sleep movements of the leaves of many legumes. Physiology Plants use phytochrome to detect red and far red light. Depending on which kind of light is absorbed, the protein can switch between a Pr state that absorbs red light and a Pfr state that absorbs far red light. Red light converts Pr to Pfr and far red light converts Pfr to Pr. Many plants use phytochrome to establish circadian cycles which influence the opening and closing of leaves associated with nyctinastic movements. Anatomically, the movements are mediated by pulvini. Pulvinus cells are located at the base or apex of the petiole, and the flux of water from the dorsal to ventral motor cells regulates leaf closure. This flux is in response to movement of potassium ions between the pulvinus and surrounding tissue. Movement of potassium ions is connected to the concentration of Pfr or Pr. In Albizia julibrissin, longer dark periods, leading to low Pfr, result in a faster leaf opening. In the SLEEPLESS mutation of Lotus japonicus, the pulvini are changed into petiole-like structures, rendering the plant incapable of closing its leaflets at night. Non-pulvinar mediated movement is also possible and happens through differential cell division and growth on either side of the petiole, resulting in a bending motion within the leaves to the desired position. Leaf movement is also controlled by bioactive substances known as leaf-opening or leaf-closing factors. Several leaf-opening and leaf-closing factors have been characterized biochemically. These factors differ among plants. Leaf closure and opening is mediated by the relative concentrations of leaf-opening and leaf-closing factors in a plant. In a given species, either the leaf-opening or the leaf-closing factor is a glycoside, which is inactivated by hydrolysis of the glycosidic bond via beta-glucosidase. In Lespedeza cuneata the leaf-opening factor, potassium lespedezate, is hydrolyzed to 4-hydroxyphenylpyruvic acid. In Phyllanthus urinaria, the leaf-closing factor phyllanthurinolactone is hydrolyzed to its aglycon during the day. Beta-glucosidase activity is regulated via circadian rhythms. Fluorescence studies have shown that the binding sites of leaf-opening and leaf-closing factors are located on the surface of the motor cell. Shrinking and expansion of the motor cell in response to this chemical signal allows for leaf opening and closure. The binding of leaf-opening and leaf-closing factors is specific to related plants. The leaf movement factor of Chamaecrista mimosoides (formerly Cassia mimosoides) was found not to bind to the motor cell of Albizia julibrissin. The leaf movement factor of Albizia julibrissin similarly did not bind to the motor cell of Chamaecrista mimosoides, but did bind to Albizia saman and Albizia lebbeck. Function The functions of nyctinastic movement have yet to be conclusively identified, although several have been proposed.
Minorsky hypothesized that nyctinastic behaviors are adaptive because the plant can reduce its surface area at night, which can lead to better temperature retention and also reduces night-time herbivory. Minorsky specifically suggests a Tritrophic Hypothesis, in which he considers the predators of herbivores in addition to the plants and herbivores themselves. When leaves move up or down, herbivores become more visible to nocturnal predators in both a spatial and olfactory sense, increasing herbivore predation and subsequently decreasing damage to a plant's leaves. Studies using mutant plants with a loss-of-function gene that results in petiole growth instead of pulvini found that these plants have less biomass and smaller leaf area than the wild type. This indicates nyctinastic movement may be beneficial to plant growth. Charles Darwin believed that nyctinasty exists to reduce the risk of plants freezing. Nyctinasty may occur to protect the pollen, keeping pollen dry and intact during the nighttime when most pollinating insects are inactive. Conversely, some flowers that are pollinated by moths or bats exhibit nyctinastic flower opening at night. History The earliest recorded observation of this behavior in plants dates back to 324 BC, when Androsthenes of Thasos, a companion to Alexander the Great, noted the opening and closing of tamarind tree leaves from day to night. Carl Linnaeus (1729) proposed that this was the plants sleeping, but this idea has been widely contested. References External links Plant physiology
Nyctinasty
Biology
1,090
27,355,554
https://en.wikipedia.org/wiki/Scotland%20Manufacturing
Scotland Manufacturing, Inc. is a full-service stamping company and manufacturer of deep drawn metal stampings, progressive stamping (die manufacturing) and value-added assembly solutions. Scotland has presses running from 110 to 1,000 tons, and the company provides refrigerant components, compressor housings, filter shells and other deep-drawn stampings for the industrial, automotive and heavy truck industries. Scotland Manufacturing is ISO 9001 certified and produces stampings from a variety of metals including cold rolled steel, electro tin plate and stainless steel. Their facility has more than of manufacturing space. History Scotland Manufacturing, located in Laurinburg, North Carolina, was founded in 1979 as a supplier to the filter industry. Named for Scotland County in southeastern North Carolina, the company is situated between Charlotte, the state's largest city, and Wilmington, the state's largest port. From an $8-million business in 2001, Scotland Manufacturing grew to a $20-million business in 2009. Scotland Manufacturing is part of The Reserve Group (TRG), a private equity group based in Akron, Ohio. The Reserve Group's philosophy is to provide strategic business support and investment capital, allowing its portfolio of companies to remain competitive in the marketplace. Scotland Manufacturing is the oldest member company in the TRG portfolio. Products For the automotive industry, Scotland Manufacturing creates door panels for high-end automotive OEMs, clutch plates, transmission plates, brackets and filter cans. Additionally, the company uses a variety of metals, from flat steel to stainless steel, to produce components for the construction industry such as chimney caps, and brackets and braces for pre-fabricated buildings. Scotland also creates railway brake intercasings, rail shoes, rail pads and rail friction products, as well as fire extinguishers and other components for the aerospace industry.
Scotland Manufacturing
Chemistry
573
76,909,744
https://en.wikipedia.org/wiki/Scrooge%20effect
The Scrooge effect is a psychological phenomenon that describes a noticeable behavioural change in individuals towards increased generosity and altruism following encounters with mortality or existential dread. It emphasizes that the realization of mortality motivates individuals to embrace cultural values and engage in activities that provide significance and transcendence beyond the concept of death. Corresponding to terror management theory, the Scrooge effect proposes that existential apprehension can stimulate positive shifts in behavior. Individuals may prioritize acts of kindness and philanthropy as coping mechanisms to grapple with mortality and reaffirm their sense of purpose. Empirical studies suggest that personal adversities such as severe illness, financial adversity, or the bereavement of a loved one can instigate pro-social conduct, fostering sentiments of generosity and empathy. The Scrooge effect offers a conceptual framework within psychology to examine the determinants influencing altruistic behaviors and the underlying mechanisms driving transformative experiences. Ebenezer Scrooge "A Christmas Carol", published in 1843 by Charles Dickens, has become a holiday classic. It revolves around Ebenezer Scrooge, an elderly, rich man whom the reader comes to know as bitter and cold. During the night of Christmas Eve he is visited by three ghosts, whose lessons guide Scrooge toward self-recognition. The Ghost of Christmas Future reveals to Scrooge a future in which he passes away alone, mourned by no one. This encounter prompts him to change his behavior by donating his money to charity, being kind to others and spending time with his family. The character of Ebenezer Scrooge and his transformation serve as a timeless reminder of the importance of redemption, compassion, reflection and the true meaning of the Christmas spirit. Charles Dickens' novel serves as a reminder of the true meaning of life and reminds the reader of the salience of mortality. The novel critiques social injustice, poverty, inequality, and the dehumanizing effects of greed, and its protagonist's transformation exemplifies the phenomenon of the Scrooge effect. Therefore, Charles Dickens' classic Christmas story connects to terror management theory and the Scrooge effect. Terror management theory revolves around the idea that the thought of dying should encourage people to act pro-socially. The protagonist of this novel does not follow social norms of kindness and empathy. When reminded of the loneliness of his own death, terror management theory comes into play and gives him a new perspective on what is important in life. Studies Dickens' story suggests the following consequence of mortality salience: generous behaviour leads to the belief that one is a meaningful and valuable member of one's own construct of the world, and confrontation with mortality should therefore lead to kinder and more benevolent acts. This hypothesis was tested by Pyszczynski et al. in the U.S. in 1996. 17 male and 14 female participants were interviewed about the importance of several charities, either directly in front of a funeral home or a few blocks away. Results showed that people with the direct view of the funeral home were more likely to rate the charities favourably. Just like Dickens' Mr. Scrooge, people who are confronted with their own mortality, in this case by standing in front of a funeral home, view giving to charities in a more favourable light. Pyszczynski et al.
found that there is some form of in-group favouritism concerning the choice of charity. 27 introductory psychology students, of whom 18 were female and 9 male, were given $1.50 as "compensation" at the beginning of the study and were offered the chance to donate a small amount of money to a charity of their choice later in the experiment. People gave more money to an American house-building charity than to an international education charity. One therefore has to keep in mind that there are limitations to the Scrooge effect. In-group favouritism in relation to mortality salience was also demonstrated in further studies. Based on the research of Pyszczynski et al. in 1996, another study was conducted by Zaleskiewicz et al. in 2014 to investigate the Scrooge effect further, using the dictator game (Study 1), the ultimatum game (Study 2) and a quasi-naturalistic situation (Study 3). They hypothesized that reminders of one's own mortality would increase prosocial behaviour, leading to more generous distribution of financial resources, and that such behaviour would in turn suppress death-related thoughts. Again, this parallels the change that Mr. Scrooge undergoes. Mortality salience predicted the amount of money sent to the other player in the games, which is interpreted as greater joy derived solely from giving. These studies were specifically designed to investigate only the Scrooge effect. Since this phenomenon is connected to terror management theory, investigations commonly link the effect with the theory. Terror Management Theory The Scrooge effect, a concept that delves into the correlation between human behavior and the realization of one's own mortality, can be explained by terror management theory (TMT). It states that people's innate fear of dying leads them to look for strategies to manage their death anxiety by preserving their cultural values and beliefs. Acting in a prosocial way, as well as dedicating oneself to religious beliefs, are two of the main strategies for managing this anxiety. These observations were established through questionnaires about views relating to death and spirituality which were filled out by patients facing a life-threatening illness. Results confirmed that religious beliefs buffer anxiety concerning death and decrease the likelihood of depressive symptoms. The Scrooge effect is a phenomenon that shows the impact of TMT on prosocial behavior. TMT proposes that existential anxiety is triggered by reminders of mortality, like those felt by Scrooge. People frequently look to their cultural worldview and seek ways to reinforce their significance within it in order to diminish this anxiety. Individuals may feel more inclined to cooperate, show kindness, and be generous when they are made aware of their own death, whether consciously or unconsciously. These actions support a feeling of meaning and purpose in life by being in line with cultural norms and values. People reduce existential anxiety by affirming their worth and significance through helping others and improving their community. Life-threatening events have the power to change one's outlook on life, as suggested by a study conducted before and during the COVID-19 pandemic. The results depict a significant rise in mortality salience. This is a prime example of TMT and the Scrooge effect. The anxiety concerning death and one's own mortality also led to an increase in prosocial behavior and a decrease in interest in material possessions.
TMT suggests that exhibiting prosocial conduct can offer a feeling of symbolic immortality: humans find some psychological solace in the face of mortality when they leave a positive legacy that transcends their physical life and benefits others or society at large. Other studies suggest that mortality salience may not influence positive reciprocity but does affect negative reciprocity (retaliation rather than altruism), which raises questions about the effect's universality and whether it is context dependent. Further research has shown that existential anxiety amplifies a range of radical behaviors, both positive, such as the search for meaning, and negative, such as terrorism and religious fanaticism. References Psychological effects A Christmas Carol Philanthropy Altruism
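To make the logic of the two-condition experiments discussed above concrete, the following is a minimal, entirely hypothetical Python sketch of how a mortality-salience effect on dictator-game giving might be simulated and tested. It is not taken from any of the cited studies; the group sizes, endowment, and effect size are invented for illustration only.

# Hypothetical simulation of a mortality-salience dictator-game study.
# All numbers (group sizes, mean gifts, spread) are invented for illustration.
import random
from statistics import mean, stdev

random.seed(0)

def simulate_giving(n, mean_gift, sd):
    # Amount of a $10 endowment each participant sends to the other player,
    # clipped to the feasible range [0, 10].
    return [min(10.0, max(0.0, random.gauss(mean_gift, sd))) for _ in range(n)]

control = simulate_giving(n=30, mean_gift=2.5, sd=1.5)   # neutral prime
salience = simulate_giving(n=30, mean_gift=4.0, sd=1.5)  # mortality prime

def welch_t(a, b):
    # Welch's t statistic for the difference in mean donations.
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(b) - mean(a)) / (va + vb) ** 0.5

print(f"control mean gift:  {mean(control):.2f}")
print(f"salience mean gift: {mean(salience):.2f}")
print(f"Welch t = {welch_t(control, salience):.2f}")

In a design like Zaleskiewicz et al.'s Study 1, a sufficiently large positive t statistic would be read as mortality salience predicting more generous allocations.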
Scrooge effect
Biology
1,528
6,869,913
https://en.wikipedia.org/wiki/Norton%20Guides
Norton Guides were a product family sold by Peter Norton Computing. The guides were written in 1985 by Warren Woodford for the x86 assembly, C, BASIC, and Forth languages and made available to DOS users via a terminate-and-stay-resident (TSR) program that integrated with programming-language editors on IBM PC-type computers. Norton Guides appears to be one of the first online help systems and the first example of a commercial product in which programming reference information was integrated into the software development environment. The format was later used by independent users to create simple hypertexts before the concept became widespread. Hypertext capabilities, however, were limited: links between entries were only possible through "see also" references at the end of each entry. The concept of providing "information at your fingertips", as Woodford called it, via a TSR program was a signature technology he developed in 1980 and used in other programs he created in that era, including MathStar, WordFinder/SynonymFinder, and a TEMPEST WWMCCS workstation developed for Systematics General Corporation. Warren's Guides, also known as the Norton Guides, were the last application of this type written by Woodford. Norton Guides were compiled from ASCII source files with a tool called NGC; a toy sketch of this compile-to-linked-entries idea follows this entry. Morten Elling wrote an alternative guide compiler, NGX, in 1994 or earlier. A utility to view Norton Guides .ng files is available at http://www.davep.org/norton-guides/ Editions Norton on-line programmer's guides. An on-line reference library of programming data. Version 1.0. Four 5-1/4" floppy disks. System requirements: IBM PC or compatible computer; 128K RAM; DOS 2.0 or higher; one disk drive. Links http://www.davep.org/norton-guides/ http://x-hacker.org/ng/ Computer programming books Hypertext
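The following Python sketch illustrates, in miniature, the Norton Guides model of compiling ASCII source into reference entries whose only cross-links are "see also" references. The directive names used here (!entry, !seealso) are invented for this illustration and are not the actual NGC source syntax.

# Toy model of compiling guide source text into "see also"-linked entries.
# The !entry/!seealso directives are hypothetical, not real NGC syntax.
SOURCE = """\
!entry strcpy
Copies the string pointed to by src into dest.
!seealso strncpy memcpy
!entry memcpy
Copies n bytes from src to dest.
!seealso strcpy
"""

def compile_guide(text):
    guide, current = {}, None
    for line in text.splitlines():
        if line.startswith("!entry "):
            current = line.split(maxsplit=1)[1]
            guide[current] = {"body": [], "see_also": []}
        elif line.startswith("!seealso ") and current:
            guide[current]["see_also"] = line.split()[1:]
        elif current:
            guide[current]["body"].append(line)
    return guide

guide = compile_guide(SOURCE)
print(guide["strcpy"]["see_also"])  # ['strncpy', 'memcpy']

A viewer such as the TSR would pop up the body text for the entry under the cursor and let the user jump only to the names listed under "see also", mirroring the limited hypertext capability described above.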
Norton Guides
Technology
388
55,108,227
https://en.wikipedia.org/wiki/Forever%20Labs
Forever Labs is a longevity company that uses an outpatient procedure to harvest adult mesenchymal stem cells for possible future medical treatment. While the collection and storage of infant cord blood has become commonplace since the 1990s, Forever Labs is notable for being the first company to offer collection and storage of adult stem cells. History Forever Labs was founded in Ann Arbor, Michigan in 2015 by Dr. Mark Katakowski, Dr. Ben Buller, Dr. Laith Farjo, and Steven Clausnitzer. After its initial launch in Michigan, the company now operates in eight states. In the summer of 2017, Forever Labs was invited to participate in the Silicon Valley incubator Y Combinator. Procedure Forever Labs contracts with licensed physicians to perform the collection procedure. The patient lies on their side, and the area just above the hip is disinfected and locally anesthetized. The physician then uses a special syringe to collect the bone marrow. The cells are spun down in special centrifuge devices, cooled, and shipped to a long-term storage facility. The entire process typically takes 15–20 minutes. Storage and future use The patient's stem cells are stored in a clinical-grade cryogenic biorepository for future withdrawal, expansion, and use in disease prevention, longevity treatments, and regenerative medicine. Hundreds of clinical trials studying the effects of stem cell therapy are currently underway. References External links https://techcrunch.com/2017/08/17/forever-labs-preserves-young-stem-cells-to-prevent-your-older-self-from-aging/ http://michiganradio.org/post/michigan-company-mines-stem-cells-search-fountain-youth https://techcrunch.com/gallery/demo-day-y-combinator/slide/7/ https://www.acast.com/dannyinthevalley/special-insidesiliconvalleysquesttodefeatageing?autoplay https://futurism.com/these-six-startups-from-y-combinators-demo-day-1-are-ready-to-transform-our-world/ http://www.wxyz.com/news/can-storing-your-stem-cells-be-the-key-to-fighting-disease-and-living-longer https://foreverlabs.com Cryopreservation Companies based in Ann Arbor, Michigan
Forever Labs
Chemistry
528
63,884,052
https://en.wikipedia.org/wiki/NOMARS
NOMARS (No Manning Required, Ship) is a concept for a range of ships and smaller watercraft operating as unmanned surface vessels for the US Department of Defense, developed by the United States' Defense Advanced Research Projects Agency (DARPA). These concept craft range in size and form. Examples include: Sea Hunter, a DARPA-developed trimaran USV launched in 2016. A dual-hulled platform proposed in a concept by Austal USA. A single-hulled missile ship with a propulsor and steering pod. By removing the human element from all ship design considerations, NOMARS aims to demonstrate significant advantages, including size, cost (procurement, operations, and sustainment), at-sea reliability, survivability in heavy sea states, survivability against adversary actions (stealth considerations, resistance to tampering, etc.), and hydrodynamic efficiency (hull optimization without consideration for crew safety or comfort). In 2022, ship designer Serco was selected to develop the NOMARS program by building, testing, and demonstrating the first-generation ship. References Unmanned surface vehicles of the United States DARPA projects Autonomous ships
NOMARS
Technology,Engineering
240
78,143,396
https://en.wikipedia.org/wiki/Clorprenaline
Clorprenaline, also known as isoprophenamine, or clorprenaline hydrochloride in the case of the hydrochloride salt, is a sympathomimetic and bronchodilator medication which is marketed in Japan. It acts as a β-adrenergic receptor agonist, that is, a β-sympathomimetic. Brand names of clorprenaline in Japan are numerous and include Asnormal, Bazarl, Bronchon, Clopinerin, Conselt, Cosmoline, Fusca, Kalutein, Pentadoll, Restanolon, and Troberin. The drug was first described in the literature by 1956. References Beta-adrenergic agonists Bronchodilators Chloroarenes Phenylethanolamines Sympathomimetics Isopropylamino compounds
Clorprenaline
Chemistry
201
4,249,038
https://en.wikipedia.org/wiki/M%C3%B8ller%20scattering
Møller scattering is the name given to electron-electron scattering in quantum field theory, named after the Danish physicist Christian Møller. The electron interaction that is idealized in Møller scattering forms the theoretical basis of many familiar phenomena, such as the repulsion of electrons in the helium atom. While formerly many particle colliders were designed specifically for electron-electron collisions, more recently electron-positron colliders have become more common. Nevertheless, Møller scattering remains a paradigmatic process within the theory of particle interactions. In the usual notation of particle physics the process is written $$e^- e^- \to e^- e^-.$$ In quantum electrodynamics, there are two tree-level Feynman diagrams describing the process: a t-channel diagram in which the electrons exchange a photon, and a similar u-channel diagram. Crossing symmetry, one of the tricks often used to evaluate Feynman diagrams, relates the cross section of Møller scattering to that of Bhabha scattering (electron-positron scattering). In the electroweak theory the process is instead described by four tree-level diagrams: the two from QED and an identical pair in which a Z boson is exchanged instead of a photon. The weak force is purely left-handed, but the weak and electromagnetic forces mix into the particles we observe. The photon is symmetric by construction, but the Z boson prefers left-handed particles to right-handed particles. Thus the cross sections for left-handed and right-handed electrons differ. The difference was first noticed by the Russian physicist Yakov Zel'dovich in 1959, but at the time he believed the parity-violating asymmetry (a few hundred parts per billion) was too small to be observed. This parity-violating asymmetry can be measured by firing a polarized beam of electrons through an unpolarized electron target (liquid hydrogen, for instance), as was done by an experiment at the Stanford Linear Accelerator Center, SLAC-E158. The asymmetry in Møller scattering is $$A_{PV} = m_e E \,\frac{G_F}{\sqrt{2}\,\pi\alpha}\,\frac{16\sin^2\theta}{(3+\cos^2\theta)^2}\,\left(1-4\sin^2\theta_W\right),$$ where $m_e$ is the electron mass, $E$ the energy of the incoming electron (in the reference frame of the other electron), $G_F$ is Fermi's constant, $\alpha$ is the fine-structure constant, $\theta$ is the scattering angle in the center-of-mass frame, and $\theta_W$ is the weak mixing angle, also known as the Weinberg angle. A rough numerical evaluation of this formula is sketched at the end of this entry. QED computation The Møller scattering can be calculated from the QED point of view at tree level with the help of the two diagrams shown on this page. These two diagrams contribute at leading order in QED. If the weak force, which is unified with the electromagnetic force at high energy, is taken into account, two tree-level diagrams for the exchange of a Z boson must be added. Here we focus on a strict tree-level QED computation of the cross section, which is rather instructive but perhaps not the most accurate description from a physical point of view. Before the derivation, we write the 4-momenta as $p_1$ and $p_2$ for the incoming electrons and $p_3$ and $p_4$ for the outgoing electrons. The Mandelstam variables are $$s=(p_1+p_2)^2,\qquad t=(p_1-p_3)^2,\qquad u=(p_1-p_4)^2.$$ These Mandelstam variables satisfy the identity $s+t+u=4m_e^2$. According to the two diagrams on this page, the matrix element of the t-channel is $$\mathcal{M}_t=-\frac{e^2}{t}\,[\bar{u}(p_3)\gamma^\mu u(p_1)]\,[\bar{u}(p_4)\gamma_\mu u(p_2)],$$ and the matrix element of the u-channel is $$\mathcal{M}_u=+\frac{e^2}{u}\,[\bar{u}(p_4)\gamma^\mu u(p_1)]\,[\bar{u}(p_3)\gamma_\mu u(p_2)],$$ the relative sign reflecting the antisymmetry required for identical fermions in the final state. The sum is $\mathcal{M}=\mathcal{M}_t+\mathcal{M}_u$. To calculate the unpolarized cross section, we average over initial spins and sum over final spins, with the factor 1/4 (1/2 for each incoming electron), where we have used the spin-sum relation $\sum_s u^{(s)}(p)\,\bar{u}^{(s)}(p)=\slashed{p}+m_e$. We would next calculate the traces.
The first term, from the square of the t-channel amplitude, is evaluated using the γ-matrix trace identities together with the fact that the trace of any product of an odd number of γ matrices is zero. The second term, from the square of the u-channel amplitude, follows in the same way. Using the γ-matrix identities and the Mandelstam identity $s+t+u=4m_e^2$, the third, interference term is obtained. Substituting the momenta in the center-of-mass frame, $p_1=(E,\vec{p})$, $p_2=(E,-\vec{p})$, $p_3=(E,\vec{p}\,')$, $p_4=(E,-\vec{p}\,')$, with $|\vec{p}|=|\vec{p}\,'|$ and $\theta$ the angle between $\vec{p}$ and $\vec{p}\,'$, we finally obtain the unpolarized cross section, with $E$ the energy and $p$ the momentum of each electron in the center-of-mass frame. In the nonrelativistic limit, $p \ll m_e$, the result reduces to the Mott cross section for the Coulomb scattering of identical spin-1/2 particles. In the ultrarelativistic limit, $E \gg m_e$, $$\frac{d\sigma}{d\Omega}=\frac{\alpha^2}{4E^2}\,\frac{(3+\cos^2\theta)^2}{\sin^4\theta}.$$ References External links SLAC E158: Measuring the Electron's WEAK Charge Quantum electrodynamics Scattering theory
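As a quick sanity check on the scale of the parity-violating asymmetry quoted above, the following Python snippet evaluates the tree-level formula numerically. The beam energy and scattering angle are illustrative, roughly E158-like choices rather than the experiment's exact kinematics, and radiative corrections are ignored.

# Tree-level Moller parity-violating asymmetry, natural units (GeV).
import math

m_e   = 0.000511       # electron mass, GeV
G_F   = 1.1664e-5      # Fermi constant, GeV^-2
alpha = 1.0 / 137.036  # fine-structure constant
s2w   = 0.238          # sin^2(theta_W) at low momentum transfer (approximate)

def a_pv(E, theta):
    # Asymmetry for beam energy E (target electron rest frame) and CM angle theta.
    kinematic = 16.0 * math.sin(theta) ** 2 / (3.0 + math.cos(theta) ** 2) ** 2
    return m_e * E * G_F / (math.sqrt(2.0) * math.pi * alpha) * kinematic * (1.0 - 4.0 * s2w)

# A ~48 GeV beam at 90 degrees in the CM frame gives a few hundred ppb.
print(f"A_PV ~ {a_pv(48.0, math.pi / 2.0) * 1e9:.0f} ppb")

The result, a few hundred parts per billion, matches the order of magnitude of Zel'dovich's estimate quoted above; the smallness of the factor $1-4\sin^2\theta_W$ is what makes the measurement so demanding.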
Møller scattering
Chemistry
882
27,016,205
https://en.wikipedia.org/wiki/International%20Conference%20on%20Architectural%20Support%20for%20Programming%20Languages%20and%20Operating%20Systems
The International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) is an annual interdisciplinary computer science conference organized by the Association for Computing Machinery (ACM). Reflecting its interdisciplinary focus, sponsorship of the conference is split 50% to the ACM's Special Interest Group on Computer Architecture (SIGARCH) and 25% each to the Special Interest Group on Programming Languages (SIGPLAN) and the Special Interest Group on Operating Systems (SIGOPS). It is a high-impact conference in computer architecture and operating systems, but less so in programming languages and software engineering. See also List of computer science conferences References Computer science conferences Computer architecture conferences
International Conference on Architectural Support for Programming Languages and Operating Systems
Technology
139
21,483,862
https://en.wikipedia.org/wiki/Center%20for%20Probing%20the%20Nanoscale
The Center for Probing the Nanoscale (CPN) at Stanford University was founded in 2004 by researchers from Stanford University and IBM. The center is one of the National Science Foundation (NSF) Nanoscale Science and Engineering Centers (NSEC). Its goal is to develop and apply novel nanoprobes that dramatically improve the capability to observe, manipulate, and control nanoscale objects and phenomena, with developed technology transferred to industry for commercial implementation. Nanoprobe development and applications are under way in five theme groups, focusing on Individual Nanomagnet Characterization, Nanoscale Magnetic Resonance Imaging, Nanoscale Electrical Imaging, Plasmonic Scanning Tunneling Microscopy, and BioProbes. Besides its scientific research activities, CPN members actively engage in public outreach programs to bring nanoscale science and technology to a broad and diverse audience. A Summer Institute for Middle School Teachers is held each summer, giving teachers the opportunity to learn about nanotechnology, engage with scientists, develop course material, and get hands-on experience in research labs. Nanoprobe lectures and video recordings from various workshops are also available online. External links National Science Foundation - Consortium of Nanoscale Science and Engineering Centers Stanford University National Science Foundation Nanotechnology institutions 2004 establishments in California
Center for Probing the Nanoscale
Materials_science
262