Dataset columns (types and value ranges as shown by the dataset viewer):
id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
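The rows below follow this schema. As a hedged sketch of how such a dataset might be loaded and filtered, assuming a Hugging Face-style `datasets` layout (the dataset path "user/wiki-articles" is a placeholder, not the real identifier):

```python
# Minimal sketch: loading and filtering rows with the columns listed above.
# "user/wiki-articles" is a hypothetical dataset path used only for illustration.
from datasets import load_dataset

ds = load_dataset("user/wiki-articles", split="train")  # assumed split name

# Keep only short Chemistry entries, using the categories and token_count columns.
chem_short = ds.filter(
    lambda row: row["categories"] == "Chemistry" and row["token_count"] < 1000
)

# Print a few of the matching rows' identifying fields.
for row in chem_short.select(range(min(3, len(chem_short)))):
    print(row["id"], row["source"], row["url"])
```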
18,702,001
https://en.wikipedia.org/wiki/Arsenic%20triiodide
Arsenic triiodide is the inorganic compound with the formula AsI3. It is an orange to dark red solid that readily sublimes. It is a pyramidal molecule that is useful for preparing organoarsenic compounds. Preparation It is prepared by the reaction of arsenic trichloride and potassium iodide: AsCl3 + 3 KI → AsI3 + 3 KCl Reactions Hydrolysis occurs only slowly in water, forming arsenic trioxide and hydroiodic acid. The reaction proceeds via formation of arsenous acid, which exists in equilibrium with hydroiodic acid. The aqueous solution is highly acidic; the pH of a 0.1 N solution is 1.1. It decomposes to arsenic trioxide, elemental arsenic and iodine when heated in air at 200 °C. The decomposition, however, commences at 100 °C and occurs with the liberation of iodine. Former uses Under the name of Donovan's solution, it was once recommended to treat rheumatism, arthritis, malaria, trypanosome infections, tuberculosis, and diabetes. References Arsenic(III) compounds Arsenic halides Iodides
Arsenic triiodide
Chemistry
231
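The arsenic triiodide entry above gives the preparation reaction inline and describes the hydrolysis only in words. As a hedged illustration, balanced equations consistent with that description would be (a sketch of the stated chemistry, not text taken from the entry):

```latex
% Preparation by halide exchange, as stated in the entry:
\mathrm{AsCl_3 + 3\,KI \longrightarrow AsI_3 + 3\,KCl}

% Slow hydrolysis via arsenous acid, which then condenses to the trioxide:
\mathrm{AsI_3 + 3\,H_2O \longrightarrow As(OH)_3 + 3\,HI}
\mathrm{2\,As(OH)_3 \longrightarrow As_2O_3 + 3\,H_2O}
```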
1,530,070
https://en.wikipedia.org/wiki/NT%20%28cassette%29
NT (sometimes marketed under the name Scoopman) is a digital memo recording system introduced by Sony in 1992. The NT system was introduced to compete with the Microcassette, introduced by Olympus, and the Mini-Cassette, by Philips. Design The system was an R-DAT based system which stored memos using helical scan on special microcassettes, which have a tape width of 2.5 mm and a recording capacity of up to 120 minutes, similar to Digital Audio Tape. The cassettes are offered in three versions: the Sony NTC-60, -90, and -120, each describing the length of time (in minutes) the cassette can record. NT stands for Non-Tracking, meaning the head does not precisely follow the tracks on the tape. Instead, the head moves over the tape at approximately the correct angle and speed, but performs more than one pass over each track. The data in each track is stored on the tape in blocks with addressing information that enables reconstruction in memory from several passes. This considerably reduced the required mechanical precision, reducing the complexity, size, and cost of the recorder. Another feature of NT cassettes is Non-Loading, which means instead of having a mechanism to pull the tape out of the cassette and wrap it around the drum, the drum is pushed inside the cassette to achieve the same effect. This also significantly reduces the complexity, size, and cost of the mechanism. Audio sampling is in stereo at 32 kHz with 12-bit nonlinear quantization, corresponding to 17-bit linear quantization. Data written to the tape is packed into data blocks and encoded with LDM-2 low deviation modulation. Uses The Sony NT-1 Digital Micro Recorder, introduced in 1992, features a real-time clock that records a time signal on the digital track along with the sound data, making it useful for journalism, police and legal work. Due to the machine's buffer memory, it is capable of automatically reversing the tape direction at the end of the reel without an interruption in the sound. The recorder uses a single "AA"-size cell for primary power, plus a separate CR-1220 lithium cell to provide continuous power to the real-time clock. The Sony NT-2, an improved successor to the Sony NT-1 Digital Micro Recorder, introduced in 1996, was the final machine in the series. NT cassettes were used in the film industry and law enforcement, as the quality was superior to most portable audio recorders in that time period. The data embedded in the recording, in addition to the use of a proprietary tape and format, made it an excellent choice for law enforcement. As digital technology evolved, and became accepted in US court systems, the NT2 was replaced by devices that recorded to internal drives and removable digital media. The new media were much more cost-effective and yielded premium-quality audio recordings at a lower cost. It was also easier to make court-admissible copies utilizing other media, besides NT2 cassettes. Rebranded NT cassettes were used as the storage medium in the Datasonix Pereos backup system from 1994, claiming a capacity of up to 1.25 gigabytes per tape. Due to overhead and variable data compression ratios, the actual amount of data stored could be significantly below a gigabyte. See also References Digital electronics Sony products Audiovisual introductions in 1992 Discontinued media formats
NT (cassette)
Engineering
690
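The NT entry above describes non-tracking playback, where the head makes several imprecise passes over each track and the addressed data blocks are reassembled in memory. A minimal sketch of that reassembly idea follows; the block addresses, function names, and pass data are all invented for illustration and are not part of the NT specification:

```python
# Minimal sketch of non-tracking reconstruction: several imprecise head passes each
# recover only some addressed blocks of a track; the track is rebuilt in memory
# once every block address has been seen at least once. All data here is invented.

def reconstruct_track(passes, blocks_per_track):
    """passes: list of dicts mapping block address -> block payload (bytes)."""
    track = {}
    for head_pass in passes:
        for address, payload in head_pass.items():
            track.setdefault(address, payload)  # keep the first good read of each block
    if len(track) < blocks_per_track:
        raise ValueError("track incomplete; another pass over the tape is needed")
    return b"".join(track[a] for a in range(blocks_per_track))

# Example: three passes, each missing different blocks of a 4-block track.
passes = [
    {0: b"AA", 2: b"CC"},
    {1: b"BB", 2: b"CC"},
    {3: b"DD"},
]
print(reconstruct_track(passes, blocks_per_track=4))  # b"AABBCCDD"
```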
13,092,163
https://en.wikipedia.org/wiki/Reversible%20charge%20injection%20limit
For an electrode of a particular size and geometry in a solution, the reversible charge injection limit is the amount of charge that can move from the electrode to the surroundings without causing an irreversible chemical reaction. References Electrochemistry
Reversible charge injection limit
Chemistry
52
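The entry above defines the reversible charge injection limit only in words. As a hedged worked example, the injected charge for a rectangular current pulse can be computed and normalized by electrode area; the pulse values and electrode area below are invented for illustration, and quoting the limit per unit area is an assumption about the usual convention rather than something stated in the entry:

```latex
% Charge delivered by a rectangular current pulse (illustrative values):
Q = I \cdot t_{\mathrm{pulse}} = 50\,\mu\mathrm{A} \times 200\,\mu\mathrm{s} = 10\,\mathrm{nC}

% Normalized by the electrode's geometric surface area, so limits can be
% compared across electrode sizes and geometries:
\frac{Q}{A} = \frac{10\,\mathrm{nC}}{2000\,\mu\mathrm{m}^2} = 0.5\,\mathrm{mC\,cm^{-2}}
```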
37,401,198
https://en.wikipedia.org/wiki/RV%20Rachel%20Carson%20%282008%29
RV Rachel Carson is a research vessel owned and operated by the University of Maryland's Center for Environmental Science, named in honor of the marine biologist and writer Rachel Carson. The 81-foot aluminum-hulled vessel is an extended and modified Challenger class fast research vessel, designed by marine architect Roger Long. It is equipped with twin 1,200 horsepower diesel engines and water jet drives which give a maximum speed of 24 knots. A dynamic positioning system automatically maintains the vessel's position. The ship was built by Hike Metal Products of Wheatley, Ontario, at a cost of $4.6 million, and christened by Katie O'Malley on November 16, 2008, at Annapolis. The Rachel Carson has operated in Chesapeake Bay since early 2009, teaching estuarine sampling techniques, carrying out water quality surveys, plankton collection, box coring operations, and deploying instrument packages. References 2008 ships Environmental science Research vessels of the United States Ships built in Ontario University of Maryland, College Park RV 2008
RV Rachel Carson (2008)
Environmental_science
203
56,226,666
https://en.wikipedia.org/wiki/Mensacarcin
Mensacarcin is a highly oxygenated polyketide first isolated from soil-dwelling Streptomyces bottropensis bacteria. The molecule is a secondary metabolite, and can be obtained in large amounts from its producing organism. Due to its unique properties, it is an important model for drug development against melanoma and other cancers. Medical properties In NCI-60 anti-cancer compound screening, mensacarcin has a high cytostatic effect against almost all cell lines (mean of 50% growth inhibition) and a relatively selective cytotoxic effect against melanoma cells. Low COMPARE correlation with standard antitumor agents indicates a unique mechanism of action. Further examination reveals that mensacarcin affects the mitochondria. Potential use in cancer therapy With its unique mechanism, which is also effective in BRAF V600E mutation cell lines, mensacarcin is a promising model for the development of new anticancer drugs. Existing therapies for melanoma are limited. Mensacarcin's powerful effect against melanoma cells makes it especially valuable for this disease. Mechanism of action Specific disruption of mitochondrial function Mitochondria provide most of the energy used by eukaryotic cells. In a study at Oregon State University, a synthesized fluorescent probe of mensacarcin was localized to the mitochondria within 20 minutes of treatment. Live-cell bioenergetic flux analysis showed rapid disturbance of energy production and of mitochondrial function. The localization together with the metabolic effects provides evidence that mensacarcin targets mitochondria. Induction of cell death Mitochondria are also important in cell death signaling. Mensacarcin in melanoma cells activates apoptotic pathways related to caspase 3 and caspase 7, and thus induces cell death. After mensacarcin treatment of two melanoma cell lines, the cells showed characteristic chromatin condensation as well as distinct poly(ADP-ribose)polymerase-1 cleavage; flow cytometry identified a large population of apoptotic cells; single-cell electrophoresis indicated that mensacarcin causes genetic instability, a sign of early apoptosis. Effect in melanoma cell lines with BRAF V600E mutation The BRAF V600E mutation is associated with drug resistance. Due to its independent mechanism, mensacarcin has an undiminished effect in melanoma cell lines with this mutation (NCI-60 cell lines SK-Mel-28 and SK-Mel-5). References Polyketides Oncology Cancer research Cancer Melanoma
Mensacarcin
Chemistry
545
50,689,963
https://en.wikipedia.org/wiki/Mercy%20%28Overwatch%29
Mercy is a character developed by Blizzard Entertainment for their Overwatch franchise. She was introduced at launch in their 2016 first-person hero shooter video game of the same name and again appeared in its 2022 sequel, Overwatch 2. Mercy has also featured in its related animated and literary media. Lucie Pohl voices Mercy in English-language Overwatch media. Within the Overwatch narrative, "Mercy" is the callsign of Swiss doctor Angela Ziegler, who provided key medical support for the original Overwatch group. In-game, she is a Support class character who can heal and buff teammates and resurrect fallen teammates. The character is one of the more popular in the game, being noted by Blizzard to be the most played support character during the game's beta. However, her resurrect ability has been criticized at competitive and professional play levels, given the swing in momentum the ability creates. Her gameplay mechanics have undergone various reworks and patches in an attempt by Blizzard to make her a more well-rounded playable hero. Development and design Art and character Mercy was one of the first twelve Overwatch characters introduced at Blizzard's BlizzCon 2014 event. A Polygon story covering the event noted Mercy was equipped with feathery wings, a healing stream, and a pistol. Mercy is voiced by Lucie Pohl, a German voice actress. While Blizzard had been trying to find an actor native to the area that could perform a good Swiss-German accent, they found Pohl's accent to be good for the character and selected her instead, according to lead writer Michael Chu. In concept art for Overwatch, prior to her final design, Mercy had been represented as a black man with white hair and a broad build, but otherwise having similar outfits and abilities as the released version. The character was originally named "Angelica", with "Mercy" being the name of the character now known as "Pharah". Confusion arose among the game's beta testers, with players switching to Angelica when asked to switch to Mercy. As a result, Blizzard renamed the original Mercy "Rocket Dude" before that character became known as Pharah, while Angelica was simultaneously renamed Mercy. In addition to her default skin, Mercy has received themed cosmetics, such as during the game's 2017 "Year of the Rooster" event. During May 2018, Blizzard partnered with the Breast Cancer Research Foundation to offer a limited-time "Pink Mercy" skin that could be purchased by players, as well as a corresponding tee shirt with all proceeds going to the charity to support breast cancer awareness (akin to the pink ribbon). Blizzard reported that the charity sale raised over , the largest single-year donation that the Breast Cancer Research Foundation had seen. On Overwatch's accompanying website, Blizzard published a fictional biography for Mercy, listing her real name, age, and base of operations: Angela Ziegler, 37, and Zürich, Switzerland, respectively. The fictional bio also describes her as "a peerless support, a brilliant scientist, and a staunch advocate for peace." In the Overwatch universe, she is known to have been affiliated with Overwatch, being their head of medical research, and operating as a field medic and first responder while with them. She rose to these positions after becoming the head of a prominent Swiss hospital and developing a breakthrough in the field of applied nanobiology, which attracted the attention of Overwatch.
Although she disagreed with the militaristic methods implemented by Overwatch, she used their resources to develop her winged Valkyrie swift-response suit. Mercy found herself at odds with her superiors and others in Overwatch, despite her medical contributions. Mercy is also mentioned in Genji's fictional biography, with one line stating, "Hanzo believed that he had killed his brother, but Genji was rescued by Overwatch and the intervention of Dr. Angela Ziegler." In a fictional news report published by Blizzard, Mercy is quoted as commenting on the end of the Overwatch organization, describing the growing negative relationship between Soldier: 76 (Jack Morrison) and Reaper (Gabriel Reyes): "after Morrison's promotion to strike commander, his relationship with Reyes changed, the tension became more pronounced as time went on. I tried to mend things. We all did. Sometimes when the closest bonds break, all you can do is pray you stay out of the cross fire." Gameplay design While Blizzard has made various changes to all of the heroes since the game's release, Mercy has seen very significant changes, including a complete rework of her skill kit, mainly to address how her ability to resurrect downed team members has impacted the game. At release, Mercy's ultimate ability was "Resurrect", which revived any recently eliminated teammates within a certain range, granting them full health and a brief invulnerability. Blizzard's developer notes on their July 21, 2016, patch for Overwatch referred to Resurrect as "one of the most powerful abilities in the game." Timely and strategic use of Mercy's Resurrect ability was considered by players to be a game changer, either saving a defending team from a defeat, or allowing an attacking team to continue an offensive push despite several eliminations. In August 2016, Blizzard added a buff to Mercy's "Resurrect" ability, as well as her general healing abilities, in order "to solidify her role as a strong, single-target healer." Prior to a patch in September 2016, post-match Plays of the Game in competitive play were frequently dominated by a Mercy player using the Resurrect ability, requiring Blizzard to rework how the game selected Plays of the Game to reduce this frequency. Blizzard found that Mercy players overemphasised the "Resurrect" ability, as "it incentivized Mercy players to hide away from important battles, instead of taking part in them". With a September 2017 update, Mercy's "Resurrect" was moved from being an ultimate ability to a standard skill with a long cooldown. Blizzard introduced her new ultimate, "Valkyrie", which significantly buffs all of Mercy's abilities and reduces cooldowns for fifteen seconds. Blizzard felt that by keeping Resurrect as a skill and adding the new Valkyrie ultimate, it "gives her the opportunity to make big game-making plays and opens a number of new options for her". Even with her kit change, Blizzard has continued to fine-tune the abilities within the public test servers to try to achieve a proper balance. Jeff Kaplan said that Blizzard continued to find that with the new kit, Mercy players still kept out of battle save to use Resurrect, when they wanted the character to be treated as a great healer for all team members. A further adjustment was made to Mercy in a January 2018 patch; whereas the Valkyrie ultimate originally had reset the Resurrect cooldown and provided a second Resurrect that could be used immediately, the patch eliminated the reset and second Resurrect.
The Overwatch developers stated that they found that many Mercy players were still holding back and using the immediate Resurrects granted by Valkyrie to swing control of the game, which was difficult for opposing teams to counter. During the beta development period of Overwatch 2, Blizzard developers reworked Mercy's movement ability. Developers noticed that if the player jumped at a specific time during this ability, Mercy would be launched into the air, and noted that many players were incorporating this into their gameplay. Blizzard received backlash over these changes, with some players calling Mercy more clunky and awkward to play. In response to the player feedback, Blizzard stated that Mercy launching into the air was a bug and further tweaked the ability in a subsequent patch, reworking it so that a jumping mechanism would be included as a feature. Players voiced further discontent with these changes, causing Blizzard to roll back the removal of Mercy's "super jump". In February 2023, with Overwatch 2 then in its early access phase, Mercy received significant nerfs to the cooldown rate of her Guardian Angel ability, her movement, and her healing per second. Appearances Video games Mercy first appeared in the 2016 video game Overwatch. Lacking a traditional single-player campaign, Overwatch had minimal in-game lore and character backgrounds. These were instead shown through map designs and character voice lines, and elements established in tie-in media. Blizzard added seasonal events to the game to support ongoing interest, with some events including story elements; Mercy made an appearance in several of these events. In April 2017, Blizzard launched the Uprising event, which included a player versus environment co-op game mode. The default version of the mode limited players to four characters, including Mercy. The mode was set seven years before the events in the main game, in which a strike team consisting of Mercy, Torbjörn, Reinhardt, and Tracer is tasked with thwarting an attack on London perpetrated by an extremist group. Mercy also appeared in the Mercy's Recall Challenge event, launched in November 2019. Mercy appears in Overwatch 2, which launched in October 2022. Gameplay Classified as having a "support" role in Overwatch and its sequel, Mercy fills "the classic healer archetype", with healing being her primary function. Though her gameplay design has gone through several iterations with considerably more significant reworks compared to other Overwatch characters, her function as her team's healer has been maintained throughout these changes. In 2015, during the original game's beta period, PC Gamer noted "her job is simply to glue herself to a friendly and not die," and in 2023, Kotaku wrote that "as she exists now [in Overwatch 2], Mercy's primary function is as a pocket healer and damage booster." As a healer, a player using Mercy can see colored ghost images of their teammates through any obstacle, with the color indicating their health levels, and, when a teammate is in sight, can see their health bar. Mercy is equipped with the "Caduceus Staff" and "Caduceus Blaster". The Caduceus Staff possesses two firing modes: the primary fire, when connected to an ally, heals them for as long as they are tethered to the healing stream, while the secondary fire buffs an ally's damage output. The Caduceus Blaster is a small pistol which can deal effective damage at close range, but is otherwise weak.
Even equipped with the pistol, Mercy's kit design remains largely of a pacifist nature and she is the only player character in the game that cannot reliably score kills. Some players use Mercy with an offensive approach. Kotaku noted that many of these so-called "Battle Mercies" are often ridiculed, as firing the pistol is perceived to detract from time spent healing teammates. However, some Mercy players maintain that the pistol is useful in self-defense. Her abilities include "Guardian Angel" and "Angelic Descent". Mercy can use Guardian Angel to fly directly towards a targeted teammate, including those that have recently been eliminated, either to move quickly across the field or near a teammate to apply her staff's powers, or to dodge enemy fire. While aloft, Mercy can use the Angelic Descent ability to slow her rate of falling, providing more maneuverability. She also has her "Resurrect" ability, allowing her to revive one fallen teammate shortly after they are killed, though the process leaves her vulnerable for a few seconds and the ability has a long thirty-second cooldown. Her Ultimate ability is "Valkyrie", which gives her omni-directional flight and several boosts for fifteen seconds: passive self-healing, increased "Guardian Angel" range and speed, increased healing-per-second and damage boost range, chain healing and damage boost to all nearby allies of the target for the same amount, increased sidearm fire rate, and infinite sidearm ammo. Because Mercy is more mobile compared to other supports and is able to heal or boost without direct line of sight of her allies, a player's resourcefulness as Mercy can be largely independent of their teammates, particularly in cases where teammates can exemplify "bare minimum" game sense. Literary media In the tenth issue of the Overwatch digital comic book, Mercy is briefly featured and seen reading a letter, implied to be Genji's. Mercy was also the focus of a short story, "Valkyrie", released on November 11, 2019, written by Michael Chu, which focuses on the period around her joining the Overwatch team. The story tied into Mercy's Recall Challenge. Reception Often considered one of Overwatch's staple and most recognizable characters, as well as its most iconic support-class hero, Mercy has received a somewhat mixed response from both video game media writers and Overwatch players. A popularly used character in Overwatch, Mercy is "celebrated by general Overwatch fandom". Her impact on gameplay, however, particularly in the game's esports community, has attracted criticism. During the game's open beta she was the most played support class character. Following her major revamp in September 2017, Mercy was found to be universally picked as at least one healer on a team in competitive play, and because these changes required her to be closer to combat, the concept of a "Battle Mercy" who mixes healing with attacks gained popularity at higher-ranked play. Some criticized this change, and felt it was a sign that Blizzard was trying to cater the game towards the competitive players over casual ones; high-level players felt that Mercy was a "zero aim" character prior to the September 2017 revamp, making it easy for new players to learn, but subsequently, the character requires more decision making and choices that can be difficult for casual players to learn. A Eurogamer article referred to her as "gaming's greatest healer."
In his review of Overwatch, Phil Savage of PC Gamer wrote, "I particularly love how varied the movement is between characters," praising Mercy's glide ability and citing Mercy as "the perfect example of how every aspect of a character can, in the best cases, support a specific style." Savage elaborated, "alone, she's vulnerable and slow – easily ambushed and dispatched. But, with line-of-sight to a teammate, she can spread her wings and fly towards them. It's fun to do, and also reinforces the symbiotic partnership between healer and healed: Mercy needs her teammates as much as they need her. It's masterful design." Despite being a popularly selected character, both media writers and players have noted that success and enjoyment playing Mercy are largely dependent on the capabilities, attitudes, and competence of the player's teammates. Nathan Grayson of Kotaku called playing Mercy a "crapshoot" due to this. Mercy has also been cited as often present in discussions of player discontent with the game's balancing. Edwin Evans-Thirlwell of PC Gamer wrote that Mercy was "patient zero" in the conflict of the original game's competing demands. Her impact on gameplay has garnered scorn from the Overwatch player base, both in the original game and its sequel, with some considering her damage boosting abilities toxic to the game's balance. Tyler Colp of PC Gamer wrote that "she is often the scapegoat for the shooter's balance issues, a healer who many players point to as the parasite leeching away Overwatch's competitive integrity." Her Resurrect ability was considered "unfair" by some players, and due to it being able to cause such a considerable impact on the flow of a game, Kill Screen's Joshua Calixto referred to Mercy as "the most terrifying character in Overwatch." In professional play, such as the Overwatch League, some have argued that Mercy's Resurrect ability was still too much of a game-changer in these competitive matches; this concern persisted even after Mercy's September 2017 rework. Damian Alonzo for PC Gamer said that strategy for competitive Overwatch is generally about achieving a player advantage, which could be achieved with an early kill that can snowball into greater advantages throughout the match, but Mercy's Resurrect completely undoes that advantage, nullifying such early kills without any consequences. PCGamesN's Ben Barrett also said that because Mercy can use Resurrect to undo the effects of short-term gain strategies like flanking, these strategies become futile, and resorting to more standard attack patterns slows down the pace of the game. In the week of the game's launch, IGN noted Mercy as among a shortlist of characters "more popular than others when it comes to fan art." Her "Witch" skin for the game's Halloween-themed event was noted by Polygon to be extremely popular with Mercy players. A Mercy voice line used in a themed event mode was not originally paired with the Witch skin; fans successfully petitioned Blizzard to include the voice line in-game with the skin equipped. In addition to making fan art featuring the skin, some players changed their desktop backgrounds to the game's official menu screen featuring Mercy in the skin. Actress Amber Heard was reported to have spent two months designing her own Mercy cosplay for Elon Musk, after he told her she resembled the character, his favorite in Overwatch.
Mercy has also been cited as a commonly picked character in the healslut community, which engages in dominance and submission roleplay both in-game and through external means. Fandom Mercy is featured in two of the game's most prominent fandom "ships". A relationship between Mercy and Genji was hinted at in voice lines added in early 2017; fan reception to these voice lines was mixed, particularly among fans who engage in "shipping" characters from the game. Fans of the game created a couple name, Gency, for the duo's possible relationship. The relationship is one of the more popular fan-created ones, based on Genji's fictional biography, which includes Mercy saving his life and helping rehabilitate him into a cyborg. Gita Jackson of Kotaku wrote that "while some fans don't like the idea of the characters being canonically straight, there are other concerns that go beyond preference for a particular [relationship]. Some fans believe that 'shipping Mercy and Genji is inappropriate because Mercy is the Overwatch foundation's doctor. For them, the conflict of interest in a doctor/patient relationship is enough to make the ship feel inappropriate," adding that "some fans see the relationship as predatory on Mercy's part." The shipping of Mercy and Pharah, often dubbed "PharMercy", has been noted as one of the most prolific Overwatch fan pairings. Kotaku has noted its particularly intense popularity within femslash circles. Players of the game often pair up as Pharah and Mercy, as the two characters' gameplay abilities allow the latter to follow the former while in flight. Though unconfirmed as official references to the ship, Blizzard has included a rotating weekly game mode called "Death From Above" which limits players to choosing only Pharah or Mercy, as well as voice lines between the two which have been considered flirtatious by some players and writers. Notes References External links Official design reference guide by Blizzard Entertainment. Comics characters introduced in 2016 Female characters in animation Female characters in comics Female characters in video games Fiction about nanotechnology Fictional biologists Fictional combat medics Fictional female doctors Fictional female scientists Fictional scientists in video games Fictional superhuman healers Fictional Swiss people Fictional United Nations personnel Overwatch characters Video game characters introduced in 2016
Mercy (Overwatch)
Materials_science
4,052
12,205,594
https://en.wikipedia.org/wiki/Gruber%20Prize%20in%20Neuroscience
The Gruber Prize in Neuroscience, established in 2004, is one of three international awards worth US$500,000 made by the Gruber Foundation, a non-profit organization based at Yale University in New Haven, Connecticut. It is awarded annually to scientists for significant discoveries that have advanced the understanding of the nervous system. The prize comprises a gold medal engraved with the recipient's name and a citation detailing the accomplishment for which the recipient is being recognized. The Gruber Prize in Neuroscience winners are nominated by the Society for Neuroscience. Recipients 2004 Seymour Benzer 2005 Eric Knudsen and Masakazu Konishi 2006 Masao Ito and Roger Nicoll, cellular neurobiologists 2007 Shigetada Nakanishi, a molecular neurobiologist and Director of the Osaka Bioscience Institute 2008 John O'Keefe, PhD, Professor of Cognitive Neuroscience at University College London 2009 Jeffrey C. Hall, professor of neurogenetics at the University of Maine; Michael Rosbash, professor and director of the National Center for Behavioral Genomics at Brandeis University; and Michael Young, professor and head of the Laboratory of Genetics at Rockefeller University 2010 Robert H. Wurtz, NIH Distinguished Investigator at the National Eye Institute Laboratory of Sensorimotor Research 2011 Huda Zoghbi 2012 Lily Jan and Yuh Nung Jan, University of California, San Francisco 2013 Eve Marder 2014 Thomas Jessell 2015 Carla Shatz and Michael Greenberg 2016 Mu-ming Poo, Institute of Neuroscience, Chinese Academy of Sciences and UC Berkeley 2017 Joshua R. Sanes, Center for Brain Science, Harvard University 2018 Ann Graybiel (McGovern Institute for Brain Research/MIT), Okihide Hikosaka (National Eye Institute/NIH) and Wolfram Schultz (University of Cambridge) 2019 Joseph S. Takahashi 2020 Friedrich Bonhoeffer, Corey Goodman and Marc Tessier-Lavigne 2021 Christine Petit and Christopher A. Walsh 2022 Larry Abbott, Emery Neal Brown, Terrence Sejnowski and Haim Sompolinsky 2023 Huda Akil 2024 Cornelia Bargmann and Gerald M. Rubin See also The Brain Prize Golden Brain Award Kavli Prize in Neuroscience W. Alden Spencer Award Karl Spencer Lashley Award Mind & Brain Prize List of medicine awards List of neuroscience awards References External links Gruber Foundation Web site Gruber Prizes nomination page Facebook page for The Peter and Patricia Gruber Foundation Neuroscience awards Awards established in 2000
Gruber Prize in Neuroscience
Technology
500
64,648,770
https://en.wikipedia.org/wiki/List%20of%20chemical%20databases
This is a list of websites that contain lists of chemicals, or databases of chemical information. There is further detail on the content of these and other resources in a Wikibook of information sources. References Databases Chemistry
List of chemical databases
Chemistry
44
780,852
https://en.wikipedia.org/wiki/Two%20Dogmas%20of%20Empiricism
"Two Dogmas of Empiricism" is a paper by analytic philosopher Willard Van Orman Quine published in 1951. According to University of Sydney professor of philosophy Peter Godfrey-Smith, this "paper [is] sometimes regarded as the most important in all of twentieth-century philosophy". The paper is an attack on two central aspects of the logical positivists' philosophy: the first being the analytic–synthetic distinction between analytic truths and synthetic truths, explained by Quine as truths grounded only in meanings and independent of facts, and truths grounded in facts; the other being reductionism, the theory that each meaningful statement gets its meaning from some logical construction of terms that refer exclusively to immediate experience. "Two Dogmas" has six sections. The first four focus on analyticity, the last two on reductionism. There, Quine turns the focus to the logical positivists' theory of meaning. He also presents his own holistic theory of meaning. Analyticity and circularity Most of Quine's argument against analyticity in the first four sections is focused on showing that different explanations of analyticity are circular. The main purpose is to show that no satisfactory explanation of analyticity has been given. Quine begins by making a distinction between two different classes of analytic statements. The first one is called logically true and has the form: (1) No unmarried man is married. A sentence with that form is true independent of the interpretation of "man" and "married", so long as the logical particles "no", "un-" and "is" have their ordinary English meaning. The statements in the second class have the form: (2) No bachelor is married. A statement with this form can be turned into a statement with form (1) by exchanging synonyms with synonyms, in this case "bachelor" with "unmarried man". It is the second class of statements that lack characterization according to Quine. The notion of the second form of analyticity leans on the notion of synonymy, which Quine believes is in as much need of clarification as analyticity. Most of Quine's following arguments are focused on showing how explanations of synonymy end up being dependent on the notions of analyticity, necessity, or even synonymy itself. How do we reduce sentences from the second class to a sentence of the first class? Some might propose definitions. "No bachelor is married" can be turned into "No unmarried man is married" because "bachelor" is defined as "unmarried man". But, Quine asks: how do we find out that "bachelor" is defined as "unmarried man"? Clearly, a dictionary would not solve the problem, as a dictionary is a report of already known synonyms, and thus is dependent on the notion of synonymy, which Quine holds as unexplained. A second suggestion Quine considers is an explanation of synonymy in terms of interchangeability. Two linguistic forms are (according to this view) synonymous if they are interchangeable in all contexts without changing the truth-value. But consider the following example: (3) "Bachelor" has fewer than ten letters. Obviously "bachelor" and "unmarried man" are not interchangeable in that sentence. To exclude that example and some other obvious counterexamples, such as poetic quality, Quine introduces the notion of cognitive synonymy. But does interchangeability hold as an explanation of cognitive synonymy? Suppose we have a language without modal adverbs like "necessarily". 
Such a language would be extensional, in the way that two predicates which are true about the same objects are interchangeable again without altering the truth-value. Thus, there is no assurance that two terms that are interchangeable without the truth-value changing are interchangeable because of meaning, and not because of chance. For example, "creature with a heart" and "creature with kidneys" share extension. In a language with the modal adverb "necessarily" the problem is solved, as salva veritate holds in the following case: (4) Necessarily all and only bachelors are unmarried men while it does not hold for (5) Necessarily all and only creatures with a heart are creatures with kidneys. Presuming that 'creature with a heart' and 'creature with kidneys' have the same extension, they will be interchangeable salva veritate. But this interchangeability rests upon both empirical features of the language itself and the degree to which extension is empirically found to be identical for the two concepts, and not upon the sought for principle of cognitive synonymy. It seems that the only way to assert the synonymy is by supposing that the terms 'bachelor' and 'unmarried man' are synonymous and that the sentence "All and only all bachelors are unmarried men" is analytic. But for salva veritate to hold as a definition of something more than extensional agreement, i.e., cognitive synonymy, we need a notion of necessity and thus of analyticity. So, from the above example, it can be seen that in order for us to distinguish between analytic and synthetic we must appeal to synonymy; at the same time, we should also understand synonymy with interchangeability salva veritate. However, such a condition to understand synonymy is not enough so we not only argue that the terms should be interchangeable, but necessarily so. And to explain this logical necessity we must appeal to analyticity once again. Thus, the argument is circular, and fails. Ultimately, Quine reaches the conclusion about analyticity the paper is famous for: "It is obvious that truth in general depends on both language and extralinguistic fact... Hence the temptation to suppose in general that the truth of a statement is somehow analyzable into a linguistic component and a factual component. Given this supposition, it next seems reasonable that in some statements the factual component should be null; and these are the analytic statements. But, for all its a priori reasonableness, a boundary between analytic and synthetic statements simply has not been drawn. That there is such a distinction to be drawn at all is an unempirical dogma of empiricists, a metaphysical article of faith." Reductionism Analyticity would be acceptable if we allowed for the verification theory of meaning: an analytic statement would be one synonymous with a logical truth, which would be an extreme case of meaning where empirical verification is not needed, because it is "confirmed no matter what". "So, if the verification theory can be accepted as an adequate account of statement synonymy, the notion of analyticity is saved after all." The problem that naturally follows is how statements are to be verified. An empiricist would say that it can only be done using empirical evidence. So some form of reductionism - "the belief that each meaningful statement is equivalent to some logical construct upon terms which refer to immediate experience" - must be assumed in order for an empiricist to 'save' the notion of analyticity. 
Such reductionism, says Quine, presents just as intractable a problem as did analyticity. In order to prove that all meaningful statements can be translated into a sense-datum language, a reductionist would surely have to confront "the task of specifying a sense-datum language and showing how to translate the rest of significant discourse, statement by statement, into it." To illustrate the difficulty of doing so, Quine describes Rudolf Carnap's attempt in his book Der logische Aufbau der Welt. Quine first observes that Carnap's starting point was not the strictest possible, as his "sense-datum language" included not only sense-events but also "the notations of logic, up through higher set theory... Empiricists there are who would boggle at such prodigality." Nonetheless, says Quine, Carnap showed great ingenuity in defining sensory concepts "which, but for his constructions, one would not have dreamed were definable on so slender a basis." However, even such admirable efforts left Carnap, by his own admission, far short of completing the whole project. Finally, Quine objects in principle to Carnap's proposed translation of statements like "quality q is at point-instant x;y;z;t" into his sense-datum language, because he does not define the connective "is at". Without statements of this kind, it is difficult to see, even in principle, how Carnap's project could have been completed. The difficulty that Carnap encountered shows that reductionism is, at best, unproven and very difficult to prove. Until a reductionist can produce an acceptable proof, Quine maintains that reductionism is another "metaphysical article of faith". Quine's holism Instead of reductionism, Quine proposes that it is the whole field of science and not single statements that are verified. All scientific statements are interconnected. Logical laws give the relation between different statements, while they also are statements of the system. This makes talk about the empirical content of a single statement misleading. It also becomes impossible to draw a line between synthetic statements, which depend on experience, and analytic statements, that hold come what may. Any statement can be held as necessarily true according to Quine, if the right changes are made somewhere else in the system. In the same way, no statements are immune to revision. Even logical laws can be revised according to Quine. Quantum logic, introduced by Garrett Birkhoff and John von Neumann, abandons the law of distributivity from classical logic in order to reconcile some of the apparent inconsistencies of classical Boolean logic with the facts related to measurement and observation in quantum mechanics. He states also that a revision of the law of excluded middle has been proposed as a means of simplifying quantum mechanics. Quine makes the case that the empirical study of physics has furnished apparently credible grounds for replacing classical logic by quantum logic, rather as Newtonian physics gave way to Einsteinian physics. The idea that logical laws are not immune to revision in the light of empirical evidence has provoked an intense debate (see "Is Logic Empirical?"). According to Quine, there are two different results of his reasoning. The first is a blurring of the line between metaphysics and natural science. The common-sense theory about physical objects is epistemologically comparable to the gods of Homer. 
Quine is a physicalist, in the sense that he considers it a scientific error not to adopt a theory which makes reference to physical objects. However, like Gods of Homer, physical objects are posits, and there is no great epistemic difference in kind; the difference is rather that the theory of physical objects has turned out to be a more efficient theory. After having defined himself an "empiricist", Quine states in Two Dogmas: "The myth of physical objects is epistemologically superior to most in that it has proved more efficacious than other myths as a device for working a manageable structure into the flux of experience". The second result is a move towards pragmatism. Since, Quine says, the function of science is to predict future experiences in the light of past ones, the only ground for choosing which explanations to believe is "the degree to which they expedite our dealings with sense experiences." While pragmatic concerns are important for Carnap and other logical positivists when choosing a linguistic framework, their pragmatism "leaves off at the imagined boundary between the analytic and the synthetic". For Quine, every change in the system of science is, when rational, pragmatic. Reception Rudolf Carnap prepared a reply entitled "Quine on Analyticity", but this was not published until 1990. Addressing Quine's concern over the status of the sentence "Everything green is extended", Carnap wrote "the difficulty here lies in the unclarity of the word 'green', namely in an indecision over whether one should use the word for something unextended, i.e., for a single space-time point. In daily life it is never so used, and one scarcely ever speaks of space-time points." Carnap then puts forward that an exact artificial language ought to clarify the problem by defining 'green' (or its synonym) as something that is either necessarily or contingently not applied to space-time points. He wrote that once that decision is made, the difficulty is resolved. Carnap also answers Quine's argument on the use of sets of formal sentences to explain analyticity by arguing that this method is an explication of a poorly understood notion. Paul Grice and P. F. Strawson criticized Two Dogmas in their (1956) article In Defense of a Dogma. Among other things, they argue that Quine's skepticism about synonyms leads to a skepticism about meaning. If statements can have meanings, then it would make sense to ask "What does it mean?". If it makes sense to ask "What does it mean?", then synonymy can be defined as follows: Two sentences are synonymous if and only if the true answer of the question "What does it mean?" asked of one of them is the true answer to the same question asked of the other. They also draw the conclusion that discussion about correct or incorrect translations would be impossible given Quine's argument. Four years after Grice and Strawson published their paper, Quine's book Word and Object was released. In the book Quine presented his theory of indeterminacy of translation. In Two Dogmas' revisited, Hilary Putnam argues that Quine is attacking two different notions. Analytic truth defined as a true statement derivable from a tautology by putting synonyms for synonyms is near Kant's account of analytic truth as a truth whose negation is a contradiction. Analytic truth defined as a truth confirmed no matter what however, is closer to one of the traditional accounts of a priori. While the first four sections of Quine's paper concern analyticity, the last two concern apriority. 
Putnam considers the argument in the two last sections as independent of the first four, and at the same time as Putnam criticizes Quine, he also emphasizes his historical importance as the first top rank philosopher to both reject the notion of apriority and sketch a methodology without it. Jerrold Katz countered the arguments of Two Dogmas directly by trying to define analyticity non-circularly on the syntactical features of sentences. In his book Philosophical Analysis in the Twentieth Century, Volume 1 : The Dawn of Analysis Scott Soames (pp. 360–361) has pointed out that Quine's circularity argument needs two of the logical positivists' central theses to be effective: All necessary truths (and all a priori truths) are analytic Analyticity is needed to explain and legitimate necessity. It is only when these two theses are accepted that Quine's argument holds. It is not a problem that the notion of necessity is presupposed by the notion of analyticity if necessity can be explained without analyticity. According to Soames, both theses were accepted by most philosophers when Quine published Two Dogmas. Today however, Soames holds both statements to be antiquated. Editions Reprinted in his 1953 From a Logical Point of View. Harvard University Press. See also Duhem–Quine thesis Kantian empiricism Paradox of analysis Notes External links "Two Dogmas of Empiricism" 1951 essays Academic journal articles Analytic philosophy literature Empiricism Epistemology literature Holism Logical truth Philosophy papers Philosophy essays Philosophy of language literature Philosophy of science literature Willard Van Orman Quine Works originally published in American magazines Works originally published in philosophy magazines
Two Dogmas of Empiricism
Mathematics
3,299
9,947,160
https://en.wikipedia.org/wiki/Open%20Architecture%20Network
Open Architecture Network was the world's first online open source community dedicated to improving global living conditions through innovative and sustainable design. It was developed by Architecture for Humanity and incorporated Creative Commons licensing within the project management tools. History Open Architecture Network was formed after one of its founders, Cameron Sinclair, won the 2006 TED Prize from the Technology Entertainment Design conference. The prize awards each recipient 'one wish to change the world'. The Beta Version launched at TED2007 on March 8, 2007. Shortly after the launch, AMD announced its sponsorship of the 2007 Open Architecture Challenge, an open design competition to develop technology facilities in the developing world. Purpose The aim of the network was to allow architects, designers, innovators, and community leaders to: share innovative and sustainable ideas, designs and plans; view and review designs posted by others; collaborate with each other, people in other professions and community leaders to address specific design challenges; manage design projects from concept to implementation; and protect their intellectual property rights using the Creative Commons "some rights reserved" licensing system while being shielded from unwarranted liability. See also Open source architecture External links Archived version of OpenArchitectureNetwork.org Humanitarian Goals, Tech-Savvy Solutions Framing Open Source Architecture Web 2.0 Goes To Work Architecture organizations Organizations established in 2007 Open content projects Open-source hardware Development charities based in the United States
Open Architecture Network
Engineering
280
57,557,766
https://en.wikipedia.org/wiki/Zaltoprofen
Zaltoprofen (JAN; trade name Soleton) is a nonsteroidal anti-inflammatory drug (NSAID) used as an analgesic, antipyretic, and anti-inflammatory agent. It is a selective COX-2 inhibitor and also inhibits bradykinin-induced pain responses without blocking bradykinin receptors. It was approved for use in Japan in 1993. References COX-2 inhibitors Nonsteroidal anti-inflammatory drugs Carboxylic acids
Zaltoprofen
Chemistry
100
2,101,244
https://en.wikipedia.org/wiki/Nylon%206
Nylon 6 or polycaprolactam is a polymer, in particular a semicrystalline polyamide. Unlike most other nylons, nylon 6 is not a condensation polymer, but instead is formed by ring-opening polymerization; this makes it a special case in the comparison between condensation and addition polymers. Its competition with nylon 6,6 and the example it set have also shaped the economics of the synthetic fibre industry. It is sold under numerous trade names including Perlon (Germany), Dederon (former East Germany), Nylatron, Capron, Ultramid, Akulon, Kapron (former Soviet Union and satellite states), Rugopa (Turkey) and Durethan. History Polycaprolactam was developed by Paul Schlack at IG Farben in the late 1930s (first synthesized in 1938) to reproduce the properties of Nylon 66 without violating the patent on its production. (Around the same time, Kohei Hoshino at Toray also succeeded in synthesizing nylon 6.) It was marketed as Perlon, and industrial production with a capacity of 3,500 tons per year was established in Nazi Germany in 1943, using phenol as a feedstock. At first, the polymer was used to produce coarse fiber for artificial bristle, then the fiber quality was improved, and Germans started making parachutes, cord for aircraft tires and towing cables for gliders. The Soviet Union began its development of an analog in the 1940s while negotiating with Germany on building an IG Farben plant in Ukraine; basic scientific work was ongoing in 1942. Production only started in 1948 in Klin, Moscow Oblast, after the USSR obtained 2,000 volumes of IG Farben and 10,000 volumes of AEG technical documentation as a result of victory in World War II. Synthesis Nylon 6 can be modified using comonomers or stabilizers during polymerization to introduce new chain end or functional groups, which changes the reactivity and chemical properties. This is often done to change its dyeability or flame retardance. Nylon 6 is synthesized by ring-opening polymerization of caprolactam. Caprolactam has 6 carbons, hence Nylon 6. When caprolactam is heated at about 533 K in an inert atmosphere of nitrogen for about 4–5 hours, the ring breaks and undergoes polymerization. Then the molten mass is passed through spinnerets to form fibres of nylon 6. During polymerization, the amide bond within each caprolactam molecule is broken, with the active groups on each side re-forming two new bonds as the monomer becomes part of the polymer backbone. Unlike nylon 6,6, in which the direction of the amide bond reverses at each bond, all nylon 6 amide bonds lie in the same direction (note the N-to-C orientation of each amide bond). Properties Nylon 6 fibres are tough, possessing high tensile strength, elasticity and lustre. They are wrinkleproof and highly resistant to abrasion and chemicals such as acids and alkalis. The fibres can absorb up to 2.4% of water, although this lowers tensile strength. The glass transition temperature of Nylon 6 is 47 °C. As a synthetic fibre, Nylon 6 is generally white but can be dyed in a solution bath prior to production for different color results. Its tenacity is 6–8.5 gf/denier, with a density of 1.14 g/cm³. Its melting point is 215 °C, and it can withstand heat up to 150 °C on average. Biodegradation Flavobacterium sp. [85] and Pseudomonas sp. (NK87) degrade oligomers of Nylon 6, but not polymers. Certain white rot fungal strains can also degrade Nylon 6 through oxidation. Compared to aliphatic polyesters, Nylon 6 has been said to have poor biodegradability.
Strong interchain interactions from hydrogen bonds between nylon chains are said by some sources to be the cause. However, in 2023 a team of Northwestern University chemists led by Linda Broadbelt and Tobin J. Marks developed rare earth metallocene catalysts that rapidly break Nylon 6 down back to caprolactam at 220 °C, under conditions considered mild. Production in Europe At present, polyamide 6 is a significant construction material used in many industries, for instance in the automotive industry, aircraft industry, electronic and electrotechnical industry, clothing industry and medicine. Annual demand for polyamides in Europe amounts to a million tonnes. They are produced by all leading chemical companies. The largest producers of polyamide 6 in Europe are: Fibrant, 260,000 tonnes per year BASF, 240,000 tonnes per year Lanxess, 170,000 tonnes per year Radici, 125,000 tonnes per year DOMO, 100,000 tonnes per year Grupa Azoty, 100,000 tonnes per year References External links The Promise of Nylon 6: A Case Study in Intelligent Product Design by William McDonough & Michael Braungart Polyamides Plastics Synthetic fibers German inventions
Nylon 6
Physics,Chemistry
1,063
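The Nylon 6 entry above describes the ring-opening polymerization of caprolactam in words. As a hedged illustration of the overall reaction it describes (a sketch consistent with the entry, not a quotation from it):

```latex
% Overall ring-opening polymerization of caprolactam (C6H11NO) to Nylon 6.
% The repeat unit has the same composition as the monomer, and all amide
% bonds point in the same N-to-C direction, as noted in the entry.
n\,\mathrm{C_6H_{11}NO}
  \;\xrightarrow{\;\approx 533\,\mathrm{K},\ \mathrm{N_2},\ 4\text{--}5\,\mathrm{h}\;}\;
  \left[\,\mathrm{-NH-(CH_2)_5-CO-}\,\right]_n
```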
6,033,694
https://en.wikipedia.org/wiki/Brasilsat%20B1
Brasilsat B1 is a Brazilian communications satellite launched on August 10, 1994, by an Ariane rocket model 44LP at the Guiana Space Centre, which is located in Kourou, French Guiana. History It was constructed by the United States and Brazil and is classified as a second-generation satellite. It is larger and more powerful than the previous generation of satellites. The Boeing Company (which later expanded its presence into the aerospace field of satellite communications by purchasing Hughes Electronics Corporation, the builder of Brasilsat B1 and B2) contracted the acquisition of three satellites from Hughes. As part of the contract, Hughes would divide the work with Promon Engenharia SA of São Paulo. Brasilsat B1 and B2 were tested by the Institute of Space Research - INPE of São José dos Campos, while Brasilsat B3 and B4 were tested in the Hughes laboratories. The contract also included renovation of sensor equipment and telemetry, provided by the Guaratiba Center for Satellite Signaling, located in Rio de Janeiro, as well as automation and installation of security equipment in the Tanguá Control Station. Current status In March 2007, Brasilsat B1 was moved from its former orbital position at 70.0°W to 68.0°W and replaced by Brasilsat B4. On June 2, 2008, Brasilsat B4 was moved from its new position to 84.0°W and replaced at 70.0°W by Star One C2. Brasilsat B3 is currently at 75.0°W. Of the four Brasilsat satellites, only B3 is still transmitting signals as of July 2021. B4 was retired in June 2021. Main characteristics Original orbital position: 70.0° W Current orbital position: 68.0°W (Inactive) Coverage: Brazil Transponders: 28 C-band Uplink frequencies: 5850–6425 MHz Downlink frequencies: 3625–4200 MHz Launch date: August 10, 1994 Model: Hughes HS 376 W Launch location/vehicle: Arianespace / Ariane 44 LP Planned life of satellite: 12 years References External links Satellite Brasilsat 1 Communications satellites in geostationary orbit Spacecraft launched in 1994 Satellites using the HS-376 bus Star One satellites
Brasilsat B1
Astronomy
446
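The Brasilsat B1 entry above lists C-band uplink and downlink ranges. As a hedged consistency check, assuming the standard C-band transponder frequency translation of 2225 MHz (a conventional value that the entry itself does not state), the two bands line up as follows:

```latex
% Assumed standard C-band translation: downlink frequency = uplink frequency - 2225 MHz
5850\,\mathrm{MHz} - 2225\,\mathrm{MHz} = 3625\,\mathrm{MHz}
6425\,\mathrm{MHz} - 2225\,\mathrm{MHz} = 4200\,\mathrm{MHz}
```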
1,434,061
https://en.wikipedia.org/wiki/Affinity%20chromatography
Affinity chromatography is a method of separating a biomolecule from a mixture, based on a highly specific macromolecular binding interaction between the biomolecule and another substance. The specific type of binding interaction depends on the biomolecule of interest; antigen and antibody, enzyme and substrate, receptor and ligand, or protein and nucleic acid binding interactions are frequently exploited for isolation of various biomolecules. Affinity chromatography is useful for its high selectivity and resolution of separation, compared to other chromatographic methods. Principle Affinity chromatography has the advantage of specific binding interactions between the analyte of interest (normally dissolved in the mobile phase), and a binding partner or ligand (immobilized on the stationary phase). In a typical affinity chromatography experiment, the ligand is attached to a solid, insoluble matrix—usually a polymer such as agarose or polyacrylamide—chemically modified to introduce reactive functional groups with which the ligand can react, forming stable covalent bonds. The stationary phase is first loaded into a column to which the mobile phase is introduced. Molecules that bind to the ligand will remain associated with the stationary phase. A wash buffer is then applied to remove non-target biomolecules by disrupting their weaker interactions with the stationary phase, while the biomolecules of interest will remain bound. Target biomolecules may then be removed by applying a so-called elution buffer, which disrupts interactions between the bound target biomolecules and the ligand. The target molecule is thus recovered in the eluting solution. Affinity chromatography does not require the molecular weight, charge, hydrophobicity, or other physical properties of the analyte of interest to be known, although knowledge of its binding properties is useful in the design of a separation protocol. Types of binding interactions commonly exploited in affinity chromatography procedures are summarized in the table below. Batch and column setups Binding to the solid phase may be achieved by column chromatography whereby the solid medium is packed onto a column, the initial mixture run through the column to allow settling, a wash buffer run through the column and the elution buffer subsequently applied to the column and collected. These steps are usually done at ambient pressure. Alternatively, binding may be achieved using a batch treatment, for example, by adding the initial mixture to the solid phase in a vessel, mixing, separating the solid phase, removing the liquid phase, washing, re-centrifuging, adding the elution buffer, re-centrifuging and removing the eluate. Sometimes a hybrid method is employed such that the binding is done by the batch method, but the solid phase with the target molecule bound is packed onto a column and washing and elution are done on the column. The ligands used in affinity chromatography are obtained from both organic and inorganic sources. Examples of biological sources are serum proteins, lectins and antibodies. Inorganic sources are boronic acid, metal chelates and triazine dyes. A third method, expanded bed adsorption, which combines the advantages of the two methods mentioned above, has also been developed. The solid phase particles are placed in a column where the liquid phase is pumped in from the bottom and exits at the top. The gravity of the particles ensures that the solid phase does not exit the column with the liquid phase.
Affinity columns can be eluted by changing salt concentrations, pH, pI, charge and ionic strength directly or through a gradient to resolve the particles of interest. More recently, setups employing more than one column in series have been developed. The advantage compared to single column setups is that the resin material can be fully loaded since non-binding product is directly passed on to a consecutive column with fresh column material. These chromatographic processes are known as periodic counter-current chromatography (PCC). The resin costs per amount of produced product can thus be drastically reduced. Since one column can always be eluted and regenerated while the other column is loaded, as few as two columns are sufficient to make full use of the advantages. Additional columns can give additional flexibility for elution and regeneration times, at the cost of additional equipment and resin costs. Specific uses Affinity chromatography can be used in a number of applications, including nucleic acid purification, protein purification from cell-free extracts, and purification from blood. By using affinity chromatography, one can separate proteins that bind to a certain fragment from proteins that do not bind that specific fragment. Because this technique of purification relies on the biological properties of the protein needed, it is a useful technique, and proteins can be purified many-fold in a single step. Various affinity media Many different affinity media exist for a variety of possible uses. Briefly, they include (generalized) activated or functionalized supports that provide a functional spacer and support matrix and eliminate the handling of toxic reagents. Amino acid media are used with a variety of serum proteins, proteins, peptides, and enzymes, as well as rRNA and dsDNA. Avidin-biotin media are used in the purification of biotin/avidin and their derivatives. Carbohydrate-bonding media are most often used with glycoproteins or other carbohydrate-containing substances; carbohydrate media are used with lectins, glycoproteins, or other proteins that bind carbohydrate metabolites. Dye-ligand media are nonspecific but mimic biological substrates and proteins. Glutathione is useful for separation of GST-tagged recombinant proteins. Heparin is a generalized affinity ligand, and it is most useful for separation of plasma coagulation proteins, along with nucleic acid enzymes and lipases. Hydrophobic interaction media are most commonly used to target free carboxyl groups and proteins. Immunoaffinity media (detailed below) utilize the high specificity of antigens and antibodies to achieve separation; immobilized metal affinity chromatography (detailed further below) uses interactions between metal ions and proteins (usually specially tagged) to separate; and nucleotide/coenzyme media work to separate dehydrogenases, kinases, and transaminases. Nucleic acids function to trap mRNA, DNA, rRNA, and other nucleic acids/oligonucleotides. The Protein A/G method is used to purify immunoglobulins. Speciality media are designed for a specific class or type of protein/coenzyme; this type of media will only work to separate a specific protein or coenzyme. Immunoaffinity Another use for the procedure is the affinity purification of antibodies from blood serum. If the serum is known to contain antibodies against a specific antigen (for example, if the serum comes from an organism immunized against the antigen concerned), then it can be used for the affinity purification of that antigen. This is also known as immunoaffinity chromatography.
For example, if an organism is immunized against a GST-fusion protein, it will produce antibodies against the fusion protein, and possibly antibodies against the GST tag as well. The protein can then be covalently coupled to a solid support such as agarose and used as an affinity ligand in purifications of antibody from immune serum. For thoroughness, the GST protein and the GST-fusion protein can each be coupled separately. The serum is initially allowed to bind to the GST affinity matrix. This will remove antibodies against the GST part of the fusion protein. The serum is then separated from the solid support and allowed to bind to the GST-fusion protein matrix. This allows any antibodies that recognize the antigen to be captured on the solid support. Elution of the antibodies of interest is most often achieved using a low pH buffer such as glycine pH 2.8. The eluate is collected into a neutral tris or phosphate buffer, to neutralize the low pH elution buffer and halt any degradation of the antibody's activity. This example illustrates the versatility of the approach: affinity purification is used to purify the initial GST-fusion protein, to remove the undesirable anti-GST antibodies from the serum, and to purify the target antibody. Monoclonal antibodies can also be selected to bind proteins with great specificity, with the protein released under fairly gentle conditions. A simplified strategy is often employed to purify antibodies generated against peptide antigens. When the peptide antigens are produced synthetically, a terminal cysteine residue is added at either the N- or C-terminus of the peptide. This cysteine residue contains a sulfhydryl functional group which allows the peptide to be easily conjugated to a carrier protein (e.g. keyhole limpet hemocyanin (KLH)). The same cysteine-containing peptide is also immobilized onto an agarose resin through the cysteine residue and is then used to purify the antibody. Most monoclonal antibodies have been purified using affinity chromatography based on immunoglobulin-specific Protein A or Protein G, derived from bacteria. Immunoaffinity chromatography with monoclonal antibodies immobilized on a monolithic column has been successfully used to capture extracellular vesicles (e.g., exosomes and exomeres) from human blood plasma by targeting tetraspanins and integrins found on the surface of the EVs. Immunoaffinity chromatography is also the basis for immunochromatographic test (ICT) strips, which provide a rapid means of diagnosis in patient care. Using ICT, a technician can make a determination at a patient's bedside, without the need for a laboratory. ICT detection is highly specific to the microbe causing an infection. Immobilized metal ion affinity chromatography Immobilized metal ion affinity chromatography (IMAC) is based on the specific coordinate covalent bonds that amino acids, particularly histidine, form with metals. This technique works by allowing proteins with an affinity for metal ions to be retained in a column containing immobilized metal ions, such as cobalt, nickel, or copper for the purification of histidine-containing proteins or peptides, or iron, zinc, or gallium for the purification of phosphorylated proteins or peptides. Many naturally occurring proteins do not have an affinity for metal ions; therefore, recombinant DNA technology can be used to introduce such a protein tag into the relevant gene.
Methods used to elute the protein of interest include changing the pH or adding a competitive molecule, such as imidazole. Recombinant proteins Possibly the most common use of affinity chromatography is for the purification of recombinant proteins. Proteins with a known affinity are protein-tagged in order to aid their purification. The protein may have been genetically modified so as to allow it to be selected for affinity binding; this is known as a fusion protein. Protein tags include hexahistidine (His), glutathione-S-transferase (GST), maltose binding protein (MBP), and the Colicin E7 variant CL7 tag. Histidine tags have an affinity for nickel, cobalt, zinc, copper and iron ions which have been immobilized by forming coordinate covalent bonds with a chelator incorporated in the stationary phase. For elution, an excess amount of a compound able to act as a metal ion ligand, such as imidazole, is used. GST has an affinity for glutathione which is commercially available immobilized as glutathione agarose. During elution, excess glutathione is used to displace the tagged protein. CL7 has an affinity and specificity for Immunity Protein 7 (Im7) which is commercially available immobilized as Im7 agarose resin. For elution, an active and site-specific protease is applied to the Im7 resin to release the tag-free protein. Lectins Lectin affinity chromatography is a form of affinity chromatography where lectins are used to separate components within the sample. Lectins, such as concanavalin A, are proteins which can bind specific alpha-D-mannose and alpha-D-glucose carbohydrate molecules. Common media used in lectin affinity chromatography are Con A-Sepharose and WGA-agarose. Another example of a lectin is wheat germ agglutinin, which binds D-N-acetyl-glucosamine. The most common application is to separate glycoproteins from non-glycosylated proteins, or one glycoform from another glycoform. Although there are various ways to perform lectin affinity chromatography, the goal is to extract a sugar ligand of the desired protein. Specialty Another use for affinity chromatography is the purification of specific proteins using a gel matrix that is unique to a specific protein. For example, the purification of E. coli β-galactosidase is accomplished by affinity chromatography using p-aminobenzyl-1-thio-β-D-galactopyranosyl agarose as the affinity matrix. p-aminobenzyl-1-thio-β-D-galactopyranosyl agarose is used as the affinity matrix because it contains a galactopyranosyl group, which serves as a good substrate analog for E. coli β-galactosidase. This property allows the enzyme to bind to the stationary phase of the affinity matrix, and β-galactosidase is eluted by adding increasing concentrations of salt to the column. Alkaline phosphatase Alkaline phosphatase from E. coli can be purified using a DEAE-cellulose matrix. Alkaline phosphatase has a slight negative charge, allowing it to weakly bind to the positively charged amine groups in the matrix. The enzyme can then be eluted by adding buffer with higher salt concentrations. Boronate affinity chromatography Boronate affinity chromatography consists of using boronic acid or boronates to elute and quantify amounts of glycoproteins. Clinical adaptations have applied this type of chromatography to the long-term assessment of diabetic patients through analysis of their glycated hemoglobin.
Serum albumin purification Affinity purification of albumin and macroglobulin is helpful in removing excess albumin and α2-macroglobulin contamination when performing mass spectrometry. In affinity purification of serum albumin, the stationary phase used for collecting or attracting serum proteins can be Cibacron Blue-Sepharose. Then the serum proteins can be eluted from the adsorbent with a buffer containing thiocyanate (SCN−). Weak affinity chromatography Weak affinity chromatography (WAC) is an affinity chromatography technique for affinity screening in drug development. WAC is an affinity-based liquid chromatographic technique that separates chemical compounds based on their different weak affinities to an immobilized target. The higher affinity a compound has towards the target, the longer it remains in the separation unit, and this will be expressed as a longer retention time. Affinity can be measured and ranked by processing the retention times of the analyzed compounds. Affinity chromatography is part of a larger suite of techniques used in chemoproteomics-based drug target identification. The WAC technology has been demonstrated against a number of different protein targets – proteases, kinases, chaperones and protein–protein interaction (PPI) targets. WAC has been shown to be more effective than established methods for fragment-based screening. History Affinity chromatography was conceived and first developed by Pedro Cuatrecasas and Meir Wilchek. References External links "Affinity Chromatography Principle, Procedure And Advance Detailed Note – 2020". "What is affinity chromatography" Biochemical separation processes Chromatography
Affinity chromatography
Chemistry,Biology
3,416
38,026,362
https://en.wikipedia.org/wiki/Amanita%20flavella
Amanita flavella is a species of mycorrhizal fungus from the family Amanitaceae. It has a convex lemon-yellow coloured cap up to in diameter; the cap can also be yellowish-orange, and the crowded gills are pale yellow. The yellowish-white stipe is central and 9 cm tall; it is slightly bulbous, and enclosed in a volva. The yellowish-white ring is flared, ample, and membranous. The spores are 8.5–10 μm long and 6–6.5 μm wide, white, amyloid, and ellipsoid. The species is similar in appearance to A. flavoconia and A. flavipes. It can be found in New South Wales and Queensland, Australia. See also List of Amanita species References flavella Fungi described in 1941 Fungi of Australia Poisonous fungi Taxa named by John Burton Cleland Fungus species
Amanita flavella
Biology,Environmental_science
191
25,356,265
https://en.wikipedia.org/wiki/Solar%20eclipse%20of%20December%2016%2C%202047
A partial solar eclipse will occur at the Moon's ascending node of orbit between Monday, December 16 and Tuesday, December 17, 2047, with a magnitude of 0.8816. A solar eclipse occurs when the Moon passes between Earth and the Sun, thereby totally or partly obscuring the image of the Sun for a viewer on Earth. A partial solar eclipse occurs in the polar regions of the Earth when the center of the Moon's shadow misses the Earth. This will be the last of four partial solar eclipses in 2047, with the others occurring on January 26, June 23, and July 22. The partial solar eclipse will be visible for parts of Antarctica, southern Chile, and southern Argentina. Images Animated path Eclipse details Shown below are two tables displaying details about this particular solar eclipse. The first table outlines times at which the moon's penumbra or umbra attains the specific parameter, and the second table describes various other parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2047 A total lunar eclipse on January 12. A partial solar eclipse on January 26. A partial solar eclipse on June 23. A total lunar eclipse on July 7. A partial solar eclipse on July 22. A partial solar eclipse on December 16. Metonic Preceded by: Solar eclipse of February 28, 2044 Followed by: Solar eclipse of October 4, 2051 Tzolkinex Preceded by: Solar eclipse of November 4, 2040 Followed by: Solar eclipse of January 27, 2055 Half-Saros Preceded by: Lunar eclipse of December 11, 2038 Followed by: Lunar eclipse of December 22, 2056 Tritos Preceded by: Solar eclipse of January 16, 2037 Followed by: Solar eclipse of November 16, 2058 Solar Saros 123 Preceded by: Solar eclipse of December 5, 2029 Followed by: Solar eclipse of December 27, 2065 Inex Preceded by: Solar eclipse of January 6, 2019 Followed by: Solar eclipse of November 26, 2076 Triad Preceded by: Solar eclipse of February 15, 1961 Followed by: Solar eclipse of October 17, 2134 Solar eclipses of 2047–2050 Saros 123 Metonic series Tritos series Inex series References External links 2047 in science 2047 12 16 2047 12 16
Solar eclipse of December 16, 2047
Astronomy
556
25,378,720
https://en.wikipedia.org/wiki/Isay%20reaction
The Isay reaction, also known as the Gabriel–Isay condensation, is an organic reaction in which certain diaminopyrimidines are transformed into pterins by condensation with a 1,2-dicarbonyl compound, such as 2,3-butanedione. The reaction is named after Oskar Isay. See also List of organic reactions References Name reactions Nitrogen heterocycle forming reactions
Isay reaction
Chemistry
86
37,402,258
https://en.wikipedia.org/wiki/Thomas%20Appelquist
Thomas William Appelquist is a theoretical particle physicist who is the Eugene Higgins Professor of Physics at Yale University. He received his bachelor's degree from Illinois Benedictine College and his Ph.D. in 1968 from Cornell University under Donald R. Yennie with thesis Parametric Representations of Renormalized Feynman Amplitudes. In 1970, following a postdoctoral appointment at the Stanford Linear Accelerator Center, he joined the faculty at Harvard University. In 1975, he moved to Yale and was appointed professor of physics in 1976. From 1983 until 1989, he served as chair of Yale's department of physics. He served as director of the division of physical sciences and engineering from 1990 to 1993. In 1991, he was named Eugene Higgins Professor of Physics, and from 1993 to 1998 he served as dean of Yale's graduate school. He is a Fellow of the American Physical Society (elected in 1984), the recipient of a Senior U.S. Scientist award from the Alexander von Humboldt Foundation, and a Fellow of the American Academy of Arts and Sciences. In 1997, he was awarded the J.J. Sakurai Prize of the American Physical Society for his work on charmonium and the de-coupling of heavy particles. From 1993 to 1996, he served as President of the Aspen Center for Physics. He has served on many advisory committees for the National Science Foundation, the Department of Energy and the American Physical Society. From 1989 to 1993, he was a member of the Scientific Policy Committee of the Superconducting Supercollider (SSC) Laboratory. At Yale, during the 1992-1993 academic year, he served on the faculty/trustee Presidential Search Committee. From 1999-2001, he chaired the committee of the National Research Council that prepared an Overview of the field of physics as the culmination of the NRC survey “Physics in a New Era”. From 2001-2006, he served as Chair of the Board of the Aspen Center for Physics. He chaired the Science Council of the Jefferson National Laboratory in Newport News, Virginia from 2007 to 2017. His research has focused on the theory of elementary particles, including the strong interactions and electroweak unification. References Particle physicists 21st-century American physicists J. J. Sakurai Prize for Theoretical Particle Physics recipients Benedictine University alumni Cornell University alumni Yale University faculty Fellows of the American Academy of Arts and Sciences Fellows of the American Physical Society Living people 1941 births Aspen Center for Physics people
Thomas Appelquist
Physics
493
42,894,437
https://en.wikipedia.org/wiki/2014%20Khalilabad%20derailment
On 26 May 2014, an express train travelling from Gorakhpur town to Hisar collided with a stationary goods train at Chureb railway station in Khalilabad, north India, killing at least 40 people and injuring another 150. Prime Minister-elect Narendra Modi expressed his condolences on Twitter at the time. References Railway accidents in 2014 Derailments in India 2014 disasters in India Sant Kabir Nagar district May 2014 events in India Railway accidents and incidents in Uttar Pradesh History of Uttar Pradesh (1947–present)
2014 Khalilabad derailment
Technology
111
37,663,218
https://en.wikipedia.org/wiki/Ulrich%20P%C3%B6schl
Ulrich "Uli" Pöschl (9 October 1969) is an Austrian chemist who was appointed Director of the newly founded Department of Multiphase Chemistry at the Max Planck Institute for Chemistry in Mainz, Germany on 1 October 2012. Biography Ulrich Pöschl studied chemistry at the Graz University of Technology in Austria and obtained his PhD in 1995 with Karl Hassler at the Institute of Inorganic Chemistry with a thesis on "Synthesis, Spectroscopy and Structure of selectively functionalized cyclosilanes ". From 1996 to 1997 he worked as a postdoctoral fellow at the Massachusetts Institute of Technology, Cambridge, Massachusetts, in the group of Mario J. Molina in the field of atmospheric chemical kinetics and mass spectrometry of sulfuric acid. In 1997 Pöschl became a research assistant at the Max Planck Institute for Chemistry in the Department of Atmospheric Chemistry and a researcher in the group of Paul Crutzen on the photochemistry of ozone, organic trace gases and stratospheric clouds. From 1999 to 2005 he worked at the Institute for Hydrochemistry of the Munich Technical University, led an independent research group and became a chemistry professor with a thesis on "Carbonaceous Aerosol Composition, Reactivity and Water Interactions". In 2005 he returned to the Max Planck Institute for Chemistry in Mainz and headed a research group in the Department of Biogeochemistry until 2012. Since 2007 Pöschl has also been teaching in the Department of Chemistry, Pharmacy and Earth Sciences at Johannes Gutenberg University in Mainz and habilitated in 2007 in Geochemistry. Research The Earth System and climate research are the main focus of the department in the investigation of biological and organic aerosols, aerosol-cloud interactions and atmosphere-surface exchange processes. The food and health sciences area studies how air pollutants cause changes in protein macromolecules and how this affects allergic reactions and diseases. The sequence of multiphase processes at the molecular level and its impact on the macroscopic and global scale is also investigated. The challenge lies in bridging different spatial and temporal scales: from tenths of nanometers to thousands of kilometers and from nanoseconds to years. Pöschl is also the founder and chief executive editor of Atmospheric Chemistry and Physics (ACP), an open access peer-reviewed scientific journal published by the European Geosciences Union (EGU). Founded in 2001 as the world's first scientific journal with public peer review and discussion, it has become one of the major environmental and earth sciences journal. Pöschl is also a council member of the EGU (2003-2007), and he has been the Chair of the EGU Publication Committee (2009-2014). He is an initiator and co-chair of the international open access initiative OA2020. 
Awards 1991–1994: Student and research scholarships of the Technical University of Graz, the Pro Scientia Foundation, and the Austrian Science Foundation 1996: Graduation “Sub Auspiciis Praesidentis” by the Austrian Federal President (highest award in the Austrian educational system) 1996: Research Awards of the Austrian Federal Minister of Arts and Science, the Industrial Union of Carinthia and the Josef Krainer Foundation 1996: Schrödinger Scholarship of the Austrian Science Foundation 2000: Young Scientist Award of the German Federal Ministry of Education and Research 2005: EGU Union Service Award 2012: Pius XI Gold Medal of the Pontifical Academy of Sciences for his research on the role of chemistry in the atmosphere, climate and health. 2015: Copernicus-Medal of the Copernicus-Gesellschaft References External links http://www.mpic.de/en/research/multiphase-chemistry.html Ina Helms: Die neue Offenheit des Wissens, in: MaxPlanckForschung 3/2006 (über die Open-Access-Zeitschrift ACP), PDF Publication list of Ulrich Pöschl in Google Scholar 1969 births Living people Austrian geochemists Atmospheric chemists Graz University of Technology alumni Academic staff of Johannes Gutenberg University Mainz Max Planck Institute directors People from Klagenfurt
Ulrich Pöschl
Chemistry
844
9,732,609
https://en.wikipedia.org/wiki/Rawmill
A raw mill is the equipment used to grind raw materials into "rawmix" during the manufacture of cement. Rawmix is then fed to a cement kiln, which transforms it into clinker, which is then ground to make cement in the cement mill. The raw milling stage of the process effectively defines the chemistry (and therefore physical properties) of the finished cement, and has a large effect upon the efficiency of the whole manufacturing process. History The history of the development of the technology of raw material grinding defines the early history of cement technology. Other stages of cement manufacture used existing technology in the early days. Early hydraulic materials such as hydraulic limes, natural cements and Parker's Roman cement were all based on "natural" raw materials, burned "as-dug". Because these natural blends of minerals occur only rarely, manufacturers were interested in making a fine-grained artificial mixture of readily available minerals such as limestone and clay that could be used in the same way. A typical problem would be to make an intimate mixture of 75% chalk and 25% clay, and burn this to produce an ”artificial cement". The development of the "wet" method of producing fine-grained clay in the ceramics industry afforded a means of doing this. For this reason, the early cement industry used the "wet process", in which the raw materials are ground together with water, to produce a slurry, containing 20–50% water. Both Louis Vicat and James Frost used this technique in the early 19th century, and it remained the only way of making rawmix for Portland cement until 1890. A modification of the technique used by the early industry was "double-burning", in which a hard limestone would be burned and slaked before combining with clay slurry. This technique avoided the grinding of hard stone, and was employed by, among others, Joseph Aspdin. Early grinding technology was poor, and early slurries were made thin, with a high water content. The slurry was then allowed to stand in large reservoirs ("slurry-backs") for several weeks. Large, un-ground particles would drop to the bottom, and excess water rose to the top. The water was periodically decanted until a stiff cake, of the consistency of pottery clay, was left. This was sliced up, discarding the coarse material at the bottom, and burned in the kiln. Wet grinding is comparatively energy-efficient, and so when good dry-grinding equipment became available, the wet process continued in use throughout the 20th century, often employing equipment that Josiah Wedgwood would have recognized. Materials ground Rawmixes are formulated to contain a correctly balanced chemistry for the production of calcium silicates (alite and belite) and fluxes (aluminate and ferrite) in the kiln. Chemical analysis data in cement manufacture are expressed in terms of oxides, and the most important of these in rawmix design are SiO2, Al2O3, Fe2O3 and CaO. In principle, any material that can contribute any of these oxides can be used as a rawmix component. Because the major oxide required is CaO, the most prevalent rawmix component is limestone, while the others are mostly contributed by clay or shale. Minor adjustments to the chemistry are made by smaller additions of materials such as those shown below. Typical rawmix component chemical analyses: Note: LoI950 is the Loss on ignition at 950 °C, and represents (approximately) the components lost during kiln processing. It consists mainly of CO2 from carbonates, H2O from clay hydrates, and organic carbon. 
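In principle, the chemical analysis of a blended rawmix is simply the mass-weighted average of the analyses of its components, taken in the chosen mix proportions. The short Python sketch below illustrates this calculation; the component analyses, the mix proportions and the function name are placeholder values chosen for demonstration only, not figures from any particular plant.

```python
# Minimal sketch: the analysis of a blended rawmix as the mass-weighted
# average of its components' oxide analyses. All numbers are illustrative
# placeholders, not measured data.

components = {
    "limestone": {"CaO": 52.0, "SiO2": 3.0,  "Al2O3": 0.8,  "Fe2O3": 0.4,  "LoI": 42.0},
    "clay":      {"CaO": 1.0,  "SiO2": 58.0, "Al2O3": 18.0, "Fe2O3": 7.0,  "LoI": 9.0},
    "sand":      {"CaO": 0.2,  "SiO2": 96.0, "Al2O3": 1.5,  "Fe2O3": 0.5,  "LoI": 0.5},
    "millscale": {"CaO": 0.0,  "SiO2": 2.0,  "Al2O3": 1.0,  "Fe2O3": 93.0, "LoI": 0.0},
}

mix = {"limestone": 88.0, "clay": 8.9, "sand": 2.2, "millscale": 0.9}  # % by mass

def blended_analysis(components, mix):
    """Mass-weighted average of the component analyses in the given proportions."""
    oxides = next(iter(components.values()))
    return {ox: round(sum(mix[c] / 100.0 * components[c][ox] for c in mix), 2)
            for ox in oxides}

print(blended_analysis(components, mix))
```

In practice the same weighted-average calculation is run in reverse: the mix proportions are adjusted until the blended analysis meets the chemical targets for the clinker.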
Using these materials, typical rawmixes could be composed: Mix 1: General-purpose cement: 88.0% gray limestone, 8.9% clay, 2.2% sand and 0.9% millscale. Mix 2: Sulfate-resisting cement: 87.6% gray limestone, 5.2% clay, 5.0% sand and 2.2% millscale. Mix 3: White cement: 82.3% white limestone, 6.8% kaolin and 10.9% sand. The chemical analyses of these rawmixes would be: The raw materials and mixes shown are only "typical": considerable variations are possible depending on the raw materials available. Control of minor elements Apart from the major oxides (CaO, SiO2, Al2O3 and Fe2O3) the minor oxides are, at best, diluents of the clinker, and may be deleterious. However, cement raw materials are for the most part dug from the Earth's crust and contain most of the elements in the periodic table in some amount. The manufacturer therefore selects materials so that the deleterious effects of minor elements are minimized or kept under control. Minor elements that are frequently encountered are as follows: Fluorine is beneficial to the kiln process in that it allows alite to form at lower temperature. However, at levels above 0.25% in the clinker, delayed and erratic cement setting time results. Alkali metals (primarily sodium and potassium) cause processing problems because they form volatile salts in the kiln system. These evaporate in the kiln burning zone and re-condense in the cooler regions of the preheater, causing blockages. Alkalis are also deleterious to concrete, potentially causing alkali silica reaction. For this reason, many standards limit alkalis (typically expressed as "total equivalent soda" which is Na2O + 0.658 K2O). Typical specification limits are in the range 0.5–0.8%. MgO causes problems at levels over 2.5%. Small amounts are accommodated in solid solution in the clinker minerals, but above 2.5%, "free" MgO exists in the clinker as periclase. This can slowly hydrate to Mg(OH)2 with expansion in the hardened concrete, causing cracking. Careful processing of the clinker to keep the periclase in a microcrystalline form allows levels up to 5% to be managed without serious effect. All standards limit MgO, typical limits being in the range 4-6%. P2O5 at levels above 0.5% starts to cause slow setting and low clinker reactivity. Chlorine produces very volatile salts and consequent preheater blockages, and is usually limited to below 0.1% in rawmix. TiO2 is ubiquitous, but is rarely present at levels (~1%) that might cause problems. Chromium can end up as chromates (Cr[VI]) in cement, particularly when the clinker is high in sulfate. Chromates cause allergic contact dermatitis in cement users, and for this reason cement Cr[VI] content is limited in many standards to 0.0002%. Typical natural rawmixes contain around 0.01% Cr2O3, and at this level, Cr[VI] formation can be controlled. Chromium present in the cement as Cr[III] is of no consequence. Mn2O3 is not deleterious, acting as a substitute for iron. But it contributes more color to the cement than iron, and high-Mn2O3 cements (>1%) are almost black. ZnO is encountered in some rawmix additives (as well as tires used as kiln fuel). At levels above 0.2%, it causes slow setting and low clinker reactivity. Strontium and barium act as calcium replacements, and only start to reduce clinker reactivity at levels of 1.5% and 0.2% respectively. Toxic heavy metals: among these, low levels of arsenic, selenium, cadmium, antimony and tungsten are not a problem, because they are absorbed in the basic clinker structure as anions. 
On the other hand, mercury, thallium and lead must be carefully controlled because they can be emitted as volatile halides in the kiln exhaust. Wet rawmills Wet grinding is more efficient than dry grinding because water coats the newly formed surfaces of broken particles and prevents re-agglomeration. The process of blending and homogenizing the rawmix is also much easier when it is in slurry form. The disadvantage is that the water in the resultant slurry has to be removed subsequently, and this usually requires a lot of energy. While energy was cheap, wet grinding was common, but since 1970 the situation has changed dramatically, and new wet process plant is now rarely installed. Wet grinding is performed by two distinct means: washmills and ballmills. Washmill This represents the earliest rawmilling technology, and was used to grind soft materials such as chalk and clay. It is rather similar to a food processor. It consists of a large bowl (up to 15 m in diameter) into which the crushed (to less than 250 mm) raw materials are tipped along with a stream of water. The material is stirred by rotating sets of harrows. The outside walls of the bowl consist of gratings or perforated plates through which fine product can pass. Grinding is largely autogenous (i.e. it takes place by collision between lumps of raw material), and is very efficient, producing little waste heat, provided that the materials are soft. Typically two or three washmills are connected in series, these being provided with successively smaller outlet perforations. The entire system can produce slurry with the expenditure of as little as 5 kW·h of electricity per dry tonne. Relatively hard minerals (such as flint) in the mix, are more or less untouched by the grinding process, and settle out in the base of the mill, from where they are periodically dug out. Ballmills and washdrums The ballmill allows grinding of the harder limestones that are more common than chalk. A ballmill consists of a horizontal cylinder that rotates on its axis. It holds spherical, cylindrical or rod-like grinding media of size 15–100 mm that may be steel or a variety of ceramic materials, and occupy 20–30% of the mill volume. The shell of the mill is lined with steel or rubber plates. Grinding is effected by impact and attrition between the grinding media. The various mineral components of the rawmix are fed to the mill at a constant rate along with water, and the slurry runs from the outlet end. The washdrum has a similar concept, but contains little or no grinding media, grinding being autogenous, by the cascading action of the larger raw material pieces. It is suitable for soft materials, and particularly for flinty chalk, where the unground flint acts as grinding media. Slurry fineness and moisture content It is essential that large particles (> 150 μm for calcium carbonate and > 45 μm for quartz) should be eliminated from the rawmix, to facilitate chemical combination in the kiln. In the case of slurries, larger particles can be removed by hydrocyclones or sieving devices. These require a certain amount of energy, supplied by high pressure pumping. This process, and the moving and blending of the slurry, require careful control of the slurry viscosity. Clearly, a thinner slurry is easily obtained by adding more water, but at the expense of high energy consumption for its subsequent removal. In practice, the slurry is therefore made as thick as the plant equipment can handle. 
Cement rawmix slurries are Bingham plastics which can also exhibit thixotropic or rheopectic behaviour. The energy needed to pump slurry at a desired rate is controlled mainly by the slurry's yield stress, and this in turn varies more or less exponentially with the slurry solids/liquid ratio. In practice, deflocculants are often added in order to maintain pumpability at low moisture contents. Common deflocculants used (at typical dose rates of 0.005–0.03%) are sodium carbonate, sodium silicate, sodium polyphosphates and lignosulfonates. Under favourable circumstances, pumpable slurries with less than 25% water can be obtained. Rawmixes frequently contain minerals of contrasting hardness, such as calcite and quartz. Simultaneous grinding of these in a rawmill is inefficient, because the grinding energy is preferentially used in grinding the softer material. This results in a large amount of excessively fine soft material, which "cushions" the grinding of the harder mineral. For this reason, sand is sometimes ground separately, then fed to the main rawmill as a fine slurry. Dry rawmills Dry rawmills are the normal technology installed today, allowing minimization of energy consumption and CO2 emissions. In general, cement raw materials are mainly quarried, and so contain a certain amount of natural moisture. Attempting to grind a wet material is unsuccessful because an intractable "mud" forms. On the other hand, it is much easier to dry a fine material than a coarse one, because large particles hold moisture deep in their structure. It is therefore usual to simultaneously dry and grind the materials in the rawmill. A hot-air furnace may be used to supply this heat, but usually hot waste gases from the kiln are used. For this reason, the rawmill is usually placed close to the kiln preheater. Types of dry rawmill include ball mills, roller mills and hammer mills. Ball mills These are similar to cement mills, but often with a larger gas flow. The gas temperature is controlled by cold-air bleeds to ensure a dry product without overheating the mill. The product passes into an air separator, which returns oversized particles to the mill inlet. Occasionally, the mill is preceded by a hot-air-swept hammer mill which does most of the drying and produces millimetre-sized feed for the mill. Ball mills are rather inefficient, and typically require 10–20 kW·h of electric power to make a tonne of rawmix. The Aerofall mill is sometimes used for pre-grinding large wet feeds. It is a short, large-diameter semi-autogenous mill, typically containing 15% by volume of very large (130 mm) grinding balls. Feed can be up to 250 mm, and the larger chunks produce much of the grinding action. The mill is air-swept, and the fines are carried away in the gas stream. Crushing and drying are efficient, but the product is coarse (around 100 μm), and is usually re-ground in a separate ball mill. Roller mills These are the standard form in modern installations, occasionally called vertical spindle mills. In a typical arrangement, the material is fed onto a rotating table, onto which steel rollers press down. A high velocity of hot gas flow is maintained close to the dish so that fine particles are swept away as soon as they are produced. The gas flow carries the fines into an integral air separator, which returns larger particles to the grinding path. The fine material is swept out in the exhaust gas and is captured by a cyclone before being pumped to storage. 
The remaining dusty gas is usually returned to the main kiln dust control equipment for cleaning. Feed size can be up to 100 mm. Roller mills are efficient, using about half the energy of a ball mill, and there seems to be no limit to the size available. Roller mills with output in excess of 800 tonnes per hour have been installed. Unlike ball mills, feed to the mill must be regular and uninterrupted; otherwise damaging resonant vibration sets in. Hammer mills Hammer mills (or "crusher driers") swept with hot kiln exhaust gases have limited application where a soft, wet raw material is being ground. The simple design means that it can be operated at a higher temperature than other mills, giving it high drying capacity. However, the grinding action is poor, and the product is often re-ground in a ball mill. Notes and references Cement Chemical equipment
Rawmill
Chemistry,Engineering
3,358
26,600,666
https://en.wikipedia.org/wiki/Systematic%20Biology
Systematic Biology is a peer-reviewed scientific journal published by Oxford University Press on behalf of the Society of Systematic Biologists. It covers the theory, principles, and methods of systematics as well as phylogeny, evolution, morphology, biogeography, paleontology, genetics, and the classification of all living things. The journal was established in 1952 as Systematic Zoology and obtained its current title in 1992. References External links Submission website Society of Systematic Biologists Systematics journals Bimonthly journals English-language journals Oxford University Press academic journals Academic journals established in 1952
Systematic Biology
Biology
116
52,187,839
https://en.wikipedia.org/wiki/NGC%20341
NGC 341 is a spiral galaxy in the constellation Cetus. It was discovered on October 21, 1881 by Édouard Stephan. It was described by Dreyer as "faint, pretty large, round, a little brighter middle, mottled but not resolved." It has a companion galaxy, PGC 3627, which is sometimes called NGC 341B. For this reason, it has been included in Halton Arp's Atlas of Peculiar Galaxies. References External links Cetus Intermediate spiral galaxies Markarian galaxies Discoveries by Édouard Stephan
NGC 341
Astronomy
126
3,069,483
https://en.wikipedia.org/wiki/Dynamic%20recrystallization
Dynamic recrystallization (DRX) is a type of recrystallization process, found within the fields of metallurgy and geology. In dynamic recrystallization, as opposed to static recrystallization, the nucleation and growth of new grains occurs during deformation rather than afterwards as part of a separate heat treatment. The reduction of grain size increases the risk of grain boundary sliding at elevated temperatures, while also decreasing dislocation mobility within the material. The new grains are less strained, causing a decrease in the hardening of a material. Dynamic recrystallization allows for new grain sizes and orientation, which can prevent crack propagation. Rather than strain causing the material to fracture, strain can initiate the growth of a new grain, consuming atoms from neighboring pre-existing grains. After dynamic recrystallization, the ductility of the material increases. In a stress–strain curve, the onset of dynamic recrystallization can be recognized by a distinct peak in the flow stress in hot working data, due to the softening effect of recrystallization. However, not all materials display well-defined peaks when tested under hot working conditions. The onset of DRX can also be detected from the inflection point in plots of the strain hardening rate against stress. It has been shown that this technique can be used to establish the occurrence of DRX when this cannot be determined unambiguously from the shape of the flow curve. If stress oscillations appear before reaching the steady state, then several recrystallization and grain growth cycles occur and the stress behavior is said to be of the cyclic or multiple peak type. The particular stress behavior before reaching the steady state depends on the initial grain size, temperature, and strain rate. DRX can occur in various forms, including: Geometric dynamic recrystallization Discontinuous dynamic recrystallization Continuous dynamic recrystallization Dynamic recrystallization is dependent on the rate of dislocation creation and movement. It is also dependent on the recovery rate (the rate at which dislocations annihilate). The interplay between work hardening and dynamic recovery determines grain structure. It also determines the susceptibility of grains to various types of dynamic recrystallization. Regardless of the mechanism, for dynamic recrystallization to occur, the material must have experienced a critical deformation. The final grain size decreases with increasing stress. To achieve very fine-grained structures the stresses have to be high. Some authors have used the term 'postdynamic' or 'metadynamic' to describe recrystallization that occurs during the cooling phase of a hot-working process or between successive passes. This emphasises the fact that the recrystallization is directly linked to the process in question, while acknowledging that there is no concurrent deformation. Geometric Dynamic Recrystallization (GDRX) Geometric dynamic recrystallization occurs in grains with local serrations. Upon deformation, grains undergoing GDRX elongate until the thickness of the grain falls below a threshold (below which the serration boundaries intersect and small grains pinch off into equiaxed grains). The serrations may predate stresses being exerted on the material, or may result from the material's deformation.
Geometric Dynamic Recrystallization has 6 main characteristics: It generally occurs with deformation at elevated temperatures, in materials with high stacking fault energy Stress increases and then declines to a steady state Subgrain formation requires a critical deformation Subgrain misorientation peaks at 2˚ There is little texture change Pinning of grain boundaries causes an increase in the required strain While GDRX is primarily affected by the initial grain size and strain (geometry-dependent), other factors that occur during the hot working process complicate the development of predictive modeling (which tend to oversimplify the process) and can lead to incomplete recrystallization.  The equiaxed grain formation does not occur immediately and uniformly along the entire grain once the threshold stress is reached, as individual regions are subjected to different strains/stresses. In practice, a generally sinusoidal edge (as predicted by Martorano et al.) gradually forms as the grains begin to pinch off as they each reach the threshold.  More sophisticated models consider complex initial grain geometries, local pressures along grain boundaries, and hot working temperature, but the models are unable to make accurate predictions throughout the entire stress regime and the evolution of the overall microstructure. Additionally, grain boundaries may migrate during GDRX at high temperatures and GB curvatures, dragging along subgrain boundaries and resulting in unwanted growth of the original grain. This new, larger grain will require far more deformation for GDRX to occur, and the local area will be weaker rather than strengthened.  Lastly, recrystallization can be accelerated as grains are shifted and stretched, causing subgrain boundaries to become grain boundaries (angle increases). The affected grains are thinner and longer, and thus more easily undergo deformation. Discontinuous Dynamic Recrystallization Discontinuous recrystallization is heterogeneous; there are distinct nucleation and growth stages. It is common in materials with low stacking-fault energy. Nucleation then occurs, generating new strain-free grains which absorb the pre-existing strained grains. It occurs more easily at grain boundaries, decreasing the grain size and thereby increasing the amount of nucleation sites. This further increases the rate of discontinuous dynamic recrystallization. Discontinuous Dynamic Recrystallization has 5 main characteristics: Recrystallization does not occur until the threshold strain has been reached The stress-strain curve may have several peaks – there is not a universal equation Nucleation generally occurs along pre-existing grain boundaries Recrystallization rates increase as the initial grain size decreases There is a steady grain size which is approached as recrystallization proceeds Discontinuous dynamic recrystallization is caused by the interplay of work hardening and recovery. If the annihilation of dislocations is slow relative to the rate at which they are generated, dislocations accumulate. Once critical dislocation density is achieved, nucleation occurs on grain boundaries.  Grain boundary migration, or the atoms transfer from a large pre-existing grain to a smaller nucleus, allows the growth of the new nuclei at the expense of the pre-existing grains. The nucleation can occur through the bulging of existing grain boundaries. A bulge forms if the subgrains abutting a grain boundary are of different sizes, causing a disparity in energy from the two subgrains. 
If the bulge achieves a critical radius, it will successfully transition to a stable nucleus and continue its growth. This can be modeled using Cahn's theories pertaining to nucleation and growth. Discontinuous dynamic recrystallization commonly produces a 'necklace' microstructure. Since new grain growth is energetically favorable along grain boundaries, new grain formation and bulging preferentially occur along pre-existing grain boundaries. This generates layers of new, very fine grains along the grain boundary, initially leaving the interior of the pre-existing grain unaffected. As the dynamic recrystallization continues, it consumes the unrecrystallized region. As deformation continues, the recrystallization does not maintain coherency between layers of new nuclei, producing a random texture. Continuous Dynamic Recrystallization Continuous dynamic recrystallization is common in materials with high stacking-fault energies. It occurs when low angle grain boundaries form and evolve into high angle boundaries, forming new grains in the process. For continuous dynamic recrystallization there is no clear distinction between nucleation and growth phases of the new grains. Continuous Dynamic Recrystallization has 4 main characteristics: As strain increases, stress increases As strain increases, subgrain boundary misorientation increases As low angle grain boundaries evolve into high angle grain boundaries, the misorientation increases homogeneously As deformation increases, crystallite size decreases There are three main mechanisms of continuous dynamic recrystallization: First, continuous dynamic recrystallization can occur when low angle grain boundaries are assembled from dislocations formed within the grain. When the material is subjected to continued stress, the misorientation angle increases until the critical angle is achieved, creating a high angle grain boundary. This evolution can be promoted by the pinning of subgrain boundaries. Second, continuous dynamic recrystallization can occur through subgrain rotation recrystallization; subgrains rotate, increasing the misorientation angle. Once the misorientation angle exceeds the critical angle, the former subgrains qualify as independent grains. Third, continuous dynamic recrystallization can occur due to deformation caused by microshear bands. Subgrains are assembled by dislocations within the grain formed during work hardening. If microshear bands are formed within the grain, the stress they introduce rapidly increases the misorientation of low angle grain boundaries, transforming them into high angle grain boundaries. However, the impact of microshear bands is localized, so this mechanism preferentially impacts regions which deform heterogeneously, such as microshear bands or areas near pre-existing grain boundaries. As recrystallization proceeds, it spreads out from these zones, generating a homogeneous, equiaxed microstructure. Mathematical Formulas Based on the method developed by Poliak and Jonas, a few models have been developed to describe the critical strain for the onset of DRX as a function of the peak strain of the stress–strain curve. The models are derived for systems with a single peak, i.e. for materials with medium to low stacking fault energy values.
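Before turning to the specific model formulations, a minimal numerical illustration of the inflection-point idea mentioned above may be helpful. The Python sketch below estimates the critical stress from a measured flow curve by computing the strain hardening rate θ = dσ/dε, fitting a cubic polynomial to θ as a function of stress over the pre-peak range, and taking the inflection point of that fit. This is only an illustrative sketch under stated assumptions: it presumes reasonably smooth, single-peak true stress–strain data supplied as NumPy arrays, and the cubic-fit simplification and the function name are choices made here for demonstration; it is not the specific sine or hyperbolic tangent formulation of the models cited below.

```python
import numpy as np

def drx_critical_stress(strain, stress):
    """Estimate the critical stress for the onset of DRX from a flow curve.

    The strain hardening rate theta = d(sigma)/d(epsilon) is evaluated
    numerically up to the peak stress, a cubic polynomial is fitted to
    theta as a function of sigma, and the critical stress is taken at the
    inflection point of that fit (where its second derivative vanishes).
    Assumes the input arrays are smoothed and contain enough pre-peak points.
    """
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    theta = np.gradient(stress, strain)        # strain hardening rate
    peak = int(np.argmax(stress))              # use only the pre-peak part of the curve
    a3, a2, a1, a0 = np.polyfit(stress[:peak], theta[:peak], 3)
    return -a2 / (3.0 * a3)                    # inflection of a cubic: sigma_c = -a2 / (3 * a3)

# Usage on experimental data (arrays of true strain and true stress):
# sigma_c = drx_critical_stress(strain_data, stress_data)
```

The critical strain then follows by reading off the strain at which the flow curve reaches this critical stress.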
The models can be found in the following papers: Determination of flow stress and the critical strain for the onset of dynamic recrystallization using a sine function Determination of flow stress and the critical strain for the onset of dynamic recrystallization using a hyperbolic tangent function Determination of critical strain for initiation of dynamic recrystallization Characteristic points of stress–strain curve at high temperature The DRX behavior for systems with multiple peaks (and single peak as well) can be modeled considering the interaction of multiple grains during deformation. That is, the ensemble model describes the transition between single- and multi-peak behavior based on the initial grain size. It can also describe the effect of transient changes of the strain rate on the shape of the flow curve. The model can be found in the following paper: A new unified approach for modeling recrystallization during hot rolling of steel Literature A one-parameter approach to determining the critical conditions for the initiation of dynamic recrystallization, onset of DRX Flow Curve Analysis of 17–4 PH Stainless Steel under Hot Compression Test, comprehensive study of DRX Constitutive relations to model the hot flow of commercial purity copper, chapter 6, doctoral thesis by V.G. García, UPC (2004) A review of dynamic recrystallization phenomena in metallic materials, Latest review paper on DRX A Cellular Automaton Model of Dynamic Recrystallization: Introduction & Source Code, Software simulating DRX by CA: Introduction, Video of software run References Metallurgy Geology
Dynamic recrystallization
Chemistry,Materials_science,Engineering
2,315
24,378,069
https://en.wikipedia.org/wiki/C13H10N2
The molecular formula C13H10N2 may refer to: Aminoacridines 2-Aminoacridine 3-Aminoacridine 4-Aminoacridine 9-Aminoacridine Diazodiphenylmethane
C13H10N2
Chemistry
64
32,687,542
https://en.wikipedia.org/wiki/Friedrich%20Beck
Friedrich Hans Beck (16 February 1927 – 20 December 2008) was a German physicist. His research interests were focused on superconductivity, nuclear and elementary particle physics, relativistic quantum field theory, and, late in his life, biophysics and the theory of consciousness. Early life and education Beck was born in Wiesbaden, Germany. He was the son of the businessman Fritz Beck and his wife Margaret Cron. Beck attended the Grammar School in Darmstadt and after that studied physics at the University of Göttingen and Darmstadt University of Technology. As a student of Max von Laue, he performed research on superconductivity. In the spring of 1950 Beck started work on his PhD thesis entitled "The electrodynamic potential in the extended phenomenological theory of superconductivity", which he defended in 1952 at the University of Göttingen, obtaining the degree of Doctor rerum naturalium. Academic career From 1952 to 1954, Beck worked as an assistant at the Fritz Haber Institute in Berlin. This was followed by a research stay in the U.S. from 1954 to 1956 as a research associate at the Massachusetts Institute of Technology, after which he went back to the University of Munich, where in 1958 he wrote a Habilitation thesis on nuclear reactions resulting from electromagnetic interactions. From 1958 to 1960, he worked as a lecturer both at the University of Munich and Heidelberg University. In 1960, Beck was appointed an Associate Professor of Theoretical Physics at Goethe University Frankfurt. In 1963, he became a Professor of Theoretical Physics at Darmstadt University of Technology, where in the same year he took over the management of the Institute for Theoretical Nuclear Physics. Beck held visiting professorship positions several times. From 1974 to 1975, he taught at the Lawrence Berkeley National Laboratory, in 1976 at the Universidade Federal Rural do Rio de Janeiro, in 1979 at the University of Maryland, College Park, in 1983 at the Weizmann Institute of Science in Rehovot, in 1987 at the University of Washington in Seattle, in 1988 at the Ben-Gurion University of the Negev in Beersheba, and in 1991 at the University of the Witwatersrand in Johannesburg. After Beck's retirement in 1995, he was succeeded at Darmstadt University of Technology by Professor Jochen Wambach. Collaboration with John C. Eccles In 1991, Friedrich Beck met Sir John Carew Eccles, a 1963 Nobel Laureate in Physiology or Medicine, during a summer school in Northern Italy organized by a German foundation for the promotion of outstanding students. In collaboration, they developed a quantum mechanical model of exocytosis and neurotransmitter release at synapses in the human cerebral cortex. The model endorses interactionist dualism and postulates that human consciousness could affect the functioning of synapses in the brain through quantum tunneling of electrons between the lipid bilayers of the synaptic vesicle and the presynaptic membrane. The tunneling of electrons triggers the process of exocytosis and thus initiates the transmission of information from the presynaptic neuron towards the postsynaptic neuron. The model proposed by Beck and Eccles is based on pure quantum tunneling and predicts temperature independence of exocytosis, which has been experimentally tested and found to be incorrect. Nevertheless, recent research has shown that the original Beck-Eccles model could be updated so that the zipping of SNARE proteins in exocytosis is triggered by vibrationally assisted tunneling.
References 1927 births 2008 deaths 20th-century German physicists Scientists from Wiesbaden Biophysics German theoretical physicists Academic staff of Technische Universität Darmstadt
Friedrich Beck
Physics,Biology
740
13,138,516
https://en.wikipedia.org/wiki/General%20Motors%20Research%20Laboratories
General Motors Research Laboratories is the part of General Motors responsible for the creation of the first known operating system (GM-NAA I/O) in 1955; it also contributed to the first mechanical heart, the Dodrill-GMR, which was successfully used during open heart surgery. See also Multiple Console Time Sharing System References External links General Motors Research Laboratories site. The domain is one of the first .com domains. General Motors subsidiaries
General Motors Research Laboratories
Technology
84
25,296,802
https://en.wikipedia.org/wiki/Symmetrical%20dimethylhydrazine
Symmetrical dimethylhydrazine (SDMH), or 1,2-dimethylhydrazine, is the organic compound with the formula (CH3NH)2. It is one of the two isomers of dimethylhydrazine. Both isomers are colorless liquids at room temperature, with properties similar to those of methylamines. Symmetrical dimethylhydrazine is a potent carcinogen that acts as a DNA methylating agent. The compound has no commercial value, in contrast to its isomer unsymmetrical dimethylhydrazine (1,1-dimethylhydrazine, UDMH), which is used as a rocket fuel. Symmetrical dimethylhydrazine is more toxic than unsymmetrical dimethylhydrazine and is therefore an unwanted impurity in UDMH. It is used to induce colon tumors in experimental animals—particularly mice and feline cell samples. References Methylating agents Hydrazines IARC Group 2A carcinogens
Symmetrical dimethylhydrazine
Chemistry
427
15,226,281
https://en.wikipedia.org/wiki/HIST1H4C
Histone H4 is a protein that in humans is encoded by the HIST1H4C gene.

Function
Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Two molecules of each of the four core histones (H2A, H2B, H3, and H4) form an octamer, around which approximately 146 bp of DNA is wrapped in repeating units, called nucleosomes. The linker histone, H1, interacts with linker DNA between nucleosomes and functions in the compaction of chromatin into higher order structures. This gene is intronless and encodes a member of the histone H4 family. Transcripts from this gene lack polyA tails but instead contain a palindromic termination element. This gene is found in the large histone gene cluster on chromosome 6.

References

Further reading

External links

Human proteins
HIST1H4C
Chemistry
200
30,240,076
https://en.wikipedia.org/wiki/Cortical%20pseudolaminar%20necrosis
Cortical pseudolaminar necrosis, also known as cortical laminar necrosis and simply laminar necrosis, is the death of cells in the cerebral cortex of the brain in a band-like pattern, with relative preservation of the cells immediately adjacent to the meninges. It is seen in the context of cerebral hypoxic-ischemic insults, e.g. status epilepticus or stroke.

Histologically, grey matter is more vulnerable than white matter to necrosis due to lack of oxygen. The third layer of the grey matter is the most vulnerable, and damage is greater in the sulci than in the gyri of the brain.

On CT scans, it appears as hyperdensity on the surface of the cortex. Cortical enhancement is seen after two weeks, reaches maximum intensity at one to two months, and resolves after six months. On MRI scans, early changes show low T1 intensity due to ischemia; high T1 intensity then develops due to accumulation of neuronal damage, reactive tissue changes, and deposition of fat-laden macrophages.

See also
Cardiovascular disease
Reactive astrocyte
Status epilepticus
Stroke

References

External links
Hypoxic-ischemic encephalopathy - principles (neuropathology-web.org)
Laminar necrosis on MRI (rochester.edu)

Pathology
Cortical pseudolaminar necrosis
Biology
291
22,146,365
https://en.wikipedia.org/wiki/Liquid%20chalk
The term liquid chalk, or sharkchalk, refers to several different kinds of liquefied chalk, including liquid-chalk marking pens (with water-soluble ink), liquid-chalk mixtures (for athletic use: rock climbing, weightlifting, gymnastics), and liquid-chalk hobby-craft paints made of cornstarch and food coloring (some with small amounts of flour). Some forms of "liquid chalk" contain no actual chalk.

Use in sports
Liquid chalk can be a variation of normal chalk (see: magnesium carbonate) used to improve grip for sports such as rock climbing, weight lifting, or gymnastics.

Rock climbing
Rock climbers use liquid chalk to prevent their hands from sweating. It may be used by climbers in situations where powdered chalk is restricted. It is preferred by some athletes because it remains effective longer and leaves less residue on rocks and equipment. Liquid chalk for rock climbers is made from magnesium carbonate.

Other sports
In other sports, liquid chalk is less beneficial to the athlete, because re-chalking can be done more easily between sets or rounds. However, some gyms require liquid chalk because it leaves less residue on gym equipment. Liquid chalk also adheres to the hand better, reducing the need to re-chalk.

Ingredients
Some liquid-chalk mixtures for climbing are made with magnesium carbonate, colophony, and an alcohol (such as ethanol or isopropyl alcohol) that dissolves the colophony and quickly evaporates from the solution. Sometimes an additive for aroma is included to mask the unpleasant smell of the spirit.

See also
Glossary of climbing terms
Climbing
Bouldering
Magnesium carbonate

References

Climbing equipment
Alchemical substances
Liquid chalk
Chemistry
338
15,666,340
https://en.wikipedia.org/wiki/Consolidated%20Safety-Valve%20Co.%20v.%20Crosby%20Steam%20Gauge%20%26%20Valve%20Co.
Consolidated Safety-Valve Co. v. Crosby Steam Gauge & Valve Co., 113 U.S. 157 (1885), was a patent case to determine validity of patent No. 58,294, granted to George W. Richardson September 25, 1866, for an improvement in steam safety valves. Technical background Richardson was the first person who made a safety valve which, while it automatically relieved the pressure of steam in the boiler, did not, in effecting that result, reduce the pressure to such an extent as to make the use of the relieving apparatus practically impossible because of the expenditure of time and fuel necessary to bring up the steam again to the proper working standard. His valve was the first which had the strictured orifice to retard the escape of the steam and enable the valve to open with increasing power against the spring and close suddenly, with small loss of pressure in the boiler. Ruling The direction given in the patent that the flange or lip is to be separated from the valve seat by about one sixty-fourth of an inch for an ordinary spring, with less space for a strong spring and more space for a weak spring, to regulate the escape of steam as required, is a sufficient description as matter of law, and it is not shown to be insufficient as a matter of fact. Letters patent No. 85,963, granted to said Richardson January 19, 1869, for an improvement in safety valves for steam boilers or generators, are valid. The patents of Richardson were infringed by a valve which produces the same effects in operation by the means described in Richardson's claims, although the valve proper is an annulus and the extended surface is a disc inside of the annulus, the Richardson valve proper being a disc and the extended surface an annulus surrounding the disc, and although the valve proper has two ground joints, and only the steam which passes through one of them goes through the stricture, while, in the Richardson valve, all the steam which passes into the air goes through the stricture, and although the huddling chamber is at the center instead of the circumference, and is in the seat of the valve, under the head, instead of in the head, and the stricture is at the circumference of the seat of the valve instead of being at the circumference of the head. The fact that the prior patented valves were not used and the speedy and extensive adoption of Richardson's valve support the conclusion as to the novelty of the latter. Suits in equity having been begun in 1879 for the infringement of the two patents, and the circuit court having dismissed the bills, this Court in reversing the decrees after the first patent had expired but not the second, awarded accounts of profits and damages as to both patents, and a perpetual injunction as to the second patent. See also List of United States Supreme Court cases, volume 113 References External links United States Supreme Court cases United States Supreme Court cases of the Waite Court Steam power 1885 in United States case law United States patent case law Safety valves
Consolidated Safety-Valve Co. v. Crosby Steam Gauge & Valve Co.
Physics,Engineering
623
2,054,964
https://en.wikipedia.org/wiki/Coding%20by%20exception
Coding by exception is an accidental complexity in a software system in which the program handles specific errors that arise with unique exceptions. When an issue arises in a software system, an error is raised that traces the issue back to where it was caught and, where applicable, to where the problem originated. Exceptions can be used to handle the error while the program is running and to avoid crashing the system. Exceptions should be generalized and cover the many kinds of errors that can arise. Using exceptions to handle specific errors so that the program can continue running is called coding by exception (illustrated in the sketch below). This anti-pattern can quickly degrade software in performance and maintainability. Executing code even after the exception is raised resembles the goto statement found in many software languages, which is also considered poor practice.

See also
Accidental complexity
Creeping featurism
Test-driven development

Anti-patterns
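A minimal sketch of the anti-pattern, written in Python with purely hypothetical function and exception names (none of them from any real library), contrasted with a more general handler:

```python
# Illustrative only: parse_record_*, MissingCommaError, and load_records
# are hypothetical names invented for this sketch.

class MissingCommaError(Exception):
    """Raised for one very specific kind of malformed input."""

def parse_record_coding_by_exception(line):
    # Anti-pattern: a unique exception type for each specific problem
    # that has been observed, patched in one at a time.
    try:
        name, value = line.split(",")
    except ValueError:
        raise MissingCommaError(line)
    return name.strip(), int(value)

def load_records(lines):
    records = []
    for line in lines:
        try:
            records.append(parse_record_coding_by_exception(line))
        except MissingCommaError:
            # Execution simply continues -- the control flow behaves like a
            # hidden goto, and every new kind of bad input needs yet another
            # bespoke handler like this one.
            continue
    return records

def load_records_generalized(lines):
    # Preferred: one general policy for malformed input, applied uniformly.
    records = []
    for line in lines:
        try:
            name, value = line.split(",")
            records.append((name.strip(), int(value)))
        except ValueError:
            # A single, general handler covers the whole class of errors.
            continue
    return records

if __name__ == "__main__":
    sample = ["a, 1", "broken line", "b, 2"]
    print(load_records(sample))
    print(load_records_generalized(sample))
```

The first version accumulates one bespoke exception type and handler per observed failure; the second applies a single, general policy to the whole class of malformed input.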
Coding by exception
Technology
163
9,797,288
https://en.wikipedia.org/wiki/Thrackle
A thrackle is an embedding of a graph in the plane in which each edge is a Jordan arc and every pair of edges meets exactly once. Edges may either meet at a common endpoint or, if they have no endpoints in common, at a point in their interiors. In the latter case, they must cross at their intersection point: the intersection must be transverse.

A special case of thrackles, the linear thrackles, restricts the edges to be drawn as straight line segments. One method for constructing a linear thrackle with any given set of points as vertices is to form an edge between each farthest pair of points. For a linear thrackle, each connected component contains at most one cycle, from which it follows that the number of edges is at most equal to the number of vertices. John H. Conway conjectured more generally that every thrackle has at most as many edges as vertices. It is known that the number of edges is at most a constant times the number of vertices.

Linear thrackles
A linear thrackle is a thrackle drawn in such a way that its edges are straight line segments. As Paul Erdős observed, every linear thrackle has at most as many edges as vertices. If a vertex v is connected to three or more edges vw, vx, and vy, at least one of those edges (say vw) lies on a line that separates two other edges. Then, w must have degree one, because no line segment ending at w, other than vw, can touch both vx and vy. Removing w and vw produces a smaller thrackle, without changing the difference between the numbers of edges and vertices. Repeating such removals eventually leads to a thrackle in which every vertex has at most two neighbors, and by the handshaking lemma such a thrackle has at most as many edges as vertices. Based on Erdős' proof, one can infer that every linear thrackle is a pseudoforest. Every cycle of odd length may be arranged to form a linear thrackle, but this is not possible for an even-length cycle, because if one edge of the cycle is chosen arbitrarily, the other cycle vertices must lie alternately on opposite sides of the line through this edge.

Micha Perles provided another simple proof that linear thrackles have at most n edges, based on the fact that in a linear thrackle every edge has an endpoint at which the edges span an angle of at most 180°, and for which it is the most clockwise edge within this span. For, if not, there would be two edges, incident to opposite endpoints of the edge and lying on opposite sides of the line through the edge, which could not cross each other. But each vertex can have this property with respect to only a single edge, so the number of edges is at most equal to the number of vertices.

As Erdős also observed, the set of pairs of points realizing the diameter of a point set must form a linear thrackle: no two diameters can be disjoint from each other, because if they were, some pair among their four endpoints would be farther apart than the length of the two disjoint edges. For this reason, every set of n points in the plane can have at most n diametral pairs, answering a question posed in 1934 by Heinz Hopf and Erika Pannwitz. Andrew Vázsonyi conjectured bounds on the number of diameter pairs in higher dimensions, generalizing this problem. In computational geometry, the method of rotating calipers can be used to form a linear thrackle from any set of points in convex position, by connecting pairs of points that support parallel lines tangent to the convex hull of the points. This graph contains as a subgraph the thrackle of diameter pairs.
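The diameter-pair construction described above can be illustrated with a short brute-force sketch in Python; the function name and the quadratic-time approach are illustrative only (the rotating-calipers method mentioned above is the standard efficient alternative for points in convex position):

```python
from itertools import combinations

def diameter_pairs(points, eps=1e-9):
    """Return all pairs of points at maximum mutual distance.

    By the Hopf-Pannwitz argument described above, the segments joining
    these pairs pairwise intersect, so they form a linear thrackle with
    at most as many edges as vertices.
    """
    def dist2(p, q):
        # Squared Euclidean distance; avoids unnecessary square roots.
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    best = max(dist2(p, q) for p, q in combinations(points, 2))
    return [(p, q) for p, q in combinations(points, 2)
            if abs(dist2(p, q) - best) <= eps]

if __name__ == "__main__":
    # An equilateral triangle: all three pairs are diametral, already
    # meeting the bound of at most n diameter pairs for n points.
    triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
    print(diameter_pairs(triangle))
```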
The diameters of the Reinhardt polygons form linear thrackles. An enumeration of linear thrackles may be used to solve the biggest little polygon problem, of finding an n-gon with maximum area relative to its diameter.

Thrackle conjecture
John H. Conway conjectured that, in any thrackle, the number of edges is at most equal to the number of vertices. Conway himself used the terminology paths and spots (for edges and vertices respectively), so Conway's thrackle conjecture was originally stated in the form every thrackle has at least as many spots as paths. Conway offered a $1000 prize for proving or disproving this conjecture, as part of a set of prize problems also including Conway's 99-graph problem, the minimum spacing of Danzer sets, and the winner of Sylver coinage after the move 16.

Equivalently, the thrackle conjecture may be stated as every thrackle is a pseudoforest. More specifically, if the thrackle conjecture is true, the thrackles may be exactly characterized by a result of Woodall: they are the pseudoforests in which there is no cycle of length four and at most one odd cycle.

It has been proved that every cycle graph other than C4 has a thrackle embedding, which shows that the conjecture is sharp. That is, there are thrackles having the same number of spots as paths. At the other extreme, the worst-case scenario is that the number of spots is twice the number of paths; this is also attainable.

The thrackle conjecture is known to be true for thrackles drawn in such a way that every edge is an x-monotone curve, crossed at most once by every vertical line.

Known bounds
It has been proved that every bipartite thrackle is a planar graph, although not drawn in a planar way, and as a consequence that every thrackleable graph with n vertices has at most 2n − 3 edges. Since then, this bound has been improved several times. First, it was improved to 3(n − 1)/2, and another improvement led to a bound of roughly 1.428n. Moreover, the method used to prove the latter result yields for any ε > 0 a finite algorithm that either improves the bound to (1 + ε)n or disproves the conjecture. The current record is a bound of 1.393n. If the conjecture is false, a minimal counterexample would have the form of two even cycles sharing a vertex. Therefore, to prove the conjecture, it would suffice to prove that graphs of this type cannot be drawn as thrackles.

References

External links
thrackle.org, a website about the problem

Conjectures
Topological graph theory
Geometric intersection
Thrackle
Mathematics
1,396
864,240
https://en.wikipedia.org/wiki/Programming%20Ruby
Programming Ruby is a book about the Ruby programming language by Dave Thomas and Andrew Hunt, authors of The Pragmatic Programmer. In the Ruby community, it is commonly known as "The PickAxe" because of the pickaxe on the cover. The book has helped Ruby to spread outside Japan. The complete first edition of this book is freely available under the Open Publication License v1.0, and was published by Addison-Wesley in 2001. The second edition, covering the features of Ruby 1.8, was published by The Pragmatic Programmers, LLC in 2004. The third edition, covering Ruby 1.9, was published in 2010, with the fourth edition, covering Ruby 1.9 and 2.0 being published in 2013. A fifth edition, updated for Ruby 3.3, was written by Noel Rappin, and published by Pragmatic Programmers, LLC in 2023. References External links First edition (for online reading) at RUBY-DOC.ORG 4th edition at The Pragmatic Programmers Programming Ruby (First Edition) 2004 non-fiction books Ruby (programming language) Books about free software Computer programming books Open Publication License-licensed works
Programming Ruby
Technology
237
25,232,831
https://en.wikipedia.org/wiki/North%20American%20Dryopteris%20hybrid%20complex
Hybridization and polyploidy are common phenomena in ferns, and the genus Dryopteris is known to be one of the most freely hybridizing fern genera. North American botanists recognized early that there were close relationships between many of the species of Dryopteris on the continent, and that these relationships reflected hybrid ancestry. The complex includes six sexual diploid parents (one of which, "D. semicristata", is hypothesized to be extinct), six sexual allopolyploids, and numerous sterile hybrids at various ploidal levels.

Diploid species
Dryopteris intermedia
Dryopteris expansa
Dryopteris goldieana
Dryopteris ludoviciana
Dryopteris marginalis
Dryopteris "semicristata"

Allopolyploid species
Dryopteris carthusiana (D. intermedia × "D. semicristata"; allotetraploid)
Dryopteris campyloptera (D. intermedia × D. expansa; allotetraploid)
Dryopteris celsa (D. goldieana × D. ludoviciana; allotetraploid)
Dryopteris clintoniana (D. cristata × D. goldieana; allohexaploid)
Dryopteris cristata (D. ludoviciana × "D. semicristata"; allotetraploid)
Dryopteris filix-mas (progenitors D. caucasica and D. oreades)

Other hybrids
Dryopteris × australis (D. celsa × D. ludoviciana; triploid)
Dryopteris × bootii (D. cristata × D. intermedia; triploid)
Dryopteris × critica (D. borreri × D. filix-mas)
Dryopteris × complexa aggregate (D. filix-mas and D. affinis; tetraploid)
Dryopteris × convoluta (D. cambrensis × D. filix-mas)
Dryopteris × deweveri (D. dilatata × D. carthusiana)
Dryopteris × neo-wherryi (D. goldieana × D. marginalis; diploid)
Dryopteris × triploidea (D. carthusiana × D. intermedia; triploid)

References

hybrid
Hybrid plants
North American Dryopteris hybrid complex
Biology
533
841,792
https://en.wikipedia.org/wiki/Kinetic%20art
Kinetic art is art from any medium that contains movement perceivable by the viewer or that depends on motion for its effects. Canvas paintings that extend the viewer's perspective of the artwork and incorporate multidimensional movement are the earliest examples of kinetic art. More specifically, the term today most often refers to three-dimensional sculptures and figures such as mobiles that move naturally or are machine operated (for example, the works of George Rickey and Uli Aschenborn). The moving parts are generally powered by wind, a motor or the observer. Kinetic art encompasses a wide variety of overlapping techniques and styles.

There is also a portion of kinetic art that includes virtual movement, or rather movement perceived from only certain angles or sections of the work. This term also clashes frequently with the term "apparent movement", which many people use when referring to an artwork whose movement is created by motors, machines, or electrically powered systems. Both apparent and virtual movement are styles of kinetic art that only recently have been argued to be styles of op art. The amount of overlap between kinetic and op art is not significant enough for artists and art historians to consider merging the two styles under one umbrella term, but there are distinctions that have yet to be made.

"Kinetic art" as a moniker developed from a number of sources. Kinetic art has its origins in the late-19th-century Impressionist artists such as Claude Monet, Edgar Degas, and Édouard Manet, who originally experimented with accentuating the movement of human figures on canvas. This triumvirate of Impressionist painters all sought to create art that was more lifelike than that of their contemporaries. Degas' dancer and racehorse portraits are examples of what he believed to be "photographic realism". During the late 19th century, artists such as Degas felt the need to challenge the movement toward photography with vivid, cadenced landscapes and portraits.

By the early 1900s, certain artists grew closer and closer to ascribing their art to dynamic motion. Naum Gabo, one of the two artists credited with naming this style, wrote frequently about his work as examples of "kinetic rhythm". He felt that his moving sculpture Kinetic Construction (also dubbed Standing Wave, 1919–20) was the first of its kind in the 20th century. From the 1920s until the 1960s, the style of kinetic art was reshaped by a number of other artists who experimented with mobiles and new forms of sculpture.

Origins and early development
The strides made by artists to "lift the figures and scenery off the page and prove undeniably that art is not rigid" (Calder, 1954) took significant innovations and changes in compositional style. Édouard Manet, Edgar Degas, and Claude Monet were the three 19th-century artists who initiated those changes in the Impressionist movement. Even though they each took unique approaches to incorporating movement in their works, they did so with the intention of being realists. In the same period, Auguste Rodin was an artist whose early works spoke in support of the developing kinetic movement in art. However, Rodin's later criticisms of the movement indirectly challenged the abilities of Manet, Degas, and Monet, claiming that it is impossible to exactly capture a moment in time and give it the vitality that is seen in real life.

Édouard Manet
It is almost impossible to ascribe Manet's work to any one era or style of art.
One of his works that is truly on the brink of a new style is Le Ballet Espagnol (1862). The figures' contours coincide with their gestures as a way to suggest depth in relation to one another and in relation to the setting. Manet also accentuates the lack of equilibrium in this work to project to the viewer that he or she is on the edge of a moment that is seconds away from passing. The blurred, hazy sense of color and shadow in this work similarly place the viewer in a fleeting moment. In 1863, Manet extended his study of movement on flat canvas with Le déjeuner sur l'herbe. The light, color, and composition are the same, but he adds a new structure to the background figures. The woman bending in the background is not completely scaled as if she were far away from the figures in the foreground. The lack of spacing is Manet's method of creating snapshot, near-invasive movement similar to his blurring of the foreground objects in Le Ballet Espagnol. Edgar Degas Edgar Degas is believed to be the intellectual extension of Manet, but more radical for the impressionist community. Degas' subjects are the epitome of the impressionist era; he finds great inspiration in images of ballet dancers and horse races. His "modern subjects" never obscured his objective of creating moving art. In his 1860 piece Jeunes Spartiates s'exerçant à la lutte, he capitalizes on the classic impressionist nudes but expands on the overall concept. He places them in a flat landscape and gives them dramatic gestures, and for him this pointed to a new theme of "youth in movement". One of his most revolutionary works, L’Orchestre de l’Opéra (1868) interprets forms of definite movement and gives them multidimensional movement beyond the flatness of the canvas. He positions the orchestra directly in the viewer's space, while the dancers completely fill the background. Degas is alluding to the Impressionist style of combining movement, but almost redefines it in a way that was seldom seen in the late 1800s. In the 1870s, Degas continues this trend through his love of one-shot motion horse races in such works as Voiture aux Courses (1872). It wasn't until 1884 with Chevaux de Course that his attempt at creating dynamic art came to fruition. This work is part of a series of horse races and polo matches wherein the figures are well integrated into the landscape. The horses and their owners are depicted as if caught in a moment of intense deliberation, and then trotting away casually in other frames. The impressionist and overall artistic community were very impressed with this series, but were also shocked when they realized he based this series on actual photographs. Degas was not fazed by the criticisms of his integration of photography, and it actually inspired Monet to rely on similar technology. Claude Monet Degas and Monet's style was very similar in one way: both of them based their artistic interpretation on a direct "retinal impression" to create the feeling of variation and movement in their art. The subjects or images that were the foundation of their paintings came from an objective view of the world. As with Degas, many art historians consider that to be the subconscious effect photography had in that period of time. His 1860s works reflected many of the signs of movement that are visible in Degas' and Manet's work. By 1875, Monet's touch becomes very swift in his new series, beginning with Le Bâteau-Atelier sur la Seine. 
The landscape almost engulfs the whole canvas and has enough motion emanating from its inexact brushstrokes that the figures are a part of the motion. This painting along with Gare Saint-Lazare (1877–1878), proves to many art historians that Monet was redefining the style of the Impressionist era. Impressionism initially was defined by isolating color, light, and movement. In the late 1870s, Monet had pioneered a style that combined all three, while maintaining a focus on the popular subjects of the Impressionist era. Artists were often so struck by Monet's wispy brushstrokes that it was more than movement in his paintings, but a striking vibration. Auguste Rodin Auguste Rodin at first was very impressed by Monet's 'vibrating works' and Degas' unique understanding of spatial relationships. As an artist and an author of art reviews, Rodin published multiple works supporting this style. He claimed that Monet and Degas' work created the illusion "that art captures life through good modeling and movement". In 1881, when Rodin first sculpted and produced his own works of art, he rejected his earlier notions. Sculpting put Rodin into a predicament that he felt no philosopher nor anyone could ever solve; how can artists impart movement and dramatic motions from works so solid as sculptures? After this conundrum occurred to him, he published new articles that didn't attack men such as Manet, Monet, and Degas intentionally, but propagated his own theories that Impressionism is not about communicating movement but presenting it in static form. 20th century surrealism and early kinetic art The surrealist style of the 20th century created an easy transition into the style of kinetic art. All artists now explored subject matter that would not have been socially acceptable to depict artistically. Artists went beyond solely painting landscapes or historical events, and felt the need to delve into the mundane and the extreme to interpret new styles. With the support of artists such as Albert Gleizes, other avant-garde artists such as Jackson Pollock and Max Bill felt as if they had found new inspiration to discover oddities that became the focus of kinetic art. Albert Gleizes Gleizes was considered the ideal philosopher of the late 19th century and early 20th century arts in Europe, and more specifically France. His theories and treatises from 1912 on cubism gave him a renowned reputation in any artistic discussion. This reputation is what allowed him to act with considerable influence when supporting the plastic style or the rhythmic movement of art in the 1910s and 1920s. Gleizes published a theory on movement, which further articulated his theories on the psychological, artistic uses of movement in conjunction with the mentality that arises when considering movement. Gleizes asserted repeatedly in his publications that human creation implies the total renunciation of external sensation. That to him is what made art mobile when to many, including Rodin, it was rigidly and unflinchingly immobile. Gleizes first stressed the necessity for rhythm in art. To him, rhythm meant the visually pleasant coinciding of figures in a two-dimensional or three-dimensional space. Figures should be spaced mathematically, or systematically so that they appeared to interact with one another. Figures should also not have features that are too definite. They need to have shapes and compositions that are almost unclear, and from there the viewer can believe that the figures themselves are moving in that confined space. 
He wanted paintings, sculptures, and even the flat works of mid-19th-century artists to show how figures could convey to the viewer that there was great movement contained in a certain space. As a philosopher, Gleizes also studied the concept of artistic movement and how it appealed to the viewer. Gleizes updated his studies and publications through the 1930s, just as kinetic art was becoming popular.

Jackson Pollock
When Jackson Pollock created many of his famous works, the United States was already at the forefront of the kinetic and popular art movements. The novel styles and methods he used to create his most famous pieces earned him a place in the 1950s as the unchallenged leader of kinetic painters; his work was associated with Action painting, a term coined by art critic Harold Rosenberg in the 1950s. Pollock had an unfettered desire to animate every aspect of his paintings. Pollock repeatedly said to himself, "I am in every painting". He used tools that most painters would never use, such as sticks, trowels, and knives. He thought of the shapes he created as being "beautiful, erratic objects". This style evolved into his drip technique. Pollock repeatedly took buckets of paint and paintbrushes and flicked them around until the canvas was covered with squiggly lines and jagged strokes.

In the next phase of his work, Pollock tested his style with uncommon materials. He painted his first work with aluminum paint in 1947, titled Cathedral, and from there he tried his first "splashes" to destroy the unity of the material itself. He believed wholeheartedly that he was liberating the materials and structure of art from their forced confinements, and that is how he arrived at the moving, or kinetic, art that had always existed.

Max Bill
Max Bill became an almost complete disciple of the kinetic movement in the 1930s. He believed that kinetic art should be executed from a purely mathematical perspective. To him, using mathematical principles was one of the few ways to create objective movement. This theory applied to every artwork he created and how he created it. Bronze, marble, copper, and brass were four of the materials he used in his sculptures. He also enjoyed tricking the viewer's eye when he or she first approached one of his sculptures. In his Construction with Suspended Cube (1935–1936) he created a mobile sculpture that generally appears to have perfect symmetry, but once the viewer glances at it from a different angle, aspects of asymmetry appear.

Mobiles and sculpture
Max Bill's sculptures were only the beginning of the style of movement that kinetic art explored. Tatlin, Rodchenko, and Calder especially took the stationary sculptures of the early 20th century and gave them the slightest freedom of motion. These three artists began by testing unpredictable movement, and from there tried to control the movement of their figures with technological enhancements. The term "mobile" comes from the ability to modify how gravity and other atmospheric conditions affect the artist's work. Although there is very little distinction between the styles of mobiles in kinetic art, there is one distinction that can be made. Mobiles are no longer considered mobiles when the spectator has control over their movement. This is one of the features of virtual movement. When the piece only moves under certain circumstances that are not natural, or when the spectator controls the movement even slightly, the figure operates under virtual movement.
Kinetic art principles have also influenced mosaic art. For instance, kinetic-influenced mosaic pieces often use clear distinctions between bright and dark tiles, with three-dimensional shape, to create apparent shadows and movement. Vladimir Tatlin Russian artist and founder-member of the Russian Constructivism movement Vladimir Tatlin is considered by many artists and art historians to be the first person to ever complete a mobile sculpture. The term mobile wasn't coined until Rodchenko's time, but is very applicable to Tatlin's work. His mobile is a series of suspended reliefs that only need a wall or a pedestal, and it would forever stay suspended. This early mobile, Contre-Reliefs Libérés Dans L'espace (1915) is judged as an incomplete work. It was a rhythm, much similar to the rhythmic styles of Pollock, that relied on the mathematical interlocking of planes that created a work freely suspended in air. Tatlin's Tower or the project for the 'Monument to the Third International' (1919–20), was a design for a monumental kinetic architecture building that was never built. It was planned to be erected in Petrograd (now St. Petersburg) after the Bolshevik Revolution of 1917, as the headquarters and monument of the Comintern (the Third International). Tatlin never felt that his art was an object or a product that needed a clear beginning or a clear end. He felt above anything that his work was an evolving process. Many artists whom he befriended considered the mobile truly complete in 1936, but he disagreed vehemently. Alexander Rodchenko Russian artist Alexander Rodchenko, Tatlin's friend and peer who insisted his work was complete, continued the study of suspended mobiles and created what he deemed to be "non-objectivism". This style was a study less focused on mobiles than on canvas paintings and objects that were immovable. It focuses on juxtaposing objects of different materials and textures as a way to spark new ideas in the mind of the viewer. By creating discontinuity with the work, the viewer assumed that the figure was moving off the canvas or the medium to which it was restricted. One of his canvas works titled Dance, an Objectless Composition (1915) embodies that desire to place items and shapes of different textures and materials together to create an image that drew in the viewer's focus. However, by the 1920s and 1930s, Rodchenko found a way to incorporate his theories of non-objectivism in mobile study. His 1920 piece Hanging Construction is a wood mobile that hangs from any ceiling by a string and rotates naturally. This mobile sculpture has concentric circles that exist in several planes, but the entire sculpture only rotates horizontally and vertically. Alexander Calder Alexander Calder is an artist who many believe to have defined firmly and exactly the style of mobiles in kinetic art. Over years of studying his works, many critics allege that Calder was influenced by a wide variety of sources. Some claim that Chinese windbells were objects that closely resembled the shape and height of his earliest mobiles. Other art historians argue that the 1920s mobiles of Man Ray, including Shade (1920) had a direct influence on the growth of Calder's art. When Calder first heard of these claims, he immediately admonished his critics. "I have never been and never will be a product of anything more than myself. My art is my own, why bother stating something about my art that isn’t true?" 
One of Calder's first mobiles, Mobile (1938) was the work that "proved" to many art historians that Man Ray had an obvious influence on Calder's style. Both Shade and Mobile have a single string attached to a wall or a structure that keeps it in the air. The two works have a crinkled feature that vibrates when air passes through it. Regardless of the obvious similarities, Calder's style of mobiles created two types that are now referred to as the standard in kinetic art. There are object-mobiles and suspended mobiles. Object mobiles on supports come in a wide range of shapes and sizes and can move in any way. Suspended mobiles were first made with colored glass and small wooden objects that hung on long threads. Object mobiles were a part of Calder's emerging style of mobiles that were originally stationary sculptures. It can be argued, based on their similar shape and stance, that Calder's earliest object mobiles have very little to do with kinetic art or moving art. By the 1960s, most art critics believed that Calder had perfected the style of object mobiles in such creations as the Cat Mobile (1966). In this piece, Calder allows the cat's head and its tail to be subject to random motion, but its body is stationary. Calder did not start the trend in suspended mobiles, but he was the artist that became recognized for his apparent originality in mobile construction. One of his earliest suspended mobiles, McCausland Mobile (1933), is different from many other contemporary mobiles simply because of the shapes of the two objects. Most mobile artists such as Rodchenko and Tatlin would never have thought to use such shapes because they didn't seem malleable or even remotely aerodynamic. Despite the fact that Calder did not divulge most of the methods he used when creating his work, he admitted that he used mathematical relationships to make them. He only said that he created a balanced mobile by using direct variation proportions of weight and distance. Calder's formulas changed with every new mobile he made, so other artists could never precisely imitate the work. Virtual movement By the 1940s, new styles of mobiles, as well as many types of sculpture and paintings, incorporated the control of the spectator. Artists such as Calder, Tatlin, and Rodchenko produced more art through the 1960s, but they were also competing against other artists who appealed to different audiences. When artists such as Victor Vasarely developed a number of the first features of virtual movement in their art, kinetic art faced heavy criticism. This criticism lingered for years until the 1960s, when kinetic art was in a dormant period. Materials and electricity Vasarely created many works that were considered to be interactive in the 1940s. One of his works Gordes/Cristal (1946) is a series of cubic figures that are also electrically powered. When he first showed these figures at fairs and art exhibitions, he invited people up to the cubic shapes to press the switch and start the color and light show. Virtual movement is a style of kinetic art that can be associated with mobiles, but from this style of movement there are two more specific distinctions of kinetic art. Apparent movement and op art Apparent movement is a term ascribed to kinetic art that evolved only in the 1950s. Art historians believed that any type of kinetic art that was mobile independent of the viewer has apparent movement. This style includes works that range from Pollock's drip technique all the way to Tatlin's first mobile. 
By the 1960s, other art historians developed the phrase "op art" to refer to optical illusions and all optically stimulating art that was on canvas or stationary. This phrase often clashes with certain aspects of kinetic art that include mobiles that are generally stationary. In 1955, for the exhibition Mouvements at the Denise René gallery in Paris, Victor Vasarely and Pontus Hulten promoted in their "Yellow manifesto" some new kinetic expressions based on optical and luminous phenomenon as well as painting illusionism. The expression "kinetic art" in this modern form first appeared at the Museum für Gestaltung of Zürich in 1960, and found its major developments in the 1960s. In most European countries, it generally included the form of optical art that mainly makes use of optical illusions, such as op art, represented by Bridget Riley, as well as art based on movement represented by Yacov Agam, Carlos Cruz-Diez, Jesús Rafael Soto, Gregorio Vardanega, Martha Boto or Nicolas Schöffer. From 1961 to 1968, GRAV (Groupe de Recherche d’Art Visuel) founded by François Morellet, Julio Le Parc, Francisco Sobrino, Horacio Garcia Rossi, Yvaral, Joël Stein and Vera Molnár was a collective group of opto-kinetic artists. According to its 1963 manifesto, GRAV appealed to the direct participation of the public with an influence on its behavior, notably through the use of interactive labyrinths. Contemporary work In November 2013, the MIT Museum opened 5000 Moving Parts, an exhibition of kinetic art, featuring the work of Arthur Ganson, Anne Lilly, Rafael Lozano-Hemmer, John Douglas Powers, and Takis. The exhibition inaugurates a "year of kinetic art" at the Museum, featuring special programming related to the artform. Neo-kinetic art has been popular in China where you can find interactive kinetic sculptures in many public places, including Wuhu International Sculpture Park and in Beijing. Changi Airport, Singapore has a curated collection of artworks including large-scale kinetic installations by international artists ART+COM and Christian Moeller. 
Selected works Selected kinetic sculptors Yaacov Agam Uli Aschenborn David Ascalon Fletcher Benton Mark Bischof Daniel Buren Alexander Calder Gregorio Vardanega Martha Boto U-Ram Choe Angela Conner Carlos Cruz-Diez Marcel Duchamp Lin Emery Rowland Emett Ivana Franke Arthur Ganson Nemo Gould Gerhard von Graevenitz Bruce Gray Ralfonso Gschwend Rafael Lozano-Hemmer Chuck Hoberman Anthony Howe Irma Hünerfauth Tim Hunkin Theo Jansen Ned Kahn Roger Katan Starr Kempf Frederick Kiesler Viacheslav Koleichuk Gyula Kosice Paul Kuniholm Gilles Larrain Julio Le Parc Liliane Lijn Len Lye Sal Maccarone Heinz Mack Phyllis Mark László Moholy-Nagy Alejandro Otero Robert Perless Otto Piene George Rickey Ken Rinaldo Barton Rubenstein Nicolas Schöffer Eusebio Sempere Jesús Rafael Soto Mark di Suvero Takis Jean Tinguely Wen-Ying Tsai Marc van den Broek Panayiotis Vassilakis Willem van Weeghel Lyman Whitaker Ludwig Wilding Selected kinetic op artists Nadir Afonso Getulio Alviani Marina Apollonio Carlos Cruz-Díez Ronald Mallory Youri Messen-Jaschin Vera Molnár Abraham Palatnik Bridget Riley Eusebio Sempere Grazia Varisco Victor Vasarely Jean-Pierre Yvaral Romano Rizzato See also Gas sculpture Lumino kinetic art Odonien Robotic art Sound art Sound installation References Further reading External links Kinetic Art Organization (KAO) – Largest International Kinetic Art Organisation (Kinetic Art film and book library, KAO Museum planned) Modern art Types of sculpture Motion (physics) Contemporary art Visual arts genres
Kinetic art
Physics
5,083
804,218
https://en.wikipedia.org/wiki/Astronomical%20clock
An astronomical clock, horologium, or orloj is a clock with special mechanisms and dials to display astronomical information, such as the relative positions of the Sun, Moon, zodiacal constellations, and sometimes major planets.

Definition
The term is loosely used to refer to any clock that shows, in addition to the time of day, astronomical information. This could include the location of the Sun and Moon in the sky, the age and phase of the Moon, the position of the Sun on the ecliptic and the current zodiac sign, the sidereal time, and other astronomical data such as the Moon's nodes (for indicating eclipses), or a rotating star map. The term should not be confused with an astronomical regulator, a high-precision but otherwise ordinary pendulum clock used in observatories.

Astronomical clocks usually represent the Solar System using the geocentric model. The center of the dial is often marked with a disc or sphere representing the Earth, located at the center of the Solar System. The Sun is often represented by a golden sphere (as it initially appeared in the Antikythera mechanism, back in the 2nd century BC), shown revolving around the Earth once a day around a 24-hour analog dial. This view accorded both with the daily experience and with the philosophical world view of pre-Copernican Europe.

History
The Antikythera mechanism is the oldest known analog computer and a precursor to astronomical clocks. Its complex arrangement of multiple gears and gear trains could perform functions such as determining the positions of the sun, moon, and planets, predicting eclipses and other astronomical phenomena, and tracking the dates of the Olympic Games. Research in 2011 and 2012 led an expert group of researchers to posit that European astronomical clocks are descended from the technology of the Antikythera mechanism.

In the 11th century, the Song dynasty Chinese horologist, mechanical engineer, and astronomer Su Song created a water-driven astronomical clock for his clock tower in Kaifeng City. Su Song is noted for having incorporated an escapement mechanism and the earliest known endless power-transmitting chain drive to make his clock tower and armillary sphere function.

Contemporary Muslim astronomers and engineers also constructed a variety of highly accurate astronomical clocks for use in their observatories, such as the astrolabic clock by Ibn al-Shatir in the early 14th century.

The early development of mechanical clocks in Europe is not fully understood, but there is general agreement that by 1300–1330 there existed mechanical clocks (powered by weights rather than by water and using an escapement) which were intended for two main purposes: for signalling and notification (e.g. the timing of services and public events), and for modelling the solar system. The latter is an inevitable development because the astrolabe was used both by astronomers and astrologers, and it was natural to apply a clockwork drive to the rotating plate to produce a working model of the solar system. American historian Lynn White Jr. of Princeton University wrote:

The astronomical clocks developed by the English mathematician and cleric Richard of Wallingford in St Albans during the 1330s, and by the medieval Italian physician and astronomer Giovanni Dondi dell'Orologio in Padua between 1348 and 1364, are masterpieces of their type. They no longer exist, but detailed descriptions of their design and construction survive, and modern reproductions have been made.
Wallingford's clock may have shown the sun, moon (age, phase, and node), stars and planets, and had, in addition, a wheel of fortune and an indicator of the state of the tide at London Bridge. De Dondi's clock was a seven-faced construction with 107 moving parts, showing the positions of the sun, moon, and five planets, as well as religious feast days. Both these clocks, and others like them, were probably less accurate than their designers would have wished. The gear ratios may have been exquisitely calculated, but their manufacture was somewhat beyond the mechanical abilities of the time, and they never worked reliably. Furthermore, in contrast to the intricate advanced wheelwork, the timekeeping mechanism in nearly all these clocks until the 16th century was the simple verge and foliot escapement, which had errors of at least half an hour a day. Astronomical clocks were built as demonstration or exhibition pieces, to impress as much as to educate or inform. The challenge of building these masterpieces meant that clockmakers would continue to produce them, to demonstrate their technical skill and their patrons' wealth. The philosophical message of an ordered, heavenly-ordained universe, which accorded with the Gothic-era view of the world, helps explain their popularity. The growing interest in astronomy during the 18th century revived interest in astronomical clocks, less for the philosophical message, more for the accurate astronomical information that pendulum-regulated clocks could display. Generic description Although each astronomical clock is different, they share some common features. Time of day Most astronomical clocks have a 24-hour analog dial around the outside edge, numbered from I to XII then from I to XII again. The current time is indicated by a golden ball or a picture of the sun at the end of a pointer. Local noon is usually at the top of the dial, and midnight at the bottom. Minute hands are rarely used. The Sun indicator or hand gives an approximate indication of both the Sun's azimuth and altitude. For azimuth (bearing from the north), the top of the dial indicates South, and the two VI points of the dial East and West. For altitude, the top is the zenith and the two VI and VI points define the horizon. (This is for the astronomical clocks designed for use in the northern hemisphere.) This interpretation is most accurate at the equinoxes, of course. If XII is not at the top of the dial, or if the numbers are Arabic rather than Roman, then the time may be shown in Italian hours (also called Bohemian, or Old Czech, hours). In this system, 1 o'clock occurs at sunset, and counting continues through the night and into the next afternoon, reaching 24 an hour before sunset. In the photograph of the Prague clock shown at the top of the article, the time indicated by the Sun hand is about 9am (IX in Roman numerals), or about the 13th hour (Italian time in Arabic numerals). Calendar and zodiac The year is usually represented by the 12 signs of the zodiac, arranged either as a concentric circle inside the 24-hour dial, or drawn onto a displaced smaller circle, which is a projection of the ecliptic, the path of the Sun and planets through the sky, and the plane of the Earth's orbit. The ecliptic plane is projected onto the face of the clock, and, because of the Earth's tilted angle of rotation relative to its orbital plane, it is displaced from the center and appears to be distorted. 
The projection point for the stereographic projection is the North pole; on astrolabes the South pole is more common. The ecliptic dial makes one complete revolution in 23 hours 56 minutes (a sidereal day), and will therefore gradually get out of phase with the hour hand, the two drifting slowly further apart during the year. To find the date, find the place where the hour hand or Sun disk intersects the ecliptic dial: this indicates the current star sign, the sun's current location on the ecliptic. The intersection point slowly moves around the ecliptic dial during the year, as the Sun moves out of one astrological sign into another.

In the diagram showing the clock face on the right, the Sun's disk has recently moved into Aries (the stylized ram's horns), having left Pisces. The date is therefore late March or early April.

If the zodiac signs run around inside the hour hands, either this ring rotates to align itself with the hour hand, or there is another hand, revolving once per year, which points to the Sun's current zodiac sign.

Moon
A dial or ring indicating the numbers 1 to 29 or 30 shows the moon's age: a new moon is 0; the moon waxes, becomes full around day 15, and then wanes up to day 29 or 30. The phase is sometimes shown by a rotating globe or black hemisphere, or a window that reveals part of a wavy black shape beneath.

Hour lines
Unequal hours were the result of dividing the period of daylight into 12 equal hours and nighttime into another 12. There is more daylight in the summer, and less night time, so each of the 12 daylight hours is longer than a night hour. Similarly in winter, daylight hours are shorter, and night hours are longer. These unequal hours are shown by the curved lines radiating from the center. The longer daylight hours in summer can usually be seen at the outer edge of the dial, and the time in unequal hours is read by noting the intersection of the sun hand with the appropriate curved line.

Aspects
Astrologers placed importance on how the Sun, Moon, and planets were arranged and aligned in the sky. If certain planets appeared at the points of a triangle, hexagon, or square, or if they were opposite or next to each other, the appropriate aspect was used to determine the event's significance. On some clocks the common aspects – triangle, square, and hexagon – are drawn inside the central disc, with each line marked by the symbol for that aspect, and the signs for conjunction and opposition may also be shown. On an astrolabe, the corners of the different aspects could be lined up on any of the planets. On a clock, though, the disc containing the aspect lines cannot be rotated at will, so it usually shows only the aspects of the Sun or Moon.

On the clock of the Torre dell'Orologio in Brescia, in northern Italy, the triangle, square, and star in the centre of the dial show these aspects (the third, fourth, and sixth phases) of (presumably) the moon.

"Dragon" hand: eclipse prediction and lunar nodes
The Moon's orbit is not in the same plane as the Earth's orbit around the Sun but crosses it in two places. The Moon crosses the ecliptic plane twice a month, once when it goes up above the plane, and again 15 or so days later when it goes back down below the ecliptic. These two locations are the ascending and descending lunar nodes. Solar and lunar eclipses will occur only when the Moon is positioned near one of these nodes, because at other times the Moon is either too high or too low for an eclipse to be seen on the Earth.
Some astronomical clocks keep track of the position of the lunar nodes with a long pointer that crosses the dial, extended to both sides of the dial so that it points at two opposite points on the solar or lunar dial. This so-called "dragon" hand makes one complete rotation around the ecliptic dial about every 19 years. It is sometimes decorated with the figure of a serpent or lizard, with its snout and tail-tip touching the outer dial; these two points are traditionally labelled caput draconis and cauda draconis even if the decorative dragon is omitted (not to be confused with the similar-seeming names of the two sections of the constellation Serpens). During the two yearly eclipse seasons the Sun pointer coincides with either the dragon's snout or its tail. When the dragon hand and the full Moon coincide, the Moon is on the same plane as the Earth and Sun, and so there is a good chance that a lunar eclipse will be visible on one side of the Earth. When the new Moon is aligned with the dragon hand, there is a moderate possibility that a solar eclipse might be visible somewhere on the Earth.

Historical examples

Su Song's Cosmic Engine
The Science Museum (London) has a scale model of the 'Cosmic Engine', which Su Song, a Chinese polymath, designed and constructed in China in 1092. This great astronomical hydromechanical clock tower was about ten metres high (about 30 feet), featured a clock escapement, and was indirectly powered by a rotating wheel driven either by falling water or by liquid mercury, which freezes at a much lower temperature than water, allowing operation of the clock during colder weather.

A full-sized working replica of Su Song's clock exists at the National Museum of Natural Science in Taichung City, in the Republic of China (Taiwan). This full-scale, fully functional replica was constructed from Su Song's original descriptions and mechanical drawings.

Astrarium of Giovanni Dondi dell'Orologio
The Astrarium of Giovanni Dondi dell'Orologio was a complex astronomical clock built between 1348 and 1364 in Padova, Italy, by the doctor and clock-maker Giovanni Dondi dell'Orologio. The Astrarium had seven faces and 107 moving gears; it showed the positions of the sun, the moon and the five planets then known, as well as religious feast days.

The astrarium stood about 1 metre high, and consisted of a seven-sided brass or iron framework resting on 7 decorative paw-shaped feet. The lower section provided a 24-hour dial and a large calendar drum, showing the fixed feasts of the church, the movable feasts, and the position in the zodiac of the moon's ascending node. The upper section contained 7 dials, each about 30 cm in diameter, showing the positional data for the Primum Mobile, Venus, Mercury, the moon, Saturn, Jupiter, and Mars. Directly above the 24-hour dial is the dial of the Primum Mobile, so called because it reproduces the diurnal motion of the stars and the annual motion of the sun against the background of stars. Each of the 'planetary' dials used complex clockwork to produce reasonably accurate models of the planets' motion. These agreed reasonably well both with Ptolemaic theory and with observations. For example, Dondi's dial for Mercury uses a number of intermediate wheels, including a wheel with 146 teeth and a wheel with 63 internal (inward-facing) teeth that meshed with a 20-tooth pinion.
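How such a train of intermediate wheels sets a dial's speed can be sketched with a small calculation in Python. Only the 20-tooth pinion and the 63- and 146-tooth wheels come from the description above; which wheel drives which, and the 24-tooth wheel completing the train, are hypothetical choices made purely to show that a compound train's overall ratio is the product of its stage ratios:

```python
from fractions import Fraction

def train_ratio(stages):
    """Overall speed ratio (output turns per input turn) of a compound gear train.

    Each stage is a (driving_teeth, driven_teeth) pair; for meshing wheels the
    driven wheel turns driving_teeth/driven_teeth times per turn of the driving
    wheel, and the stage ratios multiply.
    """
    ratio = Fraction(1)
    for driving, driven in stages:
        ratio *= Fraction(driving, driven)
    return ratio

if __name__ == "__main__":
    # Hypothetical train: a 20-tooth pinion taken (for illustration) as driving
    # the 63-tooth internal wheel, followed by an invented 24/146 stage.
    stages = [(20, 63), (24, 146)]
    r = train_ratio(stages)
    print(r, float(r))  # exact fraction and decimal output turns per input turn
```

The particular combination above is illustrative only; the exact trains Dondi used are given in the surviving descriptions of the Astrarium.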
Interior clocks and watches The Rasmus Sørnes Clock Arguably the most complicated of its kind ever constructed, the last of a total of four astronomical clocks designed and made by Norwegian Rasmus Sørnes (1893–1967), is characterized by its superior complexity compactly housed in a casing with the modest measurements of 0.70 x 0.60 x 2.10 m. Features include locations of the sun and moon in the zodiac, Julian calendar, Gregorian calendar, sidereal time, GMT, local time with daylight saving time and leap year, solar and lunar cycle corrections, eclipses, local sunset and sunrise, moon phase, tides, sunspot cycles and a planetarium including Pluto's 248-year orbit and the 25 800-year periods of the polar ecliptics (precession of the Earth's axis). All wheels are in brass and gold-plated. Dials are silver-plated. The clock has an electromechanical pendulum. Sørnes also made the necessary tools and based his work on his own astronomical observations. Having been exhibited at the Time Museum in Rockford, Illinois (since closed), and at the Chicago Museum of Science and Industry, the clock was sold in 2002 and its current location is not known. The Rasmus Sørnes Astronomical Clock No. 3, the precursor to the Chicago Clock, his tools, patents, drawings, telescope, and other items, are exhibited at the Borgarsyssel Museum in Sarpsborg, Norway. Table clocks There are many examples of astronomical table clocks, due to their popularity as showpieces. To become a master clockmaker in 17th-century Augsburg, candidates had to design and build a 'masterpiece' clock, an astronomical table-top clock of formidable complexity. Examples can be found in museums, such as London's British Museum. Currently Edmund Scientific among other retailers offers a mechanical Tellurium clock, perhaps the first mechanical astronomical clock to be mass-marketed. In Japan, Tanaka Hisashige made a Myriad year clock in 1851. Watches More recently, independent clockmaker created a wristwatch astrolabe, the "Astrolabium" in addition to the "Planetarium 2000", the "Eclipse 2001" and the "Real Moon." Ulysse Nardin also sells several astronomical wristwatches, the "Astrolabium," "Planetarium", and the "Tellurium J. Kepler." Other examples Two of Holland America's cruise ships, the MS Rotterdam and the MS Amsterdam, both have large astronomical clocks as their main centerpieces inside the ships' atriums. Examples by country Austria Innsbruck. The astronomical clock in the gable of 17–19 Maria-Theresien-Strasse is a 20th-century copy of the astronomical clock of the Ulm Rathaus. Peuerbach. The facade of Peuerbach Town Hall features an astrolabe clock, an enlarged copy of Georg von Peuerbach's original astrolabe of 1457. Belgium Lier. The Zimmer tower houses an astronomical clock installed by Louis Zimmer in 1930. On twelve dials surrounding a central clockface, it gives indications including the time around the world, the date, the moon phase, and the equation of time, and includes a tide clock. . The was constructed by self-taught Lucien Charloteaux between 1896 and 1912. A domestic clock housed in a wooden case, it gives indications including the solar, mean and sidereal time around the world, the positions of the constellations and planets, and the appearance of Halley's Comet. Sint-Truiden. The astronomical clock constructed by between 1937 and 1942 is now housed in the Festraets Museum. Croatia Dubrovnik. 
The Dubrovnik Bell Tower constructed in 1444 has housed a clock since its creation, though due to earthquake damage, both the tower and the clock were replaced in 1929. A rotating moon ball shows the lunar phase. Czech Republic Prague. The Prague astronomical clock at the Old Town Hall is one of the most famous astronomical clocks. The central section was completed in 1410, the calendar dial was added in 1490. The clock was renovated after damage during World War II, and in 1979. On the hour, Death strikes the time, and the twelve apostles appear at the doors above the clock. Olomouc. The Olomouc astronomical clock at the Town Hall is a rare example of a heliocentric astronomical clock. Dated 1422 by legend, but first mentioned in history in 1517, the clock was remodelled approximately once every century; in 1898 the astrolabe was replaced with a heliocentric model of the solar system. Badly damaged by the retreating German army in 1945, the clock was remodelled in socialist realism style in 1955, under the Communist government. The religious and royal figures were replaced with athletes, workers, farmers, scientists, and other members of the proletariat. Litomyšl. The tower of the Old Town Hall has an art nouveau astronomical clock, installed in 1907. Prostějov. The astronomical clock in the tower of the New Town Hall was installed in 1910. Kryštofovo Údolí. The Kryštofovo Údolí astronomical clock is a modern astronomical clock (inaugurated in 2008), built-in a former electrical substation. Hojsova Stráž. An astronomical clock in the Bohemian Forest was inaugurated in 2017. It has a concentric dial showing the 24-hour time, the date and zodiac, and the moon phase, and a star map dial with a dragon hand, and indicates the time of sunrise and sunset. Třebíč. At the Třebíč Astronomical Observatory, a modern astronomical clock which shows the time in world cities, the time of sunrise and sunset, the date and zodiac, and the orbits of the planets. Žatec. The , a museum and amusement complex dedicated to beer, has an astronomical clock on which the zodiac indication illustrates the annual processes of beer production. Denmark Copenhagen. Jens Olsen's World Clock in Copenhagen City Hall was designed by Jens Olsen and assembled from 1948 to 1955. France Auxerre. The 15th-century clock in the has a 24-hour sun hand and a moon hand which completes a revolution in a lunar day of 24 hours 50 minutes, and shows the lunar phase on a rotating moon ball. Beauvais. The Beauvais astronomical clock in Beauvais Cathedral, constructed 1865–1868 by Auguste-Lucien Vérité, has 52 dials that display the times of sunrise, sunset, moonrise, moonset, the phases of the moon, the solstices, the position of the planets, the current time in 18 cities around the world, and the tidal hours. Its 68 automata enact the Last Judgement on the hour. Besançon. The Besançon astronomical clock in Besançon Cathedral (1860) was also constructed by Auguste-Lucien Vérité. Its 70 dials provide 122 indications. Bourges. The Bourges astronomical clock in Bourges Cathedral was installed in 1424. It shows the zodiac, and the moon phase and age. Chartres. The in Chartres Cathedral is an astrolabe clock, installed in 1528. It was overhauled, its mechanism replaced by an electric mechanism, in 2009. Haguenau. The facade of the Musée alsacien displays an astronomical clock, a modern copy of the clock of the Ulm Rathaus. Lyon. The Lyon astronomical clock in Lyon Cathedral was constructed in 1661, replacing a 14th-century original. 
It has an astrolabe dial and a calendar dial. Munster. The Church of Saint-Léger houses the Clock of Creation, installed in 2008. It shows the time, the day of the week, the month and zodiac, and the moon phase. Ploërmel. The Ploërmel astronomical clock, constructed 1850–1855, comprises an astronomical clock with 10 dials and an orrery. Rouen. The Gros Horloge has a movement built in 1389, with a dial added in 1529. It indicates the moon phase on a rotating sphere above the dial, and the day of the week in an aperture at the base of the dial. Saint-Omer. The in Saint-Omer Cathedral is an astrolabe clock of 1558. Strasbourg. The Strasbourg astronomical clock is the third clock housed in Strasbourg Cathedral, following 14th-century and 16th-century predecessors. Constructed by Jean-Baptiste Schwilgué from 1838 to 1843, it shows many astronomical and calendrical functions (including what is thought to be the first complete mechanization of the computus needed to compute Easter) and several automata. Versailles. The Passemant astronomical clock in the Palace of Versailles near Paris is a rococo astronomical clock sitting on a formal low marble base. It took 12 years for a clockmaker and an engineer to build and was presented to Louis XV in 1754. Georgia Batumi. The facade of the former National Bank Building on Europe Square has an astronomical clock based on the clock at Mantua, which shows the positions of the sun and moon in the zodiac, and the moon phase. Germany A group of interior astronomical clocks of the 14th, 15th and 16th centuries in churches of Hanseatic League towns in northern Germany, known as the Hanseatic clocks (the group also includes the clock at Gdańsk, now in Poland). Bad Doberan. At Doberan Minster, an astrolabe clock was installed by Nikolaus Lilienfeld in 1390. Only the dial survives, now positioned above the west door. Lübeck. The astronomical clock of St. Mary's Church, constructed 1561–1566, was destroyed in the bombing of Lübeck in 1942. The present clock is a replacement by Paul Behrens, installed in 1967. Münster. The Münster astronomical clock of 1540 in Münster Cathedral, adorned with hand-painted zodiac symbols, which traces the movement of the planets, plays a glockenspiel tune every noon. Rostock. The Rostock astronomical clock in St. Mary's Church dating from 1472, built by Hans Düringer. Clock with daily time, zodiac, moon phases, and month. With a dedicated electronic database this clock is particularly well documented. Stendal. At , an astronomical clock of the 1580s, rebuilt in 1856 (and vandalized by the clockmaker), and restored in 1977. Stralsund. The astronomical clock in St. Nicholas' Church is an astrolabe clock installed by Nikolaus Lilienfeld in 1394. It has not been in working order since the 16th century. Tangermünde. At , an astronomical clock of the 2023 built by Volker Schulz and Thomas Leu. Wismar. The 15th-century astronomical clock in was destroyed by bombing in 1945. A group of 16th-century clocks on the facade of town halls in southern Germany, which have a 12-hour dial, a moon phase indication, and a calendar dial indicating the positions of the sun and moon in the zodiac, with a dragon hand: Esslingen am Neckar. The , constructed 1581–1586. Heilbronn. The of of Isaac Habrecht, installed 1579–1580. Tübingen. The clock of , installed in 1510. Ulm. 
The 16th-century astronomical clock of has a 24-hour astrolabe format, although the zodiac is repeated as a rotating ring of gold sculptures, and the outer ring of the dial is a 12-hour chapter ring. Cologne. At the , a modern astronomical clock which shows the hour in regular and sidereal time, the moon phase, positions of the sun and moon in the zodiac, and the rotation of the earth according to the geocentric model. Esslingen am Neckar. At the headquarters of Festo, Professor Hans Scheurenbrand has constructed the Harmonices Mundi (named after Kepler's book of the same name), which consists of an astronomical clock, a world time clock, and a 74 bell glockenspiel. Görlitz. and the both have 16th-century clocks which indicate the lunar phase. Munich. The Old Town Hall and the Deutsches Museum both have clocks which indicate the moon phase on a rotating ball, and the zodiac on a fixed ring within a 12-hour dial. Schramberg. The Town Hall has an astronomical clock installed in 1913. Its indications are similar to the clock of Ulm (except that the outer hour ring is 24-hour), with an offset astrolabe ring repeated as a golden zodiac ring. Stuttgart. A modern clock in the tower of shows the moon phase and the day of the week. Worms. The clock tower has a modern calendar dial that shows the month, the positions of the sun and moon in the zodiac, the moon phase, and has a dragon hand. Hungary Budapest: A modern astronomical clock with automata, at the Clock Museum. Italy Arezzo. The clock of the , installed in 1552, shows the moon phase and age. Bassano del Grappa. 24-hour dial with zodiac indication on the Palazzo del Municipio, first installed in 1430, reconstructed by Bartolomeo Ferracina in 1747. Brescia. Astronomical clock dated in the Torre dell'Orologio. Clusone. Fanzago's astronomical clock at the , built by Pietro Fanzago in 1583. Cremona. The 16th-century astronomical clock of the Torrazzo, the bell tower of Cremona Cathedral, is the largest medieval clock in Europe. Macerata. An astronomical clock installed in the , a modern replica of the original clock of 1571, which shows the orbits of the planets. Mantua. Astronomical clock was installed in 1473 in the Torre dell'Orologio of the Palazzo della Ragione. Merano. Clock tower at the entrance to Merano town cemetery, installed in 1908 by Philipp Hörz of Ulm, with a calendar dial showing the month, zodiac, and moon phase. Messina. The Messina astronomical clock in the tower of Messina Cathedral. Multi-dial clock equipped with complex automata. Constructed between 1930 and 1933 by the Ungerer Company of Strasbourg. It is one of the largest astronomical clocks in the world. Padua. 15th-century astronomical clock in the Torre dell'Orologio. Rimini. The clock tower on Piazza Tre Martiri has a calendar dial installed in 1750 showing the date, zodiac, and moon phase and age. Soncino. 24-hour dial with zodiac indication in the town hall. The terracotta zodiac dial dates from 1977. Trapani. Astronomical clock of 1596 in the Porta Oscura, with a dial for the hours and the zodiac, and a lunar dial. Venice. St Mark's Clock, in the clocktower on St Mark's Square, was built and installed by Gian Paulo and Gian Carlo Rainieri, father and son, between 1496 and 1499. Latvia Riga: The clock on the facade of the House of the Blackheads shows the time, date, month, day of the week, and lunar phase. Malta Valletta. 
The clock of the Grandmaster's Palace, installed in 1745, shows the hour, date, month, and lunar phase, and has bells struck by four jacquemarts. Malta has several church clocks that show calendar indications on separate dials, including those of St John's Co-Cathedral, Valletta; St Paul's Cathedral, Mdina; the Rotunda of Mosta; and the Church of St Bartholomew, Għargħur. Netherlands Arnemuiden. The 16th-century church clock at Arnemuiden indicates the lunar phase and the time of high tide. Franeker. The Eise Eisinga Planetarium, built 1774–1781, is an orrery and astronomical clock which shows the movements of the solar system. Norway Oslo. A 20th-century astronomical clock at Oslo City Hall. Poland Gdańsk. In St. Mary's Church there is the Gdańsk astronomical clock dating from 1464 to 1470, built by Hans Düringer of Toruń. It was reconstructed after 1945. Wrocław. A 16th-century clock showing the moon phase at Wrocław Town Hall. Slovakia Stará Bystrica: An astronomical clock in the stylized shape of Our Lady of Sorrows was built in the town square in 2009. The astronomical part of the clock consists of an astrolabe displaying the astrological signs, positions of the Sun and Moon, and the lunar phases. Its statues and automata depict Slovakian historical and religious figures. The clock is controlled by computer using DCF77 signals. South Korea Honcheonsigye: an astronomical clock made in 1669 by Song Yi-Yeong, a professor of the Gwansanggam, one of the scientific institutions of the Joseon Dynasty. It was designated South Korean national treasure number 230 on August 9, 1985. The clock used the pendulum clock mechanism introduced by Christiaan Huygens in 1657, showing that Huygens' technology reached East Asia in just 12 years. It also demonstrates the astronomy and mechanical engineering technology of the Joseon Dynasty: Korea had been making armillary spheres since the 15th century as part of King Sejong's technology development policy, and this clock is an important historical document showing the fusion of East Asian astronomy and European mechanical technology. Spain Astorga: The interior face of the clock of Astorga Cathedral has a 24-hour dial which shows the lunar phase and the date. Sweden Lund: The Lund astronomical clock (Horologium mirabile Lundense) in Lund Cathedral was made around 1425, probably by the clockmaker Nicolaus Lilienveld in Rostock. After it had been in storage since 1837, it was restored and put back in place in 1923. Only the upper, astronomical part is original, while some of the other remaining medieval parts can be seen at the Cathedral museum. When it plays, one can hear In Dulci Jubilo from the smallest organ in the church, while seven wooden figures, representing the three magi and their servants, pass by. : Emil Ahrent, the local priest, constructed and donated an astronomical clock to Fjelie Church in 1946. : K.L. Lundén, the local priest, installed an astronomical clock in in 1954. Rinkaby: An astronomical clock was installed in Rinkaby Church in the 1950s. Modelled on medieval clocks, it was made by a local electrician. Switzerland Bern. The Zytglogge is a famous 15th-century astronomical clock housed in a medieval fortification tower. A set of 16th-century clocks which show the zodiac and the days of the week in concentric rings within a 12-hour clock face, with a moon phase ball above: Bremgarten. The clock of the , installed in 1558. Diessenhofen. The clock of the Siegelturm, installed in 1546. Mellingen.
The clock of the , installed in 1554. Schaffhausen: The astronomical clock by in the gable of the Fronwagturm, installed in 1564, has five hands, including indications of the positions of the sun and moon in the zodiac, and a dragon hand indicating the lunar nodes. Sion: The Sion astronomical clock on the town hall dates from 1667–68. Its current mechanism was installed in 1902. Solothurn. This astronomical clock, installed by and in 1545 to replace an original of 1452, shows the positions of the sun and moon in the zodiac. Winterthur. This astrolabe astronomical clock was installed in 1529. The building which housed it was demolished in 1870. The clock is now an exhibit at the Museum Lindengut. Zug: The astronomical clock of the Zytturm was installed in 1574. Its calendar dial shows the zodiac, the lunar phase, the day of the week and the leap year cycle. United Kingdom A group of four famous astronomical clocks in the West Country, dating from the 14th and 15th centuries, all of which show the 24-hour time and the moon phase: Exeter. The Exeter Cathedral astronomical clock () Ottery St Mary. The Ottery St Mary astronomical clock (15th century) Wells. The Wells Cathedral clock (1386–1392) Wimborne Minster. The Wimborne Minster astronomical clock (14th century) Durham. Prior Castell's Clock in Durham Cathedral, installed between 1494 and 1519. Hampton Court Palace. The Hampton Court astronomical clock (1540) is on the interior façade of the Main Gatehouse. It is a fine early example of a pre-Copernican astronomical clock. Leicester. The Leicester University astronomical clock (1989) is on the Rattray Lecture Theatre opposite the Physics department. London. The astrological clock of Bracken House was installed in 1959, and depicts the Signs of the Zodiac. Snowshill. The Nychthemeron Clock, installed in the garden of Snowshill Manor in Gloucestershire. St Albans. A modern clock dating from 1995, built from notes by Richard of Wallingford held in the Bodleian Library, Oxford. On display in St Albans Cathedral. York. The York Minster astronomical clock, an astronomical clock installed in 1955 as a memorial to airmen killed in World War II, shows the positions of the sun and stars from the perspective of a pilot flying over York. It was damaged by fire in 1984, and is not currently working. See also Astrolabe Astrarium Clock of the Long Now, also called the 10,000-year clock Orrery Solar System models Torquetum Notes References Borgarsyssel Museum, Sarpsborg, 2003 Norwegian edition, and 2008 English edition (available from the museum). Further reading External links The search for Rasmus Sørnes 4th clock Prague Astronomical Clock A modern, online astronomical clock Les Cadrans Solaires (Sundials), also showing European astronomical clocks MoonlightClock.com – Handmade Astronomical Clocks Festraets' astronomical clock Clock Clock Clock Chinese inventions English inventions Greek inventions Arab inventions Hellenistic engineering History of astrology Historical scientific instruments Italian inventions
Astronomical clock
Astronomy
7,474
5,490,892
https://en.wikipedia.org/wiki/18D/Perrine%E2%80%93Mrkos
18D/Perrine–Mrkos is a periodic comet in the Solar System, originally discovered by the American-Argentine astronomer Charles Dillon Perrine (Lick Observatory, California, United States) on December 9, 1896. For some time it was thought to be a fragment of Biela's Comet. It was considered lost after the 1909 appearance, but was rediscovered by the Czech astronomer Antonín Mrkos (Skalnate Pleso Observatory, Slovakia) on October 19, 1955, using ordinary binoculars; it was later confirmed as 18D by Leland E. Cunningham (Leuschner Observatory, University of California, Berkeley). The comet was last observed during the 1968 perihelion passage when it passed from the Earth. The comet has not been observed during the following perihelion passages: 1975 Aug. 2 1982 May 16 1989 Feb. 28 1995 Dec. 6 (apmag 19?) 2002 Sept. 10 (apmag 20?) 2009 Apr. 17 (apmag 24?) 2017 Feb. 26 (apmag 24?) The next predicted perihelion passage would be on 2025-Jan-01, but the comet is currently considered lost, as it has not been seen since January 1969. References External links Orbital simulation from JPL (Java) / Horizons Ephemeris 18D at Kronk's Cometography 18D at Kazuo Kinoshita's Comets 18D at Seiichi Yoshida's Comet Catalog NK 835 18D/Perrine-Mrkos – Syuichi Nakano (2002) Periodic comets Lost comets 018P 0018 18961209 19551019 Recovered astronomical objects
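The spacing of the perihelion passages listed above can be checked with simple date arithmetic. The short sketch below (plain differencing, no orbital mechanics) gives intervals of roughly 6.6–6.8 years through 2009 and roughly 7.9 years for the two most recent predictions; the interpretation of that change as the result of planetary perturbations is an assumption, not something stated in the text above:

from datetime import date

# Predicted perihelion dates as listed above, plus the 2025 prediction.
perihelia = [
    date(1975, 8, 2), date(1982, 5, 16), date(1989, 2, 28), date(1995, 12, 6),
    date(2002, 9, 10), date(2009, 4, 17), date(2017, 2, 26), date(2025, 1, 1),
]

# Interval between successive passages, in Julian years.
for earlier, later in zip(perihelia, perihelia[1:]):
    years = (later - earlier).days / 365.25
    print(f"{earlier} -> {later}: {years:.2f} yr")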
18D/Perrine–Mrkos
Astronomy
343
63,863,048
https://en.wikipedia.org/wiki/Proximity%20labeling
Enzyme-catalyzed proximity labeling (PL), also known as proximity-based labeling, is a laboratory technique that labels biomolecules, usually proteins or RNA, proximal to a protein of interest. By creating a gene fusion in a living cell between the protein of interest and an engineered labeling enzyme, biomolecules spatially proximal to the protein of interest can then be selectively marked with biotin for pulldown and analysis. Proximity labeling has been used for identifying the components of novel cellular structures and for determining protein-protein interaction partners, among other applications. History Before the development of proximity labeling, determination of protein proximity in cells relied on studying protein-protein interactions through methods such as affinity purification-mass spectrometry and proximity ligation assays. DamID is a method developed in 2000 by Steven Henikoff for identifying parts of the genome proximal to a chromatin protein of interest. DamID relies on a DNA methyltransferase fusion to the chromatin protein to nonnaturally methylate DNA, which can then be subsequently sequenced to reveal genome methylation sites near the protein. Researchers were guided by the fusion protein strategy of DamID to create a method for site-specific labeling of protein targets, culminating in the creation of the biotin protein labelling-based BioID in 2012. Alice Ting and the Ting lab at Stanford University have engineered several proteins that demonstrate improvements in biotin-based proximity labeling efficacy and speed. Principles Proximity labeling relies on a labeling enzyme that can biotinylate nearby biomolecules promiscuously. Biotin labeling can be achieved through several different methods, depending on the species of labeling enzyme. BioID, also known as BirA*, is a mutant E. coli biotin ligase that catalyzes the activation of biotin by ATP. The activated biotin is short-lived and thus can only diffuse to a region proximal to BioID. Labeling is achieved when the activated biotin reacts with nearby amines, such as the lysine sidechain amines found in proteins. TurboID is a biotin ligase engineered via yeast surface display directed evolution. TurboID enables ~10 minute labeling times instead of the ~18 hour labeling times required by BioID. APEX is an ascorbate peroxidase derivative reliant on hydrogen peroxide for catalyzing the oxidation of biotin-tyramide, also known as biotin-phenol, to a short-lived and reactive biotin-phenol free radical. Labeling is achieved when this intermediate reacts with various functional groups of nearby biomolecules. APEX can also be used for local deposition of diaminobenzidine, a precursor for an electron microscopy stain. APEX2 is a derivative of APEX engineered via yeast surface display directed evolution. APEX2 shows improved labeling efficiency and cellular expression levels. To label proteins nearby a protein of interest, a typical proximity labeling experiment begins by cellular expression of an APEX2 fusion to the protein of interest, which localizes to the protein of interest's native environment. Cells are next incubated with biotin-phenol, then briefly with hydrogen peroxide, initiating biotin-phenol free radical generation and labeling. To minimize cellular damage, the reaction is then quenched using an antioxidant buffer. Cells are lysed and the labeled proteins are pulled down with streptavidin beads. 
The proteins are digested with trypsin, and finally the resulting peptidic fragments are analyzed using shotgun proteomics methods such as LC-MS/MS or SPS-MS3. If instead a protein fusion is not genetically accessible (such as in human tissue samples) but an antibody for the protein of interest is known, proximity labeling can still be enabled by fusing a labeling enzyme with the antibody, then incubating the fusion with the sample. Applications Proximity labeling methods have been used to study the proteomes of biological structures that are otherwise difficult to isolate purely and completely, such as cilia, mitochondria, postsynaptic clefts, p-bodies, stress granules, and lipid droplets. Fusion of APEX2 with G-protein coupled receptors (GPCRs) allows for both tracking GPCR signaling at a 20-second temporal resolution and also identification of unknown GPCR-linked proteins. Proximity labeling has also been used for transcriptomics and interactomics. In 2019, Alice Ting and the Ting lab have used APEX to identify RNA localized to specific cellular compartments. In 2019, BioID has been tethered to the beta-actin mRNA transcript to study its localization dynamics. Proximity labeling has also been used to find interaction partners of heterodimeric protein phosphatases, of the miRISC (microRNA-induced silencing complex) protein Ago2, and of ribonucleoproteins. Recent developments TurboID-based proximity labeling has been used to identify regulators of a receptor involved in the innate immune response, a NOD-like receptor. BioID-based proximity labeling has been used to identify the molecular composition of breast cancer cell invadopodia, which are important for metastasis. Biotin-based proximity labeling studies demonstrate increased protein tagging of intrinsically disordered regions, suggesting that biotin-based proximity labeling can be used to study the roles of IDRs. A photosensitizer nucleus-targeted small molecule has also been developed for photoactivatable proximity labeling. Photocatalytic-based Proximity Labeling A new frontier in the field of proximity labeling exploits the utility of photocatalysis to achieve high spatial and temporal resolution of proximal protein microenvironments. This photocatalytic technology leverages the photonic energy of iridium-based photocatalysts to activate diazirine probes that can tag proximal proteins within a tight radius of about four nanometers. This technology was developed by the Merck Exploratory Science Center in collaboration with researchers at Princeton University. References Protein methods Molecular biology techniques
Proximity labeling
Chemistry,Biology
1,255
2,902,863
https://en.wikipedia.org/wiki/51%20Arietis
51 Arietis is a star in the northern constellation of Aries. 51 Arietis is the Flamsteed designation. It is a dim, yellow-hued star – a challenge to view with the naked eye, having an apparent visual magnitude of 6.6. Based upon parallax measurements, the star is located at an estimated distance of from the Sun. It is receding from the Earth with a heliocentric radial velocity of +9.5 km/s, and is a member of the IC 2391 moving group. This is an ordinary G-type main sequence star with a stellar classification of G8 V. Similar to the Sun, it has 1.04 times the mass and 0.99 times the Sun's radius. It is 1.4 billion years old with a leisurely rotation rate, showing a projected rotational velocity of 4 km/s. The atmospheric metallicity is higher than solar. The star radiates 92% of the Sun's luminosity from its photosphere at an effective temperature of 5,666 K. This heat gives it the golden-hued glow of a G-type star. References External links sky-map.org/ Image 51 Arietis G-type main-sequence stars Aries (constellation) Durchmusterung objects Arietis, 51 0120.2 018803 014150
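The quoted luminosity, radius and effective temperature are mutually consistent through the Stefan–Boltzmann law, L/L_sun = (R/R_sun)^2 (T/T_sun)^4. A minimal numerical check follows; the solar effective temperature of 5,772 K used here is an assumed reference value, not given in the text above:

# Consistency check of the quoted stellar parameters via the
# Stefan-Boltzmann law, working in solar units.
T_SUN = 5772.0    # K, assumed nominal solar effective temperature

radius = 0.99     # solar radii, from the text above
t_eff = 5666.0    # K, effective temperature from the text above

luminosity = radius**2 * (t_eff / T_SUN)**4
print(f"L ~ {luminosity:.2f} L_sun")   # ~ 0.91, close to the quoted 92%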
51 Arietis
Astronomy
288
36,443,304
https://en.wikipedia.org/wiki/Clavulina%20puiggarii
Clavulina puiggarii is a species of coral fungus in the family Clavulinaceae. It is found in Australia and Brazil. References External links Fungi described in 1881 Fungi of Australia Fungi of South America puiggarii Taxa named by Carlo Luigi Spegazzini Fungus species
Clavulina puiggarii
Biology
61
3,597,591
https://en.wikipedia.org/wiki/Bad%20Reichenhall%20Ice%20Rink
The Bad Reichenhall ice skating and swimming hall was a combined ice skating and swimming hall in the town of Bad Reichenhall, Bavaria, Germany, near the Austrian border. It was built between 1971 and 1973 by the city of Bad Reichenhall based on a design by the architect Hans Jürgen Schmidt-Schicketanz. At approximately 15:54 UTC on Monday 2 January 2006, the roof of the ice skating rink collapsed due to construction defects following heavy snowfall. Fifteen people perished in the accident, twelve of whom were children. Thirty-four were injured. The last body was recovered early on 5 January. Weather conditions in the area were extremely severe, an avalanche having killed three people nearby earlier in the day. The rescue was temporarily halted on 3 January due to fears that the walls of the ice rink could collapse, endangering firefighters, police and rescue workers. However, it resumed in the early hours of the next morning. The accident provoked outrage in the town as it emerged that officials had halted the training session of an ice hockey team inside the rink due to fears that the wall could collapse. Prior to the disaster, officials had planned to close the ice rink on Monday 2 January as snowfall was continuing. However, as many meteorologists pointed out, the weather and snow conditions were not unusual for the time of year, as the town lies in a popular winter sport area of southern Germany. In 2007, the remains of the entire complex were finally demolished. Construction and use Influenced by the preparations for the 1972 Olympic Games in Munich, the then mayor of Reichenhall, Max Neumeyer (CSU), and the city council pursued the goal of establishing a center for school sports, recreation and preventive health care in the city, which it was hoped would send a clear signal. A modern sports hall had already been built in 1970 at a cost of over 2.2 million DM; only a year later came the by far most ambitious project, the combined ice skating, tennis and swimming hall (15.4 million DM), which was unprecedented in the region in the generosity of its execution and was designed to serve a catchment area extending far beyond the Traunstein district. The ice skating and swimming hall was built right next to the sports hall in the new development area on Münchner Allee. The roof construction consisted of hollow box girders as the main beams; this construction principle is very labor-intensive and is therefore no longer common. In contrast to the solid glulam beams that are common today, damage can occur inside a hollow girder that is hardly visible from the outside. The roof was given a very stiff infill at right angles to the box girders. Ice skating rink The ice skating rink formed the northeastern part of the facility, and there was an underground car park under the ice skating area. Part of the total hall area of 75 m × 48 m was the ice surface, which at 60 m × 30 m was also suitable for international competitions. The total volume of the hall, which was subsequently glazed on all sides in 1974, was 69,814 m³. In winter the hall was used as an ice skating rink and for the rest of the year as a tennis hall. The hall served, among other things, as a training and competition facility for the ice hockey club EAC Bad Reichenhall.
The hall was – with the exception of the changing rooms and the restaurant – completely glazed from northwest to southwest, so that from the inside there was a view of the surrounding mountains, including the Untersberg, Lattengebirge, Reiter Alm, Müllnerberg and Sonntagshorn, as well as the Zwiesel, Staufen and Fuderheuberg. The hall was equipped with a competition pool with six lanes of 25 m each, a diving pool with 1, 3 and 5 m diving boards, a non-swimmer pool and a children's pool. The swimming pool also had an outdoor area in the south with a terrace. However, this was only used in the first few years after opening and was later no longer opened due to a lack of interest from visitors. Intermediate building Between the ice skating hall and the swimming pool there was an entrance hall, the checkout area, technical rooms and, in the early years, a kiosk with skate rental. There was a restaurant on the upper floor at the level of the top tiers of the stands; it could be accessed directly from both halls and also via the entrance hall. Collapse On 2 January 2006, at around 3:54 p.m., the roof of the northeastern part of the facility collapsed over the ice rink. This was preceded by heavy snowfall in the region. At the time of the accident, the public skating session was still under way, with over 50 people in the hall. The SSC Bad Reichenhall training session was planned for 4:00 p.m. However, the ice master decided at 3:30 p.m. to close the hall at the end of the public session and to cancel the training: although the snow load was still below the load limit set at the time, so that no immediate clearing was necessary, the roof was to be cleared of snow before further use because further snowfall had been forecast. A few minutes before the closure, the roof unexpectedly collapsed. After the first easily accessible injured people had been rescued, the recovery of the victims progressed only slowly and took two days, as safety measures were first necessary on the collapsed roof parts and the external pillars. Later, the underground car park underneath the ice surface also had to be shored up in order to ensure the safety of the helpers and any survivors when heavy equipment was driven onto the ice surface. 15 people were killed, including 12 children and young people, and another 34 people were injured, some seriously. Autopsies revealed that all of the victims had died in the collapse itself and not of subsequent hypothermia. The roof of the swimming pool withstood the masses of snow, and the visitors in that part of the facility were able to leave the hall unscathed. Investigations The building materials technologist Bernd Hillemeier (Technische Universität Berlin) analyzed samples of the wooden roof construction on behalf of the ZDF magazine Frontal21. According to him, a glue based on urea resin had been used, whose adhesive effect weakens when exposed to moisture. Immediately after the accident, the Traunstein public prosecutor's office opened an investigation. Two experts were commissioned to carry out the technical investigation; their reports have been available since July 2006. Investigations were then opened against eight individuals, including four (former) employees of the city of Bad Reichenhall, two planners and two former employees of the company that had built the roof structure. Consequences The accident sparked a Germany-wide discussion about ensuring the structural safety of buildings.
The then Federal Building Minister Wolfgang Tiefensee called on the responsible state building ministers to check the effectiveness of the relevant state building regulations. The media criticized the legal and administrative overload of the building inspection authorities, which are largely not headed by construction experts. There were calls for a "construction TÜV" for existing buildings, based on the model of the regular inspections of bridges and other engineering structures. Until then, under the state building regulations, buildings only had to be checked during planning and construction – and only above a certain size. The first meeting of the "Roofs Working Group" of the Bavarian State Ministry of the Interior took place on 30 January 2006; the State Secretary of the Interior, civil engineers and construction associations, among others, took part. The working group was intended to provide scientific support for the investigation of the causes and for recommendations on the consequences. In addition to this meeting, the operators of halls and stadiums in Bavaria with similar roofing began checking their roof structures; some halls were temporarily closed as a precaution (e.g. the ice rink in Geretsried, whose roof was in acute danger of collapse and which was demolished in 2006). In particular, deficiencies that needed to be remedied were identified at the stadiums in Senden (closed for almost the entire remainder of the 2005/06 winter season) and in Deggendorf (temporarily closed in 2005/06). The ice rink in Rosenheim was temporarily closed at the beginning of February 2006 when suspicions arose that the same glue had been used as in Bad Reichenhall. The Werner-Rittberger-Halle (the training hall for ice skating next to the Rheinlandhalle in Krefeld) was shut down after the accident in Bad Reichenhall; the city commissioned a report to check the load-bearing capacity of the hall roof. The hall has been in operation again since September 2006, with the permitted snow load reduced as a precaution. The ice rink in Göppingen was also temporarily shut down in March 2006 after its roof was checked; after a fire in July 2008, that hall was demolished in 2010. The roof of the dolphinarium at Duisburg Zoo was dismantled in spring 2006 because moisture had likewise attacked the glued connections. In July 2006, the ice stadium in Wiehl was closed after cracks were discovered in the glued beams. Although the Traunstein public prosecutor's office did not investigate Mayor Wolfgang Heitmeier (FWG), he was sharply attacked by the public and the national media after the roof collapsed and was held responsible. In the local elections on 12 March 2006, Heitmeier missed the absolute majority required for re-election; he finished well behind his challenger Dr. Herbert Lackner (CSU), to whom he lost in the runoff election on 26 March 2006. Trial On 28 January 2008, the trial of three defendants began before the Traunstein regional court: the then construction manager and structural engineer for the roof structure, the then project manager of the architectural office, and the author of a report from 2003. They were accused of negligent homicide and negligent bodily harm. The proceedings against an architect and former senior employee of the city of Bad Reichenhall were separated because of his health. The proceedings against another accused were also discontinued for health reasons; he died on 30 December 2007.
The taking of evidence focused on the lack of independently checked structural calculations (normally carried out by a structural test engineer). In this context, the defense attorneys and some of the co-plaintiffs accused the public prosecutor's office of having investigated one-sidedly against those who had planned and built the structure, while the city's responsibility as developer and building supervisory authority was not sufficiently taken into account. A verdict was originally scheduled to be handed down on 24 April 2008. However, the presiding judge scheduled further court dates at the main hearing on 28 February 2008, so that the trial dragged on until autumn. According to a witness statement on 12 June 2008, the city administration of Bad Reichenhall knew about the danger of its ice rink collapsing: the board of the ice hockey club said that it had been warned by phone half an hour before the collapse that the evening's training would have to be canceled. On 18 November 2008, the designer of the roof was found guilty by the regional court of negligent homicide for violating his duty of care and was sentenced to 18 months' suspended imprisonment. The architect and the structural engineer were acquitted. The defense of the convicted civil engineer announced an appeal, as did the public prosecutor's office, which was not satisfied with the two acquittals. On 12 January 2010, the Federal Court of Justice (BGH) overturned the acquittal of the expert (a graduate engineer specializing in civil engineering) and referred the matter back to another criminal division of the Traunstein Regional Court. In making its decision, the BGH cited deficiencies in the earlier assessment of the evidence; in particular, the regional court had not explained in a comprehensible manner why the representatives of the city of Bad Reichenhall would not have acted differently had the expert clearly warned them (for example, by clearing the roof or restricting the opening hours). The regional court acquitted the expert again in a second trial. Reallocation and redesign The entire building complex was demolished by March 2007, with work pausing between December 2006 and January 2007. A referendum on the future of the site took place in Bad Reichenhall, in which 53% of those who voted were in favor of building a new ice rink and swimming pool on the site, while the city was planning a tourism college there. In order to demonstrate the will of the majority of the population and to commemorate the victims of the collapse of the ice skating rink, the songwriter Hans Söllner, who had given a benefit concert in Reichenhall for the victims' relatives in the spring of 2006, shortly after the accident, organized a sit-in on the site of the collapsed ice rink from 16 January to 10 February 2009. Nevertheless, the city of Bad Reichenhall ignored the referendum and the demonstration, mainly for cost reasons, and stuck to the university project, which was to be realized on the site from September 2009 as the Bad Reichenhall campus of the IUBH School of Business and Management. However, the campus is now located at the hotel management school, after the university did not extend the leasehold agreement for the site of the former ice skating and swimming hall in 2013 due to insufficient capacity utilization. The majority of the site is therefore currently undeveloped.
In 2016 it became known that the Bavarian State Office for Weights and Measures, which was moving to Bad Reichenhall as part of a relocation of state authorities, was to receive a new building on the property. At the beginning of 2010, a memorial was built on a small section of the former ice rink area and was officially inaugurated on 20 November 2010. The planning for it, which had begun shortly after the accident, had come to a standstill in 2008 during the effort to find a solution acceptable to all the relatives. The city of Bad Reichenhall then, in consultation with a small circle of relatives, privately commissioned the artist Karl-Martin Hartmann. The costs of the memorial remained secret, but are said to have been in the range of several hundred thousand euros. Individual relatives had vehemently and publicly protested against the project until the very end. On 2 January 2016 at 3:54 p.m., the time of the accident, in the presence of the mayor of Bad Reichenhall, Herbert Lackner, the relatives gathered at the memorial made of colorful glass steles and remembered the 15 people who were killed in the accident. Following the short, private memorial service, an ecumenical service was held in the Church of St. Zeno. This commemoration on the 10th anniversary of the accident was also reported, among other outlets, on the national television news program Tagesthemen. See also Katowice Trade Hall roof collapse – a similar accident on 28 January 2006, in Katowice, Poland. List of structural failures and collapses Structural integrity and failure Structural robustness External links USA Today story on the collapse German ice rink toll climbs to 14, cnn.com Pressing questions over rink tragedy 'Final body' found at German rink, BBC News Building and structure collapses in 2006 2006 in Germany 2006 industrial disasters 2006 in Bavaria Building and structure collapses in Germany January 2006 events in Germany Berchtesgadener Land 2006 disasters in Germany Ice rinks
Bad Reichenhall Ice Rink
Engineering
3,159
48,862,532
https://en.wikipedia.org/wiki/Christian%20Hartinger
Christian G. Hartinger (born 1974) is an Austrian-born New Zealand bioinorganic chemist known for his work in metal-based anticancer drugs. In 2022 he was elected a Fellow of the Royal Society Te Apārangi. Scientific career Hartinger studied chemistry at the University of Vienna, earning his MSc in 1999 and his PhD in 2001 under Bernhard Keppler. He was an Erwin Schrödinger Fellow with Paul Dyson at the École Polytechnique Fédérale de Lausanne from 2006 to 2008 and obtained his habilitation at the University of Vienna in 2009. In 2011, Hartinger was appointed the position of associate professor at Waipapa Taumata Rau, where he currently serves and in 2015 was promoted to professor. Hartinger's research interests are in bioinorganic chemistry, medicinal chemistry, and bioanalytical chemistry, where he uses an interdisciplinary approach in drug discovery. He is specially interested in the development of metal-centred anticancer agents, particularly ruthenium anticancer drugs, and using analytical methods to characterise their behaviour in the presence of biomolecules. Hartinger has now published over 169 publications and has an h-index of 69. In 2022 Hartinger was elected a Fellow of the Royal Society Te Apārangi. The society said his "innovative approaches have established new directions in metallodrug research, and his developed methodologies continue to have far-reaching impact in the community. His findings challenge paradigms about the reactivity of metal compounds towards biomolecules and thereby inform the design of novel biomaterials". Distinctions/honours Selected research outputs Meier, S. M.; Novak, M. S.; Kandioller, W.; Jakupec, M. A.; Roller, A.; Keppler, B. K.; Hartinger, C. G., Aqueous chemistry and antiproliferative activity of a pyrone-based phosphoramidate Ru(arene) anticancer agent. Dalton Trans, 2014, 43 (26), 9851–9855 Meier, S. M.; Babak, M. V.; Keppler, B. K.; Hartinger, C. G., Efficiently detecting metallodrug-protein adducts: ion trap versus time-of-flight mass analyzers. ChemMedChem, 2014, 9 (7), 1351–1355 Hartinger, C. G.; Groessl, M.; Meier, S. M.; Casini, A.; Dyson, P. J., Application of mass spectrometric techniques to delineate the modes-of-action of anticancer metallodrugs. Chem Soc Rev, 2013, 42 (14), 6186–6199 Meier, S. M.; Novak, M.; Kandioller, W.; Jakupec, M. A.; Arion, V. B.; Metzler-Nolte, N.; Keppler, B. K.; Hartinger, C. G., Identification of the structural determinants for anticancer activity of a ruthenium arene peptide conjugate. Chem Eur J, 2013, 19 (28), 9297–9307 Kandioller, W.; Balsano, E.; Meier, S. M.; Jungwirth, U.; Göschl S.; Roller, A.; Jakupec, M. A.; Berger, W.; Keppler, B. K.; Hartinger, C. G., Organometallic anticancer complexes of lapachol: metal centre-dependent formation of reactive oxygen species and correlation with cytotoxicity. Chem Commun, 2013, 49 (32), 3348–3350 Meier, S. M.; Hanif, M.; Pichler, V.; Novak, M.; Jirkovsky, E.; Jakupec, M. A.; Davey, C. A.; Keppler, B. K.; Hartinger, C. G., Novel metal(II) arene 2-pyridinecarbothioamides: A rationale to orally active organometallic anticancer agents. Chem Sci, 2013, 4 (4), 1837–1846 Babak, M. V.; Meier, S. M.; Legin, A. A.; Adib Razavi, M. S.; Roller, A.; Jakupec, M. A.; Keppler, B. K.; Hartinger, C. G., Am(m)ines make the difference: organoruthenium am(m)ine complexes and their chemistry in anticancer drug development. Chemistry, 2013, 19 (13), 4308–4318 Kurzwernhart, A.; Kandioller, W.; Bächler S.; Bartel, C.; Martic, S.; Buczkowska, M.; Mühlgassner, G.; Jakupec, M. 
A.; Kraatz, H.; Berdnarski, P. J.; Arion, V. B.; Markos, D.; Keppler, B. K.; Hartinger, C. G., Structure-activity relationships of targeted RuII(η6-p-cymene) anticancer complexes with flavonol-derived ligands. J Med Chem, 2012, 55 (23), 10512-10522 References External links Hartinger group Webpage University of Auckland staff profile Articles on Google Scholar 21st-century New Zealand chemists Inorganic chemists Analytical chemists New Zealand chemists Academic staff of the University of Auckland University of Vienna alumni 1974 births Living people Scientists from Vienna Fellows of the Royal Society of New Zealand
Christian Hartinger
Chemistry
1,272
36,145,695
https://en.wikipedia.org/wiki/Triangular%20network%20coding
In coding theory, triangular network coding (TNC) is a non-linear network coding based packet coding scheme introduced by . Previously, packet coding for network coding was done using linear network coding (LNC). The drawback of LNC over a large finite field is its high encoding and decoding computational complexity. While linear encoding and decoding over GF(2) alleviates this complexity concern, coding over GF(2) comes at the cost of degraded throughput performance. The main contribution of triangular network coding is to reduce the worst-case decoding computational complexity — from the cubic cost of Gaussian elimination, O(n³), to the quadratic cost of back-substitution, O(n²), where n is the total number of data packets being encoded in a coded packet — without degrading the throughput performance, with a code rate comparable to that of optimal coding schemes. Triangular codes have also been proposed as fountain codes to achieve near-optimal performance with low encoding and decoding computational complexity. It has further been shown that triangular-based fountain codes can even outperform optimized Luby transform codes. Coding and decoding In TNC, coding is performed in two stages. First, redundant "0" bits are added at the head and tail of each packet such that all packets are of uniform bit length. Then the packets are XOR coded, bit by bit. The "0" bits are added in such a way that the redundant "0" bits added to each packet generate a triangular pattern. In essence, the TNC decoding process, like the LNC decoding process, involves Gaussian elimination. However, since the packets in TNC have been coded in such a manner that the resulting coded packets form a triangular pattern, the computational process of triangularization, with complexity of O(n³), where n is the number of packets, can be bypassed. The receiver only needs to perform back-substitution for each bit location, which has a much lower worst-case complexity. References Coding theory Finite fields Information theory
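The decoding advantage comes from the fact that a triangular system over GF(2) can be solved by back-substitution alone, without the triangularization step of full Gaussian elimination. The sketch below is a deliberately simplified illustration of that idea: it uses a toy upper-triangular combination of equal-length packets rather than the actual TNC packet format with redundant head and tail bits, and the function names are purely illustrative.

# Toy illustration (not the actual TNC packet format): packets combined in an
# upper-triangular pattern over GF(2) can be recovered by back-substitution
# alone, without full Gaussian elimination.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """coded[i] = packets[i] XOR packets[i+1] XOR ... XOR packets[n-1]."""
    n = len(packets)
    coded = []
    for i in range(n):
        acc = packets[i]
        for j in range(i + 1, n):
            acc = xor_bytes(acc, packets[j])
        coded.append(acc)
    return coded

def decode(coded):
    """Back-substitution: solve the triangular system one packet at a time."""
    n = len(coded)
    decoded = [b""] * n
    decoded[n - 1] = coded[n - 1]            # bottom row has a single unknown
    for i in range(n - 2, -1, -1):
        # coded[i] = packets[i] XOR coded[i+1], so one XOR recovers packets[i]
        decoded[i] = xor_bytes(coded[i], coded[i + 1])
    return decoded

original = [b"alpha", b"bravo", b"charl", b"delta"]   # equal-length packets
assert decode(encode(original)) == original

In real TNC the triangular structure comes from the staggered "0"-bit padding of the packets rather than from a fixed combination like this, but the back-substitution step rests on the same idea.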
Triangular network coding
Mathematics,Technology,Engineering
396
1,046,120
https://en.wikipedia.org/wiki/Canonicalization
In computer science, canonicalization (sometimes standardization or normalization) is a process for converting data that has more than one possible representation into a "standard", "normal", or canonical form. This can be done to compare different representations for equivalence, to count the number of distinct data structures, to improve the efficiency of various algorithms by eliminating repeated calculations, or to make it possible to impose a meaningful sorting order. Usage cases Filenames Files in file systems may in most cases be accessed through multiple filenames. For instance in Unix-like systems, the string "/./" can be replaced by "/". In the C standard library, the function realpath() performs this task. Other operations performed by this function to canonicalize filenames are the handling of /.. components referring to parent directories, simplification of sequences of multiple slashes, removal of trailing slashes, and the resolution of symbolic links. Canonicalization of filenames is important for computer security. For example, a web server may have a restriction that only files under the cgi directory C:\inetpub\wwwroot\cgi-bin may be executed. This rule is enforced by checking that the path starts with C:\inetpub\wwwroot\cgi-bin\ and only then executing it. While the file C:\inetpub\wwwroot\cgi-bin\..\..\..\Windows\System32\cmd.exe initially appears to be in the cgi directory, it exploits the .. path specifier to traverse back up the directory hierarchy in an attempt to execute a file outside of cgi-bin. Permitting cmd.exe to execute would be an error caused by a failure to canonicalize the filename to the simplest representation, C:\Windows\System32\cmd.exe, and is called a directory traversal vulnerability. With the path canonicalized, it is clear the file should not be executed. Unicode In Unicode, many accented letters can be represented in more than one way. For example, é can be represented in Unicode as the Unicode character U+0065 (LATIN SMALL LETTER E) followed by the character U+0301 (COMBINING ACUTE ACCENT), but it can also be represented as the precomposed character U+00E9 (LATIN SMALL LETTER E WITH ACUTE). This makes string comparison more complicated, since every possible representation of a string containing such glyphs must be considered. To deal with this, Unicode provides the mechanism of canonical equivalence. In this context, canonicalization is Unicode normalization. Variable-width encodings in the Unicode standard, in particular UTF-8, may cause an additional need for canonicalization in some situations. Namely, by the standard, in UTF-8 there is only one valid byte sequence for any Unicode character, but some byte sequences are invalid, i.e., they cannot be obtained by encoding any string of Unicode characters into UTF-8. Some sloppy decoder implementations may accept invalid byte sequences as input and produce a valid Unicode character as output for such a sequence. If one uses such a decoder, some Unicode characters effectively have more than one corresponding byte sequence: the valid one and some invalid ones. This could lead to security issues similar to the one described in the previous section. Therefore, if one wants to apply some filter (e.g., a regular expression written in UTF-8) to UTF-8 strings that will later be passed to a decoder that allows invalid byte sequences, one should canonicalize the strings before passing them to the filter. 
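A minimal Python sketch of the two defenses discussed so far — canonicalizing filenames before an access-control check, and normalizing or rejecting Unicode/UTF-8 input — using only the standard library; the directory name, function names and sample strings are illustrative, not taken from any particular system:

import os
import unicodedata

# Filename canonicalization: resolve "..", "." and symlinks *before* checking
# that the requested file really lives under the permitted directory.
CGI_ROOT = os.path.realpath(r"C:\inetpub\wwwroot\cgi-bin")   # illustrative root

def is_allowed(requested_path: str) -> bool:
    canonical = os.path.realpath(requested_path)
    # Compare canonical forms, not the raw request string, so that a request
    # containing "..\..\..\Windows\System32\cmd.exe" is rejected.
    return os.path.commonpath([canonical, CGI_ROOT]) == CGI_ROOT

# Unicode canonicalization: compare strings in a normal form (here NFC).
precomposed = "\u00e9"    # 'é' as a single precomposed code point
decomposed = "e\u0301"    # 'e' followed by a combining acute accent
assert precomposed != decomposed
assert unicodedata.normalize("NFC", precomposed) == unicodedata.normalize("NFC", decomposed)

# UTF-8 byte strings: strict decoding rejects invalid byte sequences, and
# re-encoding the result yields the single valid UTF-8 form of the text.
def canonical_utf8(data: bytes) -> bytes:
    return data.decode("utf-8", errors="strict").encode("utf-8")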
In this context, canonicalization is the process of translating every string character to its single valid byte sequence. An alternative to canonicalization is to reject any strings containing invalid byte sequences. URL A canonical URL is a URL for defining the single source of truth for duplicate content. Use by Google A canonical URL is the URL of the page that Google thinks is most representative from a set of duplicate pages on your site. For example, if you have URLs for the same page, such as https://example.com/?dress=1234 and https://example.com/dresses/1234, Google chooses one as canonical. Note that the pages do not need to be absolutely identical; minor changes in sorting or filtering of list pages do not make the page unique (for example, sorting by price or filtering by item color). The canonical can be in a different domain than a duplicate. Internet With the help of canonical URLs, a search engine knows which link should be provided in a query result. A canonical link element can be used to define a canonical URL. Intranet In intranets, manual searching for information is predominant. In this case, canonical URLs can also be defined in a non-machine-readable form, for example in a guideline. Misc Canonical URLs are usually the URLs that are used for the share action. Since the canonical URL is what appears in search engine results, it is in most cases a landing page. Search engines and SEO In web search and search engine optimization (SEO), URL canonicalization deals with web content that has more than one possible URL. Having multiple URLs for the same web content can cause problems for search engines, specifically in determining which URL should be shown in search results. Most search engines support the canonical link element as a hint to which URL should be treated as the true version. As indicated by John Mueller of Google, having other directives in a page, like the robots noindex element, can give search engines conflicting signals about how to handle canonicalization. Example: http://wikipedia.com http://www.wikipedia.com http://www.wikipedia.com/ http://www.wikipedia.com/?source=asdf All of these URLs point to the homepage of Wikipedia, but a search engine will only consider one of them to be the canonical form of the URL. XML A Canonical XML document is by definition an XML document that is in XML Canonical form, defined by the Canonical XML specification. Briefly, canonicalization removes whitespace within tags, uses particular character encodings, sorts namespace references and eliminates redundant ones, removes XML and DOCTYPE declarations, and transforms relative URIs into absolute URIs. A simple example would be the following two snippets of XML:
<node1 x='1' a="1" b="2">Data</node1    >
<node1 x='1' a="1" b="2">Data</node1>
The first example contains extra spaces in the closing tag of the first node. The second example, which has been canonicalized, has had these spaces removed. Note that only the spaces within the tags are removed under W3C canonicalization, not those between tags.
A full summary of canonicalization changes is listed below: The document is encoded in UTF-8 Line breaks normalized to #xA on input, before parsing Attribute values are normalized, as if by a validating processor Character and parsed entity references are replaced CDATA sections are replaced with their character content The XML declaration and document type declaration are removed Empty elements are converted to start-end tag pairs Whitespace outside of the document element and within start and end tags is normalized All whitespace in character content is retained (excluding characters removed during line feed normalization) Attribute value delimiters are set to quotation marks (double quotes) Special characters in attribute values and character content are replaced by character references Superfluous namespace declarations are removed from each element Default attributes are added to each element Fixup of xml:base attributes is performed Lexicographic order is imposed on the namespace declarations and attributes of each element Computational linguistics In morphology and lexicography, a lemma is the canonical form of a set of words. In English, for example, run, runs, ran, and running are forms of the same lexeme, so we can select one of them; ex. run, to represent all the forms. Lexical databases such as Unitex use this kind of representation. Lemmatisation is the process of converting a word to its canonical form. See also References External links Canonical XML Version 1.0, W3C Recommendation OWASP Security Reference for Canonicalization Computing terminology
Canonicalization
Technology
1,742
3,980,048
https://en.wikipedia.org/wiki/Computational%20scientist
A computational scientist is a person skilled in scientific computing. This person is usually a scientist, a statistician, an applied mathematician, or an engineer who applies high-performance computing and sometimes cloud computing in different ways to advance the state-of-the-art in their respective applied discipline; physics, chemistry, social sciences and so forth. Thus scientific computing has increasingly influenced many areas such as economics, biology, law, and medicine to name a few. Because a computational scientist's work is generally applied to science and other disciplines, they are not necessarily trained in computer science specifically, though concepts of computer science are often used. Computational scientists are typically researchers at academic universities, national labs, or tech companies. One of the tasks of a computational scientist is to analyze large amounts of data, often from astrophysics or related fields, as these can often generate huge amounts of data. Computational scientists often have to clean up and calibrate the data to a usable form for an effective analysis. Computational scientists are also tasked with creating artificial data through computer models and simulations. References Computational science Computer occupations Science occupations Mathematical science occupations Computational fields of study
Computational scientist
Mathematics,Technology
230
29,834,362
https://en.wikipedia.org/wiki/Biofunctionalisation
In the field of bioengineering, biofunctionalisation (or biofunctionalization) is the modification of a material to have biological function and/or stimulus, whether permanent or temporary, while at the same time being biologically compatible. Various types of medical implants are designed to biofunctionalize so that they are accepted by the host organism to replace or repair a defective biological function. References Biotechnology
Biofunctionalisation
Biology
88
4,498,471
https://en.wikipedia.org/wiki/Groundwater%20sapping
Groundwater sapping is a geomorphic erosion process that results in the headward migration of channels in response to near constant fluid discharge at a fixed point. The consistent flow of water displaces fine sediments which physically and chemically weathers rocks. Valleys that appear to have been created by groundwater sapping occur throughout the world in areas such as England, Colorado, Hawai’i, New Zealand, and many other places. However, it is difficult to characterize a landform as being formed exclusively by groundwater sapping due to phenomena such as pluvial runoff, plunge-pool undercutting, changes in water table level, and inconsistent groundwater flow. An example of drainage ways created purely by the outflow of subsurface fluids can be seen on the foreshores of beaches. As the surge of water and sand brought to land by a wave retreats seaward, the film of water becomes thinner until it forms rhomboid shaped patterns in the sand. Small fans form at the apex of the rhombic features, which are eventually fed by the remaining backflow of water traveling downslope. Channels begin to form headward in the form of millimeter wide rills along the sides of the fans; the creation of these small channel networks culminates when the last of the backwash dissipates. This is one of the processes involved in the formation of gullies, such as lavaka. Erosion by sapping tends to produce steep-sided U-shaped valleys of fairly uniform width with box-like, "theater-shaped" headwalls. This contrasts with the more common branching or dendritic pattern of V-shaped valleys produced by overland flows that become wider with distance from their source. Groundwater sapping has been suggested as the cause for erosion of the valley and channel networks on Mars, although studies show that groundwater alone can not excavate and transport the material required to create these canyons. Geomorphology and geology Sapping typically occurs in permeable sandstones associated with high water tables underlain by an impermeable layer. Limited in its ability to travel vertically, water is forced to travel laterally where it eventually seeps out of the ground. Limestones, siltstones, and shales can be found in valleys created by groundwater sapping as well. Characteristic landforms Characteristic landforms caused by groundwater sapping are “theater-shaped” channel heads and “U-shaped” valleys, which have a consistent width and steep valley walls. Weakened basal rocks are unable to support more resistant upper layers, causing valley head and sidewalls to collapse inwards. Theater-shaped channel heads are characterized by overhanging sidewalls that are relatively dry compared to the lower level rocks below the zone of seepage. The development of theater heads has been related to “ground-water flow direction, jointing and faulting, permeability contrasts, formation slope and dip angles, and formation cohesion”. The morphology of channels and valleys created by sapping are highly dependent on regional scale geology, and can be hard to distinguish from features created through alternative processes. Chemical precipitates can be used as indicators of groundwater water discharge implying that a valley or channel may have been formed as a result of sapping. These sorts of clues are important in areas where water is not currently being discharged. Notable landmarks Colorado Plateau Many “natural amphitheaters” can be found near the Colorado River. 
It is thought that sapping may have been more common in this area in the past when there was a higher water table. A shift in the climate and associated precipitation or the incision of the Colorado River are two factors that may have caused a change in the water table level. Mars Short, stream-like, deep channels have been observed on Mars. Very similar to valleys created by groundwater sapping here on Earth, the discovery of the Martian valleys has prompted numerous studies that aim to better understand the process of sapping. See also Headward erosion Groundwater discharge Valley networks (Mars) References External links Simulation of Groundwater Sapping Alan D. Howard, "Introduction: Groundwater Sapping on Mars and Earth" in Sapping Features of the Colorado Plateau, edited by A.D. Howard, R. C. Kochel, and H. R. Holt, NASA SP-491, p. 1-5 (1988) Julie E. Laity and Michael C. Malin, "Sapping processes and the development of theater-headed valley networks on the Colorado Plateau," Geological Society of America Bulletin: Vol. 96, No. 2 (1985), pp. 203–217 (Abstract). Geomorphology Hydrogeology Aquifers
Groundwater sapping
Environmental_science
959
102,219
https://en.wikipedia.org/wiki/Net%20protein%20utilization
The net protein utilization (NPU) is the percentage of ingested nitrogen that is retained in the body. Rating It is used to determine the nutritional efficiency of protein in the diet, that is, it is used as a measure of "protein quality" for human nutritional purposes. As a value, NPU can range from 0 to 1 (or 100), with a value of 1 (or 100) indicating 100% utilization of dietary nitrogen as protein and a value of 0 an indication that none of the nitrogen supplied was converted to protein. Certain foodstuffs, such as eggs or milk, rate as 1 on an NPU chart. Experimentally, this value can be determined by determining dietary protein intake and then measuring nitrogen excretion. One formula for apparent NPU is: NPU = {0.16 × (24 hour protein intake in grams)} - {(24 hour urinary urea nitrogen) + 2} - {0.1 × (ideal body weight in kilograms)} / {0.16 × (24 hour protein intake in grams)} NPU and biological value (BV) both measure nitrogen retention; the difference is that biological value is calculated from nitrogen absorbed, whereas net protein utilization is from nitrogen ingested. Another closely related quantity is the net postprandial protein utilization (NPPU), which is the maximum potential NPU of a dietary protein source under ideal conditions. The Protein Digestibility Corrected Amino Acid Score (PDCAAS) is a more modern rating for determining protein quality, and the current ranking standard used by the FDA. The Digestible Indispensable Amino Acid Score (DIAAS) is a protein quality method, proposed in March 2013 by the Food and Agriculture Organization to replace the current protein ranking standard, the Protein Digestibility Corrected Amino Acid Score (PDCAAS). The proposition is contested, however, due to lack of data. See also Protein efficiency ratio Nitrogen balance References Amino acids Proteins Nutrition
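The grouping of the apparent-NPU formula above is ambiguous when written on one line; the sketch below assumes the entire difference (estimated nitrogen intake minus estimated nitrogen losses) is divided by nitrogen intake, and the input values are illustrative rather than real measurements.

def apparent_npu(protein_intake_g, urinary_urea_nitrogen_g, ideal_body_weight_kg):
    # Dietary protein is roughly 16% nitrogen by mass
    nitrogen_intake = 0.16 * protein_intake_g
    # Urea nitrogen plus the two fixed-loss terms from the formula above
    nitrogen_losses = (urinary_urea_nitrogen_g + 2) + 0.1 * ideal_body_weight_kg
    return (nitrogen_intake - nitrogen_losses) / nitrogen_intake

# Illustrative 24-hour values only, not measured data
print(round(apparent_npu(protein_intake_g=100, urinary_urea_nitrogen_g=4, ideal_body_weight_kg=70), 2))  # about 0.19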
Net protein utilization
Chemistry
401
966,493
https://en.wikipedia.org/wiki/Mountain%20tapir
The mountain tapir, also known as the Andean tapir or woolly tapir (Tapirus pinchaque), is the smallest of the four widely recognized species of tapir. It is found only in certain portions of the Andean Mountain Range in northwestern South America. As such, it is the only tapir species to live outside of tropical rainforests in the wild. It is most easily distinguished from other tapirs by its thick woolly coat and white lips. The species name comes from the term "La Pinchaque", an imaginary beast said to inhabit the same regions as the mountain tapir. Description Mountain tapirs are black or very dark brown, with occasional pale hairs flecked in amongst the darker fur. The fur becomes noticeably paler on the underside, around the anal region, and on the cheeks. A distinct white band runs around the lips, although it may vary in extent, and there are usually also white bands along the upper surface of the ears. In adults, the rump has paired patches of bare skin, which may help to indicate sexual maturity. The eyes are initially blue, but change to a pale brown as the animal ages. Unlike all other species of tapir, the fur is long and woolly, especially on the underside and flanks, reaching or more in some individuals. Adults are usually around in length and in height at the shoulder. They typically weigh between , and while the sexes are of similar size, females tend to be around heavier than the males. Like the other types of tapir, they have small, stubby tails and long, flexible proboscises. They have four toes on each front foot and three toes on each back foot, each with large nails and supported by a padded sole. A patch of bare skin, pale pink or grey in colour, extends just above each toe. Reproduction Female mountain tapirs have a 30-day estrous cycle, and typically breed only once every other year. During courtship, the male chases the female and uses soft bites, grunts, and squeals to get her attention, while the female responds with frequent squealing. After a gestation period of 392 or 393 days, the female gives birth to a single young; multiple births are very rare. Newborn mountain tapirs weigh about and have a brown coat with yellowish-white spots and stripes. Like adults, baby mountain tapirs have thick, woolly fur to help keep them warm. Weaning begins at around three months of age. The immature coloration fades after about a year, but the mother continues to care for her young for around 18 months. Mountain tapirs reach sexual maturity at age three and have lived up to 27 years in captivity. Ecology Tapirs are herbivores, and eat a wide range of plants, including leaves, grasses, and bromeliads. In the wild, particularly common foods include lupins, Gynoxys, ferns, and umbrella plants. It also seeks out natural salt licks to satisfy its need for essential minerals. Mountain tapirs are also important seed dispersers in their environments, and have been identified as a keystone species of the high Andes. A relatively high proportion of plant seeds eaten by mountain tapirs successfully germinate in their dung, probably due to a relatively inefficient digestive system and a tendency to defecate near water. Although a wide range of seeds are dispersed in this manner, those of the endangered wax palm seem to rely almost exclusively on mountain tapirs for dispersal, and this plant, along with the highland lupine, declines dramatically whenever the animal is extirpated from an area. Predators of mountain tapirs include cougars, spectacled bears, and, less commonly, jaguars. 
Attacks by invasive domestic dogs have also been reported. Behavior When around other members of their species, mountain tapirs communicate through high-pitched whistles, and the males occasionally fight over estrous females by trying to bite each other's rear legs. But for the most part, mountain tapirs are shy and lead solitary lives, spending their waking hours foraging for food on their own along well-worn tapir paths. Despite their bulk, they travel easily through dense foliage, up the steep slopes of their hilly habitats, and in water, where they often wallow and swim. Mountain tapirs are generally crepuscular, although they are more active during the day than other species of tapirs. They sleep from roughly midnight to dawn, with an additional resting period during the hottest time of the day for a few hours after noon, and prefer to bed down in areas with heavy vegetation cover. Mountain tapirs forage for tender plants to eat. When trying to access high plants, they will sometimes rear up on their hind legs to reach and then grab with their prehensile snouts. Though their eyesight is lacking, they get by on their keen senses of smell and taste, as well as the sensitive bristles on their proboscises. Males will frequently mark their territory with dung piles, urine, and rubbings on trees, and females will sometimes engage in these behaviors, as well. The territories of individuals usually overlap, with each animal claiming over , and females tend to have larger territories than males. Distribution and habitat The mountain tapir is found in the cloud forests and páramo of the Eastern and Central Cordilleras mountains in Colombia, Ecuador, and the far north of Peru. Its range may once have extended as far as western Venezuela, but it has long been extirpated from that region. It commonly lives at elevations between , and since at this altitude temperatures routinely fall below freezing, the animal's woolly coat is essential. During the wet season, mountain tapirs tend to inhabit the forests of the Andes, while during the drier months, they move to the páramo, where fewer biting insects pester them. The mountain tapir has no recognised subspecies. In Peru, it is protected in the National Sanctuary Tabaconas Namballe. The species needs continuous stretches of cloud forest and páramo, rather than isolated patches, to successfully breed and maintain a healthy population, and this obstacle is a major concern for conservationists trying to protect the endangered animal. Evolution The mountain tapir is the least specialised of the living species of tapir, and has changed the least since the origin of the genus in the early Miocene. Genetic studies have shown that mountain tapirs diverged from its closest relative, the Brazilian tapir, in the late Pliocene, around three million years ago. This would have been shortly after the formation of the Panamanian Isthmus, allowing the ancestors of the two living species to migrate southward from their respective points of origin in Central America as part of the Great American Interchange. However, the modern species most likely originated in the Andes, some time after this early migration. Molecular dating methods based on three mitochondrial cytochrome genes found T. pinchaque to be within a paraphyletic T. terrestris complex. Vulnerability The mountain tapir is the most threatened of the five Tapirus species, classified as "Endangered" by the IUCN in 1996. 
According to the IUCN, there was a 20% chance that the species could be extinct as early as 2014. Due to the fragmentation of its surviving range, populations may already have fallen below the level required to sustain genetic diversity. Historically, mountain tapirs have been hunted for their meat and hides, while the toes, proboscises, and intestines are used in local folk medicines and as aphrodisiacs. Since they will eat crops when available, they are also sometimes killed by farmers protecting their produce. Today, deforestation for agriculture and mining, and poaching are the main threats to the species. There may be only 2,500 individuals left in the wild today, making it all the more difficult for scientists to study them. Also, very few individuals are found in zoos. Only a handful of breeding pairs of this species exist in captivity worldwide: at the Los Angeles Zoo, the Cheyenne Mountain Zoo in Colorado Springs, and, as of 2006, the San Francisco Zoo. In Canada, a mating pair is kept in Langley, BC, at the Mountain View Conservation and Breeding Centre. The nine individuals in captivity are descendants of just two founder animals. This represents a distinct lack of genetic diversity and may not bode well for their continued existence in captivity. The three zoos that house this species are working to ensure that the remaining wild populations of mountain tapirs are protected. Two mountain tapirs were sent from San Francisco Zoo to Cali Zoo, making them the only captive tapirs within the species' natural home range; one male kept in Pitalito could be moved to the Cali Zoo to form a breeding pair. References Video/Multimedia Video - Mountain Tapirs at the San Francisco Zoo External links Tapir Specialist Group – Mountain tapir ARKive – images and movies of the mountain tapir (Tapirus pinchaque) Tapirs Mammals of the Andes Mammals of Colombia Mammals of Ecuador Mammals of Peru EDGE species Mammals described in 1829 Páramo fauna
Mountain tapir
Biology
1,859
57,822,802
https://en.wikipedia.org/wiki/Parthenin
Parthenin is a chemical compound classified as a sesquiterpene lactone. It has been isolated from Parthenium hysterophorus. It is genotoxic, allergenic, and an irritant. Parthenin is believed to be responsible for the dermatitis caused by Parthenium hysterophorus. References sesquiterpene lactones Vinylidene compounds Plant toxins
Parthenin
Chemistry
92
74,391,075
https://en.wikipedia.org/wiki/Charenton%20Metro-Viaduct
The Charenton Metro-Viaduct is a railroad girder bridge located in the French department of Val-de-Marne in the Île-de-France region. It links the communes of Charenton-le-Pont and Maisons-Alfort, crossing the Marne river, as well as the A4 autoroute and 103 departmental road. First put into operation in 1970, the viaduct is used by trains on line 8 of the Paris metro. The total length of the viaduct is 199 m. Made up of steel beams resting on concrete piers, the viaduct has a continuous gradient, due to the difference in level between the two banks of the Marne. It was renovated for the first time in 2011. Location The viaduct is located between the Charenton-Écoles and Maisons-Alfort–Stade stations. It crosses the 103 departmental road, the A4 autoroute and then the Marne. Since Charenton-le-Pont is located on a hillside overlooking the Marne, the viaduct is inclined to compensate for the difference in level between the two stations. The structure is flanked by two tramways, enabling line 8 to return underground. The surrounding bridges are the Charenton bridge, to the east, and the railway viaduct of the Paris–Marseille railway, to the west. The structure is only a few hundred meters away from the confluence of the Marne and Seine rivers. Technical specifications The total length of the viaduct is 199 m, with an average height of 15.05 m between the rail and the water level. The structure has a continuous gradient of 41 mm/m towards Maisons-Alfort. It comprises two 55.5 m central sections and two 30 m lateral sections. The structure rests on six supports for three concrete piles, one of which is set in the riverbed. The steel deck is designed to aesthetically blend into the landscape. It consists of a continuous beam supported by two vertical solid-core girders located between the two tracks. By enclosing the lower part of the trains, they minimize rolling noise. The track rests on ballast, which is laid on a concrete screed. History With the Charenton bridge becoming too saturated, in 1965 it was decided to extend line 8 from Charenton-Écoles to Maisons-Alfort and Créteil. To achieve this, the line had to cross the Marne, but the difference in level between the two banks was too great to allow the line to pass under the river, as Charenton was located on a highland plateau. Instead, an elevated viaduct crossing was chosen – a first for a Paris metro since 1909. This extension, which had been planned as early as the 1930s, had been postponed due to World War II. In anticipation of the metro's construction, buildings in Charenton's Rue de Paris were expropriated in 1937, including the Hôtel du Plessis-Bellière. The upper part of Square Jules Noël and Place de Valois, as well as the buildings lining it, were built to replace the destroyed buildings. Construction of the viaduct began in the spring of 1968, with the construction of the piers. The steel girders were installed in June 1969, and the viaduct was completed in November, enabling load tests to be carried out using five-motor Sprague-Thomson trains. The line was opened to traffic on 19 September 1970, with the extension of line 8 from Charenton-Écoles to Maisons-Alfort-Stade. In the context of the extension of line 8 to Pointe du Lac, the viaduct was closed for renovations during the summers of 2010 and 2011, with a replacement shuttle service in place during this period. The aim of the renovation is to improve the soundproofing of the structure and adapt it to the increased traffic associated with the extension. 
After the removal of the tracks and ballast, the concrete slab was replaced and then covered with an anti-vibration rubber coating, reducing noise levels by 10 dB per train. At the same time, the viaduct was repainted in a blue-green color, replacing the original blue-gray. The total cost of the project was four million euros. References See also Bibliography Jean Robert, Notre Métro, Paris, éd. Jean Robert, 1983, 2e éd., 511 p. Clive Lamming, La grande histoire du métro parisien de 1900 à nos jours, Atlas, October 2015, 336 p. Related articles Paris Métro Line 8 Pont ferroviaire Crueize Viaduct Rue de Paris (Charenton-le-Pont) External links Architectural resource: Structurae Trains Paris Métro Viaducts in France Île-de-France Engineering
Charenton Metro-Viaduct
Technology,Engineering
951
72,082,890
https://en.wikipedia.org/wiki/TOI-1136
TOI-1136 is a G-type main-sequence star away in the constellation Draco. It is slightly smaller than the Sun and similar in mass and temperature, but is much younger, with an age of about 700 million years. It hosts a system of at least six, and possibly seven, exoplanets. Planetary system TOI-1136 was discovered to have six transiting planets in 2022 using the Transiting Exoplanet Survey Satellite (TESS), all orbiting closer to their star than Mercury is to the Sun. All of them are Neptune-sized or mini-Neptunes, and their masses have been measured using a combination of radial velocity and transit-timing variations, showing them to have low densities. The planets are in an orbital resonance, with period ratios near 3:2, 2:1, 3:2, 7:5, and 3:2. A possible single transit of a seventh planet was also identified. This candidate planet would also be sub-Neptune-sized, but its orbit is poorly constrained. If this is confirmed, it would make TOI-1136 one of the largest known planetary systems. See also HD 110067 Kepler-90 Kepler-385 TOI-178 TRAPPIST-1 References Draco (constellation) G-type main-sequence stars Planetary systems with six confirmed planets J12484436+6451191 BD+65 0902 1136 142276270
TOI-1136
Astronomy
302
21,289,989
https://en.wikipedia.org/wiki/Deaerating%20feed%20tank
A deaerating feed tank (DFT), often found in steam plants that propel ships, is located after the main condensate pump and before the main feed booster pump. Purpose It has these three purposes: Remove dissolved oxygen (“air”) from the condensate Pre-heat the feedwater Provide a storage/surge volume Based on the relevant theoretical Rankine cycle diagram, there are four main processes, or stages: Stages 1→2: Water pressure is raised from low to high by one or more pumps. Stages 2→3: Water is heated to boiling in either a conventional boiler or a nuclear propulsion steam generator. Stages 3→4: Steam is expanded in the steam turbine to turn the ship's propeller and power the ship's turbine generators. Stages 4→1: Low pressure wet steam leaving the turbine is condensed in the condenser. A high vacuum in the condenser improves efficiency. In the practical implementation of a Rankine cycle, it is common to break the pump process (stages 1→2) into three pumps (in water flow order: condensate pump, feed booster pump and then feedwater pump). Details Dissolved oxygen is removed by injecting auxiliary exhaust steam into the upper portion of the tank (above the feed water level) at roughly the same location (elevation) that the condensate enters the tank. The two are put in close physical contact over a large surface area to maximize heat transfer. As the condensate is heated, the steam drives off any dissolved gases. Since the steam is injected above the feed water level, a steam blanket forms above the water to keep the non-condensable gases from re-entering the feed water. There is a connection to the gland exhaust system on the upper portion of the DFT that withdraws the oxygen and other non-condensable gases as they are driven from the condensate. Removing oxygen minimizes corrosion and improves the vacuum quality. Liquid water loses its capacity to hold dissolved gases as its temperature rises. The feed tank temperature is maintained just below boiling. The steam heats the water in the tank. The water in the tank serves as a surge volume within the steam plant. The deaerating feed tank's surge volume allows the ship to change "bells" (steam turbine power output) and change the ship's speed without running the feed pump dry or flooding the turbines with liquid water. When the officer in charge on the ship's bridge orders an increased bell, more steam turbine power output is demanded, using more steam and requiring an increased feed rate. This draws more water from the condenser, potentially to the point of running it dry and starving the boiler or steam generator, resulting in a loss of propulsion. This continues until the water, converted to steam, has delivered its energy to the turbine and been condensed back in the condenser. When the bell is decreased, turbine power output is reduced and the feed rate drops. Without the DFT's surge volume, less water is drawn from the condenser and the condensate level rises, potentially covering condenser tubes and reducing the ability of the condenser to maintain vacuum. If the level is allowed to go high enough, vacuum could be lost and/or water could impinge on (and damage) the turbine blades, as the turbine normally sits directly above the condenser. The feed tank serves as a surge volume to take this excess condensate and avoid losing vacuum. References (U.S. Government Printing Office) Marine steam propulsion Steam power
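As a rough numerical illustration of the four stages above (a sketch using typical textbook enthalpy magnitudes, not data for any particular plant), the ideal-cycle thermal efficiency follows directly from the enthalpy at each state:

# Illustrative specific enthalpies in kJ/kg at the four Rankine-cycle states
h1 = 192.0    # state 1: saturated liquid leaving the condenser
h2 = 198.0    # state 2: liquid after the pumps (small pump work added)
h3 = 3350.0   # state 3: superheated steam leaving the boiler or steam generator
h4 = 2250.0   # state 4: wet steam leaving the turbine into the condenser

turbine_work = h3 - h4   # work extracted per kg of steam (stages 3 to 4)
pump_work = h2 - h1      # work supplied to the pumps (stages 1 to 2)
heat_added = h3 - h2     # heat supplied in the boiler (stages 2 to 3)

efficiency = (turbine_work - pump_work) / heat_added
print(f"Ideal-cycle thermal efficiency: {efficiency:.1%}")  # roughly 35% with these numbers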
Deaerating feed tank
Physics
723
7,649,609
https://en.wikipedia.org/wiki/Rubens%20tube
A Rubens tube, also known as a standing wave flame tube, or simply flame tube, is a physics apparatus for demonstrating acoustic standing waves in a tube. Invented by German physicist Heinrich Rubens in 1905, it graphically shows the relationship between sound waves and sound pressure, as a primitive oscilloscope. Today, it is used only occasionally, typically as a demonstration in physics education. Overview A length of pipe is perforated along the top and sealed at both ends - one seal is attached to a small speaker or frequency generator, the other to a supply of a flammable gas (propane tank). The pipe is filled with the gas, and the gas leaking from the perforations is lit. If a suitable constant frequency is used, a standing wave can form within the tube. When the speaker is turned on, the standing wave will create points with oscillating (higher and lower) pressure and points with constant pressure (pressure nodes) along the tube. Where there is oscillating pressure due to the sound waves, less gas will escape from the perforations in the tube, and the flames will be lower at those points. At the pressure nodes, the flames are higher. At the end of the tube gas molecule velocity is zero and oscillating pressure is maximal, thus low flames are observed. It is possible to determine the wavelength from the flame minimum and maximum by simply measuring with a ruler. Explanation Since the time averaged pressure is equal at all points of the tube, it is not straightforward to explain the different flame heights. The flame height is proportional to the gas flow as shown in the figure. Based on Bernoulli's principle, the gas flow is proportional to the square root of the pressure difference between the inside and outside of the tube. This is shown in the figure for a tube without standing sound wave. Based on this argument, the flame height depends non-linearly on the local, time-dependent pressure. The time average of the flow is reduced at the points with oscillating pressure and thus flames are lower. History Heinrich Rubens was a German physicist born in 1865. Though he worked with better remembered physicists such as Max Planck at the University of Berlin on some of the ground work for quantum physics, he is best known for his flame tube, which was demonstrated in 1905. This original Rubens tube was a four-meter section of pipe with approximately 100 holes of 2 mm diameter spaced evenly along its length. When the ends of the pipe are sealed and a flammable gas is pumped into the device, the escaping gas can be lit to form a row of flames of roughly equal size. When sound is applied from one end by means of a loudspeaker, internal pressure will change along the length of the tube. If the sound is of a frequency that produces standing waves, the wavelength will be visible in the series of flames, with the tallest flames occurring at pressure nodes, and the lowest flames occurring at pressure antinodes. The pressure antinodes correspond to the locations with the highest amount of compression and rarefaction. The Guinness record for longest Rubens tube was achieved in 2019, when science show Kvark built a 10 meter Rubens tube at Saku Suurhall. Public displays A Rubens tube was on display at The Exploratory in Bristol, England until it closed in 1999. A similar exhibit using polystyrene beads instead of flames featured in the At-Bristol science centre until 2009. Students make models of Rubens tube at their school science exhibition. 
This display is also found in physics departments at a number of universities. A number of physics shows also have one, such as: Rino Foundation (The Netherlands), Fysikshow Aarhus (Denmark), Fizika Ekspres (Croatia) and ÅA Physics show (Finland). The MythBusters also included a demonstration on their "Voice Flame Extinguisher" episode in 2007. The Daily Planet's The Greatest Show Ever, ran a competition whereby five Canadian science centres competed for the best science centre's experiment/display. Edmonton's Science Centre (Telus World of Science) utilized a Rubens tube, and won the competition. The special was filmed on October 10, 2010. Tim Shaw on the show Street Genius on National Geographic Channel also featured one in Episode 18 "Wave of fire". The artist Emer O'Brien used Rubens tubes as the basis for the sound sculpture featured in her 2012 exhibition Return to Normal at the Wapping Project in London. 2D Rubens tube (pyro board) Overview A 2D Rubens tube, also known as a pyro board, is a plane of Bunsen burners that can demonstrate an acoustic standing wave in two dimensions. Similar to its predecessor, the one dimensional Rubens tube, this standing wave is caused by a multitude of factors. Pressure variation caused by the inflow of propane gas interfering with the input of sound waves into the plane causes changes in the height and color of the flames. The 2D Rubens tube was made famous by a Danish science demonstrator group in Denmark called Fysikshow. Explanation A 2D Rubens tube is made up of a lot of different parts. The main part itself is the rectangular steel box that outputs the propane gas. Steel is generally used for the plane on pyro boards because the compound can generally withstand immense amounts of heat and still be able to maintain its structure. Holes are drilled on the top of the steel plane to output the propane gas that is being constantly and slowly pumped into the steel box. Instead of having a complete steel box, some pyro boards designs have wooden sides to support the steel plane on top. In wooden-style pyro boards, the interior of the box is usually covered with some sort of heat-resistant membrane that prevents the propane inside the box from leaking. On the sides of the steel box are speakers that input a sound into the contained medium. The rate at which the propane gas escapes through the holes on the top of the pyro board is dependent on the intensity of the inputted sound. This relationship is directly proportional, meaning as the intensity of the sound increases, the rate at which the propane gas escapes increases. Since the medium inside the steel box is kept at a constant volume, a standing wave has the ability to be produced. The frequency at which the standing wave can be produced is largely dependent on the physical dimensions of the box and the wavelength of the wave. Since pyro boards range in sizes, each board has its own unique frequencies at which a standing wave can be produced. References External links Information on Rubens' original design in .doc format Fire Physics experiments Acoustics
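A minimal numerical sketch of the explanation above, with illustrative pressure values rather than measurements: the flow through each hole is taken as proportional to the square root of the instantaneous pressure difference, so time-averaging over one acoustic cycle yields lower mean flow, and hence lower flames, where the pressure oscillation is strongest.

import numpy as np

length = 1.0                       # tube length in metres (illustrative)
x = np.linspace(0.0, length, 11)   # positions of the holes along the tube
p_static = 100.0                   # static overpressure inside the tube, arbitrary units
p_acoustic = 80.0                  # acoustic pressure amplitude, arbitrary units
n = 3                              # number of half-wavelengths fitting in the tube

t = np.linspace(0.0, 2.0 * np.pi, 1000)                    # one acoustic cycle
envelope = p_acoustic * np.cos(n * np.pi * x / length)     # standing-wave pressure envelope

# Instantaneous pressure difference across each hole, clipped so flow never reverses
delta_p = np.clip(p_static + np.outer(envelope, np.sin(t)), 0.0, None)
mean_flow = np.sqrt(delta_p).mean(axis=1)                  # time-averaged flow ~ flame height

for xi, flow in zip(x, mean_flow):
    print(f"x = {xi:.2f} m  relative flame height = {flow / np.sqrt(p_static):.3f}")
# Ratios near 1.0 occur near pressure nodes (tall flames); the dips mark the pressure antinodes.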
Rubens tube
Physics,Chemistry
1,378
72,465,140
https://en.wikipedia.org/wiki/GG%20Lupi
GG Lupi is an eclipsing binary star in the southern constellation of Lupus. Most of the time it is a magnitude 5.6 object, making it faintly visible to the naked eye, but during the primary eclipse its brightness falls to 6.1. GG Lupi is located 1/2 degree (one full moon diameter) west of the 3rd magnitude star Delta Lupi. This star was found to be a spectroscopic binary in 1930, and its eclipses were detected in observations during 1964. Its location in the sky, distance (~490 light years) and proper motion make it a likely member of the Scorpius–Centaurus Association within the Gould's Belt star formation region. The two stars comprising this binary are both very young main sequence stars of spectral type B. They are estimated to be about 20 million years old, placing them near the zero-age main sequence. Their orbit is somewhat eccentric (e=0.15) and the period of apsidal precession is 102 years. References Spectroscopic binaries Algol variables 135876 74950 Lupi, GG B-type main-sequence stars 5687 Lupus (constellation)
GG Lupi
Astronomy
249
9,669,620
https://en.wikipedia.org/wiki/Metropolis%20%28architecture%20magazine%29
Metropolis is an internationally recognized design and architecture–focused magazine with a strong focus on ethics, innovation and sustainability in the creative sector. The magazine was established in 1981 by Horace Havemeyer III of Bellerophon Publications, Inc. alongside his wife Eugenie Cowan Havemeyer and is based in New York City. Metropolis's future-focused work is grounded in its motto "design at all scales". The magazine is published ten times a year with over 50,000 subscribers. Metropolis publishes both print and digital editorial coverage, encouraging design-focused conversation through a range of diverse mediums. Alongside the magazine itself, Metropolis produces four additional print supplements and a series of live events across the United States. Metropolis produces digital media for its website and social accounts. Its website receives approximately 85,000 unique visitors every month, while its social channels amass an audience of over 100,000 followers across Instagram and Facebook combined. In 2019 Metropolis was acquired by Sandow Media for an undisclosed amount. Metropolis annually hosts a range of virtual and in-person events alongside design competition schemes encouraging innovation and sustainability. History Metropolis was launched in 1981 by Horace Havemeyer III (1942–2014) and Eugenie Cowan Havemeyer. Havemeyer III was born in Dix Hills and moved to New York City in 1969, where he worked at Doubleday publishers as a production planning supervisor for a decade. He went on to complete courses at the Institute for Architecture and Urban Studies, prompting his work at the IAUS journal, Skyline, until it closed. In 1981, he founded Bellerophon Publications alongside his wife, the two serving as publisher and founders of Metropolis magazine. As evidenced in early editions, the magazine began with a particular focus on architecture in New York City and quickly expanded to embrace a range of design disciplines internationally. In 1985 Susan S. Szenasy took over as the editor-in-chief of Metropolis. As a contemporary figurehead in design and innovation and a notable "voice of the design world", Szenasy led Metropolis to international acclaim. Szenasy's influence addressed sustainability and inclusive design on the stage of a mainstream publication before such practices had become widely familiar. Metropolis's values largely centered on Szenasy's writings, which emphasize the importance of ethics and sustainable intervention in the education and practices of designers. In 2017, Avinash Rajagopal took over from Szenasy as the editor-in-chief of Metropolis as Szenasy moved into a new role as Director of Design Innovation. Rajagopal's role as editor-in-chief followed his success as Metropolis' senior editor from 2011 to 2016. In 2019 Rajagopal worked alongside Eugenie Havemeyer on Sandow Media's acquisition of the magazine into its portfolio. Editors-In-Chief Horace Havemeyer III, 1981–1986 Susan S. Szenasy, 1986–2017 Avinash Rajagopal, 2017–present; assumed this role after working as senior editor at the magazine from 2011 to 2016. Influence Metropolis is centered around futurism in its focus on the innovative needs of consumers and the planet. The magazine investigates the role designers can play in reversing the climate crisis by rejecting developments founded on self-interest and monetary gain. With a strong focus on futurist ideals and sustainability, Metropolis encourages affirmative action in developing a progressive and environmentally enduring world.
Futurism and Sustainability Metropolis is heavily inspired aesthetically and culturally by futurist values. It focuses significantly on the technological progress of the modern machine age, eagerly anticipating the vitality and progression of the design and architectural realms. The magazine embraces an artistic futurist aesthetic, frequently applying neo-impressionism and cubism in its covers as a direct reflection of the dynamism of modernity the magazine embodies. The magazine remains heavily influenced by the work and writings of former editor Susan S. Szenasy, due to the accuracy with which she was able to anticipate changing realities within design from the 1980s to the present. As part of Metropolis's 40th anniversary the magazine republished old pieces that anticipated trends in design that remain pertinent today, with a particular focus on "work, sustainability, the wellness movement, the concept of reuse, gender issues, accessibility and the new digital technologies." The republished articles were included online and within print supplements entitled "40 Years of Looking Forward". Editor-in-chief Avinash Rajagopal wrote in the July/August 2021 issue of Metropolis: The republished work included: March 1989 – This issue featured an article by Susan S. Szenasy entitled "Making Home Work" with exact parallels to the Covid-19 work-from-home movement. May 1989 – Featured an article entitled "Building with the Sun" by Don Prowler focusing on the emergence of the sustainability movement. October 1996 – Included the article "Well-Being" by Barry M. Katz, which introduced this concept to designers. May 1999 – "The Mall Doctor", by Ellen Barr, campaigned for reuse over reconstruction in architecture. Metropolis's embodiment of futurism and sustainability thereby encourages a "coexistence with future generations for the premonition, and anticipatory belief in future formulas". Notable Volumes The following table highlights volumes of Metropolis considered notable within the design community. The listed issues are significant due to contentious and innovative design thinking. Competition Metropolis hosts a range of creative competitions annually to encourage innovation and forward thinking among contemporary design practitioners. The magazine's most notable award schemes are the annual 'Next Generation Design Competition', the 'Planet Positive Awards' and the 'Future 100'. Next Generation Design Metropolis annually hosts the 'Next Generation Design Competition' alongside Staples Business Advantage. The competition encourages experienced designers to create work based around five major themes: collaboration, wellness, effectiveness and productivity, office culture, and sustainability. The competition awards victors $10,000 in venture capital to encourage production, with the only eligibility criterion being that entrants must have been practicing for a decade or less. Planet Positive Awards The 'Planet Positive Awards' recognize sustainable creative projects and products internationally that benefit both people and the planet. The competition was first held in 2021. Project eligibility included sustainable architecture and interior design projects developed over the three years from June 2018 to June 2021, and sustainable products released between June 2019 and June 2021. As a prerequisite to entry, all competition entries must demonstrate that they target sustainability through official sustainability certifications.
The 'Planet Positive Award' winners were published in the November/December 2021 issue of the magazine, Volume 41, No 6 and recognized formally in a virtual awards ceremony facilitated by Sandow through their Design TV platform. Award Recipients The awards were grouped into seven categories for judgement including: civic/cultural, workplace, healthcare, multifamily, hospitality, education, and products. Future 100 Metropolis 'Future 100' awards act as the connector of top tier architectural talent with leading design firms. The award recognizes the top 100 graduating students from interior and architecture design programs in the United States and Canada. Award recipients will be featured in Metropolis on both print and digital scales, alongside recognition of their programs, nominators, and school. The inaugural 'Future 100' interior design and architecture graduating students were named in 2021 and can be found on the Metropolis website. Awards In 2007 and 2008 Metropolis was a finalist in the National Magazine Awards in the 'Under 100,000 Circulation' category for General Excellence. This award honours effectiveness and overall excellence in which "writing, reporting, editing, and design all come together to command readers attention and fulfill the magazines unique editorial decision". In 2007 Havemeyer III, Szenasy and Metropolis won the CIVITAS August Heckscher Award "for their twenty-five years pursuing enlightened and intelligent documentation of life in urban America, especially New York City". In 2009 Havemeyer III was awarded the Institute Honor for Collaborative Achievement by the American Institute of Architects on behalf of Metropolis magazine. In 2017, Susan S. Szenasy (Metropolis's editor-in-chief from 1986 to 2017) received the Cooper–Hewitt, Smithsonian Design Museum's Director's Award in recognition of her work at Metropolis'' and beyond. References External links Metropolis website Visual arts magazines published in the United States Architecture magazines Design magazines Magazines established in 1981 Magazines published in New York City Ten times annually magazines
Metropolis (architecture magazine)
Engineering
1,697
29,564,581
https://en.wikipedia.org/wiki/Aircraft%20design%20process
The aircraft design process is a loosely defined method used to balance many competing and demanding requirements to produce an aircraft that is strong, lightweight, economical and can carry an adequate payload while being sufficiently reliable to safely fly for the design life of the aircraft. Similar to, but more exacting than, the usual engineering design process, the technique is highly iterative, involving high-level configuration tradeoffs, a mixture of analysis and testing and the detailed examination of the adequacy of every part of the structure. For some types of aircraft, the design process is regulated by civil airworthiness authorities. This article deals with powered aircraft such as airplanes and helicopter designs. Design constraints Purpose The design process starts with the aircraft's intended purpose. Commercial airliners are designed for carrying a passenger or cargo payload, long range and greater fuel efficiency whereas fighter jets are designed to perform high speed maneuvers and provide close support to ground troops. Some aircraft have specific missions, for instance, amphibious airplanes have a unique design that allows them to operate from both land and water, some fighters, like the Harrier jump jet, have VTOL (vertical take-off and landing) ability, helicopters have the ability to hover over an area for a period of time. The purpose may be to fit a specific requirement, e.g. as in the historical case of a British Air Ministry specification, or fill a perceived "gap in the market"; that is, a class or design of aircraft which does not yet exist, but for which there would be significant demand. Aircraft regulations Another important factor that influences the design are the requirements for obtaining a type certificate for a new design of aircraft. These requirements are published by major national airworthiness authorities including the US Federal Aviation Administration and the European Aviation Safety Agency. Airports may also impose limits on aircraft, for instance, the maximum wingspan allowed for a conventional aircraft is to prevent collisions between aircraft while taxiing. Financial factors and market Budget limitations, market requirements and competition set constraints on the design process and comprise the non-technical influences on aircraft design along with environmental factors. Competition leads to companies striving for better efficiency in the design without compromising performance and incorporating new techniques and technology. In the 1950s and '60s, unattainable project goals were regularly set, but then abandoned, whereas today troubled programs like the Boeing 787 and the Lockheed Martin F-35 have proven far more costly and complex to develop than expected. More advanced and integrated design tools have been developed. Model-based systems engineering predicts potentially problematic interactions, while computational analysis and optimization allows designers to explore more options early in the process. Increasing automation in engineering and manufacturing allows faster and cheaper development. Technology advances from materials to manufacturing enable more complex design variations like multifunction parts. Once impossible to design or construct, these can now be 3D printed, but they have yet to prove their utility in applications like the Northrop Grumman B-21 or the re-engined A320neo and 737 MAX. Airbus and Boeing also recognize the economic limits, that the next airliner generation cannot cost more than the previous ones did. 
Environmental factors An increase in the number of aircraft also means greater carbon emissions. Environmental scientists have voiced concern over the main kinds of pollution associated with aircraft, mainly noise and emissions. Aircraft engines have been historically notorious for creating noise pollution, and the expansion of airways over already congested and polluted cities has drawn heavy criticism, making it necessary to have environmental policies for aircraft noise. Noise also arises from the airframe, where the airflow directions are changed. Improved noise regulations have forced designers to create quieter engines and airframes. Emissions from aircraft include particulates, carbon dioxide (CO2), sulfur dioxide (SO2), carbon monoxide (CO), various oxides of nitrogen and unburnt hydrocarbons. To combat the pollution, ICAO set recommendations in 1981 to control aircraft emissions. Newer, environmentally friendly fuels have been developed, and the use of recyclable materials in manufacturing has helped reduce the ecological impact due to aircraft. Environmental limitations also affect airfield compatibility. Airports around the world have been built to suit the topography of the particular region. Space limitations, pavement design, runway end safety areas and the unique location of the airport are some of the airport factors that influence aircraft design. However, changes in aircraft design influence airfield design as well; for instance, the recent introduction of new large aircraft (NLAs) such as the superjumbo Airbus A380 has led to airports worldwide redesigning their facilities to accommodate its large size and service requirements. Safety The high speeds, fuel tanks, atmospheric conditions at cruise altitudes, natural hazards (thunderstorms, hail and bird strikes) and human error are some of the many hazards that pose a threat to air travel. Airworthiness is the standard by which aircraft are determined fit to fly. The responsibility for airworthiness lies with the national civil aviation regulatory bodies, manufacturers, as well as owners and operators. The International Civil Aviation Organization sets international standards and recommended practices on which national authorities should base their regulations. The national regulatory authorities set standards for airworthiness, issue certificates to manufacturers and operators, and set the standards of personnel training. Every country has its own regulatory body, such as the Federal Aviation Administration in the USA, the DGCA (Directorate General of Civil Aviation) in India, etc. The aircraft manufacturer makes sure that the aircraft meets existing design standards, defines the operating limitations and maintenance schedules, and provides support and maintenance throughout the operational life of the aircraft. The aviation operators include the passenger and cargo airliners, air forces and owners of private aircraft. They agree to comply with the regulations set by the regulatory bodies, understand the limitations of the aircraft as specified by the manufacturer, report defects and assist the manufacturers in keeping up the airworthiness standards. Much of the design criticism these days is built on crashworthiness. Even with the greatest attention to airworthiness, accidents still occur. Crashworthiness is the qualitative evaluation of how aircraft survive an accident. The main objective is to protect the passengers or valuable cargo from the damage caused by an accident.
In the case of airliners the stressed skin of the pressurized fuselage provides this feature, but in the event of a nose or tail impact, large bending moments build all the way through the fuselage, causing fractures in the shell, causing the fuselage to break up into smaller sections. So the passenger aircraft are designed in such a way that seating arrangements are away from areas likely to be intruded in an accident, such as near a propeller, engine nacelle undercarriage etc. The interior of the cabin is also fitted with safety features such as oxygen masks that drop down in the event of loss of cabin pressure, lockable luggage compartments, safety belts, lifejackets, emergency doors and luminous floor strips. Aircraft are sometimes designed with emergency water landing in mind, for instance the Airbus A330 has a 'ditching' switch that closes valves and openings beneath the aircraft slowing the ingress of water. Design optimization Aircraft designers normally rough-out the initial design with consideration of all the constraints on their design. Historically design teams used to be small, usually headed by a Chief Designer who knows all the design requirements and objectives and coordinated the team accordingly. As time progressed, the complexity of military and airline aircraft also grew. Modern military and airline design projects are of such a large scale that every design aspect is tackled by different teams and then brought together. In general aviation a large number of light aircraft are designed and built by amateur hobbyists and enthusiasts. Computer-aided design of aircraft In the early years of aircraft design, designers generally used analytical theory to do the various engineering calculations that go into the design process along with a lot of experimentation. These calculations were labour-intensive and time-consuming. In the 1940s, several engineers started looking for ways to automate and simplify the calculation process and many relations and semi-empirical formulas were developed. Even after simplification, the calculations continued to be extensive. With the invention of the computer, engineers realized that a majority of the calculations could be automated, but the lack of design visualization and the huge amount of experimentation involved kept the field of aircraft design stagnant. With the rise of programming languages, engineers could now write programs that were tailored to design an aircraft. Originally this was done with mainframe computers and used low-level programming languages that required the user to be fluent in the language and know the architecture of the computer. With the introduction of personal computers, design programs began employing a more user-friendly approach. Design aspects The main aspects of aircraft design are: Aerodynamics Propulsion Controls Mass Structure All aircraft designs involve compromises of these factors to achieve the design mission. Wing design The wing of a fixed-wing aircraft provides the lift necessary for flight. Wing geometry affects every aspect of an aircraft's flight. The wing area will usually be dictated by the desired stalling speed but the overall shape of the planform and other detail aspects may be influenced by wing layout factors. The wing can be mounted to the fuselage in high, low and middle positions. The wing design depends on many parameters such as selection of aspect ratio, taper ratio, sweepback angle, thickness ratio, section profile, washout and dihedral. 
The cross-sectional shape of the wing is its airfoil. The construction of the wing starts with the rib which defines the airfoil shape. Ribs can be made of wood, metal, plastic or even composites. The wing must be designed and tested to ensure it can withstand the maximum loads imposed by maneuvering, and by atmospheric gusts. Fuselage The fuselage is the part of the aircraft that contains the cockpit, passenger cabin or cargo hold. Empennage Propulsion Aircraft propulsion may be achieved by specially designed aircraft engines, adapted auto, motorcycle or snowmobile engines, electric engines or even human muscle power. The main parameters of engine design are: Maximum engine thrust available Fuel consumption Engine mass Engine geometry The thrust provided by the engine must balance the drag at cruise speed and be greater than the drag to allow acceleration. The engine requirement varies with the type of aircraft. For instance, commercial airliners spend more time in cruise speed and need more engine efficiency. High-performance fighter jets need very high acceleration and therefore have very high thrust requirements. Landing gear Weight The weight of the aircraft is the common factor that links all aspects of aircraft design such as aerodynamics, structure, and propulsion, all together. An aircraft's weight is derived from various factors such as empty weight, payload, useful load, etc. The various weights are used to then calculate the center of mass of the entire aircraft. The center of mass must fit within the established limits set by the manufacturer. Structure The aircraft structure focuses not only on strength, aeroelasticity, durability, damage tolerance, stability, but also on fail-safety, corrosion resistance, maintainability and ease of manufacturing. The structure must be able to withstand the stresses caused by cabin pressurization, if fitted, turbulence and engine or rotor vibrations. Design process and simulation The design of any aircraft starts out in three phases Conceptual design Aircraft conceptual design involves sketching a variety of possible configurations that meet the required design specifications. By drawing a set of configurations, designers seek to reach the design configuration that satisfactorily meets all requirements as well as go hand in hand with factors such as aerodynamics, propulsion, flight performance, structural and control systems. This is called design optimization. Fundamental aspects such as fuselage shape, wing configuration and location, engine size and type are all determined at this stage. Constraints to design like those mentioned above are all taken into account at this stage as well. The final product is a conceptual layout of the aircraft configuration on paper or computer screen, to be reviewed by engineers and other designers. Preliminary design phase The design configuration arrived at in the conceptual design phase is then tweaked and remodeled to fit into the design parameters. In this phase, wind tunnel testing and computational fluid dynamic calculations of the flow field around the aircraft are done. Major structural and control analysis is also carried out in this phase. Aerodynamic flaws and structural instabilities if any are corrected and the final design is drawn and finalized. Then after the finalization of the design lies the key decision with the manufacturer or individual designing it whether to actually go ahead with the production of the aircraft. 
At this point several designs, though perfectly capable of flight and performance, may be dropped from production because they are economically nonviable. Detail design phase This phase deals with the fabrication aspect of the aircraft to be manufactured. It determines the number, design and location of ribs, spars, sections and other structural elements. All aerodynamic, structural, propulsion, control and performance aspects have already been covered in the preliminary design phase and only the manufacturing remains. Flight simulators for aircraft are also developed at this stage. Delays Some commercial aircraft have experienced significant schedule delays and cost overruns in the development phase. Examples of this include the Boeing 787 Dreamliner, with a four-year delay and massive cost overruns; the Boeing 747-8, with a two-year delay; the Airbus A380, with a two-year delay and US$6.1 billion in cost overruns; the Airbus A350, with delays and cost overruns; the Bombardier C Series, Global 7000 and 8000; the Comac C919, with a four-year delay; and the Mitsubishi Regional Jet, which was delayed by four years and ended up with empty weight issues. Program development An existing aircraft program can be developed for performance and economy gains by stretching the fuselage, increasing the MTOW, enhancing the aerodynamics, installing new engines, new wings or new avionics. For a 9,100 nmi long range at Mach 0.8/FL360, a 10% lower TSFC saves 13% of fuel, a 10% L/D increase saves 12%, a 10% lower OEW saves 6%, and all three combined save 28% (see the sketch below). Re-engine Fuselage stretch See also Index of aviation articles Aerospace engineering Aircraft manufacturer Iron bird (aviation) References External links Re-engine Aerospace engineering Aerodynamics Design
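The trade figures quoted under Program development can be checked, at least roughly, with the Breguet range equation. The sketch below is only an illustration: it assumes a fixed design range flown at constant speed and lift-to-drag ratio, and an assumed baseline mission fuel fraction of about 45% of take-off weight, which is an illustrative number rather than data from any particular aircraft.

```python
import math

# Breguet range: R = (V / (g * TSFC)) * (L/D) * ln(W_initial / W_final)
# For a fixed range, speed and landing weight W_f, the fuel burned is
#   fuel = W_f * (exp(x) - 1),  where x = R * g * TSFC / (V * L/D).
# A 10% lower TSFC therefore scales the exponent x by 0.9.

def fuel_burned(x: float) -> float:
    """Fuel burned per unit landing weight for Breguet exponent x."""
    return math.exp(x) - 1.0

x_baseline = math.log(1.0 / (1.0 - 0.45))   # exponent for an assumed 45% fuel fraction
baseline = fuel_burned(x_baseline)
improved = fuel_burned(0.9 * x_baseline)    # 10% reduction in TSFC
saving = (baseline - improved) / baseline
print(f"Fuel saving from a 10% lower TSFC: {saving:.0%}")   # roughly 13%
```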
Aircraft design process
Chemistry,Engineering
2,877
49,358,794
https://en.wikipedia.org/wiki/Arrhenia%20epichysium
Arrhenia epichysium is a species of agaric fungus in the family Hygrophoraceae. It is found in Asia, Europe, and North America. The fruit body has small brown to dark gray caps measuring in diameter. The cap color changes to light gray to tan when it is dry. Gills are narrow and thin, placed together closely, and decurrently attached to the stipe. The spores are smooth and ellipsoid, measuring 6–7.5 μm. References External links Fungi described in 1794 Fungi of Asia Fungi of Europe Fungi of North America Hygrophoraceae Taxa named by Christiaan Hendrik Persoon Fungus species
Arrhenia epichysium
Biology
141
45,698,778
https://en.wikipedia.org/wiki/Dodecaborate
The dodecaborate(12) anion, [B12H12]2−, is a borane with an icosahedral arrangement of 12 boron atoms, with each boron atom being attached to a hydrogen atom. Its symmetry is classified by the molecular point group Ih. Synthesis and reactions The existence of the dodecaborate(12) anion, [B12H12]2−, was predicted by H. C. Longuet-Higgins and M. de V. Roberts in 1955. Hawthorne and Pitochelli first made it 5 years later, by the reaction of 2-iododecaborane with triethylamine in benzene solution at 80 °C. It is more conveniently prepared in two steps from sodium borohydride. First the borohydride is converted into a triborate anion using the etherate of boron trifluoride: 5 NaBH4 + BF3 → 2 NaB3H8 + 3 NaF + 2 H2 Pyrolysis of the triborate gives the twelve-boron cluster as the sodium salt. A variety of other synthetic methods have been published. Salts of the dodecaborate ion are stable in air and do not react with hot aqueous sodium hydroxide or hydrochloric acid. The anion can be electrochemically oxidised to [B24H23]3−. Substituted derivatives Salts of undergo hydroxylation with hydrogen peroxide to give salts of [B12(OH)12]2−. The hydrogen atoms in the ion [B12H12]2− can be replaced by the halogens with various degrees of substitution. The following numbering scheme is used to identify the products. The first boron atom is numbered 1, then the closest ring of five atoms around it is numbered anticlockwise from 2 to 6. The next ring of boron atoms is started from 7 for the atoms closest to number 2 and 3, and counts anticlockwise to 11. The atom opposite the original is numbered 12. A related derivative is [B12(CH3)12]2−. The icosahedron of boron atoms is aromatic in nature. Under kilobar pressure of carbon monoxide [B12H12]2− reacts to form the carbonyl derivatives [B12H11CO]− and the 1,12- and 1,7-isomers of B12H10(CO)2. The para disubstitution at the 1,12 is unusual. In water the dicarbonyls appear to form carboxylic ions: [B12H10(CO)CO2H]− and [B12H10(CO2H)2]2−. A perfluoroborane derivative (with the hydrogen atoms replaced by fluorine atoms) is also known. Potential applications Compounds based on the ion [B12H12]2− have been evaluated for solvent extraction of the radioactive ions 152Eu3+ and 241Am3+. [B12H12]2−, [B12(OH)12]2− and [B12(OMe)12]2− show promise for use in drug delivery. They form "closomers", which have been used to make nontargeted high-performance MRI contrast agents which are persistent in tumor tissue. Salts of [B12H12]2− are potential therapeutic agents in cancer treatment. For applications in boron neutron capture therapy, derivatives of closo-dodecaborate increase the specificity of neutron irradiation treatment. Neutron irradiation of boron-10 leads to the emission of an alpha particle near the tumor. References Boranes Anions Substances discovered in the 1960s
Dodecaborate
Physics,Chemistry
782
11,436,500
https://en.wikipedia.org/wiki/Cercospora%20minuta
Cercospora minuta is a fungal plant pathogen. References minuta Fungal plant pathogens and diseases Fungi described in 1876 Fungus species
Cercospora minuta
Biology
29
5,074,861
https://en.wikipedia.org/wiki/Sorption
Sorption is a physical and chemical process by which one substance becomes attached to another. Specific cases of sorption are treated in the following articles: Absorption "the incorporation of a substance in one state into another of a different state" (e.g., liquids being absorbed by a solid or gases being absorbed by a liquid); Adsorption The physical adherence or bonding of ions and molecules onto the surface of another phase (e.g., reagents adsorbed to a solid catalyst surface); Ion exchange An exchange of ions between two electrolytes or between an electrolyte solution and a complex. The reverse of sorption is desorption. Sorption rate The adsorption and absorption rate of a diluted solute in gas or liquid solution to a surface or interface can be calculated using Fick's laws of diffusion. See also Sorption isotherm References Physical chemistry
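For reference, the Fick's laws referred to above take the following standard one-dimensional form; this is a general textbook statement rather than a relation specific to any particular sorption system:

```latex
J = -D\,\frac{\partial \varphi}{\partial x},
\qquad
\frac{\partial \varphi}{\partial t} = D\,\frac{\partial^{2} \varphi}{\partial x^{2}}
```

where J is the diffusion flux of the solute, φ its concentration, x position, t time, and D the diffusion coefficient.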
Sorption
Physics,Chemistry
191
3,116,570
https://en.wikipedia.org/wiki/La%20Superba
La Superba (Y CVn, Y Canum Venaticorum) is a strikingly red giant star in the constellation Canes Venatici. It is faintly visible to the naked eye, and the red colour is very obvious in binoculars. It is a carbon star and semiregular variable. Visibility La Superba is a semiregular variable star, varying by about a magnitude over a roughly 160-day cycle, but with slower variation over a larger range. Periods of 194 and 186 days have been suggested, with a resonance between the periods. Y CVn is one of the reddest stars known, and it is among the brightest of the giant red carbon stars. It is the brightest of known J-stars, which are a very rare category of carbon stars that contain large amounts of carbon-13 (carbon atoms with 7 neutrons instead of the usual 6). The 19th century astronomer Angelo Secchi, impressed with its beauty, gave the star its common name, which is now accepted by the International Astronomical Union. Properties Calculations with La Superba's luminosity and effective temperature give it a radius of about . If it were placed at the position of the Sun, the star's surface would extend beyond Earth's orbit. La Superba's temperature is believed to be about , making it one of the coolest true stars known. When infrared radiation is included, Y CVn has a bolometric luminosity several thousand times that of the Sun. The mass of this type of star is difficult to determine; it would initially have been around and somewhat less now due to mass loss. An estimate from Jim Kaler gives the star a luminosity between and radius between based on an assumed temperature of 3,000 K, and the author then classified it as a C7 or CN5 supergiant star although its mass is too low to be a true supergiant. Observations in the 60 and 100 micron infrared bands by the IRAS satellite showed that Y CVn is surrounded by a dust shell 0.9 parsecs in diameter. This is one of the most prominent circumstellar dust shells detected in the IRAS all-sky survey. Evolution After stars up to a few times the mass of the sun have finished fusing hydrogen to helium in their core, they start to burn hydrogen in a shell outside a degenerate helium core, and expand dramatically into the red giant state. Once the core reaches a high enough temperature, it ignites violently in the helium flash, which begins helium core burning on the horizontal branch. Once even the core helium is exhausted, a degenerate carbon-oxygen core remains. Fusion continues in both hydrogen and helium shells at different depths in the star, and the star increases luminosity on the asymptotic giant branch (AGB). La Superba is currently an AGB star. In the AGB stars, fusion products are moved outwards from the core by strong deep convection known as a dredge-up, thus creating a carbon abundance in the outer atmosphere where carbon monoxide and other compounds are formed. These molecules tend to absorb radiation at shorter wavelengths, resulting in a spectrum with even less blue and violet compared to ordinary red giants, giving the star its distinguished red color. La Superba is most likely in the final stages of fusing its remaining secondary fuel (helium) into carbon and shedding its mass at the rate of about a million times that of the Sun's solar wind. It is also surrounded by a 2.5 light year-wide shell of previously ejected material, implying that at one point it must have been losing mass as much as 50 times faster than it is now. 
La Superba thus appears almost ready to eject its outer layers to form a planetary nebula, leaving behind its core in the form of a white dwarf. Notes References External links https://web.archive.org/web/20051025230148/http://www.nckas.org/carbonstars/ http://www.backyard-astro.com/deepsky/top100/11.html http://jumk.de/astronomie/big-stars/la-superba.shtml Semiregular variable stars Canes Venatici Carbon stars Stars with proper names Canum Venaticorum, Y 110914 4846 062223 Durchmusterung objects Asymptotic-giant-branch stars
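The radius estimate quoted under Properties follows from the Stefan–Boltzmann relation between a star's luminosity, radius and effective temperature; the relation below is general, and no values specific to Y CVn are assumed here:

```latex
L = 4\pi R^{2} \sigma T_{\mathrm{eff}}^{4}
\quad\Longrightarrow\quad
\frac{R}{R_{\odot}} = \left(\frac{L}{L_{\odot}}\right)^{1/2}
\left(\frac{T_{\mathrm{eff}}}{T_{\mathrm{eff},\odot}}\right)^{-2}
```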
La Superba
Astronomy
922
24,784,880
https://en.wikipedia.org/wiki/Committed%20step
In biochemistry, the committed step (also known as the first committed step) is an effectively irreversible, enzyme-catalyzed reaction that occurs at a branch point during the biosynthesis of some molecules. As the name implies, after this step, the molecules are "committed" to the pathway and will ultimately end up in the pathway's final product. The first committed step should not be confused with the rate-limiting step, which is the step with the highest flux control coefficient. It is rare that the first committed step is in fact the rate-determining step. Regulation Metabolic pathways require tight regulation so that the proper compounds get produced in the proper amounts. Often, the first committed step is regulated by processes such as feedback inhibition and activation. Such regulation ensures that pathway intermediates do not accumulate, a situation that can be wasteful or even harmful to the cell. Examples of enzymes that catalyze the first committed steps of metabolic pathways Phosphofructokinase 1 catalyzes the first committed step of glycolysis. LpxC catalyzes the first committed step of lipid A biosynthesis. 8-amino-7-oxononanoate synthase catalyzes the first committed step in plant biotin synthesis. MurA catalyzes the first committed step of peptidoglycan biosynthesis. Aspartate transcarbamoylase catalyzes the committed step in the pyrimidine biosynthetic pathway in E. coli. 3-deoxy-D-arabinose-heptulsonate 7-phosphate synthase catalyses the first committed step of the shikimate pathway responsible for the synthesis of the aromatic amino acids Tyrosine, Tryptophan and Phenylalanine in plants, bacteria, fungi and some lower eukaryotes. Citrate synthase catalyzes the addition of acetyl-CoA to oxaloacetate and is the first committed step of the Citric Acid Cycle. Acetyl-CoA carboxylase catalyzes the irreversible carboxylation of acetyl-CoA to malonyl-CoA in the first committed step of fatty acid biosynthesis. Glucose-6-phosphate dehydrogenase catalyzes the conversion of G6P into 6-phosphogluconolactone to produce NADPH in the first and committed step of the pentose phosphate pathway. Other uses The term has also been applied to other processes that involve a series of steps. For example, the binding of egg and sperm can be thought of as the first committed step in metazoan fertilization. See also Enzyme catalysis Negative feedback Metabolic Control Analysis References External links Glycolysis Regulation at Cliffsnotes.com Enzymes Catalysis Biomolecules Biosynthesis
Committed step
Chemistry,Biology
593
1,529,560
https://en.wikipedia.org/wiki/LEGRI
The Low Energy Gamma-Ray Imager (LEGRI) was a payload for the first mission of the Spanish MINISAT platform, and was active from 1997 to 2002. The objective of LEGRI was to demonstrate the viability of HgI2 detectors for space astronomy, providing imaging and spectroscopic capabilities in the 10-100 keV range. LEGRI was successfully launched on April 21, 1997, on a Pegasus XL rocket. The instrument was activated on May 19, 1997. It was active until February 2002. The LEGRI system included the Detector Unit, Mask Unit, Power Supply, Digital Processing Unit, Star Sensor, and Ground Support Unit. The LEGRI consortium included: University of Valencia University of Southampton University of Birmingham Rutherford Appleton Laboratory Centro de Investigaciones Energéticas Medioambientales y Tecnológicas (Ciemat) INTA References External links Low Energy Gamma-Ray Imager (LEGRI) on the internet Space telescopes Gamma-ray telescopes 1997 in spaceflight
LEGRI
Astronomy
205
50,979,660
https://en.wikipedia.org/wiki/Kimble%20%28app%29
Kimble is a cloud-based PSA software application, also known as Kimble PSA. History Kimble Applications Limited was founded in 2010 by Sean Hoban, Mark Robinson, and David Scott. The company is headquartered in London, England with other offices in the United States including Boston, Park City, Chicago, and Atlanta. In 2018, Kimble secured investment from Accel-KKR, a technology-focused investment firm based in Silicon Valley. In 2021 Kimble merged with Mavenlink, another cloud based PSA software vendor. In 2022, Kimble and Mavenlink became Kantata. Services Kimble provides a professional services automation (PSA) solution known as Kimble PSA which is built on the Salesforce platform and recognized by Salesforce as a premier partner. The Kimble app automates pipeline forecasting, resource planning, delivery management and project accounting. See also Automation Professional services automation Comparison of PSA systems Salesforce.com References Automation software Cloud applications
Kimble (app)
Engineering
202
31,671,822
https://en.wikipedia.org/wiki/Spreeta
Spreeta is an electro-optical device utilizing surface plasmon resonance to detect small changes in the refractive index of liquids. The Spreeta device was developed by Texas Instruments, Inc. in the 1990s. The device design incorporates a light-emitting diode (LED) illuminating a thin metal film (usually gold) in the Kretschmann geometry (needed to excite surface plasmons). The reflected light is detected by a photodiode linear array (which translates angle of reflection to pixel position), and the resonance (a dip in the reflectivity at a specific angle of incidence) denotes the refractive index on the outer surface of the metal film. Applications include real-time measurement of the binding of antigens to antibodies attached to the sensor surface, monitoring changes in oil quality, and measuring sugar content in drinks (Brix level). The term "Spreeta" is an anglicized derivative of SPR-ITA, which combines SPR (for Surface Plasmon Resonance) with "ita" (a Spanish suffix meaning "small"). References U.S. Patent Number 5912456 - Filed Mar 19, 1997 - Texas Instruments Incorporated Sensors Texas Instruments hardware Plasmonics
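The angular position of that reflectivity dip is governed by the standard surface-plasmon coupling condition for the Kretschmann configuration; the relation below is a general textbook expression, not a Spreeta-specific formula:

```latex
k_{0}\, n_{\mathrm{prism}} \sin\theta_{\mathrm{res}}
  \;=\; k_{0}\,\operatorname{Re}\sqrt{\frac{\varepsilon_{\mathrm{m}}\,\varepsilon_{\mathrm{d}}}
                                            {\varepsilon_{\mathrm{m}} + \varepsilon_{\mathrm{d}}}},
\qquad k_{0} = \frac{2\pi}{\lambda}
```

Here εm is the permittivity of the metal film and εd that of the liquid in contact with it, so a change in the liquid's refractive index shifts the resonance angle θres, which the photodiode array registers as a shift in pixel position.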
Spreeta
Physics,Chemistry,Materials_science,Technology,Engineering
250
34,485,760
https://en.wikipedia.org/wiki/Dermatologic%20and%20Ophthalmic%20Drugs%20Advisory%20Committee
The Dermatologic And Ophthalmic Drugs Advisory Committee (DODAC) receives requests for technical and clinical evaluation of new drugs by the U.S. Food and Drug Administration (FDA). The committee, consisting of members from academic and clinical dermatology, ophthalmology, biostatistics, the general public, and the pharmaceutical industry, makes non-binding recommendations to both the CDER and CBER divisions of the FDA about the advisability of approving new medications to treat dermatologic and ophthalmic conditions. References External links Dermatologic And Ophthalmic Drugs Advisory Committee, official webpage: https://www.fda.gov/AdvisoryCommittees/CommitteesMeetingMaterials/Drugs/DermatologicandOphthalmicDrugsAdvisoryCommittee/default.htm FDA Advisory Committees Calendar (including DODAC): https://www.fda.gov/AdvisoryCommittees/Calendar/default.htm Food and Drug Administration National agencies for drug regulation Regulators of biotechnology products Pharmacy organizations in the United States
Dermatologic and Ophthalmic Drugs Advisory Committee
Chemistry,Biology
233
63,513,617
https://en.wikipedia.org/wiki/Pioneering%20Women%20in%20American%20Mathematics
Pioneering Women in American Mathematics: The Pre-1940 PhD's is a book on women in mathematics. It was written by Judy Green and Jeanne LaDuke, based on a long study beginning in 1978, and was published in 2009 by the American Mathematical Society and London Mathematical Society as volume 34 in their joint History of Mathematics series. Unlike many previous works on the topic, it aims at encyclopedic coverage of women in mathematics in the pre-World War II United States, rather than focusing only on the biographies of individual women or on collecting stories of only the most famous women in mathematics. The Basic Library List Committee of the Mathematical Association of America has strongly recommended its inclusion in undergraduate mathematics libraries. Topics The first part of the book discusses the institutions that granted doctorates to women in mathematics before 1940, and the milieu in which they operated, including typical practices of the time that demanded that women resign on marriage, that forbade institutions from hiring wives or other relatives of their male faculty, or in some cases prevented women who had done all the work for a graduate degree from being granted one. It also discusses the patterns the authors found in these women's lives, including the discovery that their life expectancies were higher than typical for their time. Its eight chapters include material on the family background of the subjects, their undergraduate and graduate education, hiring and careers, and their contributions to mathematics. The second part of the book provides biographical profiles of every woman that the authors could identify as having earned a doctorate in mathematics in the US before 1940, as well as four American women who earned doctorates abroad, giving 228 in all. The typical biography in this section is approximately 2/3 of a page to a page in length, with information drawn from reference works, review journals, and archival material as well as interviews with the subjects still living at the time of the study. The 1940 cutoff for the biographies in the book represents both a time of "a precipitous drop in enrollment" for women in mathematics, and the starting time for two previous studies on women in mathematics and science by Margaret A. M. Murray and Margaret W. Rossiter. The rate of doctorates given to women in the period covered by the book, approximately 14%, would not be reached again until the 1980s. A companion web site provides additional information on the subjects of the book, and can be considered as a third and "potentially most valuable" section of the book itself. Audience and reception This book is readable by a general audience, but reviewer Charles Ashbacher writes that "only people deeply interested in the history of mathematics, particularly in the role of women, will find it a critical read", and suggests that the second half should be used as reference material rather than reading it through. Reviewer Amy Shell-Gellasch agrees, writing "It is intended as a reference, not necessarily as a book to sit down and read." Reviewer Silke Göbel adds that, beyond mathematics, the book will also be of interest to sociologists. Ashbacher rates the book as "an excellent resource for information in this area". Despite calling it "a labor of love" and "an important contribution", Shell-Gellasch writes that she was "disappointed by the lack of references" in the book, although significantly more references can be found in the companion web site. 
In contrast, reviewer Andrea Blunck calls the book "really fascinating", writing that she was "surprised to learn how numerous" these women were, "and how different yet how similar their lives and careers were". And reviewer Margaret A. M. Murray calls the book "spectacular" and "a stunning historical achievement", writing that because of it "we now know more about this first cohort of American women mathematicians than we know about any cohort of mathematicians, male or female." Notable mentions Mary Nicholas Arnoldy Grace Hopper M. Henrietta Reilly References External links Additional Material for the Book, American Mathematical Society Women in mathematics Biographies and autobiographies of mathematicians 2009 non-fiction books
Pioneering Women in American Mathematics
Technology
827
6,044,113
https://en.wikipedia.org/wiki/Occulting%20disk
An occulting disk is a small disk placed centrally in the eyepiece of a telescope or at its focal point, to block the view of a bright object so that fainter objects can be seen more easily. The coronagraph, at its simplest, is an occulting disk in the focal plane of a telescope, or in front of the entrance aperture, that blocks out the image of the solar disk, so that the corona can be seen. A starshade is an occulting disk designed to fly in formation with a space telescope to image exoplanets. See also New Worlds Mission Space sunshade Telescope for Habitable Exoplanets and Interstellar/Intergalactic Astronomy References Optical telescope components Optical devices Star images Stellar astronomy
Occulting disk
Materials_science,Astronomy,Technology,Engineering
147
37,121,910
https://en.wikipedia.org/wiki/Lambda%20Hydrae
λ Hydrae, Latinised as Lambda Hydrae, is a spectroscopic binary star in the constellation Hydra. Its apparent magnitude is 3.61. It is located around distant. The spiral galaxy NGC 3145 is only away to the southwest. The primary is an orange giant of spectral type K0IIICN+1, a star that has used up its core hydrogen, left the main sequence, and expanded into a giant. It is considered to be a red clump giant, a cool horizontal branch star that is burning helium in its core. λ Hydrae has two visual companions, components B and C, 11th and 13th magnitude stars respectively and away. References K-type giants Horizontal-branch stars Spectroscopic binaries Hydra (constellation) Hydrae, Lambda Durchmusterung objects Hydrae, 41 088284 049841 3994
Lambda Hydrae
Astronomy
176
1,572,728
https://en.wikipedia.org/wiki/Rho%20Coronae%20Borealis
Rho Coronae Borealis (ρ CrB, ρ Coronae Borealis) is a yellow dwarf star away in the constellation of Corona Borealis. The star is thought to be similar to the Sun with nearly the same mass, radius, and luminosity. It is orbited by four known exoplanets. Stellar properties Rho Coronae Borealis is a yellow main-sequence star of the spectral type G0V. The star is thought to have 96 percent of the Sun's mass, along with 1.3 times its radius and 1.7 times its luminosity. It may only be 51 to 65 percent as enriched with elements heavier than hydrogen (based on its abundance of iron) and is likely somewhat older than the Sun at around ten billion years old. The rotation period of Rho Coronae Borealis is approximately 20 days, even though at this age stars are hypothesized to decouple their rotational evolution and magnetic activity. Multiple star catalogs list a 10th-magnitude companion about two arc-minutes away, but it is an unrelated background object. Planetary system An extrasolar planet in a 39.8-day orbit around Rho Coronae Borealis was discovered in 1997 by observing the star's radial velocity variations. This detection method only gives a lower limit on the true mass of the companion. In 2001, preliminary Hipparcos astrometric satellite data indicated that the orbital inclination of the star's companion was 0.5°, nearly face-on, implying that its mass was as much as 115 times Jupiter's. A paper published in 2011 supported this claim using a new reduction of the astrometric data, with an updated mass value of 169.7 times Jupiter, with a 3σ confidence region 100.1 to 199.6 Jupiter masses. Such a massive body would be a dim red dwarf star, not a planet. In 2016, however, a paper was published that used interferometry to rule out any stellar companions to this star, in addition to detecting a second planetary companion in a 102-day orbit. Another two planets were discovered in 2023. The evolution of the parent star, nearing the conclusion of its life cycle, has been regarded as a model for the potential evolution of our planetary system. This is especially relevant for predicting whether the Sun will eventually engulf the Earth at the end of its own lifecycle (cf. Future of Earth). Circumstellar material In October 1999, astronomers at the University of Arizona announced the existence of a circumstellar disk around the star. Follow-up observations with the Spitzer Space Telescope failed to detect any infrared excess at 24- or 70-micrometre wavelengths, which would be expected if a disk were present. No evidence for a disk was detected in observations with the Herschel Space Observatory either. See also List of exoplanets discovered before 2000 - Rho Coronae Borealis b List of exoplanets discovered in 2016 - Rho Coronae Borealis c List of exoplanets discovered in 2023 - Rho Coronae Borealis d and Rho Coronae Borealis e References External links Coronae Borealis, Rho Corona Borealis Coronae Borealis, 15 143761 078459 5968 G-type main-sequence stars Solar analogs 9537 BD+33 2663 J16010264+3318124 Planetary systems with four confirmed planets
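The factor-of-115 figure discussed under Planetary system is just the geometric correction between the radial-velocity minimum mass and the true mass; the relation below is general, with the Hipparcos inclination of 0.5° inserted only to illustrate the size of the correction:

```latex
M = \frac{M\sin i}{\sin i},
\qquad
\sin(0.5^{\circ}) \approx 8.7\times 10^{-3}
\;\Rightarrow\;
\frac{1}{\sin i} \approx 115
```

so a companion with a minimum mass near one Jupiter mass would have a true mass of roughly 115 Jupiter masses if the orbit really were that close to face-on.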
Rho Coronae Borealis
Astronomy
707
19,336,369
https://en.wikipedia.org/wiki/Web%20threat
A web threat is any threat that uses the World Wide Web to facilitate cybercrime. Web threats use multiple types of malware and fraud, all of which utilize HTTP or HTTPS protocols, but may also employ other protocols and components, such as links in email or IM, or malware attachments or malware on servers that access the Web. They benefit cybercriminals by stealing information for subsequent sale and help absorb infected PCs into botnets. Web threats pose a broad range of risks, including financial damages, identity theft, loss of confidential information/data, theft of network resources, damaged brand/personal reputation, and erosion of consumer confidence in e-commerce and online banking. It is a type of threat related to information technology (IT). IT risk, i.e. risk affecting information technology, has gained an increasing impact on society due to the spread of IT processes. Reaching path Web threats can be divided into two primary categories, based on delivery method – push and pull. Push-based threats use spam, phishing, or other fraudulent means to lure a user to a malicious (often spoofed) website which then collects information and/or injects malware. Push attacks use phishing, DNS poisoning (or pharming), and other means to appear to originate from a trusted source. Precisely-targeted push-based web threats are often referred to as spear phishing to reflect the focus of their data gathering attack. Spear phishing typically targets specific individuals and groups for financial gain. In other push-based web threats, malware authors use social engineering such as enticing subject lines that reference holidays, popular personalities, sports, pornography, world events and other hot topics to persuade recipients to open the email and follow links to malicious websites or open attachments with malware that accesses the Web. Pull-based web threats are often referred to as “drive-by” threats by experts (and more commonly as “drive-by downloads” by journalists and the general public), since they can affect any website visitor. Cybercriminals infect legitimate websites, which unknowingly transmit malware to visitors or alter search results to take users to malicious websites. Upon loading the page, the user's browser passively runs a malware downloader in a hidden HTML frame (IFRAME) without any user interaction. Growth of web threats Giorgio Maone wrote in 2008 that "if today’s malware mostly runs on Windows because it’s the commonest executable platform, tomorrow’s will likely run on the Web, for the very same reason. Because, like it or not, the Web is already a huge executable platform, and we should start thinking of it this way, from a security perspective." The growth of web threats is a result of the popularity of the Web – a relatively unprotected, widely and consistently used medium that is crucial to business productivity, online banking, and e-commerce as well as the everyday lives of people worldwide. The appeal of Web 2.0 applications and websites increases the vulnerability of the Web. Most Web 2.0 applications make use of AJAX, a group of web development programming tools used for creating interactive web applications or rich Internet applications. While users benefit from greater interactivity and more dynamic websites, they are also exposed to the greater security risks inherent in browser client processing. Examples In September 2008, malicious hackers broke into several sections of BusinessWeek.com to redirect visitors to malware-hosting websites. 
Hundreds of pages were compromised with malicious JavaScript pointing to third-party servers. In August 2008, popular social networking sites were hit by a worm using social engineering techniques to get users to install a piece of malware. The worm installs comments on the sites with links to a fake site. If users follow the link, they are told they need to update their Flash Player. The installer then installs malware rather than the Flash Player. The malware then downloads a rogue anti-spyware application, AntiSpy Spider. Another attack compromised humanitarian, government and news sites in the UK, Israel and Asia; in this attack the compromised websites led, through a variety of redirects, to the download of a Trojan. In September 2017, visitors to TV network Showtime's website found that the website included Coinhive code that automatically began mining for Monero cryptocurrency without user consent. The Coinhive software was throttled to use only twenty percent of a visiting computer's CPU to avoid detection. Shortly after this discovery was publicized on social media, the Coinhive code was removed. Showtime declined to comment for multiple news articles. It is unknown whether Showtime inserted this code into its website intentionally or whether the addition of cryptomining code was the result of a website compromise. Coinhive offers code for websites that requires user consent prior to execution, but less than 2 percent of Coinhive implementations use this code. German researchers have defined cryptojacking as websites executing cryptomining on visiting users' computers without prior consent. With one out of every five hundred websites hosting a cryptomining script, cryptojacking is a persistent web threat. Prevention and detection Conventional approaches have failed to fully protect consumers and businesses from web threats. The most viable approach is to implement multi-layered protection: protection in the cloud, at the Internet gateway, across network servers and on the client. See also Asset (computing) Attack (computing) Botnets Browser security Countermeasure (computer) Cybercrime Cyberwarfare Denial-of-service attack High Orbit Ion Cannon IT risk Internet safety Internet security Low Orbit Ion Cannon Man-in-the-browser rich Internet applications Threat (computer) Vulnerability (computing) Web applications Web development References Internet security Web security exploits Cybercrime Cyberwarfare
Web threat
Technology
1,236
11,437,573
https://en.wikipedia.org/wiki/Cladosporium%20cucumerinum
Cladosporium cucumerinum is a fungal plant pathogen that affects cucumbers. References cucumerinum Fungal plant pathogens and diseases Vegetable diseases Fungi described in 1889 Fungus species
Cladosporium cucumerinum
Biology
40
320,056
https://en.wikipedia.org/wiki/Pyrex
Pyrex (trademarked as PYREX and pyrex) is a brand introduced by Corning Inc. in 1915, initially for a line of clear, low-thermal-expansion borosilicate glass used for laboratory glassware and kitchenware. It was later expanded in the 1930s to include kitchenware products made of soda–lime glass and other materials. Its name has become famous for making rectangular glass roasters. In 1998, the kitchenware division of Corning Inc. responsible for the development of Pyrex spun off from its parent company as Corning Consumer Products Company, subsequently renamed Corelle Brands. Corning Inc. no longer manufactures or markets consumer products, only industrial ones. History Borosilicate glass was first made by German chemist and glass technologist Otto Schott, founder of Schott AG in 1893, 22 years before Corning produced the Pyrex brand. Schott AG sells the product under the name "Duran". In 1908, Eugene Sullivan, director of research at Corning Glass Works, developed Nonex, a borosilicate low-expansion glass, to reduce breakage in shock-resistant lantern globes and battery jars. Sullivan had learned about Schott's borosilicate glass as a doctoral student in Leipzig, Germany. Jesse Littleton of Corning discovered the cooking potential of borosilicate glass by giving his wife Bessie Littleton a casserole dish made from a cut-down Nonex battery jar. Corning removed the lead from Nonex and developed it as a consumer product. Pyrex made its public debut in 1915 during World War I, positioned as an American-produced alternative to Duran. A Corning executive gave the following account of the etymology of the name "Pyrex": Corning purchased the Macbeth-Evans Glass Company in 1936 and their Charleroi, PA plant was used to produce Pyrex opal ware bowls and bakeware made of tempered soda–lime glass. In 1958 an internal design department was started by John B. Ward. He redesigned the Pyrex ovenware and Flameware. Over the years, designers such as Penny Sparke, Betty Baugh, Smart Design, TEAMS Design, and others have contributed to the design of the line. Corning divested itself of the Corning Consumer Products Company (now known as Corelle Brands) in 1998 and production of consumer Pyrex products went with it. Its previous licensing of the name to Newell Cookware Europe remained in effect. France-based cookware maker Arc International acquired Newell's European business in early 2006 to own rights to the brand in Europe, the Middle East and Africa. In 2007, Arc closed the Pyrex soda–lime factory in Sunderland, UK moving all European production to France. The Sunderland factory had first started making Pyrex in 1922. In 2014, Arc International sold off its Arc International Cookware division which operated the Pyrex business to Aurora Capital for its Resurgence Fund II. The division was renamed the International Cookware group. London-based private equity firm Kartesia purchased International Cookware in 2020. In 2021, Pyrex rival Duralex was acquired by International Cookware group for €3.5 million (US$4.2m). In March 2019, Corelle Brands, the makers of Pyrex in the United States, merged with Instant Brands, the makers of the Instant Pot. On June 12, 2023, Instant Brands filed for Chapter 11 bankruptcy after high interest rates and waning access to credit hit its cash position and made its debts unsustainable. The company emerged from bankruptcy on February 27, 2024 under the previous Corelle Brands moniker, after having sold off its appliance business ("Instant" branded products). 
Trademark In Europe, Africa, and the Middle East, a variation of the PYREX (all uppercase) trademark is licensed by International Cookware for bakeware that has been made of numerous materials including borosilicate and soda–lime glass, stoneware, metal, plus vitroceramic cookware. The pyrex (all lowercase, introduced in 1975) trademark is now used for kitchenware sold in the United States, South America, and Asia. In the past, the brand name has also been used for kitchen utensils and bakeware by other companies in regions such as Japan and Australia. It is a common misconception that the logo style alone indicates the type of glass used to manufacture the bakeware. Additionally, Corning's introduction of soda-lime-glass-based Pyrex in the 1940s predates the introduction of the all lowercase logo by nearly 30 years. Composition Older clear-glass Pyrex manufactured by Corning, Arc International's Pyrex products, and Pyrex laboratory glassware are made of borosilicate glass. According to the National Institute of Standards and Technology, borosilicate Pyrex is composed of (as percentage of weight): 4.0% boron, 54.0% oxygen, 2.8% sodium, 1.1% aluminum, 37.7% silicon, and 0.3% potassium. According to glass supplier Pulles and Hannique, borosilicate Pyrex is made of Corning 7740 glass and is equivalent in formulation to Schott Glass 8330 glass sold under the "Duran" brand name. The composition of both Corning 7740 and Schott 8330 is given as 80.6% , 12.6% , 4.2% , 2.2% , 0.1% , 0.1% , 0.05% , and 0.04% . In the late 1930s and 1940s, Corning also introduced new product lines under the Pyrex brand using different types of glass. Opaque tempered soda–lime glass was used to create decorated opal ware bowls and bakeware, and aluminosilicate glass was used for Pyrex Flameware stovetop cookware. The latter product had a bluish tint caused by the addition of alumino-sulfate. Beginning in the 1980s, production of clear Pyrex glass products manufactured in the USA by Corning was also shifted to tempered soda–lime glass, like its popular opal bakeware. This change was justified by stating that soda–lime glass has higher mechanical strength than borosilicate, making it more resistant to physical damage when dropped, which is believed to be the most common cause of breakage in glass bakeware. The glass is also cheaper to produce and more environmentally friendly. Its thermal shock resistance is lower than borosilicate's, leading to potential breakage from heat stress if used contrary to recommendations. Since the closure of the soda–lime plant in England in 2007, European Pyrex has been made solely from borosilicate. The differences between Pyrex-branded glass products have also led to controversy regarding safety issues. In 2008, the U.S. Consumer Product Safety Commission reported it had received 66 complaints by users reporting that their Pyrex glassware had shattered over the prior ten years, yet concluded that Pyrex glass bakeware does not present a safety concern. The consumer affairs magazine Consumer Reports investigated the issue and released test results, in January 2011, confirming that borosilicate glass bakeware was less susceptible to thermal shock breakage than tempered soda lime bakeware. They admitted their testing conditions were "contrary to instructions" provided by the manufacturer. 
STATS analyzed the data available and found that the most common way that users were injured by glassware was via mechanical breakage, being hit or dropped, and that "the change to soda lime represents a greater net safety benefit." Use in telescopes Because of its low expansion characteristics, borosilicate glass is often the material of choice for reflective optics in astronomy applications. In 1932, George Ellery Hale approached Corning with the challenge of fabricating the telescope mirror for the California Institute of Technology's Palomar Observatory project. A previous effort to fabricate the optic from fused quartz had failed, with the cast blank having voids. The mirror was cast by Corning during 1934–1936 out of borosilicate glass. After a year of cooling, during which it was almost lost to a flood, the blank was completed in 1935. The first blank now resides in the Corning Museum of Glass. See also Jena glass Citations General and cited references External links Pyrex Love, a vintage Pyrex reference site American brands Boron compounds Corning Inc. Glass trademarks and brands Kitchenware brands Kitchenware Low-expansion glass Products introduced in 1915 Companies that filed for Chapter 11 bankruptcy in 2023 Transparent materials
Pyrex
Physics
1,788
5,260,471
https://en.wikipedia.org/wiki/Signal%20tracer
A signal tracer is a piece of electronic test equipment used to troubleshoot radio and other electronic circuitry. Usually a very simple device, it normally provides an amplifier, and a loudspeaker, often battery-powered and packaged into a small, hand-held test probe. An optional diode detector is usually also provided, allowing the detection of amplitude-modulated signals. The technician injects a test signal into the device under test. Then, by using the signal tracer, the tech can follow the signal through the various circuits of the radio receiver. So long as the signal can be heard, the circuitry up to that point is (at least minimally) functional. If the signal disappears, however, a fault can be assumed to be present in the stage of the circuit just passed. The diode detector is only sensitive to amplitude modulation but even circuits that are normally used for other modulation schemes (such as FM radios) can be tested by using an AM test signal for testing the radio frequency circuits, then switching to an FM test signal (and switching out the diode detector) for testing the audio circuits of the radio. More sophisticated signal tracers may display digital levels using, for example, LEDs. For long pulse trains, a cyclic redundancy check may be calculated and displayed, giving the tech insight into the content of circuits that are switching rapidly. References Electronic test equipment
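As an illustration of the cyclic-redundancy-check display mentioned above, the sketch below computes a simple CRC-8 over a captured pulse train. The polynomial, initial value and byte framing are assumptions chosen for illustration, not a description of any particular instrument.

```python
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    """Bitwise CRC-8 using the polynomial x^8 + x^2 + x + 1 by default."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# A hypothetical captured pulse train, already framed as bytes by the tracer front end.
captured = bytes([0xA5, 0x5A, 0x0F, 0xF0])
print(f"CRC-8 of captured frame: 0x{crc8(captured):02X}")
```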
Signal tracer
Technology,Engineering
286
71,759,775
https://en.wikipedia.org/wiki/Rhopalophora%20clavispora
Rhopalophora () is a genus of lichen-like fungus in the family Dactylosporaceae. It contains the sole species R. clavispora, previously belonging to the genus Phialophora but redescribed in 2016 to compose this monotypic genus. Description Members of Rhopalophora are lignicolous fungi of mycelium made of hyaline or pigmented hyphae that are occasionally monilioid. They have no conidiomata. Their conidiophores are pale brown in color, unbranched, macronematous (i.e. morphologically different from vegetative hyphae), often reduced to phialides generated directly from undifferentiated hyphae, sometimes with percurrent regeneration. Phialides are light brown in color, paler towards the tip, integrated, subcylindrical and sometimes with sympodial proliferation, tapering toward the collarette. The conidia are hyaline, aseptate, clavate, truncate at the base, and arranged in chains or heads. The sexual morph of this genus is unknown. Taxonomy The genus Rhopalophora was described in 2016 by Martina Réblová, Wendy A. Untereiner & Walter Gams to accommodate the species R. clavispora, which was initially described in 1976 by the same Walter Gams as a member of Phialophora. The separation was due to a phylogenetic analysis revealing Phialophora clavispora to be more related to other fungal members of Sclerococcaceae than to Phialophora. References Lecanorales Fungi described in 1976 Fungus species
Rhopalophora clavispora
Biology
358
44,597,365
https://en.wikipedia.org/wiki/5356%20aluminium%20alloy
5356 aluminium alloy is an alloy in the wrought aluminium-magnesium family (5000 or 5xxx series). Unlike most aluminium-magnesium alloys, it is primarily used as welding filler. It is one of the most popular aluminium filler alloys, alongside 4043. It possesses relatively high strength, but at the expense of being more vulnerable to cracking. It is the preferred filler when making lap or butt welds on the popular 6061 aluminium alloy, or when the welded parts are to be anodized. References Aluminium alloy table Aluminium–magnesium alloys
5356 aluminium alloy
Chemistry
117
10,575,017
https://en.wikipedia.org/wiki/NGC%207662
NGC 7662 is a planetary nebula located in the northern constellation Andromeda. It is known as the Blue Snowball Nebula, Snowball Nebula, and Caldwell 22. This nebula was discovered October 6, 1784 by the German-born English astronomer William Herschel. In the New General Catalogue it is described as a "magnificent planetary or annular nebula, very bright, pretty small in angular size, round, blue, variable nucleus". The object has an apparent visual magnitude of 8.3 and spans an angular size of . Parallax measurements give a distance estimate of . NGC 7662 is a popular planetary nebula for casual observers. A small telescope will reveal a star-like object with slight nebulosity. A 6" telescope with a magnification around 100x will reveal a slightly bluish disk, while telescopes with a primary mirror at least 16" in diameter may reveal slight color and brightness variations in the interior. This nebula has an elliptical shape with a triple-shell structure. The brightest is the main shell, which spans . This is surrounded by a fainter outer shell, which has an elliptical form. Both shells are enclosed by a faint, circular halo, some in diameter. The two shells can be modeled as prolate spheroids, with the inner shell having the greater elongation, a major axis tilt of 50° to the line of sight, and a hull thickness of . Several knots and a jet-like structure are visible, which display emission lines and low ionization. Based on the expansion rate, the estimated age of the nebula is 3,080 years. The central star of the planetary nebula is a subdwarf O star with a spectral type of sdO. The best fit model for this star gives an effective temperature of 100 kK, with 5,250 times the luminosity of the Sun and 60.5% of the Sun's mass. X-ray emission from the nebula is being generated by the stellar wind from this star striking previously ejected matter. Image gallery See also NGC 2022, which resembles NGC 7662 NGC 2392 NGC 3242 List of NGC objects Planetary nebulae References External links Planetary nebulae O-type subdwarfs Andromeda (constellation) 7662 022b 17841006
NGC 7662
Astronomy
468
3,859,080
https://en.wikipedia.org/wiki/TSQ
6-Methoxy-(8-p-toluenesulfonamido)quinoline (TSQ) is one of the most efficient fluorescent stains for zinc(II). It was introduced by Soviet biochemists Toroptsev and Eshchenko in the early 1970s. The popularity of TSQ as physiological stain rose after seminal works by Christopher Frederickson two decades later. TSQ forms a 2:1 (ligand-metal) complex with zinc and emits blue light upon excitation at 365 nanometers. TSQ has been extensively applied for determination of extracellular or intracellular levels of Zn2+ in biological systems, also to study Zn2+ in mossy fibers of the hippocampus. References Fluorescent dyes Quinolinols Sulfonamides Aromatic amines
TSQ
Chemistry,Biology
169
11,466,157
https://en.wikipedia.org/wiki/Uromyces%20apiosporus
Uromyces apiosporus is a fungal species and plant pathogen infecting Primula, including Primula minima in New Zealand. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Ornamental plant pathogens and diseases apiosporus Fungi described in 1873 Fungus species
Uromyces apiosporus
Biology
67
16,849,636
https://en.wikipedia.org/wiki/RNF5P1
Ring finger protein 5 pseudogene 1, also known as RNF5P1, is a human gene. References Further reading RING finger proteins Pseudogenes
RNF5P1
Chemistry
34
33,454,300
https://en.wikipedia.org/wiki/Xiaomi
Xiaomi Corporation (; ), commonly known as Xiaomi, is a Chinese designer and manufacturer of consumer electronics and related software, home appliances, automobiles and household hardware, with headquarters in Beijing, China. It is the second-largest manufacturer of smartphones in the world, behind Samsung, most of which run on the Xiaomi HyperOS (former MIUI) operating system. The company is ranked 338th and is the youngest company on the Fortune Global 500. Xiaomi was founded in 2010 in Beijing by Lei Jun along with six associates. Lei had founded Kingsoft as well as Joyo.com, the latter of which he sold to Amazon for $75 million in 2004. In August 2011, Xiaomi released its first smartphone and, by 2014, it had the largest market share of smartphones sold in China. Initially the company only sold its products online; however, it later opened brick and mortar stores. By 2015, it was developing a wide range of consumer electronics. In 2020, the company sold 149.4 million smartphones and its MIUI (now Xiaomi HyperOS) mobile operating system has over 500 million monthly active users. As of August 2024, Xiaomi is the second-largest seller of smartphones worldwide, with a market share of about 12%, according to Counterpoint. Its presence led some people to call Xiaomi the "Apple of China". It has come up with its own range of wearable items. It also is a major manufacturer of appliances including televisions, flashlights, unmanned aerial vehicles, and air purifiers using its Internet of things and Xiaomi Smart Home product ecosystems. Xiaomi keeps its prices close to its manufacturing costs and bill of materials costs by keeping most of its products in the market for 18 months, longer than most smartphone companies. The company also uses inventory optimization and flash sales to keep its inventory low. History 2010–2013 On 6 April 2010 Xiaomi was co-founded by Lei Jun and six others: Lin Bin (), vice president of the Google China Institute of Engineering Zhou Guangping (), senior director of the Motorola Beijing R&D center Liu De (), department chair of the Department of Industrial Design at the University of Science and Technology Beijing Li Wanqiang (), general manager of Kingsoft Dictionary Huang Jiangji (), principal development manager Hong Feng (), senior product manager for Google China Lei had founded Kingsoft as well as Joyo.com, the latter of which he sold to Amazon for $75 million in 2004. At the time of the founding of the company, Lei was dissatisfied with the products of other mobile phone manufacturers and thought he could make a better product. On 16 August 2010, Xiaomi launched its first Android-based firmware MIUI (Now Xiaomi HyperOS). In 2010, the company raised $41 million in a Series A round. In August 2011, the company launched its first phone, the Xiaomi Mi 1. The device had Xiaomi's MIUI firmware along with Android installation. In December 2011, the company raised $90 million in a Series B round. In June 2012, the company raised $216 million of funding in a Series C round at a $4 billion valuation. Institutional investors participating in the first round of funding included Temasek Holdings, IDG Capital, Qiming Venture Partners and Qualcomm. In August 2012, the Xiaomi user forum website suffered a data breach. In all, 7 million email addresses appeared in the breach although a significant portion of them were numeric aliases on the bbs_ml_as_uid.xiaomi.com domain. Usernames, IP addresses and passwords stored as salted MD5 hashes were also exposed. 
In August 2013, the company hired Hugo Barra from Google, where he served as vice president of product management for the Android platform. He was employed as vice president of Xiaomi to expand the company outside of mainland China, making Xiaomi the first company selling smartphones to poach a senior staffer from Google's Android team. He left the company in February 2017. In September 2013, Xiaomi announced its Xiaomi Mi 3 smartphone and an Android-based 47-inch 3D-capable Smart TV assembled by Sony TV manufacturer Wistron of Taiwan. In October 2013, it became the fifth-most-used smartphone brand in China. In 2013, Xiaomi sold 18.7 million smartphones. 2014–2017 In February 2014, Xiaomi announced its expansion outside China, with an international headquarters in Singapore. In April 2014, Xiaomi purchased the domain name mi.com for a record , the most expensive domain name ever bought in China, replacing xiaomi.com as the company's main domain name. In September 2014, Xiaomi acquired a 24.7% stake in Roborock. In December 2014, Xiaomi raised US$1.1 billion at a valuation of over US$45 billion, making it one of the most valuable private technology companies in the world. The financing round was led by Hong Kong-based technology fund All-Stars Investment Limited, a fund run by former Morgan Stanley analyst Richard Ji. In 2014, the company sold over 60 million smartphones. In 2014, 94% of the company's revenue came from mobile phone sales. In April 2015, Ratan Tata acquired a stake in Xiaomi. On 30 June 2015, Xiaomi announced its expansion into Brazil with the launch of locally manufactured Redmi 2; it was the first time the company assembled a smartphone outside of China. However, the company left Brazil in the second half of 2016. On 26 February 2016, Xiaomi launched the Mi5, powered by the Qualcomm Snapdragon 820 processor. On 3 March 2016, Xiaomi launched the Redmi Note 3 Pro in India, the first smartphone to be powered by a Qualcomm Snapdragon 650 processor. On 10 May 2016, Xiaomi launched the Mi Max, powered by the Qualcomm Snapdragon 650/652 processor. In June 2016, the company acquired patents from Microsoft. In September 2016, Xiaomi launched sales in the European Union (EU) through a partnership with ABC Data. Also in September 2016, the Xiaomi Mi Robot vacuum was released by Roborock. On 26 October 2016, Xiaomi launched the Mi Mix, powered by the Qualcomm Snapdragon 821 processor. On 22 March 2017, Xiaomi announced that it planned to set up a second manufacturing unit in India in partnership with contract manufacturer Foxconn. On 19 April 2017, Xiaomi launched the Mi6, powered by the Qualcomm Snapdragon 835 processor. In July 2017, the company entered into a patent licensing agreement with Nokia. On 5 September 2017, Xiaomi released Xiaomi Mi A1, the first Android One smartphone under the slogan: Created by Xiaomi, Powered by Google. Xiaomi stated started working with Google for the Mi A1 Android One smartphone earlier in 2017. An alternate version of the phone was also available with MIUI, the MI 5X. In 2017, Xiaomi opened Mi Stores in India, Pakistan and Bangladesh. The EU's first Mi Store was opened in Athens, Greece in October 2017. In Q3 2017, Xiaomi overtook Samsung to become the largest smartphone brand in India. Xiaomi sold 9.2 million units during the quarter. On 7 November 2017, Xiaomi commenced sales in Spain and western Europe. 2018–2021 In April 2018, Xiaomi announced a smartphone gaming brand called Black Shark. 
It had 6GB of RAM coupled with Snapdragon 845 SoC, and was priced at $508, which was cheaper than its competitors. On 2 May 2018, Xiaomi announced the launch of Mi Music and Mi Video to offer "value-added internet services" in India. On 3 May 2018, Xiaomi announced a partnership with 3 to sell smartphones in the United Kingdom, Ireland, Austria, Denmark, and Sweden. In May 2018, Xiaomi began selling smart home products in the United States through Amazon. In June 2018, Xiaomi became a public company via an initial public offering on the Hong Kong Stock Exchange, raising $4.72 billion. On 7 August 2018, Xiaomi announced that Holitech Technology Co. Ltd., Xiaomi's top supplier, would invest up to $200 million over the next three years to set up a major new plant in India. In August 2018, the company announced POCO as a mid-range smartphone line, first launching in India. In Q4 of 2018, the Xiaomi Pocophone F1 became the best-selling smartphone sold online in India. The Pocophone was sometimes referred to as the "flagship killer" for offering high-end specifications at an affordable price. The company opened new headquarters in Beijing in July 2019 after almost four years of construction. In October 2019, the company announced that it would launch more than 10 5G phones in 2020, including the Mi 10/10 Pro with 5G functionality. On 5 November 2019, Xiaomi announced that it would enter the Japanese market. It established a subsidiary, Xiaomi Japan, as parts of its effort to enter the Japanese smartphone market. On 17 January 2020, POCO India became a separate sub-brand of Xiaomi with entry-level and mid-range devices, followed by its global counterpart on 24 November 2020. In March 2020, Xiaomi launched their first foldable phone, the Mi Mix Fold. Powered by Qualcomm Snapdragon 888 with an 8.01-inch foldable AMOLED display when open and a 6.5-inch external display when folded. In March 2020, Xiaomi showcased its new 40W wireless charging solution, which was able to fully charge a smartphone with a 4,000mAh battery from flat in 40 minutes. In October 2020, Xiaomi became the third-largest smartphone maker in the world by shipment volume, shipping 46.2 million handsets in Q3 2020. On 30 March 2021, Xiaomi announced its intention to invest US$10 billion in electric vehicles over the following ten years. On 31 March 2021, Xiaomi announced a new logo for the company, designed by Kenya Hara. In July 2021, Xiaomi became the second largest smartphone maker in the world, according to Canalys. It also surpassed Apple for the first time in Europe, making it the second-largest in Europe according to Counterpoint. In August 2021, the company acquired autonomous driving company Deepmotion for $77 million. In December 2021, Xiaomi announced the Xiaomi 12 and Xiaomi 12 Pro. The phones are powered by the Snapdragon 8 Gen 1 chipset. Since 2022 In April 2022, Xiaomi officially joined the Car Connectivity Consortium (CCC) board. In May 2022, the Indian court lifted the $725 million freeze on Xiaomi by federal agencies. In June 2022, Xiaomi established Zhuhai Xinshi Semiconductor Technology Co., Ltd., with a registered capital of 200 million RMB. The business scope includes: integrated circuit manufacturing, integrated circuit chip design and services, integrated circuit chip and product manufacturing, integrated circuit design, manufacturing of specialized equipment for semiconductor devices, manufacturing of semiconductor discrete devices, manufacturing of semiconductor lighting devices etc. 
The company is jointly held by Xiaomi's affiliated company Hubei Xiaomi Changjiang Industrial Fund Management and others. In July 2022, Xiaomi and its sub-brand POCO combined held a 42% market share in the Russian smartphone market, ranking first. On 1 August 2022, Xiaomi India elevated COO Murali Krishnan B to president, responsible for the company's daily operations, services, public affairs, and strategic projects, stating that he would continue to work towards strengthening the company's commitment to the Made in India and Digital India initiatives. On 3 August 2022, the 2022 Fortune Global 500 list was released, with Xiaomi Group ranking 266th, a rise of 72 positions compared to the previous year. In December 2022, Xiaomi announced that global cumulative sales of the Redmi Note series had exceeded 300 million units. On 28 February 2023, Redmi released a 300W fast charging technology, claiming that it can charge a 4100mAh battery to 10% in just 3 seconds, to 50% in 2 minutes and 13 seconds, and fully within 5 minutes. Corporate affairs Business trends The key trends for Xiaomi are (as of the financial year ending December 31): Corporate identity Name etymology Xiǎomǐ (小米) is the Chinese word for "millet". In 2011, its CEO Lei Jun suggested there are more meanings than just "millet and rice". He linked the xiǎo (小) part to the Buddhist concept that "a single grain of rice of a Buddhist is as great as a mountain", suggesting that Xiaomi wants to work from the little things, instead of starting by striving for perfection, while mǐ (米) is an acronym for "Mobile Internet" and also "mission impossible", referring to the obstacles encountered in starting the company. He also stated that he thinks the name is cute. In 2012, Lei Jun said that the name is about revolution and being able to bring innovation into a new area. Xiaomi's new "Rifle" processor has given weight to several sources linking the latter meaning to the Chinese Communist Party's "millet and rifle" (小米加步枪) revolutionary idiom from the Second Sino-Japanese War. Logo and mascot Xiaomi's first logo consisted of a single orange square with the letters "MI" in white located in the center of the square. This logo was in use until 31 March 2021, when it was replaced by a new logo designed by the well-known Japanese designer Kenya Hara, which keeps the same basic structure but replaces the square with a rounded-corner "squircle", retains the letters "MI", and uses a slightly darker hue. Xiaomi's mascot, Mitu, is a white rabbit wearing an ushanka (known locally as a "Lei Feng hat" in China) with a red star and a red pioneer tie around its neck. Later on, the red star on the hat was replaced by the company's logo. Innovation and development In the 2021 review of WIPO's annual World Intellectual Property Indicators, Xiaomi was ranked 2nd in the world, with 216 industrial design registrations published under the Hague System during 2020, up from its 3rd-place ranking in 2019 with 111 published registrations. On 8 February 2022, Lei released a statement on Weibo to announce plans for Xiaomi to enter the high-end smartphone market and surpass Apple as the top seller of premium smartphones in China within three years. 
To achieve that goal, Xiaomi will invest US$15.7 billion in R&D over the next five years, and the company will benchmark its products and user experience against Apple's product lines. Lei described the new strategy as a "life-or-death battle for our development" in his Weibo post, after Xiaomi's market share in China contracted over consecutive quarters, from 17% to 14% between Q2 and Q3 2021, dipping further to 13.2% as of Q4 2021. According to a report by Canalys, Xiaomi led Indian smartphone sales in the first quarter; it is one of the leading smartphone makers in India and has maintained device affordability. In 2022, Xiaomi announced and publicly debuted the company's humanoid robot prototype. While the robot's abilities remain very limited, the announcement marked the company's ambition to integrate AI into its product designs and to develop its humanoid robot project further. Electric vehicles In 2021, Xiaomi announced a US$10 billion investment into electric vehicles (EVs). In late 2023, Xiaomi Auto unveiled its first production vehicle, the Xiaomi SU7, and publicly announced a goal to become one of the five largest automakers in the world. On 28 March 2024, Xiaomi officially launched the SU7 sedan in Beijing. Xiaomi's SU7 was manufactured under contract with BAIC Group. Xiaomi obtained a production license for electric vehicles in July 2024, allowing it to independently manufacture its electric vehicles. Xiaomi's EV factory, located in the Beijing Economic-Technological Development Area, is centered around its proprietary integrated die casting system, the Hyper Die-Casting 79100 Cluster. This reportedly allows the factory to produce an SU7 every 76 seconds when running at full capacity. Xiaomi was included in Time's 2024 list of influential companies. Partnerships Xiaomi and Harman Kardon In 2021, Harman Kardon collaborated with Xiaomi on its newest smartphones; the Xiaomi Mi 11 series were the first smartphones to feature a Harman Kardon-tuned dual-speaker setup. Xiaomi and Leica In 2022, Leica Camera entered a strategic partnership with Xiaomi to jointly develop Leica cameras to be used in Xiaomi flagship smartphones, succeeding the partnership between Huawei and Leica. The first flagship smartphones under this new partnership were the Xiaomi 12S Ultra and Xiaomi MIX Fold 2, launched in July and August 2022, respectively. Xiaomi Studios In 2021, Xiaomi began collaborating with directors to create short films shot entirely using the Xiaomi Mi 11 line of phones. In 2022, they made two shorts with Jessica Henwick. The first, Bus Girl, won several awards and was long-listed for Best British Short at the 2023 BAFTAs. Reception Imitation of Apple Inc. Xiaomi has been accused of imitating Apple Inc. The hunger marketing strategy of Xiaomi was described as riding on the back of the "cult of Apple". After reading a book about Steve Jobs in college, Xiaomi's chairman and CEO, Lei Jun, carefully cultivated a Steve Jobs image, including jeans, dark shirts, and Jobs' announcement style at Xiaomi's earlier product announcements. He was characterized as a "counterfeit Jobs." In 2013, critics debated how many of Xiaomi's products were innovative, and how much of their innovation was just really good public relations. Others point out that while there are similarities to Apple, the ability to customize the software based upon user preferences through the use of Google's Android operating system sets Xiaomi apart. 
Xiaomi has also developed a much wider range of consumer products than Apple. Violation of GNU General Public License In January 2018, Xiaomi was criticized for its non-compliance with the terms of the GNU General Public License. The Android project's Linux kernel is licensed under the copyleft terms of the GPL, which requires Xiaomi to distribute the complete source code of the Android kernel and device trees for every Android device it distributes. By refusing to do so, or by unreasonably delaying these releases, Xiaomi is operating in violation of intellectual property law in China, a WIPO member state. Prominent Android developer Francisco Franco publicly criticized Xiaomi's behaviour after repeated delays in the release of kernel source code. Xiaomi in 2013 said that it would release the kernel code. The kernel source code was available on the GitHub website in 2020. Privacy concerns and data collection As a company based in China, Xiaomi is obligated to share data with the Chinese government under the China Internet Security Law and National Intelligence Law. There were reports that Xiaomi's Cloud messaging service sends some private data, including call logs and contact information, to Xiaomi servers. Xiaomi later released an MIUI update that made cloud messaging optional and stated that no private data would be sent to Xiaomi servers if the cloud messaging service was turned off. On 23 October 2014, Xiaomi announced that it was setting up servers outside of China for international users, citing improved services and compliance with regulations in several countries. On 19 October 2014, the Indian Air Force issued a warning against Xiaomi phones, stating that they were a national threat as they sent user data to an agency of the Chinese government. In April 2019, researchers at Check Point found a security flaw in Xiaomi phone apps; the vulnerable app was reported to be preinstalled. On 30 April 2020, Forbes reported that Xiaomi extensively tracks use of its browsers, including private browser activity, phone metadata and device navigation, and more alarmingly, without secure encryption or data anonymization, more invasively and to a greater extent than mainstream browsers. Xiaomi disputed the claims, while confirming that it did extensively collect browsing data, and saying that the data was not linked to any individuals and that users had consented to being tracked. Xiaomi posted a response stating that the aggregated usage statistics it collects are used for internal analysis and that it would not link any personally identifiable information to this data. However, after a follow-up by Gabriel Cirlig, the writer of the report, Xiaomi added an option to completely stop the information leak when using its browser in incognito mode. Censorship In September 2021, amidst a political spat between China and Lithuania, the Lithuanian Ministry of National Defence urged people to dispose of Chinese-made mobile phones and avoid buying new ones, after the National Cyber Security Centre of Lithuania claimed that Xiaomi devices have built-in censorship capabilities that can be turned on remotely. Xiaomi denied the accusations, saying that it "does not censor communications to or from its users", and that it would be engaging a third party to assess the allegations. 
Xiaomi also stated that, regarding data privacy, it was compliant with two frameworks aligned with Europe's General Data Protection Regulation (GDPR), namely the ISO/IEC 27001 Information Security Management standard and the ISO/IEC 27701 Privacy Information Management System. Legal actions State Administration of Radio, Film and Television issue In November 2012, Xiaomi's smart set-top box stopped working one week after the launch due to the company having run afoul of China's National Radio and Television Administration. The regulatory issues were overcome in January 2013. Misleading sales figures The Taiwanese Fair Trade Commission investigated the flash sales and found that Xiaomi had sold fewer smartphones than advertised. Xiaomi claimed that the number of smartphones sold was 10,000 units each for the first two flash sales, and 8,000 units for the third one. However, the FTC investigated the claims and found that Xiaomi sold 9,339 devices in the first flash sale, 9,492 units in the second one, and 7,389 units for the third. It was found that during the first flash sale, Xiaomi had given 1,750 priority ‘F-codes’ to people who could place their orders without having to go through the flash sale, thus diminishing the stock that was publicly available. The FTC fined Xiaomi. Shut down of Australia store In March 2014, Xiaomi Store Australia (an unrelated business) began selling Xiaomi mobile phones online in Australia through its website, XiaomiStore.com.au. However, Xiaomi soon requested that the store be shut down by 25 July 2014. On 7 August 2014, shortly after sales were halted, the website was taken down. An industry commentator described the action by Xiaomi to get the Australian website closed down as unprecedented, saying, "I’ve never come across this [before]. It would have to be a strategic move." At the time, this left only one online vendor selling Xiaomi mobile phones into Australia, namely Yatango (formerly MobiCity), which was based in Hong Kong. This business closed in late 2015. Temporary ban in India due to patent infringement On 9 December 2014, the Delhi High Court granted an ex parte injunction that banned the import and sale of Xiaomi products in India. The injunction was issued in response to a complaint filed by Ericsson in connection with the infringement of its patents licensed under reasonable and non-discriminatory terms. The injunction was applicable until 5 February 2015, the date on which the High Court was scheduled to summon both parties for a formal hearing of the case. On 16 December, the High Court granted permission to Xiaomi to sell its devices running on a Qualcomm-based processor until 8 January 2015. Xiaomi then held various sales on Flipkart, including one on 30 December 2014. Its flagship Xiaomi Redmi Note 4G phone sold out in six seconds. A judge extended the division bench's interim order, allowing Xiaomi to continue the sale of Qualcomm chipset-based handsets until March 2018. Lawsuit by KPN alleging patent infringement On 19 January 2021, KPN, a Dutch landline and mobile telecommunications company, sued Xiaomi and others for patent infringement. KPN filed similar lawsuits against Samsung in 2014 and 2015 in a court in the US. Lawsuit by Wyze alleging invalid patent In July 2021, Xiaomi submitted a report to Amazon alleging that Wyze Labs had infringed upon its 2019 "Autonomous Cleaning Device and Wind Path Structure of Same" robot vacuum patent. 
On 15 July 2021, Wyze filed a lawsuit against Xiaomi in the US District Court for the Western District of Washington, arguing that prior art exists and asking the court for a declaratory judgment that Xiaomi's 2019 robot vacuum patent is invalid. Asset seizure in India In April 2022, India's Enforcement Directorate seized assets from Xiaomi as part of an investigation into violations of foreign exchange laws. The asset seizure was subsequently put on hold by a court order, but later upheld. Sanctions US sanctions due to ties with People's Liberation Army In January 2021, towards the end of the presidency of Donald Trump, the United States government named Xiaomi as a company "owned or controlled" by the People's Liberation Army and thereby prohibited any American company or individual from investing in it. However, the investment ban was blocked by a US court ruling after Xiaomi filed a lawsuit in the United States District Court for the District of Columbia, with the court expressing skepticism regarding the government's national security concerns. Xiaomi denied the allegations of military ties and stated that its products and services were of civilian and commercial use. In May 2021, Xiaomi reached an agreement with the Defense Department to remove the designation of the company as military-linked. Russia operations After the beginning of the Russian invasion of Ukraine, Xiaomi reported suspending operations in Russia, but in July 2022, Xiaomi and its sub-brand POCO together held 42% of the Russian smartphone market, ranking first in terms of sales. On 13 April 2023, Xiaomi Corporation and 13 Xiaomi officials (key management), namely Lei Jun, Lin Bin, Lu Weibing, Liu De, Zhang Feng, Zeng Xuezhong, Yan Kesheng, Lam Sai Wai Alain, Zhu Dan, Wang Xiaoyan, Qu Heng, Ma Ji and Yu Man, were listed by Ukraine's National Agency on Corruption Prevention (NACP) on its list of "international sponsors of war" because the company continued its operations in Russia after Russia's invasion and remained a leader in smartphone sales there. Chinese smartphone brands continued to gain market share in Russia, filling the gap left by Western brands that withdrew following Russia's invasion of Ukraine, according to a local retailer. On 21 September 2023, Telia, DNA, and Elisa, Finland's major mobile carriers, halted the sale of Xiaomi Technology products due to the company's ongoing business activities in Russia; the carriers' decision reflected Xiaomi's continued operations there despite the invasion of Ukraine. The Finnish carriers' move came after Xiaomi faced several challenges in its European business in 2023. In addition to this, the EU has implemented a ban on exporting various goods to Russia, including semiconductors crucial for smartphone manufacturing. Xiaomi's ongoing operations in Russia have sparked debate. While the company asserts its obligation to serve Russian customers and support its employees, some contend that it indirectly supports the Russian government financially. Overseas manufacturing Inaugural plant in Pakistan Xiaomi's mobile device manufacturing plant in Pakistan was inaugurated on 4 March 2022. The plant was set up in conjunction with Select Technologies (Pvt) Limited, a fully owned subsidiary of Air Link. The production plant is located in Lahore. As of July 2022, the future of the plant was uncertain due to the 2021–2023 global supply chain crisis. 
See also List of Xiaomi products References External links 2018 initial public offerings Chinese brands Chinese companies established in 2010 Companies listed on the Hong Kong Stock Exchange Computer hardware companies Computer systems companies Electronics companies established in 2010 Electronics companies of China Home automation companies Manufacturing companies established in 2010 Mobile phone companies of China Mobile phone manufacturers Multinational companies headquartered in China Networking hardware companies
Xiaomi
Technology
5,902
1,359,463
https://en.wikipedia.org/wiki/Dickey%E2%80%93Wicker%20Amendment
The Dickey–Wicker Amendment is the name of an appropriation bill rider attached to a bill passed by United States Congress in 1995, and signed by former President Bill Clinton, which prohibits the United States Department of Health and Human Services (HHS) from using appropriated funds for the creation of human embryos for research purposes or for research in which human embryos are destroyed. HHS funding includes the funding for the National Institutes of Health (NIH). It is named after Jay Dickey and Roger Wicker, two Republican Representatives. Technically, the Dickey Amendment is a rider to other legislation, which amends the original legislation. The rider receives its name from the name of the Congressman that originally introduced the amendment, Representative Dickey. The Dickey amendment language has been added to each of the Labor, HHS, and Education appropriations acts for fiscal years 1997 through 2009. The original rider can be found in Section 128 of P.L. 104–99. The wording of the rider is generally the same year after year. For fiscal year 2009, the wording in Division F, Section 509 of the Omnibus Appropriations Act, 2009, (enacted March 11, 2009) prohibits HHS, including NIH, from using fiscal year 2009 appropriated funds as follows: SEC. 509. (a) None of the funds made available in this Act may be used for-- (1) the creation of a human embryo or embryos for research purposes; or (2) research in which a human embryo or embryos are destroyed, discarded, or knowingly subjected to risk of injury or death greater than that allowed for research on fetuses in utero under 45 CFR 46.208(a)(2) and Section 498(b) of the Public Health Service Act (42 U.S.C. 289g(b)) (Title 42, Section 289g(b), United States Code). (b) For purposes of this section, the term "human embryo or embryos" includes any organism, not protected as a human subject under 45 CFR 46 (the Human Subject Protection regulations) ... that is derived by fertilization, parthenogenesis, cloning, or any other means from one or more human gametes (sperm or egg) or human diploid cells (cells that have two sets of chromosomes, such as somatic cells). In March 2009, President Obama issued an executive order which removed the restriction against federal funding of stem cell research. However, the Dickey–Wicker Amendment remains an obstacle for federally funded researchers seeking to create their own stem cell lines. In August 2010, as part of preliminary motions in Sherley vs Sebelius, Judge Royce C. Lamberth granted an injunction against federally funded embryonic stem cell (ESC) research on the grounds that the guidelines for ESC research "clearly violate" the Dickey–Wicker Amendment. In September 2010, he refused to lift the injunction pending the conclusion of the case and the issuance of his ruling and a likely appeal. In response, the Obama Justice Department asked the U.S. Court of Appeals for the District of Columbia Circuit to lift the injunction via an order pending the appeal of Judge Lamberth's ruling, which it did on April 29, 2011. Judge Lamberth was thereby obliged to reverse his ruling, and grudgingly dismissed the case entirely on July 27, 2011. In the 2–1 opinion of April 29, 2011, the appeals panel said that the Dickey–Wicker Amendment was "ambiguous" and that the National Institutes of Health had "reasonably concluded" that although federal funds could not be used to directly destroy an embryo, the amendment does not prohibit funding a research project using embryonic stem cells. 
This is an important distinction under the law, because for federal funds to be used directly to support the destruction of embryos—as opposed to indirect use just in embryo stem cell research that avoids killing the embryo—is supposedly a violation of the Hyde Amendment, which has been ruled constitutional and which prohibits abortions using federal tax dollar funds (those questions will now have to be settled by the whole Court of Appeals for the District of Columbia Circuit sitting en banc, or perhaps, ultimately, by the Supreme Court of the United States). History of US concerns about embryos Federal concern with human embryo research began over 25 years ago with the advent of assisted reproduction technologies, i.e. in vitro fertilization (IVF) or "test tube babies." Although the first report of laboratory studies of human fertilization appeared in Science in 1944, (the work was conducted in Brookline, Massachusetts), clinical IVF was successful first in Great Britain in 1978 for couples with infertility. IVF became standard of care in the United States in the early 1980s. As with all forms of clinical treatment, the medical community looked to basic science research to improve the safety and efficacy of IVF for mothers and babies. In 1979, an Ethics Advisory Board for the National Institutes of Health issued guidelines for research on early human embryos, but no action was taken. The Federal Policy for the Protection of Human Subjects (see: Human subject research legislation in the United States) enacted in 1977 remained in place: 45CFR § 46.204(d), "No application or proposal involving human in vitro fertilization may be funded by the Department or any component thereof until the application or proposal has been reviewed by the Ethical Advisory Board and the Board has rendered advice as to its acceptability from an ethical standpoint." Since there was no Ethics Advisory Board, federally funded research was not possible. See also Stem cell controversy References Bioethics Cloning Stem cells United States federal health legislation 1995 in American law
Dickey–Wicker Amendment
Technology,Engineering,Biology
1,174
5,949,507
https://en.wikipedia.org/wiki/ANSI%20device%20numbers
In electric power systems and industrial automation, ANSI Device Numbers can be used to identify equipment and devices in a system such as relays, circuit breakers, or instruments. The device numbers are enumerated in ANSI/IEEE Standard C37.2 Standard for Electrical Power System Device Function Numbers, Acronyms, and Contact Designations. Many of these devices protect electrical systems and individual system components from damage when an unwanted event occurs such as an electrical fault. Historically, a single protective function was performed by one or more distinct electromechanical devices, so each device would receive its own number. Today, microprocessor-based relays can perform many protective functions in one device. When one device performs several protective functions, it is typically denoted "11" by the standard as a "Multifunction Device", but ANSI Device Numbers are still used in documentation like single-line diagrams or schematics to indicate which specific functions are performed by that device. ANSI/IEEE C37.2-2008 is one of a continuing series of revisions of the standard, which originated in 1928 as American Institute of Electrical Engineers Standard No. 26. List of device numbers and acronyms 1 - Master Element 2 - Time-delay Starting or Closing Relay 3 - Checking or Interlocking Relay, complete Sequence 4 - Master Protective 5 - Stopping Device, Emergency Stop Switch 6 - Starting Circuit Breaker 7 - Rate of Change Relay 7F - Alternative number for Rate Of Change Of Frequency Relay (ROCOF) 8 - Control Power Disconnecting Device 9 - Reversing Device 10 - Unit Sequence Switch 11 - Multifunction Device 12 - Overspeed Device 13 - Synchronous-Speed Device 14 - Underspeed Device 15 - Speed or Frequency Matching Device 16 - Data Communications Device 17 - Shunting or Discharge Switch 18 - Accelerating or Decelerating Device 19 - Starting-to-Running Transition Contactor 20 - Electrically-Operated Valve (Solenoid Valve) 21 - Distance Relay 21G - Ground Distance 21P - Phase Distance 22 – Equalizer Circuit Breaker 23 – Temperature control device, Heater 24 – Volts per Hertz Relay (in some old analog applications, a 59 and an 81 device would be chained together as a 59/81 to implement the equivalent of V/Hz protection) 25 – Synchronizing or Synchronism-check Device 26 – Apparatus Thermal Device, Temperature Switch 27 – Undervoltage Relay 27P - Phase Undervoltage 27S - DC Undervoltage Relay 27TN - Third Harmonic Neutral Undervoltage 27TN/59N - 100% Stator Earth Fault 27X - Auxiliary Undervoltage 27 AUX - Undervoltage Auxiliary Input 27/27X - Bus/Line Undervoltage 27/50 - Inadvertent Energization 28 - Flame Detector 29 - Isolating Contactor 30 - Annunciator Relay 31 - Separate Excitation Device 32 - Directional Power Relay 32L - Low Forward Power 32H - High Directional Power 32N - Wattmetric Zero-Sequence Directional 32P - Directional Power 32R - Reverse Power 33 - Position Switch 34 - Master Sequence Device 35 - Brush-Operating or Slip-ring Short Circuiting Device 36 - Polarity or Polarizing Voltage Device 37 - Undercurrent or Underpower Relay 37P - Underpower 38 - Bearing Protective Device / Bearing Rtd 39 - Mechanical Condition Monitor (Vibration) 40 - Field Relay / Loss of Excitation 41 - Field Circuit Breaker 42 - Running Circuit Breaker 43 - Manual Transfer or Selector Device 44 - Unit Sequence Starting Relay 45 - Atmospheric Condition Monitor (fumes, smoke, fire) 46 - Reverse-Phase or Phase Balance Current Relay or Stator Current Unbalance 47 - Phase-Sequence or Phase Balance 
Voltage Relay 48 - Incomplete Sequence Relay / Blocked Rotor 49 - Machine or Transformer Thermal Relay / Thermal Overload 49RTD - RTD Biased Thermal Overload 50 - Instantaneous Overcurrent Relay 50BF - Breaker Failure or LBB (Local Breaker Back-up) 50DD - Current Disturbance Detector 50EF - End Fault Protection 50G - Ground Instantaneous Overcurrent 50IG - Isolated Ground Instantaneous Overcurrent 50LR - Acceleration Time 50N - Neutral Instantaneous Overcurrent 50NBF - Neutral Instantaneous Breaker Failure 50P - Phase Instantaneous Overcurrent 50SG - Sensitive Ground Instantaneous Overcurrent 50SP - Split Phase Instantaneous Current 50Q - Negative Sequence Instantaneous Overcurrent 50/27 - Inadvertent Energization 50/51 - Instantaneous / Time-delay Overcurrent relay 50/74 - CT Trouble 50/87 - Instantaneous Differential 51 - AC Time Overcurrent Relay 51C - Voltage Controlled Time Overcurrent 51G - Ground Time Overcurrent 51LR - AC Inverse Time Overcurrent (Locked Rotor) Protection Relay 51N - Neutral Time Overcurrent 51P - Phase Time Overcurrent 51R - Locked / Stalled Rotor 51V - Voltage Restrained Time Overcurrent 51Q - Negative Sequence Time Overcurrent 52 – AC Circuit Breaker 52a - AC Circuit Breaker Position (contact open when circuit breaker open) 52b - AC Circuit Breaker Position (contact closed when circuit breaker open) 53 - Exciter or DC Generator Relay 54 - Turning Gear Engaging Device 55 - Power Factor Relay 56 - Field Application Relay 57 - Short-Circuiting or Grounding Device 58 - Rectification Failure Relay 59 - Overvoltage Relay 59B - Bank Phase Overvoltage 59N - Neutral Overvoltage 59NU - Neutral Voltage Unbalance 59P - Phase Overvoltage 59X - Auxiliary Overvoltage 59Q - Negative Sequence Overvoltage 60 - Voltage or Current Balance Relay 60N - Neutral Current Unbalance 60P - Phase Current Unbalance 61 - Density Switch or Sensor 62 - Time-Delay Stopping or Opening Relay 63 - Pressure Switch Detector 64 - Ground Protective Relay 64F - Field Ground Protection 64R – Rotor Earth Fault 64REF – Restricted Earth Fault Differential 64S – Stator Earth Fault 64S - Sub-harmonic Stator Ground Protection 64TN - 100% Stator Ground 65 - Governor 66 - Notching or Jogging Device/Maximum Starting Rate/Starts per Hour/Time Between Starts 67 - AC Directional Overcurrent Relay 67G - Ground Directional Overcurrent 67N - Neutral Directional Overcurrent 67Ns – Earth Fault Directional 67P - Phase Directional Overcurrent 67SG - Sensitive Ground Directional Overcurrent 67Q - Negative Sequence Directional Overcurrent 68 - Blocking Relay / Power Swing Blocking 69 - Permissive Control Device 70 - Rheostat 71 - Liquid Switch, Level Switch 72 - DC Circuit Breaker 73 - Load-Resistor Contactor 74 - Alarm Relay 75 - Position Changing Mechanism 76 - DC Overcurrent Relay 77 - Telemetering Device, Speed Sensor 78 - Phase Angle Measuring or Out-of-Step Protective Relay 78V - Loss of Mains 79 - AC Reclosing Relay / Auto Reclose 80 - Liquid or Gas Flow Relay 81 - Frequency Relay 81O - Over Frequency 81R - common industry use for Rate Of Change Of Frequency Relay (ROCOF) 81U - Under Frequency 82 - DC Reclosing Relay 83 - Automatic Selective Control or Transfer Relay 84 - Operating Mechanism 85 - Pilot Communications, Carrier or Pilot-Wire Relay 86 - Lock-Out Relay, Master Trip Relay 87 - Differential Protective Relay 87B - Bus Differential 87G - Generator Differential 87GT - Generator/Transformer Differential 87L - Segregated Line Current Differential 87LG - Ground Line Current Differential 87M - Motor Differential 
87N - Neutral Differential Protection / Restricted Earth Fault (REF) see also 87RGF 87O - Overall Differential 87PC - Phase Comparison 87RGF - Restricted Ground Fault see also 87N 87R - Restrained Differential 87S - Stator Differential 87S - Percent Differential 87T - Transformer Differential 87U - Unrestrained Differential 87V - Voltage Differential 87Z - High-Impedance Differential 88 - Auxiliary Motor or Motor Generator 89 - Line Switch 90 - Regulating Device 91 - Voltage Directional Relay 92 - Voltage And Power Directional Relay 93 - Field-Changing Contactor 94 - Tripping or Trip-Free Relay 95 – Trip Circuit Healthy 96 – Transmitter 97 – For specific applications where other numbers are not suitable 98 – For specific applications where other numbers are not suitable 99 – For specific applications where other numbers are not suitable Acronyms Description AFD - Arc Flash Detector CLK - Clock or Timing Source CLP - Cold Load Pickup DDR – Dynamic Disturbance Recorder DFR – Digital Fault Recorder DME – Disturbance Monitor Equipment ENV – Environmental Data HIZ – High Impedance Fault Detector HMI – Human Machine Interface HST – Historian LGC – Scheme Logic MET – Substation Metering PDC – Phasor Data Concentrator PMU – Phasor Measurement Unit PQM – Power Quality Monitor RIO – Remote Input/Output Device RTD - Resistance Temperature Detector RTU – Remote Terminal Unit/Data Concentrator SER – Sequence of Events Recorder TCM – Trip Circuit Monitor LRSS – Local/Remote selector switch VTFF - Vt Fuse Fail Suffixes Description _1 - Positive-Sequence _2 - Negative-Sequence A - Alarm, Auxiliary Power AC - Alternating Current AN - Anode B - Bus, Battery, or Blower BF - Breaker Failure BK - Brake BL - Block (Valve) BP - Bypass BT - Bus Tie BU - Backup C - Capacitor, Condenser, Compensator, Carrier Current, Case or Compressor CA - Cathode CH - Check (Valve) D - Discharge (Valve) DC - Direct Current DCB - Directional Comparison Blocking DCUB - Directional Comparison Unblocking DD - Disturbance Detector DUTT - Direct Underreaching Transfer Trip E - Exciter F - Feeder, Field, Filament, Filter, or Fan G - Ground or Generator GC - Ground Check H - Heater or Housing L - Line or Logic M - Motor or Metering MOC - Mechanism Operated Contact N - Neutral or Network O - Over P - Phase or Pump PC - Phase Comparison POTT - Pott: Permissive Overreaching Transfer Trip PUTT - Putt: Permissive Underreaching Transfer Trip R - Reactor, Rectifier, or Room S - Synchronizing, Secondary, Strainer, Sump, or Suction (Valve) SOTF - Switch On To Fault T - Transformer or Thyratron TD - Time Delay TDC - Time-Delay Closing Contact TDDO - Time Delayed Relay Coil Drop-Out TDO - Time-Delay Opening Contact TDPU - Time Delayed Relay Coil Pickup THD - Total Harmonic Distortion TH - Transformer (High-Voltage Side) TL - Transformer (Low-Voltage Side) TM - Telemeter TT - Transformer (Tertiary-Voltage Side) Q - Lube Oil W - Water F - Fuel G - Gas U - Under or Unit X - Auxiliary Z - Impedance Suffixes and prefixes A suffix letter or number may be used with the device number; for example, suffix N is used if the device is connected to a Neutral wire (example: 59N in a relay is used for protection against Neutral Displacement); and suffixes X, Y, Z are used for auxiliary devices. Similarly, the "G" suffix can denote a "ground", hence a "51G" is a time overcurrent ground relay. The "G" suffix can also mean "generator", hence an "87G" is a Generator Differential Protective Relay while an "87T" is a Transformer Differential Protective Relay. 
"F" can denote "field" on a generator or "fuse", as in the protective fuse for a pickup transformer. Suffix numbers are used to distinguish multiple "same" devices in the same equipment such as 51–1, 51–2. Device numbers may be combined if the device provides multiple functions, such as the Instantaneous / Time-delay Overcurrent relay denoted as 50/51. For device 16, the suffix letters further define the device: the first suffix letter is 'S' for serial or 'E' for Ethernet. The subsequent letters are: 'C' security processing function (e.g. VPN, encryption), 'F' firewall or message filter, 'M' network managed function, 'R' rotor, 'S' switch and 'T' telephone component. Thus a managed Ethernet switch would be 16ESM. References IEEE Standard for Electrical Power System Device Function Numbers, Acronyms, and Contact Designations', IEEE Std C37.2-2008 American National Standards Institute standards Electrical components
ANSI device numbers
Technology,Engineering
2,620
26,848,768
https://en.wikipedia.org/wiki/Eurocode%204%3A%20Design%20of%20composite%20steel%20and%20concrete%20structures
In the Eurocode series of European standards (EN) related to construction, Eurocode 4: Design of composite steel and concrete structures (abbreviated EN 1994 or, informally, EC 4) describes how to design composite structures, using the limit state design philosophy. It was approved by the European Committee for Standardization (CEN) on 4 November 2004. Eurocode 4 is divided into two parts, EN 1994-1 and EN 1994-2. Eurocode 4 is intended to be used in conjunction with: EN 1990: Eurocode - Basis of structural design; EN 1991: Eurocode 1 - Actions on structures; ENs, hENs, ETAGs and ETAs for construction products relevant for composite structures; EN 1090: Execution of steel structures and aluminium structures; EN 13670: Execution of concrete structures; EN 1992: Eurocode 2 - Design of concrete structures; EN 1993: Eurocode 3 - Design of steel structures; EN 1997: Eurocode 7 - Geotechnical design; EN 1998: Eurocode 8 - Design of structures for earthquake resistance, when composite structures are built in seismic regions. Part 1-1: General rules and rules for buildings EN 1994-1-1 gives a general basis for the design of composite structures together with specific rules for buildings. Contents General Basis of design Materials Durability Structural analysis Ultimate limit states Serviceability limit states Constructional details in building structures Composite slabs with steel grids for buildings Part 1-2: Structural fire design EN 1994-1-2 deals with the design of composite steel and concrete structures for the accidental situation of fire exposure and is intended to be used in conjunction with EN 1994-1-1 and EN 1991-1-2. This part only identifies differences from, or supplements to, normal temperature design and deals only with passive methods of fire protection. Active methods are not covered. Part 2: General rules and rules for bridges EN 1994-2 gives design rules for steel-concrete composite bridges or members of bridges, additional to the general rules in EN 1994-1-1. Cable-stayed bridges are not fully covered by this part. Contents General Basis of design Materials Durability Structural analysis Ultimate limit states Serviceability limit states Decks with precast concrete slabs Composite plates in bridges External links The EN Eurocodes EN 1994: Design of composite steel and concrete structures EN 1994 - Eurocode 4: Design of composite steel and concrete structures - "Eurocodes: Background and applications" workshop 01994 Reinforced concrete 4 Structural steel
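As a rough illustration of the limit state design philosophy mentioned above (not a procedure taken from EN 1994 itself), a verification compares a design action effect, obtained by factoring characteristic loads, against a design resistance. The sketch below is a minimal, hypothetical example: the partial factors shown are commonly cited recommended values from EN 1990, and the section resistance is an assumed placeholder, not a value from the standard or its National Annexes.

```python
# Minimal sketch of an ultimate limit state (ULS) bending check in the spirit
# of the Eurocodes: the factored design moment M_Ed must not exceed the design
# resistance M_Rd. All numbers are illustrative placeholders; real values come
# from EN 1990/EN 1994 and the relevant National Annex.

def design_moment(m_gk: float, m_qk: float,
                  gamma_g: float = 1.35, gamma_q: float = 1.50) -> float:
    """Design moment M_Ed (kNm) from characteristic permanent/variable moments."""
    return gamma_g * m_gk + gamma_q * m_qk

def uls_bending_ok(m_ed: float, m_rd: float) -> bool:
    """ULS verification: M_Ed <= M_Rd."""
    return m_ed <= m_rd

if __name__ == "__main__":
    m_ed = design_moment(m_gk=120.0, m_qk=80.0)  # example characteristic moments, kNm
    m_rd = 310.0                                 # assumed plastic moment resistance, kNm
    print(f"M_Ed = {m_ed:.0f} kNm, M_Rd = {m_rd:.0f} kNm, check passes: {uls_bending_ok(m_ed, m_rd)}")
```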
Eurocode 4: Design of composite steel and concrete structures
Engineering
502
38,174,746
https://en.wikipedia.org/wiki/Cam%20and%20groove
A cam and groove coupling, also called a camlock fitting, is a form of hose coupling. This kind of coupling is popular because it is a simple and reliable means of connecting and disconnecting hoses quickly and without tools. Standards Cam and groove couplings were traditionally manufactured to US Military Specification MIL-C-27487, which covered the dimensions and machining tolerances, materials, closing torque, part numbers, pressure ratings, finish, inspection procedures and packing requirements. Compliance with this specification ensured interchangeability of parts from different manufacturers. In 1998, the specification A-A-59326 replaced MIL-C-27487. In Europe, the standard BS EN 14420-7 applies, as well as the German DIN 2828 standard. Products produced to DIN 2828 are interchangeable with those made to the original MIL-C-27487 but have differences in the hose tail design, thread, part number and other details. Function The cams at the end of each lever on the female end align with a circumferential groove on the male end. When the levers are rotated to the locked position, they pull the male end into the female socket, creating a tight seal against a gasket within the female socket. The arms lock into position using over-center geometry, preventing accidental decoupling. Further, lever safety pins are common features for additional security, and female-end "self-locking" levers are also available. Because the groove is cut all the way around the male end, there is no specific rotational alignment necessary to couple, as there would be with threaded connectors, and there is no opportunity for cross-threading. This results in a fast, error-resistant coupling operation. Because the compression between the two fittings is limited by the size of the cams on the end of the levers and the rotation of the levers themselves, there is also no possibility of over- or under-tightening the fitting; the pressure against the sealing gasket is effectively constant from one coupling operation to the next, reducing the possibility of leaks. Materials and uses Cam and groove fittings are commonly available in several materials, including stainless steel, aluminum, brass, and polypropylene. Because there are no threads to become fouled, cam and groove couplings are popular in moderately dirty environments, such as septic tank pump trucks and chemical or fuel tanker trucks. The system is especially well suited to situations where frequent changes of hoses are required, such as on petroleum trucks. As examples of industrial application, cam and groove fittings can be used in a system where rapid filling of chemical drums takes place, or by factories that need to transfer dye, paint, and ink media. Note: Cam and Groove couplings are not recommended for any type of compressed gas service, including steam or air. Types and sizes Generally speaking, the most common types of cam and groove coupling are the following. The letter codes are the common designations, while the roman numeral codes come from the GSA CID A-A-59326 standard: Type A or Type I: adapter (male end) with female thread, e.g. BSP or NPT Type B or Type VII: coupler (female end) with male thread, e.g. 
BSP or NPT Type C or Type VI: coupler with shank (hose barb) Type D or Type V: coupler with female thread Type E or Type II: adapter with shank Type F or Type III: adapter with male thread Type IV: adapter with flange, TTMA (Truck Trailer Manufacturer's Association) Type VIII: coupler with flange, TTMA Type DC or Type IX: dust caps (female) Type DP or Type X: dust plugs (male) Apart from these basic types, the hose/pipe connection side of a cam and groove coupler can be of various other types such as with a flange, for butt welding to a container, for truck use with a sight glass, etc. These couplings are available in the following diameters: Gallery See also References https://www.proflow-dynamics.com/media/wysiwyg/Catalog/Cam_and_Groove_Dimensions.pdf Mechanics Plumbing valves
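Because the type letters listed above simply encode which end of the coupling is cammed or grooved and what connection sits on the other side, the scheme can be captured in a small lookup table. The sketch below is illustrative only; the descriptions paraphrase the list above and are not quoted from MIL-C-27487 or A-A-59326.

```python
# Illustrative lookup of common cam and groove (camlock) type codes,
# paraphrasing the list above: code -> (coupling end, other-side connection).

CAMLOCK_TYPES = {
    "A":  ("adapter (male, grooved)",  "female thread"),
    "B":  ("coupler (female, cammed)", "male thread"),
    "C":  ("coupler (female, cammed)", "hose shank"),
    "D":  ("coupler (female, cammed)", "female thread"),
    "E":  ("adapter (male, grooved)",  "hose shank"),
    "F":  ("adapter (male, grooved)",  "male thread"),
    "DC": ("dust cap (female)",        "none"),
    "DP": ("dust plug (male)",         "none"),
}

def describe(code: str) -> str:
    end, connection = CAMLOCK_TYPES[code.upper()]
    return f"Type {code.upper()}: {end}; other side: {connection}"

if __name__ == "__main__":
    for code in ("A", "C", "DP"):
        print(describe(code))
```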
Cam and groove
Physics,Engineering
881
14,469,908
https://en.wikipedia.org/wiki/Rural%20internet
Rural Internet describes the characteristics of Internet service in rural areas (also referred to as "the country" or "countryside"), which are settled places outside towns and cities. Inhabitants live in villages, hamlets, on farms and in other isolated houses. Mountains and other terrain can impede rural Internet access. Internet service in many rural areas is provided over voiceband by 56k modem. Poor-quality telephone lines, many of which were installed or last upgraded between the 1930s and the 1960s, often limit the speed of the network to bit rates of 26 kbit/s or less. Since many of these lines serve relatively few customers, phone company maintenance and speed of repair of these lines have degraded, and their upgrade to modern quality requirements is unlikely. This results in a digital divide. High-speed, wireless Internet service is becoming increasingly common in rural areas. Here, service providers deliver Internet service over radio frequencies via special radio-equipped antennas. Methods for broadband Internet access in rural areas include: Mobile Internet (broadband if HSPA or higher) Hybrid Access Networks Power-line Internet Terrestrial Wireless Internet Satellite Internet ADSL loop extender Internet of Things White Space Internet Digital divide Scholarship on the topic of the digital divide has shifted from an understanding of people who do and do not have access to the internet to an analysis of the quality of internet access. Because opting out of internet activity is no longer a choice with internet-only customer service, online banking, and online schooling, internet access has become an increasing need in rural communities with inadequate infrastructure. Although government programs such as E-rate provide internet connections to schools and libraries under the U.S. federal government, more general internet access for the broader community has not been directly addressed in policy. The provision of "national" internet services tends to favor urban metropolitan regions. For a long time, many within the U.S. even considered the internet to be a luxury. In 2001, then FCC Chair Michael Powell said, “I think there’s a Mercedes divide. I’d like to have one. I can’t afford one” when asked about solutions to shrinking the digital divide. At the time, the internet was still largely new, and fewer than half of Americans had any home internet access. In 2021, 77% of Americans had home broadband according to the most recent Pew Research Center survey. The attitude in the U.S. has largely shifted since Powell's remarks, however, as under the current administration and President Joe Biden there is a common belief that "broadband is infrastructure" and that it must be treated as such. The digital divide is even more prominent in developing countries, where physical access to internet services is at a much lower rate. While developed countries such as the U.S. face the challenge of providing universal service (ensuring that everyone has access to internet service in the home), developing countries face the challenge of providing universal access (ensuring that everyone has the opportunity to make use of the internet). For example, in Egypt there are only about six phone lines per 100 people, with less than two lines per 100 people in rural areas, which makes it even more difficult for people to access the internet. 
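To put the dial-up bit rates mentioned above in perspective, the back-of-the-envelope comparison below estimates transfer times at a degraded rural dial-up rate, a nominal 56k rate, and a 25 Mbit/s connection (a figure often used as a broadband benchmark). The file size is an arbitrary example, and real-world throughput is lower than the line rate.

```python
# Rough transfer-time comparison illustrating the rural/urban speed gap.
# File size and rates are illustrative; protocol overhead is ignored.

def transfer_minutes(size_megabytes: float, rate_kbit_s: float) -> float:
    bits = size_megabytes * 8 * 1_000_000        # decimal megabytes -> bits
    return bits / (rate_kbit_s * 1_000) / 60     # seconds -> minutes

if __name__ == "__main__":
    size_mb = 25.0                               # e.g. a modest software update
    for label, rate in (("degraded rural dial-up (26 kbit/s)", 26),
                        ("nominal 56k modem", 56),
                        ("25 Mbit/s broadband", 25_000)):
        print(f"{label}: about {transfer_minutes(size_mb, rate):.1f} minutes")
```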
In the United States The United States Department of Agriculture’s Economic Research Service has provided numerous studies and data on the Internet in rural America. One such article from the Agricultural Outlook magazine, Communications & the Internet in Rural America, summarizes internet uses in rural areas of the United States in 2002. It indicates, "Internet use by rural and urban households has also increased significantly during the 1990s, so significantly that it has one of the fastest rates of adoption for any household service." Another area for inclusion of the Internet is American farming. One study reviewed data from 2003 and found that "56 percent of farm operators used the Internet while 31 percent of rural workers used it at their place of work." In later years challenges to economical rural telecommunications remain. People in inner city areas are closer together, so the access network to connect them is shorter and cheaper to build and maintain, while rural areas require more equipment per customer. However, even with this challenge the demand for services continues to grow. In 2011 the Federal Communications Commission (FCC) proposed to use the Universal Service Fund to subsidize rural broadband Internet services. In 2019, the FCC estimated that only 73.6% of the rural population had access to broadband services at 25 Mbps in 2017, compared to 98.3% of the population in urban areas. However, many studies have contested FCC findings, claiming a greater number of Americans are without access to internet services at sufficient speeds. For instance, in 2019 Pew Research Center found that only about two-thirds of rural Americans claimed to have a broadband internet connection at home, and although the gap in mobile technology ownership between rural and urban adults has narrowed, rural adults remain less likely to own these devices. One study in particular examined the ways in which inaccessibility for rural and "quasi-rural" residents affects their daily life, conceptualizing issues of accessibility as a form of socioeconomic inequity. By using Illinois as a case study - a state with both urban and rural environments—the authors demonstrate how the rural-urban digital divide negatively impacts those that live in areas that fall between the two distinct categories of rural and urban. Interviews with residents from Illinois describe "missed pockets," or areas in which service installation is not available or far too expensive. This inaccessibility leads many to experience sentiments of social isolation as residents feel disconnected from current events, cultural trends, and even close friends and family members. Internet access inequalities are further deepened by public policy and commercial investment. In 2003, The Information Society published an article explaining how exchange areas and local access transport areas (LATAs) arrange citizens into markets for telecommunication companies, which centralizes access rather than encouraging businesses to cater to more remote communities. These areas were created through regulatory measures intended to ensure greater access and are perpetuated by investment patterns as more disparate communities hold less potential for profits, thus creating "missed pockets." In Canada In Canada, when pressed by Member of Parliament David de Burgh Graham, the Federation of Canadian Municipalities did not see access to the internet a right. Telecommunications co-operatives like Antoine-Labelle provide an alternative to big Internet Service Providers. 
In Spain In Spain, the Guifi.net project has been, for some people, the only way to get access to the Internet. Usually, neighbors are responsible for collecting the money needed to buy the network equipment that establishes a wireless link with another area that already has internet access. There have also been cases in which the city council itself has invested in the infrastructure. In the United Kingdom In the UK, the government aimed to provide superfast broadband (speeds of 24 Mbit/s or more) to 95% of the country by 2017. In 2014, a study by the Oxford Internet Institute found that in areas less than from large cities, internet speed dropped below 2 Mbit/s, the speed designated as "adequate" by the government. Frustrated by the slow progress being made by private telecoms companies, some rural communities have built their own broadband networks, such as the B4RN initiative. In India India has the second-biggest online market globally, yet a large portion of its population – almost 700 million people – remains offline. Indian internet service provider AirJaldi has collaborated with Microsoft to provide affordable online access to rural areas. Dependable broadband connections are vital for many young people who were schooled at home during the COVID-19 pandemic. That may change as AirJaldi widens access through its project with Microsoft. Internet of Things Due to poor telecommunication access in most rural areas, low-energy solutions such as those offered by Internet of Things networks are seen as cost-effective and well adapted to agricultural environments. Tasks such as monitoring livestock conditions and numbers, the state of crops, and pests are progressively being taken over by machine-to-machine (M2M) communications. Companies such as Sigfox, Cisco Systems and Fujitsu are delving into the agricultural market, offering innovative solutions to common problems in countries such as the U.S., Japan, Ireland and Uruguay. Innovation and solutions There is increasing conversation around the growing social necessity of being connected in today's world and, moreover, a growing social expectation that one is connected through at-home broadband, reliable cell service, or at least email access. Currently, rural areas often depend on small, unreliable ISPs and scrape by "siphoning from surplus data and bandwidth capacity, creating their own systems of redundancy, or (in some cases) launching community-based, local ISP when large incumbent providers fail to show an interest in the area." Many of the difficulties faced by rural communities are "geo-policy barriers," defined as "chokepoints [or] mechanisms of control created through the interaction of geography, market forces, and public policies" that constrict not just access, but "also construct both communication and communities." In the US, regulatory mandates have helped extend basic telecommunications to rural areas while mitigating market failure. However, despite efforts from the government, the telecommunications industry has remained relatively monopolized; the resulting lack of competition has left rural citizens with basic telecommunications but without adequate connectivity for their developing needs. 
Among the state-based efforts that have proved successful in adequately connecting Americans are EAS, or "expanded area service", programs, which "generally reduce intra-LATAS [local access transport areas] long-distance costs between specific exchanges or throughout a contiguous geographic area." In regard to Internet access, one of the most important EAS programs creates "flat-rate calling zones that allow remote customers to reach an Internet service provider in a more populous area." Issues of rural connectivity have been exacerbated by the COVID-19 pandemic and reveal how "poor management of the Universal Service Fund, which subsidizes phone and internet access in rural areas, has meant some companies get the money without delivering on the promised numbers of households served or service quality." Therefore, one immediate fix to rural connectivity would be accountability within USF programs and, arguably, more funding. While governments begin pondering questions such as "is Internet access a right?", ideas on how to approach this issue fall along political party lines. Mainly, Democrats believe more government funding would help connect rural Americans, while Republicans are backing new 5G mobile Internet technology to replace home Internet lines and solve access gaps. These arguments are very similar to political arguments about "electricity and phone service in the early 1900s." The Federal Communications Commission (FCC) recently released an overview of initiatives aimed at "bridging the digital divide for all Americans," some of which include: Launching the Rural Digital Opportunity Fund, which would direct up to $20.4 billion to expand broadband in unserved rural areas. Establishing the Digital Opportunity Data Collection, a new process for collecting fixed broadband data to improve mapping and better identify gaps in broadband coverage across the nation. Approving $950 million in funding to improve, expand, and harden communications networks in Puerto Rico and the U.S. Virgin Islands. Updating rules that govern access to utility poles and conduits, which can be a costly and time-consuming barrier to broadband deployment. Revising rules that needlessly delay or even stop companies from replacing copper with fiber and that delay discontinuance of technologies from the 1970s in favor of services using Internet Protocol (IP) technologies. See also Dial-up Internet access Broadband Internet access Hybrid Access Networks Coverage Flat fee Internet in the United States Open Access Network Rural electrification Rural free delivery ASTRA2Connect example of a rural satellite internet system Starlink satellite internet Project Kuiper satellite internet constellation Notes External links “Rural Telecommunications Briefing Room.” (February 9, 2006). Economic Research Service. Retrieved December 30, 2008. “Telecommunications Resources.” (August 22, 2008). National Agricultural Library. Rural Information Center. Retrieved December 30, 2008. “Rural High-Speed Internet Ontario.” (June 21, 2019). Rural Internet Provider in Southwestern Ontario Digital divide Internet access Rural geography
Rural internet
Technology
2,501
37,583
https://en.wikipedia.org/wiki/Wine%20%28software%29
Wine is a free and open-source compatibility layer that allows application software and computer games developed for Microsoft Windows to run on Unix-like operating systems. Developers can compile Windows applications against WineLib to help port them to Unix-like systems. Wine is predominantly written using black-box testing reverse-engineering, to avoid copyright issues. No code emulation or virtualization occurs. Wine is primarily developed for Linux and macOS. In a 2007 survey by desktoplinux.com of 38,500 Linux desktop users, 31.5% of respondents reported using Wine to run Windows applications. This plurality was larger than that of all x86 virtualization programs combined, and larger than the 27.9% who reported not running Windows applications. History Bob Amstadt, the initial project leader, and Eric Youngdale started the Wine project in 1993 as a way to run Windows applications on Linux. It was inspired by two Sun Microsystems products, Wabi for the Solaris operating system, and the Public Windows Interface, which was an attempt to get the Windows API fully reimplemented in the public domain as an ISO standard but was rejected due to pressure from Microsoft in 1996. Wine originally targeted 16-bit applications for Windows 3.x, but now focuses on 32-bit and 64-bit applications, which have become the standard on newer operating systems. The project originated in discussions on Usenet in comp.os.linux in June 1993. Alexandre Julliard has led the project since 1994. The project has proven time-consuming and difficult for the developers, mostly because of incomplete and incorrect documentation of the Windows API. While Microsoft extensively documents most Win32 functions, some areas such as file formats and protocols have no public, complete specification available from Microsoft. Windows also includes undocumented low-level functions, undocumented behavior and obscure bugs that Wine must duplicate precisely in order to allow some applications to work properly. Consequently, the Wine team has reverse-engineered many function calls and file formats in such areas as thunking. The Wine project originally released Wine under the same MIT License as the X Window System, but owing to concern about proprietary versions of Wine not contributing their changes back to the core project, work as of March 2002 has used the LGPL for its licensing. Wine officially entered beta with version 0.9 on 25 October 2005. Version 1.0 was released on 17 June 2008, after 15 years of development. Version 1.2 was released on 16 July 2010, version 1.4 on 7 March 2012, version 1.6 on 18 July 2013, version 1.8 on 19 December 2015 and version 9.0 on 16 January 2024. Development versions are released roughly every two weeks. Wine-staging is an independently maintained set of aggressive patches not deemed ready by WineHQ developers for merging into the Wine repository, but still considered useful by the wine-compholio fork. It mainly covers experimental functions and bug fixes. Since January 2017, patches in wine-staging have been actively merged into the WineHQ upstream after wine-compholio transferred the project to Alistair Leslie-Hughes, a key WineHQ developer. WineHQ also provides pre-built versions of wine-staging. Corporate sponsorship The main corporate sponsor of Wine is CodeWeavers, which employs Julliard and many other Wine developers to work on Wine and on CrossOver, CodeWeavers' supported version of Wine. 
CrossOver includes some application-specific tweaks not considered suitable for the upstream version, as well as some additional proprietary components. The involvement of Corel for a time assisted the project, chiefly by employing Julliard and others to work on it. Corel had an interest in porting WordPerfect Office, its office suite, to Linux (especially Corel Linux). Corel later cancelled all Linux-related projects after Microsoft made major investments in Corel, stopping their Wine effort. Other corporate sponsors include Google, which hired CodeWeavers to fix Wine so Picasa ran well enough to be ported directly to Linux using the same binary as on Windows; Google later paid for improvements to Wine's support for Adobe Photoshop CS2. Wine is also a regular beneficiary of Google's Summer of Code program. Valve works with CodeWeavers to develop Proton, a Wine-based compatibility layer for Microsoft Windows games to run on Linux-based operating systems. Proton includes several patches that upstream Wine does not accept for various reasons, such as Linux-specific implementations of Win32 functions. Valve's involvement in the development of Proton (and, thus, the improvement of Linux gaming) has helped to improve Wine compatibility with Windows games. Design The goal of Wine is to implement the Windows APIs fully or partially that are required by programs that the users of Wine wish to run on top of a Unix-like system. Basic architecture The programming interface of Microsoft Windows consists largely of dynamic-link libraries (DLLs). These contain a huge number of wrapper sub-routines for the system calls of the kernel, the NTOS kernel-mode program (ntoskrnl.exe). A typical Windows program calls some Windows DLLs, which in turn calls user-mode gdi/user32 libraries, which in turn uses the kernel32.dll (win32 subsystem) responsible for dealing with the kernel through system calls. The system-call layer is considered private to Microsoft programmers as documentation is not publicly available, and published interfaces all rely on subsystems running on top of the kernel. Besides these, there are a number of programming interfaces implemented as services that run as separate processes. Applications communicate with user-mode services through RPCs. Wine implements the Windows application binary interface (ABI) entirely in user space, rather than as a kernel module. Wine mostly mirrors the hierarchy, with services normally provided by the kernel in Windows instead provided by a daemon known as the wineserver, whose task is to implement basic Windows functionality, as well as integration with the X Window System, and translation of signals into native Windows exceptions. Although wineserver implements some aspects of the Windows kernel, it is not possible to use native Windows drivers with it, due to Wine's underlying architecture. Libraries and applications Wine allows for loading both Windows DLLs and Unix shared objects for its Windows programs. Its built-in implementation of the most basic Windows DLLs, namely NTDLL, KERNEL32, GDI32, and USER32, uses the shared object method because they must use functions in the host operating system as well. Higher-level libraries, such as WineD3D, are free to use the DLL format. In many cases users can choose to load a DLL from Windows instead of the one implemented by Wine. Doing so can provide functionalities not yet implemented by Wine, but may also cause malfunctions if it relies on something else not present in Wine. 
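As a rough illustration of the kind of code these libraries must service (a hypothetical example, not taken from Wine or from any specific application), the minimal Win32 program below uses only KERNEL32 and USER32 functions; run under Wine, those calls resolve to Wine's built-in implementations of the corresponding DLLs rather than Microsoft's.

    #include <windows.h>

    /* Minimal Win32 program. Under Wine, GetTickCount() is served by the
       built-in KERNEL32, while wsprintfA() and MessageBoxA() are served by
       the built-in USER32, which draws through the host windowing system. */
    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine, int nCmdShow)
    {
        char text[64];
        DWORD uptime = GetTickCount();                     /* KERNEL32 call */
        wsprintfA(text, "System up for %lu ms", uptime);   /* USER32 call  */
        MessageBoxA(NULL, text, "Win32 API demo", MB_OK);  /* USER32 call  */
        return 0;
    }

Built with a Windows toolchain, such a program yields a .exe that Wine can run directly; in principle the same source can also be compiled against Winelib (typically via the winegcc wrapper) into a native Unix executable that links to the same Wine libraries.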
Wine tracks its state of implementation through automated unit testing done at every git commit. Graphics and gaming While most office software does not make use of complex GPU-accelerated graphics APIs, computer games do. To run these games properly, Wine would have to forward the drawing instructions to the host OS, and even translate them to something the host can understand. DirectX is a collection of Microsoft APIs for rendering, audio and input. As of 2019, Wine 4.0 contains an implementation of DirectX 12 on top of the Vulkan API, and of DirectX 11.2 on top of OpenGL. Direct2D support has been updated to Direct2D 1.2. Wine 4.0 can also run Vulkan applications by handing draw commands to the host OS or, in the case of macOS, by translating them into the Metal API via MoltenVK. XAudio Since version 4.3, Wine uses the FAudio library (with a fix added in Wine 4.13) to implement the XAudio2 audio API (and more). XInput and Raw Input Wine, since 4.0 (2019), supports game controllers through its builtin implementations of these libraries. They are built as Unix shared objects as they need to access the controller interfaces of the underlying OS, specifically through SDL. Direct3D Much of Wine's DirectX effort goes into building WineD3D, a translation layer from Direct3D and DirectDraw API calls into OpenGL. As of 2019, this component supports up to DirectX 11. As of 12 December 2016, Wine is good enough to run Overwatch with D3D11. Besides being used in Wine, WineD3D DLLs have also been used on Windows itself, allowing older GPUs to run games using newer DirectX versions and old DDraw-based games to render correctly. Some work is ongoing to move the Direct3D backend to the Vulkan API. Direct3D 12 support in 4.0 is provided by a "vkd3d" subproject, and in 2019 WineD3D was experimentally ported to use the Vulkan API. Another implementation, DXVK, translates Direct3D 8, 9, 10, and 11 calls using Vulkan as well and is a separate project. Wine, when patched, can alternatively run Direct3D 9 API commands directly via a free and open-source Gallium3D State Tracker (aka Gallium3D GPU driver) without translation into OpenGL API calls. In this case, the Gallium3D layer allows a direct pass-through of DX9 drawing commands, which results in performance improvements of up to a factor of 2. As of 2020, the project is named Gallium.Nine. It is now available as a separate, standalone package and no longer needs a patched Wine version. User interface Wine is usually invoked from the command-line interpreter: wine program.exe. winecfg The winecfg utility is a GUI configuration tool included with Wine; it starts a graphical user interface with controls for adjusting basic options. Winecfg makes configuring Wine easier by making it unnecessary to edit the registry directly, although, if needed, this can be done with the included registry editor (similar to Windows regedit). Third-party applications Some applications require more tweaking than simply installing the application in order to work properly, such as manually configuring Wine to use certain Windows DLLs. The Wine project does not integrate such workarounds into the Wine codebase, instead preferring to focus solely on improving Wine's implementation of the Windows API. While this approach focuses Wine development on long-term compatibility, it makes it difficult for users to run applications that require workarounds.
Consequently, many third-party applications have been created to ease the use of those applications that do not work out of the box within Wine itself. The Wine wiki maintains a page of current and obsolete third-party applications. Winetricks is a script to install some basic components (typically Microsoft DLLs and fonts) and tweak settings required for some applications to run correctly under Wine. It can fully automate the install of a number of applications and games, including applying any needed workarounds. Winetricks has a GUI. The Wine project will accept bug reports for users of Winetricks, unlike most third-party applications. It is maintained by Wine developer Austin English. Q4Wine is an open GUI for advanced setup of Wine. Wine-Doors is an application management tool for the GNOME desktop which adds functionality to Wine. Wine-Doors is an alternative to WineTools which aims to improve upon WineTools' features and extend on the original idea with a more modern design approach. IEs4Linux is a utility to install all versions of Internet Explorer, including versions 4 to 6 and version 7 (in beta). Wineskin is a utility to manage Wine engine versions and create wrappers for macOS. PlayOnLinux is an application to ease the installation of Windows applications (primarily games). There is also a corresponding Macintosh version called PlayOnMac. Lutris is an open-source application to install Windows games on Linux. Bordeaux is a proprietary Wine GUI configuration manager that runs winelib applications. It also supports installation of third-party utilities, installation of applications and games, and the ability to use custom configurations. Bordeaux currently runs on Linux, FreeBSD, PC-BSD, Solaris, OpenSolaris, OpenIndiana, and macOS computers. Bottles is an open-source graphical Wine prefix and runners manager for Wine based on GTK4+Libadwaita. It provides a repository-based dependency installation system and bottle versioning to restore a previous state. WineGUI is a free and open-source graphical interface to manage Wine. It allows a user to create Wine bottles and install Windows applications or games. Functionality The developers of the Direct3D portions of Wine have continued to implement new features such as pixel shaders to increase game support. Wine can also use native DLLs directly, thus increasing functionality, but then a license for Windows is needed unless the DLLs were distributed with the application itself. Wine also includes its own open-source implementations of several Windows programs, such as Notepad, WordPad, Control Panel, Internet Explorer, and Windows Explorer. The Wine Application Database (AppDB) is a community-maintained on-line database about which Windows programs works with Wine and how well they work. Backward compatibility Wine ensures good backward compatibility with legacy Windows applications, including those written for Windows 3.1x. Wine can mimic different Windows versions required for some programs, going as far back as Windows 2.0. However, Windows 1.x and Windows 2.x support was removed from Wine development version 1.3.12. If DOSBox is installed on the system (see below on MS-DOS), Wine development version 1.3.12 and later nevertheless show the "Windows 2.0" option for the Windows version to mimic, but Wine still will not run most Windows 2.0 programs because MS-DOS and Windows functions are not currently integrated. 
Backward compatibility in Wine is generally superior to that of Windows, as newer versions of Windows can force users to upgrade legacy Windows applications, and may break unsupported software forever as there is nobody adjusting the program for the changes in the operating system. In many cases, Wine can offer better legacy support than newer versions of Windows with "Compatibility Mode". Wine can run 16-bit Windows programs (Win16) on a 64-bit operating system, which uses an x86-64 (64-bit) CPU, a functionality not found in 64-bit versions of Microsoft Windows. WineVDM allows 16-bit Windows applications to run on 64-bit versions of Windows. Wine partially supports Windows console applications, and the user can choose which backend to use to manage the console (choices include raw streams, curses, and user32). When using the raw streams or curses backends, Windows applications will run in a Unix terminal. 64-bit applications Preliminary support for 64-bit Windows applications was added in Wine 1.1.10, in December 2008, and the support is now considered stable. The two versions of Wine are built separately, and as a result only building wine64 produces an environment only capable of running x86-64 applications. Wine also has stable support for a WoW64 build, which allows both 32-bit and 64-bit Windows applications to run inside the same Wine instance. To perform such a build, one must first build the 64-bit version, and then build the 32-bit version referencing the 64-bit version. Just like Microsoft's WoW64, the 32-bit build process will add parts necessary for handling 32-bit programs to the 64-bit build. This functionality has been available since at least 2010. MS-DOS Early versions of Microsoft Windows ran on top of MS-DOS, and Windows programs may depend on MS-DOS programs to be usable. Wine does not have good support for MS-DOS, but starting with development version 1.3.12, Wine tries running MS-DOS programs in DOSBox if DOSBox is available on the system. However, due to a bug, current versions of Wine incorrectly identify Windows 1.x and Windows 2.x programs as MS-DOS programs, attempting to run them in DOSBox (which does not work). Winelib Wine provides Winelib, which allows its shared-object implementations of the Windows API to be used as actual libraries for a Unix program. This allows for Windows code to be built into native Unix executables. Since October 2010, Winelib also works on the ARM platform. Non-x86 architectures Support for Solaris SPARC was dropped in version 1.5.26. ARM, Windows CE, and Windows RT Wine provides some support for ARM (as well as ARM64/AArch64) processors and the Windows flavors that run on them. Wine can run ARM/Win32 applications intended for unlocked Windows RT devices (but not Windows RT programs). Windows CE support (either x86 or ARM) is missing, but an unofficial, pre-alpha proof-of-concept version called WineCE allows for some support. Wine for Android On 3 February 2013, at a FOSDEM talk in Brussels, Alexandre Julliard demonstrated an early version of Wine running on Google's Android operating system. Experimental builds of Wine for Android (x86 and ARM) were released in late 2017. It has been routinely updated by the official developers ever since. The default builds do not implement cross-architecture emulation via QEMU, and as a result ARM versions will only run ARM applications that use the Win32 API. Microsoft applications Wine, by default, uses specialized Windows builds of Gecko and Mono to substitute for Microsoft's Internet Explorer and .NET Framework.
Wine has built-in implementations of JScript and VBScript. It is possible to download and run Microsoft's installers for those programs through winetricks or manually. Wine is not known to have good support for most versions of Internet Explorer (IE). Of all the reasonably recent versions, Internet Explorer 8 for Windows XP is the only version that reports a usable rating on Wine's AppDB, out-of-the-box. However Google Chrome gets a gold rating (as of Wine 5.5-staging), and Microsoft's IE replacement web browser Edge, is known to be based on that browser (after switching from Microsoft's own rendering engine). Winetricks offer auto-installation for Internet Explorer 6 through 8, so these versions can be reasonably expected to work with its built-in workarounds. An alternative for installing Internet Explorer directly is to use the now-defunct IEs4Linux. It is not compatible with the latest versions of Wine, and the development of IEs4Linux is inactive. Other versions of Wine The core Wine development aims at a correct implementation of the Windows API as a whole and has sometimes lagged in some areas of compatibility with certain applications. Direct3D, for example, remained unimplemented until 1998, although newer releases have had an increasingly complete implementation. CrossOver CodeWeavers markets CrossOver specifically for running Microsoft Office and other major Windows applications, including some games. CodeWeavers employs Alexandre Julliard to work on Wine and contributes most of its code to the Wine project under the LGPL. CodeWeavers also released a new version called CrossOver Mac for Intel-based Apple Macintosh computers on 10 January 2007. Unlike upstream wine, CrossOver is notably able to run on the x64-only versions of macOS, using a technique known as "wine32on64". As of 2012, CrossOver includes the functionality of both the CrossOver Games and CrossOver Pro lines therefore CrossOver Games and CrossOver Pro are no longer available as single products. CrossOver Games was optimized for running Windows video games. Unlike CrossOver, it didn't focus on providing the most stable version of Wine. Instead, experimental features are provided to support newer games. Proton On 21 August 2018, Valve announced a new variation of Wine, named Proton, designed to integrate with the Linux version of the company's Steam software (including Steam installations built into their Linux-based SteamOS operating system and Steam Machine computers). Valve's goal for Proton is to enable Steam users on Linux to play games which lack a native Linux port (particularly back-catalog games), and ultimately, through integration with Steam as well as improvements to game support relative to mainline Wine, to give users "the same simple plug-and-play experience" that they would get if they were playing the game natively on Linux. Proton entered public beta immediately upon being announced. Valve had already been collaborating with CodeWeavers since 2016 to develop improvements to Wine's gaming performance, some of which have been merged to the upstream Wine project. Some of the specific improvements incorporated into Proton include Vulkan-based Direct3D 9, 10, 11, and 12 implementations via vkd3d, DXVK, and D9VK multi-threaded performance improvements via esync, improved handling of fullscreen games, and better automatic game controller hardware support. Proton is fully open-source and available via GitHub. 
WINE@Etersoft The Russian company Etersoft has been developing a proprietary version of Wine since 2006. WINE@Etersoft supports popular Russian applications (for example, 1C:Enterprise by 1C Company). Other projects using Wine source code Other projects using Wine source code include: OTVDM, a 16-bit app compatibility layer for 64-bit Windows. ReactOS, a project to write an operating system compatible with Windows NT versions 5.x and up (which includes Windows 2000 and its successors) down to the device driver level. ReactOS uses Wine source code considerably; however due to architectural differences with ReactOS its code is not generally reused in Wine, such as in the case of ReactOS specific DLLs, such as ntdll, user32, kernel32, gdi32, and advapi. In July 2009, Aleksey Bragin, the ReactOS project lead, started a new ReactOS branch called Arwinss, and it was officially announced in January 2010. Arwinss is an alternative implementation of the core Win32 components, and uses mostly unchanged versions of Wine's user32.dll and gdi32.dll. WineBottler, a wrapper around Wine in the form of a normal Mac application. It manages multiple Wine configurations for different programs in the form of "bottles." Wineskin, an open source Wine GUI configuration manager for macOS. Wineskin creates a wrapper around Wine in the form of a normal Mac Application. The wrapper can also be used to make a distributable "port" of software. Odin, a project to run Win32 binaries on OS/2 or convert them to OS/2 native format. The project also provides the Odin32 API to compile Win32 programs for OS/2. Virtualization products such as Parallels Desktop for Mac and VirtualBox use WineD3D to make use of the GPU. WinOnX, a commercial package of Wine for macOS that includes a GUI for adding and managing applications and virtual machines. WineD3D for Windows, a compatibility wrapper which emulates old Direct3D versions and features that were removed by Microsoft in recent Windows releases, using OpenGL. This sometimes gets older games working again. Apple Game Porting Toolkit, a suite of software introduced at Apple's Worldwide Developer Conference in June 2023 to facilitate porting games from Windows to Mac. Discontinued Cedega / WineX: TransGaming Inc. (now Findev Inc. since the sale of its software businesses) produced the proprietary Cedega software. Formerly known as WineX, Cedega represented a fork from the last MIT-licensed version of Wine in 2002. Much like CrossOver Games, TransGaming's Cedega was targeted towards running Windows video games. On 7 January 2011, TransGaming Inc. announced continued development of Cedega Technology under the GameTree Developer Program. TransGaming Inc. allowed members to keep using their Cedega ID and password until 28 February 2011. Cider: TransGaming also produced Cider, a library for Apple–Intel architecture Macintoshes. Instead of being an end-user product, Cider (like Winelib) is a wrapper allowing developers to adapt their games to run natively on Intel Mac without any changes in source code. Darwine: a port of the Wine libraries to Darwin and Mac OS X for the PowerPC and Intel x86 (32-bit) architectures, created by the OpenDarwin team in 2004. Its PowerPC version relied on QEMU. Darwine was merged back into Wine in 2009. E/OS LX: a project attempting to allow any program designed for any operating system to be run without the need to actually install any other operating system. 
Pipelight: a custom version of Wine (wine-compholio) that acts as a wrapper for Windows NPAPI plugins within Linux browsers. This tool permits Linux users to run Microsoft Silverlight, the Microsoft equivalent of Adobe Flash, and the Unity web plugin, along with a variety of other NPAPI plugins. The project provides an extensive set of patches against the upstream Wine project, some of which were approved and added to upstream Wine. Pipelight is largely obsolete, as modern browsers no longer support NPAPI plugins and Silverlight has been deprecated by Microsoft. Reception The Wine project has received a number of technical and philosophical complaints and concerns over the years. Security Because of Wine's ability to run Windows binary code, concerns have been raised over native Windows viruses and malware affecting Unix-like operating systems, as Wine can run some malware made for Windows. A 2018 security analysis found that 5 out of 30 malware samples were able to successfully run through Wine, a relatively low rate that nevertheless posed a security risk. For this reason the developers of Wine recommend never running it as the superuser. Malware research software such as ZeroWine runs Wine on Linux in a virtual machine, to keep the malware completely isolated from the host system. An alternative that improves security without the performance cost of a virtual machine is to run Wine in an LXC container, as the Anbox software does by default with Android. Another security concern arises when the implemented specifications are ill-designed and allow for security compromise. Because Wine implements these specifications, it will likely also implement any security vulnerabilities they contain. One instance of this problem was the 2006 Windows Metafile vulnerability, which saw Wine implementing the vulnerable SETABORTPROC escape. Wine vs. native Unix applications A common concern about Wine is that its existence means that vendors are less likely to write native Linux, macOS, and BSD applications. As an example of this, it is worth considering IBM's 1994 operating system, OS/2 Warp. One article describes the weaknesses that killed OS/2, the first being its problems with end-user acceptance: perhaps the most serious was that most computers sold already came with DOS and Windows, and many people did not bother to evaluate OS/2 on its merits because they already had an operating system. "Bundling" of DOS and Windows, and the chilling effect this had on the operating system market, frequently came up in United States v. Microsoft Corporation. The Wine project itself responds on one of its wiki pages to the specific complaint that it "encourages" continued development for the Windows API; the same wiki also argues that Wine can help break the chicken-and-egg problem facing Linux on the desktop. The use of Wine for gaming has proved particularly controversial in the Linux community, as some feel it is preventing, or at least hindering, the further growth of native Linux gaming on the platform. One quirk, however, is that Wine is now able to run 16-bit and even certain 32-bit applications and games that do not launch on current 64-bit Windows versions. This use case has led to running Wine on Windows itself via the Windows Subsystem for Linux or third-party virtual machines, as well as encapsulated by means such as BoxedWine and Otvdm. Microsoft Until 2020, Microsoft had not made any public statements about Wine.
However, the Windows Update online service will block updates to Microsoft applications running in Wine. On 16 February 2005, Ivan Leo Puoti discovered that Microsoft had started checking the Windows Registry for the Wine configuration key and would block the Windows Update for any component. As Puoti noted: "It's also the first time Microsoft acknowledges the existence of Wine." In January 2020, Microsoft cited Wine as a positive consequence of being able to reimplement APIs, in its amicus curiae brief for Google LLC v. Oracle America, Inc. In August 2024, Microsoft donated the Mono Project, a reimplementation of the .NET Framework, to the developers of Wine. See also Anbox Columbia Cycada Darling (software) Executor (software) List of free and open-source software packages Linux kernel API Mono (software) PlayOnLinux PlayOnMac ReactOS Windows Interface Source Environment Windows Subsystem for Linux Notes References Further reading Jeremy White's Wine Answers – Slashdot interview with Jeremy White of CodeWeavers Appointment of the Software Freedom Law Center as legal counsel to represent the Wine project Wine: Where it came from, how to use it, where it's going – a work by Dan Kegel External links 1993 software Compatibility layers Computing platforms Cross-platform software Free software programmed in C Free system software Linux APIs Linux emulation software Software using the GNU Lesser General Public License
Wine (software)
Technology
6,122
44,687,845
https://en.wikipedia.org/wiki/Gas%20giant
A gas giant is a giant planet composed mainly of hydrogen and helium. Jupiter and Saturn are the gas giants of the Solar System. The term "gas giant" was originally synonymous with "giant planet". However, in the 1990s, it became known that Uranus and Neptune are really a distinct class of giant planets, being composed mainly of heavier volatile substances (which are referred to as "ices"). For this reason, Uranus and Neptune are now often classified in the separate category of ice giants. Jupiter and Saturn consist mostly of elements such as hydrogen and helium, with heavier elements making up between 3 and 13 percent of their mass. They are thought to consist of an outer layer of compressed molecular hydrogen surrounding a layer of liquid metallic hydrogen, with probably a molten rocky core inside. The outermost portion of their hydrogen atmosphere contains many layers of visible clouds that are mostly composed of water (despite earlier consensus that there was no water anywhere in the Solar System besides Earth) and ammonia. The layer of metallic hydrogen located in the mid-interior makes up the bulk of every gas giant and is referred to as "metallic" because the very large atmospheric pressure turns hydrogen into an electrical conductor. The gas giants' cores are thought to consist of heavier elements at such high temperatures and pressures that their properties are not yet completely understood. The placement of the Solar System's gas giants can be explained by the grand tack hypothesis. The defining differences between a very low-mass brown dwarf (which can have a mass as low as roughly 13 times that of Jupiter) and a gas giant are debated. One school of thought is based on formation; the other, on the physics of the interior. Part of the debate concerns whether brown dwarfs must, by definition, have experienced nuclear fusion at some point in their history. Terminology The term gas giant was coined in 1952 by the science fiction writer James Blish and was originally used to refer to all giant planets. It is, arguably, something of a misnomer because throughout most of the volume of all giant planets, the pressure is so high that matter is not in gaseous form. Other than solids in the core and the upper layers of the atmosphere, all matter is above the critical point, where there is no distinction between liquids and gases. The term has nevertheless caught on, because planetary scientists typically use "rock", "gas", and "ice" as shorthands for classes of elements and compounds commonly found as planetary constituents, irrespective of what phase the matter may appear in. In the outer Solar System, hydrogen and helium are referred to as "gases"; water, methane, and ammonia as "ices"; and silicates and metals as "rocks". In this terminology, since Uranus and Neptune are primarily composed of ices, not gas, they are more commonly called ice giants, distinct from the gas giants. Classification Theoretically, gas giants can be divided into five distinct classes according to their modeled physical atmospheric properties, and hence their appearance: ammonia clouds (I), water clouds (II), cloudless (III), alkali-metal clouds (IV), and silicate clouds (V). Jupiter and Saturn are both class I. Hot Jupiters are class IV or V. Extrasolar Cold gas giants A cold hydrogen-rich gas giant more massive than Jupiter will only be slightly larger in volume than Jupiter, up to a limiting mass; above that mass, gravity will cause the planet to shrink (see degenerate matter).
Kelvin–Helmholtz heating can cause a gas giant to radiate more energy than it receives from its host star. Gas dwarfs Although the words "gas" and "giant" are often combined, hydrogen planets need not be as large as the familiar gas giants from the Solar System. However, smaller gas planets and planets closer to their star will lose atmospheric mass more quickly via hydrodynamic escape than larger planets and planets farther out. A gas dwarf could be defined as a planet with a rocky core that has accumulated a thick envelope of hydrogen, helium and other volatiles, having as result a total radius between 1.7 and 3.9 Earth-radii. The smallest known extrasolar planet that is likely a "gas planet" is Kepler-138d, which has the same mass as Earth but is 60% larger and therefore has a density that indicates a thick gas envelope. A low-mass gas planet can still have a radius resembling that of a gas giant if it has the right temperature. Precipitation and meteorological phenomena Jovian weather Heat that is funneled upward by local storms is a major driver of the weather on gas giants. Much, if not all, of the deep heat escaping the interior flows up through towering thunderstorms. These disturbances develop into small eddies that eventually form storms such as the Great Red Spot on Jupiter. On Earth and Jupiter, lightning and the hydrologic cycle are intimately linked together to create intense thunderstorms. During a terrestrial thunderstorm, condensation releases heat that pushes rising air upward. This "moist convection" engine can segregate electrical charges into different parts of a cloud; the reuniting of those charges is lightning. Therefore, we can use lightning to signal to us where convection is happening. Although Jupiter has no ocean or wet ground, moist convection seems to function similarly compared to Earth. Jupiter's Red Spot The Great Red Spot (GRS) is a high-pressure system located in Jupiter's southern hemisphere. The GRS is a powerful anticyclone, swirling at about 430 to 680 kilometers per hour counterclockwise around the center. The Spot has become known for its ferocity, even feeding on smaller Jovian storms. Tholins are brown organic compounds found within the surface of various planets that are formed by exposure to UV irradiation. The tholins that exist on Jupiter's surface get sucked up into the atmosphere by storms and circulation; it is hypothesized that those tholins that become ejected from the regolith get stuck in Jupiter's GRS, causing it to be red. Helium rain on Saturn and Jupiter Condensation of helium creates liquid helium rain on gas giants. On Saturn, this helium condensation occurs at certain pressures and temperatures when helium does not mix in with the liquid metallic hydrogen present on the planet. Regions on Saturn where helium is insoluble allow the denser helium to form droplets and act as a source of energy, both through the release of latent heat and by descending deeper into the center of the planet. This phase separation leads to helium droplets that fall as rain through the liquid metallic hydrogen until they reach a warmer region where they dissolve in the hydrogen. Since Jupiter and Saturn have different total masses, the thermodynamic conditions in the planetary interior could be such that this condensation process is more prevalent in Saturn than in Jupiter. Helium condensation could be responsible for Saturn's excess luminosity as well as the helium depletion in the atmosphere of both Jupiter and Saturn. 
See also List of gravitationally rounded objects of the Solar System List of planet types Hot Jupiter Ice giant Kepler-1704b Brown dwarf References Types of planet Solar System
Gas giant
Astronomy
1,469
6,243,353
https://en.wikipedia.org/wiki/Signaling%20compression
In data compression, signaling compression, or SigComp, is a compression method designed especially for compressing text-based communication protocols such as SIP or RTSP. SigComp was originally defined in RFC 3320 and was later updated with RFC 4896. A Negative Acknowledgement Mechanism for Signaling Compression is defined in RFC 4077. The SigComp work is performed in the ROHC working group in the transport area of the IETF. Overview SigComp specifications describe a compression scheme that sits between the application layer and the transport layer (e.g. between SIP and UDP). It is implemented on top of a virtual machine, the Universal Decompressor Virtual Machine (UDVM), which executes a specific instruction set optimized for decompression. One strong point of SigComp is that the bytecode needed to decode messages can be sent over SigComp itself, which allows any kind of compression scheme to be used as long as it can be expressed as UDVM bytecode. Thus any SigComp-compatible device may, without any firmware change, use compression mechanisms that did not exist when it was released. Additionally, some decoders have already been standardised, so SigComp can reference that code rather than sending it over the connection. To ensure that a message is decodable, the only requirement is that the UDVM code is available; compression itself is performed outside the virtual machine, so native code can be used. To associate compression state with a particular application conversation (e.g. a given SIP session), a compartment mechanism is used; a given application may have any number of different, independent conversations while persisting all of the session state (as needed/specified by the compression scheme and UDVM code). References Related standards documents – Signaling Compression (SigComp) – Signaling Compression (SigComp) – Extended Operations – The Session Initiation Protocol (SIP) and Session Description Protocol (SDP) Static Dictionary for Signaling Compression (SigComp) – Compressing the Session Initiation Protocol (SIP) – A Negative Acknowledgement Mechanism for Signaling Compression – Signaling Compression (SigComp) Users' Guide – Signaling Compression (SigComp) Torture Tests – Signaling Compression (SigComp) Corrections and Clarifications – Applying Signaling Compression (SigComp) to the Session Initiation Protocol (SIP) – The Presence-Specific Static Dictionary for Signaling Compression (Sigcomp) 3GPP TR23.979 Annex C – Required SigComp performance Data compression Multimedia Signal processing VoIP protocols Presentation layer protocols
Signaling compression
Technology,Engineering
545
63,352,392
https://en.wikipedia.org/wiki/Target%20selection
Target selection is the process by which axons (nerve fibres) selectively target other cells for synapse formation. Synapses are structures which enable electrical or chemical signals to pass between nerves. While the mechanisms governing target specificity remain incompletely understood, it has been shown in many organisms that a combination of genetic and activity-based mechanisms govern initial target selection and refinement. The process of target selection has multiple steps that include axon pathfinding when neurons extend processes to specific regions, cellular target selection when neurons choose appropriate partners in a target region from a multitude of potential partners, and subcellular target selection where axons often target particular regions of a partner neuron. Description As bundled axons finish navigating through various neural circuits during neural development, the growth cones must selectively target with which cells it will synapse. This can be particularly well observed in the visual and olfactory systems of organisms. In order to develop into a properly functioning nervous system, there must be an extremely high degree of accuracy in which cell the growth cone forms neural connections. Although the target cell selection must be highly accurate, the degree of specificity that the neural connectivity achieves varies based on the neuronal circuitry system. The target selection process of an axon to develop synaptic connections with specific cells can be broken down into multiple stages that are not necessarily confined to exact chronological order. The stages of targeting include: region specification target cell specification subcellular specification synaptic refinement Region specification The first stage in target selection is specification of target region, a process known as axon pathfinding. Growing neurites follow gradients of cell surface molecules that serve as chemoattractants and repellents to the growth cone. This perspective is an evolution of the chemoaffinity hypothesis posited by the neurobiologist Roger Wolcott Sperry in the 1960s. Sperry studied how the neurons in the visual systems of amphibians and goldfish form topographic maps in the brain, noting that if the optic nerve is crushed and allowed to regenerate, the axons will trace back the same patterns of connections. Sperry hypothesized that the target cells carried "identification tags" that would guide the growing axon, which we now know as recognition molecules that bind the growth cone along a gradient. Neurons in sensory systems like the visual, auditory, or olfactory cortex grow into topographic maps such that neighboring neurons in the periphery correspond to adjacent target locations in the central nervous system. For example, neurons nearby on the retina will project to nearby cortical cells, creating a so-called retinotopic map. This cortical organization allows organisms to more easily decode stimuli. The mechanisms governing region specification have been well studied in numerous systems. In Drosophila, numerous axon guidance molecules have been shown to be involved in precise regionalization of the ventral nerve cord. Target cell specification Once a growing neuron has entered the target area, they must locate and enter the appropriate target cell with which to synapse. This is accomplished through sequential signaling of attractive and repulsive cues, largely neurotrophins. 
The axon grows along its chemoattractant gradient until approaching the target cell, when its growth is slowed down by a sudden drop in the concentration of chemoattractant. This serves as a signal to enter the target cell.[1] As the growth cone slows down, branches begin to form through one of two modalities: splitting of the growth cone, or interstitial branching. Growth cone splitting results in bifurcation of the main axon and is associated with axon guidance and innervating multiple faraway targets. Conversely, interstitial branching increases axonal coverage locally to define its presynaptic territory. Most mammalian CNS branches extend interstitially.[7] Branching can be caused by repulsive cues in the environment that cause the growth cone to pause and collapse, resulting in the formation of branches. [8] To ensure successful innervation, inappropriate targeting must be prevented. Once the axon has reached its target area and started to slow down and branch, it can be held within the target area by a perimeter of cues repulsive to the growth cone. Cell-to-cell interactions Axons express patterns of cell-surface adhesion molecules that allow them to match with specific layer targets. An important family of adhesion molecules is constituted by the cadherins, whose different combination on targeting cells allow the traction and guidance of the forming axons. A typical example of layers with combinatorial expression of these molecules is the tectal laminae in the chick tectum, where the N-cadherin molecule is present only in those layers that receive axons form the retina. Extracellular cues Matrix factors and secreted cues are also very important in the formation of layered structures, and can be divided into attractive and repulsive cues, though the same factor can have both functions under varying conditions. For example, semaphorin is a substance with a repulsive effect that has been shown to have a fundamental role in layering between different somatosensory modalities in the spinal cord system. Synapse formation The molecular mechanism of synapse formation is a process composed by different stages that relies on complex intracellular mechanisms involving both the pre- and postsynaptic cell. When the growth cone of the growing presynaptic axon makes contact with the target cell, it loses the filopodia, while both cells start expressing adhesion molecules on their respective membranes to form tight junctions, called "puncta adherens", which are similar to an adherens junction. Different classes of adhesion molecules, like SynCAM, cadherins and neuroligins/neurexins play an important role in synapse stabilization and enable synaptic formation. After the synapses have been stabilized, the pre- and postsynaptic cells undergo subcellular changes on each side of the synapses. Namely, there is an accumulation of the Golgi apparatus on the postsynaptic side, while there is an accumulation of vesicles in the presynaptic terminal. Finally at the end of synaptogenesis, there is an apposition of extracellular matrix between the cells with the formation of a synaptic cleft. Characteristic of the postsynaptic cell is the presence of a postsynaptic density (PSD), formed by PDZ-domain-containing scaffold proteins whose function is to keep the neurotransmitter receptors clustered inside the synapse. References Cell biology Neural circuitry Nervous system
Target selection
Biology
1,371
18,436,744
https://en.wikipedia.org/wiki/Henrik%20I.%20Christensen
Henrik Iskov Christensen (born July 16, 1962 in Frederikshavn, Denmark) is a Danish roboticist and Professor of Computer Science in the Department of Computer Science and Engineering at the UC San Diego Jacobs School of Engineering. He is also the Director of the Contextual Robotics Institute at UC San Diego. Prior to UC San Diego, he was a Distinguished Professor of Computer Science in the School of Interactive Computing at the Georgia Institute of Technology. At Georgia Tech, Christensen served as the founding director of the Institute for Robotics and Intelligent Machines (IRIM@GT) and held the KUKA Chair of Robotics. Previously, Christensen was the Founding Chairman of the European Robotics Research Network (EURON) and an IEEE Robotics and Automation Society Distinguished Lecturer in Robotics. Biography Christensen received his Certificate of Apprenticeship in Mechanical Engineering from the Frederikshavn Technical School, Denmark, in 1981. He received his M.Sc. and Ph.D. in Electrical Engineering from Aalborg University in 1987 and 1990, respectively. His doctoral thesis, Aspects of Real Time Image Sequence Analysis, was advised by Erik Granum. After receiving his Ph.D., Christensen held teaching and research positions at Aalborg University, Oak Ridge National Laboratory, and the Royal Institute of Technology. In 2006, Christensen accepted a part-time position at the Georgia Institute of Technology as a Distinguished Professor of Computer Science and the KUKA Chair of Robotics, and transitioned to full-time in early 2007. At Georgia Tech, Christensen served as the founding director of the Center for Robotics and Intelligent Machines (RIM@GT), an interdepartmental research unit consisting of the College of Computing, the College of Engineering, and the Georgia Tech Research Institute (GTRI). During his tenure, RIM@GT experienced unprecedented growth, including (as of 2008) 36 faculty members as well as a dedicated interdisciplinary Ph.D. program in Robotics. He joined UC San Diego in the fall of 2016 to become the director of the UC San Diego Contextual Robotics Institute. The institute does research on robots that empower people in their daily lives, from work and leisure to domestic tasks. An important consideration is the context in which the robot is to perform its tasks. Research DARPA Urban Grand Challenge In 2007, Christensen led Georgia Tech's team in the DARPA Urban Grand Challenge as the principal investigator. The 2007 UGC was the third installment of the DARPA Grand Challenges (following those of 2004 and 2005), and took place on November 3, 2007, at the site of the now-closed George Air Force Base (currently used as Southern California Logistics Airport) in Victorville, California. The event involved an urban-area course, to be completed in less than 6 hours while obeying all traffic regulations. Professional activities Associate Editor of Journal of Machine Vision and Applications, Springer Verlag (1996–2004).
Associate Editor of International Journal of Pattern Recognition and Artificial Intelligence, World Scientific Press (1997–2005) Associate Editor of MIT Press series on "Intelligent Robotics and Autonomous Agents", (1997–) Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (1999–2003) Associate Editor Robotics and Autonomous Systems journal, Elsevier, Competition Corner, (1999–2002) Associate Editor of AAAI AI Magazine (2000–2007) Associate Editor of Springer Series on "Springer Tracts in Advanced Robotics", (2001–) Associate Editor of International Journal of Robotics Research, (2002–) Associate Editor of Service Robotics, (2005–) Patents Position Estimation Method, H.I. Christensen & G. Zunino, World patent (WO03062937) 3 March 11, 2008 Förfarande för en anordning på hjul, G. Zunino & H.I. Christensen, Swedish Patent (SE0200197) Mobile Robot, P. Jensfelt & H.I. Christensen, World Patent. Honors and awards The Foundation Vision North 1991 Research Award. Awarded for contribution to advancement of research at the Laboratory of Image Analysis, Aalborg University. August 1991. Elected Officer of International Foundation of Robotics Research (2003–) IEEE RAS Distinguished Lecturer in Robotics (2004–2006) Engelberger Award for Education (2011) References External links Home page UC San Diego announcement of Christensen's hire 1962 births Living people People from Frederikshavn Roboticists Control theorists Georgia Tech faculty University of California, San Diego faculty Danish scientists Computer vision researchers Aalborg University alumni
Henrik I. Christensen
Engineering
897
42,852
https://en.wikipedia.org/wiki/Radio%20frequency
Radio frequency (RF) is the oscillation rate of an alternating electric current or voltage or of a magnetic, electric or electromagnetic field or mechanical system. The RF range lies roughly between the upper limit of audio frequencies and the lower limit of infrared frequencies, and also encompasses the microwave range. These are the frequencies at which energy from an oscillating current can radiate off a conductor into space as radio waves, so they are used in radio technology, among other uses. Different sources specify different upper and lower bounds for the frequency range. Electric current Electric currents that oscillate at radio frequencies (RF currents) have special properties not shared by direct current or lower audio frequency alternating current, such as the 50 or 60 Hz current used in electrical power distribution. Energy from RF currents in conductors can radiate into space as electromagnetic waves (radio waves). This is the basis of radio technology. RF current does not penetrate deeply into electrical conductors but tends to flow along their surfaces; this is known as the skin effect. RF currents applied to the body often do not cause the painful sensation and muscular contraction of electric shock that lower frequency currents produce. This is because the current changes direction too quickly to trigger depolarization of nerve membranes. However, this does not mean RF currents are harmless; they can cause internal injury as well as serious superficial burns called RF burns. RF current can ionize air, creating a conductive path through it. This property is exploited by "high frequency" units used in electric arc welding, which use currents at higher frequencies than power distribution uses. Another property is the ability to appear to flow through paths that contain insulating material, like the dielectric insulator of a capacitor. This is because capacitive reactance in a circuit, equal to 1/(2πfC) for a capacitance C at frequency f, decreases with increasing frequency. In contrast, RF current can be blocked by a coil of wire, or even a single turn or bend in a wire. This is because the inductive reactance of a circuit, equal to 2πfL for an inductance L, increases with increasing frequency. When conducted by an ordinary electric cable, RF current has a tendency to reflect from discontinuities in the cable, such as connectors, and travel back down the cable toward the source, causing a condition called standing waves. RF current may be carried efficiently over transmission lines such as coaxial cables. Frequency bands The radio spectrum of frequencies is divided into bands with conventional names designated by the International Telecommunication Union (ITU):

Frequency range | Wavelength range | ITU full name | ITU abbreviation | IEEE bands
Below 3 Hz | > 10⁵ km | — | — | —
3–30 Hz | 10⁵–10⁴ km | Extremely low frequency | ELF | —
30–300 Hz | 10⁴–10³ km | Super low frequency | SLF | —
300–3000 Hz | 10³–100 km | Ultra low frequency | ULF | —
3–30 kHz | 100–10 km | Very low frequency | VLF | —
30–300 kHz | 10–1 km | Low frequency | LF | —
300 kHz – 3 MHz | 1 km – 100 m | Medium frequency | MF | —
3–30 MHz | 100–10 m | High frequency | HF | HF
30–300 MHz | 10–1 m | Very high frequency | VHF | VHF
300 MHz – 3 GHz | 1 m – 100 mm | Ultra high frequency | UHF | UHF, L, S
3–30 GHz | 100–10 mm | Super high frequency | SHF | S, C, X, Ku, K, Ka
30–300 GHz | 10–1 mm | Extremely high frequency | EHF | Ka, V, W, mm
300 GHz – 3 THz | 1 mm – 0.1 mm | Tremendously high frequency | THF | —

Frequencies of 1 GHz and above are conventionally called microwave, while frequencies of 30 GHz and above are designated millimeter wave. More detailed band designations are given by the standard IEEE letter-band frequency designations and the EU/NATO frequency designations. Applications Communications Radio frequencies are used in communication devices such as transmitters, receivers, computers, televisions, and mobile phones, to name a few. Radio frequencies are also applied in carrier current systems including telephony and control circuits. The MOS integrated circuit is the technology behind the current proliferation of radio frequency wireless telecommunications devices such as cellphones. Medicine Medical applications of radio frequency (RF) energy, in the form of electromagnetic waves (radio waves) or electrical currents, have existed for over 125 years, and now include diathermy, hyperthermia treatment of cancer, electrosurgery scalpels used to cut and cauterize in operations, and radiofrequency ablation. Magnetic resonance imaging (MRI) uses radio frequency fields to generate images of the human body. Non-surgical weight loss equipment Radio frequency (RF) energy is also used in devices advertised for weight loss and fat removal. The possible effects RF might have on the body, and whether RF can lead to fat reduction, need further study. Currently, devices such as trusculpt ID, Venus Bliss and many others use this type of energy alongside heat to target fat pockets in certain areas of the body. However, studies on how effective these devices are remain limited. Measurement Test apparatus for radio frequencies can include standard instruments at the lower end of the range, but at higher frequencies, the test equipment becomes more specialized. Mechanical oscillations While RF usually refers to electrical oscillations, mechanical RF systems are not uncommon: see mechanical filter and RF MEMS. See also Amplitude modulation (AM) Bandwidth (signal processing) Electromagnetic interference Electromagnetic radiation Electromagnetic spectrum EMF measurement Frequency allocation Frequency modulation (FM) Plastic welding Pulsed electromagnetic field therapy Radio astronomy Spectrum management References External links Analog, RF and EMC Considerations in Printed Wiring Board (PWB) Design Definition of frequency bands (VLF, ELF ... etc.) IK1QFK Home Page (vlf.it) Radio, light, and sound waves, conversion between wavelength and frequency RF Terms Glossary Radio spectrum Radio waves Television terminology
Radio frequency
Physics,Technology,Engineering
1,673
26,451,167
https://en.wikipedia.org/wiki/Phosphinidene
Phosphinidenes (IUPAC: phosphanylidenes, formerly phosphinediyls) are low-valent phosphorus compounds analogous to carbenes and nitrenes, having the general structure RP. The parent phosphinidene has the formula PH. More common are the organic analogues where R = alkyl or aryl. In these compounds phosphorus has only 6 electrons in its valence level. Most phosphinidenes are highly reactive and short-lived, thereby complicating empirical studies on their chemical properties. A variety of strategies have been employed to stabilize phosphinidenes (e.g. π-donation, steric protection, transition metal complexation). Furthermore, reagents and systems have been developed that can generate and transfer phosphinidenes as intermediates in the synthesis of various organophosphorus compounds. Electronic structure Like carbenes, phosphinidenes can exist in either a singlet state or a triplet state, with the triplet state typically being more stable. The stability of these states and their relative energy difference (the singlet-triplet energy gap) depend on the substituents. The ground state of the parent phosphinidene (PH) is a triplet that is 22 kcal/mol more stable than the lowest singlet state. This singlet-triplet energy gap is considerably larger than that of the simplest carbene, methylene (9 kcal/mol). Ab initio calculations from Nguyen et al. found that alkyl- and silyl-substituted phosphinidenes have triplet ground states, possibly in part due to negative hyperconjugation. Substituents containing lone pairs (e.g. -NX2, -OX, -PX2, -SX) stabilize the singlet state, presumably by π-donation into an empty phosphorus 3p orbital; in most of these cases, the energies of the lowest singlet and triplet states were close to degenerate. A singlet ground state could be induced in amino- and phosphino-phosphinidenes by introducing bulky β-substituents, which are thought to destabilize the triplet state by distorting the pyramidal geometry through increased nuclear repulsion. Case studies Dibenzo-7-phosphanorbornadiene derivatives One way to generate phosphinidenes employs the decyclization of phosphaanthracene complexes. Treatment of a bulky phosphine chloride (RPCl2) with magnesium anthracene affords a dibenzo-7-phosphanorbornadiene compound (RPA). Under thermal conditions, the RPA compound (R = NiPr2) decomposes to yield anthracene; kinetic experiments found this decomposition to be first-order. It was hypothesized that the amino-phosphinidene iPr2NP is formed as a transient intermediate species, and this was corroborated by an experiment where 1,3-cyclohexadiene was used as a trapping agent, forming anti-iPr2NP(C6H8). Molecular beam mass spectrometry has enabled the detection of the evolution of amino-phosphinidene fragments from a number of alkylamide derivatives (e.g. Me2NP+ and Me2NPH+ from Me2NPA) in the gas phase at elevated temperatures. Phosphino-phosphinidene The first singlet phosphino-phosphinidene has been prepared using extremely bulky substituents. The authors prepared a chlorodiazaphospholidine with bulky (2,6-bis[(4-tert-butylphenyl)methyl]-4-methylphenyl) groups, and then synthesized the corresponding phosphaketene. Subsequent photolytic decarbonylation of the phosphaketene produced the phosphino-phosphinidene product as a yellow-orange solid that is stable at room temperature but decomposes immediately in the presence of air and moisture. 31P NMR spectroscopy shows assigned product peaks at 80.2 and -200.4 ppm, with a J-coupling constant of JPP = 883.7 Hz.
The very high P-P coupling constant is indicative of P-P multiple bond character. The air/water sensitivity and high solubility of this compound prevented characterization by X-ray crystallography. Density functional theory and Natural bond orbital (NBO) calculations were used to gain insight into the structure and bonding of these phosphino-phosphinidenes. DFT calculations at the M06-2X/Def2-SVP level of theory on the phospino-phosphinidene with bulky 2,6-bis[4-tert-butylphenyl)methyl]-4-methylphenyl groups suggest that the tri-coordinated phosphorus atom exists in a planar environment. Calculations at the M06-2X/def2-TZVPP//M06-2X/def2-SVP level of theory were applied to a simplified model compound with diisopropylphenyl (Dipp) groups so as to reduce the computational cost for detailed NBO analysis. Inspection of the outputted wavefunctions shows that the HOMO and HOMO-1 are P-P π-bonding orbitals and the LUMO is a P-P π*-antibonding orbital. Further evidence of multiple bond character between the phosphorus atoms was provided by natural resonance theory and a large Wiberg bond index (P1-P2: 2.34). Natural population analysis assigned a negative partial charge to the terminal phosphorus atom (-0.34 q) and a positive charge to the tri-coordinated phosphorus atom (1.16 q). Despite the negative charge on the terminal phosphorus atom, subsequent studies have shown that this particular phosphinidene is electrophilic at the phosphinidene center. This phosphino-phosphinidene reacts with a number of nucleophiles (CO, isocyanides, carbenes, phosphines, etc.) to form phosphinidene-nucleophile adducts Upon nucleophilic addition, the tri-coordinated phosphorus atom becomes non-planar, and it is postulated that the driving force of the reaction is provided by the instability of the phosphinidene's planar geometry. Phospha-Wittig fragmentation In 1989, Fritz et al. synthesized the phospha-Wittig species shown to the right. Phospha-Wittig compounds can be viewed as a phosphinidene stabilized by a phosphine. These compounds have been given the label of "phospha-Wittig" as they have two dominant resonance structures (a neutral form and a zwitterionic form) that are analogous to those of the phosphonium ylides that are used in the Wittig reaction. Fritz et al. found that this particular phospha-Wittig reagent thermally decomposes at 20 °C to give tBu2PBr, LiBr, and cyclophosphanes. The authors proposed that the singlet phosphino-phosphinidene tBu2PP was formed as an intermediate in this reaction. Further evidence for this was provided by trapping experiments, where the thermal decomposition of the phospha-Wittig reagent in the presence of 3,4,-dimethyl-1,3-butadiene and cyclohexene gave rise to the products shown in the figure below. Metal complexes Terminal phosphinidine complexes Terminal transition-metal-complexed phosphinidenes LnM=P-R are phosphorus analogs of transition metal carbene complexes. The first "metal-phosphinidine" was reported by Marinetti et al. They generated the transient species [(OC)5M=P-Ph] by fragmentation of 7-phosphanorbornadiene molybdenum and tungsten complexes inside a mass spectrometer. Soon after, they discovered that these 7-phosphanorbornadiene complexes could be used to transfer the phosphinidene complex [(OC)5M=P-R] to various unsaturated substrates. Donor-stabilized terminal phosphinidene complexes are also known, which could release free phosphinidene complexes LnM=P-R at mild conditions by P-donor dissociation reactions. 
The phosphinidene complexes decomposed to white phosphorus if no unsaturated substrates were provided. Terminal phosphinidene complexes of the type Cp2M=P-R (M = Mo, W) can be obtained by combining aryl-dichlorophosphines RPCl2 with [Cp2MHLi]4. Phosphinidine-based clusters Metal clusters containing RP substituents are numerous. They typically arise by the reaction of metal carbonyls with primary phosphines (compounds with the formula RPH2). A partucularly well-studied case is , which forms from iron pentacarbonyl and phenylphosphine according to the following idealized equation: A related example is the tert-butylphosphinidene complex (t-BuP)Fe3(CO)10. See also Carbene analog Phosphorus compounds References Reactive intermediates Organophosphorus compounds
Phosphinidene
Chemistry
2,085
38,707,040
https://en.wikipedia.org/wiki/Truncated%20order-7%20square%20tiling
In geometry, the truncated order-7 square tiling is a uniform tiling of the hyperbolic plane. It has a Schläfli symbol of t0,1{4,7}.

Related polyhedra and tiling

References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things, 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)

See also
Uniform tilings in hyperbolic plane
List of regular polytopes

External links
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch

Hyperbolic tilings
Isogonal tilings
Order-7 tilings
Square tilings
Truncated tilings
Uniform tilings
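As a brief check of the hyperbolic character stated above (general background, not taken from the article itself): a regular tiling with Schläfli symbol {p,q} lives in the hyperbolic plane exactly when 1/p + 1/q < 1/2, where p and q are the Schläfli parameters, here 4 and 7.

\[
\frac{1}{p} + \frac{1}{q} < \frac{1}{2},
\qquad
\frac{1}{4} + \frac{1}{7} = \frac{11}{28} < \frac{14}{28} = \frac{1}{2}.
\]

Since the inequality holds for {4,7}, its truncation t0,1{4,7} likewise tiles the hyperbolic plane.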
Truncated order-7 square tiling
Physics
157
16,472,194
https://en.wikipedia.org/wiki/Attalus%20of%20Rhodes
Attalus of Rhodes was an ancient Greek grammarian, astronomer, and mathematician who lived in Rhodes in the 2nd century BC and was a contemporary of Hipparchus. He wrote a commentary on the Phaenomena of Aratus. Although this work is lost, Hipparchus cites him in his Commentary on the Phaenomena of Eudoxus and Aratus. Attalus sought to defend both Aratus and Eudoxus against criticisms from contemporary astronomers and mathematicians.

Book IV of Apollonius of Perga's Conics is addressed to someone named Attalus, and it has been suggested that this may have been Attalus of Rhodes. However, this is not a good match chronologically, and Attalus was a common name at the time, so the connection is only speculative.

References

Ancient Greek astronomers
Ancient Greek mathematicians
2nd-century BC Rhodians
2nd-century BC Greek writers
Ancient Rhodian grammarians
Ancient Rhodian scientists
2nd-century BC mathematicians
2nd-century BC astronomers
Attalus of Rhodes
Astronomy
216
25,213,018
https://en.wikipedia.org/wiki/Days%20payable%20outstanding
Days payable outstanding (DPO) is an efficiency ratio that measures the average number of days a company takes to pay its suppliers.

The formula for DPO is:

DPO = ending A/P / (Purchase/day)

where ending A/P is the accounts payable balance at the end of the accounting period being considered and Purchase/day is calculated by dividing the total cost of goods sold per year by 365 days.

DPO provides one measure of how long a business holds onto its cash. DPO can also be used to compare one company's payment policies to another. Having fewer days of payables on the books than your competitors means they are getting better credit terms from their vendors than you are from yours. If a company is selling something to a customer, it can use that customer's DPO to judge when the customer will pay (and thus what payment terms to offer or expect).

Having a greater days payable outstanding may indicate a company's ability to delay payment and conserve cash. This could arise from better terms with vendors.

DPO is also a critical part of the "Cash Cycle", which combines DPO with the related Days Sales Outstanding and Days In Inventory. Taken together, these three measurements indicate how long (in days) elapses between a cash payment to a vendor and a cash receipt from a customer. This is useful because it indicates how much cash a business must have to sustain itself.

See also
Working capital analysis
Days Sales Outstanding
Days In Inventory
Cash Conversion Cycle

Notes

External links
Basic Instruments of Working Capital Management

Financial ratios
Working capital management
Accounts payable
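A minimal sketch of the calculations described above, in Python. The function and variable names and the dollar figures are illustrative only, not from the article, and the cash-cycle helper follows the conventional DIO + DSO - DPO definition, which the article alludes to but does not state explicitly. A 365-day year is assumed.

def days_payable_outstanding(ending_accounts_payable, annual_cogs, days_in_year=365):
    # Purchase/day is approximated by the year's cost of goods sold spread evenly over the year.
    purchase_per_day = annual_cogs / days_in_year
    return ending_accounts_payable / purchase_per_day

def cash_conversion_cycle(days_in_inventory, days_sales_outstanding, days_payable):
    # Days between paying a vendor and collecting cash from a customer.
    return days_in_inventory + days_sales_outstanding - days_payable

# Hypothetical example: $120,000 of ending payables against $1,460,000 annual COGS
# gives 120,000 / (1,460,000 / 365) = 30 days payable outstanding.
print(days_payable_outstanding(120_000, 1_460_000))   # 30.0
print(cash_conversion_cycle(45, 40, 30))              # 55 days of cash tied up

In this hypothetical example, the longer a company can stretch its DPO (all else equal), the shorter its cash conversion cycle and the less cash it needs on hand to fund operations.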
Days payable outstanding
Mathematics
311
36,359,521
https://en.wikipedia.org/wiki/Peziza%20phyllogena
Peziza phyllogena, commonly known as the common brown cup or the pig-ear cup, is a species of fungus in the family Pezizaceae. A saprobic species, the fungus produces brownish, cup-shaped fruit bodies that grow singly or in clusters on either soil or well-rotted wood. It is found in Europe, North America, and Iceland, where it fruits in the spring.

Taxonomy
The species was first described by Mordecai Cubitt Cooke in 1877, based on material from South Carolina sent to him by American botanist Henry William Ravenel. In a 1987 publication, Donald Pfister placed Peziza badioconfusa in synonymy with P. phyllogena. The former species had been described by Richard Korf in 1954; in that publication, Korf noted "It is perhaps our commonest large cup-fungus, and it seems to me that it must have been described before 1897 by some European or American author, but I have seen no types which match it." It is commonly known as the common brown cup, or the pig-ear cup.

Description
The fruit bodies of Peziza phyllogena are cup-shaped, measuring in diameter. The flesh is thin and fragile, and the sides of the cup are often compressed or lobed. The cups do not have a stem, and instead are attached to the substrate at a narrow central point on the bottom. The inner surface of the cup is dark purplish brown to dark reddish gray, while the outer surface is similar to the inner surface, or may have more purplish tones. The cup margin is thin, with a sharp edge, and it turns black as it dries. The edibility of the fungus is unknown, but Roger Phillips considers it edible. The spore print is hyaline (translucent) to pale cream. The spores are ellipsoid, covered with warts, and measure 17–23 by 8–13 μm. The asci (spore-bearing cells) are operculate (containing a lid-like covering over the opening), eight-spored, and cylindrical, measuring 215–285 by 11.5–13.5 μm.

Habitat and distribution
Peziza phyllogena grows solitarily or in dense clusters on soil or on well-decayed logs. Fruit bodies usually appear in early spring. The fungus has a widespread distribution in North America, especially the upper Midwest of the United States. It was newly recorded from Iceland in 2007.

References

External links

Pezizaceae
Edible fungi
Fungi described in 1877
Fungi of Europe
Fungi of North America
Taxa named by Mordecai Cubitt Cooke
Fungi of Iceland
Fungus species
Peziza phyllogena
Biology
559