id int64 39 79M | url stringlengths 31 227 | text stringlengths 6 334k | source stringlengths 1 150 ⌀ | categories listlengths 1 6 | token_count int64 3 71.8k | subcategories listlengths 0 30 |
|---|---|---|---|---|---|---|
3,027,162 | https://en.wikipedia.org/wiki/Spindle%20poison | A spindle poison, also known as a spindle toxin, is a poison that disrupts cell division by affecting the protein threads that connect the centromere regions of chromosomes, known as spindles. Spindle poisons effectively cease the production of new cells by interrupting the mitosis phase of cell division at the spindle assembly checkpoint (SAC). However, numerous and varied as they are, spindle poisons are not yet 100% effective at ending the formation of tumors (neoplasms). Even so, substantive therapeutic efficacy has been found in these types of chemotherapeutic treatments. The mitotic spindle is composed of microtubules (polymerized tubulin) that, together with regulatory proteins, act in concert to segregate the replicated chromosomes accurately. Certain compounds affecting the mitotic spindle have proven highly effective against solid tumors and hematological malignancies.
Two specific families of antimitotic agents, the vinca alkaloids and the taxanes, interrupt cell division by perturbing microtubule dynamics. The vinca alkaloids inhibit the polymerization of tubulin into microtubules, resulting in G2/M arrest within the cell cycle and eventually cell death. In contrast, the taxanes arrest the mitotic cell cycle by stabilizing microtubules against depolymerization; paclitaxel, for example, attaches to tubulin within existing microtubules and then stabilizes the polymer. Even though numerous other spindle proteins exist that could be the target of novel chemotherapeutics, tubulin-binding agents are the only types in clinical use. Agents that affect the motor protein kinesin are beginning to enter clinical trials.
Spindle assembly checkpoint (SAC)
Normally, cells duplicate their genetic material and then produce two equal daughter cells. Tampering with this tightly monitored distribution system can produce an irregular chromosome content within each cell, commonly referred to as aneuploidy. Cells have developed various checkpoints to carry out mitosis with great accuracy. Early research concluded that spindle poisons, when introduced into cells, caused a considerable reduction in the number of cells that exited mitosis, while the number of cells that entered mitosis dramatically increased. The SAC was found to be the key signaling pathway behind this mitotic arrest. The precise division of chromosomes is the primary responsibility of the SAC. Its signal originates at kinetochores, protein complexes that help join the DNA of the chromatids to the microtubules. A single unattached kinetochore is sufficient to trigger a response that ultimately blocks cell cycle progression. The end result is that every chromosome is attached to the spindle by the onset of anaphase.
Mitosis
During normal mitosis, the SAC is active for a short duration of minutes. During this period, spindle microtubules attach to chromosomes and any improper attachments are rectified. High cyclin B levels are maintained through inhibition of an E3 ubiquitin ligase that normally marks cyclin B for degradation. This particular ligase is the anaphase-promoting complex or cyclosome (APC/C). While the APC/C is inhibited by the SAC, cyclin B levels stay high, which keeps cyclin-dependent kinase 1 (CDK1) active; mitosis is driven by the activation of CDK1 by cyclin B. After confirmation of proper attachment of all chromosomes, the SAC is turned off and cyclin B is degraded by way of the APC/C. Spindle poisons, in contrast, act on kinetochores during mitosis and prevent them from forming proper attachments to spindle microtubules. Permanent activation of the SAC ensues, along with a mitotic arrest that lasts several hours. These cells will either exit mitosis by an abnormal pathway or undergo apoptosis.
Examples
Some spindle poisons:
Mebendazole
Colchicine
Griseofulvin
Vinca Alkaloids
Paclitaxel (Taxol)
See also
Taxane
Mitotic inhibitor
References
Poisons
Mitotic inhibitors | Spindle poison | [
"Environmental_science"
] | 878 | [
"Poisons",
"Harmful chemical substances",
"Toxicology",
"Mitotic inhibitors"
] |
3,027,218 | https://en.wikipedia.org/wiki/Drunk%20dialing | Drunk dialing refers to an intoxicated person making phone calls that they would not likely make if sober, often a lonely individual calling former or current love interests.
Drunk texting, emailing, and editing of internet sites are related phenomena, and potentially even more embarrassing for the sender: once sent, a message cannot be rescinded, it may be misspelled (due to the sender being drunk), and it might be reviewed and shared among many people.
Hurtful communication
A 2021 study that examined the relationship between drunk texting and emotional dysregulation found a positive correlation. The findings suggest that interventions targeting emotional regulation skills may be beneficial.
In popular culture
In Kurt Vonnegut's 1969 novel Slaughterhouse-Five, the main character describes his tendency to drunk dial:
In the 2004 film Sideways, Miles Raymond (Paul Giamatti) gets drunk and calls his ex-wife while at a restaurant. When he returns to the table, his friend Jack (Thomas Haden Church) asks him, "Did you drink and dial?"
In media
The New York Post, The New York Times, and The Washington Post have all reported on drunk dialing. Cell phone manufacturers and carriers are helping callers prevent it: Virgin Mobile has launched an option to help its users stop drunk dialing by initiating multi-hour bans on calling specific numbers, and the LG Group introduced the LP4100 mobile phone, which includes a breathalyzer. Although the breathalyzer function was incorporated to help the user assess fitness to drive rather than fitness to phone, the owner can program the LP4100 to restrict calls to specific telephone numbers on certain days or after a certain hour, a feature that might help limit drunk dialing by blocking calls when the user is more likely to be intoxicated. This requires prior planning, or awareness that one will become intoxicated at a later time. Some reports indicate that this phone, or a planned future version for U.S. release, would activate the call-blocking function in tandem with the blood alcohol content results from the breathalyzer.
A mobile app Drunk Mode was launched in April 2013. Drunk Mode prevents users from calling or sending messages to specific contacts for up to 12 hours. A reported feature also sets notifications every 30, 60, 90 or 120 minutes to remind users not to engage in certain "drunk behaviors".
References
Drinking culture
Mobile phone culture
Error
Telephony
Interpersonal relationships | Drunk dialing | [
"Biology"
] | 504 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
3,027,310 | https://en.wikipedia.org/wiki/Kretschmann%20scalar | In the theory of Lorentzian manifolds, particularly in the context of applications to general relativity, the Kretschmann scalar is a quadratic scalar invariant. It was introduced by Erich Kretschmann.
Definition
The Kretschmann invariant is

K = R_{abcd} \, R^{abcd}

where R_{abcd} is the Riemann curvature tensor, itself built from the Christoffel symbols \Gamma^{a}{}_{bc} and their derivatives. Because it is a sum of squares of tensor components, this is a quadratic invariant.
Einstein summation convention with raised and lowered indices is used above and throughout the article. An explicit summation expression is

K = \sum_{a,b,c,d} R_{abcd} R^{abcd}, \qquad R^{abcd} = g^{ae} g^{bf} g^{cg} g^{dh} R_{efgh}.
Examples
For a Schwarzschild black hole of mass M, the Kretschmann scalar is

K = \frac{48 G^2 M^2}{c^4 r^6}

where G is the gravitational constant and c is the speed of light.
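To make the formula concrete, the following SymPy sketch (an illustration added here, not part of the original article; written in geometrized units G = c = 1) derives the Schwarzschild value by building the Christoffel symbols and the Riemann tensor directly from the metric:

```python
import sympy as sp

t, r, th, ph, M = sp.symbols('t r theta phi M', positive=True)
x = [t, r, th, ph]
f = 1 - 2*M/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)  # Schwarzschild metric g_ab
ginv = g.inv()

# Christoffel symbols Gamma^a_bc = (1/2) g^{ad} (d_b g_dc + d_c g_db - d_d g_bc)
Gam = [[[sp.simplify(sum(ginv[a, d]*(g[d, b].diff(x[c]) + g[d, c].diff(x[b])
                                     - g[b, c].diff(x[d])) for d in range(4))/2)
         for c in range(4)] for b in range(4)] for a in range(4)]

# Riemann tensor with the first index lowered: R_abcd = g_ae R^e_bcd
def riemann_low(a, b, c, d):
    def up(e):  # R^e_bcd
        term = Gam[e][b][d].diff(x[c]) - Gam[e][b][c].diff(x[d])
        term += sum(Gam[e][c][m]*Gam[m][b][d] - Gam[e][d][m]*Gam[m][b][c]
                    for m in range(4))
        return term
    return sp.simplify(sum(g[a, e]*up(e) for e in range(4)))

# The metric is diagonal, so raising each index just multiplies by the
# matching inverse-metric entry: K = sum_abcd g^aa g^bb g^cc g^dd (R_abcd)^2.
K = sum(ginv[a, a]*ginv[b, b]*ginv[c, c]*ginv[d, d]*riemann_low(a, b, c, d)**2
        for a in range(4) for b in range(4) for c in range(4) for d in range(4))
print(sp.simplify(K))  # 48*M**2/r**6, i.e. 48 G^2 M^2 / (c^4 r^6)
```

The same script adapts to any diagonal metric, such as the FRW metric below, by swapping in the appropriate g.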
For a general FRW spacetime with metric

ds^2 = -dt^2 + a(t)^2 \left( \frac{dr^2}{1 - kr^2} + r^2 \, d\theta^2 + r^2 \sin^2\theta \, d\varphi^2 \right)

the Kretschmann scalar is

K = \frac{12 \left[ a(t)^2 \, \ddot{a}(t)^2 + \left( k + \dot{a}(t)^2 \right)^2 \right]}{a(t)^4}.
Relation to other invariants
Another possible invariant (which has been employed for example in writing the gravitational term of the Lagrangian for some higher-order gravity theories) is

C_{abcd} \, C^{abcd}

where C_{abcd} is the Weyl tensor, the conformal curvature tensor which is also the completely traceless part of the Riemann tensor. In d dimensions this is related to the Kretschmann invariant by

R_{abcd} R^{abcd} = C_{abcd} C^{abcd} + \frac{4}{d-2} R_{ab} R^{ab} - \frac{2}{(d-1)(d-2)} R^2

where R_{ab} is the Ricci curvature tensor and R is the Ricci scalar curvature (obtained by taking successive traces of the Riemann tensor). The Ricci tensor vanishes in vacuum spacetimes (such as the Schwarzschild solution mentioned above), and hence there the Riemann tensor and the Weyl tensor coincide, as do their invariants.
Gauge theory invariants
The Kretschmann scalar and the Chern–Pontryagin scalar

R_{abcd} \, {}^{\star}R^{abcd}

where {}^{\star}R^{abcd} is the left dual of the Riemann tensor, are mathematically analogous (and, to some extent, physically analogous) to the familiar invariants of the electromagnetic field tensor

F_{ab} F^{ab} \quad \text{and} \quad F_{ab} \, {}^{\star}F^{ab}.
Generalising from the gauge theory of electromagnetism to general non-abelian gauge theory, the first of these invariants is

\operatorname{tr}\left( F_{ab} F^{ab} \right),

an expression proportional to the Yang–Mills Lagrangian. Here F_{ab} is the curvature of a covariant derivative, and \operatorname{tr} is a trace form. The Kretschmann scalar arises from taking the connection to be on the frame bundle.
See also
Carminati-McLenaghan invariants, for a set of invariants
Classification of electromagnetic fields, for more about the invariants of the electromagnetic field tensor
Curvature invariant, for curvature invariants in Riemannian and pseudo-Riemannian geometry in general
Curvature invariant (general relativity)
Ricci decomposition, for more about the Riemann and Weyl tensor
References
Further reading
Riemannian geometry
Lorentzian manifolds
Tensors in general relativity | Kretschmann scalar | [
"Physics",
"Engineering"
] | 523 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
3,027,417 | https://en.wikipedia.org/wiki/Site%20Security%20Handbook | The Site Security Handbook, RFC 2196, is a guide on setting computer security policies and procedures for sites that have systems on the Internet (though the information provided should also be useful to sites not yet connected to the Internet). The guide lists issues and factors that a site must consider when setting its own policies. It makes a number of recommendations and provides discussions of relevant areas.
This guide is only a framework for setting security policies and procedures. In order to have an effective set of policies and procedures, a site will have to make many decisions, gain agreement, and then communicate and implement these policies.
The guide is a product of the IETF SSH working group, and was published in 1997, obsoleting the earlier RFC 1244 from 1991.
See also
RFC 2504 - Users' Security Handbook
References
RFC 2196 - Site Security Handbook
Computer security | Site Security Handbook | [
"Technology"
] | 176 | [
"Computer security stubs",
"Computing stubs"
] |
3,027,800 | https://en.wikipedia.org/wiki/Methylglyoxal | Methylglyoxal (MGO) is the organic compound with the formula CH3C(O)CHO. It is a reduced derivative of pyruvic acid. It is a reactive compound that is implicated in the biology of diabetes. Methylglyoxal is produced industrially by degradation of carbohydrates using overexpressed methylglyoxal synthase.
Chemical structure
Gaseous methylglyoxal has two carbonyl groups: an aldehyde and a ketone. In the presence of water, it exists as hydrates and oligomers. The formation of these hydrates is indicative of the high reactivity of MGO, which is relevant to its biological behavior.
Biochemistry
Biosynthesis and biodegradation
In organisms, methylglyoxal is formed as a side-product of several metabolic pathways. It mainly arises as a side-product of glycolysis, from the intermediates glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. It is also thought to arise via the degradation of acetone and threonine. Illustrative of the myriad pathways to MGO, aristolochic acid caused a 12-fold increase of methylglyoxal, from 18 to 231 μg/mg of kidney protein, in poisoned mice. It may form from 3-aminoacetone, which is an intermediate of threonine catabolism, as well as through lipid peroxidation. However, the most important source is glycolysis. Here, methylglyoxal arises from nonenzymatic phosphate elimination from glyceraldehyde phosphate and dihydroxyacetone phosphate (DHAP), two intermediates of glycolysis. This conversion is the basis of a potential biotechnological route to the commodity chemical 1,2-propanediol.
Since methylglyoxal is highly cytotoxic, several detoxification mechanisms have evolved. One of these is the glyoxalase system, in which methylglyoxal is detoxified by glutathione. Glutathione reacts with methylglyoxal to give a hemithioacetal, which is converted into S-D-lactoylglutathione by glyoxalase I. This thioester is hydrolyzed to D-lactate by glyoxalase II.
Biochemical function
Methylglyoxal is involved in the formation of advanced glycation end products (AGEs). In this process, methylglyoxal reacts with free amino groups of lysine and arginine and with thiol groups of cysteine forming AGEs. Histones are also heavily susceptible to modification by methylglyoxal and these modifications are elevated in breast cancer.
DNA damages are induced by reactive carbonyls, principally methylglyoxal and glyoxal, at a frequency similar to that of oxidative DNA damages. Such damage, referred to as DNA glycation, can cause mutation, breaks in DNA and cytotoxicity. In humans, a protein DJ-1 (also named PARK7), has a key role in the repair of glycated DNA bases.
Biomedical aspects
Due to increased blood glucose levels, methylglyoxal has higher concentrations in diabetics and has been linked to arterial atherogenesis. Damage by methylglyoxal to low-density lipoprotein through glycation causes a fourfold increase of atherogenesis in diabetics. Methylglyoxal binds directly to nerve endings and thereby increases the chronic extremity soreness in diabetic neuropathy.
Occurrence, other
Methylglyoxal is a component of some kinds of honey, including manuka honey; it appears to have activity against E. coli and S. aureus and may help prevent formation of biofilms formed by P. aeruginosa.
Research suggests that methylglyoxal contained in honey does not cause an increased formation of advanced glycation end products (AGEs) in healthy persons.
See also
Dicarbonyl
1,2-Dicarbonyl, methylglyoxal can be classified as an 1,2-dicarbonyl
References
Aldehydes
Endogenous aldehydes
GABAA receptor agonists
Metabolism
Conjugated ketones
Advanced glycation end-products | Methylglyoxal | [
"Chemistry",
"Biology"
] | 896 | [
"Biomolecules by chemical classification",
"Carbohydrates",
"Senescence",
"Endogenous aldehydes",
"Biomolecules",
"Cellular processes",
"Biochemistry",
"Advanced glycation end-products",
"Metabolism"
] |
13,364,159 | https://en.wikipedia.org/wiki/Triol | In chemistry, a triol is an organic compound containing three hydroxyl groups (−OH functional groups), such as glycerol.
See also
chemical compounds with one hydroxyl group
Alcohols
Phenols
Enols
Ynols
Polyols, chemical compounds with multiple hydroxyl groups
Diols, chemical compounds with two hydroxyl groups
References
Functional groups | Triol | [
"Chemistry"
] | 75 | [
"Functional groups"
] |
13,364,985 | https://en.wikipedia.org/wiki/Superhero%20Movie | Superhero Movie is a 2008 American superhero parody film written and directed by Craig Mazin, produced by Robert K. Weiss and David Zucker, and starring Drake Bell, Sara Paxton, Christopher McDonald, Kevin Hart, Brent Spiner, Jeffrey Tambor, Robert Joy, Regina Hall, Pamela Anderson, and Leslie Nielsen. It was originally titled Superhero! as a nod to one of the Zuckers's previous films, Airplane! (1980), in which Nielsen also starred.
A spoof of the superhero film genre, primarily Sam Raimi's Spider-Man (2002) and Christopher Nolan's Batman Begins (2005), the film also lampoons other mid-2000s Marvel film adaptations from 20th Century Fox such as Fantastic Four (2005) and X-Men: The Last Stand (2006). It follows in the footsteps of the Scary Movie series of comedies (2000–present), with which the film's poster shares a resemblance, and was also inspired by, and contains homages to, some of Zucker, Abrahams and Zucker's earlier spoof films such as Airplane! and The Naked Gun (1988–1994).
Production began on September 17, 2007, in Los Angeles. It was released on March 28, 2008, in the United States to generally negative reviews from critics.
Plot
Rick Riker is an unpopular student at Empire High School. He lives with his Uncle Albert and Aunt Lucille, and his best friend, Trey. Rick has a crush on Jill Johnson, but she is dating bully Lance Landers. One day, Rick's class goes on a school field trip to an animal research lab run by terminally ill businessman Lou Landers, Lance's uncle. During the trip, Rick accidentally saturates himself in animal-attraction liquid, which causes a group of animals to flock to him, including a chemically enhanced radioactive dragonfly, which bites his neck.
Meanwhile, Lou creates a machine designed to heal illness. Testing it on himself, he gains perfect health at the cost of needing to drain life energy from a victim per day. To avoid arrest for murder, Lou becomes the villain Hourglass. During a science fair, Rick's body changes, which creates a number of mishaps. He later realizes he has developed superpowers from the dragonfly bite. Rick reveals his secret to his uncle and Trey, and an argument starts between him and Albert. The next day, while visiting the bank with Lucille, Rick accidentally allows a bank robber to make off with stolen cash. The robber then shoots and injures Albert.
Charles Xavier contacts Rick and introduces his school for mutants, where Mrs. Xavier tells him to make a costume to be a superhero. At home, Rick creates a superhero costume and dubs himself The Dragonfly. The Dragonfly starts watching over the city and fighting crime, quickly becoming a media sensation despite being unable to fly. Later, Dragonfly attempts to stop Hourglass from robbing a warehouse full of "ceryllium" as part of his evil plan but fails, allowing Hourglass to escape.
Later that night, Jill is attacked by thieves, but The Dragonfly saves her and they share a kiss. Meanwhile, Lou plans to construct a machine that will kill people and give him enough life energy to make him immortal. Later that night, Lou and Lance have dinner with Rick's family and Jill, but Lou secretly learns of Rick's true identity when he notices the same injuries on Rick as on The Dragonfly. Making up an awkward excuse, he and Lance leave. Lou returns minutes later as Hourglass and kills Aunt Lucille. Albert awakens from his coma and learns about her death from his moronic doctor. After her funeral, Jill meets Rick and offers to begin a relationship with him. However, Rick fears for her safety and rejects Jill, leaving her hurt and furious.
Rick decides to end his superhero career, but knowing that Hourglass will head to an awards ceremony to kill thousands of people, he gets Albert to take him there. At the ceremony, Lou tells Rick the Dalai Lama is Hourglass, prompting The Dragonfly to assault the Dalai Lama and cause chaos. Meanwhile, Jill discovers that Lou is Hourglass. When Hourglass clashes with Dragonfly on a rooftop, he activates his machine. Dragonfly manages to destroy both the machine and Hourglass with his own bomb.
The explosion throws Jill off the roof and The Dragonfly dives after her, eventually growing wings and flying. Jill learns that Rick is The Dragonfly when a family ring he wears is exposed through a hole in his glove, and the two begin a relationship. After being thanked for saving the city, Rick flies away with Jill, but the two are unexpectedly rammed by a passing helicopter.
Cast
Production
The film was initially slated for theatrical release on February 9, 2007, as Superhero! under the direction of David Zucker. It was delayed, and production eventually began on September 17, 2007, in Los Angeles, with the director's chair shifting to Craig Mazin and Zucker stepping back to producer. Though the film was produced in Los Angeles, the flyover scenes used as transitions in the film use footage of the business district in downtown Kansas City, Missouri.
Zucker said the film primarily parodied Spider-Man and Batman Begins, but also spoofed X-Men, Fantastic Four, and Superman. The producer elaborated, "It's a spoof of the whole superhero genre, but this one probably has more of a unified plot, like The Naked Gun had."
Release
Critical response
Superhero Movie received generally negative reviews from critics. On Rotten Tomatoes the film has an approval rating of 16% based on 51 reviews with an average rating of 3.80/10. The site's critical consensus reads, "Superhero Movie is not the worst of the spoof genre, but relies on tired gags and lame pop culture references all the same." On Metacritic, the film has a score of 33 out of 100 based on 14 critics, indicating "generally unfavorable reviews". Audiences polled by CinemaScore gave the film an average grade of "C+" on an A+ to F scale.
It was considered an improvement over Meet the Spartans.
LA Weekly's Luke Y. Thompson compared it to Mazin's first film, The Specials, saying it "is everything his first film wasn't: predictable, flat, full of name-dropping, tragically unhip, and likely to make a decent amount of cash."
Box office
On its opening weekend, the film grossed $9,510,297 in 2,960 theaters, averaging about $3,212 per venue, and ranked No. 3 at the box office. It grossed $26,638,520 in North America and $46,387,782 internationally for a total of $73,026,302 in worldwide box office receipts.
Home media
Superhero Movie was released on DVD on July 8, 2008, in both the PG-13-rated theatrical version (75 minutes) and the extended edition (81 minutes). The extended DVD features commentary by Zucker, Weiss, and Mazin, deleted scenes, and an alternate ending. There is also a Blockbuster-exclusive version, which combines the PG-13 cut with the bonus features of the unrated version and even more deleted scenes.
Audio commentary by writer/director Craig Mazin and producers David Zucker and Robert K. Weiss — Extended Version Only
Deleted scenes
Alternate ending
Meet the Cast featurette
The Art of Spoofing featurette
Theatrical trailer
The European (Region 2) DVD carries a 15 certificate and has all the features of the extended Region 1 version.
Music
Sara Paxton performed the song heard during the credits, titled "I Need A Hero", which she also wrote with Michael Jay and Johnny Pedersen.
Superhero! Song
Star of the film Drake Bell composed (along with Michael Corcoran) and recorded a song for the movie entitled "Superhero! Song" during the movie's post-production. Co-star Sara Paxton provided backup vocals. The song can be heard in the credits of the movie; however, it is credited there under the title "Superbounce". It originally appeared on Bell's Myspace Music page and was released on the iTunes Store as a digital downloadable single on April 8, 2008.
Parody targets
The film parodies the entire superhero genre; it is mainly a parody of Spider-Man and Batman Begins, but also spoofs the X-Men and the Fantastic Four.
The film also makes references to other films. For instance, when Rick Riker and Trey are on a bus and Trey points out the different cliques, the scene parodies the Mean Girls scene where Janis explains the cliques to Cady. One of the cliques is "Frodos" – kids dressed up as Hobbits looking similar to Frodo, The Lord of the Rings character.
The film also makes fun of certain celebrities and their real-life actions such as Tom Cruise's Scientology video and Barry Bonds' alleged use of steroids. It also makes fun of British scientist Stephen Hawking.
See also
Spoof film
References
External links
2008 films
2008 action comedy films
2000s American films
2000s English-language films
2000s parody films
2000s superhero comedy films
2000s teen comedy films
American action comedy films
American parody films
American slapstick comedy films
American superhero comedy films
American teen comedy films
Cultural depictions of Nelson Mandela
Cultural depictions of Stephen Hawking
Cultural depictions of the 14th Dalai Lama
Cultural depictions of Tom Cruise
Dimension Films films
English-language action comedy films
Films produced by David Zucker
Films produced by Robert K. Weiss
Films scored by James L. Venable
Films shot in Los Angeles
Films with screenplays by Craig Mazin
Metro-Goldwyn-Mayer films
Parodies of Spider-Man
Parodies of Superman
Parody superheroes
Teen Choice Award winning films
Teen superhero comedy films
The Weinstein Company films
X-Men in other media | Superhero Movie | [
"Astronomy"
] | 2,027 | [
"Cultural depictions of astronomers",
"Cultural depictions of Stephen Hawking"
] |
13,365,973 | https://en.wikipedia.org/wiki/Nathaniel%20Kleitman | Nathaniel Kleitman (April 26, 1895 – August 13, 1999) was an American physiologist and sleep researcher who served as Professor Emeritus in Physiology at the University of Chicago. He is recognized as the father of modern sleep research, and is the author of the seminal 1939 book Sleep and Wakefulness.
Biography
Early life
Nathaniel Kleitman was born in Chișinău, also known as Kishinev, the capital of the province of Bessarabia (now Moldova), in 1895 to a Jewish family. He was deeply interested in consciousness and reasoned that he could gain insight into consciousness by studying the unconsciousness of sleep. Pogroms drove him to Palestine, and in 1915 he emigrated to the United States as a result of World War I. At the age of twenty, he landed in New York City penniless; by 1923, at age twenty-eight, he had worked his way through City College of New York and earned a PhD from the University of Chicago's Department of Physiology. His thesis was "Studies on the physiology of sleep." Soon after, in 1925, he joined the faculty there. An early sponsor of Kleitman's sleep research was the Wander Company, which manufactured Ovaltine and hoped to promote it as a remedy for insomnia.
REM sleep
Eugene Aserinsky, one of Kleitman's graduate students, decided to hook sleepers up to an early version of an electroencephalogram machine, which scribbled its traces onto paper each night. In the process, Aserinsky noticed that several times each night the sleepers went through periods when their eyes darted wildly back and forth. Kleitman insisted that the experiment be repeated yet again, this time on his daughter, Esther. In 1953, he and Aserinsky introduced the world to "rapid eye movement," or REM sleep. Kleitman and Aserinsky demonstrated that REM sleep was correlated with dreaming and brain activity. Another of Kleitman's graduate students, William C. Dement, who was a professor of psychiatry at the Stanford medical school, described this as the year that "the study of sleep became a true scientific field."
Rest activity cycle
Kleitman made countless additional contributions to the field of sleep research and was especially interested in "rest-activity" cycles, leading to many fundamental findings on circadian and ultradian rhythms. Kleitman proposed the existence of a Basic rest activity cycle, or BRAC, during both sleep and wakefulness.
Other experiments
Renowned for his personal and experimental rigor, he conducted well-known sleep studies underground in Mammoth Cave, Kentucky and lesser-known studies underwater in submarines during World War II and above the Arctic Circle.
See also
Chronotype
References
External links
Guide to the Nathaniel Kleitman Papers 1896-2001 at the University of Chicago Special Collections Research Center
1895 births
1999 deaths
Scientists from Chișinău
People from Kishinyovsky Uyezd
Moldovan Jews
Emigrants from the Russian Empire to the Ottoman Empire
Emigrants from the Russian Empire to the United States
American people of Moldovan-Jewish descent
American physiologists
Sleep researchers
American men centenarians
Chronobiologists
University of Chicago alumni
University of Chicago faculty
Jewish neuroscientists
American neuroscientists
Jewish centenarians | Nathaniel Kleitman | [
"Biology"
] | 674 | [
"Sleep researchers",
"Behavior",
"Sleep"
] |
13,366,494 | https://en.wikipedia.org/wiki/List%20of%20marine%20ecoregions | The following is a list of marine ecoregions, as defined by the WWF and The Nature Conservancy.
The WWF/Nature Conservancy scheme groups the individual ecoregions into 12 marine realms, which represent the broad latitudinal divisions of polar, temperate, and tropical seas, with subdivisions based on ocean basins. The marine realms are subdivided into 62 marine provinces, which include one or more of the 232 marine ecoregions.
The WWF/Nature Conservancy scheme currently encompasses only coastal and continental shelf areas.
Arctic realm
(no provinces identified)
North Greenland
North and East Iceland
East Greenland Shelf
West Greenland Shelf
Northern Grand Banks-Southern Labrador
Northern Labrador
Baffin Bay-Davis Strait
Hudson Complex
Lancaster Sound
High Arctic Archipelago
Beaufort-Amundsen-Viscount Melville-Queen Maud
Beaufort Sea-continental coast and shelf
Chukchi Sea
Eastern Bering Sea
East Siberian Sea
Laptev Sea
Kara Sea
North and East Barents Sea
White Sea
Temperate Northern Atlantic
Northern European Seas
South and West Iceland
Faroe Plateau
Southern Norway
Northern Norway and Finnmark
Baltic Sea
North Sea
Celtic Seas
Lusitanian
South European Atlantic Shelf
Saharan Upwelling
Azores Canaries Madeira
Mediterranean Sea
Adriatic Sea
Aegean Sea
Levantine Sea
Tunisian Plateau/Gulf of Sidra
Ionian Sea
Western Mediterranean
Alboran Sea
Black Sea
Black Sea
Cold Temperate Northwest Atlantic
Gulf of St. Lawrence-Eastern Scotian Shelf
Southern Grand Banks-South Newfoundland
Scotian Shelf
Gulf of Maine-Bay of Fundy
Virginian
Warm Temperate Northwest Atlantic
Carolinian
Northern Gulf of Mexico
Temperate Northern Pacific
Cold Temperate Northwest Pacific
Sea of Okhotsk
Kamchatka Shelf and Coast
Oyashio Current
Northern Honshu
Sea of Japan
Yellow Sea
Warm Temperate Northwest Pacific
Central Kuroshio Current
East China Sea
Cold Temperate Northeast Pacific
Aleutian Islands
Gulf of Alaska
North American Pacific Fjordland
Puget Trough/Georgia Basin
Oregon, Washington, Vancouver Coast and Shelf
Northern California
Warm Temperate Northeast Pacific
Southern California Bight
Cortezian
Magdalena Transition
Tropical Atlantic
Tropical Northwestern Atlantic
Bermuda
Bahamian
Eastern Caribbean
Greater Antilles
Southwestern Caribbean
Western Caribbean
Southern Gulf of Mexico
Floridian
North Brazil Shelf
Guianian
Amazonia
Tropical Southwestern Atlantic
Sao Pedro and Sao Paulo Islands
Fernando de Noronha and Atol das Rocas
Northeastern Brazil
Eastern Brazil
Trindade and Martin Vaz Islands
St. Helena and Ascension Islands
St. Helena and Ascension Islands
West African Transition
Cape Verde
Sahelian Upwelling
Gulf of Guinea
Gulf of Guinea West
Gulf of Guinea Upwelling
Gulf of Guinea Central
Gulf of Guinea Islands
Gulf of Guinea South
Angolan
Western Indo-Pacific
Red Sea and Gulf of Aden
Northern and Central Red Sea
Southern Red Sea
Gulf of Aden
Somali/Arabian
Persian Gulf
Gulf of Oman
Western Arabian Sea
Central Somali Coast
Western Indian Ocean
Northern Monsoon Current Coast
East African Coral Coast
Seychelles
Cargados Carajos
Tromelin Island
Mascarene Islands
Southeast Madagascar
Western and Northern Madagascar
Bight of Sofala/Swamp Coast
Delagoa
West and South Indian Shelf
Western India
South India and Sri Lanka
Central Indian Ocean Islands
Maldives
Chagos
Bay of Bengal
Eastern India
Northern Bay of Bengal
Andaman Sea
Andaman and Nicobar Islands
Andaman Sea Coral Coast
Western Sumatra
Central Indo-Pacific
South China Sea
Gulf of Tonkin
Southern China
South China Sea Oceanic Islands
Sunda Shelf
Gulf of Thailand
Southern Vietnam
Sunda Shelf/Java Sea
Malacca Strait
Java Transitional
Southern Java
Cocos-Keeling/Christmas Island
South Kuroshio
South Kuroshio
Tropical Northwestern Pacific
Ogasawara Islands
Mariana Islands
East Caroline Islands
West Caroline Islands
Western Coral Triangle
Palawan/North Borneo
Eastern Philippines
Sulawesi Sea/Makassar Strait
Halmahera
Papua
Banda Sea
Lesser Sunda
Northeast Sulawesi
Eastern Coral Triangle
Bismarck Sea
Solomon Archipelago
Solomon Sea
Southeast Papua New Guinea
Sahul Shelf
Gulf of Papua
Arafura Sea
Arnhem Coast to Gulf of Carpentaria
Bonaparte Coast
Northeast Australian Shelf
Torres Strait and Northern Great Barrier Reef
Central and Southern Great Barrier Reef
Northwest Australian Shelf
Exmouth to Broome
Ningaloo
Tropical Southwestern Pacific
Tonga Islands
Fiji Islands
Vanuatu
New Caledonia
Coral Sea
Lord Howe and Norfolk Islands
Lord Howe and Norfolk Islands
Eastern Indo-Pacific
Hawaii
Hawaii
Marshall, Gilbert, and Ellice Islands
Marshall Islands
Gilbert and Ellice Islands
Central Polynesia
Line Islands
Phoenix Islands/Tokelau/Northern Cook Islands
Samoa Islands
Southeast Polynesia
Tuamotus
Rapa-Pitcairn
Southern Cook/Austral Islands
Society Islands
Marquesas
Marquesas
Easter Island
Easter Island
Tropical Eastern Pacific
Tropical East Pacific
Revillagigedos
Clipperton
Mexican Tropical Pacific
Chiapas-Nicaragua
Nicoya
Cocos Islands
Panama Bight
Guayaquil
Galapagos
Northern Galapagos Islands
Eastern Galapagos Islands
Western Galapagos Islands
Temperate South America
Warm Temperate Southeastern Pacific
Central Peru
Humboldtian
Central Chile
Araucanian
Juan Fernandez and Desventuradas
Juan Fernandez and Desventuradas
Warm Temperate Southwestern Atlantic
Southeastern Brazil
Rio Grande
Rio de la Plata
Uruguay-Buenos Aires Shelf
Magellanic
North Patagonian Gulfs
Patagonian Shelf
Falkland Islands
Channels and Fjords of Southern Chile
Chiloense
Tristan-Gough
Tristan-Gough
Temperate Southern Africa
Benguela
Namib
Namaqua
Agulhas
Agulhas Bank
Natal
Amsterdam-St Paul
Amsterdam-Saint-Paul
Temperate Australasia
Northern New Zealand
Kermadec Islands (195)
Northeastern New Zealand (196)
Three Kings-North Cape (197)
Southern New Zealand
Chatham Island (198)
Central New Zealand (199)
South New Zealand (200)
Snares Island (201)
East Central Australian Shelf
Tweed-Moreton (202)
Manning-Hawkesbury (203)
Southeast Australian Shelf
Cape Howe (204)
Bassian (205)
Western Bassian (206)
Southwest Australian Shelf
South Australian Gulfs (207)
Great Australian Bight (208)
Leeuwin (209)
Western Central Australian Shelf
Shark Bay (210)
Houtman (211)
Southern Ocean
Subantarctic Islands
Macquarie Island
Heard Island and McDonald Islands
Kerguelen Islands
Crozet Islands
Prince Edward Islands
Bouvet Island
Peter the First Island
Scotia Sea
South Sandwich Islands
South Georgia
South Orkney Islands
South Shetland Islands
Antarctic Peninsula
Continental High Antarctic
East Antarctic Wilkes Land
East Antarctic Enderby Land
East Antarctic Dronning Maud Land
Weddell Sea
Amundsen/Bellingshausen Sea
Ross Sea
Subantarctic New Zealand
Bounty and Antipodes Islands
Campbell Island
Auckland Island
See also
List of terrestrial ecoregions (WWF)
Notes
References
Spalding, Mark D., Helen E. Fox, Gerald R. Allen, Nick Davidson et al. "Marine Ecoregions of the World: A Bioregionalization of Coastal and Shelf Areas". Bioscience Vol. 57 No. 7, July/August 2007, pp. 573–583.
External links
World Wildlife Fund—WWF: Marine Ecoregions of the World (MEOW)
Queries listing ecoregions and provinces from Wikidata
Aquatic ecology
Biological oceanography | List of marine ecoregions | [
"Biology"
] | 1,396 | [
"Aquatic ecology",
"Ecosystems"
] |
13,368,871 | https://en.wikipedia.org/wiki/HIST%20Award%20for%20Outstanding%20Achievement%20in%20the%20History%20of%20Chemistry | The HIST Award for Outstanding Achievement in the History of Chemistry (2013–present) is given by the Division of the History of Chemistry of the American Chemical Society (ACS). The award was originally known as the Dexter Award (1956–2001) and then briefly as the Sidney M. Edelstein Award (2002–2009), both given by the ACS.
The Dexter Award was originally established by Sidney Milton Edelstein, a founder of the Dexter Chemical Corporation, to recognize an "outstanding career of contributions to the history of chemistry". As the Dexter Award, it was sponsored by the Dexter Corporation except for its final two years, when it was sponsored by the Mildred and Sidney Edelstein Foundation.
The award was briefly known as the Sidney M. Edelstein Award from 2002 to 2009, but was still given by the ACS. As such, the Sidney M. Edelstein Award should be distinguished from the Sidney Edelstein Prize (1968–present), which has been given continuously since 1968 by the Society for the History of Technology to recognize "an outstanding scholarly book in the history of technology."
Recipients
HIST Award (2013–present)
2024 James L. Marshall and Virginia R. Marshall
2023 Geoffrey Rayner-Canham and Marelene Rayner-Canham
2022 Marco Beretta
2021 Mary Virginia Orna
2020 Lawrence M. Principe
2019 Otto Theodor Benfey
2018 David E. Lewis
2017 Jeffrey I. Seeman
2016 Ursula Klein
2015 Christoph Meinel
2014 Ernst Homburg
2013 William R. Newman
2012 No Award
2011 No Award
Sidney M. Edelstein Award (2002–2009)
2009 Trevor Harvey Levere
2008 John Shipley Rowlinson
2007 Anthony S. Travis
2006 Peter J. T. Morris (Peter John Turnbull Morris)
2005 William B. Jensen
2004 Joseph B. Lambert
2003 David M. Knight
2002 John Parascandola
Dexter Award (1956–2001)
2001 William Arthur Smeaton
2000
1999 Mary Jo Nye
1998
1997 Bernadette Bensaude-Vincent
1996 Keith J. Laidler
1995 William H. Brock
1994 Frederic L. Holmes
1993 Joseph S. Fruton
1992 John T. Stock
1991 Owen Hannaway
1990 Colin A. Russell
1989 D. Stanley Tarbell
1988 (Lutz F. Haber)
1987 Allen G. Debus
1986 Robert G. W. Anderson
1985 Robert Multhauf
1984
1983 Arnold Thackray
1982 John H. Wotiz
1981 Cyril Stanley Smith
1980
1979 Joseph Needham
1978 George B. Kauffman
1977
1976 Trevor Illtyd Williams
1975 (Johannes Willem van Spronsen)
1974 No Award
1973 Bernard Jaffe
1972 Henry Guerlac
1971
1970 Ferenc Szabadváry
1969 Walter Pagel
1968 Aaron J. Ihde
1967 Mary Elvira Weeks
1966 Earle R. Caley
1965
1964 Eduard Farber
1963 Douglas McKie
1962 Henry M. Leicester
1961 James R. Partington
1960
1959 John Read
1958 Eva Armstrong
1957 Williams Haynes
1956 Ralph E. Oesper
See also
List of chemistry awards
List of history awards
References
Chemistry awards
American history awards
History of science awards
Awards established in 1956
1956 establishments in the United States | HIST Award for Outstanding Achievement in the History of Chemistry | [
"Technology"
] | 644 | [
"Science and technology awards",
"Chemistry awards",
"History of science awards"
] |
13,369,267 | https://en.wikipedia.org/wiki/SOLAR%20%28ISS%29 | SOLAR was an ESA science observatory on the Columbus Laboratory, which is part of the International Space Station. SOLAR was launched with Columbus in February 2008 aboard STS-122. It was externally mounted to Columbus with the European Technology Exposure Facility (EuTEF). SOLAR has three main space science instruments: SOVIM, SOLSPEC and SOL-ACES. Together they provide detailed measurements of the Sun's spectral irradiance. The SOLAR platform and its instruments are controlled from the Belgian User Support and Operations Centre (B.USOC), located at the Belgian Institute for Space Aeronomy (BISA) in Uccle, Belgium.
Instruments
The SOVIM (Solar Variability and Irradiance Monitor) instrument is based on an earlier instrument (SOVA) which flew aboard the European Retrievable Carrier, launched on STS-46 in 1992. It is designed to measure solar radiation at wavelengths from 200 nanometers to 100 micrometers. This covers the near-ultraviolet, visible and infrared areas of the spectrum.
SOLSPEC (Solar Spectral irradiance measurements) is designed to measure the solar spectral irradiance in the 165- to 3000-nanometer range with high spectral resolution.
SOL-ACES (Auto-Calibrating Extreme Ultraviolet and Ultraviolet Spectrometers) consists of four grazing-incidence grating spectrometers. They are designed to measure the EUV/UV spectral regime (17 to 220 nanometers) with moderate spectral resolution.
Mission
The mission was originally planned for a 2003 launch, but was delayed following the Space Shuttle Columbia disaster. Some other components are also planned to be mounted externally on Columbus on future missions, including the Atomic Clock Ensemble in Space (ACES). Another name for SOLAR may be Solar Monitoring Observatory or SMO.
In 2012, the entire 450-tonne station was rotated so SOLAR could observe a full rotation of the Sun continuously. A solar rotation takes about 24–28 days depending on the latitude.
SOLAR's mission ended in 2017 with the failure of all but one of its instruments. On the morning of January 28, 2020, SOLAR was removed from FRAM 1, where it had rested since its delivery on STS-122, and strapped to the side of Cygnus NG-12, with the SDS placed on the other side. SOLAR was released from the station on February 3, 2020, and burnt up in the atmosphere over the Pacific Ocean on March 13, 2020, ending a mission that spent a decade observing the Sun.
See also
Scientific research on the ISS
Spectroscopy
References
External links
SOLAR at eoPortal
SOLAR at ESA
SOLSPEC homepage (includes photo gallery)
Science facilities on the International Space Station
Columbus (ISS module)
Missions to the Sun
Crewed space observatories
Solar observatories
Ultraviolet telescopes | SOLAR (ISS) | [
"Astronomy"
] | 565 | [
"Space telescopes",
"Crewed space observatories"
] |
13,369,887 | https://en.wikipedia.org/wiki/Translate%20Toolkit | The Translate Toolkit is a localization and translation toolkit. It provides a set of tools for working with localization file formats and files that might need localization. The toolkit also provides an API on which to develop other localization tools.
The toolkit is written in the Python programming language. It is free software originally developed and released by Translate.org.za in 2002 and is now maintained by Translate.org.za and community developers.
Translate Toolkit uses Enchant as its spellchecker.
History
The toolkit was originally developed as the mozpotools by David Fraser for Translate.org.za. Translate.org.za had focused on translating KDE which used Gettext PO files for localization. With an internal change to focus on end-user, cross-platform, OSS software, the organisation decided to localize the Mozilla Suite. This required using new tools and new formats that were not as rich as Gettext PO. Thus mozpotools was created to convert the Mozilla DTD and .properties files to Gettext PO.
Various tools were developed as needed, including pocount, a tool to count source-text words to allow correct estimation of work; pogrep, to search through translations; and pofilter, to check for various quality issues.
When Translate.org.za began translating OpenOffice.org it was only natural that the Translate Toolkit would be adapted to handle the OpenOffice.org internal file format. Translating OpenOffice.org using PO files is now the default translation method.
As part of the WordForge project the work received a major boost and the toolkit was further extended to manage XLIFF files alongside PO files. Further funded development has added other features including the ability to convert Open Document Format to XLIFF and the management of placeholders (Variables, acronyms, terminology, etc.).
Design goals
The main aim of the toolkit is to increase the quality of localisation and translation. This is achieved, firstly, by focusing on good localisation formats: the toolkit makes use of the PO and XLIFF localisation formats. This has the benefit of stopping the proliferation of localization formats and allowing localizers to work with one good localization tool. For the toolkit this means building converters that can transform files to be translated into these two basic formats.
Secondly, by building tools that allow localizers to increase the general quality of their localization. These tools allow for the extraction of terminology, for checking that terminology is used consistently, and for checking for various technical errors such as the correct use of variables.
Lastly, the toolkit provides a powerful localisation API that acts as a base on which to build other localization related tools.
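As a brief illustration of that API, the sketch below loads a file through the toolkit's storage layer and lists untranslated units. It is a minimal, hedged example rather than authoritative documentation: the entry point translate.storage.factory.getobject and the unit methods reflect the toolkit's documented storage layer but may vary between versions, and "messages.po" is a placeholder filename.

```python
# Minimal sketch of the Translate Toolkit storage API (version-dependent;
# "messages.po" is a hypothetical file).
from translate.storage import factory

store = factory.getobject("messages.po")  # file format detected automatically
for unit in store.units:
    if unit.isheader():        # skip the PO header entry
        continue
    if not unit.istranslated():
        print("untranslated:", unit.source)
```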
Users
Many translators use the toolkit directly, to do quality checks and to transform files for translation. Further there are and have been several indirect users of the Translate Toolkit API:
Pootle - an online translation tool
open-tran - providing translation memory lookup (shut down on January 31, 2014)
Wordforge (old name Pootling) - an offline translation tool for Windows and Linux
Rosetta - free translation web service offered by LaunchPad. It is used mainly by the Ubuntu community translation tool. See it in action in Launchpad Translations
LibreOffice/Apache OpenOffice - most community localization is done through PO files produced by the toolkit
Virtaal - a localisation and translation tool
translatewiki.net (now discontinued due to new terms)
Weblate - web based translation tool with tight Git integration
Supported document formats
Primary Localization Formats
Gettext PO
XLIFF (Normal and PO representations)
Other Localization Related Formats
TBX
Java .properties
Qt .ts, .qm and .qph (Qt Phrase Book)
Gettext .mo
OmegaT glossaries
Haiku catkeys files
Other Formats
OpenDocument Format
Plain Text
Wiki: DokuWiki and MediaWiki syntax
Mozilla DTD
OpenOffice.org SDF
PHP strings
.NET Resource files (.resx)
OS X strings
Adobe Flex files
INI file
Windows / Wine .rc files
iCalendar
Symbian localization files
Subtitles
Translation Memory Formats
TMX
Wordfast TM
OpenDocument Format support
Work was started in June 2008 to incorporate OpenDocument Format support. This work is funded by the NLnet Foundation and is a collaboration between Translate.org.za and Itaapy.
See also
Computer-assisted Translation
Translation Memory
References
External links
Translate Toolkit home page
Supported document formats
Python package index
Software-localization tools
Free software programmed in Python
Internationalization and localization
Computer-assisted translation software for Linux | Translate Toolkit | [
"Technology"
] | 964 | [
"Natural language and computing",
"Internationalization and localization"
] |
13,370,236 | https://en.wikipedia.org/wiki/NORBIT | In electronics, the NORBIT family of modules is a very early form (dating from 1960) of digital logic developed by Philips (and also provided through Valvo and Mullard) that uses modules containing discrete components to build logic function blocks in resistor–transistor logic (RTL) or diode–transistor logic (DTL) technology.
Overview
The system was originally conceived as building blocks for solid-state hard-wired programmed logic controllers (the predecessors of programmable logic controllers (PLC)) to replace electro-mechanical relay logic in industrial control systems for process control and automation applications, similar to early Telefunken/AEG Logistat, Siemens Simatic, Brown, Boveri & Cie, ACEC Logacec or Estacord systems.
Each available logical function was recognizable by the color of its plastic container: black, blue, red, green, violet, and so on. The most important circuit block contained a NOR gate (hence the name), but there were also blocks containing drivers, and a timer circuit similar to the later 555 timer IC.
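A NOR block is a natural choice of universal building block because NOR is functionally complete: NOT, OR, and AND (and hence any combinational function) can be wired from NOR gates alone. The following Python sketch is purely illustrative (it is not from the source and carries no NORBIT-specific detail):

```python
# Illustrative only: composing basic gates from NOR, the way hard-wired
# NORBIT controllers composed logic from NOR modules.
def nor(*inputs):
    return not any(inputs)

def not_(a):
    return nor(a)               # a one-input NOR is an inverter

def or_(a, b):
    return nor(nor(a, b))       # OR is an inverted NOR

def and_(a, b):
    return nor(nor(a), nor(b))  # De Morgan: AND from inverted inputs

assert and_(True, True) and not and_(True, False)
assert or_(True, False) and not or_(False, False)
```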
The original Norbit modules of the YL 6000 series introduced in 1960 had potted single in-line packages with up to ten long flying leads arranged in two groups of up to five leads in a row. These modules were specified for frequencies of less than 1 kHz at ±24 V supply.
Also available in 1960 were so-called Combi-Element modules in single in-line packages with ten evenly spaced stiff leads in a row (5.08 mm / 0.2-inch pitch) for mounting on a PCB. They were grouped in the 1-series (aka "100 kHz series") with a ±6 V supply. The newer 10-series and 20-series had similarly sized packages, but came with an additional parallel row of nine staggered leads for a total of 19 leads. The 10-series uses germanium alloy transistors, whereas the 20-series uses silicon planar transistors for a higher cut-off frequency of up to 1 MHz (vs. 30 kHz) and a higher allowed temperature range of up to +85 °C (vs. +55 °C).
In 1967, the Philips/Mullard NORBIT 2 aka Valvo NORBIT-S family of modules was introduced, first consisting of the 60-series for frequencies up to 10 kHz at a single supply voltage of 24 V only. Later, the 61-series, containing thyristor trigger and control modules, was added. A 90-series became available in the mid-1970s as well. There were three basic types contained in a large (one-by-two-inch) 17-pin dual in-line package, with nine pins spaced at 5.08 mm (0.2 inch) on one side and eight staggered pins on the other side.
Modules
Original Norbit family
YL 6000 series
YL6000 - NOR gate (red) ("NOR")
YL6001 - Emitter follower (yellow) ("EF")
YL6004 - High power output (Double-sized module) ("HP")
YL6005, YL6005/00 - Counter unit (triple binary) ("3C") (violet)
YL6005/05 - Single divide by 2 counter (violet) ("1C")
YL6006 - Timer (brown) ("TU")
YL6007 - Chassis ("CU")
YL6008 - Medium power output (orange) ("MP")
YL6009 - Low power output (white) ("LP")
YL6010 - Photo-electric detector head ("PD")
YL6011 - Photo-electric lamp head ("PL")
YL6012 - Twin 2-input NOR gate (black) ("2.2 NOR")
YL 6100 series
YL6101 - Rectifier unit, 3…39V 1A
YL6102 - Rectifier unit, 3…39V 5A
YL6103/00 - Regulator unit, 6…30V 250mA
YL6103/01 - Regulator unit, 1…6V 250mA
YL6104 - Longitudinal link for regulator unit
YL6105 - Regulator unit, 6V 150mA
88930 Relay series
Used to control relays using variable-length pulse sequences (as with telephone pulse dialing).
88930/30 - Input/Output unit. Filters an input pulse string and can drive two command circuits and two relay units. Contains 1×/48, 2×/51, and 2×/57.
88930/33 - Primary pulse counting unit (dual command). Can trigger two different signals via two different pulse sequences. The number of pulses that will trigger each command is configurable.
88930/36 - Dual command unit. Adds two additional commands to the /33.
88930/37 - Quad command unit. Adds four additional commands to the /33.
88930/39 - Output unit. Can drive two command circuits (in /36 or /37 command units) plus two /60 relay units. Contains 2×/51 and 2×/57.
88930/42 - Empty unit. For adding custom circuitry. Comprises an empty housing, connector, and blank circuit board.
88930/48 - Pulse shaper unit for /33 (no housing)
88930/51 - Command preparation unit (no housing). For providing input to command units.
88930/54 - Reset unit
88930/57 - Relay amplifier unit (no housing). For driving a low-impedance relay such as the /60 relay block unit.
88930/60 - Relay block unit. Double-pole, double-throw 250 V 2 A relay. Accepts a /57 relay amplifier unit.
88930/64 - Power supply unit. Provides 280 V 45 mA, 150 V 2 mA, 24 V 750 mA, and 15 V 120 mA.
Combi-Element family
1-series / B890000 series
B893000, B164903 - Twin 3-input AND gates (orange) ("2.3A1", "2x3N1")
B893001, B164904 - Twin 2-input AND gates (orange) ("2.2A1", "2x2N1")
B893002, 2P72729 - Twin 3-input OR gates (orange) ("2.3O1", "23O1", "2x3P1")
B893003, 2P72730 - Twin 2-input OR gates (orange) ("2.2O1", "22O1", "2x2P1")
B894002, B164910 - Twin inverter amplifier (yellow) ("2IA1", "2.IA1", "2xIA1")
B894005, 2P72728 - Twin inverter amplifier (yellow) ("2IA2", "2xIA2")
B894001, B164909 - Twin emitter follower (yellow) ("2EF1", 2xEF1")
B894003, 2P72727 - Twin emitter follower (yellow) ("2EF2", "2xEF2")
B894000, B164907 - Emitter follower/inverter amplifier (yellow) ("EF1/IA1")
B895000, B164901 - Pulse shaper (Schmitt trigger + amplifier) (green) ("PS1")
B895001, B164908 - One-shot multivibrator ("OS1")
B895003 - One-shot multivibrator ("OS2")
B892000, B164902 - Flip-flop (red) ("FF1")
B892001, 2P72707 - Shift-register Flip-flop (red) ("FF2")
B892002 - Flip-flop (red) ("FF3")
B892003 - Flip-flop (red) ("FF4")
B893004, 2P72726 - Pulse logic (orange) ("PL1", "2xPL1")
B893007 - Pulse logic (orange) ("2xPL2")
B885000, B164911 - Decade counter ("DC1")
B890000 - Power amplifier ("PA1")
B896000 - Twin selector switch for core memories ("2SS1")
B893005 - Selection gate for core memories ("SG1")
2P72732 - Pulse generator for core memories ("PG1")
2P72731 - Read amplifier for core memories ("RA1")
10-series
2P73701 - Flip-flop ("FF10")
2P73702 - Flip-flop ("FF11")
2P73703 - Flip-flop / Bistable multivibrator with built-in trigger gates and set-reset inputs (black) ("FF12")
Dual trigger gate ("2.TG13")
Dual trigger gate ("2.TG14")
Quadruple trigger gate ("4.TG15")
Dual positive gate inverter amplifier ("2.GI10")
Dual positive gate inverter amplifier ("2.GI11")
Dual positive gate inverter amplifier ("2.GI12")
Gate amplifier ("GA11")
One-shot multivibrator ("OS11")
Timer unit ("TU10")
Pulse driver ("PD11")
Relay driver ("RD10")
Relay driver ("RD11")
Power amplifier ("PA10")
Pulse shaper ("PS10")
Numerical indicator tube driver ("ID10")
20-series
2P73710 - ("2.GI12", "2GI12")
Norbit 2 / Norbit-S family
60-series
2NOR60, 2.NOR60 - Twin NOR (black)
4NOR60, 4.NOR60 - Quadruple NOR (black)
2.IA60, 2IA60 - Twin inverter amplifier for low power output (blue)
LPA60 - Twin low power output
2.LPA60, 2LPA60 - Twin low power output (blue)
PA60 - Medium power output (blue)
HPA60 - High power output (black)
2.SF60, 2SF60 - Twin input switch filter (green)
TU60 - Timer (red)
FF60 - Flip-flop
GLD60 - Grounded load driver (black)
61-series
TT61 - Trigger transformer
UPA61 - Universal power amplifier
RSA61 - Rectifier and synchroniser
DOA61 - Differential operational amplifier
2NOR61, 2.NOR61 - Twin NOR
90-series
PS90 - Pulse shaper (green)
FF90 - Flip-flop (red)
2TG90, 2.TG90 - Twin trigger gate (red)
Accessories
PSU61 - Power supply
PCB60 - Printed wiring board
MC60 - Mounting chassis
UMC60 - Universal mounting chassis
MB60 - Mounting bar
Photo gallery
See also
Logic family
fischertechnik
Notes
References
Further reading
(43 pages) (NB. Also part of the Valvo-Handbuch 1962 pages 83–125.)
(253 pages) (NB. Contains a chapter about Norbit modules as well.)
(25 pages)
External links
Logic families
Digital electronics
Solid state engineering
Industrial automation
Control engineering | NORBIT | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,514 | [
"Digital electronics",
"Industrial engineering",
"Automation",
"Electronic engineering",
"Control engineering",
"Condensed matter physics",
"Industrial automation",
"Solid state engineering"
] |
13,371,175 | https://en.wikipedia.org/wiki/Phenomorphan | Phenomorphan is an opioid analgesic. It is not currently used in medicine, but has similar side-effects to other opiates, which include itching, nausea and respiratory depression.
Phenomorphan is a highly potent drug due to its N-phenethyl group, which boosts affinity for the μ-opioid receptor; as a result, phenomorphan is around 10x more potent than levorphanol, which is itself 6-8x the potency of morphine. Other analogues, in which the N-(2-phenylethyl) group has been replaced by other aromatic rings, are even more potent, with the N-(2-(2-furyl)ethyl) and N-(2-(2-thienyl)ethyl) analogues being 60x and 45x stronger than levorphanol, respectively.
See also
14-Cinnamoyloxycodeinone
14-Phenylpropoxymetopon
7-PET
N-Phenethylnormorphine
N-Phenethylnordesomorphine
N-Phenethyl-14-ethoxymetopon
RAM-378
Ro4-1539
References
Synthetic opioids
Morphinans
Hydroxyarenes
Mu-opioid receptor agonists | Phenomorphan | [
"Chemistry"
] | 288 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
13,371,195 | https://en.wikipedia.org/wiki/ANOVA%E2%80%93simultaneous%20component%20analysis | In computational biology and bioinformatics, analysis of variance – simultaneous component analysis (ASCA or ANOVA–SCA) is a method that partitions variation and enables interpretation of these partitions by SCA, a method that is similar to principal component analysis (PCA). Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures used to analyze differences among group means. Simultaneous component analysis (SCA) is a component model, closely related to PCA, that is fitted to several data partitions at the same time.
This method is a multivariate, or even megavariate, extension of analysis of variance (ANOVA). The variation partitioning is similar to ANOVA: each partition matches all variation induced by an effect or factor, usually a treatment regime or experimental condition. The calculated effect partitions are called effect estimates. Because the effect estimates are themselves multivariate, their interpretation is not intuitive. By applying SCA to the effect estimates one gets a simple, interpretable result (Vis DJ, Westerhuis JA, Smilde AK, van der Greef J (2007). "Statistical validation of megavariate effects in ASCA". BMC Bioinformatics 8:322). In the case of more than one effect, this method estimates the effects in such a way that the different effects are not correlated.
Details
Many research areas see increasingly large numbers of variables measured in only a few samples. This low sample-to-variable ratio creates problems known as multicollinearity and singularity, because of which most traditional multivariate statistical methods cannot be applied.
ASCA algorithm
This section details how to calculate the ASCA model for a case of two main effects with one interaction effect. The rationale described here extends easily to more main effects and more interaction effects. Suppose the first effect is time and the second effect is dosage; the only possible interaction is then the one between time and dosage. We assume there are four time points and three dosage levels.
Let X be a matrix that holds the data. X is mean centered, thus having zero mean columns. Let A and B denote the main effects and AB the interaction of these effects. Two main effects in a biological experiment can be time (A) and pH (B), and these two effects may interact. In designing such experiments one controls the main effects to several (at least two) levels. The different levels of an effect can be referred to as A1, A2, A3 and A4, representing 2, 3, 4, 5 hours from the start of the experiment. The same thing holds for effect B, for example, pH 6, pH 7 and pH 8 can be considered effect levels.
A and B are required to be balanced if the effect estimates need to be orthogonal and the partitioning unique. Matrix E holds the information that is not assigned to any effect. The partitioning gives the following notation:
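In standard ASCA notation, with $X_A$, $X_B$, and $X_{AB}$ denoting the effect-estimate matrices for A, B, and their interaction:

$$X = X_A + X_B + X_{AB} + E$$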
Calculating main effect estimate A (or B)
Find all rows that correspond to effect A level 1 and average these rows. The result is a vector. Repeat this for the other effect levels. Make a new matrix of the same size as X and place the calculated averages in the matching rows; that is, all rows that match effect A level 1 receive the average of effect A level 1.
After completing the level estimates for the effect, perform an SCA. The scores of this SCA are the sample deviations for the effect, the important variables of this effect are in the weights of the SCA loading vector.
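A minimal NumPy sketch of this level-averaging step and the subsequent SCA (computed here via singular value decomposition); the data, design vector, and function names are hypothetical illustrations, not code from any ASCA package:

```python
import numpy as np

def effect_estimate(X, levels):
    """Main-effect estimate by level averaging: each row of the result
    holds the mean of all rows of X sharing that row's effect level."""
    X_eff = np.zeros_like(X, dtype=float)
    for lvl in np.unique(levels):
        rows = levels == lvl
        X_eff[rows] = X[rows].mean(axis=0)
    return X_eff

def sca(X_eff, n_components=1):
    """SCA of an effect estimate via SVD (mathematically identical to PCA
    on the already-centered effect matrix): returns scores T, loadings P."""
    U, s, Vt = np.linalg.svd(X_eff, full_matrices=False)
    T = U[:, :n_components] * s[:n_components]  # component scores
    P = Vt[:n_components].T                     # component loadings
    return T, P

# Hypothetical balanced design: 12 samples, 5 variables,
# effect A = four time points with three samples each.
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 5))
X -= X.mean(axis=0)                   # column mean centering
A_levels = np.repeat([1, 2, 3, 4], 3)
X_A = effect_estimate(X, A_levels)
T_A, P_A = sca(X_A)                   # scores = sample deviations for effect A
```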
Calculating interaction effect estimate AB
Estimating the interaction effect is similar to estimating main effects. The difference is that for interaction estimates the rows that match effect A level 1 are combined with those matching effect B level 1, and all combinations of effects and levels are cycled through. In our example setting, with four time points and three dosage levels, there are 12 interaction sets {A1B1, A1B2, A2B1, A2B2, and so on}. It is important to deflate (remove) the main effects before estimating the interaction effect.
SCA on partitions A, B and AB
Simultaneous component analysis is mathematically identical to PCA, but is semantically different in that it models different objects or subjects at the same time.
The standard notation for a SCA – and PCA – model is:
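$$X = T P^{T} + E$$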
where X is the data, T are the component scores and P are the component loadings. E is the residual or error matrix. Because ASCA models the variation partitions by SCA, the model for effect estimates looks like this:
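For effect A (and analogously for B and AB):

$$X_A = T_A P_A^{T} + E_A$$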
Note that every partition has its own error matrix. However, algebra dictates that in a balanced, mean-centered data set every two-level system is of rank 1. This results in zero errors, since any rank-1 matrix can be written as the product of a single component score and loading vector.
The full ASCA model with two effects and interaction including the SCA looks like this:
Decomposition:
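Substituting the SCA models of the partitions into the partitioning above (with the residual partition itself modeled by SCA, following standard ASCA notation):

$$X = T_A P_A^{T} + T_B P_B^{T} + T_{AB} P_{AB}^{T} + T_E P_E^{T} + E$$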
Time as an effect
Because 'time' is treated as a qualitative factor in the ANOVA decomposition preceding ASCA, a nonlinear multivariate time trajectory can be modeled.
References
Analysis of variance
Bioinformatics | ANOVA–simultaneous component analysis | [
"Engineering",
"Biology"
] | 1,095 | [
"Bioinformatics",
"Biological engineering"
] |
13,371,727 | https://en.wikipedia.org/wiki/Physics%20of%20Fluids | Physics of Fluids is a monthly peer-reviewed scientific journal covering fluid dynamics, established by the American Institute of Physics in 1958, and is published by AIP Publishing. The journal focus is the dynamics of gases, liquids, and complex or multiphase fluids—and the journal contains original research resulting from theoretical, computational, and experimental studies.
History
From 1958 through 1988, the journal included plasma physics. From 1989 until 1993, the journal split into Physics of Fluids A covering fluid dynamics, and Physics of Fluids B, on plasma physics. In 1994, the latter was renamed Physics of Plasmas, and the former continued under its original name, Physics of Fluids.
The journal was originally published by the American Institute of Physics in cooperation with the American Physical Society's Division of Fluid Dynamics. In 2016, the American Institute of Physics became the sole publisher. From 1985 to 2015, Physics of Fluids published the Gallery of Fluid Motion, containing award-winning photographs, images, and visual streaming media of fluid flow.
With funding from the American Institute of Physics, the annual "François Naftali Frenkiel Award" was established by the American Physical Society in 1984 to reward a young scientist who published a paper containing significant contributions to fluid dynamics during the previous year. The award-winning paper was chosen from Physics of Fluids until 2016, but is presently chosen from Physical Review Fluids. Similarly, the invited papers from plenary talks at the annual meeting of the American Physical Society Division of Fluid Dynamics were formerly published in Physics of Fluids but, since 2016, appear in either this journal or Physical Review Fluids.
Reception
Physics of Fluids A, Physics of Fluids B, and Physics of Fluids were ranked 3, 4, and 6, respectively, based on their citation impact from 1981 to 2004 within the category of journals on the physics of fluids and plasmas. According to the Journal Citation Reports, the journal has a 2023 impact factor of 4.1.
Editors-in-chief
The following persons are or have been editors-in-chief:
1958–1981: François Naftali Frenkiel
1982–1997: Andreas Acrivos
1998–2015: John Kim, L. Gary Leal
2016–present: Alan Jeffrey Giacomin
See also
List of fluid mechanics journals
References
Further reading
External links
Physics of Fluids Gallery of fluid motion
Fluid dynamics journals
Academic journals established in 1958
Monthly journals
English-language journals
American Institute of Physics academic journals | Physics of Fluids | [
"Chemistry"
] | 485 | [
"Fluid dynamics journals",
"Fluid dynamics"
] |
13,371,925 | https://en.wikipedia.org/wiki/Instant | In physics and the philosophy of science, instant refers to an infinitesimal interval in time, whose passage is instantaneous. In ordinary speech, an instant has been defined as "a point or very short space of time," a notion deriving from its etymological source, the Latin verb instare, from in- + stare ('to stand'), meaning 'to stand upon or near.'
The continuous nature of time and its infinite divisibility was addressed by Aristotle in his Physics, where he wrote on Zeno's paradoxes. The philosopher and mathematician Bertrand Russell was still seeking to define the exact nature of an instant thousands of years later. In 2024, John William Stafford used algorithms to demonstrate that a time difference of zero could theoretically continue to expand (in various ways) to infinity, and subsequently described a new concept that he referred to as the instantaneous. He concluded that the instantaneous is, with respect to the measurement of time, mutually exclusive. In addition, he proposed a theoretical model of multiple universes that exist within the context of the instantaneous.
The smallest time interval certified in regulated measurements is on the order of 397 zeptoseconds (397 × 10−21 seconds).
18th and 19th century usage
Instant (usually abbreviated in print to inst.) can be used to indicate "Of the current month". For example, "the 11th inst." means the 11th day of the current month, whether that date is in the past, or the future, from the date of publication.
See also
Infinitesimal
Planck time
Present
References
Time | Instant | [
"Physics",
"Mathematics"
] | 330 | [
"Physical quantities",
"Time",
"Time stubs",
"Quantity",
"Spacetime",
"Wikipedia categories named after physical quantities"
] |
13,372,056 | https://en.wikipedia.org/wiki/Nonivamide | Nonivamide, also called pelargonic acid vanillylamide or PAVA, is an organic compound and a capsaicinoid. It is an amide of pelargonic acid (n-nonanoic acid) and vanillyl amine. It is present in chili peppers, but is commonly manufactured synthetically. It is more heat-stable than capsaicin.
Nonivamide is used as a food additive to add pungency to seasonings, flavorings, and spice blends. It is also used in the confectionery industry to create a hot sensation, and in the pharmaceutical industry in some formulations as a cheaper alternative to capsaicin.
Like capsaicin, it can deter mammals (but not birds or insects) from consuming plants or seeds (e.g. squirrels and bird feeder seeds). This is consistent with nonivamide's role as a TRPV1 ion channel agonist. Mammalian TRPV1 is activated by heat and capsaicin, but the avian form is insensitive to capsaicin.
Nonivamide is used (under the name PAVA) as the payload in "less-lethal munitions" such as the FN Herstal's FN 303 projectiles or as the active ingredient in most pepper sprays, which may be used as a chemical weapon. As a chemical irritant, pepper sprays have been used both as a riot control munition and also a weapon to disperse peaceful demonstrators; they have also been used in other contexts, such as military or police training exercises. While irritants commonly cause only "transient lacrimation, blepharospasm, superficial pain, and disorientation," their use and misuse also presents serious risks of more severe injury and disability.
Treatment
Nonivamide is not soluble in water; however, water will dilute it and wash it away. One study found that milk of magnesia, baby shampoo, 2% lidocaine gel, and milk did not perform significantly better than water when used against pepper spray.
See also
PAVA spray
Phenylacetylrinvanil
References
Riot control agents
Lachrymatory agents
Capsaicinoids
Phenol ethers | Nonivamide | [
"Chemistry"
] | 466 | [
"Riot control agents",
"Lachrymatory agents",
"Chemical weapons"
] |
13,372,642 | https://en.wikipedia.org/wiki/Toroidal%20ring%20model | The toroidal ring model, known originally as the Parson magneton or magnetic electron, is a physical model of subatomic particles. It is also known as the plasmoid ring, vortex ring, or helicon ring. This physical model treated electrons and protons as elementary particles, and was first proposed by Alfred Lauck Parson in 1915.
Theory
Instead of a single orbiting charge, the toroidal ring was conceived as a collection of infinitesimal charge elements, which orbited or circulated along a common continuous path or "loop". In general, this path of charge could assume any shape, but tended toward a circular form due to internal repulsive electromagnetic forces. In this configuration the charge elements circulated, but the ring as a whole did not radiate due to changes in electric or magnetic fields since it remained stationary. The ring produced an overall magnetic field ("spin") due to the current of the moving charge elements. These elements circulated around the ring at the speed of light c, but at frequency ν = c/2πR, which depended inversely on the radius R. The ring's inertial energy increased when compressed, like a spring, and was also inversely proportional to its radius, and therefore proportional to its frequency ν. The theory claimed that the proportionality constant was the Planck constant h, the conserved angular momentum of the ring.
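In symbols, restating the relations just described (not Parson's original notation):

$$\nu = \frac{c}{2\pi R}, \qquad E = h\nu = \frac{hc}{2\pi R},$$

so both the circulation frequency and the inertial energy scale inversely with the ring radius $R$, with the Planck constant $h$ as the proportionality constant.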
According to the model, electrons or protons could be viewed as bundles of "fibers" or "plasmoids" with total charge ±e. The electrostatic repulsion force between charge elements of the same sign was balanced by the magnetic attraction force between the parallel currents in the fibers of a bundle, per Ampère's law. These fibers twisted around the torus of the ring as they progressed around its radius, forming a Slinky-like helix. Circuit completion demanded that each helical plasmoid fiber twisted around the ring an integer number of times as it proceeded around the ring. This requirement was thought to account for "quantum" values of angular momentum and radiation. Chirality demanded the number of fibers to be odd, probably three, like a rope. The helicity of the twist, was thought to distinguish the electron from the proton.
The toroidal or "helicon" model did not demand a constant radius or inertial energy for a particle. In general its shape, size, and motion adjusted according to the external electromagnetic fields from its environment. These adjustments or reactions to external field changes constituted the emission or absorption of radiation for the particle. The model, then, claimed to explain how particles linked together to form atoms.
History
Beginnings
The development of the helicon or toroidal ring began with André-Marie Ampère, who in 1823 proposed tiny magnetic "loops of charge" to explain the attractive force between current elements. In that same era Carl Friedrich Gauss and Michael Faraday also uncovered foundational laws of classical electrodynamics, later collected by James Maxwell as Maxwell's equations. When Maxwell expressed the laws of Gauss, Faraday, and Ampère in differential form, he assumed point particles, an assumption that remains foundational to relativity theory and quantum mechanics today. In 1867 Lord Kelvin suggested that the vortex rings of a perfect fluid discovered by Hermann von Helmholtz represented "the only true atoms". Then shortly before 1900, as scientists still debated over the very existence of atoms, J. J. Thomson and Ernest Rutherford sparked a revolution with experiments confirming the existence and properties of electrons, protons, and nuclei. Max Planck added to the fire when he solved the blackbody radiation problem by assuming not only discrete particles, but discrete frequencies of radiation emanating from these "particles" or "resonators". Planck's famous paper, which incidentally calculated both the Planck constant h and the Boltzmann constant kB, suggested that something in the "resonators" themselves provided these discrete frequencies.
Numerous theories about the structure of the atom developed in the wake of all the new information, of which the 1913 model of Niels Bohr came to predominate. The Bohr model proposed electrons in circular orbit around the nucleus with quantized values of angular momentum. Instead of radiating energy continuously, as classical electrodynamics demanded from an accelerating charge, Bohr's electron radiated discretely when it "leaped" from one state of angular momentum to another.
Parson magneton
In 1915, Alfred Lauck Parson proposed his "magneton" as an improvement over the Bohr model, depicting finite-sized particles with the ability to maintain stability and emit and absorb radiation from electromagnetic waves. At about the same time Leigh Page developed a classical theory of blackbody radiation assuming rotating "oscillators", able to store energy without radiating. Gilbert N. Lewis was inspired in part by Parson's model in developing his theory of chemical bonding. Then David L. Webster wrote three papers connecting Parson's magneton with Page's oscillator and explaining mass and alpha scattering in terms of the magneton. In 1917 Lars O. Grondahl confirmed the model with his experiments on free electrons in iron wires. Parson's theory next attracted the attention of Arthur Compton, who wrote a series of papers on the properties of the electron, and H. Stanley Allen, whose papers also argued for a "ring electron".
Current status
The aspect of the Parson magneton with the most experimental relevance (and the aspect investigated by Grondahl and Webster) was the existence of an electron magnetic dipole moment; this dipole moment is indeed present. However, later work by Paul Dirac and Alfred Landé showed that a pointlike particle could have an intrinsic quantum spin, and also a magnetic moment. The highly successful modern theory, the Standard Model of particle physics, describes a pointlike electron with an intrinsic spin and magnetic moment. On the other hand, the usual assertion that an electron is pointlike may be conventionally associated only with a "bare" electron. The pointlike electron would have a diverging electromagnetic field, which should create a strong vacuum polarization. In accordance with QED, deviations from the Coulomb law are predicted at Compton-scale distances from the centre of the electron, 10−11 cm. Virtual processes in the Compton region determine the spin of the electron and the renormalization of its charge and mass. This shows that the Compton region of the electron should be considered as a coherent whole together with its pointlike core, forming a physical ("dressed") electron. Notice that the Dirac theory of the electron also exhibits peculiar behaviour in the Compton region; in particular, electrons display zitterbewegung at the Compton scale. From this point of view, the ring model does not contradict QED or the Dirac theory, and some versions could possibly be used to incorporate gravity in quantum theory.
The question of whether the electron has a substructure of any sort must be decided by experiment. All experiments to date agree with the Standard Model of the electron, with no substructure, ring-like or otherwise. The two major approaches are high-energy electron–positron scattering and high-precision atomic tests of quantum electrodynamics, both of which agree that the electron is point-like at resolutions down to 10−20 m. At present, the Compton region of virtual processes, 10−11 cm across, is not exhibited in the high-energy experiments on electron–positron scattering.
Nikodem Popławski used the Papapetrou method of multipole expansion to show that torsion modifies Burinskii's model of the Dirac electron by replacing the Kerr–Newman singular ring of Compton size with a toroidal structure whose outer radius is of the Compton size and whose inner radius is of the Cartan size (10−27 m), in the Einstein–Cartan theory of gravity.
References
Further reading
David L. Bergman, J. Paul Wesley: "Spinning Charged Ring Model of Electron Yielding Anomalous Magnetic Moment", Galilean Electrodynamics, Vol. 1, 63–67 (Sept./Oct. 1990).
Particle physics
Nuclear physics
Obsolete theories in physics | Toroidal ring model | [
"Physics"
] | 1,673 | [
"Obsolete theories in physics",
"Theoretical physics",
"Particle physics",
"Nuclear physics"
] |
13,373,895 | https://en.wikipedia.org/wiki/Diaphragm%20%28structural%20system%29 | In structural engineering, a diaphragm is a structural element that transmits lateral loads to the vertical resisting elements of a structure (such as shear walls or frames). Diaphragms are typically horizontal but can be sloped in a gable roof on a wood structure or concrete ramp in a parking garage. The diaphragm forces tend to be transferred to the vertical resisting elements primarily through in-plane shear stress. The most common lateral loads to be resisted are those resulting from wind and earthquake actions, but other lateral loads such as lateral earth pressure or hydrostatic pressure can also be resisted by diaphragm action.
The diaphragm of a structure often does double duty as the floor system or roof system in a building, or the deck of a bridge, which simultaneously supports gravity loads.
Parts of a diaphragm include:
the collector (or membrane), used as a shear panel to carry in-plane shear
the drag strut member, used to transfer the load to the shear walls or frames
the chord, used to resist the tension and compression forces that develop in the diaphragm since the collector is usually incapable of handling these loads alone
Diaphragms are usually constructed of plywood or oriented strand board in timber construction; metal deck or composite metal deck in steel construction; or a concrete slab in concrete construction.
The two primary types of diaphragm are flexible and rigid. Flexible diaphragms resist lateral forces according to tributary area, irrespective of the flexibility of the members to which they are transferring force. Rigid diaphragms, on the other hand, transfer load to frames or shear walls according to their flexibility and their location in the structure. Diaphragms that cannot be classified as either flexible or rigid are referred to as semirigid. The flexibility of a diaphragm affects the distribution of lateral forces to the vertical components of the lateral force-resisting elements in a structure.
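To make the two idealizations concrete, here is a minimal Python sketch contrasting them (the numbers, units, and function names are hypothetical illustrations, not from any design code): a flexible diaphragm distributes a story shear in proportion to tributary width, while a rigid diaphragm distributes it in proportion to element stiffness (torsional effects ignored).

```python
def flexible_distribution(story_shear, tributary_widths):
    """Flexible diaphragm: each vertical element resists load in
    proportion to the tributary width (area) it supports."""
    total = sum(tributary_widths)
    return [story_shear * w / total for w in tributary_widths]

def rigid_distribution(story_shear, stiffnesses):
    """Rigid diaphragm (no torsion considered): each vertical element
    resists load in proportion to its lateral stiffness."""
    total = sum(stiffnesses)
    return [story_shear * k / total for k in stiffnesses]

# Hypothetical line of three shear walls sharing a 100 kN story shear
print(flexible_distribution(100.0, [4.0, 8.0, 4.0]))  # [25.0, 50.0, 25.0]
print(rigid_distribution(100.0, [20.0, 5.0, 20.0]))   # ~[44.4, 11.1, 44.4]
```

The same walls attract very different forces under the two assumptions, which is why the flexible/rigid classification matters in design.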
References
Structural system
Floors | Diaphragm (structural system) | [
"Technology",
"Engineering"
] | 404 | [
"Structural engineering",
"Building engineering",
"Floors",
"Structural system",
"Architecture stubs",
"Architecture"
] |
13,375,387 | https://en.wikipedia.org/wiki/Rhoca-Gil | Rhoca-Gil is a type of industrial sealant produced by Rhône-Poulenc, used in the construction of tunnels to block the passage of groundwater inside. The sealant begins as a liquid, then is injected into cavities which need to be sealed, and polymerises, causing it to harden.
Process
Rhoca-Gil consists of two fluids that are mixed, thinned with water and then sprayed into cracks in the bedrock. One of the fluids contains acrylamide (a toxic substance) and methylolacrylamide. The mixed solution becomes a viscous fluid that penetrates cracks and holes in the rock, where it reacts—polymerizes—into a tight plastic substance. Once completely polymerized it is stable.
Controversies
In 1992 construction of the Hallandsås tunnel in Sweden began, with the opening planned for 1995. Groundwater leaking into the tunnel was, however, a major problem that slowed progress, and Rhoca-Gil was used to seal the leaks. In 1997, fish and local cattle started dying as a result of Rhoca-Gil in its liquid form leaking into the water supply, contaminating it with acrylamide, a known carcinogen, mutagen and neurotoxin. Furthermore, the contamination of the area led to a ban on agricultural products from the area, as well as a ban on using water from the area, affecting local residents.
The main contractor, Skanska, along with Rhône-Poulenc and the Swedish Rail Administration had criminal charges brought against them. Some senior executives resigned. Construction was halted in late 1997. The main critique of the use of Rhoca-Gil was against Rhône-Poulenc for not pointing out the risks of using the sealant, as well as against Skanska for not informing local residents about the usage of Rhoca-Gil.
A similar incident occurred at the construction of Gardermobanen in Norway, leading to a ban of the substance in Norway in 1997.
References
Moisture protection
Polymers | Rhoca-Gil | [
"Chemistry",
"Materials_science"
] | 412 | [
"Polymers",
"Polymer chemistry"
] |
13,375,871 | https://en.wikipedia.org/wiki/The%20Copernican%20Revolution%20%28book%29 | The Copernican Revolution is a 1957 book by the philosopher Thomas Kuhn, in which the author provides an analysis of the Copernican Revolution, documenting the pre-Ptolemaic understanding through the Ptolemaic system and its variants until the eventual acceptance of the Keplerian system.
Kuhn argues that the Ptolemaic system provided broader appeal than a simple astronomical system but also became intertwined in broader philosophical and theological beliefs. Kuhn argues that this broader appeal made it more difficult for other systems to be proposed.
While some of the illustrations used are complex, Kuhn limits the technical information included in the primary text, leaving the details to a technical appendix at the back of the book.
Summary
Introduction - Importance of Understanding the Development of Science
Before diving into a historical overview of the scientific understanding of the planets, stars and other celestial bodies, Kuhn prefaces the main ideas in The Copernican Revolution (in Chapter 1) by arguing that the story of the shift from a geocentric understanding of the universe to a heliocentric one offers a great deal of insight far beyond the specifics of that shift. Kuhn would later develop his theory regarding the development of science in his later work, The Structure of Scientific Revolutions, which was originally published in 1962 and remains his best-known work. In the present work, he focuses on one particular example, namely the Copernican Revolution, which is a paradigmatic example of such a change.
That Kuhn saw this understanding as crucial for a contemporary appreciation of science, and that he saw the Copernican Revolution as a representative example, can be seen in what he focuses on in Chapter 1. Here, regarding the Copernican Revolution, he notes: "...it has an additional significance which transcends its specific subject: it illustrates a process that today we badly need to understand. Contemporary Western civilization is more dependent, both for its everyday philosophy and for its bread and butter, upon scientific concepts than any past civilization has been. But the scientific theories that bulk so large in our daily lives are unlikely to be final... The mutability of its fundamental concepts is not an argument for rejecting science... But an age as dominated by science as our own does need a perspective from which to examine the scientific beliefs which it takes so much for granted." Kuhn stresses that our lack of familiarity with the process of the development of science is a dangerous gap in our knowledge, because without it we cannot expect to reasonably assess the success or accuracy of scientific ideas and theories. Kuhn died in 1996 and did not live to experience the COVID-19 pandemic, but that event, and the confusion it created, supports Kuhn's assertion regarding the need for an understanding of how to assess scientific beliefs and how science develops.
The Heavens in Primitive Cosmologies
After the brief introduction included at the beginning of the first chapter, Kuhn takes the remainder of the chapter to explain the pre-Copernican understanding of the celestial world. He quickly shows that this worldview was not simply the result of a naive, unscientific perspective, but in fact contained many of the components we expect to see in a sophisticated, scientific worldview. For example, Kuhn shows how a "two-sphere universe" - the model that saw the earth as a small sphere at the center of the universe with a rotating outer sphere of stars (and the sun traveling in between) - provided a framework that matched observations, allowed for mathematical predictions about the locations of stars in the sky at a future date, simplified what otherwise seemed to be the confounding movement of the sun, provided a simple explanation for many observed phenomena, explained differences in observations made from different places on Earth, etc. Kuhn develops this convincingly by walking the reader through a range of observations about the movement of the sun and stars and details about how these corresponded to the model of the universe. This is supported by the fact that, as Kuhn highlights, there are use cases even today where we continue to use a version of this model of the universe.
The Problem of the Planets
After using the first chapter to show how primitive conceptions of the celestial spheres satisfied many requirements for a scientific theory, Kuhn highlights the most vexing issues with the model. While the model was quite satisfactory in explaining and predicting the movement of the stars, it struggled mightily to explain the movement of the planets. The definition of a planet at that time differs somewhat from our own, so Kuhn explains: "The term planet is derived from a Greek word meaning 'wanderer', and it was employed until after Copernicus' lifetime to distinguish those celestial bodies that moved or 'wandered' among the stars from those whose relative positions were fixed. For the Greeks and their successors the sun was one of the seven planets. The others were the moon, Mercury, Venus, Mars, Jupiter and Saturn. The stars and these seven planets were the only bodies recognized as celestial in antiquity." While the stars generally moved in lockstep, in predictable and organized fashion, the planets seemingly had a much more complex motion. Tracking their movement in the sky, based on observations over time, reveals many inconsistencies. While the planets (other than the sun and moon) generally moved eastward in the sky, at times they would be observed moving westward, or "retrograde". Further, unlike the stars, which moved in lockstep, the planets each seemed to keep their own schedules, traveling at different speeds. Astrologers throughout the years had many different theories to explain the movement; most (but not all) models involved the planets rotating around the earth inside the stellar sphere (the sphere that was assumed to hold the stars).
Conclusion
At the end of the book, Kuhn summarizes the achievements of Copernicus and Newton while noting the incompatibility of Newtonian physics with the Aristotelian concepts that preceded the then-new physics. Kuhn also notes that discoveries such as Newton's were not in agreement with the prevailing worldview during his lifetime.
References
Further reading
Kuhn, T. S. (1970, 2012). The Structure of Scientific Revolutions. (4th ed.) University of Chicago Press. ISBNs: 0226458113, 0226458121, 9780226458113, and 9780226458120.
1957 non-fiction books
Books by Thomas Kuhn
English-language non-fiction books
Harvard University Press books
History of astrology
Works about the history of astronomy
Books about the history of physics
American non-fiction books
Copernican Revolution
Philosophy of science literature | The Copernican Revolution (book) | [
"Astronomy"
] | 1,383 | [
"History of astronomy",
"Works about astronomy",
"Works about the history of astronomy",
"Copernican Revolution",
"History of astrology"
] |
13,376,002 | https://en.wikipedia.org/wiki/Optical%20modulation%20amplitude | In telecommunications, optical modulation amplitude (OMA) is the difference between two optical power levels, of a digital signal generated by an optical source, e.g., a laser diode.
It is given by
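$$\mathrm{OMA} = P_1 - P_0$$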
where P1 is the optical power level generated when the light source is "on," and P0 is the power level generated when the light source is "off." The OMA may be specified in peak-to-peak mW.
The OMA can be related to the average power and the extinction ratio
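Writing $P_{\mathrm{avg}} = (P_1 + P_0)/2$ for the average power and $r_e = P_1/P_0$ for the extinction ratio, the definition above rearranges to

$$\mathrm{OMA} = 2\,P_{\mathrm{avg}}\,\frac{r_e - 1}{r_e + 1}.$$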
In the limit of a high extinction ratio, $\mathrm{OMA} \approx 2 P_{\mathrm{avg}}$. However, OMA is often used to express the effective usable modulation in a signal when the extinction ratio is not high and this approximation may not be valid.
External links
OMA presentation by Optillion, New Orleans, September 2000
Optical communications | Optical modulation amplitude | [
"Engineering"
] | 166 | [
"Optical communications",
"Telecommunications engineering"
] |
8,704,332 | https://en.wikipedia.org/wiki/Space%20art | Space art, also known as astronomical art, is a genre of art that visually depicts the universe through various artistic styles. It may also refer to artworks sent into space.
The development of space art was closely linked to advancements in telescope and imaging technology, which enabled more precise observations of the night sky. Some space artists work directly with scientists to explore new ways to expand the arts, humanities, and cultural expressions relative to space. Space art may communicate ideas about space, often including an artistic interpretation of cosmological phenomena and scientific discoveries.
For many decades, visual artists have explored the topic of space using traditional painting media, followed recently by the use of digital media for the same purpose. Science-fiction magazines and picture essay magazines were some of the first major outlets for space art, often featuring planets, spaceships, and dramatic alien landscapes. Chesley Bonestell, R. A. Smith, Lucien Rudaux, David A. Hardy, and Ludek Pesek were some of the artists actively involved in visualizing topics such as space exploration and colonization in the early days of the genre. Astronomers and experts in rocketry also played roles in inspiring artists in this genre.
NASA’s second administrator, James E. Webb, created the space agency's Space Art program in 1962, four years after its inception. Bonestell's work in this program often depicted various celestial bodies and landscapes, highlighting both the destinations and the imagined technologies used to reach them.
Astronomical art
Astronomical art is a genre of space art that focuses on visual representations of outer space. It encompasses various themes, including the space environment as a new frontier for humanity, depictions of alien worlds, representations of extreme phenomena like black holes, and artistic concepts inspired by astronomy.
Astronomical art emerged as a distinct genre in the 1940s and 1950s. Chesley Bonestell was recognized for his skills in addressing perspective challenges and creating visual representations of astronomical concepts. Contemporary artists continue to contribute to the visualization of ideas within the space community, such as depicting theoretical capabilities for interstellar travel and illustrating hypothetical deep-space phenomena.
Astronomical art is the most recent of several art movements that have explored ideas emerging from the ongoing exploration of Earth. Finding its roots in genres such as the Hudson River School or Luminism, most astronomical artists use traditional painting methods or digital equivalents in a way that brings the viewer to the frontiers of human knowledge gathered in the exploration of space. Such works usually portray things in the visual language of realism extrapolated to exotic environments, whose details reflect ongoing knowledge and educated guesswork. An example of the process of creating astronomical art would be studying and visiting desert environments to experience something of what it might be like on Mars and painting based on such experiences. Another would be to hear of an astronomical concept, and then seek out published articles or experts in the field. Usually, there is an artistic effort to emphasize the favourable visual elements, just as a photographer composes a picture. Notable astronomical art often reflects the artist's interpretation and imagination regarding the subject portrayed.
Science fiction magazines such as Fantasy and Science Fiction, Amazing, Astounding (later renamed Analog), and Galaxy were platforms for space and astronomical art in the 1950s. Picture essay magazines of the time, such as Life, Collier's, and Coronet, were other major outlets for such art. Today, astronomical art can be seen in magazines such as Sky and Telescope, The Planetary Report, and occasionally in Scientific American. The NASA fine arts program has been an ongoing effort to hire artists to create works generally specific to a particular space project. The program documents historical events in recognizable form for professional artists. The NASA Fine Arts Program operated in an era of forward progress under its first head director, James Dean. Even then, pictorial realism seemed a subset rather than a dominant visual influence.
The works that document space flight situations, such as those referenced above, are similar in concept to government efforts during World War II to send artists to battle zones for documentation, much of which appeared in contemporary issues of Life magazine. Most of today's widely published space and astronomical artists have belonged to the International Association of Astronomical Artists since 1983.
Photography
The first photographs of the entire Earth by satellites and crewed Apollo missions brought a new sense of Earth and promoted ideas of the unity of humanity. Photographs taken by explorers on the Moon evoked the experience of being in another world. The Pillars of Creation taken by the Hubble Space Telescope and other Hubble photos often evoke intense responses from viewers; for example, Hubble's planetary nebula images.
Artistry
Artists have experienced free-fall conditions during flights flown with NASA, the Russian and French Space Agencies, and the Zero Gravity Arts Consortium. Early efforts by artists to have art pieces placed in space have already been accomplished with painting, holography, micro-gravity mobiles, floating literary works, and sculpture.
History
Early examples of space art are depictions of celestial bodies in ancient artifacts. The 'Land Grant to Ḫunnubat-Nanaya Kudurru,' a Babylonian limestone artifact from the 12th century BC, features early representations of Venus, the lunar crescent, and the solar disk.
Albrecht Altdorfer's painting The Battle of Issus (1529) shows the curvature of the Earth from a great height. Galileo's sketches of the Moon from the Sidereus Nuncius (1610) were published among other early descriptions of the Moon's topography. In 1711, Donato Creti painted a series of astronomers viewing other planets of the Solar System through a telescope to interest the Vatican in establishing an astronomical observatory.
19th century
In the early 1870s-1900s, Étienne Léopold Trouvelot published a series of Chromolithographs of his pastels of astronomical subjects.
In 1874, James Carpenter and James Nasmyth's work The Moon: Considered as a Planet, a World, and a Satellite included photographs of sculpted models of lunar features, with marked vertical exaggeration of the actual relief of the Moon.
In 1877, Paul Dominique Philippoteaux and engraver Laplante illustrated Jules Verne's story Off on a Comet, including an imaginative view looking up at the rings of Saturn from the planet itself.
20th century
In 1918, Howard Russell Butler deliberately made use of the dynamic range of human vision in painting a total eclipse based on direct observation.
In 1927, Scriven Bolten created lunar landscape images for the Illustrated London News using painted photos of plaster models.
In 1937, Lucien Rudaux painted many works for Sur Les Autres Mondes.
In 1944, Chesley Bonestell's paintings of Saturn seen from its different moons appeared in Life magazine, introducing astronomical art to a wide American audience. Books featuring Bonestell's art include The Conquest Of Space (1949), The Exploration Of Mars (1956), and Life's The World We Live In (1955).
The second Hayden Planetarium Symposium on Space Travel, held in New York in October 1952, resulted in a series of widely read space flight articles in Collier's magazine, illustrated by Bonestell and others.
In 1963, Ludek Pesek's paintings filled the large volume The Moon and the Planets, and later the 1968 volume Our Planet Earth - From the Beginning.
The 1980 Cosmos PBS television show and book used the work of many space artists. Host Carl Sagan used such art in several of his books.
In the 21st century, the genre expanded to sending art into space.
Art in space
First art created in space
The first active artist in space was Alexei Leonov, who produced the first drawing in space onboard Voskhod 2 in 1965, depicting an orbital sunrise.
The first original oil paintings flown into outer space
An art conservation experiment from Vertical Horizons, founded by Howard Wishnow and Ellery Kurtz, was flown aboard the Space Shuttle Columbia STS-61-C on January 12, 1986. Four original oil paintings by American artist Ellery Kurtz were flown in one of NASA's GetAway Special (G.A.S.) containers mounted to a bridge in the shuttle cargo bay. These original works of art are the first oil paintings to enter Earth's orbit. This NASA GAS canister, designated G-481, was the 46th such canister flown aboard a Space Shuttle. The Space Shuttle Columbia orbited the Earth 98 times during its mission duration of 6 days, 2 hours, 3 minutes, and 51 seconds. Columbia was launched from Kennedy Space Center, Cape Canaveral, Florida, on January 12, 1986, and landed at the Kennedy Space Center on January 18, 1986.
Zero-G space art
Small art objects have been carried on several Apollo missions, such as gold emblems and a small Fallen Astronaut figurine that was left on the Moon during the 1971 Apollo 15 mission. Visual observations have been recorded in drawings and commentary by earlier cosmonauts and astronauts of difficult-to-photograph phenomena such as the airglow, twilight colors, and outer details of the solar corona.
Another work, brought to Earth orbit in the mid-80s, was a study of the golden sunlight on a Soviet space station by Russian artist Andrei Sokolov, carried aboard the Soviet Mir space station starting with its modules in February 1986. In 1984, Joseph McShane and Lowry Burgess had their conceptual artwork flown aboard the Space Shuttle utilizing NASA's 'Get Away Special' program. The first sculpture specifically designed for a human habitat in orbit was Arthur Woods' Cosmic Dancer, which was sent to the Mir station in 1993. In 1995, Arthur Woods organized Ars ad Astra, the first art exhibition in Earth orbit, consisting of 20 original artworks from 20 artists and an electronic archive; it took place on the Mir space station as part of ESA's EUROMIR'95 mission. In 1998, Frank Pietronigro flew Research Project Number 33: Investigating the Creative Process in a Micro-gravity Environment, during which he created 'drift paintings' and danced in microgravity. In 2006, the artist returned to micro-gravity flight to create three new works, one in collaboration with Lowry Burgess: Moments in the Infinite Absolute, Flags in Space!, and a new form of microgravity mobile.
The Slovenian theater director Dragan Živadinov staged a performance called Noordung Zero Gravity Biomechanical during a parabolic flight organized through the Yuri Gagarin Cosmonaut Training Center facility in Star City in 1999. The UK arts group The Arts Catalyst, with the MIR consortium (Arts Catalyst, Projekt Atol, V2 Organisation, Leonardo-Olats), organized a series of parabolic 'zero gravity' flights for artistic and cultural experimentation with the Gagarin Cosmonaut Training Centre, as well as with the European Space Agency, between 2000 and 2004, including Investigations in Microgravity, MIR Flight 001, and MIR Campaign 2003. Artists who participated in these flights and visits to Russia and ESA have included the Otolith Group, shortlisted in 2011 for the Turner Prize; Stefan Gec; Ansuman Biswas and Jem Finer; Kitsou Dubois; Yuri Leiderman; and Marcel·li Antunez Roca.
Richard Garriott visited the International Space Station via the Soyuz TMA-13 on October 12, 2008, where he displayed an art exhibition, Celestial Matters, during his 12 days in orbit. Celestial Matters included works by ten American artists as well as work Garriott created himself while in orbit, honoring his heritage in art and science. The art was later exhibited at the Charles Bank Gallery in New York City in October 2011. Garriott also exhibited Astrogeneris Mementos, two small works, somewhat reminiscent of memento mori or hairwork, containing locks of hair from Richard Garriott and Owen Garriott sealed in chambers by Steve Brudniak; these were the first assemblage sculptures exhibited in outer space (Brannon, Mike (2018). "Profile, Steve Brudniak: Psychedelic Surrealism Texas Style", 71 Magazine, Jan/Feb 2018: 66-75, see p. 71; accessed June 15, 2024).
In 2009, NASA astronaut Nicole Stott having brought watercolor paint and watercolor paper with her for the long-duration Expedition 21 mission to the International Space Station became the first astronaut to paint in space.
The Mexican artist and musician Nahum directed the art and science project Matters of Gravity (La Gravedad de los Asuntos in Spanish), a project reflecting on gravity in its absence. The first mission consisting only of Latin American artists was executed in a zero-gravity flight at the Yuri Gagarin Cosmonaut Training Center in 2014. The participating artists include Tania Candiani, Ale de la Puente, Ivan Puig, Arcángelo Constantini, Fabiola Torres-Alzaga, Gilberto Esparza, Juan Jose Diaz Infante, Nahum, and Marcela Armas. The project included the participation of Mexican scientist Miguel Alcubierre and curators Rob La Frenais and Kerry Anne Doyle.
Performance art has also occurred in space, as with Chris Hadfield's 2013 edited performance of David Bowie's 1969 song "Space Oddity" and Thomas Pesquet's 2017 edited performance of "L'Art de la joie par les Spacelatorz".
Sojourner 2020 project onboard the International Space Station
In the Sojourner 2020 project from MIT, the Space Exploration Initiative flew projects by nine selected artists on board the International Space Station. Sojourner 2020 was a 1.5U-size device (100 mm × 100 mm × 152.4 mm) that was launched into low Earth orbit between March 7 and April 7, during the COVID-19 pandemic. It featured a three-layer telescoping structure that simulated three different "gravities": zero gravity, lunar gravity, and Martian gravity. Each layer of the structure rotated independently: the top layer remained still in weightlessness, while the middle and bottom layers spun at different speeds to produce centripetal accelerations that mimicked lunar gravity and Martian gravity, respectively. Each layer carried six pockets that held the projects; each pocket was a container with a diameter of 10 mm and a depth of 12 mm. The artists proposed and accomplished artworks in a variety of media, including carved stone sculptures by Erin Genia, liquid pigment experiments by Andrea Ling and Levi Cai, sculptures made of transgender hormone replacement medicines by Adriana Knouf, and living organisms, such as marine diatoms of the species Phaeodactylum tricornutum, by Luis Guzmán.
The nine artist groups selected onboard Sojourner 2020 were:
Luis Bernardo Guzmán - bio architectures (Cosmo biology) - Chile
Xin Liu, Lucia Monge - Unearthing the Futures - China and Peru
Levi Cai & Andrea Ling - Abiogenetic Triptych - USA, Canada
Kat Kohl - Memory Chain: A Pas de Deux of Artifact - USA
Henry Tan - Pearl of Lunar - Thailand
Janet Biggs - Finding Equilibrium - USA
Masahito Ono - Nothing, Something, Everything - Japan
Adriana Knouf - TX-1 - USA
Erin Genia - Canupa Inyan: Falling Star Woman - American Sisseton Wahpeton Oyate
Artworks launched into outer space
The Golden Record: Greetings and Sounds of the Earth
The Contour of Presence by Nahum
Orbital Reflector by Trevor Paglen
Enoch by Tavares Strachan
Moon Gallery by the Moon Gallery Foundation
Echoes From the Valley of Existence by Amy Karle
In Praise of Mystery by Ada Limon
Humans have engaged in many cultural activities in space, particularly on space stations, recontextualizing terrestrial culture and art.
See also
Futurism
List of space artists
List of space art-related books
Russian cosmism
Science-fiction
Space Advocacy
Time capsule
References
Further reading
Space Art, Ron Miller, Starlog Magazine
Visions of Space, David A. Hardy, Paper Tiger 1989
Worlds Beyond: The Art of Chesley Bonestell, Ron Miller & Frederick C. Durant, III
Star Struck: One Thousand Years of the Art of Science and Astronomy, Ronald Brashear & Daniel Lewis, 2001 Univ. of Washington Press
Futures: 50 Years in Space, David A. Hardy & Patrick Moore, AAPPL 2004
Out of the Cradle: Exploring the Frontiers beyond Earth, William K. Hartmann, Ron Miller and Pamela Lee (Workman Publishing, 1984)
Space Art: How to Draw and Paint Planets, Moons, and Landscapes of Alien Worlds, Michael Carroll, 2007 Watson Guptill/Random House
The Impact of American and Russian Cosmism on the Representation of Space Exploration in 20th Century American and Soviet Space Art'', Kornelia Boczkowska, Wydawnictwo Naukowe UAM, 2016
External links
International Association of Astronomical Artists
Space advocacy
Spaceflight
Visual arts genres
Space Age | Space art | [
"Astronomy"
] | 3,442 | [
"Space art",
"Spaceflight",
"Outer space"
] |
8,705,069 | https://en.wikipedia.org/wiki/Pure%20%28company%29 | Pure International Ltd. is a British consumer electronics company, based in Kings Langley, Hertfordshire, founded in 2002. They are best known for designing and manufacturing digital audio broadcasting (DAB) and DAB+ radios. In recent years the company has moved away from being a digital radio company with more broad-based audio products in the radio, bluetooth and wireless speaker market.
The imprint on the devices' casing states that they were designed in the UK and manufactured in China.
Pure have sold over five million products worldwide.
Pure products are available in the United Kingdom, Australia, Denmark, France, Germany, Ireland, Italy, Netherlands, Norway and Switzerland, and via online suppliers.
History
2002
Pure was formerly a division of another Hertfordshire-based company, Imagination Technologies, which primarily designs central processing units and graphics processing units. Imagination did not originally set out to sell consumer electronics, and the first Pure radio was merely a demonstration platform for its DAB decoding chip. The success of the first sub-£100 DAB receiver, the Evoke-1, led to the development of further products.
2003
In 2003, Pure launched the PocketDAB 1000. It was the world's first pocket digital radio.
2004
Pure released the Bug, the first-ever digital radio with EPG, pause, rewind and record.
2005
Sonus-1XT was launched by Pure and became the world's first digital radio for the blind and visually impaired.
2007
Pure released Highway, the world's first in-car digital radio adapter, in 2007.
2008
Pure launched the first Energy Saving Trust approved radio range. Under the name EcoPlus, products had reduced power consumption, packaging materials from recycled and sustainable sources and components selected to minimise their environmental impact.
2009
The world's first high-resolution touchscreen digital radio, Sensia, is launched by Pure.
2012
Pure celebrated its tenth anniversary in 2012 with a brand revamp. They also commemorated the landmark with the launch of the Evoke Mio Union Jack Edition.
2014
Pure introduced its three-year warranty.
2015
Pure shipped in excess of five million digital radios worldwide, positioning itself as the best-selling digital radio manufacturer.
2016
Pure became Pure International Ltd.
Parent company Imagination Technologies sold the Pure brand to AVenture AT, in September 2016.
2019
Pure acquired the license for Braun Audio from Braun, a division of Procter & Gamble.
In 2019 Pure launched the Braun Audio range of design speakers, referencing the LE1 speakers designed by Dieter Rams.
References
Audio equipment manufacturers of the United States
Electronics companies established in 2002
Companies based in Three Rivers District
British brands
Manufacturing companies of the United Kingdom
2002 establishments in England
Radio manufacturers
Loudspeaker manufacturers | Pure (company) | [
"Engineering"
] | 546 | [
"Radio electronics",
"Radio manufacturers"
] |
8,705,174 | https://en.wikipedia.org/wiki/Wender%20Taxol%20total%20synthesis | Wender Taxol total synthesis in organic chemistry describes a Taxol total synthesis (one of six to date) by the group of Paul Wender at Stanford University published in 1997. This synthesis has much in common with the Holton Taxol total synthesis in that it is a linear synthesis starting from a naturally occurring compound with ring construction in the order A,B,C,D. The Wender effort is shorter by approximately 10 steps.
Raw materials for the preparation of Taxol by this route include verbenone, prenyl bromide, allyl bromide, propiolic acid, Gilman reagent, and Eschenmoser's salt.
AB ring synthesis
The taxol synthesis started from the terpene verbenone 1 in Scheme 1, which is the oxidation product of naturally occurring α-pinene and forms ring A. Construction of ring B started with abstraction of the pendant methyl group proton by potassium tert-butoxide (forming the conjugated anion) followed by nucleophilic displacement of the bromine atom in prenyl bromide 2 to form diene 3. Ozonolysis of the prenyl group (more electron-rich than the internal double bond) formed aldehyde 4, which, after isomerization or photorearrangement to the chrysanthenone 5, was reacted with the lithium salt (via LDA) of the ethyl ester of propiolic acid 6 in a nucleophilic addition to give the alcohol 7. This compound was not isolated but trapped in situ with trimethylsilyl chloride as the silyl ether 9. In the next step, the Gilman reagent 8 acted as a methylating agent in a nucleophilic conjugate addition through the alkyne group to the ketone group, forming the alcohol 10. The silyl ether protective group was removed by reaction with acetic acid to give alcohol 11, which was then oxidized to the ketone 12 with RuCl2(PPh3)3 and NMO as the stoichiometric oxidant. The acyloin group in 13 was introduced with KHMDS and Davis' oxaziridine (see Holton Taxol total synthesis for another use of this system), and its hydroxyl group together with the ester group were reduced by lithium aluminium hydride to the tetrol 14. Finally, the primary alcohol group was protected as a tert-butyldimethylsilyl ether with the corresponding silyl chloride and imidazole, giving triol 15.
In the second part (Scheme 2) the procedures are still confined to rings A and B. More protective groups were added to triol 15: reaction with PPTS and 2-methoxypropene gave the acetonide 16. At this point the double bond in ring A was epoxidized with m-CPBA and sodium carbonate to epoxide 17, and a Grob fragmentation (also present in the Holton effort) initiated by DABCO opened up the AB ring system in alcohol 18, which was not isolated but protected as the TIPS silyl ether 19 with triisopropylsilyl triflate and 2,6-lutidine. The C1 position was next oxidized to alcohol 20 with oxygen, the phosphite ester P(OEt)3 and the strong base KOt-Bu (the stereochemistry controlled by the bowl-shaped AB ring, with hydroxylation from the unhindered convex face); the primary alcohol group was deprotected with ammonium chloride in methanol to diol 21, and two reductions, first with NaBH4 to triol 22 and then with hydrogen gas and Crabtree's catalyst, gave triol 23. These positions were protected with trimethylsilyl chloride and pyridine to give 24 and then triphosgene to give 25, in order to facilitate the oxidation of the primary alcohol group to the aldehyde 26 by PCC.
C ring synthesis
The next part constructed the C ring starting from aldehyde 26, which was extended by one carbon atom to the homologue 27 in a Wittig reaction with methoxymethylenetriphenylphosphine (Scheme 3). The acetonide group was removed with dilute hydrochloric acid and sodium iodide in dioxane, and one hydroxyl group in the resulting diol 28 was protected as the triethylsilyl (TES) ether 29 with the corresponding silyl chloride and pyridine, enabling oxidation of the remaining hydroxyl group to the ketone 30 with the Dess-Martin periodinane. Reaction with Eschenmoser's salt placed a methylene group (C20 in the Taxol framework) in the alpha position of the aldehyde to give 31, and the next reaction introduced the still-lacking C6 and C7 as the Grignard reagent of allyl bromide in a nucleophilic addition aided by zinc(II) chloride, which blocked the Grignard reagent from attacking the carbonate group, to give alcohol 32. The newly formed alcohol was protected as the BOM ether 33 with BOMCl and N,N-diisopropylethylamine. After removal of the TES protecting group with ammonium fluoride, the carbonate group in 34 was converted to a hydroxybenzoate group by action of phenyllithium, and the secondary alcohol to the acetate 35 by in situ reaction with acetic anhydride and DMAP. In the next step the acyloin group had its positions swapped by reaction with triazabicyclodecene (other amine bases fail), forming 36, and in the final steps ring closure of ring C was accomplished by ozonolysis at the allyl group to give 37 and an aldol reaction with 4-pyrrolidinopyridine to give 38.
D ring synthesis
The final part dealt with the construction of the oxetane ring D, starting with protection of the alcohol group in 38 as the Troc derivative 39 with 2,2,2-trichloroethyl chloroformate and pyridine (Scheme 4). The OBOM group was replaced by a bromine group in three steps: deprotection to 40 with hydrochloric acid and sodium iodide, mesylation to 41 with mesyl chloride, DMAP and pyridine, and nucleophilic substitution with inversion of configuration with lithium bromide to give bromide 42. Because the oxidation of the alkene group to the diol 43 with osmium tetroxide was accompanied by the undesired migration of the benzoate group, this step was taken to completion with imidazole as 44. Two additional countermeasures were required: reprotection of the diol as the carbonate ester 45 with triphosgene, and removal of the benzoate group (KCN) to give alcohol 46, in preparation for the actual ring closure to the oxetane 47 with N,N-diisopropylethylamine. In the final steps the tertiary alcohol was acylated in 48, the TIPS group removed in 49, and the benzoate group re-introduced in 50.
Tail addition of the Ojima lactam 51 was not disclosed in detail, but finally Taxol (52) was formed in several steps similar to those of the other efforts.
External links
Wender Taxol Synthesis @ SynArchive.com
The Wender Taxol Mug
See also
Paclitaxel total synthesis
Danishefsky Taxol total synthesis
Holton Taxol total synthesis
Kuwajima Taxol total synthesis
Mukaiyama Taxol total synthesis
Nicolaou Taxol total synthesis
References
Total synthesis
Taxanes | Wender Taxol total synthesis | [
"Chemistry"
] | 1,606 | [
"Total synthesis",
"Chemical synthesis"
] |
8,707,155 | https://en.wikipedia.org/wiki/Statistical%20shape%20analysis | Statistical shape analysis is an analysis of the geometrical properties of some given set of shapes by statistical methods. For instance, it could be used to quantify differences between male and female gorilla skull shapes, normal and pathological bone shapes, leaf outlines with and without herbivory by insects, etc. Important aspects of shape analysis are to obtain a measure of distance between shapes, to estimate mean shapes from (possibly random) samples, to estimate shape variability within samples, to perform clustering and to test for differences between shapes. One of the main methods used is principal component analysis (PCA). Statistical shape analysis has applications in various fields, including medical imaging, computer vision, computational anatomy, sensor measurement, and geographical profiling.
Landmark-based techniques
In the point distribution model, a shape is determined by a finite set of coordinate points, known as landmark points. These landmark points often correspond to important identifiable features such as the corners of the eyes. Once the points are collected, some form of registration is undertaken. This can be a baseline method such as the one used by Fred Bookstein for geometric morphometrics in anthropology, or an approach like Procrustes analysis, which finds an average shape.
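A minimal sketch of Procrustes registration using SciPy (the landmark coordinates are invented for illustration):

import numpy as np
from scipy.spatial import procrustes

# Two hypothetical shapes, each described by four (x, y) landmark points
shape_a = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)

# shape_b is shape_a rotated by 30 degrees, scaled by 2 and translated,
# so the two differ only in pose, not in shape
theta = np.radians(30)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
shape_b = 2.0 * shape_a @ rot.T + np.array([5.0, -3.0])

mtx1, mtx2, disparity = procrustes(shape_a, shape_b)
print(disparity)  # ~0: the shapes coincide once pose is factored out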
David George Kendall investigated the statistical distribution of the shape of triangles, and represented each triangle by a point on a sphere. He used this distribution on the sphere to investigate ley lines and whether three stones were more likely to be collinear than might be expected. Statistical distributions such as the Kent distribution can be used to analyse the distribution of shapes in such spaces.
Alternatively, shapes can be represented by curves or surfaces representing their contours, or by the spatial region they occupy.
Shape deformations
Differences between shapes can be quantified by investigating deformations transforming one shape into another. In particular a diffeomorphism preserves smoothness in the deformation. This was pioneered in D'Arcy Thompson's On Growth and Form before the advent of computers. Deformations can be interpreted as resulting from a force applied to the shape. Mathematically, a deformation is defined as a mapping from a shape x to a shape y by a transformation function Φ, i.e., y = Φ(x). Given a notion of size of deformations, the distance between two shapes can be defined as the size of the smallest deformation between these shapes.
Diffeomorphometry is the focus on comparison of shapes and forms with a metric structure based on diffeomorphisms, and is central to the field of computational anatomy. Diffeomorphic registration, introduced in the 1990s, is now an important player; code bases organized around ANTS, DARTEL, DEMONS, LDDMM, StationaryLDDMM, and FastLDDMM are examples of actively used computational tools for constructing correspondences between coordinate systems based on sparse features and dense images. Voxel-based morphometry (VBM) is an important technology built on many of these principles. Methods based on diffeomorphic flows are also used. For example, deformations could be diffeomorphisms of the ambient space, resulting in the LDDMM (Large Deformation Diffeomorphic Metric Mapping) framework for shape comparison.
See also
Active shape model
Geometric data analysis
Shape analysis (disambiguation)
Procrustes analysis
Computational anatomy
Large Deformation Diffeomorphic Metric Mapping
Bayesian Estimation of Templates in Computational Anatomy
Bayesian model of computational anatomy
3D Face Morphable Model
References
Statistical data types
Spatial analysis
Computer vision
Geometric shapes | Statistical shape analysis | [
"Physics",
"Mathematics",
"Engineering"
] | 701 | [
"Geometric shapes",
"Packaging machinery",
"Mathematical objects",
"Spatial analysis",
"Space",
"Geometric objects",
"Artificial intelligence engineering",
"Spacetime",
"Computer vision"
] |
8,707,236 | https://en.wikipedia.org/wiki/CXCL10 | C-X-C motif chemokine ligand 10 (CXCL10) also known as Interferon gamma-induced protein 10 (IP-10) or small-inducible cytokine B10 is an 8.7 kDa protein that in humans is encoded by the CXCL10 gene. C-X-C motif chemokine 10 is a small cytokine belonging to the CXC chemokine family.
Gene
The gene for CXCL10 is located on human chromosome 4 in a cluster among several other CXC chemokines.
Function
CXCL10 is secreted by several cell types in response to IFN-γ. These cell types include monocytes, endothelial cells and fibroblasts. CXCL10 has been attributed to several roles, such as chemoattraction for monocytes/macrophages, T cells, NK cells, and dendritic cells, promotion of T cell adhesion to endothelial cells, antitumor activity, and inhibition of bone marrow colony formation and angiogenesis.
This chemokine elicits its effects by binding to the cell surface chemokine receptor CXCR3.
Structure
The three-dimensional crystal structure of this chemokine has been determined under 3 different conditions to a resolution of up to 1.92 Å; accession codes for these CXCL10 structures are available in the Protein Data Bank.
Biomarkers
CXCL9, CXCL10 and CXCL11 have proven to be valid biomarkers for the development of heart failure and left ventricular dysfunction, suggesting an underlying pathophysiological relation between levels of these chemokines and the development of adverse cardiac remodeling.
Clinical significance
Baseline pre-treatment plasma levels of CXCL10 are elevated in patients chronically infected with hepatitis C virus (HCV) of genotypes 1 or 4 who do not achieve a sustained viral response (SVR) after completion of antiviral therapy. CXCL10 in plasma is mirrored by intrahepatic CXCL10 mRNA, and both strikingly predict the first days of elimination of HCV RNA (“first phase decline”) during interferon/ribavirin therapy for all HCV genotypes. This also applies for patients co-infected with HIV, where pre-treatment IP-10 levels below 150 pg/mL are predictive of a favorable response, and may thus be useful in encouraging these otherwise difficult-to-treat patients to initiate therapy. The pathogen Leishmania major utilizes a protease, GP63, that cleaves CXCL10, implicating CXCL10 in host defense mechanisms of certain intracellular pathogens like Leishmania.
References
External links
Further reading
Cytokines | CXCL10 | [
"Chemistry"
] | 588 | [
"Cytokines",
"Signal transduction"
] |
8,707,643 | https://en.wikipedia.org/wiki/Electronic%20circuit | An electronic circuit is composed of individual electronic components, such as resistors, transistors, capacitors, inductors and diodes, connected by conductive wires or traces through which electric current can flow. It is a type of electrical circuit. For a circuit to be referred to as electronic, rather than electrical, generally at least one active component must be present. The combination of components and wires allows various simple and complex operations to be performed: signals can be amplified, computations can be performed, and data can be moved from one place to another.
Circuits can be constructed of discrete components connected by individual pieces of wire, but today it is much more common to create interconnections by photolithographic techniques on a laminated substrate (a printed circuit board or PCB) and solder the components to these interconnections to create a finished circuit. In an integrated circuit or IC, the components and interconnections are formed on the same substrate, typically a semiconductor such as doped silicon or (less commonly) gallium arsenide.
An electronic circuit can usually be categorized as an analog circuit, a digital circuit, or a mixed-signal circuit (a combination of analog circuits and digital circuits). The most widely used semiconductor device in electronic circuits is the MOSFET (metal–oxide–semiconductor field-effect transistor).
Analog circuits
Analog electronic circuits are those in which current or voltage may vary continuously with time to correspond to the information being represented.
The basic components of analog circuits are wires, resistors, capacitors, inductors, diodes, and transistors. Analog circuits are very commonly represented in schematic diagrams, in which wires are shown as lines, and each component has a unique symbol. Analog circuit analysis employs Kirchhoff's circuit laws: the sum of all the currents at a node (a place where wires meet) is zero, and the sum of the voltages around a closed loop of wires is zero. Wires are usually treated as ideal zero-voltage interconnections; any resistance or reactance is captured by explicitly adding a parasitic element, such as a discrete resistor or inductor. Active components such as transistors are often treated as controlled current or voltage sources: for example, a field-effect transistor can be modeled as a current source from the source to the drain, with the current controlled by the gate-source voltage.
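As a minimal worked example of Kirchhoff's current law in this lumped-element setting (the component values are hypothetical, not from the text), the following sketch solves a one-node resistor divider by nodal analysis:

import numpy as np

# Hypothetical circuit: a 5 V source feeds node 1 through R1 = 1 kohm,
# and R2 = 2 kohm connects node 1 to ground.
# Kirchhoff's current law at node 1: (v1 - Vs)/R1 + v1/R2 = 0,
# rearranged into conductance form G * v = i.
Vs, R1, R2 = 5.0, 1e3, 2e3
G = np.array([[1 / R1 + 1 / R2]])  # node conductance matrix
i = np.array([Vs / R1])            # current injected into node 1 by the source
v = np.linalg.solve(G, i)
print(v[0])  # 3.333... V, matching the divider formula Vs * R2 / (R1 + R2)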
When the circuit size is comparable to a wavelength of the relevant signal frequency, a more sophisticated approach must be used, the distributed-element model. Wires are treated as transmission lines, with nominally constant characteristic impedance, and the impedances at the start and end determine transmitted and reflected waves on the line. Circuits designed according to this approach are distributed-element circuits. Such considerations typically become important for circuit boards at frequencies above about 1 GHz; integrated circuits are smaller and can be treated as lumped elements at frequencies below 10 GHz or so.
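For the transmission-line case, the standard relation between load impedance and the reflected wave is the reflection coefficient Gamma = (ZL - Z0) / (ZL + Z0); the impedance values below are hypothetical:

Z0 = 50.0  # characteristic impedance in ohms (a common line value)
ZL = 75.0  # hypothetical load impedance in ohms
gamma = (ZL - Z0) / (ZL + Z0)
print(gamma)  # 0.2: one fifth of the incident wave amplitude is reflected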
Digital circuits
In digital electronic circuits, electric signals take on discrete values, to represent logical and numeric values. These values represent the information that is being processed. In the vast majority of cases, binary encoding is used: one voltage (typically the more positive value) represents a binary '1' and another voltage (usually a value near the ground potential, 0 V) represents a binary '0'. Digital circuits make extensive use of transistors, interconnected to create logic gates that provide the functions of Boolean logic: AND, NAND, OR, NOR, XOR and combinations thereof. Transistors interconnected so as to provide positive feedback are used as latches and flip-flops, circuits that have two or more stable states, and remain in one of these states until changed by an external input. Digital circuits therefore can provide logic and memory, enabling them to perform arbitrary computational functions. (Memory based on flip-flops is known as static random-access memory (SRAM). Memory based on the storage of charge in a capacitor, dynamic random-access memory (DRAM), is also widely used.)
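To make the universality of such gates concrete, here is a small sketch (not from the text) that builds XOR out of four NAND operations and prints its truth table:

def nand(a: int, b: int) -> int:
    # NAND is functionally complete: any Boolean function can be built from it
    return 1 - (a & b)

def xor(a: int, b: int) -> int:
    # classic four-NAND construction of exclusive-or
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))  # truth table outputs: 0, 1, 1, 0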
The design process for digital circuits is fundamentally different from the process for analog circuits. Each logic gate regenerates the binary signal, so the designer need not account for distortion, gain control, offset voltages, and other concerns faced in an analog design. As a consequence, extremely complex digital circuits, with billions of logic elements integrated on a single silicon chip, can be fabricated at low cost. Such digital integrated circuits are ubiquitous in modern electronic devices, such as calculators, mobile phone handsets, and computers. As digital circuits become more complex, issues of time delay, logic races, power dissipation, non-ideal switching, on-chip and inter-chip loading, and leakage currents, become limitations to circuit density, speed and performance.
Digital circuitry is used to create general purpose computing chips, such as microprocessors, and custom-designed logic circuits, known as application-specific integrated circuits (ASICs). Field-programmable gate arrays (FPGAs), chips with logic circuitry whose configuration can be modified after fabrication, are also widely used in prototyping and development.
Mixed-signal circuits
Mixed-signal or hybrid circuits contain elements of both analog and digital circuits. Examples include comparators, timers, phase-locked loops, analog-to-digital converters, and digital-to-analog converters. Most modern radio and communications circuitry uses mixed-signal circuits. For example, in a receiver, analog circuitry is used to amplify and frequency-convert signals so that they reach a suitable state to be converted into digital values, after which further signal processing can be performed in the digital domain.
Design
Prototyping
References
External links
Electronics Circuits Textbook
Electronics Fundamentals | Electronic circuit | [
"Engineering"
] | 1,187 | [
"Electronic engineering",
"Electronic circuits"
] |
8,707,733 | https://en.wikipedia.org/wiki/List%20of%20biogeographic%20provinces | This page features a list of biogeographic provinces that were developed by Miklos Udvardy in 1975, later modified by other authors. Biogeographic Province is a biotic subdivision of biogeographic realms subdivided into ecoregions, which are classified based on their biomes or habitat types and, on this page, correspond to the floristic kingdoms of botany.
The provinces represent the large areas of Earth's surface within which organisms have been evolving in relative isolation over long periods of time, separated from one another by geographic features, such as oceans, broad deserts, or high mountain ranges, that constitute barriers to migration.
Biomes are characterized by similar climax vegetation, though each realm may include a number of different biomes. A tropical moist broadleaf forest in Brazil, for example, may be similar to one in New Guinea in its vegetation type and structure, climate, soils, etc., but these forests are inhabited by plants with very different evolutionary histories.
Afrotropical Realm
Tropical humid forests
Guinean Rainforest
Congo Rainforest
Malagasy Rainforest
Tropical dry or deciduous forests (incl. Monsoon forests) or woodlands
West African Woodland/Savanna
East African Woodland/Savanna
Congo Woodland/Savanna
Miombo Woodland/Savanna
South African Woodland/Savanna
Malagasy Woodland/Savanna
Malagasy Thorn Forest
Evergreen sclerophyllous forests, scrubs or woodlands
Cape Sclerophyll
Warm deserts and semideserts
Western Sahel
Eastern Sahel
Somalian
Namib
Kalahari
Karroo
Mixed mountain and highland systems with complex zonation
Ethiopian Highlands
Guinean Highlands
Central African Highlands
East African Highlands
South African Highlands
Mixed island systems
Ascension and St. Helena Islands
Comores Islands and Aldabra
Mascarene Islands
Lake systems
Lake Rudolf
Lake Ukerewe (Victoria)
Lake Tanganyika
Lake Malawi (Nyasa)
Antarctic Realm
Tundra communities and barren Antarctic desert
Subtropical and temperate rain forests or woodlands
Australasian Realm
Tropical humid forests
Queensland Coastal
Subtropical and temperate rain forests or woodlands
Tasmanian
Tropical dry or deciduous forests (incl. Monsoon forests) or woodlands
Northern Coastal
Evergreen sclerophyllous forests, scrubs or woodlands
Western Sclerophyll
Eastern Sclerophyll
Brigalow
Warm deserts and semideserts
Western Mulga
Central Desert
Southern Mulga/Saltbush
Tropical grasslands and savannas
Northern Savanna
Northern Grasslands
Temperate grasslands
Eastern Grasslands and Savannas
Indomalayan Realm
Tropical humid forests
Malabar Rainforest
Ceylonese Rainforest
Bengalian Rainforest
Burman Rainforest
Indochinese Rainforest
South Chinese Rainforest
Malayan Rainforest
Tropical dry or deciduous forests (incl. Monsoon forests) or woodlands
Indus-Ganges Monsoon Forest
Burman Monsoon Forest
Thai Monsoon Forest
Mahanadian
Coromandel
Ceylonese Monsoon Forest
Deccan Thorn Forest
Warm deserts and semideserts
Thar Desert
Mixed island systems
Seychelles and Amirantes Islands
Laccadives Islands
Maldives and Chagos Islands
Cocos-Keeling and Christmas Islands
Andaman and Nicobar Islands
Sumatra
Java
Lesser Sunda Islands
Celebes
Borneo
Philippines
Taiwan
Nearctic Realm
Subtropical and temperate rain forests or woodlands
Sitkan
Oregonian
Temperate needle-leaf forests or woodlands
Yukon Taiga
Canadian Taiga
Temperate broad-leaf forests or woodlands, and subpolar deciduous thickets
Eastern Forest
Austroriparian
Evergreen sclerophyllous forests, scrubs or woodlands
Californian
Warm deserts and semideserts
Sonoran
Chihuahuan
Tamaulipan
Cold-winter (continental) deserts and semideserts
Great Basin
Tundra communities and barren Arctic desert
Aleutian Islands
Alaskan Tundra
Canadian Tundra
Arctic Archipelago
Greenland Tundra
Arctic Desert and Icecap
Temperate grasslands
Grasslands
Mixed mountain and highland systems with complex zonation
Rocky Mountains
Sierra-Cascade
Madrean-Cordilleran
Lake systems
Great Lakes
Neotropical Realm
Tropical humid forests
Campechean
Panamanian
Colombian Coastal
Guayanan
Amazon rainforest
Madeiran
Serra do Mar (Bahian coast)
Subtropical and temperate rain forests or woodlands
Brazilian Rainforest (Brazilian Deciduous Forest)
Brazilian Planalto (Brazilian Araucaria Forest)
Valdivian Forest (Chilean Temperate Rain Forest)
Chilean Nothofagus
Tropical dry or deciduous forests (incl. Monsoon forests) or woodlands
Everglades
Sinaloan
Guerreran
Yucatecan (Yucatán)
Central American (Carib-Pacific)
Venezuelan Dry Forest
Venezuelan Deciduous Forest
Ecuadoran Dry Forest
Caatinga
Gran Chaco
Temperate broad-leaf forests or woodlands, and subpolar deciduous thickets
Chilean Araucaria Forest
Evergreen sclerophyllous forests, scrubs or woodlands
Chilean Sclerophyll
Warm deserts and semideserts
Pacific Desert (Peruvian and Atacama Desert)
Monte (Argentinian Thorn-scrub)
Cold-winter (continental) deserts and semideserts
Patagonian
Tropical grasslands and savannas
Llanos
Campos Limpos (Guyana highlands)
Babacu
Campos Cerrados (Campos)
Temperate grasslands
Argentinian Pampas (Pampas)
Uruguayan Pampas
Mixed mountain and highland systems with complex zonation
Northern Andean
Colombian Montane
Yungas (Andean cloud forest)
Puna
Southern Andean
Mixed island systems
Bahamas-Bermudan
Cuban
Greater Antillean (Jamaica, Hispaniola and Puerto Rico)
Lesser Antillean
Revillagigedo Archipelago
Cocos Island
Galapagos Islands
Fernando de Noronha Island
South Trindade Island
Lake systems
Lake Titicaca
Oceanian Realm
Mixed island systems
Papuan
Micronesian
Hawaiian
Southeastern Polynesian
Central Polynesian
New Caledonian
East Melanesian
Palearctic Realm
Subtropical and temperate rain forests or woodlands
Chinese Subtropical Forest
Japanese Evergreen Forest (Japanese Subtropical Forest)
Temperate needle-leaf forests or woodlands
West Eurasian Taiga
East Siberian Taiga
Temperate broad-leaf forests or woodlands, and subpolar deciduous thickets
Icelandic
Subarctic Birchwoods
Kamchatkan
British Isles (British and Irish Forest)
Atlantic (West European Forest)
Boreonemoral (Baltic Lowland)
Middle European Forest (East European Mixed Forest)
Pannonian (Danubian Steppe)
West Anatolian
Manchu-Japanese Mixed Forest
Oriental Deciduous Forest
Evergreen sclerophyllous forests, scrubs or woodlands
Iberian Highlands
Mediterranean Sclerophyll
Warm deserts and semideserts
Sahara
Arabian Desert (Arabia)
Anatolian-Iranian Desert (Turkish-Iranian Scrub-steppe)
Cold-winter (continental) deserts and semideserts
Turanian (Kazakh Desert Scrub-steppe)
Takla-Makan-Gobi Desert
Tibetan
Iranian Desert
Tundra communities and barren Arctic desert
Arctic Desert
Higharctic Tundra
Lowarctic Tundra
Temperate grasslands
Atlas Steppe (Atlas Highlands)
Pontian Steppe (Ukraine-Kazakh Steppe)
Mongolian-Manchurian Steppe (Gobi-Manchurian Steppe)
Mixed mountain and highland systems with complex zonation
Scottish Highlands
Central European Highlands
Balkan Highlands
Caucaso-Iranian Highlands (Caucasus and Kurdistan-Iran Highlands)
Altai Highlands
Pamir-Tian Shan Highlands
Hindu Kush Highlands
Himalayan Highlands
Szechwan Highlands
Mixed island systems
Macaronesian Islands
Ryukyu Islands
Lake systems
Lake Ladoga
Aral Sea
Lake Baikal
Region coding
The hierarchy of the scheme is (with early replaced terms in parentheses):
biogeographic realm (= biogeographic regions and subregions), with 8 categories
biogeographic province (= biotic province), with 193 categories, each characterized by a major biome or biome-complex
biome, with 14 types: tropical humid forests (1); subtropical and temperate rain forests or woodlands (2); temperate needle-leaf forests or woodlands (3); tropical dry or deciduous forests (including monsoon forests) or woodlands (4); temperate broad-leaf forests or woodlands and subpolar deciduous thickets (5); evergreen sclerophyllous forests, scrubs or woodlands (6); warm deserts and semideserts (7); cold-winter (continental) deserts and semideserts (8); tundra communities and barren arctic desert (9); tropical grassland and savannas (10); temperate grasslands (11); mixed mountain and highland systems with complex zonation (12); mixed island systems (13); lake systems (14).
So, for example, the Australian Central Desert province is in the Australasian realm (6), is the 9th biogeographic province in that realm, and its biome falls within "warm deserts and semideserts" (7), so it is coded 6.9.7.
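As a toy sketch, the three-part code from the worked example above can be assembled as follows (the variable names are ours):

realm = 6      # Australasian realm
province = 9   # Central Desert, the 9th biogeographic province in that realm
biome = 7      # warm deserts and semideserts

print(f"{realm}.{province}.{biome}")  # 6.9.7, as in the example above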
The realms and provinces of the scheme are hence coded as follows:
1.1.2 Sitkan province
1.2.2 Oregonian province
1.3.3 Yukon taiga province
1.4.3 Canadian taiga province
1.5.5 Eastern Forest province
1.6.5 Austroriparian province
1.7.6 Californian province
1.8.7 Sonoran province
1.9.7 Chihuahuan province
1.10.7 Tamaulipan province
1.11.8 Great Basin province
1.12.9 Aleutian Islands province
1.13.9 Alaskan tundra province
1.14.9 Canadian tundra province
1.15.9 Arctic Archipelago province
1.16.9 Greenland tundra province
1.17.9 Arctic desert and icecap province
1.18.11 Grasslands province
1.19.12 Rocky Mountains province
1.20.12 Sierra-Cascade province
1.21.12 Madrean-Cordilleran province
1.22.14 Great Lakes province
2.1.2 Chinese Subtropical Forest province
2.2.2 Japanese Evergreen Forest province (= Japanese Subtropical Forest)
2.3.3 West Eurasian Taiga province
2.4.3 East Siberian Taiga province
2.5.5 Icelandian province (= Iceland)
2.6.5 Subarctic Birchwoods province
2.7.5 Kamchatkan province
2.8.5 British Islands province (= British + Irish Forest)
2.9.5 Atlantic province (West European Forest, in part)
2.10.5 Boreonemoral province (Baltic Lowland, in part)
2.11.5 Middle European Forest province (= East European Mixed Forest)
2.12.5 Pannonian province (= Danubian Steppe)
2.13.5 West Anatolian province
2.14.5 Manchu-Japanese Mixed Forest province (= Manchurian + Japanese Mixed Forest)
2.15.6 Oriental Deciduous Forest province
2.16.6 Iberian Highlands province
2.17.7 Mediterranean Sclerophyll province
2.18.7 Sahara province
2.19.7 Arabian Desert province (= Arabia)
2.20.8 Anatolian-Iranian Desert province (= Turkish-Iranian Scrub-steppe)
2.21.8 Turanian province (= Kazakh Desert Scrub-steppe)
2.22.8 Takla-Makan-Gobi Desert steppe province
2.23.8 Tibetan province
2.24.8 Iranian Desert province
2.25.9 Arctic Desert province
2.26.9 Higharctic Tundra province (= Eurasian Tundra, in part)
2.27.9 Lowarctic Tundra province (= Eurasian Tundra, in part)
2.28.11 Atlas Steppe province (= Atlas Highlands)
2.29.11 Pontian Steppe province (= Ukraine-Kazakh Steppe)
2.30.11 Mongolian-Manchurian Steppe province (= Gobi + Manchurian Steppe)
2.31.12 Scottish Highlands province
2.32.12 Central European Highlands province
2.33.12 Balkan Highlands province
2.34.12 Caucaso-Iranian Highlands (= Caucasus + Kurdistan-Iran) province
2.35.12 Altai Highlands province
2.36.12 Pamir-Tian-Shan Highlands province
2.37.12 Hindu Kush Highlands province
2.38.12 Himalayan Highlands province
2.39.12 Szechwan Highlands province
2.40.13 Macaronesian Islands province (= 4 island provinces)
2.41.13 Ryukyu Islands province
2.42.14 Lake Ladoga province
2.43.14 Aral Sea province
2.44.14 Lake Baikal province
3.1.1 Guinean Rain Forest province
3.2.1 Congo Rain Forest province
3.3.1 Malagasy Rain Forest province
3.4.4 West African Woodland/savanna province
3.5.4 East African Woodland/savanna province
3.6.4 Congo Woodland/savanna province
3.7.4 Miombo Woodland/savanna province
3.8.4 South African Woodland/savanna province
3.9.4 Malagasy Woodland/savanna province
3.10.4 Malagasy Thorn Forest province
3.11.6 Cape Sclerophyll province
3.12.7 Western Sahel province
3.13.7 Eastern Sahel province
3.14.7 Somalian province
3.15.7 Namib province
3.16.7 Kalahari province
3.17.7 Karroo province
3.18.12 Ethiopian Highlands province
3.19.12 Guinean Highlands province
3.20.12 Central African Highlands province
3.21.12 East African Highlands province
3.22.12 South African Highlands province
3.23.13 Ascension and St. Helena Islands province
3.24.13 Comores Islands and Aldabra province
3.25.13 Mascarene Islands province
3.26.14 Lake Rudolf province
3.27.14 Lake Ukerewe (Victoria) province
3.28.14 Lake Tanganyika province
3.29.14 Lake Malawi (Nyasa) province
4.1.1 Malabar Rainforest province
4.2.1 Ceylonese Rainforest province
4.3.1 Bengalian Rainforest province
4.4.1 Burman Rainforest province
4.5.1 Indochinese Rainforest province
4.6.1 South Chinese Rainforest province
4.7.1 Malayan Rainforest province
4.8.4 Indus-Ganges Monsoon Forest province
4.9.4 Burma Monsoon Forest province
4.10.4 Thailandian Monsoon Forest province
4.11.4 Mahanadian province
4.12.4 Coromandel province
4.13.4 Ceylonese Monsoon Forest province
4.14.4 Deccan Thorn Forest province
4.15.7 Thar Desert province
4.16.13 Seychelles and Amirantes Islands province
4.17.13 Laccadives Islands province
4.18.13 Maldives and Chagos Islands province
4.19.13 Cocos-Keeling and Christmas Islands province
4.20.13 Andaman and Nicobar Islands province
4.21.13 Sumatra province
4.22.13 Java province
4.23.13 Lesser Sunda Islands province
4.24.13 Celebes province
4.25.13 Borneo province
4.26.13 Philippines province
4.27.13 Taiwan province
5.1.13 Papuan province
5.2.13 Micronesian province
5.3.13 Hawaiian province
5.4.13 Southeastern Polynesian province
5.5.13 Central Polynesian province
5.6.13 New Caledonian province
5.7.13 East-Melanesian province
6.1.1 Queensland Coastal province
6.2.2 Tasmanian province
6.3.4 Northern Coastal province
6.4.6 Western Sclerophyll province
6.5.6 Southern Sclerophyll province
6.6.6 Eastern Sclerophyll province
6.7.6 Brigalow province
6.8.7 Western Mulga province
6.9.7 Central Desert province
6.10.7 Southern Mulga/Saltbush province
6.11.10 Northern Savanna province
6.12.10 Northern Grasslands province
6.13.11 Eastern Grasslands and Savannas province
7.1.2 Neozealandia province
7.2.9 Maudlandia province
7.3.9 Marielandia province
7.4.9 Insulantarctica province
8.1.1 Campechean province (= Campeche)
8.2.1 Panamanian province
8.3.1 Colombian Coastal province
8.4.1 Guyanese province
8.5.1 Amazonian province
8.6.1 Madeiran province
8.7.1 Serra do mar province (= Bahian coast)
8.8.2 Brazilian Rain Forest province (= Brazilian Deciduous Forest)
8.9.2 Brazilian Planalto province (= Brazilian Araucaria Forest)
8.10.2 Valdivian Forest province (= Chilean Temperate Rain Forest, in part)
8.11.2 Chilean Nothofagus province (= Chilean Temperate Rain Forest, in part)
8.12.4 Everglades province
8.13.4 Sinaloan province
8.14.4 Guerreran province
8.15.4 Yucatecan province (= Yucatán)
8.16.4 Central American province (= Carib-Pacific)
8.17.4 Venezuelan Dry Forest province
8.18.4 Venezuelan Deciduous Forest province
8.19.4 Ecuadorian Dry Forest province
8.20.4 Caatinga province
8.21.4 Gran Chaco province
8.22.5 Chilean Araucaria Forest province
8.23.6 Chilean Sclerophyll province
8.24.7 Pacific Desert province (= Peruvian + Atacama Desert)
8.25.7 Monte (= Argentinian Thorn-scrub) province
8.26.8 Patagonian province
8.27.10 Llanos province
8.28.10 Campos Limpos province (= Guyana highlands)
8.29.10 Babacu province
8.30.10 Campos Cerrados province (= Campos)
8.31.11 Argentinian Pampas province (= Pampas)
8.32.11 Uruguayan Pampas province
8.33.12 Northern Andean province (= Northern Andes)
8.34.12 Colombian Montane province
8.35.12 Yungas province (= Andean cloud forest)
8.36.12 Puna province
8.37.12 Southern Andean province (= Southern Andes)
8.38.13 Bahamas-Bermudan province (= Bahamas + Bermuda)
8.39.13 Cuban province
8.40.13 Greater Antillean province (= Jamaica + Hispaniola + Puerto Rico)
8.41.13 Lesser Antillean province (= Lesser Antilles)
8.42.13 Revilla Gigedo Island province
8.43.13 Cocos Island province
8.44.13 Galapagos Islands province
8.45.13 Fernando de Noronha Island province
8.46.13 South Trindade Island province
8.47.14 Lake Titicaca province
See also
Biogeographic provinces of hydrothermal vent systems
Floristic province
List of ecoregions
References
Bibliography
Dasmann, Raymond (1976). "Biogeographical Provinces". CoEvolution Quarterly, Number 11, Fall 1976.
Biogeography
Ecoregions
Floristic provinces
Floristic regions
Landscape ecology
Ecology
Physical geography | List of biogeographic provinces | [
"Biology"
] | 3,847 | [
"Biogeography",
"Ecology"
] |
8,708,358 | https://en.wikipedia.org/wiki/CXCL11 | C-X-C motif chemokine 11 (CXCL11) is a protein that in humans is encoded by the CXCL11 gene.
C-X-C motif chemokine 11 is a small cytokine belonging to the CXC chemokine family that is also called Interferon-inducible T-cell alpha chemoattractant (I-TAC) and Interferon-gamma-inducible protein 9 (IP-9). It is highly expressed in peripheral blood leukocytes, pancreas and liver, with moderate levels in thymus, spleen and lung, and low expression levels in small intestine, placenta and prostate.
Gene expression of CXCL11 is strongly induced by IFN-γ and IFN-β, and weakly induced by IFN-α. This chemokine elicits its effects on its target cells by interacting with the cell surface chemokine receptor CXCR3, with a higher affinity than do the other ligands for this receptor, CXCL9 and CXCL10. CXCL11 is chemotactic for activated T cells. Its gene is located on human chromosome 4 along with many other members of the CXC chemokine family.
Biomarkers
CXCL9, -10, -11 have proven to be valid biomarkers for the development of heart failure and left ventricular dysfunction, suggesting an underlying pathophysiological relation between levels of these chemokines and the development of adverse cardiac remodeling.
References
External links
Further reading
Cytokines | CXCL11 | [
"Chemistry"
] | 342 | [
"Cytokines",
"Signal transduction"
] |
8,709,090 | https://en.wikipedia.org/wiki/M13%20link | The M13 link, formally Link, Cartridge, Metallic Belt, 7.62mm, M13, is the U.S. military designation for a metallic disintegrating link specifically designed for ammunition belt-fed firearms and 7.62×51mm NATO rounds. It was introduced in the mid-20th century. It is the primary link type for the United States and among NATO for the 7.62×51mm NATO cartridge. , it has been in use for over 60 years and is used on the Dillon M134D Minigun, M60 Machine Gun, FN MAG/M240, Mk 48, MG3, HK21, MG5, UKM-2000, K16, SS-77, and Negev NG-7, among others. Some countries redesignated the M13 link when it was adopted.
History
The M13 link replaced the older M1 links designed for .30-06 Springfield ammunition, which bound cartridges to each other at the neck, used on the older M1917 Browning machine gun and M1919 Browning machine gun family, though some conversions of the M1919 to the M13 were done, such as on the U.S. Navy Mark 21 Mod 0 machine gun, which saw service in the Vietnam War. Once converted, it cannot use other link types, as firearms made for the M13 Link are not backward-compatible with the M1 link (or other systems). The M9 link is technically very similar to the M1 link but designed for 12.7×99mm NATO/.50 BMG ammunition used in heavy machine guns like the M2 machine gun. The M1 and M9 links are pull-out designs. Rounds are extracted by pulling them rearward out of the link.
The NATO Standardization Agreement STANAG 2329 Links for Disintegrating Belts for Use with NATO 7.62mm Cartridges described the M13 link in 1982. STANAG 2329 has been rendered inactive.
The DEF STAN 13-33 - Standard NATO 7.62 Millimetre Rounds and Associated Chargers and Links is a 1982 standard by the Ministry of Defence of the United Kingdom. This Defence Standard specifies 7.62 mm small arms ammunition and its associated chargers and links for use by the Ministry of Defence to meet its commitment to NATO in the United Kingdom.
The United States Army MIL-DTL-45403E (3) CONT. DIST. - Link, Cartridge, Metallic Belt, 7.62 Millimeter - M13 2021 specification covers the requirements and verification methods for the Link, Cartridge, Metallic Belt, 7.62mm - M13 for use in 7.62mm machine guns.
Design details
The M13 link is a push-through design. Rounds are extracted by pushing them forward out of the link. The left side of a single link has a semi-circular loop which holds the main body of the cartridge case below the shoulder, and an extension on the right that forms two similar loops which were designed to fit in between the two right-side loops of the next link, and which have a small metal tab that extends down to the cartridge base and fit into the extraction groove of the case.
The M13 link binds the rounds from halfway down the length of the case to the case head. This was designed so that the bolt of the machine gun using the link would come forward upon squeezing the trigger and strip a round from its link from below the cartridge, and the round would be chambered, fired then extracted and ejected. The feeding pawl in the gun would pull the belt to the right as the gun was fired or cocked, sending the loose link out to the right side of the receiver, where the expended case was also ejected, normally separately from a different ejector port to the link.
MIL-L-45403D stipulates the force required to strip a NATO-approved round from the M13 link, as well as a minimum tensile strength for the belt and an approximate weight for a single link.
The links often have an extra anti-corrosion surface treatment, generally (oil-impregnated) black phosphate, and can be collected and reassembled by hand with fresh ammunition, but in practice this is not commonly done as it is labor-intensive, and the inexpensive links are considered disposable. Sometimes the ejected link pieces are collected for reuse or to avoid littering the interior of aircraft and vehicles.
The early 1970s M27 link is a link of smaller, but identical design, used among NATO for 5.56×45mm NATO chambered light machine guns, such as the FN Minimi/M249, HK21, MG4, CETME Ameli, K3, Mini-SS and Negev, among others.
See also
M1 link
M27 link
List of firearms
References
http://www.army-technology.com/contractors/ammunition/eurolinks/
M13 link technical data
Firearm components
Ammunition
Military equipment introduced in the 1960s | M13 link | [
"Technology"
] | 1,013 | [
"Firearm components",
"Components"
] |
8,709,293 | https://en.wikipedia.org/wiki/Margot%20Taul%C3%A9 | Margot Taulé (August 30, 1920 - 11 July 2008) was an architect-engineer and the first woman to become a registered professional engineer and architect in the Dominican Republic.
Life and education
Rose Marguerite Taulé Casso was born on 30 August 1920 in Santo Domingo, Dominican Republic.
She was one of two daughters of French immigrants.
She studied in the Department of Civil Engineering and Architecture in the former Universidad de Santo Domingo (now the Universidad Autónoma de Santo Domingo) from 1940 to 1944, and was awarded her bachelor's degree in Engineering and Architecture in 1948.
Works
She was responsible for the structural design of the building that houses the Dominican National Congress. This structure was commissioned by the dictator Rafael Leónidas Trujillo in the 1960s and is still in use today. She also worked as a structural engineer alongside other great Dominican architects such as Henry Gazón, Guillermo Gonzales, Leo Pou and José A. Caro.
Academia
Margot Taulé also made very significant and lasting contributions to the academic development of the engineering and architecture professions in the Dominican Republic. In 1956 she earned, by competitive examination, the title of Full Professor at the University of Santo Domingo. She held the position until 1964, when the university changed its name to Universidad Autónoma de Santo Domingo.
In 1966 a group of professors and distinguished intellectuals, frustrated with the situation in the Universidad Autónoma de Santo Domingo, founded the Universidad Nacional Pedro Henríquez Ureña (UNPHU). Margot Taulé was one of the founding professors and a member of the main steering committee. At UNPHU she worked as a professor in the civil engineering, architecture and mathematics departments. At various times she also held the positions of Dean of Architecture and Dean of Engineering, and in 2003 she was elected by the board of trustees as President of the university, a position she held until 2005. In 1985 she received the title of Distinguished Professor from UNPHU, citing her contributions to education in engineering and architecture.
Margot Taulé died July 11, 2008, in Santo Domingo, Dominican Republic.
References
1920 births
2008 deaths
Structural engineers
Dominican Republic architects
Dominican Republic women architects
Dominican Republic women engineers
Women engineers | Margot Taulé | [
"Engineering"
] | 424 | [
"Structural engineering",
"Structural engineers"
] |
8,711,785 | https://en.wikipedia.org/wiki/Ulam%20number | In mathematics, the Ulam numbers comprise an integer sequence devised by and named after Stanislaw Ulam, who introduced it in 1964. The standard Ulam sequence (the (1, 2)-Ulam sequence) starts with U1 = 1 and U2 = 2. Then for n > 2, Un is defined to be the smallest integer that is the sum of two distinct earlier terms in exactly one way and larger than all earlier terms.
Examples
As a consequence of the definition, 3 is an Ulam number (1 + 2); and 4 is an Ulam number (1 + 3). (Here 2 + 2 is not a second representation of 4, because the previous terms must be distinct.) The integer 5 is not an Ulam number, because 5 = 1 + 4 = 2 + 3. The first few terms are
1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57, 62, 69, 72, 77, 82, 87, 97, 99, 102, 106, 114, 126, 131, 138, 145, 148, 155, 175, 177, 180, 182, 189, 197, 206, 209, 219, 221, 236, 238, 241, 243, 253, 258, 260, 273, 282, ... .
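The definition translates directly into a short (unoptimized) program; a sketch:

def ulam(count):
    """First `count` Ulam numbers, computed directly from the definition."""
    seq = [1, 2]
    while len(seq) < count:
        n = seq[-1] + 1
        # advance n until it has exactly one representation as a sum of
        # two distinct earlier terms
        while sum(a + b == n for i, a in enumerate(seq)
                  for b in seq[i + 1:]) != 1:
            n += 1
        seq.append(n)
    return seq

print(ulam(12))  # [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28]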
There are infinitely many Ulam numbers. For, after the first n numbers in the sequence have already been determined, it is always possible to extend the sequence by one more element: Un−1 + Un is uniquely represented as a sum of two of the first n numbers, and there may be other smaller numbers that are also uniquely represented in this way, so the next element can be chosen as the smallest of these uniquely representable numbers.
Ulam is said to have conjectured that the numbers have zero density, but they seem to have a density of approximately 0.07398.
Properties
Apart from 1 + 2 = 3, no subsequent Ulam number can be the sum of the two immediately preceding Ulam numbers.
Proof: Assume that for some n > 2, Un−1 + Un = Un+1 is the required sum in only one way; then Un−2 + Un also produces a sum in only one way, and it falls between Un and Un+1. This contradicts the condition that Un+1 is the next smallest Ulam number.
For n > 2, any three consecutive Ulam numbers (Un−1, Un, Un+1) as integer sides will form a triangle.
Proof: The previous property states that for n > 2, Un−2 + Un ≥ Un+1. Consequently Un−1 + Un > Un+1, and because Un−1 < Un < Un+1 the triangle inequality is satisfied.
The sequence of Ulam numbers forms a complete sequence.
Proof: By definition Un = Uj + Uk where j < k < n and is the smallest integer that is the sum of two distinct smaller Ulam numbers in exactly one way. This means that for all Un with n > 3, the greatest value that Uj can have is Un−3 and the greatest value that Uk can have is Un−1.
Hence Un ≤ Un−1 + Un−3 < 2Un−1 and U1 = 1, U2 = 2, U3 = 3. This is a sufficient condition for Ulam numbers to be a complete sequence.
For every integer n > 1 there is always at least one Ulam number Uj such that n ≤ Uj < 2n.
Proof: It has been proved that there are infinitely many Ulam numbers and they start at 1. Therefore for every integer n > 1 it is possible to find j such that Uj−1 ≤ n ≤ Uj. From the proof above for n > 3, Uj ≤ Uj−1 + Uj−3 < 2Uj−1. Therefore n ≤ Uj < 2Uj−1 ≤ 2n. Also for n = 2 and 3 the property is true by calculation.
In any sequence of 5 consecutive positive integers {i, i + 1,..., i + 4}, i > 4 there can be a maximum of 2 Ulam numbers.
Proof: Assume that the sequence {i, i + 1,..., i + 4} has its first value i = Uj an Ulam number then it is possible that i + 1 is the next Ulam number Uj+1. Now consider i + 2, this cannot be the next Ulam number Uj+2 because it is not a unique sum of two previous terms. i + 2 = Uj+1 + U1 = Uj + U2. A similar argument exists for i + 3 and i + 4.
Inequalities
Ulam numbers are pseudo-random and too irregular to have tight bounds. Nevertheless from the properties above, namely, at worst the next Ulam number Un+1 ≤ Un + Un−2 and in any five consecutive positive integers at most two can be Ulam numbers, it can be stated that
≤ Un ≤ Nn+1 for n > 0,
where Nn are the numbers in Narayana's cows sequence: 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, ... with the recurrence relation Nn = Nn−1 + Nn−3 that starts at N0.
Hidden structure
It has been observed that the first 10 million Ulam numbers satisfy cos(2.5714474995 Un) < 0, except for the four elements 2, 3, 47, and 69 (this has since been verified for a far larger initial segment of the sequence). Inequalities of this type are usually true for sequences exhibiting some form of periodicity, but the Ulam sequence does not seem to be periodic and the phenomenon is not understood. It can be exploited to do a fast computation of the Ulam sequence (see External links).
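A quick numerical check of this observation, reusing the direct construction from the sketch in the Examples section (the constant is the one quoted above):

import math

def ulam(count):  # same direct construction as the earlier sketch
    seq = [1, 2]
    while len(seq) < count:
        n = seq[-1] + 1
        while sum(a + b == n for i, a in enumerate(seq)
                  for b in seq[i + 1:]) != 1:
            n += 1
        seq.append(n)
    return seq

lam = 2.5714474995
print([u for u in ulam(60) if math.cos(lam * u) >= 0])  # expected: [2, 3, 47, 69]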
Generalizations
The idea can be generalized as (u, v)-Ulam numbers by selecting different starting values (u, v). A sequence of (u, v)-Ulam numbers is regular if the sequence of differences between consecutive numbers in the sequence is eventually periodic. When v is an odd number greater than three, the (2, v)-Ulam numbers are regular. When v is congruent to 1 (mod 4) and at least five, the (4, v)-Ulam numbers are again regular. However, the Ulam numbers themselves do not appear to be regular.
A sequence of numbers is said to be s-additive if each number in the sequence, after the initial 2s terms of the sequence, has exactly s representations as a sum of two previous numbers. Thus, the Ulam numbers and the (u, v)-Ulam numbers are 1-additive sequences.
If a sequence is formed by appending the largest number with a unique representation as a sum of two earlier numbers, instead of appending the smallest uniquely representable number, then the resulting sequence is the sequence of Fibonacci numbers.
Notes
References
External links
Ulam Sequence from MathWorld
Fast computation of the Ulam sequence by Philip Gibbs
Description of Algorithm by Donald Knuth
The github page of Daniel Ross
Eponymous numbers in mathematics
Integer sequences | Ulam number | [
"Mathematics"
] | 1,472 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Numbers",
"Number theory"
] |
8,712,440 | https://en.wikipedia.org/wiki/Switched%20video | Switched video or switched digital video (SDV), sometimes referred to as switched broadcast (SWB), is a telecommunications industry term for a network scheme for distributing digital video via a cable. Switched video sends the digital video more efficiently freeing bandwidth. The scheme applies to digital video distribution both on typical cable TV systems using QAM channels, or on IPTV systems.
Description
In hybrid fibre-coaxial systems, a fiber optic network extending from the operator's head end carries video channels out to a fiber optic node that services up to 2000 end points. Video is then sent via coaxial cable. Note that only a percentage of these homes are actively watching channels at a given time. Rarely are all channels being accessed by the homes in the service group.
In a switched video system, the unwatched channels do not need to be sent.
In US cable systems, equipment in the home sends a channel request signal back to the distribution hub. If a channel is requested, the distribution hub allocates a QAM channel and transmits the channel to the coaxial cable. For this to work, the home equipment must have two-way communication ability. Switched video uses the same mechanisms as video on demand and may be viewed as non-ending video on demand that users share.
Technical
Two-way communication is handled differently between cable and IPTV schemes. IPTV uses Internet communication protocols but requires a different distribution infrastructure. US cable companies elected the less costly approach of upgrading existing infrastructure. In the upgrade approach, various proprietary schemes use specific frequencies for messaging the distribution hub.
For switched video to work on cable systems, digital television users in a subscription group must have devices capable of communicating to the distribution hub in a compatible manner. Unlike other features dependent on two-way communication such as video on demand, the requirement to upgrade all digital set-top boxes within a group makes conversion to switched video expensive. CableLabs proposed in the CableCARD 2.0 specification that two-way communication be supported with a scheme that required more powerful hardware capable of running Java software. Many cable companies indicated they would build lower cost devices that do not require this OCAP programming environment, so that upgrading to switched video would not be as costly. Consumer electronics companies also prefer a lighter weight solution, and so absent a standard, the conversion to switched video may require many years.
History
BigBand Networks (acquired by Arris Group in 2011) was the switched video pioneer, and received the Technology & Engineering Emmy Award in 2008 for innovation in the HFC market.
Major vendors like Arris Group and Cisco also provide SDV solutions for the cable operators. An emerging market supplies back office applications to analyze and control performance.
See also
Cable television
Internet Protocol television (IPTV)
Quadrature amplitude modulation
References
External links
Definition of switched video - PC Magazine Overview
Switched Digital Video Architecture Guide - Cisco White Paper
How Switched Digital Video Works - HowStuffWorks.com Outline
Using Bandwidth More Efficiently with Switched Digital Video - Motorola White Paper (archived)
Broadband
Cable television technology
Streaming television | Switched video | [
"Technology"
] | 617 | [
"Multimedia",
"Streaming television"
] |
8,712,675 | https://en.wikipedia.org/wiki/Hamming%287%2C4%29 | In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term Hamming code often refers to this specific code that Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes.
The Hamming code adds three additional check bits to every four data bits of the message. Hamming's (7,4) algorithm can correct any single-bit error, or detect all single-bit and two-bit errors. In other words, the minimal Hamming distance between any two correct codewords is 3, and received words can be correctly decoded if they are at a distance of at most one from the codeword that was transmitted by the sender. This means that for transmission medium situations where burst errors do not occur, Hamming's (7,4) code is effective (as the medium would have to be extremely noisy for two out of seven bits to be flipped).
In quantum information, the Hamming (7,4) code is used as the basis for the Steane code, a type of CSS code used for quantum error correction.
Goal
The goal of the Hamming codes is to create a set of parity bits that overlap so that a single-bit error in a data bit or a parity bit can be detected and corrected. While multiple overlaps can be created, the general method is presented in Hamming codes.
{| class="wikitable"
|-
!Bit # !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7
|-
!Transmitted bit !! p1 !! p2 !! d1 !! p3 !! d2 !! d3 !! d4
|-
!p1
| Yes || No || Yes || No || Yes || No || Yes
|-
!p2
| No || Yes || Yes || No || No || Yes || Yes
|-
!p3
| No || No || No || Yes || Yes || Yes || Yes
|}
This table describes which parity bits cover which transmitted bits in the encoded word. For example, p2 provides an even parity for bits 2, 3, 6, and 7. It also details which transmitted bit is covered by which parity bit by reading the column. For example, d1 is covered by p1 and p2 but not p3. This table will have a striking resemblance to the parity-check matrix (H) in the next section.
Furthermore, if the parity columns in the above table were removed
{| class="wikitable"
|-
! !! d1 !! d2 !! d3 !! d4
|-
!p1
| Yes || Yes || No || Yes
|-
!p2
| Yes || No || Yes || Yes
|-
!p3
| No || Yes || Yes || Yes
|}
then resemblance to rows 1, 2, and 4 of the code generator matrix (G) below will also be evident.
So, by picking the parity bit coverage correctly, all errors with a Hamming distance of 1 can be detected and corrected, which is the point of using a Hamming code.
Hamming matrices
Hamming codes can be computed in linear algebra terms through matrices because Hamming codes are linear codes. For the purposes of Hamming codes, two Hamming matrices can be defined: the code generator matrix G and the parity-check matrix H:

G := \begin{pmatrix}
1 & 1 & 0 & 1 \\
1 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}, \qquad
H := \begin{pmatrix}
1 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{pmatrix}
As mentioned above, rows 1, 2, and 4 of G should look familiar as they map the data bits to their parity bits:
p1 covers d1, d2, d4
p2 covers d1, d3, d4
p3 covers d2, d3, d4
The remaining rows (3, 5, 6, 7) map the data to their positions in encoded form, and each of those rows contains a single 1, so it is an identical copy of the corresponding data bit. In fact, these four rows are linearly independent and form the 4×4 identity matrix (by design, not coincidence).
Also as mentioned above, the three rows of H should be familiar. These rows are used to compute the syndrome vector at the receiving end and if the syndrome vector is the null vector (all zeros) then the received word is error-free; if non-zero then the value indicates which bit has been flipped.
The four data bits, assembled as a vector p, are pre-multiplied by G (i.e., Gp) and taken modulo 2 to yield the encoded value that is transmitted. The original 4 data bits are converted to seven bits (hence the name "Hamming(7,4)") with three parity bits added to ensure even parity using the above data bit coverages. The first table above shows the mapping between each data and parity bit into its final bit position (1 through 7) but this can also be presented in a Venn diagram. The first diagram in this article shows three circles (one for each parity bit) and encloses data bits that each parity bit covers. The second diagram (shown to the right) is identical but, instead, the bit positions are marked.
For the remainder of this section, the following 4 bits (shown as a column vector) will be used as a running example:

p = (1, 0, 1, 1)^T
Channel coding
Suppose we want to transmit this data (1011) over a noisy communications channel. Specifically, a binary symmetric channel meaning that error corruption does not favor either zero or one (it is symmetric in causing errors). Furthermore, all source vectors are assumed to be equiprobable. We take the product of G and p, with entries modulo 2, to determine the transmitted codeword x:

x = Gp = (0, 1, 1, 0, 0, 1, 1)^T (mod 2)
This means that 0110011 would be transmitted instead of transmitting 1011.
Programmers concerned about multiplication should observe that each row of the result is the least significant bit of the population count of set bits resulting from the row and column being bitwise ANDed together, rather than multiplied.
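A sketch of that bit-level trick, packing each row of G into a 4-bit mask (the bit layout p1 p2 d1 p3 d2 d3 d4 follows the article; the helper names are ours):

G_ROWS = [0b1101, 0b1011, 0b1000, 0b0111, 0b0100, 0b0010, 0b0001]

def encode(data4: int) -> list[int]:
    """Encode 4 data bits (d1 as the most significant) into 7 codeword bits."""
    # parity of (row AND data) = least significant bit of the population count
    return [bin(row & data4).count("1") & 1 for row in G_ROWS]

print(encode(0b1011))  # [0, 1, 1, 0, 0, 1, 1], the codeword 0110011 from above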
In the adjacent diagram, the seven bits of the encoded word are inserted into their respective locations; from inspection it is clear that the parity of the red, green, and blue circles are even:
red circle has two 1's
green circle has two 1's
blue circle has four 1's
What will be shown shortly is that if, during transmission, a bit is flipped then the parity of two or all three circles will be incorrect and the errored bit can be determined (even if one of the parity bits) by knowing that the parity of all three of these circles should be even.
Parity check
If no error occurs during transmission, then the received codeword r is identical to the transmitted codeword x:

r = x = (0, 1, 1, 0, 0, 1, 1)^T
The receiver multiplies H and r to obtain the syndrome vector z, which indicates whether an error has occurred, and if so, for which codeword bit. Performing this multiplication (again, entries modulo 2):

z = Hr = (0, 0, 0)^T
Since the syndrome z is the null vector, the receiver can conclude that no error has occurred. This conclusion is based on the observation that when the data vector is multiplied by G, a change of basis occurs into a vector subspace that is the kernel of H. As long as nothing happens during transmission, r will remain in the kernel of H and the multiplication will yield the null vector.
Error correction
Otherwise, suppose a single bit error has occurred, so that we can write

r = x + ei

modulo 2, where ei is the ith unit vector, that is, a zero vector with a 1 in the ith place, counting from 1. Thus the above expression signifies a single bit error in the ith place.
Now, if we multiply this vector by H:

Hr = H(x + ei) = Hx + Hei
Since x is the transmitted data, it is without error, and as a result, the product of H and x is zero. Thus

Hr = 0 + Hei = Hei

Now, the product of H with the ith standard basis vector picks out the ith column of H; the error therefore occurs in the place corresponding to the column of H that matches the syndrome.
For example, suppose we have introduced a bit error on bit #5:

r = x + e5 = (0, 1, 1, 0, 1, 1, 1)^T
The diagram to the right shows the bit error (shown in blue text) and the bad parity created (shown in red text) in the red and green circles. The bit error can be detected by computing the parity of the red, green, and blue circles. If a bad parity is detected then the data bit that overlaps only the bad parity circles is the bit with the error. In the above example, the red and green circles have bad parity so the bit corresponding to the intersection of red and green but not blue indicates the errored bit.
Now,

z = Hr = (1, 0, 1)^T

which corresponds to the fifth column of H. Furthermore, the general algorithm used (see Hamming code#General algorithm) was intentional in its construction so that the syndrome of 101 corresponds to the binary value of 5, which indicates the fifth bit was corrupted. Thus, an error has been detected in bit 5, and can be corrected (simply flip or negate its value):

rcorrected = (0, 1, 1, 0, 0, 1, 1)^T
This corrected received value indeed matches the transmitted value x from above.
Decoding
Once the received vector has been determined to be error-free or corrected if an error occurred (assuming only zero or one bit errors are possible) then the received data needs to be decoded back into the original four bits.
First, define a matrix R:

R := \begin{pmatrix}
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}
Then the received value, pr, is equal to Rr. Using the running example from above,

pr = Rr = (1, 0, 1, 1)^T
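The receiving side can be sketched in the same style: compute the syndrome with the rows of H, flip the indicated bit, then pick out the data positions (bit #1 is kept as the most significant bit of the 7-bit integer; names are ours):

H_ROWS = [0b1010101, 0b0110011, 0b0001111]  # parity rows for weights 1, 2, 4

def correct_and_extract(received7: int) -> int:
    """Correct a single-bit error and return the 4 data bits d1..d4."""
    syndrome = 0
    for k, row in enumerate(H_ROWS):
        parity = bin(row & received7).count("1") & 1
        syndrome |= parity << k           # syndrome value = errored position
    if syndrome:
        received7 ^= 1 << (7 - syndrome)  # flip the corrupted bit
    # data bits sit at positions 3, 5, 6 and 7 of the codeword
    return ((received7 >> 4 & 1) << 3 | (received7 >> 2 & 1) << 2
            | (received7 >> 1 & 1) << 1 | (received7 & 1))

print(bin(correct_and_extract(0b0110111)))  # 0b1011: the bit-5 error is fixed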
Multiple bit errors
It is not difficult to show that only single bit errors can be corrected using this scheme. Alternatively, Hamming codes can be used to detect single and double bit errors, by merely noting that the product of H with the received word is nonzero whenever errors have occurred. In the adjacent diagram, bits 4 and 5 were flipped. This yields only one circle (green) with an invalid parity, but the errors are not recoverable.
However, the Hamming (7,4) and similar Hamming codes cannot distinguish between single-bit errors and two-bit errors. That is, two-bit errors appear the same as one-bit errors. If error correction is performed on a two-bit error the result will be incorrect.
Similarly, Hamming codes cannot detect or recover from an arbitrary three-bit error; Consider the diagram: if the bit in the green circle (colored red) were 1, the parity checking would return the null vector, indicating that there is no error in the codeword.
All codewords
Since the source is only 4 bits, there are only 16 possible transmitted words. Included is the eight-bit value used if an extra parity bit is employed (see Hamming(7,4) code with an additional parity bit). (The data bits are shown in blue; the parity bits are shown in red; and the extra parity bit shown in green.)
E7 lattice
The Hamming(7,4) code is closely related to the E7 lattice and, in fact, can be used to construct it, or more precisely, its dual lattice E7∗ (a similar construction for E7 uses the dual code [7,3,4]2). In particular, taking the set of all vectors x in Z7 with x congruent (modulo 2) to a codeword of Hamming(7,4), and rescaling by 1/√2, gives the lattice E7∗
This is a particular instance of a more general relation between lattices and codes. For instance, the extended (8,4)-Hamming code, which arises from the addition of a parity bit, is also related to the E8 lattice.
References
External links
A programming problem about the Hamming Code(7,4)
Coding theory
Error detection and correction
Computer arithmetic | Hamming(7,4) | [
"Mathematics",
"Engineering"
] | 2,374 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction",
"Computer arithmetic",
"Arithmetic"
] |
8,713,219 | https://en.wikipedia.org/wiki/Helical%20wheel | A helical wheel is a type of plot or visual representation used to illustrate the properties of alpha helices in proteins.
The sequence of amino acids that make up a helical region of the protein's secondary structure are plotted in a rotating manner where the angle of rotation between consecutive amino acids is 100°, so that the final representation looks down the helical axis.
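Because the projection is purely geometric, wheel positions follow directly from the 100° step. The short C++ sketch below prints each residue's angular position and its x, y coordinates as seen down the helical axis; the sequence and radius are arbitrary illustrative choices, not tied to any of the software packages mentioned later in this article.

#include <cmath>
#include <cstdio>
#include <string>

int main() {
    const std::string seq = "LKALEEKLKALEEK";  // hypothetical helical sequence
    const double kPi = 3.14159265358979323846;
    const double stepDeg = 100.0;   // rotation between consecutive residues
    const double radius = 1.0;      // arbitrary plotting radius
    for (std::size_t i = 0; i < seq.size(); ++i) {
        double deg = std::fmod(i * stepDeg, 360.0);
        double rad = deg * kPi / 180.0;
        std::printf("%2zu %c  angle %5.1f deg  (x, y) = (%+.3f, %+.3f)\n",
                    i + 1, seq[i], deg,
                    radius * std::cos(rad), radius * std::sin(rad));
    }
}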
Polarity and characteristics
The plot reveals whether hydrophobic amino acids are concentrated on one side of the helix, usually with polar or hydrophilic amino acids on the other. This arrangement is common in alpha helices within globular proteins, where one face of the helix is oriented toward the hydrophobic core and one face is oriented toward the solvent-exposed surface. Specific patterns characteristic of protein folds and protein docking motifs are also revealed, as in the identification of leucine zipper dimerization regions and coiled coils. This projection diagram is often called an "Edmundson wheel" after its inventor.
Drawing helical wheels
Helical wheels can be drawn by a variety of software packages including helixvis in R, heliquest in R, or via the HELIQUEST server.
References
Further reading
External links
Less traditional, more colorful wheels (requires Macromedia Flash)
DrawCoil 1.0
Helical Wheel Projections, UC Riverside
NetWheels, High quality Helical Wheel and Net projections
High-quality helical wheels in R
Helixvis
Heliquest
Protein structure
Protein methods
Amino acids | Helical wheel | [
"Chemistry",
"Biology"
] | 301 | [
"Biochemistry methods",
"Biomolecules by chemical classification",
"Protein methods",
"Protein biochemistry",
"Amino acids",
"Structural biology",
"Protein structure"
] |
8,713,276 | https://en.wikipedia.org/wiki/Inventory%20Information%20Approval%20System | The Inventory Information Approval System, or IIAS, is a point-of-sale technology used by retailers that accept FSA debit cards, which are issued for use with medical flexible spending accounts (FSAs), health reimbursement accounts (HRAs), and some health savings accounts (HSAs) in the United States.
By the end of 2007, all grocery stores, discount stores, and online pharmacies that accept FSA debit cards were required to have an IIAS; by the end of 2008, most pharmacies were required to have an IIAS as well.
The predecessor to the current IIAS was developed by the online retailer drugstore.com for its "FSA store" in 2005; it was first introduced to brick-and-mortar retailing by Walgreens in 2006. Wal-Mart became the first discounter with an IIAS in late 2006.
How IIAS works
IIAS is similar to the system used by grocery stores ever since they introduced the first barcode scanners in the 1970s to separate items eligible for purchase under the Food Stamp Program from those that are not eligible. Every item in the grocery store's database is flagged "yes" or "no" for food-stamp eligibility; the scanner automatically keeps a separate total for food-stamp items. In the beginning, the cashier pressed a special "food-stamp total" key, and the customer presented paper food stamps; today, the customer swipes an Electronic Benefit Transfer (EBT) card and selects the "food stamp" account, and the register charges only the food-stamp total to the EBT card. The remaining balance must be paid for by other means.
IIAS works in much the same way, but with medical FSAs, HRAs, or HSAs instead of food stamps. (Usually, the term "FSA" is used to cover all of them; HRAs, HSAs, and non-medical FSAs are relatively rare, and HSAs can also have regular debit cards, though many of them have FSA debit cards instead.) The process, illustrated by the code sketch after this list, works as follows:
Every item in the store's scanner database is flagged "yes" or "no" for FSA eligibility. (This flag is separate from the one for food stamps, if there is one.)
Prescription drugs are usually not in the main scanner database (though they may be made scannable by tying the pharmacy system into the scanners), but they are almost always FSA-eligible; therefore, the pharmacy department is often categorically flagged as FSA-eligible, the only department to be so treated. (In contrast, multiple departments of most grocery stores are categorically flagged as food-stamp eligible, including the meat, produce, and dry-grocery departments.)
At checkout, the scanner (for brick-and-mortar retailers) or shopping cart (for online retailers) keeps a separate total for those items that are "FSA-eligible".
If an FSA debit card is presented for payment, the scanner or shopping cart will charge the card, but for no more than the "FSA-eligible" total.
If there are other items in the order (or if the FSA debit card did not pay for all eligible items), the scanner or shopping cart then demands another form of payment, such as cash, check, credit card or debit card, to pay for the remaining items.
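A minimal C++ sketch of the split-tender arithmetic described in the list above; the item names, prices, and eligibility flags are invented for illustration, since a production IIAS is a point-of-sale system keyed to a UPC/SKU database.

#include <cstdio>
#include <vector>

// One scanned line item: price in cents plus its FSA-eligibility flag.
struct Item {
    const char* name;
    int cents;
    bool fsa_eligible;
};

int main() {
    std::vector<Item> order = {
        {"bandages",      599, true },  // flagged "yes" in the item database
        {"toothpaste",    349, false},  // flagged "no"
        {"pain reliever", 799, true },
    };
    int fsa_total = 0, other_total = 0;
    for (const Item& it : order)
        (it.fsa_eligible ? fsa_total : other_total) += it.cents;
    // The FSA debit card may be charged no more than the eligible total;
    // anything else must be paid with another form of payment.
    std::printf("charge to FSA debit card: $%d.%02d\n",
                fsa_total / 100, fsa_total % 100);
    std::printf("balance due by other tender: $%d.%02d\n",
                other_total / 100, other_total % 100);
}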
IIAS does have one additional requirement that is not normally found with food stamps, though the U.S. Department of Agriculture can audit retailers directly for similar purposes: Beginning January 1, 2007, the merchant must make a record of each transaction available to the employer, or more commonly, to the employer's FSA or HRA provider. This can be done contemporaneously with the transaction, or it may be provided later if the Internal Revenue Service ever audits the employer.
The terminology used by the IRS in its descriptions of IIAS may seem unusual at first. This stems from the history of IIAS, as it was first developed by an online retailer (drugstore.com) and only later adapted to brick-and-mortar retailing. For example, IIAS is described by the IRS as an "inventory control" system tied to SKUs; but it is generally easier to understand as it was implemented by Walgreens and Wal-Mart, i.e., as a point-of-sale system tied to UPCs.
IRS requirements to use IIAS
Though IIAS was first used in 2005, it was not officially approved by the Internal Revenue Service until July 2006, in IRS Notice 2006–69. At the same time, the IRS decided to crack down on FSA/HRA providers that were not following prior IRS guidance on FSA debit cards. As part of this, the IRS decided that grocery and discount stores would not be allowed to accept FSA debit cards unless they installed an IIAS; they decided it would be too easy to misuse the cards if they could be used at grocers and discounters for anything they sold, even if the grocer or discounter also had a pharmacy. However, they permitted stand-alone chain or independent pharmacies (known as "true pharmacies") to accept the card without an IIAS.
Grocers and discounters immediately challenged the IRS ruling, claiming that their pharmacies were being discriminated against, and that since most "true pharmacies" sold ineligible goods as well, the risk from them was just as great. Therefore, two changes were made by IRS Notice 2007–02 in December 2006:
Grocers and discounters are allowed to keep accepting the cards until December 31, 2007; this was to give them sufficient time to install an IIAS.
"True pharmacies" are required to install an IIAS after June 30, 2009, unless at least 90% of the individual pharmacy's sales are of "FSA-eligible" items, i.e., prescription drugs or over-the-counter (OTC) items.
Most major pharmacy chains report that 60–65% of their sales come from the pharmacy; therefore, OTC would have to account for 25–30% of their total sales for them to qualify, which is unlikely—especially since each individual pharmacy must qualify separately. Therefore, only independent pharmacies are likely to qualify for the exemption.
Because of this ruling, by 2009 most grocers, discounters, and chain or Internet pharmacies in the U.S. must have an IIAS in place in order to accept FSA debit cards.
Importance of IIAS
In addition to the above IRS requirements, IIAS is important in promoting the use of tax-favored health accounts, especially FSAs (which are usually set up by employees), for these reasons:
While other IRS-approved "auto-adjudication" systems for electronic substantiation of FSA debit card charges are geared towards health plan expenses, such as copay matching or electronic transmittal of explanations of benefits, IIAS is the only one that is designed for use with over-the-counter drugs and similar items (OTC) as well as prescription drugs.
IIAS is the first system with 100% "auto-adjudication" of an entire class of FSA debit card charges that has been widely adopted by the FSA industry. A few FSA vendors had previously used proprietary systems which provided 100% auto-adjudication of prescription charges through a pharmacy benefits manager, but they ran into numerous technical and educational issues and were not adaptable to OTC.
Some of the IRS rules on what OTC items are and are not eligible for FSAs have proven rather arcane in practice; for example, condoms are OK since they prevent pregnancy, but K-Y Jelly is not if it is used to lubricate them. IIAS effectively manages this problem by verifying eligibility of each OTC item at point-of-sale.
Both paper claims and manual substantiation of FSA debit card charges often required the submission of receipts with "full names" of OTC items; but many stores abbreviate item names in such a way that it is almost impossible to tell if the item is eligible or not. Also, most providers did not reimburse sales tax on paper claims with "mixed" FSA/non-FSA receipts because they could not "split" the tax line item without being versed in the sales tax laws of every state and locality in the U.S., a near impossibility. IIAS avoids this by having the retailer itself verify item eligibility and "split" the sales tax.
The process of demanding receipts or reimbursement for FSA debit card charges that are not "auto-adjudicated", known as "pay and chase" in the industry (a term recognized by the IRS in Notice 2007–02), proved particularly cumbersome for OTC items due to the lack of "auto-adjudication" systems and the high potential for fraudulent or erroneous charges; IIAS eliminates this by providing an "auto-adjudication" system for OTC while preventing many fraudulent or erroneous charges at retailers.
Since IIAS eliminates many of the roadblocks that previously existed for use of medical FSAs at retailers (especially for OTC items), it is hoped that it will lead to increased enrollment in medical FSAs.
References
IRS Notice 2006–69
IRS Notice 2007-02
Example of an IIAS system
Debit cards
Retail store elements
E-commerce in the United States
Healthcare in the United States | Inventory Information Approval System | [
"Technology"
] | 1,982 | [
"Components",
"Retail store elements"
] |
8,713,532 | https://en.wikipedia.org/wiki/Tekufah | Tekufot (, singular təqufā, literally, "turn" or "cycle") are the four seasons of the year recognized by Talmud writers. According to Samuel Yarḥinai, each tekufah marks the beginning of a period of 91 days 7½ hours; four such periods make up a year of 365¼ days, as the arithmetic after the list below shows. The four tekufot are:
Tekufat Nisan, the vernal equinox, when the sun enters Aries; this is the beginning of spring, or "eit hazera" (seed-time), when day and night are equal.
Tekufat Tammuz, the summer solstice, when the sun enters Cancer; this is the summer season, or et ha-katsir (harvest-time), when the day is the longest in the year.
Tekufat Tishrei, the autumnal equinox, when the sun enters Libra, and autumn, or "et ha-batsir" (vintage-time), begins, and when the day again equals the night.
Tekufat Tevet, the winter solstice, when the sun enters Capricornus; this is the beginning of winter, or "et ha-ḥoref" (winter-time) when the night is the longest during the year.
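The 91-day figure is simple arithmetic: four such periods reconstitute Samuel's solar year of 365¼ days, the same length as the Julian year:

4 × (91 days 7½ hours) = 364 days + 30 hours = 365 days 6 hours = 365¼ days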
Superstition
An ancient superstition is connected with the tekufot. All water that may be in the house or stored away in vessels in the first hour of the tekufah is thrown away in the belief that the water is then poisoned, and if drunk would cause swelling of the body, sickness, and sometimes death. Several reasons are advanced for this. Some say it is because the angels who protect the water change guard at the tekufah and leave it unwatched for a short time. Others say that Cancer fights with Libra and drops blood into the water. Another authority accounts for the drops of blood in the water at Tekufat Nisan by pointing out that the waters in Egypt turned to blood at that particular moment. At Tekufat Tammuz, Moses smote the rock and caused drops of blood to flow from it. At Tekufat Tishrei the knife which Abraham held to slay Isaac dropped blood. At Tekufat Tevet, Jephthah sacrificed his daughter.
The origin of the superstition cannot be traced. Hai ben Sherira, in the 10th century, in reply to a question as to the prevalence of this custom in the "West" (i.e., west of Mesopotamia), said it was followed only so that the new season might be begun with a supply of fresh, sweet water. Ibn Ezra ridicules the fear that the tekufah water will cause swelling and ascribes the belief to the "gossip of old women." Hezekiah da Silva, however, warns his co-religionists to pay no attention to ibn Ezra's remarks, asserting that in his time, many persons who drank water when the tekufah occurred fell ill and died in consequence. Da Silva says the principal danger lies in the first tekufah (Nisan), and the beadle made a special announcement of its occurrence to the congregation. The danger lurks only in unused water, not in water that has been boiled or used in salting or pickling. The danger in unused water may be avoided by putting in it a piece of iron or an iron vessel. Yaakov ben Moshe Levi Moelin required that a new iron nail should be lowered using a string into the water used for baking matza during Tekufat Nisan.
Notes
References
Bibliography
External links
Jewish Encyclopedia article for Tekufah, by Joseph Jacobs and Judah David Eisenstein.
Hebrew calendar
Winter solstice
Summer solstice
Spring equinox
Autumn equinox | Tekufah | [
"Astronomy"
] | 789 | [
"Time in astronomy",
"Astronomical events",
"Winter solstice",
"Summer solstice"
] |
8,714,796 | https://en.wikipedia.org/wiki/Chou%E2%80%93Fasman%20method | The Chou–Fasman method is an empirical technique for the prediction of secondary structures in proteins, originally developed in the 1970s by Peter Y. Chou and Gerald D. Fasman. The method is based on analyses of the relative frequencies of each amino acid in alpha helices, beta sheets, and turns based on known protein structures solved with X-ray crystallography. From these frequencies a set of probability parameters were derived for the appearance of each amino acid in each secondary structure type, and these parameters are used to predict the probability that a given sequence of amino acids would form a helix, a beta strand, or a turn in a protein. The method is at most about 50–60% accurate in identifying correct secondary structures, which is significantly less accurate than the modern machine learning–based techniques.
Amino acid propensities
The original Chou–Fasman parameters found some strong tendencies among individual amino acids to prefer one type of secondary structure over others. Alanine, glutamate, leucine, and methionine were identified as helix formers, while proline and glycine, due to the unique conformational properties of their peptide bonds, commonly end a helix. The original Chou–Fasman parameters were derived from a very small and non-representative sample of protein structures due to the small number of such structures that were known at the time of their original work. These original parameters have since been shown to be unreliable and have been updated from a current dataset, along with modifications to the initial algorithm.
The Chou–Fasman method takes into account only the probability that each individual amino acid will appear in a helix, strand, or turn. Unlike the more complex GOR method, it does not reflect the conditional probabilities of an amino acid to form a particular secondary structure given that its neighbors already possess that structure. This lack of cooperativity increases its computational efficiency but decreases its accuracy, since the propensities of individual amino acids are often not strong enough to render a definitive prediction.
Algorithm
The Chou–Fasman method predicts helices and strands in a similar fashion, first searching linearly through the sequence for a "nucleation" region of high helix or strand probability and then extending the region until a subsequent four-residue window carries a probability of less than 1. As originally described, four out of any six contiguous amino acids were sufficient to nucleate helix, and three out of any contiguous five were sufficient for a sheet. The probability thresholds for helix and strand nucleations are constant but not necessarily equal; originally 1.03 was set as the helix cutoff and 1.00 for the strand cutoff.
Turns are also evaluated in four-residue windows, but are calculated using a multi-step procedure because many turn regions contain amino acids that could also appear in helix or sheet regions. Four-residue turns also have their own characteristic amino acids; proline and glycine are both common in turns. A turn is predicted only if the turn probability is greater than the helix or sheet probabilities and a probability value based on the positions of particular amino acids in the turn exceeds a predetermined threshold. The turn probability p(t) is determined as the product of the positional turn propensities over the window:

p(t) = pt(j) × pt(j+1) × pt(j+2) × pt(j+3)
where j is the position of the amino acid in the four-residue window. If p(t) exceeds an arbitrary cutoff value (originally 7.5 × 10−5), the mean of the p(j)'s exceeds 1, and p(t) exceeds the alpha helix and beta sheet probabilities for that window, then a turn is predicted. If the first two conditions are met but the probability of a beta sheet p(b) exceeds p(t), then a sheet is predicted instead.
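A minimal C++ sketch of the helix-nucleation scan described above. The propensity values are a small illustrative subset approximating the published Chou–Fasman helix parameters (residues missing from the table default to 1.0 purely for brevity); a full implementation would use the complete 20-residue tables and analogous passes for strands and turns.

#include <cstdio>
#include <map>
#include <string>

// Illustrative helix propensities P(a) for a few residues only.
const std::map<char, double> kHelixP = {
    {'E', 1.51}, {'M', 1.45}, {'A', 1.42}, {'L', 1.21},
    {'K', 1.16}, {'S', 0.77}, {'P', 0.57}, {'G', 0.57},
};

double p(char aa) {
    auto it = kHelixP.find(aa);
    return it == kHelixP.end() ? 1.0 : it->second;  // neutral default
}

int main() {
    const std::string seq = "AEMLKAGPSSAEML";  // hypothetical sequence
    // Nucleation: four of any six contiguous residues with P(a) > 1.
    for (std::size_t i = 0; i + 6 <= seq.size(); ++i) {
        int formers = 0;
        for (std::size_t j = i; j < i + 6; ++j)
            if (p(seq[j]) > 1.0) ++formers;
        if (formers >= 4)
            std::printf("possible helix nucleation at residues %zu-%zu\n",
                        i + 1, i + 6);
        // Extension (omitted here): grow the region in both directions
        // until a four-residue window has average propensity below 1.
    }
}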
See also
List of protein structure prediction software
References
External links
Gerald D. Fasman on the Internet
Bioinformatics
Protein methods | Chou–Fasman method | [
"Chemistry",
"Engineering",
"Biology"
] | 790 | [
"Biochemistry methods",
"Biological engineering",
"Protein methods",
"Protein biochemistry",
"Bioinformatics"
] |
8,714,830 | https://en.wikipedia.org/wiki/Mitochondrial%20membrane%20transport%20protein | Mitochondrial membrane transport proteins, also known as mitochondrial carrier proteins, are proteins which exist in the membranes of mitochondria. They serve to transport molecules and other factors, such as ions, into or out of the organelles. Mitochondria contain both an inner and outer membrane, separated by the inter-membrane space, or inner boundary membrane. The outer membrane is porous, whereas the inner membrane restricts the movement of all molecules. The two membranes also vary in membrane potential and pH. These factors play a role in the function of mitochondrial membrane transport proteins. Fifty-three human mitochondrial membrane transporters have been discovered to date, and many more are thought to remain undiscovered.
Mitochondrial outer membrane
The outer mitochondrial membrane forms the border of mitochondria towards the cellular environment. The outer membrane mitochondrial proteins carry out functions for mitochondrial biogenesis and integration between mitochondria and the cellular system. The outer membrane consists of two types of integral proteins, including proteins with transmembrane β-barrel and proteins with one or more α-helical membrane anchors.
β-Barrel Outer Membrane Proteins
TOM complex
The TOM complex, part of the TOM/TIM supercomplex, is essential for the translocation of almost all mitochondrial proteins and consists of at least 7 different subunits. Tom20 and Tom70 are the primary receptors, while the Tom40, Tom22, Tom7, Tom6, and Tom5 subunits form the stable TOM complex. The receptor proteins Tom70 and Tom20 recognize incoming precursor proteins: Tom70 is responsible for docking precursors of hydrophobic proteins accompanied by cytosolic chaperones, and Tom20 recognizes precursor proteins of the presequence pathway. Tom40 is the protein-conducting channel of the complex, with a beta-barrel structure that forms a cation-selective channel. Tom40 has a large pore diameter of 22 Å that can accommodate partially folded protein structures. The inner wall of Tom40 has a charged region that allows interaction with hydrophilic precursor proteins, while the hydrophobic precursor of the ADP/ATP carrier can be crosslinked with the hydrophobic region of Tom40. Three small proteins, Tom5, Tom6, and Tom7, interact closely with Tom40 to assemble and stabilize the complex. The TOM complex also contains a dimer of Tom40 channels, together with the small Tom proteins, held together by two Tom22 subunits. Protein sorting into the mitochondrial compartments always starts at the TOM complex, which forms two exit sites for precursor proteins; Tom40, Tom7, and the intermembrane space domain of Tom22 promote the transfer of presequence-containing precursors to the TIM23 complex.
SAM complex
The SAM Complex is essential for sorting and assembling beta-barrel proteins from the intermembrane space side into the outer membrane. The SAM complex consists of three subunits: The β-barrel protein Sam50 and two peripheral subunits Sam35 and Sam37. Sam50 belongs to the conserved Omp85 protein family which can be characterized by a 16-stranded β-barrel and by a different number of polypeptide transport-associated (POTRA) domains. Sam50 exposes a single POTRA domain towards the intermembrane space. Sam35 caps the Sam50 β-barrel, stabilizing the core of the protein translocase. Sam50 and Sam35 are responsible for the binding of precursors of β-barrel proteins, which contain conserved β-signal that is formed by the last β-strand. The β-barrel of Sam50 is the functional domain that inserts and folds substrate proteins into the outer membrane.
Sam35 binds to Sam50 and closely interacts with Sam37, whereas Sam37 itself does not bind to Sam50. Sam37 and Sam35 have a conformation similar to glutathione-S-transferase, except they do not possess the residues required for enzymatic activity. Sam37 accommodates the release of the folded β-barrel proteins from the SAM complex.
Voltage-dependent anion channel or VDAC
VDAC (voltage-dependent anion channel) is important for the exchange of small hydrophilic ions and metabolites with the cytosol, which is driven by the gradient concentration across the outer membrane. VDAC is the most abundant protein in the outer membrane. Like Tom40, VDAC has a β-barrel structure with antiparallel β-strands that can facilitate the passage of β-barrel membrane proteins. VDAC has a pore size of 2-4 nm for small hydrophilic molecules. VDAC plays a crucial role in facilitating energy metabolism by transporting ADP and ATP in and out of the outer membrane. VDAC also accommodates the passage of NADH and many anionic metabolites. VDAC operation is voltage-dependent in which it closes at high voltage and can partially open towards slightly reduced anion selectivity.
α-Helical outer membrane proteins
The Mitochondrial import complex (MIM)
The import pathways of α-helical membrane anchors or signal-anchored proteins are carried out mainly by outer membrane proteins. Precursors of the polytopic or multi-spanning proteins can be recognized by Tom70, but cannot be passed through the Tom40 channel. Tom70 transfers the precursor proteins to the MIM complex. The MIM complex constitutes the major insertase for alpha-helical proteins into the target membrane. The MIM complex consists of several copies of Mim1 and one or two copies of Mim2. Both subunits are necessary for stabilizing partner proteins and for outer membrane protein biogenesis.
Mitochondrial inner membrane
The inner mitochondrial membrane is a structure that surrounds the mitochondrial matrix, characterized by many folds and compartments that form cristae, and is the site of oxidative phosphorylation and ATP synthesis. The high concentration of cardiolipin, a lipid making up about 20% of the inner membrane's composition, makes the membrane impermeable to most molecules. Specialized transporters arranged in specific configurations are required to regulate the diffusion of molecules across the membrane. The inner membrane sustains a membrane potential of approximately 180 mV.
Respiratory chain supercomplex
The respiratory chain supercomplex is located in the cristae of the inner membrane. It is composed of multiple complexes that work together to drive oxidative phosphorylation and ATP synthesis. The complexes cannot function without the other parts of the respiratory supercomplex being present. The supercomplex is the site of the mitochondrial electron transport chain.
NADH/ubiquinone oxidoreductase
NADH/ubiquinone oxidoreductase, also known as complex I, is the first and largest protein in the mitochondrial respiratory chain. It consists of a membrane arm, embedded inside the inner mitochondrial membrane, and a matrix arm, extending out of the membrane. There are 78 transmembrane helices and three proton pumps. The junction of the two arms is the site of electron transfer from NADH to ubiquinone, which is reduced to ubiquinol. Complex I is a scaffold needed for complexes III and IV, and it will not function without these other complexes being present.
Cytochrome c reductase, succinate dehydrogenase, and cytochrome c oxidase
Cytochrome c reductase, also known as complex III, is the second protein in the respiratory chain. It passes electrons received from complex I and from succinate dehydrogenase (complex II) onward to cytochrome c. Complexes III and IV are proton pumps, pumping H+ protons out of the mitochondrial matrix, and work in conjunction with complex I to create the proton gradient found at the inner membrane. Cytochrome c is an electron carrier protein that travels between complexes III and IV, and triggers apoptosis if it leaves the cristae. Complex IV (cytochrome c oxidase) passes electrons to oxygen, the final acceptor in the mitochondrial electron transport chain.
Inner membrane translocases
TIM complex
The TIM complex is a protein translocase located on the inner membrane. It is part of the TOM/TIM supercomplex, which spans the intermembrane space. The TIM complex is responsible for sorting proteins into the mitochondrial matrix or into the membrane. TIM22 and TIM23 are the main subunits. TIM22 is responsible for allowing other mitochondrial transporters to insert themselves into the inner membrane, whereas TIM23 recognizes proteins with an N-terminal presequence for import into the membrane or matrix.
ADP, ATP translocase
ADP, ATP translocase is responsible for regulating the movement of ADP and ATP in and out of the inner membrane. ATP is sorted into the cytosol, while ADP is sorted into the mitochondrial matrix to undergo oxidative phosphorylation. Due to the constant demand for ATP production, ADP, ATP translocases are in higher abundance than other transporters. ADP, ATP translocase is a small protein, ~30-33 kDa, composed of 6 transmembrane α-helices that form 3 repeat domains, giving an overall funnel-like structure in the membrane. Toward the center of the funnel structure lies loop 12, a loop of 7 amino acids. The translocase is structurally unique when compared to other proteins that interact with ATP in that it does not transport adenosine monophosphate and requires at least two phosphate groups on the nucleotide to allow passage of the molecule. It is composed of 297 amino acid residues, 18 of which are charged. The ADP, ATP translocase is opened in the presence of Ca2+.
Phosphate transport proteins
Phosphate transport proteins are similar in structure to ADP, ATP translocase; both are part of the same family of mitochondrial carriers. They consist of 6 transmembrane α-helices but lack the 7 amino acid loop 12 found in ADP, ATP translocase. Phosphate transport proteins are responsible for transport of phosphate across the inner membrane so it can be used in the phosphorylation of ADP.
Mutations of mitochondrial membrane transporters
Mutations of DNA coding for mitochondrial membrane transport proteins are linked to a wide range of diseases and disorders, such as cardiomyopathy, encephalopathy, muscular dystrophy, epilepsy, neuropathy, and fingernail dysplasia. Most mutations of mitochondrial membrane transporters are autosomal recessive. Mutations to transporters within the inner mitochondrial membrane mostly affect high-energy tissues due to the disruption of oxidative phosphorylation. For example, decreased mitochondrial function has been linked to heart failure and hypertrophy. This mitochondrial response translates into a shift towards glycolysis and lactate production that can cause tumor formation and proliferation of the tissues.
Examples
Examples of mitochondrial transport proteins include the following:
The mitochondrial permeability transition pore, which opens in response to increased mitochondrial calcium (Ca2+) load and oxidative stress
The mitochondrial calcium uniporter which transports calcium from the cytosol of the cell into the mitochondrial matrix
The mitochondrial sodium/calcium exchanger, which carries Ca2+ ions out of the matrix in exchange for Na+ ions. These transport proteins serve to maintain the proper electrical and chemical gradients in mitochondria by keeping ions and other factors in the right balance between the inside and outside of mitochondria.
See also
Mitochondrial carrier
Membrane transport protein
References
Transport proteins
Mitochondria
Transmembrane proteins | Mitochondrial membrane transport protein | [
"Chemistry"
] | 2,342 | [
"Mitochondria",
"Metabolism"
] |
8,714,937 | https://en.wikipedia.org/wiki/Discharging%20method%20%28discrete%20mathematics%29 | The discharging method is a technique used to prove lemmas in structural graph theory. Discharging is most well known for its central role in the proof of the four color theorem. The discharging method is used to prove that every graph in a certain class contains some subgraph from a specified list. The presence of the desired subgraph is then often used to prove a coloring result.
Most commonly, discharging is applied to planar graphs.
Initially, a charge is assigned to each face and each vertex of the graph.
The charges are assigned so that they sum to a small positive number. During the Discharging Phase the charge at each face or vertex may be redistributed to nearby faces and vertices, as required by a set of discharging rules. However, each discharging rule maintains the sum of the charges. The rules are designed so that after the discharging phase each face or vertex with positive charge lies in one of the desired subgraphs. Since the sum of the charges is positive, some face or vertex must have a positive charge. Many discharging arguments use one of a few standard initial charge functions (these are listed below). Successful application of the discharging method requires creative design of discharging rules.
An example
In 1904, Wernicke introduced the discharging method to prove the following theorem, which was part of an attempt to prove the four color theorem.
Theorem: If a planar graph has minimum degree 5, then it either has an edge
with endpoints both of degree 5 or one with endpoints of degrees 5 and 6.
Proof:
We use V, F, and E to denote the sets of vertices, faces, and edges, respectively.
We call an edge light if its endpoints are both of degree 5 or are of degrees 5 and 6.
Embed the graph in the plane. To prove the theorem, it is sufficient to consider only planar triangulations: if the theorem holds for a triangulation obtained by adding edges to the graph, then, since the added edges only increase vertex degrees, any light edge found must already have been present in the original graph. We arbitrarily add edges to the graph until it is a triangulation.
Since the original graph had minimum degree 5, each endpoint of a new edge has degree at least 6.
So, none of the new edges are light.
Thus, if the triangulation contains a light edge, then that edge must have been in the original graph.
We give the charge 6 − d(v) to each vertex v and the charge 6 − 2ℓ(f) to each face f, where d(v) denotes the degree of a vertex and ℓ(f) the length of a face. (Since the graph is a triangulation, every face has length 3, so the charge on each face is 0.) Recall that the sum of all the degrees in the graph is equal to twice the number of edges; similarly, the sum of all the face lengths equals twice the number of edges. Using Euler's Formula, it's easy to see that the sum of all the charges is 12:

∑v∈V (6 − d(v)) + ∑f∈F (6 − 2ℓ(f)) = 6|V| − 2|E| + 6|F| − 4|E| = 6(|V| − |E| + |F|) = 6 · 2 = 12.
We use only a single discharging rule:
Each degree 5 vertex gives a charge of 1/5 to each neighbor.
We consider which vertices could have positive final charge.
The only vertices with positive initial charge are vertices of degree 5.
Each degree 5 vertex gives a charge of 1/5 to each neighbor.
So, each vertex is given a total charge of at most d(v)/5.
The initial charge of each vertex v is 6 − d(v).
So, the final charge of each vertex is at most 6 − d(v) + d(v)/5 = 6 − (4/5)d(v). Hence, a vertex can only have positive final charge if it has degree at most 7. Now we show that each vertex with positive final charge is adjacent to an endpoint of a light edge.
If a vertex v has degree 5 or 6 and has positive final charge, then v received charge from an adjacent degree 5 vertex u, so the edge uv is light. If a vertex v has degree 7 and has positive final charge, then v received charge from at least 6 adjacent degree 5 vertices. Since the graph is a triangulation, the vertices adjacent to v must form a cycle, and since v has only degree 7, the degree 5 neighbors cannot all be separated by vertices of higher degree; at least two of the degree 5 neighbors of v must be adjacent to each other on this cycle. This yields the light edge.
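As a quick numerical sanity check on the charge bookkeeping (not part of Wernicke's argument), the initial charges can be summed for a concrete triangulation. The C++ sketch below does so for the icosahedron, which has 12 vertices of degree 5, 30 edges, and 20 triangular faces:

#include <cstdio>

int main() {
    const int V = 12, E = 30, F = 20;
    int total = 0;
    for (int i = 0; i < V; ++i) total += 6 - 5;      // each vertex: degree 5
    for (int i = 0; i < F; ++i) total += 6 - 2 * 3;  // each face: length 3
    std::printf("V - E + F = %d\n", V - E + F);         // Euler: prints 2
    std::printf("total initial charge = %d\n", total);  // prints 12
}

(In the icosahedron every vertex has degree 5, so every edge is light, consistent with the theorem.)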
References
Graph theory | Discharging method (discrete mathematics) | [
"Mathematics"
] | 899 | [
"Discrete mathematics",
"Mathematical relations",
"Graph theory",
"Combinatorics"
] |
8,714,962 | https://en.wikipedia.org/wiki/Burning%20the%20Clocks | Burning the Clocks is a winter solstice festival that takes place each year in Brighton, England. It has taken place since 1994 as a response to Christmas commercialisation.
The event
Founded in 1993, the celebration is based on a procession of lanterns and costumes, made from withies (willow canes) and white tissue paper, led by local bands with a carnival atmosphere. The procession makes its way through Brighton city centre to the seafront where the festivities culminate in a lantern bonfire, accompanied by fireworks. The costumes all include a clockface to represent the passing of time, although each year has a slight change of theme.
Same Sky, an arts initiative, first organised the event with Brighton Co-op to commemorate the founding of the Co-operative Movement, 150 years before. Brighton Co-op provided the finance for the firework display and Same Sky organised local schools to produce the lanterns for the parade. They explain:
"Burning the Clocks is an antidote to the excesses of the commercial Christmas. People gather together to make paper and willow lanterns to carry through their city and burn on the beach as a token for the end of the year ... The lantern makers become part of the show as they invest the lanterns with their wishes, hopes, and fears and then pass them into the fire. Same Sky [create] new urban rituals to replace those traditional festivals that were lost in the dash to be new and non-superstitious."
In 2000, Brighton Museum commissioned a costume from Same Sky artist Nikki Gunson. She created "Mother Time Keeper" and performed it in the parade before returning to the museum. Incidentally, the festival took place on New Year's Eve that year. Local colleges also participate; Sussex Downs College have been contributing since 1998.
The event was cancelled in 2009 due to snow and low temperatures making the streets and pavements of the city unsuitable for the processions and anticipated crowds, and in 2020 and 2021 due to the COVID-19 pandemic.
Ethos
The Same Sky arts initiative describes the festival as "the giving and sharing of thoughts and wishes… and put them into a secular format that can be enjoyed by all regardless of faith or creed" and says that the intention is to "[create] new urban rituals to replace those traditional festivals that were lost in the dash to be new and non superstitious". Brighton newspaper The Argus argue that the event "[creates] new urban rituals to replace traditional festivals lost in the politically correct drive to be modern, secular and non-superstitious."
References
External links
SameSky – Burning the Clocks
Photos of Sussex Downs College students in costume
Winter festivals in the United Kingdom
Festivals in Brighton and Hove
1993 establishments in England
Recurring events established in 1993
Festivals established in 1993
Winter events in England
Winter solstice | Burning the Clocks | [
"Astronomy"
] | 567 | [
"Astronomical events",
"Winter solstice"
] |
8,715,408 | https://en.wikipedia.org/wiki/Non-shrink%20grout | Non-shrink grout is a hydraulic cement grout that, when hardened under stipulated test conditions, does not shrink, so its final volume is greater than or equal to the original installed volume. It is often used as a transfer medium between load-bearing members.
Testing
Test standards used to designate a grout as non-shrink include, but are not limited to:
C1090-01(2005)e1 Standard Test Method for Measuring Changes in Height of Cylindrical Specimens of Hydraulic-Cement Grout
ASTM C 1107
Typical characteristics
Often sets rapidly
Usually a pre-mix product that needs only to be mixed with water
Includes ingredients to compensate against cement stone shrinkage
Use of shrinkage-compensating ingredients can result in volume increase over time.
Has a high strength of over 10,000 psi (about 70 MPa) per ASTM C109.
Typical cementitious materials caveats
Despite the use of expanding or shrinkage-compensating ingredients, users are ordinarily cautioned to avoid environments detrimental to the forming of cement stone. These include but are not limited to the following:
Avoid high wind across the curing surface.
Avoid high temperatures during the cure.
Avoid common cement poisons, such as sulphates, acids, etc.
Failure to follow these precautions can adversely affect the quality of all cementitious products.
See also
ASTM International
Canadian Standards Association
Deutsches Institut für Normung
External links
Portland Cement Association
Building materials
Cement | Non-shrink grout | [
"Physics",
"Engineering"
] | 300 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
8,715,410 | https://en.wikipedia.org/wiki/Surface-enhanced%20laser%20desorption/ionization | Surface-enhanced laser desorption/ionization (SELDI) is a soft ionization method in mass spectrometry (MS) used for the analysis of protein mixtures. It is a variation of matrix-assisted laser desorption/ionization (MALDI). In MALDI, the sample is mixed with a matrix material and applied to a metal plate before irradiation by a laser, whereas in SELDI, proteins of interest in a sample become bound to a surface before MS analysis. The sample surface is a key component in the purification, desorption, and ionization of the sample. SELDI is typically used with time-of-flight (TOF) mass spectrometers and is used to detect proteins in tissue samples, blood, urine, or other clinical samples, however, SELDI technology can potentially be used in any application by simply modifying the sample surface.
Sample preparation and instrumentation
SELDI can be seen as a combination of solid-phase chromatography and TOF-MS. The sample is applied to a modified chip surface, which allows for the specific binding of proteins from the sample to the surface. Contaminants and unbound proteins are then washed away. After washing the sample, an energy absorbing matrix, such as sinapinic acid (SPA) or α-Cyano-4-hydroxycinnamic acid (CHCA), is applied to the surface and allowed to crystallize with the sample. Alternatively, the matrix can be attached to the sample surface by covalent modification or adsorption before the sample is applied. The sample is then irradiated by a pulsed laser, causing ablation and desorption of the sample and matrix.
SELDI-TOF-MS
Samples spotted on a SELDI surface are typically analyzed using time-of-flight mass spectrometry. An irradiating laser ionizes peptides from crystals of the sample/matrix mixture. The matrix absorbs the energy of the laser pulse, preventing destruction of the molecule, and transfers charge to the sample molecules, forming ions. The ions are then briefly accelerated through an electric potential and travel down a field-free flight tube where they are separated by their velocity differences. The mass-to-charge ratio of each ion can be determined from the length of the tube, the kinetic energy given to ions by the electric field, and the velocity of the ions in the tube. The velocity of the ions is inversely proportional to the square root of the mass-to-charge ratio of the ion; ions with low mass-to-charge ratios are detected earlier than ions with high mass-to-charge ratios.
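This relationship follows from the standard time-of-flight equations (a textbook derivation, not specific to SELDI): an ion of mass m and charge q = ze accelerated through a potential difference U acquires kinetic energy qU = ½mv², so its flight time down a field-free tube of length L is

t = L/v = L · √(m / (2zeU)),

that is, flight time grows with the square root of the mass-to-charge ratio, which is why lighter ions arrive at the detector first.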
SELDI surface
The binding of proteins to the SELDI surface acts as a solid-phase chromatographic separation step, and as a result, the proteins attached to the surface are easier to analyze. The surface is composed primarily of materials with a variety of physico-chemical characteristics, metal ions, or anion or cation exchangers. Common surfaces include CM10 (weak cation exchange), H50 (hydrophobic surface, similar to C6-C12 reverse phase chromatography), IMAC30 (metal-binding surface), and Q10 (strong anion exchange). SELDI surfaces can also be modified to study DNA-protein binding, antibody-antigen assays, and receptor-ligand interactions.
Additional surface methods
The SELDI process is a combination of surface-enhanced neat desorption (SEND), surface-enhanced affinity-capture (SEAC), and surface-enhanced photolabile attachment and release (SEPAR) mass spectrometry. With SEND, analytes can be desorbed and ionized without adding a matrix; the matrix is incorporated into the sample surface. In SEAC, the sample surface is modified to bind the analyte of interest for analysis with laser desorption/ionization mass spectrometry (LDI-MS). SEPAR is a combination of SEND and SEAC; the modified sample surface also acts as an energy absorbing matrix for ionization.
History
SELDI technology was developed by T. William Hutchens and Tai-Tung Yip at Baylor College of Medicine in 1993. Hutchens and Yip attached single-stranded DNA to agarose beads and used the beads to capture lactoferrin, an iron-binding glycoprotein, from preterm infant urine. The beads were incubated in the sample and then removed, washed, and analyzed with a MALDI-MS probe tip. This research led to the idea that MALDI surfaces could be derivatized with SEAC devices; the technique was later described by Hutchens and Yip in 1998.
SELDI technology was first commercialized by Ciphergen Biosystems in 1997 as the ProteinChip system, and is now produced and marketed by Bio-Rad Laboratories.
Applications
SELDI technology can potentially be used in any application by modifying the SELDI surface. SELDI-TOF-MS is optimal for analyzing low molecular weight proteins (<20 kDa) in a variety of biological materials, such as tissue samples, blood, urine, and serum. This technique is often used in combination with immunoblotting and immunohistochemistry as a diagnostic tool to aid in the detection of biomarkers for diseases, and has also been applied to the diagnosis of cancer and neurological disorders. SELDI-TOF-MS has been used in biomarker discovery for lung, breast, liver, colon, pancreatic, bladder, kidney, cervical, ovarian, and prostate cancers. SELDI technology is most widely used in biomarker discovery to compare protein levels in serum samples from healthy and diseased patients. Serum studies allow for a minimally invasive approach to disease monitoring in patients and are useful in the early detection and diagnosis of diseases and neurological disorders, such as amyotrophic lateral sclerosis (ALS) and Alzheimer's.
SELDI-TOF-MS can also be used in biological applications to detect post-translationally modified proteins and to study phosphorylation states of proteins.
Advantages
A major advantage of the SELDI process is the chromatographic separation step. While liquid chromatography-mass spectrometry (LC-MS) is based on the elution of analytes in the separated sample, separation in SELDI is based on retention. Any sample components that interfere with analytical measurements, such as salts, detergents, and buffers, are washed away before analysis with mass spectrometry. Only the analytes that are bound to the surface are analyzed, reducing the overall complexity of the sample. As a result, there is an increased probability of detecting analytes that are present in lower concentrations. Because of the initial separation step, protein profiles can be obtained from samples of as few as 25-50 cells.
In biological applications, SELDI-TOF-MS has a major advantage in that the technique does not require the use of radioactive isotopes. Furthermore, an assay can be sampled at multiple time points during an experiment. Additionally, in proteomics, the biomarker discovery, identification, and validation steps can all be done on the SELDI surface.
Limitations
SELDI is often criticized for its reproducibility due to differences in the mass spectra obtained when using different batches of chip surfaces. While the method has been successful with analyzing low molecular weight proteins, consistent results have not been obtained when analyzing high molecular weight proteins. There also exists a potential for sample bias, as nonspecific absorption matrices favor the binding of analytes with higher abundances in the sample at the expense of less abundant analytes. While SELDI-TOF-MS has detection limits in the femtomolar range, the baseline signal in the spectra varies, and noise due to the matrix is maximal below 2000 Da; Ciphergen Biosystems suggests that spectral peaks below 2000 Da be disregarded.
See also
Soft laser desorption
List of mass spectrometry software
References
Proteomics
Biochemistry methods
Proteins
Ion source | Surface-enhanced laser desorption/ionization | [
"Physics",
"Chemistry",
"Biology"
] | 1,692 | [
"Biochemistry methods",
"Biomolecules by chemical classification",
"Spectrum (physical sciences)",
"Ion source",
"Mass spectrometry",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
8,715,575 | https://en.wikipedia.org/wiki/Phosphoproteomics | Phosphoproteomics is a branch of proteomics that identifies, catalogs, and characterizes proteins containing a phosphate group as a posttranslational modification. Phosphorylation is a key reversible modification that regulates protein function, subcellular localization, complex formation, degradation of proteins and therefore cell signaling networks. Given the breadth of this modification, it is estimated that between 30 and 65% of all proteins may be phosphorylated, some multiple times. Based on statistical estimates from many datasets, 230,000, 156,000 and 40,000 phosphorylation sites should exist in human, mouse, and yeast, respectively.
Compared to expression analysis, phosphoproteomics provides two additional layers of information. First, it provides clues on what protein or pathway might be activated because a change in phosphorylation status almost always reflects a change in protein activity. Second, it indicates what proteins might be potential drug targets as exemplified by the kinase inhibitor Gleevec. While phosphoproteomics will greatly expand knowledge about the numbers and types of phosphoproteins, its greatest promise is the rapid analysis of entire phosphorylation based signalling networks.
Overview
A sample large-scale phosphoproteomic analysis proceeds as follows: cultured cells undergo SILAC encoding; cells are stimulated with a factor of interest (e.g., growth factor, hormone), with stimulation applied for various lengths of time to permit temporal analysis; cells are lysed and enzymatically digested; peptides are separated using ion exchange chromatography; phosphopeptides are enriched using phosphospecific antibodies, immobilized metal affinity chromatography, or titanium dioxide (TiO2) chromatography; phosphopeptides are analyzed using mass spectrometry; and peptides are sequenced and analyzed.
Tools and methods
The analysis of the entire complement of phosphorylated proteins in a cell is certainly a feasible option. This is due to the optimization of enrichment protocols for phosphoproteins and phosphopeptides, better fractionation techniques using chromatography, and improvement of methods to selectively visualize phosphorylated residues using mass spectrometry. Although the current procedures for phosphoproteomic analysis are greatly improved, there is still sample loss and inconsistencies with regards to sample preparation, enrichment, and instrumentation. Bioinformatics tools and biological sequence databases are also necessary for high-throughput phosphoproteomic studies.
Enrichment strategies
Previous procedures to isolate phosphorylated proteins included radioactive labeling with 32P-labeled ATP followed by SDS polyacrylamide gel electrophoresis or thin layer chromatography. These traditional methods are inefficient because it is impossible to obtain large amounts of proteins required for phosphorylation analysis. Therefore, the current and simplest methods to enrich phosphoproteins are affinity purification using phosphospecific antibodies, immobilized metal affinity chromatography (IMAC), strong cation exchange (SCX) chromatography, or titanium dioxide chromatography. Antiphosphotyrosine antibodies have been proven very successful in purification, but fewer reports have been published using antibodies against phosphoserine- or phosphothreonine-containing proteins. IMAC enrichment is based on phosphate affinity for immobilized metal chelated to the resin. SCX separates phosphorylated from non-phosphorylated peptides based on the negatively charged phosphate group. Titanium dioxide chromatography is a newer technique that requires significantly less column preparation time. Many phosphoproteomic studies use a combination of these enrichment strategies to obtain the purest sample possible.
Mass spectrometry analysis
Mass spectrometry is currently the best method to adequately compare pairs of protein samples. The two main procedures to perform this task are using isotope-coded affinity tags (ICAT) and stable isotope labeling with amino acids in cell culture (SILAC). In the ICAT procedure samples are labeled individually after isolation with mass-coded reagents that modify cysteine residues. In SILAC, cells are cultured separately in the presence of different isotopically labeled amino acids for several cell divisions allowing cellular proteins to incorporate the label. Mass spectrometry is subsequently used to identify phosphoserine, phosphothreonine, and phosphotyrosine-containing peptides.
Signal transduction studies
Intracellular signal transduction is primarily mediated by the reversible phosphorylation of various signalling molecules by enzymes dubbed kinases. Kinases transfer phosphate groups from ATP to specific serine, threonine or tyrosine residues of target molecules. The resultant phosphorylated protein may have altered activity level, subcellular localization or tertiary structure.
Phosphoproteomic analyses are ideal for the study of the dynamics of signalling networks. In one study design, cells are exposed to SILAC labelling and then stimulated by a specific growth factor. The cells are collected at various timepoints, and the lysates are combined for analysis by tandem MS. This allows experimenters to track the phosphorylation state of many phosphoproteins in the cell over time. The ability to measure the global phosphorylation state of many proteins at various time points makes this approach much more powerful than traditional biochemical methods for analyzing signalling network behavior.
One study was able to simultaneously measure the fold-change in phosphorylation state of 127 proteins between unstimulated and EphrinB1-stimulated cells. Of these 127 proteins, 40 showed increased phosphorylation with stimulation by EphrinB1. The researchers were able to use this information in combination with previously published data to construct a signal transduction network for the proteins downstream of the EphB2 receptor.
Another recent phosphoproteomic study included large-scale identification and quantification of phosphorylation events triggered by the anti-diuretic hormone vasopressin in kidney collecting duct. A total of 714 phosphorylation sites on 223 unique phosphoproteins were identified, including three novel phosphorylation sites in the vasopressin-sensitive water channel aquaporin-2 (AQP2).
Cancer research
Since the inception of phosphoproteomics, cancer research has focused on changes to the phosphoproteome during tumor development. Phosphoproteins could be cancer markers useful to cancer diagnostics and therapeutics. In fact, research has shown that there are distinct phosphotyrosine proteomes of breast and liver tumors. There is also evidence of hyperphosphorylation at tyrosine residues in breast tumors but not in normal tissues. Findings like these suggest that it is possible to mine the tumor phosphoproteome for potential biomarkers.
Increasing amounts of data are available suggesting that distinctive phosphoproteins exist in various tumors and that phosphorylation profiling could be used to fingerprint cancers from different origins. In addition, systematic cataloguing of tumor-specific phosphoproteins in individual patients could reveal multiple causative players during cancer formation. By correlating this experimental data to clinical data such as drug response and disease outcome, potential cancer markers could be identified for diagnosis, prognosis, prediction of drug response, and potential drug targets.
Limitations
While phosphoproteomics has greatly expanded knowledge about the numbers and types of phosphoproteins, along with their role in signaling networks, there are still several limitations to these techniques. To begin with, isolation methods such as anti-phosphotyrosine antibodies do not distinguish between isolating tyrosine-phosphorylated proteins and proteins associated with tyrosine-phosphorylated proteins. Therefore, even though phosphorylation dependent protein-protein interactions are very important, it is important to remember that a protein detected by this method is not necessarily a direct substrate of any tyrosine kinase. Only by digesting the samples before immunoprecipitation can isolation of only phosphoproteins and temporal profiles of individual phosphorylation sites be produced. Another limitation is that some relevant proteins will likely be missed since no extraction condition is all encompassing. It is possible that proteins with low stoichiometry of phosphorylation, in very low abundance, or phosphorylated as a target for rapid degradation will be lost. Bioinformatics analyses of low-throughput phosphorylation data together with high-throughput phosphoproteomics data (based mostly on MS/MS) estimate that current high-throughput protocols, after several repetitions are capable of capturing 70% to 95% of total phosphoproteins, but only 40% to 60% of total phosphorylation sites.
See also
Glycoproteomics
References
External links
Phosida Phosphorylation Site Database
Analysis of Signal Transduction YouTube Video
CDPD Collecting Duct Phosphoprotein Database
Biotechnology | Phosphoproteomics | [
"Biology"
] | 1,972 | [
"nan",
"Biotechnology"
] |
8,715,728 | https://en.wikipedia.org/wiki/Rule%20of%20three%20%28C%2B%2B%20programming%29 | The rule of three and rule of five are rules of thumb in C++ for the building of exception-safe code and for formalizing rules on resource management. The rules prescribe how the default members of a class should be used to achieve these goals systematically.
Rule of three
The rule of three (also known as the law of the big three or the big three) is a rule of thumb in C++ (prior to C++11) that claims that if a class defines any of the following then it should probably explicitly define all three:
destructor
copy constructor
copy assignment operator
These three functions are special member functions. If one of these functions is used without first being declared by the programmer, it will be implicitly implemented by the compiler with the following default semantics:
Destructor – call the destructors of all the object's class-type members
Copy constructor – construct all the object's members from the corresponding members of the copy constructor's argument, calling the copy constructors of the object's class-type members, and doing a plain assignment of all non-class type (e.g., int or pointer) data members
Copy assignment operator – assign all the object's members from the corresponding members of the assignment operator's argument, calling the copy assignment operators of the object's class-type members, and doing a plain assignment of all non-class type (e.g. int or pointer) data members.
The rule of three claims that if one of these had to be defined by the programmer, the compiler-generated version does not fit the needs of the class in that case, and it will probably not fit in the other cases either. The term "Rule of three" was coined by Marshall Cline in 1991.
An amendment to this rule is that if the class is designed in such a way that resource acquisition is initialization (RAII) is used for all its (nontrivial) members, the destructor may be left undefined (also known as The Law of The Big Two). A ready-to-go example of this approach is the use of smart pointers instead of plain ones.
Because implicitly-generated constructors and assignment operators simply copy all class data members ("shallow copy"), one should define explicit copy constructors and copy assignment operators for classes that encapsulate complex data structures or have external references such as pointers, when the objects pointed to by the class members must also be copied. If the default behavior ("shallow copy") is actually the intended one, then an explicit definition, although redundant, will be "self-documenting code" indicating that it was an intention rather than an oversight. Modern C++ includes a syntax for expressly specifying that a default function is desired without having to type out the function body.
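For illustration, consider a minimal sketch (the Buffer class and its members below are hypothetical, not drawn from any particular library): a class owning a raw heap buffer must define all three members, since the compiler-generated shallow copies would leave two objects deleting the same allocation.

```cpp
#include <cstddef>
#include <cstring>

class Buffer {
public:
    explicit Buffer(const char* s = "")
        : size_(std::strlen(s)), data_(new char[size_ + 1]) {
        std::memcpy(data_, s, size_ + 1);  // copy including the terminating '\0'
    }

    // 1. Destructor: release the owned allocation.
    ~Buffer() { delete[] data_; }

    // 2. Copy constructor: deep-copy into a fresh allocation.
    Buffer(const Buffer& other)
        : size_(other.size_), data_(new char[other.size_ + 1]) {
        std::memcpy(data_, other.data_, other.size_ + 1);
    }

    // 3. Copy assignment operator: allocate before freeing, so that a
    //    throwing allocation leaves *this unchanged.
    Buffer& operator=(const Buffer& other) {
        if (this != &other) {
            char* fresh = new char[other.size_ + 1];
            std::memcpy(fresh, other.data_, other.size_ + 1);
            delete[] data_;
            data_ = fresh;
            size_ = other.size_;
        }
        return *this;
    }

private:
    std::size_t size_;  // declared before data_, so it is initialized first
    char* data_;
};
```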
Rule of five
With the advent of C++11 the rule of three can be broadened to the rule of five (also known as "the rule of the big five") as C++11 implements move semantics, allowing destination objects to grab (or steal) data from temporary objects. The following example also shows the new moving members: move constructor and move assignment operator. Consequently, for the rule of five we have the following special members:
destructor
copy constructor
copy assignment operator
move constructor
move assignment operator
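A minimal sketch of the two additional move members, continuing the hypothetical Buffer class from the rule-of-three illustration above; the moved-from object is left valid but empty.

```cpp
// Move members added inside the Buffer class sketched earlier (C++11).

// 4. Move constructor: steal the temporary's allocation instead of copying.
Buffer(Buffer&& other) noexcept
    : size_(other.size_), data_(other.data_) {
    other.data_ = nullptr;  // leaves the source destructible (delete[] nullptr is a no-op)
    other.size_ = 0;
}

// 5. Move assignment operator: release our allocation, then take the source's.
Buffer& operator=(Buffer&& other) noexcept {
    if (this != &other) {
        delete[] data_;
        data_ = other.data_;
        size_ = other.size_;
        other.data_ = nullptr;
        other.size_ = 0;
    }
    return *this;
}
```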
Situations exist where classes may need destructors, but cannot sensibly implement copy and move constructors and copy and move assignment operators. This happens, for example, when the base class does not support these latter Big Four members, but the derived class's constructor allocates memory for its own use. In C++11, this can be simplified by explicitly specifying the five members as default.
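The defaulting syntax might be used as follows (Resource is a hypothetical class; defaulting is only correct when the member-wise semantics genuinely suffice):

```cpp
class Resource {
public:
    Resource() = default;
    ~Resource() = default;                           // destructor
    Resource(const Resource&) = default;             // copy constructor
    Resource& operator=(const Resource&) = default;  // copy assignment operator
    Resource(Resource&&) = default;                  // move constructor
    Resource& operator=(Resource&&) = default;       // move assignment operator
};
```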
See also
C++ classes
Class (computer programming)
References
C++
Computer programming folklore
Software engineering folklore
Articles with example C++ code | Rule of three (C++ programming) | [
"Engineering"
] | 804 | [
"Software engineering",
"Software engineering folklore"
] |
5,522,139 | https://en.wikipedia.org/wiki/Sitafloxacin | Sitafloxacin (INN; also called DU-6859a) is a fluoroquinolone antibiotic that shows promise in the treatment of Buruli ulcer. The molecule was identified by Daiichi Sankyo Co., which brought ofloxacin and levofloxacin to the market. Sitafloxacin is currently marketed in Japan by Daiichi Sankyo under the tradename Gracevit.
See also
Quinolone
References
Further reading
External links
Gracevit グレースビット (PDF) Daiichi Sankyo Co. January 2008.
Fluoroquinolone antibiotics
Chloroarenes
Cyclopropanes
Organofluorides
Spiro compounds
Daiichi Sankyo | Sitafloxacin | [
"Chemistry"
] | 152 | [
"Organic compounds",
"Spiro compounds"
] |
5,522,291 | https://en.wikipedia.org/wiki/Eric%20Brill | Eric Brill is a computer scientist specializing in natural language processing. He created the Brill tagger, a supervised part of speech tagger. Another research paper of Brill introduced a machine learning technique now known as transformation-based learning.
Biography
Brill earned a BA in mathematics from the University of Chicago in 1987 and an MS in computer science from UT Austin in 1989. In 1994, he completed his PhD at the University of Pennsylvania. He was an assistant professor at Johns Hopkins University from 1994 to 1999. In 1999, he left JHU for Microsoft Research, where he developed a system called "Ask MSR" that answered search engine queries written as questions in English; he was quoted in 2004 as predicting the shift of Google's web-page-based search to information-based search. In 2009 he moved to eBay to head its research laboratories.
References
Artificial intelligence researchers
Living people
Year of birth missing (living people)
Place of birth missing (living people)
Johns Hopkins University faculty
University of Texas at Austin College of Natural Sciences alumni
University of Chicago alumni
University of Pennsylvania alumni
Natural language processing researchers
Computational linguistics researchers | Eric Brill | [
"Technology"
] | 224 | [
"Computing stubs",
"Computer specialist stubs"
] |
5,522,946 | https://en.wikipedia.org/wiki/Institute%20of%20Transportation%20Engineers | The Institute of Transportation Engineers (ITE) is an international educational and scientific association of transportation professionals who are responsible for meeting mobility and safety needs. ITE facilitates the application of technology and scientific principles to research, planning, functional design, implementation, operation, policy development, and management for any mode of ground transportation.
History
The organization was formed in October 1930 amid growing public demand for experts to alleviate the traffic congestion and frequent crashes that came with the rapid development of automotive transportation. Various national and regional conferences called for discussions of traffic problems. These discussions led a group of transportation engineers to begin creating the first professional traffic society. A meeting took place in Pittsburgh on October 2, 1930, where a tentative draft of the organization's constitution and by-laws was produced. The constitution and by-laws were later adopted at a meeting in New York on January 20, 1931. The first chapter of the Institute of Traffic Engineers was established, consisting of 30 men, with Ernest P. Goodrich as its first president.
The organization consists of 10 districts, 62 sections, and 30 chapters from various parts of the world.
Transportation Professional Certification Board
ITE founded the Transportation Professional Certification Board Inc. (TPCB) in 1996 as an autonomous certification body. TPCB facilitates multiple testing and certification pathways for transportation professionals.
Standards development
ITE is also a standards development organization designated by the United States Department of Transportation (USDOT). One of the current standardization efforts is the advanced transportation controller. ITE is also known for publishing articles about trip generation, parking generation, parking demand, and various transportation-related material through ITE Journal, a monthly publication.
Criticism
Urbanists such as Jeff Speck have criticized ITE standards for encouraging towns to build more, wider streets making pedestrians less safe and cities less walkable. Donald Shoup in his book The High Cost of Free Parking argues that the ITE Trip Generation Manual estimates give towns the false confidence to regulate minimum parking requirements which reinforce sprawl.
See also
National Transportation Communications for Intelligent Transportation System Protocol (NTCIP)
Canadian Institute of Transportation Engineers
References
External links
Transportation engineering
Organizations based in Washington, D.C.
Road transport organizations
Organizations established in 1930
Engineering organizations
Transportation organizations based in the United States | Institute of Transportation Engineers | [
"Engineering"
] | 454 | [
"Transportation engineering",
"Civil engineering",
"nan",
"Industrial engineering"
] |
5,523,192 | https://en.wikipedia.org/wiki/Herz%20reaction | The Herz reaction, named after the chemist Richard Herz, is the chemical conversion of an aniline to the benzodithiazolium salt by its reaction with disulfur dichloride. The salt is called a Herz salt. Hydrolysis of this Herz salt give the corresponding sodium thiolate, which can be further converted to the 2-aminothiophenol.
The 2-aminothiophenols are suitable for diazotization, giving benzothiadiazoles. Alternatively, the sodium 2-aminothiophenolate can be converted to a 1,3-benzothiazole.
Dyes
Aniline 5 is converted to compound 6 in three steps:
conversion to an ortho-aminothiol through the Herz reaction (aniline 5 and disulfur dichloride), followed by
conversion to an ortho-aminoarylthioglycolacid and
conversion of the aromatic amine function to a nitrile via the Sandmeyer reaction.
In a last step, the nitrile is hydrolysed, resulting in 6. This compound is converted to 7 via a ring-closing reaction and decarboxylation.
The compound (thioindoxyl, 7) is an important intermediate in the organic synthesis of some dyes. Condensation with acenaphthoquinone gives 8, a dye of the so-called Ciba-Scarlet type, while condensation of 7 with isatin results in the thioindigo dye 9.
References
Addition reactions
Heterocycle forming reactions
Name reactions | Herz reaction | [
"Chemistry"
] | 331 | [
"Name reactions",
"Heterocycle forming reactions",
"Organic reactions"
] |
5,523,586 | https://en.wikipedia.org/wiki/Progenitor%20cell | A progenitor cell is a biological cell that can differentiate into a specific cell type. Stem cells and progenitor cells have this ability in common. However, stem cells are less specified than progenitor cells. Progenitor cells can only differentiate into their "target" cell type. The most important difference between stem cells and progenitor cells is that stem cells can replicate indefinitely, whereas progenitor cells can divide only a limited number of times. Controversy about the exact definition remains and the concept is still evolving.
The terms "progenitor cell" and "stem cell" are sometimes equated.
Properties
Most progenitors are identified as oligopotent. From this point of view, they are comparable to adult stem cells, but progenitors are said to be at a further stage of cell differentiation. They are "midway" between stem cells and fully differentiated cells. The kind of potency they have depends on the type of their "parent" stem cell and also on their niche. Some research has found that progenitor cells are mobile and can move through the body, migrating towards the tissue where they are needed. Many properties are shared by adult stem cells and progenitor cells.
Research
Progenitor cells have become a hub for research on a few different fronts. Current research on progenitor cells focuses on two different applications: regenerative medicine and cancer biology. Research on regenerative medicine has focused on progenitor cells, and stem cells, because their cellular senescence contributes largely to the process of aging. Research on cancer biology focuses on the impact of progenitor cells on cancer responses, and the way that these cells tie into the immune response.
The natural aging of cells, called cellular senescence, is one of the main contributors to aging at the organismal level. There are a few different ideas about why aging happens at the cellular level. Telomere length has been shown to correlate positively with longevity. Increased circulation of progenitor cells in the body has also been positively correlated with longevity and regenerative processes. Endothelial progenitor cells (EPCs) are one of the main focuses of this field. They are valuable cells because they directly precede endothelial cells while retaining characteristics of stem cells. These cells can produce differentiated cells to replenish the supply lost in the natural process of aging, which makes them a target for aging-therapy research. This field of regenerative medicine and aging research is still evolving.
Recent studies have shown that haematopoietic progenitor cells contribute to immune responses in the body. They have been shown to respond to a range of inflammatory cytokines. They also contribute to fighting infections by replenishing the resources depleted by the stress an infection places on the immune system. Inflammatory cytokines and other factors released during infections activate haematopoietic progenitor cells to differentiate and replace the lost resources.
Examples
Progenitor cells are characterized, and distinguished from other cells, by their cell markers rather than by their morphological appearance.
Satellite cells found in muscles. They play a major role in muscle cell differentiation and injury recoveries.
Intermediate progenitor cells formed in the subventricular zone. Some of these transit amplifying neural progenitors migrate via rostral migratory stream to the olfactory bulb and differentiate further into specific types of neural cells.
Radial glial cells found in developing regions of the brain, most notably the cortex. These progenitor cells are easily identified by their long radial process.
Bone marrow stromal cells are found in the epidermis and make up 10% of progenitor cells. They are often classed as stem cells due to their high plasticity and potential for unlimited self-renewal.
Periosteum contains progenitor cells that develop into osteoblasts and chondroblasts.
Pancreatic progenitor cells are among the most studied progenitors. They are used in research aimed at developing a cure for type 1 diabetes.
Angioblasts or endothelial progenitor cells (EPC). These are very important for research on fracture and wound healing.
Blast cells are involved in generation of B- and T-lymphocytes, which participate in immune responses.
Boundary cap cells from the neural crest form a barrier between the cells of the central nervous system and cells of the peripheral nervous system. Boundary cap neural crest stem cells promote survival of mutant SOD1 motor neurons.
Development of the human cerebral cortices
Before embryonic day 40 (E40), progenitor cells generate other progenitor cells; after that period, progenitor cells produce only dissimilar mesenchymal stem cell daughters. The cells from a single progenitor cell form a proliferative unit that creates one cortical column; these columns contain a variety of neurons with different shapes.
See also
Endothelial progenitor cell
EndoMac progenitor cell
List of distinct cell types in the adult human body
References
Stem cells
Biotechnology
Cell biology
Developmental biology
Cloning | Progenitor cell | [
"Engineering",
"Biology"
] | 1,062 | [
"Behavior",
"Developmental biology",
"Cell biology",
"Reproduction",
"Cloning",
"Genetic engineering",
"Biotechnology",
"nan"
] |
5,524,046 | https://en.wikipedia.org/wiki/National%20Atmospheric%20Release%20Advisory%20Center | The National Atmospheric Release Advisory Center (NARAC) is located at the University of California's Lawrence Livermore National Laboratory. It is a national support and resource center for planning, real-time assessment, emergency response, and detailed studies of incidents involving a wide variety of hazards, including nuclear, radiological, chemical, biological, and natural emissions.
NARAC provides tools and services to federal, state and local governments, that map the probable spread of hazardous material accidentally or intentionally released into the atmosphere.
NARAC provides atmospheric plume predictions in time for an emergency manager to decide if protective action is necessary to protect the health and safety of people in affected areas.
The NARAC facility includes
Scientific and technical staff who provide support and training for NARAC tools, as well as quality assurance and detailed analysis of atmospheric releases.
24 hour x 7 day on-duty or on-call staff.
Training facility.
An Operations Center with uninterruptible power, backup power generators, and robust computer systems.
Links to over 100 emergency operations centers in the U.S.
A team of research and operational staff with expertise in atmospheric research, operational meteorology, numerical modeling, computer science, software engineering, geographical information systems, computer graphics, hazardous material (radiological, chemical, biological) properties and effects.
The Emergency Response System: Real time dispersion modeling
The NARAC emergency response central modeling system consists of an integrated suite of meteorological and atmospheric dispersion models. The meteorological data assimilation model, ADAPT, constructs fields of such variables as the mean winds, pressure, precipitation, temperature, and turbulence. Non-divergent wind fields are produced by a procedure based on the variational principle and a finite-element discretization. The dispersion model, LODI, solves the 3-D advection-diffusion equation using a Lagrangian stochastic, Monte Carlo method. LODI includes methods for simulating the processes of mean wind advection, turbulent diffusion, radioactive decay and production, bio-agent degradation, first-order chemical reactions, wet deposition, gravitational settling, dry deposition, and buoyant/momentum plume rise.
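As background, the 3-D advection-diffusion equation that such dispersion models solve can be sketched in a generic textbook form (the notation below is illustrative, not NARAC's own formulation):

```latex
% c: pollutant concentration, \mathbf{u}: mean wind field,
% K: (turbulent) eddy-diffusivity tensor,
% S: sources and sinks (emission, decay, deposition, ...)
\[
\frac{\partial c}{\partial t} + \nabla \cdot (\mathbf{u}\, c)
  = \nabla \cdot \left( K\, \nabla c \right) + S
\]
```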
The models are coupled to NARAC databases providing topography, geographical data, chemical-biological-nuclear agent properties and health risk levels, real-time meteorological observational data, and global and mesoscale forecast model predictions. The NARAC modeling system also includes an in-house version of the Naval Research Laboratory's mesoscale weather forecast model COAMPS.
See also
Materials MASINT
Accidental release source terms
Air Resources Laboratory
Air Quality Modeling Group
Atmospheric dispersion modeling
Department of Public Safety
National Center for Atmospheric Research
University Corporation for Atmospheric Research
References
External links and sources
National Atmospheric Release Advisory Center (official website)
Lawrence Livermore National Laboratory (official website)
Atmospheric dispersion modeling
Lawrence Livermore National Laboratory | National Atmospheric Release Advisory Center | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 579 | [
"Atmospheric dispersion modeling",
"Environmental modelling",
"Environmental engineering"
] |
5,524,740 | https://en.wikipedia.org/wiki/Osmosis%20Jones | Osmosis Jones is a 2001 American live-action/animated buddy cop comedy film written by Marc Hyman. Combining live-action sequences directed by the Farrelly brothers and animation directed by Piet Kroon and Tom Sito, the film stars the voices of Chris Rock, Laurence Fishburne, David Hyde Pierce, Brandy Norwood and William Shatner alongside Molly Shannon, Chris Elliott and Bill Murray in live-action roles. It follows the titular character, an anthropomorphic white blood cell, as he teams up with a cold pill to protect his unhealthy human host from a deadly virus he unintentionally contracted.
The film premiered on August 7, 2001, and was released theatrically three days later. It received mixed reviews from critics, who praised the world building, the animation, story, and voice performances, but criticized the inconsistent tone of the live-action portions and overuse of gross-out humor. The film was also a commercial failure, grossing $14 million worldwide against a $70 million budget. Despite the poor financial response, the film was followed by the animated television series Ozzy & Drix, which aired on Kids' WB for two seasons and twenty-six episodes from 2002 to 2004.
Plot
Frank DeTorre is an unkempt zookeeper at the Sucat Memorial Zoo in Rhode Island. He copes with his wife Maggie's death by overeating and foregoing basic hygiene, much to his daughter Shane's concern. Inside his body, white blood cell Osmosis "Ozzy" Jones is an overzealous officer of the "Frank Police Department," the body's center for responses against bodily threats.
Facing an election against Tom Colonic, Mayor Phlegmming doubles down on his junk-food policies so that Frank can attend a food festival in Buffalo, New York, ignoring the effects on Frank's health. This causes Frank to eat a boiled egg covered in chimp saliva, allowing Thrax, a deadly virus known mainly as "The Red Death," to enter his body and inflame his throat. Phlegmming instructs Frank to take a cold pill. The pill, Special Agent Drixenol "Drix" Drixobenzometaphedramine, proceeds to disinfect the throat, covering up evidence of Thrax's arrival. Ozzy is told to assist Drix in his investigation, much to his displeasure. Thrax assumes leadership of a gang of sweat germs and breaks down Frank's mucus dam, nearly killing the duo and causing a runny nose. Ozzy tells Drix how he once caused Frank to vomit on Shane's teacher, Mrs. Boyd, at a school science fair after Frank ate a contaminated oyster; the incident led to Frank's dismissal from his previous job and to Mrs. Boyd filing a restraining order against him. Frank's brother Bob later got him his job at the zoo, and Ozzy was suspended for using unnecessary force.
Ozzy and Drix visit Chill, a flu vaccine and an informant, who directs them to Thrax's hideout in a germ-ridden nightclub in a large zit on Frank's forehead. Ozzy goes undercover and infiltrates Thrax's gang, where he learns that Thrax intends to masquerade as a common cold and use his knowledge of DNA to kill Frank. When Ozzy is discovered, Drix comes to his aid, causing a brawl which culminates in the zit being popped by a grenade. Its pus lands on Mrs. Boyd's lip during a meeting between her and Frank; in response, Phlegmming closes the investigation, dismisses Ozzy from the police force and orders Drix to leave Frank’s body. Back in the real world, Frank prepares to go to Buffalo, much to Shane's disapproval.
Unbeknownst to the duo, Thrax has survived the zit's destruction and launches a lone assault on the hypothalamus where he steals a crucial nucleotide. He then abducts Phlegmming's secretary, Leah Estrogen, and flees to the mouth to escape. His actions disable the body's ability to regulate temperature and Frank develops a dangerous fever. As Frank is hospitalized, Ozzy convinces Drix not to leave and the duo catch up to Thrax and rescue Leah. Thrax induces Frank to sneeze him out of the mouth using pollen. Drix shoots Ozzy after him and he and Thrax both land on Shane's cornea. As the two battle, they end up on one of Shane's false eyelashes. Ozzy tricks Thrax into getting his hand embedded in the lash and escapes just as it falls into a beaker of rubbing alcohol, killing Thrax.
As Frank's temperature climbs to a critical level, he goes into cardiac arrest. Clinging onto one of Shane's tears as she mourns her father, Ozzy falls back into Frank's mouth with the stolen nucleotide, reviving him just in time. Ozzy is then welcomed back into the police force, begins a relationship with Leah, and gains Drix as his new partner, while Frank commits himself to living a healthier lifestyle. Phlegmming is reduced to a janitor in the bowels and is ejected from Frank's body by flatulence after ignoring a notice not to trigger it.
Cast
Bill Murray as Francis "Frank" DeTorre, Shane's widowed father and Bob's brother, in whom the animated portions of the film take place.
Molly Shannon as Mrs. Boyd, Shane's science and gym teacher.
Chris Elliott as Robert "Bob" DeTorre, Frank's brother and Shane's uncle.
Elena Franklin as Shane DeTorre, Frank's 10-year-old daughter and Bob's niece.
Danny Murphy as the zookeeper superintendent
Jack McCullough as a zookeeper
Voices
Chris Rock as Osmosis "Ozzy" Jones, a quick-witted white blood cell with an impulsive personality.
Laurence Fishburne as Thrax, a villainous and deadly anthrax virus who intends to gain infamy by killing Frank within 48 hours of infection.
David Hyde Pierce as Special Agent Drixenol "Drix" Drixobenzometaphedramine, a by-the-book cold pill who becomes Ozzy's best friend and partner.
Brandy Norwood as Leah Estrogen, Mayor Phlegmming's secretary and Ozzy's love interest.
William Shatner as Mayor Phlegmming, the arrogant, incompetent and corrupt mayor of the City of Frank.
Ron Howard as Tom Colonic, Phlegmming's rival for the mayor of the City of Frank who promotes good health for Frank in his campaign.
Joel Silver (uncredited) as the unnamed Police Chief who is Ozzy's boss.
David Ossman (uncredited) as Scabies, a lead germ of the heavies before Thrax kills him.
Twisted Brown Trucker members Kid Rock, Kenny Olsen, Jason Krause, Joe C., Stefanie Eulinberg, Jimmie "Bones" Trombly, and Uncle Kracker provide the voices of the fictional band "Kidney Rock".
Production
Osmosis Jones went through development hell during production. The animated sequences, directed by Tom Sito and Piet Kroon, went into production as planned even being completed ahead of schedule, but acquiring both a director and a star actor for the live-action sequences took a considerable amount of time, until Bill Murray was cast as the main character of Frank, and Peter and Bobby Farrelly stepped in to direct the live-action sequences. As part of their contract, the Farrelly brothers are credited as the primary directors of the film, although they did no supervision of the animated portions of the film. Will Smith was interested in the part of Ozzy, but in the end, his schedule would not permit it.
Principal photography on the live-action scenes took place from April 2 to June 19, 2000, in Plymouth, Massachusetts.
Osmosis Jones was originally rated PG-13 by the MPAA for "crude language" and "bodily humor" in 2000. However, Warner Bros. edited the film to make it family-friendly, and when it was released in 2001, the film was re-rated PG on appeal for "bodily humor".
Release
Marketing
The first trailer for Osmosis Jones was released in front of Pokémon 3: The Movie on April 6, 2001, and features a piece of classical music associated with Stanley Kubrick's film 2001: A Space Odyssey.
Home media
Osmosis Jones was released on VHS and DVD on November 13, 2001 by Warner Home Video.
Reception
Box office
Osmosis Jones had its world premiere screening on August 7, 2001, at the Grauman's Egyptian Theatre before being widely released on August 10, 2001, in 2,305 theaters worldwide. Upon its original release, the film performed poorly, and was the penultimate project produced by Warner Bros. Feature Animation (preceded by The Iron Giant and followed by Looney Tunes: Back in Action, both of which also failed at the box office upon their original releases). The film opened at #7 in its opening weekend at the U.S. box office, accumulating $5,271,248 in its opening week. The film ultimately grossed $13,596,911. The film was a box office bomb, unable to recover its $70 million production budget.
Critical response
On Rotten Tomatoes, Osmosis Jones has an approval rating of 56% based on 111 reviews, with an average rating of 5.5/10. The site's critical consensus reads, "The animated portion of Osmosis is zippy and fun, but the live-action portion is lethargic." On Metacritic, the film has a weighted average score of 57 out of 100, based on 28 critics, indicating "mixed or average reviews". Audiences polled by CinemaScore gave the film an average grade of "B−" on an A+ to F scale.
The animated parts of Osmosis Jones were praised for their plot and fast pace, in contrast with the criticized live-action segments. Robert Koehler of Variety praised the film for its animated and live-action segments intervening, claiming it to be "the most extensive interplay of live-action and animation since Who Framed Roger Rabbit". The New York Times wrote "the film, with its effluvia-festival brand of humor, is often fun, and the rounded, blobby rendering of the characters is likable. But the picture tries too hard to be offensive to all ages. I suspect that even the littlest viewers will be too old for that spit." Roger Ebert gave the film 3 out of 4 and wrote: "Likely to entertain kids, who seem to like jokes about anatomical plumbing. For adults, there is the exuberance of the animation and the energy of the whole movie, which is just plain clever."
The use of gross-out humor in the film's live-action sequences, as seen in most films directed by the Farrelly brothers, was widely criticized. As such, Lisa Alspector of the Chicago Reader described the film as a "cathartically disgusting adventure movie". Maitland McDonagh of TV Guide praised the film's animation and its glimpse of intelligence although did criticize the humor as being "so distasteful". Lisa Schwarzbaum of Entertainment Weekly felt that the film had a diverse premise as it "oscillates between streaky black comedy and sanitary instruction"; however the scatological themes were again pointed out. Jonathan Foreman of New York Post claimed Osmosis Jones to have generic plotting, saying that "It's no funnier than your average grade-school biology lesson and less pedagogically useful than your typical Farrelly brothers comedy." Michael Sragow of Baltimore Sun praised David Hyde Pierce's performance as Drix, claiming him to be "hilarious" and "a take-charge dose of medicine".
The film also received criticism for its use of the Kid Rock song "Cool Daddy, Cool", the full version of which has lyrics promoting statutory rape.
The film received numerous Annie Award nominations including Best Animated Feature (losing to Shrek).
Soundtrack
A soundtrack containing hip hop and R&B music was released on August 7, 2001, by Atlantic Records. The soundtrack failed to chart on the Billboard 200, but Trick Daddy's single "Take It to da House" managed to make it to number 88 on the Billboard Hot 100 singles chart.
Television series
Ozzy & Drix, an animated series that serves as a stand-alone continuation of the film, starring Phil LaMarr and Jeff Bennett as the titular characters, aired on Kids' WB for two seasons and 26 episodes from September 14, 2002 to July 5, 2004.
See also
Once Upon a Time... Life, an animated series with similar anthropomorphic representations of cells and germs.
Cells at Work!, a Japanese manga/anime series with a similar premise.
Inner Workings, a Disney short film that is set in the human body.
References
External links
2001 films
2001 American animated films
2001 action comedy films
2001 children's films
2001 directorial debut films
2000s buddy comedy films
2000s buddy cop films
American films with live action and animation
American action comedy films
American buddy comedy films
American buddy cop films
Fictional microorganisms
Films about immunity
Films about infectious diseases
Films adapted into television shows
Films directed by the Farrelly brothers
Films with screenplays by Marc Hyman
Films scored by Randy Edelman
Films set in Rhode Island
Films shot in Massachusetts
Human body in popular culture
American black comedy films
Warner Bros. films
Warner Bros. animated films
Warner Bros. Animation animated films
2000s English-language films
Animated films about father–daughter relationships
English-language crime films
English-language action comedy films
English-language thriller films
English-language buddy comedy films | Osmosis Jones | [
"Biology"
] | 2,882 | [
"Fictional microorganisms",
"Microorganisms"
] |
5,527,029 | https://en.wikipedia.org/wiki/Lax%20pair | In mathematics, in the theory of integrable systems, a Lax pair is a pair of time-dependent matrices or operators that satisfy a corresponding differential equation, called the Lax equation. Lax pairs were introduced by Peter Lax to discuss solitons in continuous media. The inverse scattering transform makes use of the Lax equations to solve such systems.
Definition
A Lax pair is a pair of matrices or operators $L(t), P(t)$ dependent on time, acting on a fixed Hilbert space, and satisfying Lax's equation:
$$\frac{dL}{dt} = [P, L],$$
where $[P, L] = PL - LP$ is the commutator.
Often, as in the example below, $P$ depends on $L$ in a prescribed way, so this is a nonlinear equation for $L$ as a function of $t$.
Isospectral property
It can then be shown that the eigenvalues and, more generally, the spectrum of $L$ are independent of $t$. The matrices/operators $L$ are said to be isospectral as $t$ varies.
The core observation is that the matrices $L(t)$ are all similar by virtue of
$$L(t) = U(t,s)\, L(s)\, U(t,s)^{-1},$$
where $U(t,s)$ is the solution of the Cauchy problem
$$\frac{d}{dt} U(t,s) = P(t)\, U(t,s), \qquad U(s,s) = I,$$
where I denotes the identity matrix. Note that if $P(t)$ is skew-adjoint, $U(t,s)$ will be unitary.
In other words, to solve the eigenvalue problem $L\psi = \lambda\psi$ at time $t$, it is possible to solve the same problem at time 0, where $L$ is generally known better, and to propagate the solution with the following formulas:
$$\lambda(t) = \lambda(0) \quad \text{(no change in spectrum)},$$
$$\psi(t) = U(t,0)\,\psi(0).$$
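A short sketch of why the similarity relation above implies isospectrality:

```latex
% Using dU/dt = P U and d(U^{-1})/dt = -U^{-1} P:
\[
\frac{d}{dt}\left(U(t,s)^{-1}\, L(t)\, U(t,s)\right)
  = U^{-1}\left(\dot{L} - PL + LP\right)U
  = U^{-1}\left(\dot{L} - [P, L]\right)U = 0,
\]
% so U^{-1} L U stays equal to L(s) for all t, and L(t) is similar to L(s);
% similar operators have the same spectrum.
```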
Through principal invariants
The result can also be shown using the invariants $\operatorname{tr}(L^n)$ for any $n$. These satisfy
$$\frac{d}{dt}\operatorname{tr}(L^n) = n\,\operatorname{tr}\!\left(L^{n-1}[P, L]\right) = 0$$
due to the Lax equation and the cyclic property of the trace, and since the characteristic polynomial can be written in terms of these traces, the spectrum is preserved by the flow.
Link with the inverse scattering method
The above property is the basis for the inverse scattering method. In this method, L and P act on a functional space (thus ψ = ψ(t, x)) and depend on an unknown function u(t, x) which is to be determined. It is generally assumed that u(0, x) is known, and that P does not depend on u in the scattering region, where $\|x\| \to \infty$.
The method then takes the following form:
Compute the spectrum of $L(0)$, giving $\lambda$ and $\psi(0, x)$.
In the scattering region, where $P$ is known, propagate $\psi$ in time by using $\frac{\partial \psi}{\partial t} = P\psi$ with initial condition $\psi(0, x)$.
Knowing $\psi$ in the scattering region, compute $L(t)$ and/or $u(t, x)$.
Spectral curve
If the Lax matrix additionally depends on a complex parameter $z$ (as is the case for, say, sine-Gordon), the equation
$$\det\left(L(z) - \mu I\right) = 0$$
defines an algebraic curve in $\mathbb{C}^2$ with coordinates $(z, \mu)$. By the isospectral property, this curve is preserved under time translation. This is the spectral curve. Such curves appear in the theory of Hitchin systems.
Zero-curvature representation
Any PDE which admits a Lax-pair representation also admits a zero-curvature representation. In fact, the zero-curvature representation is more general and for other integrable PDEs, such as the sine-Gordon equation, the Lax pair refers to matrices that satisfy the zero-curvature equation rather than the Lax equation. Furthermore, the zero-curvature representation makes the link between integrable systems and geometry manifest, culminating in Ward's programme to formulate known integrable systems as solutions to the anti-self-dual Yang–Mills (ASDYM) equations.
Zero-curvature equation
The zero-curvature equations are described by a pair of matrix-valued functions $A_x(x,t), A_t(x,t)$, where the subscripts denote coordinate indices rather than derivatives. Often the $(x,t)$ dependence is through a single scalar function $\varphi(x,t)$ and its derivatives. The zero-curvature equation is then
$$\partial_t A_x - \partial_x A_t + [A_x, A_t] = 0.$$
It is so called as it corresponds to the vanishing of the curvature tensor, which in this case is $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu + [A_\mu, A_\nu]$. This differs from the conventional expression by some minus signs, which are ultimately unimportant.
Lax pair to zero-curvature
For an eigensolution $\psi$ to the Lax operator $L$, one has
$$L\psi = \lambda\psi, \qquad \partial_t \psi = P\psi.$$
If these are enforced, together with time independence of $\lambda$, the Lax equation arises as a consistency equation for an overdetermined system.
The Lax pair $(L, P)$ can be used to define the connection components $(A_x, A_t)$. When a PDE admits a zero-curvature representation but not a Lax equation representation, the connection components are referred to as the Lax pair, and the connection as a Lax connection.
Examples
Korteweg–de Vries equation
The Korteweg–de Vries equation
$$u_t = 6uu_x - u_{xxx}$$
can be reformulated as the Lax equation
$$L_t = [P, L]$$
with
$$L = -\partial_x^2 + u$$ (a Sturm–Liouville operator),
$$P = -4\partial_x^3 + 6u\,\partial_x + 3u_x,$$
where all derivatives act on all objects to the right. Since $L_t$ is multiplication by $u_t$, while a direct computation shows $[P, L]$ is multiplication by $6uu_x - u_{xxx}$, the Lax equation holds exactly when $u$ satisfies KdV. This accounts for the infinite number of first integrals of the KdV equation.
Kovalevskaya top
The previous example used an infinite-dimensional Hilbert space. Examples are also possible with finite-dimensional Hilbert spaces. These include the Kovalevskaya top and its generalization to include an electric field $\vec{h}$.
Heisenberg picture
In the Heisenberg picture of quantum mechanics, an observable $A$ without explicit time dependence satisfies
$$\frac{dA}{dt} = \frac{i}{\hbar}[H, A],$$
with $H$ the Hamiltonian and $\hbar$ the reduced Planck constant. Aside from a factor, observables (without explicit time dependence) in this picture can thus be seen to form Lax pairs together with the Hamiltonian. The Schrödinger picture is then interpreted as the alternative expression in terms of isospectral evolution of these observables.
Further examples
Further examples of systems of equations that can be formulated as a Lax pair include:
Benjamin–Ono equation
One-dimensional cubic non-linear Schrödinger equation
Davey–Stewartson system
Integrable systems with contact Lax pairs
Kadomtsev–Petviashvili equation
Korteweg–de Vries equation
KdV hierarchy
Marchenko equation
Modified Korteweg–de Vries equation
Sine-Gordon equation
Toda lattice
Lagrange, Euler, and Kovalevskaya tops
Belinski–Zakharov transform, in general relativity.
The last is remarkable, as it implies that both the Schwarzschild metric and the Kerr metric can be understood as solitons.
References
P. Lax and R.S. Phillips, Scattering Theory for Automorphic Functions, (1976) Princeton University Press.
Differential equations
Automorphic forms
Spectral theory
Exactly solvable models | Lax pair | [
"Mathematics"
] | 1,234 | [
"Mathematical objects",
"Differential equations",
"Equations"
] |
5,527,836 | https://en.wikipedia.org/wiki/Picture%20%28string%20theory%29 | In superstring theory, a picture is a choice of Fock space or, equivalently, a choice of ground state that defines a representation of the theory's state space. Each picture is denoted by a number, such as the 0 picture or −1 picture, and picture-changing operators transform from one representation to another. The use of these operators in BRST quantization is credited to Daniel Friedan, Emil Martinec, and Stephen Shenker in the 1980s, though it has a predecessor in the dual models of the early 1970s.
The difference between the ground states is indicated by the action of the superghost oscillators on them, and the number of the picture (plus 1/2) reflects the highest superghost oscillator which does not annihilate the ground state.
Further reading
References
String theory | Picture (string theory) | [
"Astronomy"
] | 174 | [
"String theory",
"Astronomical hypotheses"
] |
5,528,110 | https://en.wikipedia.org/wiki/Location%20area%20identity | Inn mobile networks, location area identity (LAI) is a unique identifier assigned to each location area of a public land mobile network (PLMN).
Overview
This internationally unique identifier is used for location updating of mobile subscribers. It is composed of a three-decimal-digit mobile country code (MCC), a two- to three-digit mobile network code (MNC) that identifies a GSM PLMN in that country, and a location area code (LAC), which is a 16-bit number with two special values, thereby allowing 65534 location areas within one GSM PLMN.
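For illustration only (the struct and field names below are hypothetical, not part of any standard API), the composition described above could be modelled as:

```cpp
#include <cstdint>
#include <string>

// Sketch of the LAI composition described above.
struct LocationAreaIdentity {
    std::string mcc;    // mobile country code: three decimal digits
    std::string mnc;    // mobile network code: two or three decimal digits
    std::uint16_t lac;  // location area code: 16 bits; two values are special,
                        // leaving 65534 usable location areas per PLMN
};
```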
Broadcast
The LAI is broadcast regularly through a broadcast control channel (BCCH). A mobile station (e.g., a cell phone) recognizes the LAI and stores it on the SIM card. If the mobile station is moving and notices a change of LAI, it issues a location update request, thereby informing the mobile provider of its new LAI. This allows the provider to locate the mobile station in case of an incoming call.
See also
Mobility management
GPRS roaming exchange
References
External links
Cell Phone Localization
GSM standard
Mobile telecommunications standards | Location area identity | [
"Technology"
] | 240 | [
"Mobile telecommunications",
"Mobile telecommunications standards"
] |
5,528,152 | https://en.wikipedia.org/wiki/Itv.com | itv.com is the main website of ITV plc, the UK's largest commercial television broadcaster which operates 13 out of 15 regions on the ITV network under the ITV1 brand. The website offers the ITVX streaming service, with sections for ITV News, certain ITV1 programmes and competitions. STV, Which runs the only regions not owned by ITV plc, have their own separate website at stv.tv.
History
The URL 'www.itv.com' was created on 31 October 1994, but it was registered elsewhere for some years before Carlton and Granada bought the domain name after merging their respective online services, carlton.com and G-Wizz. These companies have since merged to form ITV plc. The original ITV URL was www.itv.co.uk, which now redirects to itv.com. itv.co.uk was and still is registered to the ITV Network Limited, as opposed to ITV plc. Previous corporate logos appended ".com" when referring to the website. However, this trend stopped with the 2011 redesign, wherein a black ITV logo began being used when referring to the website. ITV.com's slogan used to be "total freedom of entertainment".
Overview
2007 redesign and video on demand
The 2007 redesign of the website featured a media player through which viewers gain access to whole shows, simulcasts, previews and catchups of broadcast content, much of it within a 30-day window, all for free for the viewer, funded by advertising. Users also have access to the network's programme archive, 'behind-the-scenes' programming, games and user-generated content. The site also makes available some 1,000 hours of exclusive archive content, which will grow over time. In addition, the site also introduces interactive services and community elements.
Soaps such as Coronation Street and Emmerdale were the first genre to launch on 12 June 2007, followed by Games on 19 June 2007, Drama and Best of ITV on 29 June 2007, Lifestyle on 9 July 2007, Entertainment on 17 July 2007, Sport on 25 July, with every other section launching on 31 July, making the site now fully launched. Also on 12 June 2007 simulcasts of ITV, ITV2, ITV3 and ITV4 were launched. (However, not all programming will be shown due to rights restrictions.)
In March 2008, the Web site was revamped with changes including the home page and the video player, with a new Catch Up TV section.
2009–10 redesign
On 10 December 2008 it was announced that itv.com would undergo another redesign to introduce more social media features and a greater emphasis on its most popular TV shows with a "fewer, bigger, better" strategy. The redesign was code named 'Project Penguin', and it followed the announcement of ITV's Catch Up service being rebranded to ITV Player.
Parts of the site had already been redesigned, including a new look for the home page. The Coronation Street, Loose Women and football section pages were updated in the early months of 2009. This Morning was relaunched in August. ITV News was relaunched in 2010, coinciding with a new Tonight page. On 11 September 2009, video coding was changed from Microsoft Silverlight to Flash, and on 25 September 2009, ITV relaunched its TV classics section. On 19 November, ITV Player had a complete overhaul of its site and relaunched with a few new add-ons, including a search bar, a light dimmer and bigger video players. A brand new TV shows directory launched on 10 December, which led to the old-style drama, soap and entertainment pages on the site being replaced with similar pages in the TV shows directory. On 4 February a brand new weather page launched, and on 22 February a brand new ITV News website launched. Stories from ITV News correspondents, the latest news programme, news blogs and a meet-the-team page are all on the new website. From summer 2010, the ITV website underwent another overhaul, with tweaks to the layout such as centring pages when used on wide-screen displays, as well as improved pages for various ITV shows with high-resolution images and videos.
Integration of ITV Local
On 4 March 2009, ITV announced that ITV Local would close as a separate business. On 17 March 2009, it did, with the ITV regions integrated into itv.com. The new itv.com/local relaunched during October 2009 and are now prominent in regional news.
2011 design
On 31 March 2011, ITV.com had another new design. The home page was completely redesigned, as were the channel minisites. The TV guide and TV shows sections incorporated the new page headers and footers whilst becoming centred for better presentation on wide-screen displays. The news option in the navigational bar offers national and regional news. The sports section also became centred, with a redesigned itv.com/f1 site launching soon thereafter. Other sports minisites were expected to be redesigned in the same way when their respective new seasons began. Many minisites relating to current programmes were centred, whilst shows returning for new series saw their sites redesigned with high-resolution background images and increased interactive and video content. This redesign continues to build upon the social networking functionality introduced in previous versions of the site.
The ITV Player relaunched on Monday 22 August 2011 with a new look and new features. ITV said there would be further improvements to the ITV Player, such as dedicated applications for Android and iOS devices. Improvements to reliability, subtitles on some programmes and a dedicated CITV section have since launched.
ITV has signalled its intention to trial a micro-payments feature on its website. It is currently unclear what it will charge for, but this could include consumers paying for alternative storylines for soaps and archive content. The video archive section of the site, introduced in the 2007 redesign, was removed when the redesign went live.
2012 news, sport and weather relaunch
ITV relaunched its news website in March 2012, including the sport and weather pages.
2013 rebrand
As ITV underwent a corporate rebrand in January 2013, the ITV.com website was redesigned to complement the new look.
ITV Mobile was an entertainment portal that offered catch-ups and clips from ITV's soaps (Coronation Street and Emmerdale), ITV Sport and ITV's entertainment shows This Morning and The X Factor. It also provided the latest regional weather forecasts. In 2006, ITV began streaming ITV1 to customers of 3 in the United Kingdom. The service was priced at 99p per day's viewing of ITV, with a £5-per-month 18-channel unlimited-viewing TV pack. The service was also broadcast alongside other PSB channels on the short-lived BT Movio service, which used the DAB network in the UK to transmit video channels. Streaming of ITV on mobile was introduced on Vodafone and Orange in 2007. iOS users could also stream ITV1 and ITV2 to their devices via the ITV Mobile site.
NewsFix
From 2007, ITN and ITV Mobile produced an ITV News video service. Bulletins were sent to mobiles twice a day (once in the morning and once in the afternoon), with two bulletins at the weekend. The service previously charged users £2. The service was eventually replaced with the ITV News website and app.
See also
ITVX - ITV's streaming service which is the main feature of itv.com
stv.tv - STV website, serving Central and Northern Scotland
References
External links
British entertainment websites
ITV (TV network)
Mobile content
Television websites | Itv.com | [
"Technology"
] | 1,532 | [
"Mobile content"
] |
5,529,328 | https://en.wikipedia.org/wiki/Process%20consultant | A process consultant is a highly qualified professional that has insights into and understands the psychological and social dynamics of working with various client systems such as whole organizations, groups, and individuals. Part of the field called Human Systems Intervention, process consultation is a philosophy of helping, a general theory and methodology of intervening (e.g. Schein 1999).
Skills
Given the complex nature of intervening, a process consultant's expertise includes the following (and many other) skills:
Works concomitantly with groups and individuals (managers/directors) towards a larger change process such as strategic visioning, strategic planning, etc.
Based on the context, selects from a variety of methods, tools and change theories a facilitative intervention that will most benefit the client system.
Stays aware of covert organizational processes, group dynamics, and interpersonal issues.
Role in organizational development
In organization development, a process consultant is a specialized type of consultant who acts as a facilitator to help groups deal with issues involving the process in a meeting, rather than with the actual tasks themselves.
Role in small group development
A process consultant may be used at any time during the Stages of Group Development. Occasionally, a process consultant is used when a group is in either its formative stage or its normative stage. However, more often than not, they participate when the group is in conflict.
Role in conflict resolution
Often a group finds itself in conflict over facts, goals, methods or values. It is the role of the process consultant to help the group reach consensus over the type of conflict it faces.
Once the type of conflict is identified, the process consultant then helps the group work through the steps required to break the impasse.
It is important to note that the process consultant's role is not to solve the problem, but to help the group solve its own problem. This is because it is the group, not the consultant, that will have to live with the consequences of its decision.
Role in conflict management
Occasionally, due to the nature of conflict, the process consultant may need to guide the group toward conflict management rather than conflict resolution.
Techniques used
Initially a process consultant will not lead or participate in a group meeting, but rather will act as an observer. During this time, they observe the group dynamics to determine what interpersonal relationships may contribute to the group's issues.
At some point, they will begin to actively participate in the meeting, by asking clarifying questions or paraphrasing. Eventually, they will make their observations known by giving the group feedback.
Education required
To enter this field, a background in psychology and small group learning is helpful. Experience in reading body language and strong analytical skills are also useful. However, receiving some training in experiential education will probably be the most beneficial.
References
Block, Peter (1981). Flawless Consulting. University Associates, Inc.
Schein, Edgar (1999). Process Consultation Revisited. Reading, MA: Addison-Wesley.
Business process management
Conflict (process)
Consulting occupations | Process consultant | [
"Biology"
] | 609 | [
"Behavior",
"Aggression",
"Human behavior",
"Conflict (process)"
] |
5,529,490 | https://en.wikipedia.org/wiki/Zeek | Zeek is a free and open-source software network analysis framework. Vern Paxson began development work on Zeek in 1995 at Lawrence Berkeley National Lab. Zeek is a network security monitor (NSM) but can also be used as a network intrusion detection system (NIDS). The Zeek project releases the software under the BSD license.
Output
Zeek's purpose is to inspect network traffic and generate a variety of logs describing the activity it sees. A complete list of log files is available at the project documentation site.
Log example
The following is an example of one entry in JSON format from the conn.log:
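The entry below is illustrative: the values are invented, and the field names follow the standard conn.log schema.

```json
{
  "ts": 1591367999.305988,
  "uid": "CMdzit1AMNsmfAIiQc",
  "id.orig_h": "192.168.4.76",
  "id.orig_p": 36844,
  "id.resp_h": "192.168.4.1",
  "id.resp_p": 53,
  "proto": "udp",
  "service": "dns",
  "duration": 0.06685,
  "orig_bytes": 62,
  "resp_bytes": 141,
  "conn_state": "SF",
  "missed_bytes": 0,
  "history": "Dd",
  "orig_pkts": 2,
  "orig_ip_bytes": 118,
  "resp_pkts": 2,
  "resp_ip_bytes": 197
}
```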
Threat hunting
One of Zeek's primary use cases involves cyber threat hunting.
Name
The principal author, Paxson, originally named the software "Bro" as a warning regarding George Orwell's Big Brother from the novel Nineteen Eighty-Four. In 2018 the project leadership team decided to rename the software. At LBNL in the 1990s, the developers ran their sensors as a pseudo-user named "zeek", thereby inspiring the name change in 2018.
Zeek deployment
Security teams identify locations on their network where they desire visibility. They deploy one or more network taps or enable switch SPAN ports for port mirroring to gain access to traffic. They deploy Zeek on servers with access to those visibility points. The Zeek software on the server decodes network traffic into logs, writing them to local disk or remote storage.
Zeek application architecture and analyzers
Zeek's event engine analyzes live or recorded network traffic to generate neutral event logs. Zeek uses common ports and dynamic protocol detection (involving signatures as well as behavioral analysis) to identify network protocols.
Developers write Zeek policy scripts in the Turing complete Zeek scripting language. By default Zeek logs information about events to files, but analysts can also configure Zeek to take other actions, such as sending an email, raising an alert, executing a system command, updating an internal metric, or calling another Zeek script.
Zeek analyzers perform application layer decoding, anomaly detection, signature matching and connection analysis. Zeek's developers designed the software to incorporate additional analyzers. The latest method for creating new protocol analyzers relies on the Spicy framework.
References
External links
Bro: A System for Detecting Network Intruders in Real-Time – Vern Paxson
Zeek Nedir? Nasıl Kurulur? ("What is Zeek? How Is It Installed?") – KernelBlog, Emre Yılmaz (in Turkish)
Free security software
Computer security software
Unix security software
Intrusion detection systems
Software using the BSD license | Zeek | [
"Engineering"
] | 536 | [
"Cybersecurity engineering",
"Computer security software"
] |
5,529,543 | https://en.wikipedia.org/wiki/EMILE | EMILE is the Early Mac Image Loader, a bootloader for loading Linux on Macintosh computers that have m68k processors. It was written by Laurent Vivier, and is meant to eventually replace the Penguin booter that is more usually in use.
In contrast to the Penguin booter, which requires a working classic Mac OS installation, EMILE modifies the boot block on a hard disk to boot Linux directly.
External links
EMILE site on SourceForge
EMILE mirror on GitHub
Free boot loaders | EMILE | [
"Technology"
] | 104 | [
"Operating system stubs",
"Computing stubs"
] |
5,529,638 | https://en.wikipedia.org/wiki/Stress%E2%80%93energy%E2%80%93momentum%20pseudotensor | In the theory of general relativity, a stress–energy–momentum pseudotensor, such as the Landau–Lifshitz pseudotensor, is an extension of the non-gravitational stress–energy tensor that incorporates the energy–momentum of gravity. It allows the energy–momentum of a system of gravitating matter to be defined. In particular it allows the total of matter plus the gravitating energy–momentum to form a conserved current within the framework of general relativity, so that the total energy–momentum crossing the hypersurface (3-dimensional boundary) of any compact space–time hypervolume (4-dimensional submanifold) vanishes.
Some people (such as Erwin Schrödinger) have objected to this derivation on the grounds that pseudotensors are inappropriate objects in general relativity, but the conservation law only requires the use of the 4-divergence of a pseudotensor which is, in this case, a tensor (which also vanishes). Mathematical developments in the 1980s have allowed pseudotensors to be understood as sections of jet bundles, thus providing a firm theoretical foundation for the concept of pseudotensors in general relativity.
Landau–Lifshitz pseudotensor
The Landau–Lifshitz pseudotensor, a stress–energy–momentum pseudotensor for gravity, when combined with terms for matter (including photons and neutrinos), allows the energy–momentum conservation laws to be extended into general relativity.
Requirements
Landau and Lifshitz were led by four requirements in their search for a gravitational energy momentum pseudotensor, $t_{LL}^{\mu\nu}$:
that it be constructed entirely from the metric tensor, so as to be purely geometrical or gravitational in origin.
that it be index symmetric, i.e. $t_{LL}^{\mu\nu} = t_{LL}^{\nu\mu}$ (to conserve angular momentum)
that, when added to the stress–energy tensor of matter, $T^{\mu\nu}$, its total ordinary 4-divergence ($\partial_\mu$, not $\nabla_\mu$) vanishes so that we have a conserved expression for the total stress–energy–momentum. (This is required of any conserved current.)
that it vanish locally in an inertial frame of reference (which requires that it only contains first order and not second or higher order derivatives of the metric). This is because the equivalence principle requires that the gravitational force field, the Christoffel symbols, vanish locally in some frames. If gravitational energy is a function of its force field, as is usual for other forces, then the associated gravitational pseudotensor should also vanish locally.
Definition
Landau and Lifshitz showed that there is a unique construction that satisfies these requirements, namely
$$t_{LL}^{\mu\nu} = -\frac{1}{\kappa}\,G^{\mu\nu} + \frac{1}{2\kappa(-g)}\,\partial_\alpha\partial_\beta\!\left[(-g)\left(g^{\mu\nu}g^{\alpha\beta} - g^{\mu\alpha}g^{\nu\beta}\right)\right]$$
where:
$G^{\mu\nu}$ is the Einstein tensor (which is constructed from the metric)
$g^{\mu\nu}$ is the inverse of the metric tensor, $g_{\mu\nu}$
$g$ is the determinant of the metric tensor; it is negative for the Lorentzian signature used, hence its appearance as $(-g)$
$\partial_\alpha, \partial_\beta$ are partial derivatives, not covariant derivatives
$\kappa = 8\pi G c^{-4}$ is the Einstein gravitational constant
G is the Newtonian constant of gravitation
Verification
Examining the four requirement conditions, we can see that the first three are relatively easy to demonstrate:
Since the Einstein tensor, $G^{\mu\nu}$, is itself constructed from the metric, so therefore is $t_{LL}^{\mu\nu}$.
Since the Einstein tensor, $G^{\mu\nu}$, is symmetric, so is $t_{LL}^{\mu\nu}$, since the additional terms are symmetric by inspection.
The Landau–Lifshitz pseudotensor is constructed so that when added to the stress–energy tensor of matter, $T^{\mu\nu}$, its total 4-divergence vanishes: $\partial_\nu\left[(-g)\left(T^{\mu\nu} + t_{LL}^{\mu\nu}\right)\right] = 0$. This follows from the cancellation of the Einstein tensor, $G^{\mu\nu}$, with the stress–energy tensor, $T^{\mu\nu}$, by the Einstein field equations; the remaining term vanishes algebraically due to the commutativity of partial derivatives applied across antisymmetric indices.
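As a sketch of this argument (using the form of the definition given above; factor conventions vary between sources), the field equations reduce the combined quantity to pure second derivatives, whose ordinary divergence then vanishes identically:
$$\partial_\nu\left[(-g)\left(T^{\mu\nu} + t_{LL}^{\mu\nu}\right)\right] = \frac{1}{2\kappa}\,\partial_\nu\partial_\alpha\partial_\beta\!\left[(-g)\left(g^{\mu\nu}g^{\alpha\beta} - g^{\mu\alpha}g^{\nu\beta}\right)\right] = 0,$$
since the bracketed expression is antisymmetric under the interchange $\nu \leftrightarrow \alpha$ while $\partial_\nu\partial_\alpha$ is symmetric.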
The Landau–Lifshitz pseudotensor appears to include second derivative terms in the metric, but in fact the explicit second derivative terms in the pseudotensor cancel with the implicit second derivative terms contained within the Einstein tensor, $G^{\mu\nu}$. This is more evident when the pseudotensor is directly expressed in terms of the metric tensor or the Levi-Civita connection; only the first derivative terms in the metric survive, and these vanish where the frame is locally inertial at any chosen point. As a result, the entire pseudotensor vanishes locally (again, at any chosen point), which demonstrates the delocalisation of gravitational energy–momentum.
Cosmological constant
When the Landau–Lifshitz pseudotensor was formulated, it was commonly assumed that the cosmological constant, $\Lambda$, was zero. Nowadays, that assumption is suspect, and the expression frequently gains a $\Lambda$ term, giving:
$$t_{LL}^{\mu\nu} = -\frac{1}{\kappa}\left(G^{\mu\nu} + \Lambda g^{\mu\nu}\right) + \frac{1}{2\kappa(-g)}\,\partial_\alpha\partial_\beta\!\left[(-g)\left(g^{\mu\nu}g^{\alpha\beta} - g^{\mu\alpha}g^{\nu\beta}\right)\right]$$
This is necessary for consistency with the Einstein field equations.
Metric and affine connection versions
Landau and Lifshitz also provide two equivalent but longer expressions for the Landau–Lifshitz pseudotensor:
Metric tensor version:
Affine connection version:
This definition of energy–momentum is covariantly applicable not just under Lorentz transformations, but also under general coordinate transformations.
Einstein pseudotensor
This pseudotensor was originally developed by Albert Einstein.
Paul Dirac showed that the mixed Einstein pseudotensor ${t_\mu}^{\nu}$ satisfies a conservation law of the form
$$\partial_\nu\!\left[\sqrt{-g}\,\left({T_\mu}^{\nu} + {t_\mu}^{\nu}\right)\right] = 0.$$
Clearly this pseudotensor for gravitational stress–energy is constructed exclusively from the metric tensor and its first derivatives. Consequently, it vanishes at any event when the coordinate system is chosen to make the first derivatives of the metric vanish because each term in the pseudotensor is quadratic in the first derivatives of the metric tensor field. However it is not symmetric, and is therefore not suitable as a basis for defining the angular momentum.
See also
Bel–Robinson tensor
Gravitational wave
Notes
References
Tensors
Tensors in general relativity | Stress–energy–momentum pseudotensor | [
"Physics",
"Engineering"
] | 1,137 | [
"Tensors in general relativity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
5,529,740 | https://en.wikipedia.org/wiki/Sex%20allocation | Sex allocation is the allocation of resources to male versus female reproduction in sexual species. Sex allocation theory tries to explain why many species produce equal number of males and females.
In dioecious species, where individuals are either male or female for their entire lifetimes, the allocation decision lies between producing male or female offspring. In sequential hermaphrodites, where individuals function as one sex early in life and then switch to the other, the allocation decisions lie in which sex to be first and when to change sex. Animals may be dioecious or sequential hermaphrodites. Sex allocation theory also applies to flowering plants, which can be dioecious, be simultaneous hermaphrodites, have unisexual and hermaphroditic plants in the same population, have unisexual and hermaphroditic flowers on the same plant, or have only hermaphroditic flowers.
Fisher's principle and equal sex allocation
R.A. Fisher developed an explanation, known as Fisher's principle, of why sex ratios in many animals are 1:1. If there were 10 times more females in a population than males, a male would on average be able to mate with more partners than a female would. Parents who preferentially invested in producing male offspring would have a fitness advantage over those who preferentially produced females. This strategy would result in increasing numbers of males in the population, thus eliminating the original advantage of males. The same would occur if there were originally more males than females in a population. The evolutionarily stable strategy (ESS) in this case would be for parents to produce a 1:1 ratio of males and females.
This explanation assumed that males and females are equally costly for parents to produce. However, if one sex were more costly than the other, parents would allot their resources to their offspring differentially. If parents could have two daughters for the same cost as one male because males took twice the energy to rear, parents would preferentially invest in daughters. Females would increase in the population until the sex ratio was 2 females: 1 male, meaning that a male could have twice the offspring a female could. As a result, males will be twice as costly while producing twice as many offspring, so that males and females provide the same proportion of offspring in proportion to the investment the parent allotted, resulting in an ESS. Therefore, parents allot equal investment of effort in both sexes. More generally, the expected sex ratio is the ratio of the allotted investment between the sexes, and is sometimes referred to as Fisherian sex ratios.
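Fisher's argument can be made concrete numerically. The sketch below is a minimal Shaw–Mohler-style fitness calculation written for the two scenarios above – an illustration only, with every name and parameter value an assumption. It locates the resident allocation at which shifting investment toward sons stops paying:

```python
import numpy as np

def fitness(s_mut, s_res, cost_son=1.0, cost_daughter=1.0, budget=100.0):
    """Fitness of a rare mutant parent spending a fraction s_mut of a fixed
    budget on sons when the resident population spends s_res. A son's expected
    mating success scales with the resident daughter/son ratio."""
    daughters = (1.0 - s_mut) * budget / cost_daughter
    sons = s_mut * budget / cost_son
    res_daughters = (1.0 - s_res) * budget / cost_daughter
    res_sons = s_res * budget / cost_son
    return daughters + sons * (res_daughters / res_sons)

grid = np.linspace(0.05, 0.95, 901)
for cost_son in (1.0, 2.0):  # sons equally costly, then twice as costly as daughters
    # At the ESS the marginal payoff of shifting the budget toward sons is zero.
    gain = [fitness(s + 1e-4, s, cost_son) - fitness(s, s, cost_son) for s in grid]
    ess = grid[int(np.argmin(np.abs(gain)))]
    sons = ess * 100.0 / cost_son
    daughters = (1.0 - ess) * 100.0
    print(f"son cost {cost_son}: ESS puts {ess:.2f} of investment into sons "
          f"-> {sons:.0f} sons : {daughters:.0f} daughters")
```

With equal costs the ESS is a 1:1 numerical sex ratio; with sons twice as costly, half the budget still goes to each sex, yielding one son for every two daughters – the equal-investment outcome described above.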
However, there are many examples of organisms whose sex ratios do not match the expected 1:1 ratio or the equivalent investment ratio. The idea of equal allocation fails to explain these deviations because it assumes that relatives do not interact with one another and that the environment has no effect.
Interactions between relatives
W.D. Hamilton hypothesized that non-Fisherian sex ratios can result when relatives interact with one another. He argued that if relatives experienced competition for resources, or benefited from the presence of other relatives, then sex ratios would become skewed. This led to a great deal of research on whether competition or cooperation between relatives results in differential sex ratios that do not support Fisher's principle.
Local resource competition
Local resource competition (LRC) was first hypothesized by Anne Clark. She argued that the African bushbaby (Otolemur crassicaudatus) demonstrated a male-biased sex ratio because daughters associated with mothers for longer periods of time than did sons. Since sons disperse further from the maternal territory than do daughters, they do not remain on the territories and do not act as competitors with mothers for resources. Clark predicted that the effect of the LRC on sex allocation resulted in a mother investing preferentially in male offspring to reduce competition between daughters and herself. By producing more male offspring that disperse and do not compete with her, the mother will have a greater fitness than she would if she had produced the ratio predicted by the equal investment theory.
Further research has found that LRC may influence the sex ratio in birds. Passerine birds demonstrate largely daughter-based dispersal, while ducks and geese demonstrate mainly male-based dispersal. Local resource competition has been hypothesized to be the reason that passerine birds are more likely to be female, while ducks and geese are more likely to have male offspring.
Other studies have hypothesized that LRC is likely to influence sex ratios in roe deer, as well as primates. Consistent with these hypotheses, the sex-ratios in roe deer and several primates have been found to be skewed towards the sex that does not compete with mothers.
Local mate competition
Local mate competition (LMC) can be considered a special type of LRC. Fig wasps lay fertilized eggs within figs, and no females disperse. In some species, males are wingless upon hatching and cannot leave the fig to seek mates elsewhere. Instead, males compete with their brothers in order to fertilize their sisters in the figs; after fertilization, the males die. In such a case, mothers would preferentially adjust the sex ratio to be female-biased, as only a few males are needed in order to fertilize all of the females. If there were too many males, competition between the males will result in some failing to mate, and the production of those males would therefore be a waste of the mother's resources. A mother that allotted more resources to the production of female offspring would therefore have greater fitness than one who produced fewer females.
Support for LMC influencing sex ratio was found by examining the sex ratios of different fig wasps. Species with wingless males that can only mate with sisters were predicted to have higher rates of female-biased sex ratios, while species with winged males that can travel to other figs to fertilize non-related females were predicted to have less biased sex ratios. Consistent with LMC influencing sex ratio, these predictions were found to be true. In the latter case, LMC is reduced, and investment in male offspring is less likely to be “wasted” from the mother's point of view.
Research on LMC has focused on insects, such as wasps and ants, because they often face strong LMC. Other animals that often disperse from natal groups are much less likely to experience LMC.
Local resource enhancement
Local resource enhancement (LRE) occurs when relatives help one another instead of competing with one another in LRC or LMC. In cooperative breeders, mothers are assisted by their previous offspring in raising new offspring. In animals with these systems, females are predicted to preferentially have offspring that are the helping sex if there are not enough helpers. However, if there are already enough helpers, it is predicted that females would invest in offspring of the other sex, as this would allow them to increase their own fitness by having dispersing offspring with a greater rate of reproduction than the helpers. It is also predicted that the strength of the selection upon the mothers to adjust the sex ratio of their offspring depends upon the magnitude of the benefits they gain from their helpers.
These predictions were found to be true in African wild dogs, where females disperse more rapidly than males from their natal packs. Males are therefore more helpful towards their mothers, as they remain in the same pack as her and help provide food for her and her new offspring. The LRE the males provide is predicted to result in a male-biased sex ratio, which is the pattern observed in nature. Consistent with predictions of LRE influencing sex ratios, African wild dog mothers living in smaller packs were seen to produce more male-biased sex ratios than mothers in a larger pack, since they had fewer helpers and would benefit more from additional helpers than mothers living in larger packs.
Evidence for LRE leading to sex ratios biased in favor of helpers has also been found in a number of other animals, including the Seychelles warbler (Acrocephalus sechellensis) and various primates.
Trivers–Willard hypothesis
The Trivers-Willard hypothesis provides a model for sex allocation that deviates from Fisherian sex ratios. Trivers and Willard (1973) originally proposed a model that predicted individuals would skew the sex ratio of males to females in response to certain parental conditions, which was supported by evidence from mammals. Though individuals may not consciously decide to have fewer or more offspring of the same sex, their model suggested that individuals could be selected to adjust the sex ratio of offspring produced based on their ability to invest in offspring, if fitness returns for male and female offspring differ based on these conditions. While the Trivers-Willard hypothesis applied specifically to instances where preferentially having female offspring as maternal condition deteriorates was more advantageous, it spurred a great deal of further research on how environmental conditions can differentially affect sex ratios, and there are now a number of empirical studies that have found individuals adjust their ratio of male and female offspring.
Food availability
In many species, the abundance of food in a given habitat dictates the level of parental care and investment in offspring. This, in turn, influences the development and viability of the offspring. If food availability has differential effects on the fitness of male and female offspring, then selection should shift offspring sex ratios based on specific conditions of food availability. Appleby (1997) proposed evidence for conditional sex allocation in a study done on tawny owls (Strix aluco). In tawny owls, a female-biased sex ratio was observed in breeding territories where there was an abundance of prey (field voles). In contrast, in breeding territories with a scarcity of prey, a male-biased sex ratio was seen. This appeared to be adaptive because females demonstrated higher reproductive success when prey density was high, whereas males did not appear to have any reproductive advantage with high prey density. Appleby hypothesized that parents should adjust the sex ratio of their offspring based on the availability of food, with a female sex bias in areas of high prey density and a male sex bias in areas of low prey density. The results support the Trivers-Willard model, as parents produced more of the sex that benefited most from plentiful resources.
Wiebe and Bortolotti (1992) observed sex ratio adjustment in a sexually dimorphic (by size) population of American kestrels (Falco sparverius). In general, the larger sex in a species requires more resources than the smaller sex during development and is thus more costly for parents to raise. Wiebe and Bortolotti provided evidence that kestrel parents produced more of the smaller (less costly) sex given limited food resources and more of the larger (more costly) sex given an abundance of food resources. These findings modify the Trivers-Willard hypothesis by suggesting sex ratio allocation can be biased by sexual size dimorphism as well as parental conditions.
Maternal condition or quality
A study by Clutton-Brock (1984) on red deer (Cervus elaphus), a polygynous species, examined the effects of dominance rank and maternal quality on female breeding success and sex ratios of offspring. Based on the Trivers-Willard model, Clutton-Brock hypothesized that the sex ratio of mammalian offspring may change according to maternal condition, where high-ranked females should produce more male offspring and low-ranked females should produce more female offspring. This is based on the assumption that high-ranked females are in better condition, so that they have more access to resources and can afford to invest more in their offspring. In the study, high-ranked females were shown to give birth to healthier offspring than low-ranked females, and the offspring of high-ranked females also developed into healthier adults. Clutton-Brock suggested that the advantage of being a healthy adult was more beneficial for male offspring because stronger males are more capable of defending harems of females during breeding seasons. Therefore, Clutton-Brock proposed that males produced by females in better conditions are more likely to have greater reproductive success in the future than males produced by females in poorer conditions. These findings support the Trivers-Willard hypothesis, as parental quality affected the sex of their offspring, in such a way as to maximize their reproductive investment.
Mate attractiveness and quality
Similar to the idea behind the Trivers-Willard hypothesis, studies show that mate attractiveness and quality may also explain differences in sex ratios and offspring fitness. Weatherhead and Robertson (1979) predicted that females bias the sex ratio of their offspring in favor of sons if they are mated to more attractive and better-quality males. This is related to Fisher's "sexy son" hypothesis, which suggests a causal link between male attractiveness and the quality of sons based on the inheritance of "good genes" that should improve the reproductive success of sons. Fawcett (2007) predicted that it is adaptive for females to adjust their sex ratio to favor sons in response to attractive males. Based on a computer model, he proposed that if sexual selection favors costly male traits, i.e. ornamentation, and costly female preferences, females should produce more male offspring when they mate with an attractive male compared to an unattractive male. Fawcett proposed that there is a direct correlation between female bias for male offspring and attractiveness of their mate. Computer models, however, simplify the costs and constraints present in nature, and selection may be weaker in natural populations than it was in Fawcett's simulations. While his results provide support for the Trivers-Willard hypothesis that animals adaptively adjust the sex ratio of offspring due to environmental variables, further empirical studies are needed to see if sex ratio is adjusted in response to mate attractiveness.
Sex change
The principles of the Trivers-Willard hypothesis can also be applied to sequentially hermaphroditic species, in which individuals undergo sex change. Ghiselin (1969) proposed that individuals change from one sex to another as they age and grow because larger body size provides a greater advantage to one sex than the other. For example, in the bluehead wrasse, the largest males have 40 times the mating success of smaller ones. Thus, as individuals age, they can maximize their mating success by changing from female to male. Removal of the largest males on a reef results in the largest females changing sex to male, supporting the hypothesis that competition for mating success drives sex change.
Sex allocation in plants
A great deal of research has focused on sex allocation in plants to predict when plants would be dioecious, simultaneous hermaphrodites, or demonstrate both in the same population or plant. Research has also examined how outcrossing, which occurs when individual plants can fertilize and be fertilized by other individuals or selfing (self-pollination) affect sex allocation.
Selfing in simultaneous hermaphrodites has been predicted to favor allocating fewer resources to the male function, as it is hypothesized to be more advantageous for hermaphrodites to invest in female functions, so long as they have enough males to fertilize themselves. Consistent with this hypothesis, as selfing in wild rice (Oryza perennis) increases, the plants allocate more resources to the female function than to male.
Charlesworth and Charlesworth (1981) applied similar logic to both outcrossing and selfing species, and created a model that predicted when dioecy would be favored over hermaphroditism, and vice versa. The model predicted that dioecy evolves if investing in one sexual function has accelerating fitness benefits than investing in both sexual functions, while hermaphroditism evolves if investing in one sexual function had decreasingly lower fitness benefits. It has been difficult to measure exactly how much fitness individual plants are able to gain from investing in one or both sexual functions, and further empirical research is needed to support this model.
Mechanisms of sex allocation decisions
Depending on the mechanism of sex determination for a species, decisions about sex allocation may be carried out in different ways.
In haplodiploid species, like bees and wasps, females control the sex of offspring by deciding whether or not to fertilize each egg. If she fertilizes the egg, it will become diploid and develop as a female. If she does not fertilize the egg, it will remain haploid and develop as a male. In an elegant experiment, researchers showed that female N. vitripennis parasitoid wasps altered the sex ratio in their offspring in response to the environmental cue of eggs laid by other females.
Historically, many theorists have argued that the Mendelian nature of chromosomal sex determination limits opportunities for parental control of offspring sex ratio. However, adaptive adjustment of sex ratio has been found among many animals, including primates, red deer, and birds. The exact mechanism of such allocation is unknown, but several studies indicate that hormonal, pre-ovulatory control may be responsible. For example, higher levels of follicular testosterone in mothers, signifying maternal dominance, correlated with a higher chance of forming a male embryo in cows. Higher corticosterone levels in breeding female Japanese quails were associated with female-biased sex ratios at laying.
In species that have environmental sex determination, like turtles and crocodiles, the sex of an offspring is determined by environmental features such as temperature and day length. The direction of bias differs between species. For example, in turtles with ESD, males are produced at lower temperatures, but in many alligators, males are produced at higher temperatures.
References
Sex-determination systems | Sex allocation | [
"Biology"
] | 3,611 | [
"Sex-determination systems",
"Sex"
] |
5,529,757 | https://en.wikipedia.org/wiki/Fundamental%20thermodynamic%20relation | In thermodynamics, the fundamental thermodynamic relation are four fundamental equations which demonstrate how four important thermodynamic quantities depend on variables that can be controlled and measured experimentally. Thus, they are essentially equations of state, and using the fundamental equations, experimental data can be used to determine sought-after quantities like G (Gibbs free energy) or H (enthalpy). The relation is generally expressed as a microscopic change in internal energy in terms of microscopic changes in entropy, and volume for a closed system in thermal equilibrium in the following way.
Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume.
This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy H as
$$\mathrm{d}H = T\,\mathrm{d}S + V\,\mathrm{d}P$$
in terms of the Helmholtz free energy F as
$$\mathrm{d}F = -S\,\mathrm{d}T - P\,\mathrm{d}V$$
and in terms of the Gibbs free energy G as
$$\mathrm{d}G = -S\,\mathrm{d}T + V\,\mathrm{d}P.$$
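The four forms are connected by Legendre transformations, which can be checked symbolically. The sketch below uses a deliberately simple, hypothetical fundamental equation U(S, V) – not any physical substance – to verify that the enthalpy constructed from it satisfies dH = T dS + V dP:

```python
import sympy as sp

S, V, P = sp.symbols('S V P', positive=True)

# Hypothetical fundamental equation U(S, V), chosen only to keep the algebra readable.
U = sp.exp(S) / V

T = sp.diff(U, S)           # T  =  (dU/dS)_V
P_of_SV = -sp.diff(U, V)    # P  = -(dU/dV)_S, the equation of state

# Invert the equation of state to get V(S, P), then Legendre-transform U to H = U + P V.
V_of_SP = [r for r in sp.solve(sp.Eq(P, P_of_SV), V) if r.is_positive][0]
H = (U + P * V).subs(V, V_of_SP)

# dH = T dS + V dP  =>  (dH/dS)_P = T  and  (dH/dP)_S = V
assert sp.simplify(sp.diff(H, S) - T.subs(V, V_of_SP)) == 0
assert sp.simplify(sp.diff(H, P) - V_of_SP) == 0
print("dH = T dS + V dP holds for the toy U(S, V)")
```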
The first and second laws of thermodynamics
The first law of thermodynamics states that:
$$\mathrm{d}U = \delta Q - \delta W$$
where $\delta Q$ and $\delta W$ are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively.
According to the second law of thermodynamics we have for a reversible process:
$$\mathrm{d}S = \frac{\delta Q}{T}$$
Hence:
$$\delta Q = T\,\mathrm{d}S$$
By substituting this into the first law, we have:
$$\mathrm{d}U = T\,\mathrm{d}S - \delta W$$
Letting $\delta W = P\,\mathrm{d}V$ be reversible pressure-volume work done by the system on its surroundings, we have:
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V$$
This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions that depend only on the initial and final states of a thermodynamic process, the above relation holds also for non-reversible changes. If the composition, i.e. the amounts of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to:
$$\mathrm{d}U = T\,\mathrm{d}S - P\,\mathrm{d}V + \sum_i \mu_i\,\mathrm{d}N_i$$
The $\mu_i$ are the chemical potentials corresponding to particles of type $i$.
If the system has more external parameters than just the volume that can change, the fundamental thermodynamic relation generalizes to
$$\mathrm{d}U = T\,\mathrm{d}S + \sum_i X_i\,\mathrm{d}x_i$$
Here the $X_i$ are the generalized forces corresponding to the external parameters $x_i$. (The negative sign used with pressure is unusual and arises because pressure represents a compressive stress that tends to decrease volume. Other generalized forces tend to increase their conjugate displacements.)
Relationship to statistical mechanics
The fundamental thermodynamic relation and statistical mechanical principles can be derived from one another.
Derivation from statistical mechanical principles
The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system.
However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of entropy of an isolated system containing an amount of energy $E$ is:
$$S = k_{\mathrm{B}} \log\!\left[\Omega\!\left(E\right)\right]$$
where $\Omega(E)$ is the number of quantum states in a small interval between $E$ and $E + \delta E$. Here $\delta E$ is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of $\delta E$. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on $\delta E$. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size $\delta E$.
Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have:
$$\delta Q = T\,\mathrm{d}S$$
The fundamental assumption of statistical mechanics is that all the $\Omega(E)$ states at a particular energy are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as:
$$\frac{1}{k_{\mathrm{B}} T} \equiv \beta \equiv \frac{\mathrm{d}\log\!\left[\Omega\!\left(E\right)\right]}{\mathrm{d}E}$$
This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
The generalized force, X, corresponding to the external parameter x is defined such that $X\,\mathrm{d}x$ is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate $E_r$ is given by:
$$X = -\frac{\mathrm{d}E_r}{\mathrm{d}x}$$
Since the system can be in any energy eigenstate within an interval of $\delta E$, we define the generalized force for the system as the expectation value of the above expression:
$$X = -\left\langle \frac{\mathrm{d}E_r}{\mathrm{d}x} \right\rangle$$
To evaluate the average, we partition the $\Omega(E)$ energy eigenstates by counting how many of them have a value for $\frac{\mathrm{d}E_r}{\mathrm{d}x}$ within a range between $Y$ and $Y + \delta Y$. Calling this number $\Omega_Y(E)$, we have:
$$\Omega(E) = \sum_Y \Omega_Y(E)$$
The average defining the generalized force can now be written:
$$X = -\frac{1}{\Omega(E)} \sum_Y Y\, \Omega_Y(E)$$
We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then $\Omega(E)$ will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between $E$ and $E + \delta E$. Let's focus again on the energy eigenstates for which $\frac{\mathrm{d}E_r}{\mathrm{d}x}$ lies within the range between $Y$ and $Y + \delta Y$. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are
$$N_Y(E) = \frac{\Omega_Y(E)}{\delta E}\, Y\, \mathrm{d}x$$
such energy eigenstates. If $Y\,\mathrm{d}x \leq \delta E$, all these energy eigenstates will move into the range between $E$ and $E + \delta E$ and contribute to an increase in $\Omega$. The number of energy eigenstates that move from below $E + \delta E$ to above $E + \delta E$ is given by $N_Y(E + \delta E)$. The difference
$$N_Y(E) - N_Y(E + \delta E)$$
is thus the net contribution to the increase in $\Omega$. Note that if Y dx is larger than $\delta E$ there will be energy eigenstates that move from below $E$ to above $E + \delta E$. They are counted in both $N_Y(E)$ and $N_Y(E + \delta E)$, therefore the above expression is also valid in that case.
Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:
$$\left(\frac{\partial \Omega}{\partial x}\right)_E = -\sum_Y Y \left(\frac{\partial \Omega_Y}{\partial E}\right)_x = \left(\frac{\partial (\Omega X)}{\partial E}\right)_x$$
The logarithmic derivative of $\Omega$ with respect to x is thus given by:
$$\left(\frac{\partial \log\Omega}{\partial x}\right)_E = \beta X + \left(\frac{\partial X}{\partial E}\right)_x$$
The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:
$$\left(\frac{\partial \log\Omega}{\partial x}\right)_E = \beta X$$
Combining this with
$$\left(\frac{\partial \log\Omega}{\partial E}\right)_x = \beta$$
gives:
$$\mathrm{d}\log\Omega = \beta X\,\mathrm{d}x + \beta\,\mathrm{d}E$$
which, using $S = k_{\mathrm{B}} \log\Omega$ and $\beta = 1/(k_{\mathrm{B}} T)$, we can write as:
$$\mathrm{d}S = \frac{\mathrm{d}E + X\,\mathrm{d}x}{T}$$
Derivation of statistical mechanical principles from the fundamental thermodynamic relation
It has been shown that the fundamental thermodynamic relation, together with three additional postulates, is sufficient to build the theory of statistical mechanics without the equal a priori probability postulate.
For example, in order to derive the Boltzmann distribution, we assume the probability density of microstate satisfies . The normalization factor (partition function) is therefore
The entropy is therefore given by
If we change the temperature by $\mathrm{d}T$ while keeping the volume of the system constant, the change of entropy satisfies
where
Considering that
we have
From the fundamental thermodynamic relation, we have
Since we kept the volume constant when perturbing the temperature, the energy eigenvalues remain fixed. Combining the equations above, we have
Physical laws should be universal, i.e., the above equation must hold for arbitrary systems, and the only way for this to happen is
That is
It has been shown that the third postulate in the above formalism can be replaced by the following:
However, the mathematical derivation will be much more complicated.
References
External links
The Fundamental Thermodynamic Relation
Thermodynamics
Statistical mechanics
Thermodynamic equations | Fundamental thermodynamic relation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,698 | [
"Thermodynamic equations",
"Equations of physics",
"Thermodynamics",
"Statistical mechanics",
"Dynamical systems"
] |
5,529,962 | https://en.wikipedia.org/wiki/Least-concern%20species | A least-concern species is a species that has been evaluated and categorized by the International Union for Conservation of Nature (IUCN) as not being a focus of wildlife conservation because the specific species is still plentiful in the wild. They do not qualify as threatened, near threatened, or (before 2001) conservation dependent.
Species cannot be assigned the "Least Concern" category unless they have had their population status evaluated. That is, adequate information is needed to make a direct, or indirect, assessment of their risk of extinction based on their distribution or population status.
Evaluation
Since 2001 the category has had the abbreviation "LC", following the IUCN 2001 Categories & Criteria (version 3.1). Before 2001 "least concern" was a subcategory of the "Lower Risk" category and assigned the code "LR/lc" or lc. Around 20% of least concern taxa (3261 of 15,636) in the IUCN database still use the code "LR/lc", which indicates they have not been re-evaluated since 2000.
Number of species
While "least concern" is not considered a red listed category by the IUCN, the 2006 IUCN Red List still assigns the category to 15,636 taxa. The number of animal species listed in this category totals 14,033 (which includes several undescribed species such as a frog from the genus Philautus). There are also 101 animal subspecies listed and 1500 plant taxa (1410 species, 55 subspecies, and 35 varieties). No fungi or protista have the classification, though only four species in those kingdoms have been evaluated by the IUCN. Humans were formally assessed as a species of least concern in 2008.
List of LC species
See also
Conservation status
References
External links
List of Least Concern species as identified by the IUCN Red List of Threatened Species
Biota by conservation status
IUCN Red List | Least-concern species | [
"Biology"
] | 382 | [
"Biota by conservation status",
"Biodiversity"
] |
5,530,014 | https://en.wikipedia.org/wiki/Oncofetal%20antigen | Oncofetal antigens are proteins which are typically present only during fetal development but are found in adults with certain kinds of cancer. These proteins are often measurable in the blood of individuals with cancer and may be used to both diagnose and follow treatment of the tumors. One example of an oncofetal antigen is alpha-fetoprotein, which is produced by hepatocellular carcinoma and some germ cell tumors. Another example is carcinoembryonic antigen, which is elevated in people with colon cancer and other tumors. Other oncofetal antigens are trophoblast glycoprotein precursor and immature laminin receptor protein (also known as oncofetal antigen protein). Oncofetal antigens are promising targets for vaccination against several types of cancers.
External links
Entrez protein entry for trophoblast glycoprotein precursor
References
Proteins | Oncofetal antigen | [
"Chemistry"
] | 193 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
5,530,147 | https://en.wikipedia.org/wiki/Herbig%20Ae/Be%20star | A Herbig Ae/Be star (HAeBe) is a pre-main-sequence star – a young () star of spectral types A or B. These stars are still embedded in gas-dust envelopes and are sometimes accompanied by circumstellar disks. Hydrogen and calcium emission lines are observed in their spectra. They are 2-8 Solar mass () objects, still existing in the star formation (gravitational contraction) stage and approaching the main sequence (i.e. they are not burning hydrogen in their center).
Description
In the Hertzsprung–Russell diagram, Herbig Ae/Be stars are located to the right of the main sequence. They are named after the American astronomer George Herbig, who first distinguished them from other stars in 1960.
The original Herbig criteria were:
Spectral type earlier than F0 (in order to exclude T Tauri stars),
Balmer emission lines in the stellar spectrum (in order to be similar to T Tauri stars),
Projected location within the boundaries of a dark interstellar cloud (in order to select really young stars near their birthplaces),
Illumination of a nearby bright reflection nebula (in order to guarantee physical link with star formation region).
There are now several known isolated Herbig Ae/Be stars (i.e. not connected with dark clouds or nebulae). Thus the most reliable criteria are now:
Spectral type earlier than F0,
Balmer emission lines in the stellar spectrum,
Infrared radiation excess (in comparison with normal stars) due to circumstellar dust (in order to distinguish from classical Be stars, which have infrared excess due to free-free emission).
Sometimes Herbig Ae/Be stars show significant brightness variability, believed to be due to clumps (protoplanets and planetesimals) in the circumstellar disk. In the lowest-brightness stage the radiation from the star becomes bluer and linearly polarized (when a clump obscures the direct starlight, the relative contribution of light scattered from the disk increases – the same effect as the blue color of our sky).
Analogs of Herbig Ae/Be stars in the smaller mass range (below about 2 solar masses) – F, G, K, M spectral type pre-main-sequence stars – are called T Tauri stars. More massive (above about 8 solar masses) stars in the pre-main-sequence stage are not observed, because they evolve very quickly: by the time they become visible (i.e. by the time the surrounding circumstellar gas and dust cloud disperses), the hydrogen in their centers is already burning and they are main-sequence objects.
Planets
Planets around Herbig Ae/Be stars include:
HD 95086 b around an A-type star
HD 100546 b around a B-type star
Gallery
References
Sources
Thé P.S., de Winter D., Pérez M.R. (1994), A New Catalogue of Members and Candidate Members of the Herbig Ae/Be Stellar Group, Astronomy and Astrophysics Supplement Series, Vol. 104
Pérez M.R., Grady C.A. (1997), Observational Overview of Young Intermediate-Mass Objects: Herbig Ae/Be Stars, Space Science Reviews, Vol 82, p. 407-450
Waters L. B. F. M., Waelkens, C. (1998), HERBIG Ae/Be STARS, Annual Review of Astronomy and Astrophysics, Vol. 36, p. 233-266
Herbig Ae/Be stars
Star types
Star formation
1960 in science | Herbig Ae/Be star | [
"Astronomy"
] | 695 | [
"Star types",
"Astronomical classification systems"
] |
5,530,265 | https://en.wikipedia.org/wiki/TechnoSphere%20%28virtual%20environment%29 | TechnoSphere was an online digital environment launched on September 1, 1995 and hosted on a computer at a UK university. Created by Jane Prophet and Dr. Gordon Selley, TechnoSphere was a place where users from around the globe could create creatures and release them into the 3D environment, described by the creators as a "digital ecology." Earlier incarnations of TechnoSphere did not have the advantage of web-accessible 3D graphics, but was still governed by chaos theory and similar algorithms that determined each creature's unique behavior based on their components and interactions with each other and their environment.
The online program was one of many digital artificial life simulations that evolved as the World Wide Web began to grow. Many museums and classrooms found the tool to be a valuable complement to learning material on natural selection and ecosystems. The experiment operated online until 2002. It was relaunched on January 15, 2007, but was offline again as of November 2012.
Description
TechnoSphere was a real-time, 3D simulation of an environment that was populated by virtual creatures. Users across the globe had the capability to create their own creatures through a website. TechnoSphere III, one of many incarnations of the original design, used an artificial life program and fractal landscapes, which were governed by a complex set of rules and algorithms that determined how the virtual ecosystem reacted. The program was capable of modeling such concepts as simple evolution and carrying capacity. Despite limited available creature designs, no two would ever behave in the same way, due to chance interactions with its environment and other creatures.
Physically, the virtual landscape of TechnoSphere consisted of 16 km2 of terrain. It was capable of supporting approximately 4,000 creatures, though other sources suggest that as many as 20,000 creatures typically would coexist in the virtual environment at one time. After the relaunch, it was explicitly stated that the software limited the number of creatures at 200,000. Because each creature's behavior was unique, no single event could have been predicted, though some significant patterns developed. For example, even though there was no explicit flocking algorithm written into the program, creatures could be found organizing themselves into groups, most likely impelled by urges to mate and eat. The programs that supported the website were scalable, and could be modified to support a larger or smaller community of creatures.
Creatures
Users accessing the site were able to create their own artificial life forms, building carnivores or herbivores from a select few component parts (heads, bodies, eyes, and wheels). Their "digital DNA" was linked to each component and the completed creature's attributes (speed, visual perception, rate of digestion, etc.) was determined by the combination of each feature's strengths and weaknesses—their "fitness for survival." Once a creature design was finished, users would name their digital creature, tag it with their e-mail address, and enter it into the digital environment.
There they chased or evaded each other, ate, grew, and mated. They also produced offspring, which were variants of the parents, sometimes incorporating aspects of both parents and other times favoring one parent creature's attributes over the others. General behavior patterns had emerged, but it was difficult to predict what was going to happen based solely on a creature's design. The one thing all TechnoSphere creatures did have in common was that they would all eventually die.
There was only one gender in TechnoSphere, so the creature that initiated mating was the parent that ended up carrying and caring for the offspring. Creature behavior was directed by a set of algorithms called Creature Comforts, designed by Julian Saunderson. It dictated, for example, that mating behavior (recombination of digital DNA) could only be initiated if both creatures' hunger was at least 50% satiated.
When significant events occurred in the TechnoSphere, a user's creature would send brief email messages "home." Users were also able to visit the website and view 2D snapshots of their creature, check family trees, "world" statistics, and search for other creatures and their users.
Popularity
One report described the project's popularity by citing that the online version had attracted over 100,000 users, who had created a growing total of 3,286,148 creatures. Over the years in which the website was operating, the growing popularity necessitated updates to the server software and hardware, causing website downtime and often slow response times.
Museums and education
Many museums and educators found the digital ecology interesting, and some teachers even used TechnoSphere as a teaching tool. The technological innovations and digital images produced by the project were of such interest that temporary installations were mounted at several museums, including the National Museum of Photography, Film and Television (now the National Media Museum) (Bradford, UK), Casula Powerhouse Arts Centre (Sydney, Australia), and the Donald R. and Joan F. Beall Center for Art and Technology at the University of California, Irvine (Irvine, California, US). Museum visitors created creatures using touchscreen terminals and then released them immediately into the TechnoSphere. Once in the digital environment, the creatures could be observed within the world on a series of large projection screens, further expanding the popularity of the project.
See also
Digital organism simulator
References
External links
June 25, 2003 internet archived version of the website
Association for Heritage Interpretation - Prophet, Jane. TechnoSphere.
"An A-Life Ecology on the Internet" - Gordon Selley. 4D Dynamics Conference, 20–21 September 1995. De Montford University, England.
"Digital Beings" - Jane Prophet. European Media Art Festival. Kunsthalle, Osnabruck, Germany September 9, 1995.
"TechnoSphere: a case study in networked collaboration" - Jane Prophet. Agents of Change: the photographers guide to the future. Fifth National Photography Conference. 22–24 September 1995.
"Get A-Life" - Jane Prophet. Virtual Futures Conference. Warwick University 27 May 1995.
"Report on the Artificial Life Environment" - A report on TechnoSphere version I by Julian Saunderson (edited by Jane Prophet).
"Get-A-Life Munchy Morsels" - A report on TechnoSphere version II by Rycharde Hawkes.
Artificial life models
Science websites
Internet properties established in 1995 | TechnoSphere (virtual environment) | [
"Biology"
] | 1,272 | [
"Artificial life models",
"Biological models"
] |
5,530,574 | https://en.wikipedia.org/wiki/Thermal%20transmittance | Thermal transmittance is the rate of transfer of heat through matter. The thermal transmittance of a material (such as insulation or concrete) or an assembly (such as a wall or window) is expressed as a U-value. The thermal insulance of a structure is the reciprocal of its thermal transmittance.
U-value
Although the concept of U-value (or U-factor) is universal, U-values can be expressed in different units. In most countries, U-value is expressed in SI units, as watts per square metre-kelvin:
W/(m2⋅K)
In the United States, U-value is expressed as British thermal units (Btu) per hour-square feet-degrees Fahrenheit:
Btu/(h⋅ft2⋅°F)
Within this article, U-values are expressed in SI unless otherwise noted. To convert from SI to US customary values, divide by 5.678.
Well-insulated parts of a building have a low thermal transmittance whereas poorly insulated parts of a building have a high thermal transmittance. Losses due to thermal radiation, thermal convection and thermal conduction are taken into account in the U-value. Although it has the same units as heat transfer coefficient, thermal transmittance is different in that the heat transfer coefficient is used to solely describe heat transfer in fluids while thermal transmittance is used to simplify an equation that has several different forms of thermal resistances.
It is described by the equation:
Φ = A × U × (T1 - T2)
where Φ is the heat transfer in watts, U is the thermal transmittance, T1 is the temperature on one side of the structure, T2 is the temperature on the other side of the structure and A is the area in square metres.
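In code this is a one-line computation; the example below plugs in the article's U-value for a poorly insulated wall, with made-up geometry and temperatures:

```python
def heat_transfer_watts(area_m2: float, u_value: float, t_inside: float, t_outside: float) -> float:
    """Phi = A * U * (T1 - T2): steady-state heat flow through a building element."""
    return area_m2 * u_value * (t_inside - t_outside)

# 10 m^2 of poorly insulated wall (U = 2.0 W/(m2.K)), 20 C inside, 0 C outside
print(heat_transfer_watts(10.0, 2.0, 20.0, 0.0), "W")   # -> 400.0 W
```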
Thermal transmittances of most walls and roofs can be calculated using ISO 6946, unless there is metal bridging the insulation in which case it can be calculated using ISO 10211. For most ground floors it can be calculated using ISO 13370. For most windows the thermal transmittance can be calculated using ISO 10077 or ISO 15099. ISO 9869 describes how to measure the thermal transmittance of a structure experimentally.
Choice of materials and quality of installation has a critical impact on the window insulation results. The frame and double sealing of the window system are the actual weak points in the window insulation.
Typical thermal transmittance values for common building structures are as follows:
Single glazing: 5.7 W/(m2⋅K)
Single glazed windows, allowing for frames: 4.5 W/(m2⋅K)
Double glazed windows, allowing for frames: 3.3 W/(m2⋅K)
Double glazed windows with advanced coatings: 2.2 W/(m2⋅K)
Double glazed windows with advanced coatings and frames: 1.2 W/(m2⋅K)
Triple glazed windows, allowing for frames: 1.8 W/(m2⋅K)
Triple glazed windows, with advanced coatings and frames: 0.8 W/(m2⋅K)
Well-insulated roofs: 0.10 W/(m2⋅K)
Poorly insulated roofs: 1.0 W/(m2⋅K)
Well-insulated walls: 0.15 W/(m2⋅K)
Poorly insulated walls: 2 W/(m2⋅K)
Well-insulated floors: 0.2 W/(m2⋅K)
Poorly insulated floors: 1.0 W/(m2⋅K)
In practice the thermal transmittance is strongly affected by the quality of workmanship; if insulation is fitted poorly, the thermal transmittance can be considerably higher than if it is fitted well.
Calculating thermal transmittance
When calculating a thermal transmittance it is helpful to consider the building's construction in terms of its different layers – for instance, for a cavity wall, the outer brickwork, the cavity, the inner blockwork and the plaster, together with the internal and external surface resistances – and to sum the thermal insulances of those layers.
In one such example the total insulance is 1.64 K⋅m2/W. The thermal transmittance of the structure is the reciprocal of the total thermal insulance. The thermal transmittance of this structure is therefore 0.61 W/(m2⋅K).
(Note that this example is simplified as it does not take into account any metal connectors, air gaps interrupting the insulation or mortar joints between the bricks and concrete blocks.)
It is possible to allow for mortar joints in calculating the thermal transmittance of a wall. Since the mortar joints allow heat to pass more easily than the light concrete blocks, the mortar is said to "bridge" the light concrete blocks.
The average thermal insulance of the "bridged" layer depends upon the fraction of the area taken up by the mortar in comparison with the fraction of the area taken up by the light concrete blocks. To calculate thermal transmittance when there are "bridging" mortar joints it is necessary to calculate two quantities, known as Rmax and Rmin.
Rmax can be thought of as the total thermal insulance obtained if it is assumed that there is no lateral flow of heat and Rmin can be thought of as the total thermal insulance obtained if it is assumed that there is no resistance to the lateral flow of heat.
The U-value of the above construction is approximately equal to 2 / (Rmax + Rmin)
Further information about how to deal with "bridging" is given in ISO 6946.
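As an illustration, a minimal sketch of the Rmax/Rmin method follows. The layer resistances and mortar fraction are hypothetical placeholders (the worked table is not reproduced here); only the structure of the computation and the final approximation U ≈ 2 / (Rmax + Rmin) come from the text above:

```python
# Upper/lower-limit ("Rmax"/"Rmin") sketch for one bridged layer. All numbers
# are hypothetical placeholders, not values from the article's worked example.
f_mortar = 0.3                 # area fraction of the bridging mortar
R_common = 0.80                # K.m2/W: all unbridged layers plus surface resistances
R_block  = 0.90                # K.m2/W: light concrete block portion of the bridged layer
R_mortar = 0.12                # K.m2/W: mortar portion of the bridged layer

# Rmax assumes no lateral heat flow: combine the two full paths in parallel.
R_max = 1.0 / ((1.0 - f_mortar) / (R_common + R_block)
               + f_mortar / (R_common + R_mortar))

# Rmin assumes unresisted lateral flow: average the bridged layer itself in parallel.
R_bridged = 1.0 / ((1.0 - f_mortar) / R_block + f_mortar / R_mortar)
R_min = R_common + R_bridged

U = 2.0 / (R_max + R_min)      # the approximation quoted above
print(f"Rmax = {R_max:.2f}, Rmin = {R_min:.2f}, U = {U:.2f} W/(m2.K)")
```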
Measuring thermal transmittance
Whilst calculation of thermal transmittance can readily be carried out with the help of software which is compliant with ISO 6946, a thermal transmittance calculation does not fully take workmanship into account and it does not allow for adventitious circulation of air between, through and around sections of insulation. To take the effects of workmanship-related factors fully into account it is necessary to carry out a thermal transmittance measurement.
ISO 9869 describes how to measure the thermal transmittance of a roof or a wall by using a heat flux sensor. These heat flux meters usually consist of thermopiles which provide an electrical signal in direct proportion to the heat flux. Typically they take the form of thin plates that must be fixed firmly to the roof or wall under test in order to ensure good thermal contact. When the heat flux is monitored over a sufficiently long time, the thermal transmittance can be calculated by dividing the average heat flux by the average difference in temperature between the inside and outside of the building (a code sketch of this averaging follows the list below). For most wall and roof constructions the heat flux meter needs to monitor heat flows (and internal and external temperatures) continuously for a period of 72 hours to conform to the ISO 9869 standard.
Generally, thermal transmittance measurements are most accurate when:
The difference in temperature between the inside and outside of the building is sufficiently large.
The weather is cloudy rather than sunny (this makes accurate measurement of temperature easier).
There is good thermal contact between the heat flux meter and the wall or roof being tested.
The monitoring of heat flow and temperatures is carried out over at least 72 hours.
Different spots on a building element are measured or a thermographic camera is used to secure the homogeneity of the building element.
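A sketch of the averaging described above, with synthetic data standing in for a real 72-hour log:

```python
import numpy as np

def u_value_from_logging(heat_flux_w_m2, t_inside_c, t_outside_c):
    """ISO 9869-style average method: U = mean(flux) / mean(temperature difference),
    taken over the whole logging period (at least 72 h of simultaneous samples)."""
    q = np.asarray(heat_flux_w_m2, dtype=float)
    dT = np.asarray(t_inside_c, dtype=float) - np.asarray(t_outside_c, dtype=float)
    return q.mean() / dT.mean()

# Synthetic 72 hourly samples around a wall of true U = 0.35 W/(m2.K)
rng = np.random.default_rng(0)
dT = 12.0 + rng.normal(0.0, 1.5, 72)        # indoor-outdoor difference, K
q = 0.35 * dT + rng.normal(0.0, 0.3, 72)    # measured heat flux, W/m2
print(f"U = {u_value_from_logging(q, dT + 8.0, np.full(72, 8.0)):.2f} W/(m2.K)")
```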
When convection currents play a part in transmitting heat across a building component, the thermal transmittance increases as the temperature difference increases. For example, the optimum gap between the panes of a double-glazed window is smaller for a large difference between internal and external temperature than for a small one.
The inherent thermal transmittance of materials can also vary with temperature; the mechanisms involved are complex, and the transmittance may increase or decrease as the temperature increases.
References
Thermodynamics
Building insulation materials | Thermal transmittance | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,579 | [
"Thermodynamics",
"Dynamical systems"
] |
16,047,618 | https://en.wikipedia.org/wiki/P-adic%20valuation | In number theory, the valuation or -adic order of an integer is the exponent of the highest power of the prime number that divides .
It is denoted .
Equivalently, is the exponent to which appears in the prime factorization of .
The -adic valuation is a valuation and gives rise to an analogue of the usual absolute value.
Whereas the completion of the rational numbers with respect to the usual absolute value results in the real numbers , the completion of the rational numbers with respect to the -adic absolute value results in the numbers .
Definition and properties
Let $p$ be a prime number.
Integers
The p-adic valuation of an integer $n$ is defined to be
$$\nu_p(n) = \begin{cases} \max\{k \in \mathbb{N}_0 : p^k \mid n\} & \text{if } n \neq 0, \\ \infty & \text{if } n = 0, \end{cases}$$
where $\mathbb{N}_0$ denotes the set of natural numbers (including zero) and $p^k \mid n$ denotes divisibility of $n$ by $p^k$. In particular, $\nu_p$ is a function $\nu_p \colon \mathbb{Z} \to \mathbb{N}_0 \cup \{\infty\}$.
For example, $\nu_2(12) = 2$, $\nu_3(12) = 1$, and $\nu_5(12) = 0$, since $12 = 2^2 \cdot 3$.
The notation $p^k \parallel n$ is sometimes used to mean $p^k \mid n$ but $p^{k+1} \nmid n$, i.e. $\nu_p(n) = k$.
If $n$ is a positive integer, then
$$\nu_p(n) \leq \log_p n;$$
this follows directly from $p^{\nu_p(n)} \leq n$.
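The definition translates directly into a short routine that strips factors of p; this is an illustrative sketch rather than library code:

```python
def nu(p: int, n: int) -> float:
    """p-adic valuation: the exponent of the largest power of the prime p dividing n.
    Returns infinity for n = 0, matching the convention above."""
    if n == 0:
        return float("inf")
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

assert nu(2, 12) == 2 and nu(3, 12) == 1 and nu(5, 12) == 0   # 12 = 2^2 * 3
```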
Rational numbers
The p-adic valuation can be extended to the rational numbers as the function
$$\nu_p \colon \mathbb{Q} \to \mathbb{Z} \cup \{\infty\}$$
defined by
$$\nu_p\!\left(\frac{a}{b}\right) = \nu_p(a) - \nu_p(b).$$
For example, $\nu_2\!\left(\tfrac{9}{8}\right) = -3$ and $\nu_3\!\left(\tfrac{9}{8}\right) = 2$, since $\tfrac{9}{8} = 3^2 \cdot 2^{-3}$.
Some properties are:
$$\nu_p(m \cdot n) = \nu_p(m) + \nu_p(n), \qquad \nu_p(m + n) \geq \min\{\nu_p(m), \nu_p(n)\}.$$
Moreover, if $\nu_p(m) \neq \nu_p(n)$, then
$$\nu_p(m + n) = \min\{\nu_p(m), \nu_p(n)\},$$
where $\min$ is the minimum (i.e. the smaller of the two).
Formula for the p-adic valuation of integers
Legendre's formula shows that
$$\nu_p(n!) = \sum_{i=1}^{\infty} \left\lfloor \frac{n}{p^i} \right\rfloor.$$
For any positive integer $n$, $n = \frac{n!}{(n-1)!}$ and so
$$\nu_p(n) = \nu_p(n!) - \nu_p((n-1)!).$$
Therefore,
$$\nu_p(n) = \sum_{i=1}^{\infty} \left( \left\lfloor \frac{n}{p^i} \right\rfloor - \left\lfloor \frac{n-1}{p^i} \right\rfloor \right).$$
This infinite sum can be reduced to
$$\nu_p(n) = \sum_{i=1}^{\lfloor \log_p n \rfloor} \left( \left\lfloor \frac{n}{p^i} \right\rfloor - \left\lfloor \frac{n-1}{p^i} \right\rfloor \right).$$
This formula can be extended to negative integer values $n$ by taking the valuation of $|n|$, since $\nu_p(-n) = \nu_p(n)$.
p-adic absolute value
The p-adic absolute value (or p-adic norm, though not a norm in the sense of analysis) on $\mathbb{Q}$ is the function
$$|\cdot|_p \colon \mathbb{Q} \to \mathbb{R}_{\geq 0}$$
defined by
$$|r|_p = p^{-\nu_p(r)}.$$
Thereby, $|0|_p = p^{-\infty} = 0$ for all $p$, and,
for example, $|{-12}|_2 = 2^{-2} = \tfrac{1}{4}$ and $\left|\tfrac{9}{8}\right|_3 = 3^{-2} = \tfrac{1}{9}.$
The p-adic absolute value satisfies the following properties:
Non-negativity: $|a|_p \geq 0$
Positive-definiteness: $|a|_p = 0 \iff a = 0$
Multiplicativity: $|ab|_p = |a|_p \, |b|_p$
Non-Archimedean: $|a + b|_p \leq \max\left(|a|_p, |b|_p\right)$
From the multiplicativity it follows that $|1|_p = 1 = |{-1}|_p$ for the roots of unity $1$ and $-1$, and consequently also $|{-a}|_p = |a|_p$.
The subadditivity $|a + b|_p \leq |a|_p + |b|_p$ follows from the non-Archimedean triangle inequality $|a + b|_p \leq \max\left(|a|_p, |b|_p\right)$.
The choice of base $p$ in the exponentiation $p^{-\nu_p(r)}$ makes no difference for most of the properties, but supports the product formula:
$$|x|_\infty \cdot \prod_p |x|_p = 1 \quad \text{for } 0 \neq x \in \mathbb{Q},$$
where the product is taken over all primes $p$ and the usual absolute value is denoted $|x|_\infty$. This follows from simply taking the prime factorization: each prime power factor contributes its reciprocal to its p-adic absolute value, and then the usual Archimedean absolute value cancels all of them.
A metric space can be formed on the set $\mathbb{Q}$ with a (non-Archimedean, translation-invariant) metric
$$d \colon \mathbb{Q} \times \mathbb{Q} \to \mathbb{R}_{\geq 0}$$
defined by
$$d(x, y) = |x - y|_p.$$
The completion of $\mathbb{Q}$ with respect to this metric leads to the set $\mathbb{Q}_p$ of p-adic numbers.
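As an illustration of the absolute value and the product formula, the following sketch (with ad hoc helper names) computes $|x|_p$ for a rational x and checks that the product of the usual absolute value with the absolute values at all primes dividing the numerator or denominator equals 1:

```python
from fractions import Fraction

def nu_q(p: int, x: Fraction) -> int:
    """p-adic valuation on the nonzero rationals: nu(a/b) = nu(a) - nu(b)."""
    assert x != 0
    def nu_int(n: int) -> int:
        k = 0
        while n % p == 0:
            n //= p
            k += 1
        return k
    return nu_int(x.numerator) - nu_int(x.denominator)

def abs_p(p: int, x: Fraction) -> Fraction:
    """p-adic absolute value |x|_p = p^(-nu_p(x)), with |0|_p = 0."""
    return Fraction(0) if x == 0 else Fraction(1, p) ** nu_q(p, x)

# Product formula: |x| * prod_p |x|_p = 1 for nonzero rational x.
x = Fraction(-140, 297)               # -140/297 = -(2^2 * 5 * 7) / (3^3 * 11)
total = abs(x)
for p in (2, 3, 5, 7, 11):            # every other prime contributes a factor of 1
    total *= abs_p(p, x)
assert total == 1
```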
See also
p-adic number
Valuation (algebra)
Archimedean property
Multiplicity (mathematics)
Ostrowski's theorem
Legendre's formula, for the p-adic valuation of factorials
Lifting-the-exponent lemma, for the p-adic valuation of expressions of the form $a^n \pm b^n$
References
Algebraic number theory
p-adic numbers | P-adic valuation | [
"Mathematics"
] | 620 | [
"P-adic numbers",
"Algebraic number theory",
"Number theory"
] |
16,049,023 | https://en.wikipedia.org/wiki/RKM%20code | The RKM code, also referred to as "letter and numeral code for resistance and capacitance values and tolerances", "letter and digit code for resistance and capacitance values and tolerances", or informally as "R notation" is a notation to specify resistor and capacitor values defined in the international standard IEC 60062 (formerly IEC 62) since 1952. Other standards including DIN 40825 (1973), BS 1852 (1975), IS 8186 (1976), and EN 60062 (1993) have also accepted it. The updated IEC 60062:2016, amended in 2019, comprises the most recent release of the standard.
Overview
Originally meant also as part marking code, this shorthand notation is widely used in electrical engineering to denote the values of resistors and capacitors in circuit diagrams and in the production of electronic circuits (for example in bills of material and in silk screens). This method avoids overlooking the decimal separator, which may not be rendered reliably on components or when duplicating documents.
The standards also define a color code for fixed resistors.
Part value code
For brevity, the notation omits to always specify the unit (ohm or farad) explicitly and instead relies on implicit knowledge raised from the usage of specific letters either only for resistors or for capacitors, the case used (uppercase letters are typically used for resistors, lowercase letters for capacitors), a part's appearance, and the context.
The notation also avoids using a decimal separator and replaces it by a letter associated with the prefix symbol for the particular value.
This is not only for brevity (for example when printed on the part or PCB), but also to circumvent the problem that decimal separators tend to "disappear" when photocopying printed circuit diagrams.
Another advantage is the easier sortability of values which helps to optimize the bill of materials by combining similar part values to improve maintainability and reduce costs.
The code letters are loosely related to the corresponding SI prefix, but there are several exceptions, where the capitalization differs or alternative letters are used.
For example, 8K2 indicates a resistor value of 8.2 kΩ. Additional zeros imply tighter tolerance, for example 15M0.
When the value can be expressed without the need for a prefix, an "R" or "F" is used instead of the decimal separator. For example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω.
For resistances, the standard dictates the use of the uppercase letters L (for 10−3), R (for 100 = 1), K (for 103), M (for 106), and G (for 109) to be used instead of the decimal point.
The usage of the letter R instead of the SI unit symbol Ω for ohms stems from the fact that the Greek letter Ω is absent from most older character encodings (though it is present in the now-ubiquitous Unicode) and therefore is sometimes impossible to reproduce, in particular in some CAD/CAM environments. The letter R was chosen because visually it loosely resembles the Ω glyph, and also because it works nicely as a mnemonic for resistance in many languages.
The letters G and T weren't part of the first issue of the standard, which pre-dates the introduction of the SI system (hence the name "RKM code"), but were added after the adoption of the corresponding SI prefixes.
The introduction of the letter L in more recent issues of the standard (instead of an SI prefix m for milli) is justified to maintain the rule of only using uppercase letters for resistances (the otherwise resulting M was already in use for mega).
Similarly, the standard prescribes the following lowercase letters for capacitances to be used instead of the decimal point: p (for 10^−12), n (for 10^−9), μ (for 10^−6), m (for 10^−3), but uppercase F (for 10^0 = 1) for farad.
The letters p and n weren't part of the first issue of the standard, but were added after the adoption of the corresponding SI prefixes.
In cases where the Greek letter μ is not available, the standard allows it to be replaced by u (or U, when only uppercase letters are available). This usage of u instead of μ is also in line with ISO 2955 (1974, 1983), DIN 66030 (Vornorm 1973; 1980, 2002), BS 6430 (1983) and Health Level 7 (HL7), which allow the prefix μ to be substituted by the letter u (or U) in circumstances in which only the Latin alphabet is available.
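Going the other way, a short parser shows how the letter simultaneously scales the value and marks the decimal position. This is a minimal sketch restricted to the letter sets named above (with 'u' accepted in place of 'μ'); it is not an implementation of IEC 60062 itself:

```python
import re

RESISTOR_PREFIXES = {"L": 1e-3, "R": 1.0, "K": 1e3, "M": 1e6, "G": 1e9, "T": 1e12}
CAPACITOR_PREFIXES = {"p": 1e-12, "n": 1e-9, "u": 1e-6, "μ": 1e-6, "m": 1e-3, "F": 1.0}

def parse_rkm(code: str, prefixes: dict) -> float:
    """Decode e.g. '8K2' -> 8200.0 (ohm) or '4n7' -> 4.7e-09 (farad)."""
    match = re.fullmatch(r"(\d*)([%s])(\d*)" % "".join(prefixes), code)
    if not match:
        raise ValueError(f"not an RKM code: {code!r}")
    whole, letter, frac = match.groups()
    # The prefix letter stands where the decimal point would be.
    return float(f"{whole or 0}.{frac or 0}") * prefixes[letter]

print(parse_rkm("8K2", RESISTOR_PREFIXES))    # 8200.0
print(parse_rkm("R47", RESISTOR_PREFIXES))    # 0.47
print(parse_rkm("4n7", CAPACITOR_PREFIXES))   # 4.7e-09
```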
Several resistor manufacturers use the RKM code as part of their manufacturer part numbers (MPNs).
Similar codes
Though non-standard, some manufacturers also use the RKM code to mark inductors with "R" indicating the decimal point in microhenry (e.g. 4R7 for 4.7 μH).
A similar non-standard notation using the unit symbol instead of a decimal separator is sometimes used to indicate voltages (i.e. 0V8 for 0.8 V, 1V8 for 1.8 V, 3V3 for 3.3 V or 5V0 for 5.0 V) in contexts where a decimal separator would be inappropriate (e.g. in signal or pin names, in file names, or in labels or subscripts).
Tolerance code
Letter code for resistance and capacitance tolerances; the commonly used symmetrical-tolerance letters include B = ±0.1%, C = ±0.25%, D = ±0.5%, F = ±1%, G = ±2%, J = ±5%, K = ±10% and M = ±20%.
Before the introduction of the RKM code, some of the letters for symmetrical tolerances (viz. G, J, K, M) were already used in US military contexts following the American War Standard (AWS) and Joint Army-Navy Specifications (JAN) since the mid-1940s.
Temperature coefficient code
Letter codes for the temperature coefficient of resistance (TCR):
Production date codes
Twenty-year cycle code
First character: Year of production in twenty-year cycle
A = 2030, 2010, 1990, 1970
B = 2031, 2011, 1991, 1971
C = 2032, 2012, 1992, 1972
D = 2033, 2013, 1993, 1973
E = 2034, 2014, 1994, 1974
F = 2035, 2015, 1995, 1975
H = 2036, 2016, 1996, 1976
J = 2037, 2017, 1997, 1977
K = 2038, 2018, 1998, 1978
L = 2039, 2019, 1999, 1979
M = 2020, 2000, 1980
N = 2021, 2001, 1981
P = 2022, 2002, 1982
R = 2023, 2003, 1983
S = 2024, 2004, 1984
T = 2025, 2005, 1985
U = 2026, 2006, 1986
V = 2027, 2007, 1987
W = 2028, 2008, 1988
X = 2029, 2009, 1989
Second character: Month of production
1 to 9 = January to September
O = October
N = November
D = December
Example: J8 = August 2017 (or August 1997)
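For illustration, the twenty-year code can be decoded mechanically. The helper below is hypothetical, not taken from the standard; note that the year letters skip G, I, O and Q, and that a code only narrows the year to one candidate per twenty-year cycle:

```python
# Year letters in order; A corresponds to 1970 + 20n for integer n.
YEAR_LETTERS = "ABCDEFHJKLMNPRSTUVWX"          # 20 letters, G/I/O/Q skipped
MONTHS = {**{str(m): m for m in range(1, 10)}, "O": 10, "N": 11, "D": 12}

def decode_twenty_year(code: str, cycles=(1970, 1990, 2010, 2030)):
    """Return all (year, month) candidates; context must pick the right one.
    Candidates outside the window tabulated above should be discarded."""
    offset = YEAR_LETTERS.index(code[0])
    return [(base + offset, MONTHS[code[1]]) for base in cycles]

print(decode_twenty_year("J8"))   # [(1977, 8), (1997, 8), (2017, 8), (2037, 8)]
```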
Some manufacturers also used the production date code as a stand-alone code to indicate the production date of integrated circuits.
Some manufacturers specify a three-character date code with a two-digit week number following the year letter.
IEC 60062 also specifies a four-character year/week code.
Ten-year cycle code
First character: Year of production in ten-year cycle
0 = 2020
1 = 2021
2 = 2022, 2012
3 = 2023, 2013
4 = 2024, 2014
5 = 2025, 2015
6 = 2026, 2016
7 = 2017
8 = 2018
9 = 2019
Second character: Month of production
1 to 9 = January to September
X = October
Y = November
Z = December
Example: 78 = August 2017
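The ten-year variant decodes the same way, with the decade supplied from context (again a purely illustrative helper, not from the standard):

```python
TEN_YEAR_MONTHS = {**{str(m): m for m in range(1, 10)}, "X": 10, "Y": 11, "Z": 12}

def decode_ten_year(code: str, decade: int = 2010) -> tuple:
    """Decode a two-character ten-year cycle code; the digit repeats every
    decade, so the decade argument must come from context."""
    return decade + int(code[0]), TEN_YEAR_MONTHS[code[1]]

print(decode_ten_year("78"))   # (2017, 8), matching the example above
```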
IEC 60062 also specifies a four-character year/week code.
Four-year cycle code
IEC 60062 also specifies a single-character four-year cycle year/month code.
Marking codes for E series preferred values
Three-character resistor marking code
For resistances following the (E48 or) E96 series of preferred values, the former EIA-96 as well as IEC 60062:2016 define a special three-character marking code for resistors to be used on small parts. The code consists of two digits denoting one of the "positions" in the series of E96 values followed by a letter indicating the multiplier.
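A hedged sketch of decoding this code: the E96 significand at position i is assumed here to follow the series formula round(10^(i/96), 2) for i = 0..95, and the multiplier letters are the commonly tabulated EIA-96 set rather than values quoted from IEC 60062:2016:

```python
# E96 significands 1.00, 1.02, ..., 9.76, generated from the series formula.
E96 = [round(10 ** (i / 96), 2) for i in range(96)]
# Commonly tabulated EIA-96 multiplier letters (an assumption, see above).
MULTIPLIERS = {"Z": 0.001, "Y": 0.01, "X": 0.1, "A": 1,
               "B": 10, "C": 100, "D": 1e3, "E": 1e4, "F": 1e5}

def decode_eia96(code: str) -> float:
    """Decode e.g. '01A' -> 100 ohm or '68C' -> 49.9 kohm."""
    significand = E96[int(code[:2]) - 1] * 100   # printed positions run 01..96
    return significand * MULTIPLIERS[code[2]]

print(round(decode_eia96("01A"), 2))   # 100.0
print(round(decode_eia96("68C"), 2))   # 49900.0
```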
Two-character capacitor marking code
For capacitances following the (E3, E6, E12 or) E24 series of preferred values, the former ANSI/EIA-198-D:1991, ANSI/EIA-198-1-E:1998 and ANSI/EIA-198-1-F:2002 as well as the amendment IEC 60062:2016/AMD1:2019 to IEC 60062 define a special two-character marking code for capacitors, used on very small parts which leave no room to print longer codes. The code consists of an uppercase letter denoting the two significant digits of the value followed by a digit indicating the multiplier. The EIA standard also defines a number of lowercase letters to specify values not found in E24.
Corresponding standards
IEC 62:1952 (aka IEC 60062:1952), first edition, 1952-01-01
IEC 62:1968 (aka IEC 60062:1968), second edition, 1968-01-01
IEC 62:1968/AMD1:1968 (aka IEC 60062:1968/AMD1:1968), amended second edition, 1968-12-31
IEC 62:1974 (aka IEC 60062:1974)
IEC 62:1974/AMD1:1988 (aka IEC 60062:1974/AMD1:1988), amended third edition, 1988-04-30
IEC 62:1974/AMD2:1989 (aka IEC 60062:1974/AMD2:1989), amended third edition, 1989-01-01
IEC 62:1992 (aka IEC 60062:1992), fourth edition, 1992-03-15
IEC 62:1992/AMD1:1995 (aka IEC 60062:1992/AMD1:1995), amended fourth edition, 1995-06-19
IEC 60062:2004 (fifth edition, 2004-11-08)
IEC 60062:2016 (sixth edition, 2016-07-12)
IEC 60062:2016/COR1:2016 (corrected sixth edition, 2016-12-05)
IEC 60062:2016/AMD1:2019 (amendment 1, 2019-08-20)
IEC 60062:2016+AMD1:2019 CSV (consolidated version 6.1, 2019-08-20)
EN 60062:1993
EN 60062:1994 (1994-10)
EN 60062:2005
EN 60062:2016
EN 60062:2016/AC:2016-12 (corrected edition)
EN 60062:2016/A1:2019 (amendment 1)
BS 1852:1975 (related to IEC 60062:1974)
BS EN 60062:1994
BS EN 60062:2005
BS EN 60062:2016
DIN 40825:1973-04 (capacitor/resistor value code), DIN 41314:1975-12 (date code)
DIN IEC 62:1985-12 (aka DIN IEC 60062:1985-12)
DIN IEC 62:1989-10 (aka DIN IEC 60062:1989-10)
DIN IEC 62:1990-11 (aka DIN IEC 60062:1990-11)
DIN IEC 62:1993-03 (aka DIN IEC 60062:1993-03)
DIN EN 60062:1997-09
DIN EN 60062:2001-11
DIN EN 60062:2005-11
DIN EN 60062:2017-06
DIN EN 60062:2020-03
ČSN EN 60062
DS/EN 60062
EVS-EN 60062
(GOST) ГОСТ IEC 60062-2014 (related to IEC 60062-2004)
ILNAS-EN 60062
I.S. EN 60062
NEN EN IEC 60062
NF EN 60062
ÖVE/ÖNORM EN 60062
PN-EN 60062
prМКС EN 60062
SN EN 60062
TS 2932 EN 60062
UNE-EN 60062
BIS IS 4114-1967
IS 8186-1976 (related to IEC 62:1974)
JIS C 5062, JIS C 60062
TGL 31667
See also
Electronic color code
SI prefix
Metric prefix
Engineering notation
E notation
Cifrão (a similar scheme for a currency)
Fermata (a remotely similar musical notation)
Explanatory notes
References
Standards
Electrical components
Encodings | RKM code | [
"Technology",
"Engineering"
] | 2,725 | [
"Electrical engineering",
"Electrical components",
"Components"
] |
16,049,255 | https://en.wikipedia.org/wiki/Larson%E2%80%93Miller%20relation | The Larson–Miller relation, also widely known as the Larson–Miller parameter and often abbreviated LMP, is a parametric relation used to extrapolate experimental data on creep and rupture life of engineering materials.
Background and usage
F.R. Larson and J. Miller proposed that creep rate could adequately be described by the Arrhenius-type equation:
$r = A e^{-\Delta H/(RT)}$
where $r$ is the creep process rate, $A$ is a constant, $R$ is the universal gas constant, $T$ is the absolute temperature, and $\Delta H$ is the activation energy for the creep process. Taking the natural log of both sides:
$\ln r = \ln A - \Delta H/(RT)$
With some rearrangement:
$\Delta H/(RT) = \ln A - \ln r$
Using the fact that creep rate is inversely proportional to the rupture time $t$, the equation can be written as:
$1/t = A e^{-\Delta H/(RT)}$
Taking the natural log:
$\ln t = \Delta H/(RT) - \ln A$
After some rearrangement the relation finally becomes:
$T(\ln t + \ln A) = B$, where $B = \Delta H/R$
This equation is of the same form as the Larson–Miller relation
$\mathrm{LMP} = T(C + \log_{10} t)$
where the quantity LMP is known as the Larson–Miller parameter. Using the assumption that activation energy is independent of applied stress, the equation can be used to relate the difference in rupture life to differences in temperature for a given stress. The material constant C is typically found to be in the range of 20 to 22 for metals when time is expressed in hours and temperature in degrees Rankine.
The Larson–Miller model is used for experimental tests so that results at certain temperatures and stresses can predict rupture lives of time spans that would be impractical to reproduce in the laboratory.
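A small numeric illustration of that use (the temperatures, times and the choice C = 20 are made up for the example; temperatures are in degrees Rankine, times in hours):

```python
import math

def lmp(temperature: float, hours: float, c: float = 20.0) -> float:
    """Larson-Miller parameter T*(C + log10 t); T absolute, t in hours."""
    return temperature * (c + math.log10(hours))

# Suppose a specimen ruptures after 100 h at 1500 R under a given stress.
p = lmp(1500.0, 100.0)                 # 1500 * (20 + 2) = 33000.0

# At the same stress the parameter is unchanged, so the rupture life at a
# 1400 R service temperature follows by solving LMP = T*(C + log10 t) for t:
t_service = 10 ** (p / 1400.0 - 20.0)
print(round(t_service))                # 3728 hours, a roughly 37x longer life

# The sensitivity described below: a tenfold change in time shifts the
# parameter by an amount equal to the absolute temperature.
assert math.isclose(lmp(1500.0, 1000.0) - p, 1500.0)
```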
Expanding the equation as a Taylor series makes the relationship easier to understand. Only the first terms are kept.
Changing the time, by a factor of 10, changes the logarithm by 1 and the LMP changes by an amount equal to the temperature.
To get an equal change in LMP by changing the temperature, the temperature needs to be raised or lowered by about 5% of its absolute value.
Typically a 5% increase in absolute temperature will increase the rate of creep by a factor of ten.
The equation was developed during the 1950s while Miller and Larson were employed by GE performing research on turbine blade life.
MPC project Omega
The Omega Method is a comprehensive approach developed for assessing the remaining life of components operating in the creep range. Unlike other methods, such as replication, life summation based on Larson–Miller parameters, or Kachanov's approach, it aims to overcome limitations in accurately estimating strain accumulation, damage, and the rate of damage accumulation, and it provides a broader methodology for life assessment that incorporates strain-rate parameters, multi-axial damage parameters (including Omega), and material-specific property relations.
In 1986, the Petroleum and Chemical Committee of MPC initiated a research program to evaluate different approaches to life assessment. Through extensive experimentation on various materials, including carbon steel and hard chromium-molybdenum steel, several important observations were made:
• Carbon steel exhibited minimal damage from creep strain, even under high stress levels and temperatures.
• The creep resistance of hard and brittle materials was significantly influenced by small amounts of strain, although visible creep cavities or cracks were not observed.
• Laboratory-damaged or ex-service materials showed negligible primary or secondary creep during subsequent testing.
• Strain rate consistently increased with strain during tests, and the rate of strain rate increase with stress was higher than predicted by Norton's law.
Based on their findings, the researchers concluded that strain rate, at the operating stress and temperature, can indicate material damage. They aimed to develop a model linking strain rate, strain, consumed life, and remaining life. Initially designed for thermally stabilized materials, the Omega Method's applicability extends to diverse situations. It incorporates Kachanov's equations for strain rate acceleration, prioritizing monotonically increasing strain rates. Emphasizing strain rate's significance, the method recommends referencing an ex-service database for ex-service materials.
In API 579, the MPC Project Omega program, which incorporates the Omega Method, offers a broader methodology for assessing remaining life compared to the Larson-Miller model. It considers strain-rate parameters, multi-axial damage parameters (including Omega), and material-specific property relations in the refining and petrochemical industry.
The MPC Project Omega program provides a comprehensive framework encompassing the Larson-Miller model for predicting remaining life in the creep regime.
The remaining life of a component, L, can be calculated using the MPC Project Omega equations, in which stress is expressed in ksi (MPa), temperature in degrees Fahrenheit (degrees Celsius), and the remaining life and time in hours.
The von Mises yield criterion is specifically applicable to ductile materials.
Values obtained by the MPC Omega project for the equation coefficients of different materials can be found in API 579-1/ASME FFS-1-2021, Fitness-For-Service.
Nomenclature:
L = rupture life (hours)
= initial or reference creep strain rate at the start of the time period being evaluated based on stress state and temperature
= Omega multiaxial damage parameter
to = curve-fit coefficients for the yield strength data, the MPC Project Omega creep strain rate parameter, or the Larson–Miller parameter
= adjustment factor for creep ductility in the Project Omega Model; a range of +0.3 for brittle behavior and -0.3 for ductile behavior can be used
= reference temperature
= temperature
= stress parameter
= parameter based on the state of stress for the MPC Project Omega Life Assessment Model (equal to 3.0 for a pressurized sphere or formed head)
= Bailey Norton coefficient evaluated at the reference stress in the current load increment, used in the MPC Project Omega Life Assessment Method
= Omega uniaxial damage parameter
to = curve-fit coefficients for the MPC project Omega parameter, as applicable
= Prager factor equal to 0.33, for MPC project Omega Life Assessment Model
σ1 to σ3 = principal stresses
= effective stress
Creep test program for MPC project Omega
The reference creep strain rate can be obtained from an accelerated creep test in which strain is recorded and the data interpolated.
When adopting the Omega Method for a remaining life assessment, it is sufficient to estimate the creep strain rate at the service stress and temperature by conducting creep tests on the material that has been exposed to service conditions.
The creep test program followed the guidelines provided in technical literature and API 579-1 for the implementation of the Omega Method. The program consisted of the following steps:
Conducted five longitudinal tests on specimens to determine the current strain rate at two different temperature values and stress levels close to those experienced in service.
Chose test conditions based on the strain-gage capability to accurately measure small strain rate values.
Performed two additional tests on transversal (circumferential) specimens to compare material behavior in different directions, minimizing the influence of welds on specimen behavior.
Tested additional longitudinal specimens with a "reduced" gage length to investigate the influence of geometry on test results.
Machined all specimens according to EN 10291 specifications for uniaxial creep testing in tension.
Carried out tests under constant load with continuous monitoring of creep strains.
Excluded the need for creep tests to define the Omega parameter, based on the assumption of reliable data from previous experimental programs.
Observed initial differences in strain rates between circumferential and longitudinal tests, but further tests confirmed minimal effect of specimen drawing direction and attributed any differences to specimen geometry.
Demonstrated satisfactory agreement between all tests and the model proposed by the Omega approach, with good alignment to API 579-1 data.
Determined the value of A(t) that provided the best fit for the experimental data using the API 579-1 curve.
The consumed life fraction "f" was calculated using the Omega approach equations based on the average behavior of non-exposed material. The calculated value of "f" provides an estimate of how much of the material's life has been used
The consumed life fraction "f" can be calculated using the following equation:
Here, represents the initial strain rate, represents the current strain rate, represents the logarithm of the strain rate at time , and represents the logarithm of the initial strain rate.
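A minimal sketch of this bookkeeping, with illustrative variable names rather than API 579-1 nomenclature; both relations follow from the Omega-method assumption that the strain rate accelerates as $\dot{\varepsilon}(t) = \dot{\varepsilon}_0/(1 - t/t_r)$:

```python
def consumed_life_fraction(rate_initial: float, rate_current: float) -> float:
    # f = t/t_r = 1 - e_dot0/e_dot_t under the accelerating-rate assumption.
    return 1.0 - rate_initial / rate_current

def remaining_life_hours(rate_current: float, omega: float) -> float:
    # Remaining life from the current strain rate and the Omega parameter.
    return 1.0 / (rate_current * omega)

print(consumed_life_fraction(1e-6, 4e-6))   # 0.75: three quarters consumed
print(remaining_life_hours(4e-6, 50.0))     # 5000.0 hours (example values)
```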
Overall, the creep test program involved conducting tests on various specimens, comparing different directions, ensuring compliance with testing standards, and validating the results with the Omega Method model and API 579-1 data.
Creep Life Assessment Using the Omega Method: A Case Study for creep strength enhanced ferritic (CSEF) steels
CSEF steels have complex microstructures, and conventional techniques struggle to accurately assess their creep life. The Omega method offers a systematic approach that combines hardness measurements with other techniques to overcome these challenges. The article highlights that the Omega method provides a systematic approach for creep life assessment by combining hardness measurements with other techniques such as the potential drop method and tertiary creep modeling. The potential drop method measures the electric potential drop ratio, which is correlated to the hardness drop. This correlation enables accurate creep life prediction using the hardness model. This integration of hardness measurements and the potential drop method enhances the accuracy of creep life assessments.
Compared to the Larson-Miller parameter commonly used for creep life assessment, the Omega method offers several advantages for CSEF steels, which exhibit different degradation behavior from conventional steels and are therefore difficult to assess with conventional techniques. By considering microstructural factors and utilizing hardness measurements, which are directly influenced by the material's degradation, the method combines microstructure and mechanical property assessment, allowing a comprehensive evaluation of the creep life of CSEF steels and more reliable predictions of the material's remaining useful life.
See also
Creep (deformation)
References
Sources
Hertzberg, Richard W. Deformation and Fracture Mechanics of Engineering Materials, Fourth Edition. John Wiley and Sons, Inc., Hoboken, NJ: 1996.
Larson, Frank R. and Miller, James: A Time-Temperature Relationship for Rupture and Creep Stresses. Trans. ASME, vol. 74, pp. 765−775.
Fuchs, G. E. High Temperature Alloys, Kirk–Othmer Encyclopedia of Chemical Technology
Smith & Hashemi, Foundations of Material Science and Engineering
Dieter, G.E. Mechanical Metallurgy, Third Edition, McGraw-Hill Inc., 1986, pp. 461–465.
Materials science | Larson–Miller relation | [
"Physics",
"Materials_science",
"Engineering"
] | 2,178 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
16,049,560 | https://en.wikipedia.org/wiki/Acetogenin | Acetogenins are a class of polyketide natural products found in plants of the family Annonaceae. They are characterized by linear 32- or 34-carbon chains containing oxygenated functional groups including hydroxyls, ketones, epoxides, tetrahydrofurans and tetrahydropyrans. They are often terminated with a lactone or butenolide. Over 400 members of this family of compounds have been isolated from 51 different species of plants. Many acetogenins are characterized by neurotoxicity.
Examples include:
Annonacin
Annonins
Bullatacin
Uvaricin
Structure
Structurally, acetogenins are a series of C-35/C-37 compounds usually characterized by a long aliphatic chain bearing a terminal methyl-substituted α,β-unsaturated γ-lactone ring, as well as one to three tetrahydrofuran (THF) rings. These THF rings are located along the hydrocarbon chain, along with a number of oxygenated moieties (hydroxyls, acetoxyls, ketones, epoxides) and/or double bonds.
Research
Acetogenins have been investigated for their biological properties, but are a concern due to neurotoxicity. Purified acetogenins and crude extracts of the common North American pawpaw (Asimina triloba) or the soursop (Annona muricata) remain under laboratory studies.
Mechanism of action
Acetogenins inhibit NADH dehydrogenase, a key enzyme in energy metabolism.
References
External links
Fatty alcohols
Polyketides
NADH dehydrogenase inhibitors
Plant toxins | Acetogenin | [
"Chemistry"
] | 348 | [
"Biomolecules by chemical classification",
"Natural products",
"Chemical ecology",
"Plant toxins",
"Polyketides"
] |
16,051,457 | https://en.wikipedia.org/wiki/Aratashen | Aratashen (also Romanized as Arratashen or Artashen; until 1978 Zeyva Hayi, meaning "Armenian Zeyva", also known as Zeyva, Bol’shaya Zeyva and Nerkin-Zeyva) is a town in the Armavir Province of Armenia. It is located on the Ararat Plain.
Archaeology
A neolithic-chalcolithic tell is located south of the town.
The first occupation phase at Aratashen was pre-ceramic, going back to 6500 BCE. Parallels are found in southeastern Transcaucasia and in northeastern Mesopotamia, especially in the construction techniques and the lithic and bone tools.
The pottery, too, once it appears, is somewhat similar. The best parallels are with Kul Tepe of Nakhichevan to the south, and with the northern Near East, such as the lower levels of Hajji Firuz Tepe, at Dalma Tepe, and at Tilki Tepe.
The Shulaveri-Shomutepe culture, that developed in the neighbouring Kura basin and the Artsakh steppe, does not have close parallels with the early Aratashen artifacts.
At Aratashen, obsidian was discovered that came from Turkish sources at Meydan Dağ, in the Lake Van basin; these samples were found in association with the very early Halaf culture ware, which probably also came to Aratashen from the same area.
Other types of pottery appear at the end of the fifth millennium BC. At this time, the plain of Ararat was in contact with the contemporary populations of northern Mesopotamia, and also with those of the ‘Sioni culture’ of the Kura basin.
The pottery of the second occupation phase of Aratashen is becoming close to that in the Ararat plain. Here we see the influence of the Late Chalcolithic horizon of approximately 4300–3500 BC in the whole of northern Mesopotamia, such as:
"... development of straw-tempered ware, initial use of the slow wheel, early forms of standardization in manufacture and typological features ("Coba bowls"), a frequent surface treatment with light scraping..."
This pottery has many Transcaucasian, or Sioni culture, features. The Sioni culture generally succeeded the Shulaveri-Shomutepe culture in some areas. Here we already see features of the later Kura–Araxes culture pottery.
Metalwork
There's evidence of very early metallurgy at Aratashen, going back to the first half of the sixth millennium BCE. According to A. Courcier,
In the Neolithic level IId of Aratashen, dated to the beginnings of the sixth millennium BCE, several fragments of copper ores (malachite and azurite) and 57 arsenical copper beads were discovered. Close to Aratashen, at Khatunark, one fragment of copper ore (malachite) has been discovered in a level dated to the first half of the sixth millennium BCE. This artefact, together with those found at Aratashen, suggest the nascent emergence of metallurgy in the Ararat region already during the Late Neolithic.
Equally early metalwork has also been discovered in the excavations at Aruchlo in Georgia, and at Mentesh Tepe in the Tovuz district of modern-day Azerbaijan, and at several other sites in Southern Caucasus.
At Aratashen and Khatunakh/Aknashen, there are similarities to the contemporary sites of Kultepe I, and Alikemek-Tepesi. Another prehistoric site that is close to Aratashen is Masis Blur.
See also
Armavir Province
References
External links
World Gazeteer: Armenia – World-Gazetteer.com
Populated places in Armavir Province
Kura-Araxes culture
Archaeometallurgy
Yazidi populated places in Armenia
Shulaveri–Shomu culture | Aratashen | [
"Chemistry",
"Materials_science"
] | 809 | [
"Archaeometallurgy",
"Metallurgy"
] |
16,052,279 | https://en.wikipedia.org/wiki/Bioprecipitation | Bioprecipitation is the concept of rain-making bacteria, proposed by David Sands from Montana State University in the 1970s. It describes a feedback cycle that benefits microbial and plant growth: land plants generate small airborne particles called aerosols that contain microorganisms, and these microorganisms influence the formation of clouds through their ice nucleation properties. The formation of ice in clouds is required for snow and most rainfall. Dust and soot particles can serve as ice nuclei, but biological ice nuclei are capable of catalyzing freezing at much warmer temperatures. The ice-nucleating bacteria currently known are mostly plant pathogens. Recent research suggests that bacteria may be present in clouds as part of an evolved process of dispersal.
Ice-nucleating proteins derived from ice-nucleating bacteria are used for snowmaking. A symbiotic relationship between sulphate reducing, lead reducing, sulphur oxidizing, and denitrifying bacteria was found to be responsible for biotransformation and bioprecipitation.
Plant pathogens
Most known ice-nucleating bacteria are plant pathogens. These pathogens can cause freezing injury in plants. In the United States alone, it has been estimated that frost accounts for approximately $1 billion in crop damage each year. The ice-minus variant of P. syringae is a mutant, lacking the gene responsible for ice-nucleating surface protein production. This lack of surface protein provides a less favorable environment for ice formation. Both strains of P. syringae occur naturally, but recombinant DNA technology has allowed for the synthetic removal or alteration of specific genes, enabling the creation of the ice-minus strain. The introduction of an ice-minus strain of P. syringae to the surface of plants would incur competition between the strains. Should the ice-minus strain win out, the ice nucleate provided by P. syringae would no longer be present, lowering the level of frost development on plant surfaces at normal water freezing temperature (0°C).
Dispersal of bacteria through rainfall
Bacteria present in clouds may have evolved to use rainfall as a means of dispersing themselves into the environment. The bacteria are found in snow, soils and seedlings in locations such as Antarctica, the Yukon Territory of Canada and the French Alps, according to Brent Christner, a microbiologist at Louisiana State University. It has been suggested that the bacteria are part of a constant feedback loop between terrestrial ecosystems and clouds. According to Christner, these bacteria may rely on rainfall to spread to new habitats, in much the same way as plants rely on windblown pollen grains, and this could be a key element of the bacterial life cycle.
Snowmaking
Certain species of bacteria and fungi are known to act as efficient biological ice nuclei at temperatures between −10 and 0 °C. Without ice-nucleating agents, water must be cooled to about −40 °C before it freezes; ice-nucleating bacteria can trigger freezing at around −1 °C. Even after the death of the bacteria, their glycoproteins continue to nucleate ice by mimicking the structure of ice at the nucleation sites, acting as a template for the formation of the ice lattice. Many ski resorts use a commercially available freeze-dried preparation of ice-nucleating proteins derived from the bacterium species Pseudomonas syringae to make snow in a snowgun. Pseudomonas syringae is a well-studied plant pathogen that can infect plants, resulting in crop losses; studying this pathogen helps in understanding the plant immune system.
See also
Pseudomonas syringae
Ice-minus bacteria
Aeroplankton
References
Bacteria
Weather modification | Bioprecipitation | [
"Engineering",
"Biology"
] | 771 | [
"Planetary engineering",
"Prokaryotes",
"Weather modification",
"Bacteria",
"Microorganisms"
] |
16,052,374 | https://en.wikipedia.org/wiki/Coturnism | Coturnism is an illness featuring muscle tenderness and rhabdomyolysis (muscle cell breakdown) after consuming quail (usually common quail, Coturnix coturnix, from which the name derives) that have fed on poisonous plants.
Causes
From case histories it is known that the toxin is stable, as four-month-old pickled quail have been poisonous. Humans vary in their susceptibility; only one in four people who consumed quail soup containing the toxin fell ill. It is apparently fat-soluble, as potatoes fried in quail fat are also poisonous.
Coniine from hemlock consumed by quail has been suggested as the cause of coturnism, though quail resist eating hemlock. Hellebore has also been suggested as the source of the toxin. It has also been asserted that this evidence points to the seeds of the annual woundwort (Stachys annua) being the causal agent. It has been suggested that Galeopsis ladanum seeds are not responsible.
Epidemiology
Migration routes and season may affect quail risk. Quail are never poisonous outside the migration season nor are the vast majority poisonous while migrating. European common quail migrate along three different flyways, each with different poisoning characteristics, at least in 20th-century records. The western flyway across Algeria to France is associated with poisonings only on the spring migration and not on the autumn return. The eastern flyway, which funnels down the Nile Valley, is the reverse. Poisonings were only reported in the autumn migration before the quail had crossed the Mediterranean. The central flyway across Italy had no associated poisonings.
History
The condition was certainly known by the 4th century BC to the ancient Greek (and subsequently Roman) naturalists, physicians, and theologians. The Bible (Numbers 11:31-34) mentions an incident where the Israelites became ill after having consumed large amounts of quail in Sinai. Philo gives a more detailed version of the same Biblical story (The Special Laws: 4: 120–131). Early writers used quail as the standard example of an animal that could eat something poisonous to man without ill effects for themselves. Aristotle (On Plants 820:6-7), Philo (Geoponics: 14: 24), Lucretius (On the Nature of Things: 4: 639–640), Galen (De Temperamentis: 3:4) and Sextus Empiricus (Outlines of Pyrrhonism: 1: 57) all make this point.
Central to these ancient accounts is the idea that quail became toxic to humans after consuming seeds from hellebore or henbane (Hyoscyamus niger). However Sextus Empiricus suggested that quail ate hemlock (Conium maculatum), an idea revived in the 20th century. Confirmation that the ancients understood the problem comes from a 10th-century text, Geoponica, based on ancient sources. This states, "Quails may graze hellebore putting those who afterwards eat them at risk of convulsions and vertigo....".
References
External links
Toxic effect of noxious substances eaten as food
Quails
Bird problems with humans
Bird feeding
Plant toxins | Coturnism | [
"Chemistry"
] | 668 | [
"Chemical ecology",
"Plant toxins"
] |
16,052,392 | https://en.wikipedia.org/wiki/Haff%20disease | Haff disease is the development of rhabdomyolysis (swelling and breakdown of skeletal muscle, with a risk of acute kidney failure) within 24 hours of ingesting fish.
History
The disease was first described in 1924 in the vicinity of Königsberg, Germany (now Kaliningrad, Russia) on the Baltic coast, in people staying around the northern part of the Vistula Lagoon (German: Frisches Haff).
Over the subsequent fifteen years, about 1000 cases were reported in people, birds and cats, usually in the summer and fall, and a link was made with the consumption of fish (burbot, eel and pike). Since that time, only occasional reports have appeared of the condition, mostly from the Soviet Union and Germany.
In 1997, six cases of Haff disease were reported in California and Missouri, all after the consumption of buffalo fish (Ictiobus cyprinellus).
In July and August 2010, dozens of people contracted rhabdomyolysis after eating Procambarus clarkii in Nanjing, China. A month later, the Chinese authorities attributed the cases to Haff disease.
An outbreak was reported in Brooklyn, New York on 18 November 2011, when two household members were stricken by the syndrome after eating buffalo fish. On February 4, 2014, two cases of Haff disease were reported in Cook County, Illinois following the consumption of buffalo fish.
A group from Brazil identified a Haff disease outbreak in the State of Bahia that lasted from December 2016 to April 2017, with 67 cases identified. In August 2018, a couple from São Paulo, southeastern Brazil, fell ill and needed semi-intensive hospital care after eating fish of the species known in Portuguese as "arabaiana" or "olho-de-boi" (ox-eye), possibly the southern yellowtail amberjack, Seriola lalandi, which they had bought in the city of Fortaleza, State of Ceará, northeastern Brazil, and, according to them, looked "perfect". The day following their admission to hospital the patients already presented an alteration of their urine, which, according to the woman who fell ill, "was very dark, indeed looked like Coca-Cola".
On March 2, 2021, a 31-year-old woman from Recife, Brazil, died from Haff disease after ingesting yellowtail amberjack.
Poison
The exact nature of the poison is still unclear. In the U.S. outbreak, the source of the fish was traced by the Centers for Disease Control and Prevention, and studies of other fish from the same sources showed a hexane-soluble (and hence non-polar lipid) substance that induced similar symptoms in mice; other food-borne poisons commonly found in fish could not be detected. It cannot be inactivated by cooking, as all six CDC cases had consumed cooked or fried fish. Palytoxin has been proposed as a disease model. It has also been suggested that the toxin may have thiaminase activity (i.e. it degrades thiamine, also known as vitamin B1).
Symptoms
Some of the reported symptoms include:
Muscle pain
Light to dark brown-colored urine
Nausea
Chest pain
Vomiting
Shortness of breath
Profuse sweating
Pain to light touch
Dry mouth, numbness of thighs or whole body, back pain, and stomach cramps are also reported, but seen less frequently.
References
Secondary sources
Jürgen W. Schmidt: Die "Haffkrankheit" in Ostpreussen im Herbst 1932, in: Preussenland - Mitteilungen der Historischen Kommission für Ost- und Westpreussische Landesforschung und aus dem Geheimen Staatsarchiv Preussischer Kulturbesitz Heft 2/2009 (47. Jg.) pp. 57–60
External links
Poisons
Nephrology | Haff disease | [
"Environmental_science"
] | 799 | [
"Poisons",
"Toxicology"
] |
16,052,558 | https://en.wikipedia.org/wiki/3FUN | 3Fun (3FUN) is a location-based mobile online dating application for Android and iOS. 3Fun is available in the United States, United Kingdom, Brazil, the Netherlands, and several other countries. As of May 2019, there were approximately 100,000 monthly downloads, and more than 2 million total downloads. In January 2022, 3Fun partnered with British security company DigitalXRAID to implement penetration testing.
See also
Comparison of online dating services
Feeld
Grindr
Scruff
Spoonr
References
External links
Mobile social software
Online dating services for specific interests
Online dating services of Canada | 3FUN | [
"Technology"
] | 123 | [
"Mobile software stubs",
"Mobile technology stubs"
] |
16,053,164 | https://en.wikipedia.org/wiki/Potholder | A potholder is a piece of textile (often quilted) or silicone used to cover the hand when holding hot kitchen cooking equipment, like pots and pans. They are frequently made of polyester and/or cotton. Crocheted potholders can be made out of cotton yarn as a craft project/folk art.
A potholder offers protection for only one hand at a time. To lift a pan with two hot handles using both hands, two potholders are needed. For holding a hot piece of equipment, the potholder is folded around it and grasped with the hand. Generally a rubber surface will be on one side to grip and a fabric side to absorb the heat on the other side.
When made of textile fabric, potholders typically have an inner layer of a material providing thermal insulation sandwiched between more colorful or decorative outsides. The most common type commercially available nowadays has the form of a square, with a side length varying from to and slightly rounded corners, and a textile loop at one of the corners for hanging.
Cultural uses
Throughout the potholder's history, it has also been used as a representative symbol of various cultural movements. During the United States Abolitionist Movement, potholders were displayed by women who wanted to show their support for the Abolitionist cause, providing a way to casually identify with the movement without overtly expressing it. Additionally, potholders are sometimes used by Cajun cultures as part of their Mardi Gras masks. During the internment of Japanese Americans during World War II, the interned Japanese Americans created potholders out of various colored fabrics to reflect their own culture. This was done to break up the monotony, as the colorful nature of the crafts stood in stark contrast to the generally bland surroundings of the camps.
Style development
The earliest records of potholders in the United States stem from the early 1800s, when potholders were made out of lace, as well as crocheted and embroidered. The usage of lace fell away at the same time as the popularization of geometric patterns on potholders, lasting up until the late 1970s. At the same time, applique methods began to become popularly used around the 1960s, and remained the most popular method, alongside of quilting up through the 1970s to the present.
Materials
The textile potholder was nonexistent in art or writing until around the 19th century. Evidence in art points towards hooks that were used to carry hot-handled pots in ancient Greece, but it wasn't until the Antislavery Bazaars of the mid-19th century that arguably the first home-made potholders were made. These crafts were illustrated with various designs and advertised the phrase "Any holder but a Slave Holder". By creating such a political craft, which shares similar dimensions and fabrication with the contemporary potholder, women who may never have associated with the abolitionist movement had the opportunity to do so. The popularity of potholders grew alongside the proliferation of magazines, in which patterns were given for 'teapot holders' that strongly resemble potholders. Insulation capability was limited in early models. Since their genesis as standard household items, potholders have been largely associated with home crafting, and crocheting has long been the leading method in this vein, followed by knitting and patchwork. Needlework patterns in the 1950s were often impractical and over-designed, with holes and elaborate spacing that would burn the user or wear out the holder quickly. In the 1970s, quilted and appliqued potholder patterns gained popularity, enduring into the present day.
Safety
Potholders are a form of personal protective equipment. They are used in kitchen settings to protect kitchen staff from heat-related injuries. However, one issue with having potholders in commercial kitchens is that they are not sanitary. According to Food Service Technology 2.2, research has shown that potholders are one of the main culprits of cross-contamination in the kitchen. This occurs when someone in the kitchen works with raw food and then uses a potholder without proper sanitation; the next person to use the potholder is exposed to the germs left behind. Another issue is that the materials potholders are made of are often not water resistant, making them difficult to wash. This poses a problem because kitchens are full of accidental spills, and a soiled potholder can be difficult to clean. If a potholder becomes wet in any way, it becomes a steam burn risk. Because of these risks, potholders have been banned from commercial kitchens in New Zealand, but there is no sign of this happening in the United States anytime soon.
Wool
Pot holders need to withstand temperatures over 400 degrees Fahrenheit to protect skin from hot dishes and make a potentially harmful task harmless. A common fabric used for potholders is wool because it can withstand very hot temperatures.
What makes wool a unique product is that it is essentially flame resistant. Its combination of sulfur and nitrogen yields a naturally flame-resistant fiber. The reason wool is used for many types of heat protection is its high ignition temperature, defined as the temperature at which a material will produce a flame. Wool can be heated to over 1,000 degrees Fahrenheit before igniting, and even when the fabric comes in contact with flames it does not spread them. The high ignition temperature combined with the inability of flames to spread through the fiber gives the material its protective quality. When wool is heated to a certain degree, it begins to "char" on the outside, which can provide a protective outer layer on the fabric. When the char on the outside of the fabric is consistent with the original properties of the wool, it can produce a safer version of the product. When heat is applied directly to the fabric, the char forms a semi-liquid state which can be wiped off, leaving no evidence of the heat contact.
Home production
In the early days of "do it yourself" reliance and domestic craftwork, projects like quilting, sewing, knitting, and crocheting were used for both labor and leisure. These activities are commonly used methods to make potholders. Craftwork of this nature has often been associated with women and children as a tradition to be passed down from mother to daughter since before the eighteenth century. By the mid-eighteenth century, it was seen as the woman's duty to decorate and fill their home with these different types of homemaking crafts. Potholders appeared as one of these crafts in the late nineteenth century, usually marketed to accompany kettles and teapots.
Patterns to create potholders at home were first seen in the United States in pamphlets and magazines, including periodicals like Workbasket, whose primary target audience consisted of the middle and working classes. This appearance of needlework patterns in magazines began around 1880 and continued to be prominent through the 1930s. During the Depression Era, designs for potholders were being published by the household press as well as makers of yarns and threads. This period of time is when potholders blossomed into the useful, diverse art form that is recognized by needleworkers today.
Common types of potholder making at home include quilting, knitting, and crocheting. These techniques use different mediums such as yarn or scraps of fabric in order to create potholders of all different colors and patterns. Many "DIY" tutorials teach how to make a simple square potholder, but there are also many that teach a variety of shapes, sizes, and designs, including little houses or flowers. These homemade kitchen tools are often considered good for home decor or gift giving.
In advertising
By the early 20th century, potholders were regularly featured in United States' advertisements. They were featured in magazine and newspaper ads for kitchen appliances, usually providing protection between a woman's hands and her pot or pan of freshly cooked food. The appliance being advertised would often be featured as a backdrop. A typical advertisement would show a young, smiling woman using potholders to remove her freshly cooked bread from the oven. Though the advertisement would usually be for an oven or stove, the potholders are featured as a mainstay in a trendy young woman's kitchen.
See also
Oven glove
Mittens
References
External links
Kitchenware
Personal protective equipment | Potholder | [
"Engineering",
"Environmental_science"
] | 1,711 | [
"Safety engineering",
"Personal protective equipment",
"Environmental social science"
] |
7,046,835 | https://en.wikipedia.org/wiki/Lycoperdon%20umbrinum | Lycoperdon umbrinum, commonly known as the umber-brown puffball, is a type of puffball mushroom in the genus Lycoperdon. It is found in China, Europe, Africa, and North America.
Description
This species has a fruit body that is shaped like a top, with a short, partly buried stipe. It is tall and broad. It is approximately the size of a golf ball but may grow to be as big as a tennis ball. Lycoperdon umbrinum is very similar to Lycoperdon molle; they are so similar that scientists used to refer to them by the same name. On closer inspection, the spines on L. umbrinum are sparser, and the spores of L. molle are much larger with a greater density of spines. The spores of L. umbrinum are spherical, either smooth or ornamented with spines, and appear olive yellow in KOH.
The fruit body is initially pale brown then reddish to blackish brown, and the outer wall has slender, persistent spines up to 1 mm long. Spores are roughly spherical, 3.5–5.5 μm in diameter, with fine warts and a pedicel that is 0.5–15 μm long. It is uncommon and found mostly in coniferous woods on sandy soils.
The species is considered edible, but care is needed in identification because it can be confused with toxic earthballs or deadly Amanita species.
Ecology and habitat
This fungus is saprophytic, commonly growing in forests and under conifers. It has also been seen growing in poor quality soil in hardwood and conifer areas. It has been observed growing by itself, dispersed, or many together.
The fruiting period is from June through September. Unlike agarics which have gills that hold spores, when conditions are right, these puffballs will become dry and burst to release their spores. Upon rupturing, they can release trillions of spores.
An interesting characteristic of Lycoperdon umbrinum is that it likely has a mycorrhizal relationship with Pinus patula. One study investigated this relationship and found the two species often growing near each other; in addition, branched, finger-like mycorrhizae developed underneath the L. umbrinum fruiting bodies. This study was done in South Africa, where such mutualisms are common among coniferous plants grown on a large scale (and L. umbrinum is one of the fungi involved).
Edibility and medicinal uses
Lycoperdon umbrinum is edible and has been found to have some medicinal purposes. This mushroom has historically been used by the Mam ethnic group in Mexico. They call it “wutz anim” or “dead’s eye” which they use to keep away the evil eye. They typically prepare it by boiling and eat it by itself or with other plants. This group also uses it against asthma (creating a powder mixed with other plants) and additional uses that seem to overlap with the uses of baby powder. In some parts of the country, there is a mushroom gathering tradition (where these mushrooms are used for food, medicine, religious purposes, or for selling) that the whole family is a part of.
In the lab, L. umbrinum has been found to have significant antibacterial properties and potentially antimicrobial properties. It was found that Aspergillus tamarii (an endophytic fungus) is associated with L. umbrinum through a beneficial mutualistic relation. This fungus, extracted from L. umbrinum has significant antibacterial properties specifically on Salmonella typhi, Staphylococcus aureus, Bacillus subtilis, and Escherichia coli. L. umbrinum was also found to have antimicrobial activities against methicillin resistant Staphylococcus aureus (MRSA). Lycoperdon umbrinum and Trametes versicolor were found to inhibit the MRSA growth to the greatest degree (compared to the other fungi in the study) indicating that these species could hold a new source of antimicrobial properties to fight MRSA.
Although it may have helpful antibacterial and antimicrobial properties, spore inhalation should be avoided. Inhalation of Lycoperdon spp. spores can cause lycoperdonosis, a reaction to inhalation or ingestion of puffball spores which can lead to unpleasant symptoms.
Nutrients
Lycoperdon umbrinum contains tocopherols with α- and β- isoforms and has high ash content (indicating it has minerals important for nutrition).
References
External links
California Fungi
Edible fungi
Fungi of Asia
Fungi of Europe
Fungi of North America
Puffballs
Fungi described in 1801
Taxa named by Christiaan Hendrik Persoon
umbrinum
Fungus species | Lycoperdon umbrinum | [
"Biology"
] | 1,004 | [
"Fungi",
"Fungus species"
] |
7,047,611 | https://en.wikipedia.org/wiki/Clyde%20Arc | The Clyde Arc (known locally as the Squinty Bridge) is a road bridge spanning the River Clyde in Glasgow, Scotland, connecting Finnieston near the SEC Armadillo and SEC with Pacific Quay and Glasgow Science Centre in Govan. Prominent features of the bridge are its innovative curved design, and that it crosses the river at an angle. The Arc is the first city centre traffic crossing over the river built since the Kingston Bridge was opened to traffic in 1970.
The bridge was named the "Clyde Arc" upon its official opening on 18 September 2006. It had been previously known as the "Finnieston Bridge", or the "Squinty Bridge".
Design
The Clyde Arc was designed by Halcrow Group and built by BAM Nuttall. Glasgow City Council instigated the project in conjunction with Scottish Enterprise and the Scottish Government. Piling works for the bridge were carried out from a large floating barge on the Clyde, whilst the bridge superstructure was fabricated offsite. The bridge-deck concrete-slab units were cast at an onsite pre-casting yard. Planning permission was granted in 2003 and construction of the bridge began in May 2005. It was structurally completed in April 2006. The bridge project cost an estimated £20.3M and is designed to last 120 years.
The bridge has a main span of with two end spans of , resulting in a total span of . The design of the main span features a steel arch. The supports for the main span are located within the river with the abutments located behind the existing quay walls. The central navigation height at mean water height is .
It was officially opened on 18 September 2006 by Glasgow City Council leader Steven Purcell, although pedestrians were allowed to walk across it the previous two days as part of Glasgow's annual "Doors Open" Weekend.
The bridge connects Finnieston Street on the north bank of the river to Govan Road on the southern bank. The bridge takes four lanes of traffic, two of which are dedicated to public transport and two for private and commercial traffic. There are also pedestrian and cycle paths. The new bridge was built to provide better access to Pacific Quay and allow better access to regeneration areas on both banks of the Clyde. The bridge has been designed to cope with a possible light rapid transit system (light railway scheme) or even a tram system.
The bridge is the first part of several development projects planned to regenerate Glasgow. The £40M Tradeston Bridge was also completed (a further proposed pedestrian bridge linking Springfield Quay with Lancefield Quay was not). The canting basin and Govan Graving Docks next to Pacific Quay are subject to development along with Tradeston and Laurieston. A derelict area of Dalmarnock was used as the 'athletes' village' for the 2014 Commonwealth Games in Glasgow.
Support hanger failure
The bridge was closed between 14 January and 28 June 2008 due to the failure of one support hanger, and cracks found in a second.
On the night of 14 January 2008 the connecting fork on one of the bridge's 14 hangers (supporting cables that transfer the weight of the roadway to the bridge's arch) snapped; Strathclyde Police quickly closed the bridge to traffic. Robert Booth, a spokesman for Glasgow City Council said:
A detailed inspection on 24 January found a stress fracture in a second support cable stay, like the one which had failed previously. Engineers determined that all of these connectors would have to be replaced; rather than a brief closure the bridge would have to remain closed for six months. In addition traffic on the river below was also halted. In March Nuttall began installing five temporary saddle frames atop the bridge's arch; these allowed the weight of the bridge to be supported without the hangers. This allowed them to replace defective fork connectors at the top and bottom of each hanger.
The bridge reopened on 28 June 2008 with just two of its four lanes in use, having had all the cast steel connectors replaced with milled steel connectors. Once it reopened, Glasgow City Council estimated that 6,500 crossings would be made every day using the bridge.
New Civil Engineer reported that subcontractor Watson Steel Structures was suing Macalloy, the supplier of the failed connectors, for £1.8 million. Watson alleged that components obtained from Macalloy did not meet British Standards or their own specifications, that parts were inadequately manufactured, and that they did not tally with test certificates provided by the firm. Macalloy denied the claim and countered that Watson Steel Structures Ltd had only specified minimum yield stress for the components.
See also
Hulme Arch Bridge
References
External links
Photographs taken at the opening ceremony
Clyde Arc - Clyde Waterfront project details
Road bridges in Scotland
Bridges in Glasgow
Bridges across the River Clyde
Through arch bridges in the United Kingdom
Pedestrian bridges in Scotland
Bridges completed in 2006
Engineering failures
Govan
2006 establishments in Scotland | Clyde Arc | [
"Technology",
"Engineering"
] | 986 | [
"Systems engineering",
"Reliability engineering",
"Technological failures",
"Engineering failures",
"Civil engineering"
] |
7,048,539 | https://en.wikipedia.org/wiki/Pinning%20force | Pinning force is a force acting on a pinned object from a pinning center. In solid state physics, this most often refers to the vortex pinning, the pinning of the magnetic vortices (magnetic flux quanta, Abrikosov vortices) by different kinds of the defects in a type II superconductor. Important quantities are the individual maximal pinning force, which defines the depinning of a single vortex, and an average pinning force, which defines the depinning of the correlated vortex structures and can be associated with the critical current density (the maximal density of non-dissipative current). The interaction of the correlated vortex lattice with system of pinning centers forms the magnetic phase diagram of the vortex matter in superconductors. This phase diagram is especially rich for high temperature superconductors (HTSC) where the thermo-activation processes are essential.
The pinning mechanism is based on the fact that the amount of grain boundary area is reduced when a particle is located on a grain boundary. It is also assumed that particles are spherical and the particle-matrix interface is incoherent. When a moving grain boundary meets a particle at an angle $\theta$, the particle exerts a pinning force on the grain boundary equal to $F = \pi r \gamma \sin(2\theta)$, with $r$ the particle radius and $\gamma$ the energy per unit of grain boundary area; the force is greatest, $F_{max} = \pi r \gamma$, at $\theta = 45°$.
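A numeric check of the relation above (the particle radius and boundary energy are made-up illustrative values):

```python
import math

def pinning_force(radius: float, gamma: float, theta: float) -> float:
    """F = pi * r * gamma * sin(2*theta); r in m, gamma in J/m^2, theta in rad."""
    return math.pi * radius * gamma * math.sin(2.0 * theta)

r, gamma = 50e-9, 0.5                                # 50 nm particle, 0.5 J/m^2
print(pinning_force(r, gamma, math.radians(45.0)))   # ~7.85e-08 N, the maximum
```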
References
See also
Flux pinning
Superconductivity
Magnetism | Pinning force | [
"Physics",
"Materials_science",
"Engineering"
] | 286 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
7,048,553 | https://en.wikipedia.org/wiki/Project%20Juno | Project Juno was a privately funded campaign which selected Helen Sharman to be the first Briton in space.
As the United Kingdom did not, at that time, have a human spaceflight programme (until the UK joined the human spaceflight elements of ESA's exploration programme in December 2012, which led to Tim Peake's ESA mission in 2015), a private consortium was formed to raise money to pay the Soviet Union for a seat on a Soyuz mission to the Mir space station. The Soviet Union had recently flown Toyohiro Akiyama, a Japanese journalist, under a similar arrangement.
Selection
A call for applicants was publicized in the UK (one ad read "Astronaut wanted. No experience necessary"), leading to 13,000 applications. Juno selected four candidates to train in the Soviet Union:
Gordon Brooks (Royal Navy physician, then 33)
Major Timothy Mace (Army Air Corps, 33)
Clive Smith (Kingston University lecturer, 27)
Helen Sharman (food technologist, 26)
Eventually Mace and Sharman were selected to continue full-time training at Star City. After learning Russian and familiarising themselves with the science programme, Smith and Brooks were employed to teach the other two how to perform the experiments and then to conduct them in a life sized mock up of Mir for live media during the mission.
Funding
The cost of the flight was to be funded by various innovative schemes, including sponsoring by private British companies and a lottery system. Corporate sponsors included British Aerospace, Memorex, and Interflora, and television rights were sold to ITV.
The flight cost £7 million.
Ultimately the Juno consortium failed to raise the entire sum, and the Soviet Union considered cancelling the mission. However, Mikhail Gorbachev directed the mission to proceed at Soviet cost. The ambitious microgravity experiments originally planned were dropped when time ran out for sending the required equipment on an automated 'Progress' flight. Sharman did perform experiments designed by British schools that could be done with existing equipment aboard Mir, along with a British microbiology screening investigation taken over by the Russians.
Flight and after
Sharman was launched aboard Soyuz TM-12 on 18 May 1991, and returned aboard Soyuz TM-11 on 26 May 1991.
Both Sharman and Mace were candidates, but were not selected, in the 1992 and 1998 European Space Agency selection rounds for its astronaut corps. Brooks was also put forward for the European Astronaut Corps in 1982, but dropped out when he took up work on AI systems elsewhere. Mace did not fly in space, but married the daughter of cosmonaut Vitali Zholobov; he was later the helicopter pilot for President of South Africa Nelson Mandela. He died of cancer in September 2014.
See also
British National Space Centre
British space programme
British astronauts
References
External links
BBC article
Spacefacts bio of Timothy Mace
Article about Gordon Brooks and Juno
JUNO Amateur Radio Contacts with Schools
Human spaceflight programs
Science and technology in the United Kingdom
Russia–United Kingdom relations
Soviet Union–United Kingdom relations
Soyuz program
Space programme of the United Kingdom | Project Juno | [
"Engineering"
] | 603 | [
"Space programs",
"Human spaceflight programs"
] |
7,048,893 | https://en.wikipedia.org/wiki/Molybdenum%28II%29%20chloride | Molybdenum dichloride describes chemical compounds with the empirical formula MoCl2. At least two forms are known, and both have attracted much attention from academic researchers because of the unexpected structures seen for these compounds and the fact that they give rise to hundreds of derivatives. The form discussed here is Mo6Cl12. The other molybdenum(II) chloride is potassium octachlorodimolybdate.
Structure
Rather than adopting a close-packed structure typical of metal dihalides, e.g., cadmium chloride, molybdenum(II) chloride forms a structure based on clusters. Molybdenum(II), which is a rather large ion, prefers to form compounds with metal–metal bonds, i.e., metal clusters. In fact, all "lower halides" (i.e., those where the halide/metal ratio is <4) of the early transition metals (the Ti, V, Cr, and Mn triads) do. The species Mo6Cl12 is polymeric, consisting of cubic [Mo6Cl8]4+ clusters interconnected by chloride ligands that bridge from cluster to cluster. This material converts readily to salts of the dianion [Mo6Cl14]2−. In this anion, each Mo bears one terminal chloride but is otherwise part of an Mo6 octahedron embedded inside a cube defined by eight chloride centers. Thus, the coordination environment of each Mo is four triply bridging chloride ligands, four Mo neighbors, and one terminal Cl. The cluster has 24 e−, four being provided by each Mo2+.
Synthesis and reactions
Mo6Cl12 is prepared by the reaction of molybdenum(V) chloride with molybdenum metal:
12 MoCl5 + 18 Mo → 5 Mo6Cl12
This reaction proceeds via the intermediacy of MoCl3 and MoCl4, which are also reduced by the excess Mo metal. The reaction is conducted in a tube furnace at 600–650 °C.
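Since the stoichiometry is easy to mis-copy, a trivial atom-balance check is sketched below in Python; the dictionaries simply count atoms on each side of the equation above:

```python
# Atom-balance check for: 12 MoCl5 + 18 Mo -> 5 Mo6Cl12
reactants = {"Mo": 12 * 1 + 18, "Cl": 12 * 5}   # 30 Mo, 60 Cl
products  = {"Mo": 5 * 6,       "Cl": 5 * 12}   # 30 Mo, 60 Cl
assert reactants == products
print("balanced:", reactants)
```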
Once isolated, Mo6Cl12 undergoes many reactions with retention of the [Mo6]12+ core. Heating in concentrated HCl gives (H3O)2[Mo6Cl14]. The terminal chloride ligands, labeled "ausser" (German for "outer"), are readily exchanged:
(H3O)2[Mo6Cl14] + 6 HI → (H3O)2[Mo6Cl8I6] + 6 HCl
Under more forcing conditions, all 14 ligands can be exchanged, giving salts of [Mo6Br14]2− and [Mo6I14]2−.
Related clusters
A variety of clusters are structurally related to [Mo6Cl14]2−. The tungsten analogue is known. Ta and Nb form related clusters in which the halides bridge the edges of the M6 octahedron rather than the faces; the resulting formula is [Ta6Cl18]4−.
Sulfido and selenido derivatives are also well studied. [Re6Se8Cl6]4− has the same number of valence electrons as does [Mo6Cl14]2−.
The Mo-S clusters Mo6S8L6, analogues of the "Chevrel phases", have been prepared by the reaction of sulfide sources with Mo6Cl12 in the presence of donor ligands L.
References
Chlorides
Molybdenum halides
Molybdenum(II) compounds | Molybdenum(II) chloride | [
"Chemistry"
] | 720 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
7,048,926 | https://en.wikipedia.org/wiki/Negative-bias%20temperature%20instability | Negative-bias temperature instability (NBTI) is a key reliability issue in MOSFETs, a type of transistor aging. NBTI manifests as an increase in the threshold voltage and consequent decrease in drain current and transconductance of a MOSFET. The degradation is often approximated by a power-law dependence on time. It is of immediate concern in p-channel MOS devices (pMOS), since they almost always operate with negative gate-to-source voltage; however, the very same mechanism also affects nMOS transistors when biased in the accumulation regime, i.e. with a negative bias applied to the gate.
More specifically, over time positive charges become trapped at the oxide-semiconductor boundary underneath the gate of a MOSFET. These positive charges partially cancel the negative gate voltage without contributing to conduction through the channel as electron holes in the semiconductor are supposed to. When the gate voltage is removed, the trapped charges dissipate over a time scale of milliseconds to hours. The problem has become more acute as transistors have shrunk, as there is less averaging of the effect over a large gate area. Thus, different transistors experience different amounts of NBTI, defeating standard circuit design techniques for tolerating manufacturing variability which depend on the close matching of adjacent transistors.
NBTI has become significant for portable electronics because it interacts badly with two common power-saving techniques: reduced operating voltages and clock gating. With lower operating voltages, the NBTI-induced threshold voltage change is a larger fraction of the logic voltage and disrupts operations. When a clock is gated off, transistors stop switching and NBTI effects accumulate much more rapidly. When the clock is re-enabled, the transistor thresholds have changed and the circuit may not operate. Some low-power designs switch to a low-frequency clock rather than stopping completely in order to mitigate NBTI effects.
Physics
The details of the mechanisms of NBTI have been debated, but two effects are believed to contribute: trapping of positively charged holes, and generation of interface states.
preexisting traps located in the bulk of the dielectric are filled with holes coming from the channel of pMOS. Those traps can be emptied when the stress voltage is removed, so that the Vth degradation can be recovered over time.
interface traps are generated, and these interface states become positively charged when the pMOS device is biased in the "on" state, i.e. with negative gate voltage. Some interface states may become deactivated when the stress is removed, so that the Vth degradation can be recovered over time.
The existence of two coexisting mechanisms has resulted in scientific controversy over the relative importance of each component, and over the mechanism of generation and recovery of interface states.
In sub-micrometer devices nitrogen is incorporated into the silicon gate oxide to reduce the gate leakage current density and prevent boron penetration. It is known that incorporating nitrogen enhances NBTI. For new technologies (45 nm and shorter nominal channel lengths), high-κ metal gate stacks are used as an alternative to improve the gate current density for a given equivalent oxide thickness (EOT). Even with the introduction of new materials like hafnium oxide in the gate stack, NBTI remains and is often exacerbated by additional charge trapping in the high-κ layer.
With the introduction of high-κ metal gates, a new degradation mechanism has become more important, referred to as PBTI (positive bias temperature instability), which affects nMOS transistors when positively biased. In this case, no interface states are generated and 100% of the Vth degradation may be recovered.
See also
Hot carrier injection
Electromigration
References
J.H. Stathis, S. Mahapatra, and T. Grasser, “Controversial issues in negative bias temperature instability”, Microelectronics Reliability, vol. 81, pp. 244–251, Feb. 2018.
T. Grasser et al., “The paradigm shift in understanding the bias temperature instability: From reaction–diffusion to switching oxide traps”, IEEE Transactions on Electron Devices 58 (11), pp. 3652–3666, Nov. 2011.
D.K. Schroder, “Negative bias temperature instability: What do we understand?”, Microelectronics Reliability, vol. 47, no. 6, pp. 841–852, June 2007.
J.H. Stathis and S. Zafar, “The negative bias temperature instability in MOS devices: A review”, Microelectronics Reliability, vol. 46, no. 2, pp. 278–286, Feb. 2006.
M. Alam and S. Mahapatra, “A comprehensive model of PMOS NBTI degradation”, Microelectronics Reliability, vol. 45, no. 1, pp. 71–81, Jan. 2005.
Semiconductor device defects
Semiconductor device fabrication
Electronic engineering
Hardware testing | Negative-bias temperature instability | [
"Materials_science",
"Technology",
"Engineering"
] | 1,041 | [
"Computer engineering",
"Microtechnology",
"Technological failures",
"Semiconductor device defects",
"Semiconductor device fabrication",
"Electronic engineering",
"Electrical engineering"
] |
7,048,937 | https://en.wikipedia.org/wiki/GY6%20engine | The GY6 engine design is a four-stroke, single-cylinder engine mounted in a near-horizontal orientation, used on a number of small motorcycles and scooters made in Taiwan, China, and other Southeast Asian countries. It has since become a generic technology. Kymco went on to produce Honda clones such as the Pulsar (CB125), made to Honda standards, as part of its range.
Honda's KCW125 (sold in Japan under the commercial name "Spacy") was modified by Taiwan's Kwang Yang Motor Co., Ltd. (KYMCO), under Honda's consultancy, and became a standard model called the GY6, which various Taiwanese makers imitated and modified. Vehicles of this model were apparently imported from Taiwan by various manufacturers and traders, and spread mainly in the southern coastal regions of China.
Configuration
The GY6 single is forced-air-cooled, with a chain-driven overhead camshaft and a crossflow hemi cylinder head. Fuel metering is by a single constant-velocity style sidedraft carburetor, typically a Keihin CVK clone or similar.
Ignition is by capacitor discharge ignition (CDI), with a magnetic trigger on the flywheel. Because the trigger is on the flywheel instead of the camshaft, the ignition will fire on both the compression and exhaust strokes, making it a wasted spark ignition. An integrated magneto provides 50 V AC power for the CDI system and 20-30 V AC rectified and regulated to 12 V DC for chassis accessories such as lighting and to charge a battery.
It includes an integrated swingarm, which houses a centrifugally controlled continuously variable transmission (CVT) using a rubber belt (sometimes called a VDP). At the rear of the swingarm, a centrifugal clutch connects the transmission to a simple integral gear-reduction unit. There is no clutch of any kind between the CVT and the crankshaft; drive is engaged via the centrifugal clutch at the rear pulley, in the same fashion as the Vespa Grande, Bravo, and variated Ciao models, as well as the Honda Camino/Hobbit scooters/mopeds. An electric starter, a backup kick-starter, and the rear brake hardware are also housed in the swingarm.
References
Further reading
Chinese, Taiwanese & Korean Scooters 50cc Thru 200cc, '04-'09: 50, by Max Haynes and Phil Mather. Haynes Manuals. 2009.
Interfirm relations under late industrialization in China: the supplier system in the motorcycle industry; Volume 40 of I.D.E. occasional papers series. Moriki Ōhara. Institute of Developing Economies, Japan External Trade Organization. 2006. . p. 44, 53. Full text at HighBeam Research
The Little Book of Trikes By Adam Quellin. Veloce Publishing Ltd, 2011. . p. 64.
Scooters Service and Repair Manual. by Phil Mather and Alan Harold Ahlstrand. Haynes Manuals. 2006.
Honda
Motorcycle engines | GY6 engine | [
"Technology"
] | 638 | [
"Motorcycle engines",
"Engines"
] |
7,049,110 | https://en.wikipedia.org/wiki/Lactarius%20controversus | Lactarius controversus, commonly known as the poplar milkcap, is a large funnel-capped fungus within the genus Lactarius, which are collectively known as 'milk caps'. They all exude milky drops (lactate) from the flesh and gills when damaged.
Taxonomy
The species is attributed to Christiaan Hendrik Persoon, one of the fathers of mycology.
Description
It is distinguishable mainly by its pinkish-buff gills and rosy markings on the upper cap surface, often arranged in concentric rings. Like other fungi in the genus, it has crumbly, rather than fibrous, flesh, and when this is broken the fungus exudes a white milky liquid. Mature specimens are funnel-shaped, with decurrent gills and a concave cap from 15 to 30 cm (occasionally up to 45 cm) in diameter. It has firm, tough flesh, and a stipe which is shorter than the fruitbody is wide. The spore print is creamy-pink in colour.
Lactarius controversus is similar to several white milk-caps in the genus Lactifluus which however are only distantly related: The 'fleecy milk-cap' Lactifluus vellereus, its sister species Lf. bertillonii, and the 'peppery milk-cap' Lf. piperatus all lack the pinkish gills and 'rosy' cap markings.
Distribution and habitat
It is found in Britain and mainland Europe, where it usually grows with species of Salix (goat willow or creeping willow) on heaths and moors; it is uncommon there. It is widespread in North America, growing with aspen, poplar, and willow; it occurs in the aspen forests of the Sierra Nevada and has been noted in New Mexico.
Edibility
This mushroom is considered inedible in western Europe due to its very acrid taste, but is eaten, and even commercially collected, in south-eastern European countries such as Serbia and Turkey.
See also
List of Lactarius species
References
controversus
Inedible fungi
Fungi described in 1800
Fungi of Europe
Taxa named by Christiaan Hendrik Persoon
Fungus species | Lactarius controversus | [
"Biology"
] | 435 | [
"Fungi",
"Fungus species"
] |
7,049,255 | https://en.wikipedia.org/wiki/Lactifluus%20vellereus | Lactifluus vellereus (formerly Lactarius vellereus), commonly known as the fleecy milk-cap, is a quite large fungus in the genus Lactifluus. It is one of the two most common milk-caps found with beech trees, with the other being Lactarius subdulcis.
Taxonomy and systematics
Lactifluus vellereus is one of a handful of north temperate milk caps that belong to the genus Lactifluus which has been separated from Lactarius on phylogenetic grounds. Its closest species is L. bertillonii, with which it forms a rather isolated clade in the genus.
Description
Like other mushrooms in the family Russulaceae, the L. vellereus fruit body has crumbly, rather than fibrous, flesh, and when this is broken the fungus exudes a milky latex. The mature caps are white to cream, funnel-shaped, and up to in diameter. It has firm flesh, and a stipe which is shorter than the fruit body is wide. The gills are fairly distant (quite far apart), decurrent, and narrow, and have brown specks from the drying milk. The spore print is white in colour.
Lactifluus bertillonii is closely related and very similar, but has hotter-tasting milk. Another similar, but phylogenetically distant, species is Lactarius controversus, distinguishable mainly by its pinkish gills and rosy markings on the upper cap.
Distribution and habitat
The mushroom is found in deciduous woods, from late summer to early winter. It is found in Britain and Europe.
Edibility
The milk tastes mild on its own, but hot when tasted with the flesh. It is considered inedible because of its peppery taste.
See also
List of Lactifluus species
References
vellereus
Inedible fungi
Fungi described in 1821
Fungi of Europe
Taxa named by Elias Magnus Fries
Fungus species | Lactifluus vellereus | [
"Biology"
] | 401 | [
"Fungi",
"Fungus species"
] |
7,049,417 | https://en.wikipedia.org/wiki/Dose%20fractionation | Dose fractionation effects are utilised in the treatment of cancer with radiation therapy. When the total dose of radiation is divided into several, smaller doses over a period of several days, there are fewer toxic effects on healthy cells. This maximizes the effect of radiation on cancer and minimizes the negative side effects. A typical fractionation scheme divides the dose into 30 units delivered every weekday over six weeks.
Background
Experiments in radiation biology have found that as the absorbed dose of radiation increases, the number of cells which survive decreases. They have also found that if the radiation is fractionated into smaller doses, with one or more rest periods in between, fewer cells die. This is because of self-repair mechanisms which repair damage to DNA and other biomolecules such as proteins. These mechanisms can be overexpressed in cancer cells, so caution should be used in extrapolating results from a cancer cell line to healthy cells if the cancer cell line is known to be resistant to cytotoxic drugs such as cisplatin. DNA self-repair processes in some organisms are exceptionally good; for instance, the bacterium Deinococcus radiodurans can tolerate a 15,000 Gy (1.5 MRad) dose.
In the graph to the right, called a cell survival curve, dose versus surviving fraction has been plotted for a hypothetical group of cells with and without a rest time for the cells to recover. Other than the recovery time partway through the irradiation, the cells would have been treated identically.
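The shape of such survival curves is commonly described by the linear-quadratic model, S = exp(−(αD + βD²)). The sketch below, under that standard assumption and with hypothetical α and β values, shows how splitting a dose into fractions (assuming complete repair of sublethal damage between fractions) raises the surviving fraction:

```python
import math

def surviving_fraction(dose, alpha=0.15, beta=0.05):
    """Linear-quadratic model: S = exp(-(alpha*D + beta*D^2)).
    alpha (1/Gy) and beta (1/Gy^2) are hypothetical tissue parameters."""
    return math.exp(-(alpha * dose + beta * dose ** 2))

total = 60.0                                   # total dose in Gy
single = surviving_fraction(total)             # one large exposure
# 30 fractions of 2 Gy, with full repair between fractions:
fractionated = surviving_fraction(total / 30) ** 30
print(f"single 60 Gy: S = {single:.2e}")
print(f"30 x 2 Gy:    S = {fractionated:.2e}")   # far more cells survive
```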
The human body contains many types of cells, and the human can be killed by the loss of a single type of cell in a vital organ. For many short-term radiation deaths due to what is commonly known as radiation sickness (3 to 30 days after exposure), it is the loss of bone marrow cells (which produce blood cells), and the loss of other cells in the wall of the intestines, that is fatal.
Radiation fractionation as cancer treatment
As described above, dividing the total dose into several smaller fractions delivered over a period of days reduces toxicity to healthy cells while preserving the effect of radiation on the cancer; a typical scheme delivers 30 fractions, one every weekday, over six weeks.
Hypofractionation is a treatment regimen that delivers higher doses of radiation in fewer visits. The logic behind this treatment is that applying greater amounts of radiation works to lower the effects of accelerated tumor growth that typically occurs during the later stages of radiotherapy.
Hyperfractionation is dividing the same total dose into more deliveries. Treatments are given more than once a day. Hyperfractionated radiation therapy is given over the same period of time (days or weeks) as standard radiation therapy.
Accelerated fractionation (two deliveries per day and/or deliveries on weekends as well) has also been investigated.
References
Cell biology
Radiation
DNA | Dose fractionation | [
"Physics",
"Chemistry",
"Biology"
] | 612 | [
"Transport phenomena",
"Physical phenomena",
"Cell biology",
"Waves",
"Radiation"
] |
7,050,373 | https://en.wikipedia.org/wiki/TIFRAC | TIFRAC (Tata Institute of Fundamental Research Automatic Calculator) was the first computer developed in India, at the Tata Institute of Fundamental Research in Mumbai. Initially a TIFR Pilot Machine was developed in the 1950s (operational in 1956). Based on the IAS machine design, the development of the final machine was started in 1955 and was formally commissioned (and named TIFRAC, by Jawaharlal Nehru) in 1960. The full machine was in use until 1965.
TIFRAC included 2,700 vacuum tubes, 1,700 germanium diodes and 12,500 resistors. It had 2,048 40-bit words of ferrite core memory. This machine was an early adopter of ferrite core memory.
The main assembly of TIFRAC, which held the vacuum tubes, was housed in a massive steel rack measuring 18 ft × 2.5 ft × 8 ft, fabricated from modules of 4 ft × 2.5 ft × 8 ft. Each module had steel doors on either side for accessing the circuits.
A cathode ray tube display system was developed to serve as an auxiliary output to the computer for analogue and digital display of both graphs and alpha-numeric symbols.
A manual console served as the input/output control unit of the computer. TIFRAC's software was written as sequences of 0s and 1s (machine code).
A British-built HEC 2M computer, imported and installed at the Indian Statistical Institute, Kolkata, in 1955, was the first digital computer in India. Prior to that, the institute had developed a small analog computer in 1953, which was technically the first computer built in India.
See also
List of vacuum tube computers
References
Specific
Vacuum tube computers
Information technology in India
History of Mumbai | TIFRAC | [
"Technology"
] | 367 | [
"Computing stubs"
] |
7,050,437 | https://en.wikipedia.org/wiki/Desymmetrization | Desymmetrization is a chemical reaction that converts prochiral substrates into chiral products. Desymmetrizations are so pervasive that they are rarely described as such, except when they proceed enantioselectively. Enantioselective desymmetrizations require chiral catalysts or chiral reagents. According to IUPAC, desymmetrization involves "...the conversion of a prochiral molecular entity into a chiral one."
Examples
Typical substrates are epoxides, diols, dienes, and cyclic carboxylic acid anhydrides.
One example is the conversion of cis-3,5-diacetoxycyclopentene to the corresponding monoacetate. This particular conversion utilizes the enzyme cholinesterase.
In another example, a symmetrical cyclic imide is subjected to asymmetric deprotonation resulting in a chiral product with high enantioselectivity.
Partial hydrogenation converts benzil (PhC(O)C(O)Ph, Ph = C6H5) into chiral hydrobenzoin. The process can be implemented enantioselectively using transfer hydrogenation.
The precursor benzil has C2v symmetry, while the product is C2-symmetric.
Citric acid is also a symmetric molecule that can be desymmetrized by partial methylation.
The alcoholysis of cyclic anhydrides can be conducted enantioselectively using chiral amine catalysts.
A related example is the hydrolysis of prochiral diesters catalyzed by chiral phosphoric acids.
Formal symmetry considerations
Desymmetrizations involve the loss of an improper axis of rotation (a mirror plane, center of inversion, or rotation-reflection axis). In other words, desymmetrizations convert prochiral precursors into chiral products.
References
Stereochemistry | Desymmetrization | [
"Physics",
"Chemistry"
] | 386 | [
"Stereochemistry",
"Space",
"Stereochemistry stubs",
"nan",
"Spacetime"
] |
7,050,601 | https://en.wikipedia.org/wiki/QTI | The IMS Question and Test Interoperability specification (QTI) defines a standard format for the representation of assessment content and results, supporting the exchange of this material between authoring and delivery systems, repositories and other learning management systems. It allows assessment materials to be authored and delivered on multiple systems interchangeably. It is, therefore, designed to facilitate interoperability between systems.
The specification consists of a data model that defines the structure of questions, assessments and results from questions and assessments together with an XML data binding that essentially defines a language for interchanging questions and other assessment material. The XML binding is used for exchanging questions between different authoring tools and by publishers. The assessment and results parts of the specification are less widely used.
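As a rough illustration of what the XML binding describes, the sketch below uses Python's standard library to assemble a minimal multiple-choice item in the general shape of a QTI 2.x assessmentItem. The element and attribute names follow publicly documented QTI 2.x examples, but the snippet is illustrative only and has not been validated against the official schema.

```python
import xml.etree.ElementTree as ET

NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"
ET.register_namespace("", NS)   # serialize with a default namespace

item = ET.Element(f"{{{NS}}}assessmentItem",
                  identifier="choice-demo", title="Capital of France",
                  adaptive="false", timeDependent="false")

# Declare the expected response and its correct value.
resp = ET.SubElement(item, f"{{{NS}}}responseDeclaration",
                     identifier="RESPONSE", cardinality="single",
                     baseType="identifier")
correct = ET.SubElement(resp, f"{{{NS}}}correctResponse")
ET.SubElement(correct, f"{{{NS}}}value").text = "paris"

# The visible body: a single-answer multiple-choice interaction.
body = ET.SubElement(item, f"{{{NS}}}itemBody")
inter = ET.SubElement(body, f"{{{NS}}}choiceInteraction",
                      responseIdentifier="RESPONSE", shuffle="true",
                      maxChoices="1")
for ident, label in [("paris", "Paris"), ("lyon", "Lyon")]:
    ET.SubElement(inter, f"{{{NS}}}simpleChoice", identifier=ident).text = label

print(ET.tostring(item, encoding="unicode"))
```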
As can be seen below, the standard is mainly implemented by commercial products; few open-source assessment systems support it, and the most popular open-source learning management systems offer limited support (Moodle does not support it and Canvas only supports the old 1.2 version). The fact that one has to register to download the specifications calls the openness of the standard into question.
Background
QTI was produced by the IMS Global Learning Consortium (IMS GLC), an industry and academic consortium that develops specifications for interoperable learning technology. QTI was inspired by the need for interoperability in question design, and to avoid people losing, or having to re-type, questions when technology changes. Developing and validating good questions can be time-consuming, and it is desirable to be able to create them in a platform- and technology-neutral format. IMS has fewer than 800 members and is not the voice of the entire industry. According to its website, the QTI name may not be used in anything other than an RFP.
QTI version 1.0 was materially based on a proprietary Questions Markup Language (QML) language defined by QuestionMark, but the language has evolved over the years and can now describe almost any reasonable question that one might want to describe. (QML is still in use by Questionmark).
Version 2.0 was finalized in 2005 and addressed the item (that is, the individual question) level of the specification only. A draft version of Version 2.1, which covered the structure of tests and results, was also released in 2005. But because Version 2.0 did not address test-level issues and was not compatible with Version 1, and because 2.1 was still under development, adoption of Version 2 was slow. This was compounded in 2009 when IMS GLC withdrew the Version 2.1 draft and advised the user community that the only version "fully endorsed" by IMS GLC was 1.2.1, in effect also deprecating Version 2.0. Despite this, after several more drafts, 2.1 was finalized and released in 2012.
The current version is 2.2, which was finalized in 2015, and has subsequently had two minor revisions, 2.2.1 and 2.2.2, the latest of which was in November 2017. Version 2.2 updated and improved integration with W3C standards such as HTML5, SSML, PLS, CSS, ARIA, and MathML, and otherwise made relatively small changes to the Version 2.1 core specification.
Version 2.x is a significant improvement on Version 1, defining a new underlying interaction model. It is also notable for its significantly greater degree of integration with other specifications (some of which did not exist during the production of v1): the specification addresses the relationship with IMS Content Packaging v1.2, IEEE Learning Object Metadata, IMS Learning Design, IMS Simple Sequencing and other standards such as XHTML. It also provides guidance on representing context-specific usage data and information to support the migration of content from earlier versions of the specification.
Version 3 is now available.
IMS is now called 1EdTech.
Certification
IMS offers certification of compliance with QTI standards, as noted in the table below. However, it is only offered to members of the consortium, and membership costs US$1,000 to US$7,500 per year. There is also a cost to certify software, in addition to the membership fee. This effectively leaves open-source projects without the ability to be certified.
Timeline
Applications with IMS QTI support
See also
GIFT (file format)
References
External links
IMS Global Learning Consortium: IMS Question & Test Interoperability Specification
List of software that implement QTI
Complete Guide to QTI
XML
XML-based standards | QTI | [
"Technology"
] | 928 | [
"Computer standards",
"XML-based standards"
] |
7,051,723 | https://en.wikipedia.org/wiki/Hazard%20and%20operability%20study | A hazard and operability study (HAZOP) is a structured and systematic examination of a complex system, usually a process facility, in order to identify hazards to personnel, equipment or the environment, as well as operability problems that could affect operations efficiency. It is the foremost hazard identification tool in the domain of process safety. The intention of performing a HAZOP is to review the design to pick up design and engineering issues that may otherwise not have been found. The technique is based on breaking the overall complex design of the process into a number of simpler sections called nodes which are then individually reviewed. It is carried out by a suitably experienced multi-disciplinary team during a series of meetings. The HAZOP technique is qualitative and aims to stimulate the imagination of participants to identify potential hazards and operability problems. Structure and direction are given to the review process by applying standardized guideword prompts to the review of each node. A relevant IEC standard calls for team members to display 'intuition and good judgement' and for the meetings to be held in "an atmosphere of critical thinking in a frank and open atmosphere [sic]."
The HAZOP technique was initially developed for systems involving the treatment of a fluid medium or other material flow in the process industries, where it is now a major element of process safety management. It was later expanded to the analysis of batch reactions and process plant operational procedures. Recently, it has been used in domains other than or only loosely related to the process industries, namely: software applications including programmable electronic systems; software and code development; systems involving the movement of people by transport modes such as road, rail, and air; assessing administrative procedures in different industries; assessing medical devices; etc. This article focuses on the technique as it is used in the process industries.
History
The technique is generally considered to have originated in the Heavy Organic Chemicals Division of Imperial Chemical Industries (ICI), which was then a major British and international chemical company.
Its origins have been described by Trevor Kletz, who was the company's safety advisor from 1968 to 1982. In 1963 a team of three people met for three days a week for four months to study the design of a new phenol plant. They started with a technique called critical examination which asked for alternatives but changed this to look for deviations. The method was further refined within the company, under the name operability studies, and became the third stage of its hazard analysis procedure (the first two being done at the conceptual and specification stages) when the first detailed design was produced.
In 1974 a one-week safety course including this procedure was offered by the Institution of Chemical Engineers (IChemE) at Teesside Polytechnic. Coming shortly after the Flixborough disaster, the course was fully booked, as were ones in the next few years. In the same year the first paper in the open literature was also published. In 1977 the Chemical Industries Association published a guide. Up to this time the term 'HAZOP' had not been used in formal publications. The first to do this was Kletz in 1983, with what were essentially the course notes (revised and updated) from the IChemE courses. By this time, hazard and operability studies had become an expected part of chemical engineering degree courses in the UK.
Nowadays, regulators and the process industry at large (including operators and contractors) consider HAZOP a strictly necessary step of project development, at the very least during the detailed design phase.
Method
The method is applied to complex processes, for which sufficient design information is available and not likely to change significantly. This range of data should be explicitly identified and taken as the "design intent" basis for the HAZOP study. For example, a prudent designer will have allowed for foreseeable variations within the process, creating a larger design envelope than just the basic requirements, and the HAZOP will be looking at ways in which this might not be sufficient.
A common use of the HAZOP is relatively early in the detailed design of a plant or process. However, it can also be applied at other stages, including the later operational life of existing plants, in which case it is usefully applied as a revalidation tool to ensure that inadequately managed changes have not crept in since first plant start-up. Where design information is not fully available, such as during front-end loading, a coarse HAZOP can be conducted; however, where a design is required to have a HAZOP performed to meet legislative or regulatory requirements, such an early exercise cannot be considered sufficient and a later, detailed design HAZOP also becomes necessary.
For process plants, identifiable sections (nodes) are chosen so that for each a meaningful design intent can be specified. They are commonly indicated on piping and instrumentation diagrams (P&IDs) and process flow diagrams (PFDs). P&IDs in particular are the foremost reference document for conducting a HAZOP. The extent of each node should be appropriate to the complexity of the system and the magnitude of the hazards it might pose. However, it will also need to balance between "too large and complex" (fewer nodes, but the team members may not be able to consider issues within the whole node at once) and "too small and simple" (many trivial and repetitive nodes, each of which has to be reviewed independently and documented).
For each node, in turn, the HAZOP team uses a list of standardized guidewords and process parameters to identify potential deviations from the design intent. For each deviation, the team identifies feasible causes and likely consequences then decides (with confirmation by risk analysis where necessary, e.g., by way of an agreed upon risk matrix) whether the existing safeguards are sufficient, or whether an action or recommendation to install additional safeguards or put in place administrative controls is necessary to reduce the risks to an acceptable level.
The degree of preparation for the HAZOP is critical to the overall success of the review. "Frozen" design information provided to the team members with time for them to familiarize themselves with the process, an adequate schedule allowed for the performance of the HAZOP, provision of the best team members for their role. Those scheduling a HAZOP should take into account the review scope, the number of nodes to be reviewed, the provision of completed design drawings and documentation and the need to maintain team performance over an extended time-frame. The team members may also need to perform some of their normal tasks during this period and the HAZOP team members can tend to lose focus unless adequate time is allowed for them to refresh their mental capabilities.
The team meetings should be managed by an independent, trained HAZOP facilitator (also referred to as HAZOP leader or chairperson), who is responsible for the overall quality of the review, partnered with a dedicated scribe to minute the meetings. As the IEC standard puts it:The success of the study strongly depends on the alertness and concentration of the team members and it is therefore important that the sessions are not too long and that there are appropriate intervals between sessions. How these requirements are achieved is ultimately the responsibility of the study leader. For a medium-sized chemical plant, where the total number of items to be considered is around 1200 pieces of equipment and piping, about 40 such meetings would be needed. Various software programs are now available to assist in the management and scribing of the workshop.
Guidewords and parameters
In order to identify deviations, the team applies (systematically, i.e., in a given order) a set of guidewords to each node in the process. To prompt discussion, or to ensure completeness, appropriate process parameters are considered in turn, which apply to the design intent. Typical parameters are flow (or flow rate), temperature, pressure, level, composition, etc. The IEC standard notes that guidewords should be chosen that are appropriate to the study, neither too specific (limiting ideas and discussion) nor too general (allowing loss of focus). A fairly standard set of guidewords (given as an example in the standard) comprises: no (or not), more, less, as well as, part of, reverse, and other than.
Where a guide word is meaningfully applicable to a parameter (e.g., "no flow", "more temperature"), their combination should be recorded as a credible potential deviation from the design intent that requires review.
Commonly used guideword–parameter pairs (deviations) include, for example, "no flow", "reverse flow", "more pressure", and "more temperature".
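A first-pass checklist of candidate deviations can be generated mechanically by crossing guidewords with parameters and filtering out combinations that prompt nothing useful. Here is a minimal sketch; the meaningless-pair filter is an assumed, non-exhaustive example, since in a real study the team judges meaningfulness case by case:

```python
from itertools import product

guidewords = ["no", "more", "less", "as well as", "part of",
              "reverse", "other than"]
parameters = ["flow", "temperature", "pressure", "level", "composition"]

# Hypothetical filter: not every pairing is physically meaningful,
# e.g. "reverse temperature" prompts nothing useful.
meaningless = {("no", "temperature"), ("reverse", "temperature"),
               ("reverse", "pressure"), ("reverse", "level")}

deviations = [f"{g} {p}" for g, p in product(guidewords, parameters)
              if (g, p) not in meaningless]
for d in deviations:
    print(d)        # e.g. "no flow", "more pressure", ...
```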
Once the causes and effects of any potential hazards have been established, the system being studied can then be modified to improve its safety. The modified design should then be subject to a formal HAZOP close-out, to ensure that no new problems have been added.
HAZOP team
A HAZOP study is a team effort. The team should be as small as practicable while having the relevant skills and experience. Where a system has been designed by a contractor, the HAZOP team should contain personnel from both the contractor and the client company. A minimum team size of five is recommended. In a large process there will be many HAZOP meetings, and the individuals within the team may change, as different specialists and deputies will be required for the various roles; as many as 20 individuals may be involved. Each team member should have a definite role.
In earlier publications it was suggested that the study leader could also be the recorder but separate roles are now generally recommended.
The use of computers and projector screens enhances the recording of meeting minutes (the team can see what is minuted and ensure that it is accurate), the display of P&IDs for the team to review, the provision of supplemental documented information to the team and the logging of non-HAZOP issues that may arise during the review, e.g., drawing/document corrections and clarifications. Specialist software is now available from several suppliers to support the recording of meeting minutes and tracking the completion of recommended actions.
See also
Hazard analysis
Hazard analysis and critical control points
HAZID
Process safety management
Risk assessment
Safety engineering
Workplace safety standards
Notes
References
Further reading
Process safety | Hazard and operability study | [
"Chemistry",
"Engineering"
] | 2,067 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
7,052,521 | https://en.wikipedia.org/wiki/Tonelli%E2%80%93Hobson%20test | In mathematics, the Tonelli–Hobson test gives sufficient criteria for a function ƒ on R2 to be an integrable function. It is often used to establish that Fubini's theorem may be applied to ƒ. It is named for Leonida Tonelli and E. W. Hobson.
More precisely, the Tonelli–Hobson test states that if ƒ is a real-valued measurable function on R2, and either of the two iterated integrals

$$\int_{\mathbb{R}}\left(\int_{\mathbb{R}}|f(x,y)|\,dx\right)dy \qquad\text{or}\qquad \int_{\mathbb{R}}\left(\int_{\mathbb{R}}|f(x,y)|\,dy\right)dx$$

is finite, then ƒ is Lebesgue-integrable on R2.
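The absolute value is essential: mere existence of both iterated integrals of f itself is not sufficient. A classical counterexample (a standard fact of analysis) is

$$f(x,y)=\frac{x^2-y^2}{(x^2+y^2)^2}\quad\text{on }(0,1)^2,\qquad \int_0^1\!\left(\int_0^1 f\,dy\right)dx=\frac{\pi}{4}\neq-\frac{\pi}{4}=\int_0^1\!\left(\int_0^1 f\,dx\right)dy,$$

so both iterated integrals exist but disagree, f is not Lebesgue-integrable, and Fubini's theorem does not apply.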
References
Integral calculus
Theorems in analysis | Tonelli–Hobson test | [
"Mathematics"
] | 125 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical analysis stubs",
"Calculus",
"Mathematical problems",
"Mathematical theorems",
"Integral calculus"
] |
7,052,669 | https://en.wikipedia.org/wiki/Distillation%20Design | Distillation Design is a book which provides complete coverage of the design of industrial distillation columns for the petroleum refining, chemical and petrochemical plants, natural gas processing, pharmaceutical, food and alcohol distilling industries. It has been a classical chemical engineering textbook since it was first published in February 1992.
The subjects covered in the book include:
Vapor–liquid equilibrium (VLE): vapor–liquid K values, relative volatilities, ideal and non-ideal systems, phase diagrams, calculating bubble points and dew points (see the sketch after this list)
Key fractional distillation concepts: theoretical stages, x-y diagrams, multicomponent distillation, column composition and temperature profiles
Process design and optimization: minimum reflux and minimum stages, optimum reflux, short-cut methods, feed entry location
Rigorous calculation methods: Bubble point method, sum rates method, numerical methods (Newton–Raphson technique), inside out method, relaxation method, other methods
Batch distillation: Simple distillation, constant reflux, varying reflux, time and boilup requirements
Tray design and tray efficiency: tray types, tray capacities, tray hydraulic parameters, tray sizing and determination of column diameter, point and tray efficiencies, tray efficiency prediction and scaleup
Packing design and packing efficiency: packing types, packing hydraulics and capacities, determination of packing efficiency by transfer unit method and by HETP method, packed column sizing
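As an illustration of the bubble-point calculation mentioned above, here is a minimal sketch for an ideal binary mixture using Raoult's law and the Antoine equation. The Antoine constants are typical textbook values for benzene and toluene and should be checked against a data bank before any real use.

```python
# Antoine equation: log10(Psat[mmHg]) = A - B / (C + T[degC])
ANTOINE = {"benzene": (6.90565, 1211.033, 220.790),
           "toluene": (6.95464, 1344.800, 219.482)}

def psat(component, T):
    A, B, C = ANTOINE[component]
    return 10 ** (A - B / (C + T))          # vapor pressure in mmHg

def bubble_point(z, P=760.0, lo=0.0, hi=200.0):
    """Bubble-point temperature (degC) of an ideal mixture
    z = {name: mole fraction} at pressure P (mmHg), found by bisection
    on Raoult's law: sum_i x_i * Psat_i(T) = P."""
    f = lambda T: sum(x * psat(c, T) for c, x in z.items()) - P
    for _ in range(60):                     # bisection to tight tolerance
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(bubble_point({"benzene": 0.5, "toluene": 0.5}))   # ~92 degC at 1 atm
```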
See also
External links
McGraw Hill website page
Distillation
Engineering textbooks
Science books
Technology books | Distillation Design | [
"Chemistry"
] | 308 | [
"Distillation",
"Separation processes"
] |
7,052,986 | https://en.wikipedia.org/wiki/Ei%20Ni%C5%A1 | Ei Niš (full legal name: Holding-Korporacija Elektronska industrija a.d. Niš) or Electronics Industry Niš, is a holding company with headquarters in Niš, Serbia. The company operated from 1948 until it declared bankruptcy in 2016.
History
It originated in 1948 with the foundation of the Institute for the Production of Radio Sets and X-ray Machines, "RR Niš". In the 1970s and 1980s it was one of the largest Yugoslav companies, employing over 10,000 people. However, during the 1990s most of the company's business collapsed, due to the war in Yugoslavia, a lack of investment in research, and the sanctions the country was facing.
During the 2000s, the company manufactured acoustic equipment, electronic tubes including cathode-ray tubes, printed circuit boards, electronic machine elements, hydraulics, pneumatics, appliances, air conditioners, medical equipment, roentgen machines, TV sets, radio receivers, and semiconductors. It was one of the few remaining makers of electronic vacuum tubes.
In 2016, after decades of insolvency, it declared bankruptcy before the regional Business Court.
Elektronska Industrija Niš worked in partnership with other European companies, such as Alcatel (telephony), Honeywell, Bull, Silicon Graphics (computers), Sagem, Siemens, Hellige, and even ITT and Philips.
Operations
The most successful period of business was from 1965 to 1980. At that time, Ei had plants in Niš, Belgrade, Svrljig, Žitorađa, Aleksinac, Đevđelija (Macedonia), and Croatia — about 50 factories with 28,000 workers and an annual gross product of about 700 million dollars. At that time, the electronics industry produced a large range of consumer products for domestic needs, the military, and export. Appliances and devices were produced in large series: radios, loudspeakers, amplifiers, sound systems, computers, electricity meters, washing machines, dishwashers, irons, televisions, telephones, telephone exchanges, HF devices, radio stations, railway signalling, traffic lights and accompanying equipment, special-purpose devices for the needs of the army and police, printed electronic circuits, industrial electronics, automotive electronics, X-ray devices, medical devices, colour cathode-ray tubes, electronic tubes, air conditioners, cardboard packaging, plastic goods, ferromagnetic materials, resistors, capacitors, mechanical parts of devices, and semiconductor elements.
Ei had spread plants into all the Yugoslav republics and managed to develop a full production process — most notably, semiconductor production.
The subsidiaries that have operated within the holding company Ei under various names and in various organizational forms are:
EI Acoustics Svrjig
EI Machine Shop Nis
EI Household Appliances Nis
EI Auto Service Nis
EI Beokom Belgrade
EI VEP Nis (or EI Vakum Electronic Products Nis)
EI VF Zemun (later privatized to "VF Holding" AD)
EI Ekos Eds Nis
EI Ekos Nis
EI Exim Belgrade
EI Expocom Belgrade
EI Elbas Zemun
EI Electromedicine Nis
EI Electrical Products Nis
EI Elkom Zemun (or EI Avala Zemun)
EI Elmag Nis
EI Energy and Maintenance Zemun
EI Eurostand Nis
EI Protection Nis
EI Industrial Electronics Nis
EI Informatics Nis ("Technicom Informatics" Nis)
EI Irin Nis
EI Research and Development Institute Belgrade (now IRITEL)
EI Jugorendgen (or EI X-ray Machine Factory)
EI Ceramics Djevđelija
EI Color Cathode Tubes Nis
EI Commerce Nis
EI Components Nis
EI Measuring Devices Nis
EI Metal Nis
EI Nikola Tesla Zemun
EI Nit Nis
EI Opek Nis
EI Pack Aleksinac
EI Pionir Zemun
EI Plastics Nis
EI Semiconductors Nis
EI Trade Nis
EI Professional Electronics Nis
EI Pupin Zemun
EI Radio Tubes Nis
EI Computers Nis
EI Components Nis
EI October 7 Nis
EI Service Network Belgrade
EI Sigraph Nis
EI Owl Nis (after privatization by HDS Nis)
EI Standard (or EI UIT Nis)
EI Television Nis
EI Test Nis
EI Transport and Trade Nis
EI Service Nis
EI Factory of Electric Hand Tools Zemun
EI Machinery and Equipment Factory Nis EI Machine Parts Factory Nis
EI Radio Receiver Factory Nis (or EI Radio Acoustics)
EI Signal Devices Factory Belgrade
EI Specific Elements Factory Nis
EI Femid Bela Palanka
EI Ferit Zemun
EI Hanivel Nis
EI Chegar Nis
EI SKO Nis or EI Internal Bank Nis
EI Printed Circuits Nis
Sponsorships
Ei agreed a sponsorship deal with the local club FK Radnički Niš during the 1980s. The club peaked in the early 1980s, reaching the UEFA Cup semi-finals after several years near the very top of the Yugoslav league.
Products
Ei developed a full spectrum of electronics production, but it was mostly known in the market for loudspeakers, amplifiers, sound systems, computers, electricity meters, washing machines, dishwashers, irons, televisions, telephones, and telephone exchanges.
EI products included the computers Pecom 32, Pecom 64, as well as the Lira series, starting with the Lira 512.
See also
History of computer hardware in Yugoslavia
List of computer systems from Yugoslavia
List of companies of the Socialist Federal Republic of Yugoslavia
References
External links
Elektronska industrija Niš: Od giganta do stečaja at koreni.rs
1948 establishments in Serbia
Companies based in Niš
Computer companies of Serbia
Defunct companies of Serbia
Electronics companies established in 1948
Electronics companies of Serbia
Medical technology companies of Serbia
Serbian brands
Vacuum tubes | Ei Niš | [
"Physics"
] | 1,273 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
7,053,991 | https://en.wikipedia.org/wiki/Energy%20Technologies%20Institute | The Energy Technologies Institute (ETI) was a public-private partnership between global energy and engineering companies and the UK Government that was established in the United Kingdom in 2007. The government set up the ETI following an announcement in the 2006 budget speech. The purpose of the ETI was to “accelerate the development, demonstration and eventual commercial deployment of a focused portfolio of energy technologies, which will increase energy efficiency, reduce greenhouse gas emissions and help achieve energy and climate change goals”. The institute worked with a range of academic and commercial bodies.
Deployment of the technologies involved, which were expected to contribute to reducing the UK's carbon emissions, was anticipated to begin around 2018.
Commentators generally welcomed the new body as likely to make a positive contribution in the efforts to minimise climate change. At the same time, they pointed to the slow pace of government action in promoting energy conservation and implementing existing low-carbon technologies, compared to progress in a number of other European countries.
Funding
In addition to initial funding for the ETI, the Department for Business will provide £50 million a year over a period of 10 years starting in 2008–09. When establishing the ETI, the government expected the separate Energy Research Partnership to raise matching funding from commercial organisations.
As of September 2006 EDF Energy, Shell, BP and E.ON UK had committed to providing funds. By 2014, this had grown to include Caterpillar and Rolls-Royce.
Objectives
Five objectives were set for the institute:
To increase the level of research and development funding to meet the UK's energy policy goals.
To deliver research and development that facilitates the rapid commercial deployment of cost-effective, low-carbon energy technologies.
To provide better strategic focus for commercially applicable energy related research and development in the UK.
To connect and manage networks of the best scientists and engineers to deliver focussed energy research and development projects to accelerate eventual commercial deployment.
To build research and development capacity in the UK in the relevant technical disciplines to deliver the UK's energy policy goals.
The ETI describes as its vision: "Affordable, secure, sustainable energy for present and future generations."
Research focus
The institute set out to focus research on a mixture of technologies.
As of 2014, the ETI states that typically it supports projects that:
Develop and demonstrate system level capabilities based on novel low carbon energy technologies or services
Create additional value through the capabilities of the ETI Industry Members and Project Partners
Create new partnerships - improving skills, knowledge, capabilities and supply chain capacity
Create benefit in the UK and globally - through deployment, skills, knowledge base or exports
Reduce risk associated with novel energy systems and supply chains
Identify barriers requiring “next generation” science and technology support
Inform development of regulations, standards, and policy
At the same time, the institute focuses on a mix of technologies to increase security of supply, and solutions to address fuel poverty.
In 2017 the ETI started the Nuclear Cost Drivers Project, which aims to identify cost reductions in nuclear power plant design, construction and operation, so enabling more widespread deployment of new nuclear.
Background
Historically, since the privatisation of the country's energy industries, public sector support for energy research and development in the UK has come from a variety of bodies with little co-ordination between them. Problems experienced as a result of this included poor continuity of funding, and the availability of funding for certain parts of the research-development-commercialisation process but not others. Funding levels have also been low by international standards.
Location
In September 2007, it was announced that the Midlands Consortium had been chosen to host the ETI. The consortium comprises the Universities of Birmingham, Loughborough and Nottingham with financial support from Advantage West Midlands and the East Midlands Development Agency.
The hub of the ETI is based at Loughborough University, on the Holywell Park area of the campus, at the heart of the university's Science and Enterprise Park.
Closure
In December 2019, after 12 years in operation, the ETI was closed. Data and findings from the ETI will continue to be available online through the programme pages until 2025.
See also
Cenex, also at Loughborough
UK Energy Research Centre
Energy Systems Catapult
References
External links
Energy Technologies website
Institute website
Energy Technologies Institute prospectus
Government press release
UK Research Councils’ Energy Programme
UK Energy Research Centre
Battersea Power Station Company Ltd
Energy in the United Kingdom
Energy research institutes
Research institutes in Leicestershire
Organizations established in 2008
Loughborough University
Organisations based in Leicestershire | Energy Technologies Institute | [
"Engineering"
] | 896 | [
"Energy research institutes",
"Energy organizations"
] |
7,054,016 | https://en.wikipedia.org/wiki/Thermomagnetic%20convection | Ferrofluids can be used to transfer heat, since heat and mass transport in such magnetic fluids can be controlled using an external magnetic field.
B. A. Finlayson first explained in 1970 (in his paper "Convective instability of ferromagnetic fluids", Journal of Fluid Mechanics, 40:753-767) how an external magnetic field imposed on a ferrofluid with varying magnetic susceptibility, e.g., due to a temperature gradient, results in a nonuniform magnetic body force, which leads to thermomagnetic convection. This form of heat transfer can be useful for cases where conventional convection fails to provide adequate heat transfer, e.g., in miniature microscale devices or under reduced gravity conditions.
The Ozoe group has studied thermomagnetic convection both experimentally and numerically, showing how to enhance, suppress, and invert the convection modes. They have also carried out scaling analyses for paramagnetic fluids under microgravity conditions.
A comprehensive review of thermomagnetic convection (in A. Mukhopadhyay, R. Ganguly, S. Sen, and I. K. Puri, "Scaling analysis to characterize thermomagnetic convection", International Journal of Heat and Mass Transfer 48:3485-3492, (2005)) also shows that this form of convection can be correlated with a dimensionless magnetic Rayleigh number. Subsequently, this group explained that fluid motion occurs due to a Kelvin body force with two terms. The first term can be treated as a magnetostatic pressure. In contrast, the second is important only if there is a spatial gradient of the fluid susceptibility, e.g., in a non-isothermal system. The colder fluid that has a larger magnetic susceptibility is attracted towards regions with larger field strength during thermomagnetic convection, which displaces warmer fluid of lower susceptibility. They showed that thermomagnetic convection can be correlated with a dimensionless magnetic Rayleigh number. Heat transfer due to this form of convection can be much more effective than buoyancy-induced convection for systems with small dimensions.
The ferrofluid magnetization depends on the local value of the applied magnetic field H and on the fluid magnetic susceptibility. In a ferrofluid flow encompassing varying temperatures, the susceptibility is a function of the temperature. This produces a force that can be expressed in the Navier–Stokes or momentum equation governing fluid flow as the "Kelvin body force (KBF)". Recently, Kumar et al. shed new light on the 20-plus-year-old question of the appropriate tensor form of the Kelvin body force in ferrofluids.
The KBF creates a static pressure field that is symmetric about a magnet, e.g., a line dipole, producing a curl-free force field, i.e., ∇ × F = 0 for constant-temperature flow. Such a symmetric field does not alter the velocity. However, if the temperature distribution about the imposed magnetic field is asymmetric, so is the KBF, in which case ∇ × F ≠ 0. Such an asymmetric body force leads to ferrofluid motion across isotherms.
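Written out under the standard assumptions of a linearly magnetizable fluid, M = χ(T)H, and a curl-free field (∇ × H = 0, so that (H·∇)H = ∇(H²/2)), the decomposition described above takes the form

$$\mathbf{F}=\mu_0(\mathbf{M}\cdot\nabla)\mathbf{H}=\mu_0\,\chi(T)\,(\mathbf{H}\cdot\nabla)\mathbf{H}=\nabla\!\left(\frac{\mu_0\,\chi H^2}{2}\right)-\frac{\mu_0 H^2}{2}\,\nabla\chi .$$

The first term is the gradient of a magnetostatic pressure and can be absorbed into the pressure term of the Navier–Stokes equation; the second term survives only where ∇χ ≠ 0, i.e., where temperature gradients make the susceptibility nonuniform, and it is this term that drives thermomagnetic convection.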
References
Convection
Magnetism
Continuum mechanics | Thermomagnetic convection | [
"Physics",
"Chemistry"
] | 696 | [
"Transport phenomena",
"Physical phenomena",
"Continuum mechanics",
"Classical mechanics",
"Convection",
"Thermodynamics"
] |
7,054,022 | https://en.wikipedia.org/wiki/Ethocybin | Ethocybin (CEY-19; 4-phosphoryloxy-DET; 4-PO-DET) is a homologue of the mushroom alkaloid psilocybin, and a semi-synthetic psychedelic alkaloid of the tryptamine family. Effects of ethocybin are comparable to those of a shorter LSD or psilocybin trip, although intensity and duration vary depending on dosage, individual physiology, and set and setting.
Chemistry
As with psilocybin, miprocybin and metocybin, ethocybin is a prodrug that is converted into the pharmacologically active compound ethocin in the body by dephosphorylation. This chemical reaction takes place under strongly acidic conditions or enzymatically by phosphatases in the body.
Albert Hofmann was the first to produce this chemical, soon after his discovery of psilocin and psilocybin. It was sold under the code name CEY-19.
Pharmacology
As with psilocybin, ethocybin is rapidly dephosphorylated in the body to 4-HO-DET which then acts as a partial agonist at the 5-HT2A serotonin receptor in the brain where it mimics the effects of serotonin (5-HT).
Medicine
Ethocybin has been studied as a treatment for several disorders since the early 1960s, and numerous papers are devoted to this material. Its short-lived action was considered a virtue. A 2010 study suggested that ethocybin helped with bipolar affective disorder.
Effects
Ethocybin is absorbed through the lining of the mouth and stomach. Effects begin 20–45 minutes after ingestion, and last from 2–4 hours depending on dose, species, and individual metabolism. The effects are somewhat shorter compared to psilocybin.
Metabolism and tolerance
Ethocybin is probably metabolized mostly in the liver where it becomes ethocin, but is also broken down by the enzyme monoamine oxidase.
Mental and physical tolerance to ethocybin builds and dissipates quickly, as with psilocybin. Taking ethocybin more than three or four times in a week (especially two days in a row) can result in diminished effects. Tolerance dissipates after a few days, so frequent users often keep doses spaced five to seven days apart to avoid the effect.
Legality
Ethocybin is not controlled in the US, but possession or sale may be considered illegal under the Federal Analog Act.
References
Alkaloids
Psychedelic tryptamines
Designer drugs
Organophosphates
Diethylamino compounds | Ethocybin | [
"Chemistry"
] | 555 | [
"Biomolecules by chemical classification",
"Natural products",
"Prodrugs",
"Organic compounds",
"Chemicals in medicine",
"Alkaloids"
] |
7,204,039 | https://en.wikipedia.org/wiki/SuperPrime | SuperPrime is a computer program used for testing the primality of a large set of positive natural numbers. Because of its multi-threaded nature and dynamic load scheduling, it scales excellently when using more than one thread (execution core). It is commonly used as an overclocking benchmark to test the speed and stability of a system.
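The article does not document SuperPrime's internals, so the sketch below is only a generic illustration of this type of workload — testing a range of numbers for primality with the work handed out dynamically across cores. The trial-division test and the chunk size are placeholder choices, not SuperPrime's actual algorithm.

```python
from concurrent.futures import ProcessPoolExecutor

def is_prime(n: int) -> bool:
    """Trial division -- far simpler than a tuned benchmark kernel,
    but enough to show the shape of the workload."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

if __name__ == "__main__":
    numbers = range(2, 200_000)
    # Dynamic scheduling: the pool hands out chunks as workers free up,
    # so faster cores simply pull more work -- the property the article
    # credits for SuperPrime's multi-core scaling.
    with ProcessPoolExecutor() as pool:
        count = sum(pool.map(is_prime, numbers, chunksize=2048))
    print(count, "primes below 200000")   # 17984
```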
Background information
In August 1995, the calculation of π to 4,294,960,000 decimal digits was achieved using a supercomputer at the University of Tokyo. The program used to achieve this was ported to personal computers running operating systems such as Windows NT and Windows 95, and called Super PI. SuperPrime is another take on this procedure, replacing the raw floating-point computation of π with the more complex task of testing the primality of a set of natural numbers.
Landmarks
On September 29, 2006, a milestone was reached when bachus_anonym of www.xtremesystems.org broke the 30-second barrier using a highly overclocked Core 2 Duo machine.
See also
Erodov.com, the 'home forum' for the SuperPrime benchmark.
References
External links
Main Thread @ www.erodov.com
Prime numbers | SuperPrime | [
"Mathematics"
] | 256 | [
"Prime numbers",
"Mathematical objects",
"Number stubs",
"Numbers",
"Number theory"
] |