| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
70,539,880 | https://en.wikipedia.org/wiki/GNz7q |
GNz7q is a starburst galaxy with a candidate proto-supermassive black hole in the early Universe, at a redshift of 7.1899 ± 0.0005, estimated to have existed only 750 million years after the Big Bang. It was discovered in the Great Observatories Origins Deep Survey-North (GOODS-North) field imaged by the Hubble Space Telescope.
The discovery is "the first observation of a rapidly growing black hole in the early universe" and is thought to help explain the growth of supermassive black holes less than a billion years after the Big Bang.
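As a rough cross-check of the quoted age, the redshift can be converted to a cosmic age with a standard flat ΛCDM cosmology; the sketch below assumes the Planck 2018 parameters bundled with astropy, and the exact figure depends on the cosmology chosen.

```python
# A rough cross-check of the quoted age, assuming the Planck 2018 flat
# Lambda-CDM cosmology bundled with astropy (result depends on the cosmology).
from astropy.cosmology import Planck18

z = 7.1899                               # spectroscopic redshift of GNz7q
age = Planck18.age(z).to("Myr").value    # age of the Universe at that redshift
print(f"Age of the Universe at z = {z}: {age:.0f} Myr")  # ~740 Myr, i.e. close to the quoted ~750 million years
```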
See also
Direct collapse black hole, a process by which black holes may form less than a few hundred million years after the Big Bang
J0313–1806, the earliest known supermassive black hole as of 2021, formed a few hundred million years after the Big Bang
References
Sources
External links
Zoom Into GNz7q, video, European Space Agency
Supermassive black holes
Astronomical objects discovered in 2022
Starburst galaxies
Ursa Major | GNz7q | Physics,Astronomy | 214 |
24,146,102 | https://en.wikipedia.org/wiki/C26H32F2O7 |
The molecular formula C26H32F2O7 (molar mass: 494.52 g/mol, exact mass: 494.21161) may refer to:
Diflorasone diacetate
Fluocinonide
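As a quick arithmetic check of the quoted molar mass, the formula can be summed from conventional IUPAC standard atomic weights; the rounded values below are assumptions of the sketch, and the result lands near the quoted 494.52 g/mol.

```python
# Minimal sketch: molar mass of C26H32F2O7 from rounded standard atomic weights.
atomic_weights = {"C": 12.011, "H": 1.008, "F": 18.998, "O": 15.999}
formula = {"C": 26, "H": 32, "F": 2, "O": 7}

molar_mass = sum(atomic_weights[el] * n for el, n in formula.items())
print(f"{molar_mass:.2f} g/mol")  # ~494.53, matching the quoted 494.52 g/mol
```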
Molecular formulas | C26H32F2O7 | Physics,Chemistry | 72 |
33,539,823 | https://en.wikipedia.org/wiki/Project%20Rio%20Blanco | Project Rio Blanco was an underground nuclear test that took place on May 17, 1973 in Rio Blanco County, Colorado, approximately 36 miles (58 km) northwest of Rifle.
Three 33-kiloton nuclear devices were detonated nearly simultaneously at separate depths in a single emplacement well. The tests were conducted in fine-grain, low-permeability sandstone lenses at the base of the Fort Union Formation and the upper portion of the Mesaverde Formation.
This was the third and final natural-gas-reservoir stimulation test in the Plowshare program, which was designed to develop peaceful uses for nuclear explosives. The two previous tests were Project Gasbuggy in New Mexico and Project Rulison in Colorado.
The United States Atomic Energy Commission conducted the test in partnership with CER Geonuclear Corporation and Continental Oil Company.
A placard, erected in 1976, now marks the site where the test was conducted. The site is accessible via a dirt road, Rio Blanco County Route 29.
Devices
As the creation of tritium was of greatest concern, the three devices were specially designed to minimize tritium production, with most of the small amount produced coming from the medium surrounding the devices. To reduce emplacement costs, the devices were made very narrow in diameter.
References
Explosions in 1973
May 1973 events in the United States
American nuclear weapons testing
American nuclear test sites
Rio Blanco County, Colorado
1973 in Colorado
Rio Blanco | Project Rio Blanco | Chemistry | 303 |
208,174 | https://en.wikipedia.org/wiki/8 | 8 (eight) is the natural number following 7 and preceding 9.
Etymology
English eight, from Old English æhta, Proto-Germanic *ahto, is a direct continuation of Proto-Indo-European *oḱtṓ(w)-, and as such cognate with Greek oktṓ and Latin octō, both of whose stems are reflected in the English prefix oct(o)-, as in the ordinal adjective octaval or octavary; the distributive adjective is octonary.
The adjective octuple (Latin octuplus) may also be used as a noun, meaning "a set of eight items"; the diminutive octuplet is mostly used to refer to eight siblings delivered in one birth.
The Semitic numeral is based on a root *θmn-, whence Akkadian smn-, Arabic ṯmn-, Hebrew šmn- etc.
The Chinese numeral, written 八 (Mandarin: bā; Cantonese: baat), is from Old Chinese *priāt-, ultimately from Sino-Tibetan b-r-gyat or b-g-ryat, which also yielded Tibetan brgyad.
It has been argued that, as the cardinal number 7 is the highest number of items that can universally be cognitively processed as a single set, the etymology of the numeral eight might be the first to be considered composite, either as "twice four" or as "two short of ten", or similar.
The Turkic words for "eight" are from a Proto-Turkic stem *sekiz, which has been suggested as originating as a negation of eki "two", as in "without two fingers" (i.e., "two short of ten; two fingers are not being held up");
this same principle is found in Finnic *kakte-ksa, which conveys a meaning of "two before (ten)". The Proto-Indo-European reconstruction *oḱtṓ(w)- itself has been argued as representing an old dual, which would correspond to an original meaning of "twice four".
Proponents of this "quaternary hypothesis" adduce the numeral 9, which might be built on the stem new-, meaning "new" (indicating the beginning of a "new set of numerals" after having counted to eight).
Evolution of the Arabic digit
The modern digit 8, like all modern Arabic numerals other than zero, originates with the Brahmi numerals.
The Brahmi digit for eight by the 1st century was written in one stroke as a curve └┐ looking like an uppercase H with the bottom half of the left line and the upper half of the right line removed.
However, the digit for eight used in India in the early centuries of the Common Era developed considerable graphic variation, and in some cases took the shape of a single wedge, which was adopted into the Perso-Arabic tradition as ٨ (and also gave rise to the later Devanagari form ८); the alternative curved glyph also existed as a variant in Perso-Arabic tradition, where it came to look similar to our digit 5.
The digits as used in Al-Andalus by the 10th century were a distinctive western variant of the glyphs used in the Arabic-speaking world, known as ghubār numerals (ghubār translating to "sand table"). In these digits, the line of the 5-like glyph used in Indian manuscripts for eight came to be formed in ghubār as a closed loop, which was the 8-shape that became adopted into European use in the 10th century.
Just as in most modern typefaces, in typefaces with text figures the character for the digit 8 usually has an ascender.
The infinity symbol ∞, described as a "sideways figure eight", is unrelated to the digit 8 in origin; it is first used (in the mathematical meaning "infinity") in the 17th century, and it may be derived from the Roman numeral for "one thousand" CIƆ, or alternatively from the final Greek letter, ω.
In mathematics
8 is a composite number and the first number which is neither prime nor semiprime. By Mihăilescu's theorem, it is the only nonzero perfect power that is one less than another perfect power. 8 is the first proper Leyland number, of the form x^y + y^x with x and y both equal to 2. 8 is a Fibonacci number and the only nontrivial Fibonacci number that is a perfect cube. Sphenic numbers always have exactly eight divisors. 8 is the base of the octal number system.
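A few of these properties written out explicitly, in standard notation (here F_n is the nth Fibonacci number and d(n) the number of divisors of n):

```latex
8 = 2^{3}, \qquad 8 = F_{6} \ \text{(Fibonacci)}, \qquad 8 = 2^{2} + 2^{2} \ \text{(Leyland number with } x = y = 2\text{)},
\qquad 3^{2} - 2^{3} = 1 \ \text{(Mihăilescu's theorem)}, \qquad d(pqr) = (1+1)^{3} = 8 \ \text{for distinct primes } p, q, r.
```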
Geometry
A polygon with eight sides is an octagon. A regular octagon can fill a plane vertex together with a regular triangle and a regular icositetragon, and it can tessellate two-dimensional space alongside squares in the truncated square tiling. This tiling is one of eight Archimedean tilings that are semi-regular, or made of more than one type of regular polygon, and the only one that can admit a regular octagon. The Ammann–Beenker tiling is a nonperiodic tessellation of prototiles featuring prominent octagonal silver eightfold symmetry; it is the two-dimensional orthographic projection of the four-dimensional 8-8 duoprism.
An octahedron is a regular polyhedron with eight equilateral triangles as faces. It is the dual polyhedron to the cube and one of eight convex deltahedra. The stella octangula, or eight-pointed star, is the only stellation with octahedral symmetry. It has eight triangular faces alongside eight vertices that form a cubic faceting, and it is composed of two self-dual tetrahedra, which makes it the simplest of five regular compounds. The cuboctahedron, on the other hand, is a rectified cube or rectified octahedron, and one of only two convex quasiregular polyhedra. It contains eight equilateral triangular faces; its first stellation is the cube-octahedron compound.
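A quick check of the octahedron's face count against Euler's polyhedron formula, which also shows the duality with the cube (vertex and face counts swap):

```latex
\text{octahedron: } V - E + F = 6 - 12 + 8 = 2,
\qquad
\text{cube: } V - E + F = 8 - 12 + 6 = 2
```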
Vector spaces
The octonions form a hypercomplex normed division algebra that extends the complex numbers, and they are related by triality to Spin(8), the double cover of the special orthogonal group SO(8). The special unitary group SU(3) has an eight-dimensional adjoint representation whose generators correspond to the eight gluons of the Standard Model. Clifford algebras display a periodicity of 8.
Group theory
The Lie group E8 is one of five exceptional Lie groups. The order of the smallest non-abelian group whose subgroups are all normal is 8.
List of basic calculations
In science
Physics
In nuclear physics, the second magic number.
Chemistry
The most stable allotrope of a sulfur molecule is made of eight sulfur atoms arranged in a rhombic form.
In technology
A byte is eight bits.
In culture
Currency
Sailors and civilians alike from the 1500s onward referred to evenly divided parts of the Spanish dollar as "pieces of eight", or "bits".
In religion, folk belief and divination
Buddhism
In general, "eight" seems to be an auspicious number for Buddhists. The Dharmacakra, a Buddhist symbol, has eight spokes. The Buddha's principal teaching—the Four Noble Truths—ramifies as the Noble Eightfold Path and the Buddha emphasizes the importance of the eight attainments or jhanas.
Islam
The octagram Rub el Hizb is often used in Islamic symbology.
As a lucky number
The number eight is considered to be a lucky number in Chinese and other Asian cultures. Eight (八; accounting form 捌; pinyin bā) is considered a lucky number in Chinese culture because it sounds like the word meaning "to generate wealth" (發; pinyin fā). Property with the number 8 may be valued greatly by Chinese. For example, a Hong Kong number plate with the number 8 was sold for $640,000. The opening ceremony of the Summer Olympics in Beijing started at 8 minutes and 8 seconds past 8 pm (local time) on 8 August 2008.
In Pythagorean numerology the number 8 represents victory, prosperity and overcoming.
Eight (八, hachi) is also considered a lucky number in Japan, but the reason is different from that in Chinese culture. Eight gives an idea of growing prosperous, because the character for eight (八) broadens gradually.
The Japanese thought of eight as a holy number in ancient times. The reason is less well understood, but it is thought to be related to the fact that eight was used to express large numbers vaguely, in expressions literally meaning "eightfold and twentyfold", "eight clouds", and "eight million gods". It is also guessed that the ancient Japanese gave importance to pairs, so some researchers suggest that eight, as twice four (itself thought to be a holy number in those times because it indicates the world: north, south, east, and west), might have been considered a very holy number.
In numerology, 8 is the number of building, and in some theories, also the number of destruction.
In astrology
In the Middle Ages, 8 was the number of "unmoving" stars in the sky, and symbolized the perfection of incoming planetary energy.
In sports and other games
In association football, the number 8 has historically been the number of the central midfielder.
In baseball:
The center fielder is designated as number 8 for scorekeeping purposes.
In rugby league:
Most competitions (though not the Super League, which uses static squad numbering) use a position-based player numbering system in which one of the two starting props wears the number 8.
In the 2008 Games of the XXIX Olympiad held in Beijing, the official opening was on 08/08/08 at 8:08:08 p.m. CST.
In literature
In Terry Pratchett's Discworld series, eight is a magical number and is considered taboo. Eight is not safe to be said by wizards on the Discworld and is the number of Bel-Shamharoth. Also, there are eight days in a Disc week and eight colours in a Disc spectrum, the eighth one being octarine.
In slang
An "eighth" is a common measurement of marijuana, meaning an eighth of an ounce. It is also a common unit of sale for psilocybin mushrooms.
In Colombia and Venezuela, "volverse un ocho" (meaning to tie oneself in a figure 8) refers to getting in trouble or contradicting oneself.
In China, "8" is used in chat speak as a term for parting. This is due to the closeness in pronunciation of "8" (bā) and the English word "bye".
Other uses
A figure 8 is the common name of a geometric shape, often used in the context of sports, such as skating. Figure-eight turns of a rope or cable around a cleat, pin, or bitt are used to belay something.
References
External links
The Octonions, John C. Baez
Integers
8 (number) | 8 | Mathematics | 2,300 |
60,100,177 | https://en.wikipedia.org/wiki/Chloropolymer | Chloropolymers are macromolecules synthesized from alkenes in which one or more hydrogens of the polymer were replaced by chlorine. A common example of a chloropolymer is polyvinyl chloride (PVC) and poly(dichlorophosphazene) which has a polymer formula of (PNCl2)n, the precursor of which is hexachlorophosphazene, which itself has been called chloropolymer.
References
Polymer chemistry | Chloropolymer | Chemistry,Materials_science,Engineering | 116 |
162,600 | https://en.wikipedia.org/wiki/Hacktivism | Hacktivism (or hactivism; a portmanteau of hack and activism), is the use of computer-based techniques such as hacking as a form of civil disobedience to promote a political agenda or social change. A form of Internet activism with roots in hacker culture and hacker ethics, its ends are often related to free speech, human rights, or freedom of information movements.
Hacktivist activities span many political ideals and issues. Freenet, a peer-to-peer platform for censorship-resistant communication, is a prime example of translating political thought and freedom of speech into code. Hacking as a form of activism can be carried out by a singular activist or through a network of activists, such as Anonymous and WikiLeaks, working in collaboration toward common goals without an overarching authority figure. For context, according to a statement by the U.S. Justice Department, Julian Assange, the founder of WikiLeaks, plotted with hackers connected to the "Anonymous" and "LulzSec" groups, who have been linked to multiple cyberattacks worldwide. In 2012, Assange, who was being held in the United Kingdom on a request for extradition from the United States, gave the head of LulzSec a list of targets to hack and informed him that the most significant leaks of compromised material would come from the National Security Agency, the Central Intelligence Agency, or the New York Times.
"Hacktivism" is a controversial term with several meanings. The word was coined to characterize electronic direct action as working toward social change by combining programming skills with critical thinking. But just as hack can sometimes mean cyber crime, hacktivism can be used to mean activism that is malicious, destructive, and undermining the security of the Internet as a technical, economic, and political platform. In comparison to previous forms of social activism, hacktivism has had unprecedented success, bringing in more participants, using more tools, and having more influence in that it has the ability to alter elections, begin conflicts, and take down businesses.
According to the United States 2020–2022 Counterintelligence Strategy, in addition to state adversaries and transnational criminal organizations, "ideologically motivated entities such as hacktivists, leaktivists, and public disclosure organizations, also pose significant threats".
Origins and definitions
Writer Jason Sack first used the term hacktivism in a 1995 article in conceptualizing New Media artist Shu Lea Cheang's film Fresh Kill. However, the term is frequently attributed to the Cult of the Dead Cow (cDc) member "Omega," who used it in a 1996 e-mail to the group. Due to the variety of meanings of its root words, the definition of hacktivism is nebulous and there exists significant disagreement over the kinds of activities and purposes it encompasses. Some definitions include acts of cyberterrorism while others simply reaffirm the use of technological hacking to effect social change.
Forms and methods
Self-proclaimed "hacktivists" often work anonymously, sometimes operating in groups while other times operating as a lone wolf with several cyber-personas all corresponding to one activist within the cyberactivism umbrella, which has been gaining public interest and power in pop culture. Hacktivists generally operate under apolitical ideals and express uninhibited ideas or abuse without being scrutinized by society, while representing or defending themselves publicly under an anonymous identity, which gives them a sense of power in the cyberactivism community.
In order to carry out their operations, hacktivists might create new tools; or integrate or use a variety of software tools readily available on the Internet. One class of hacktivist activities includes increasing the accessibility of others to take politically motivated action online.
The repertoire of contention of hacktivism includes, among others:
Code: Software and websites can achieve political goals. For example, the encryption software PGP can be used to secure communications; PGP's author, Phil Zimmermann said he distributed it first to the peace movement. Jim Warren suggests PGP's wide dissemination was in response to Senate Bill 266, authored by Senators Biden and DeConcini, which demanded that "...communications systems permit the government to obtain the plain text contents of voice, data, and other communications...". WikiLeaks is an example of a politically motivated website: it seeks to "keep governments open".
Mirroring: Website mirroring is used as a circumvention tool in order to bypass various censorship blocks on websites. This technique copies the contents of a censored website and disseminates it on other domains and sub-domains that are not censored. Document mirroring, similar to website mirroring, is a technique that focuses on backing up various documents and other works. RECAP is software that was written with the purpose of "liberating US case law" and making it openly available online. The software project takes the form of distributed document collection and archival. Major mirroring projects include initiatives such as the Internet Archive and Wikisource. (A minimal mirroring sketch appears after this list.)
Anonymity: A method of speaking out to a wide audience about human rights issues, government oppression, etc. that utilizes various web tools such as free and/or disposable email accounts, IP masking, and blogging software to preserve a high level of anonymity.
Doxing: The practice in which private and/or confidential documents and records are hacked into and made public. Hacktivists see this as a form of assured transparency; experts claim it is harassment.
Denial-of-service attacks: These attacks, commonly referred to as DoS attacks, use large arrays of personal and public computers that hackers take control of via malware executable files usually transmitted through email attachments or website links. After taking control, these computers act like a herd of zombies, redirecting their network traffic to one website, with the intention of overloading servers and taking a website offline.
Virtual sit-ins: Similar to DoS attacks but executed by individuals rather than software, a large number of protesters visit a targeted website and rapidly load pages to overwhelm the site with network traffic to slow the site or take it offline.
Website defacements: Hackers infiltrate a web server to replace a specific web page with one of their own, usually to convey a specific message.
Website redirects: This method involves changing the address of a website within the server so would-be visitors of the site are redirected to a site created by the perpetrator, typically to denounce the original site.
Geo-bombing: A technique in which netizens add a geo-tag while editing YouTube videos so that the location of the video can be seen in Google Earth.
Protestware: The use of malware to promote a social cause or protest. Protestware is self-inflicted by a project's maintainer in order to spread a message; most commonly in a disruptive manner. The term was popularized during the Russo-Ukrainian War after the peacenotwar supply chain attack on the npm ecosystem.
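As a concrete illustration of the mirroring technique described above, the sketch below fetches a single public page and keeps a timestamped local copy. The URL and file layout are placeholders, and real mirroring tools (wget, HTTrack, the Internet Archive's crawlers) recursively crawl whole sites; this only shows the core idea.

```python
# Minimal sketch of document mirroring: fetch one public page and save a
# timestamped local copy. The URL and output directory are placeholders.
import urllib.request
from datetime import datetime, timezone
from pathlib import Path

SOURCE_URL = "https://example.org/some-public-document.html"  # hypothetical target
MIRROR_DIR = Path("mirror")

def mirror_page(url: str, out_dir: Path) -> Path:
    out_dir.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as resp:   # fetch the page body
        body = resp.read()
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_path = out_dir / f"copy-{stamp}.html"   # one copy per fetch
    out_path.write_bytes(body)
    return out_path

if __name__ == "__main__":
    print(mirror_page(SOURCE_URL, MIRROR_DIR))
```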
Controversy
Depending on who is using the term, hacktivism can be a politically motivated technology hack, a constructive form of anarchic civil disobedience, or an undefined anti-systemic gesture. It can signal anticapitalist or political protest; it can denote anti-spam activists, security experts, or open source advocates.
Some people describing themselves as hacktivists have taken to defacing websites for political reasons, such as attacking and defacing websites of governments and those who oppose their ideology. Others, such as Oxblood Ruffin (the "foreign affairs minister" of Cult of the Dead Cow and Hacktivismo), have argued forcefully against definitions of hacktivism that include web defacements or denial-of-service attacks.
Hacktivism is often seen as shadowy due to its anonymity, commonly attributed to the work of fringe groups and outlying members of society. The lack of responsible parties to be held accountable for the social-media attacks performed by hactivists has created implications in corporate and federal security measures both on and offline.
While some self-described hacktivists have engaged in DoS attacks, critics suggest that DoS attacks are an attack on free speech and that they have unintended consequences. DoS attacks waste resources and they can lead to a "DoS war" that nobody will win. In 2006, Blue Security attempted to automate a DoS attack against spammers; this led to a massive DoS attack against Blue Security which knocked them, their old ISP and their DNS provider off the Internet, destroying their business.
Following denial-of-service attacks by Anonymous on multiple sites, in reprisal for the apparent suppression of WikiLeaks, John Perry Barlow, a founding member of the EFF, said "I support freedom of expression, no matter whose, so I oppose DDoS attacks regardless of their target... they're the poison gas of cyberspace...". On the other hand, Jay Leiderman, an attorney for many hacktivists, argues that DDoS can be a legitimate form of protest speech in situations that are reasonably limited in time, place and manner.
Notable hacktivist events
In late 1990s, the Hong Kong Blondes helped Chinese citizens get access to blocked websites by targeting the Chinese computer networks. The group identified holes in the Chinese internet system, particularly in the area of satellite communications. The leader of the group, Blondie Wong, also described plans to attack American businesses that were partnering with China.
In 1996, the title of the United States Department of Justice's homepage was changed to "Department of Injustice". Pornographic images were also added to the homepage to protest the Communications Decency Act.
In 1998, members of the Electronic Disturbance Theater created FloodNet, a web tool that allowed users to participate in DDoS attacks (or what they called electronic civil disobedience) in support of Zapatista rebels in Chiapas.
In December 1998, a hacktivist group from the US called Legions of the Underground emerged. They declared a cyberwar against Iraq and China and planned on disabling internet access in retaliation for the countries' human rights abuses. Opposing hackers criticized this move by Legions of the Underground, saying that by shutting down internet systems, the hacktivist group would have no impact on providing free access to information.
In July 2001, Hacktivismo, a sect of the Cult of the Dead Cow, issued the "Hacktivismo Declaration". This served as a code of conduct for those participating in hacktivism, and declared the hacker community's goals of stopping "state-sponsored censorship of the Internet" as well as affirming the rights of those therein to "freedom of opinion and expression".
During the 2009 Iranian election protests, Anonymous played a role in disseminating information to and from Iran by setting up the website Anonymous Iran; they also released a video manifesto to the Iranian government.
Google worked with engineers from SayNow and Twitter to provide communications for the Egyptian people in response to the government sanctioned Internet blackout during the 2011 protests. The result, Speak To Tweet, was a service in which voicemail left by phone was then tweeted via Twitter with a link to the voice message on Google's SayNow.
On Saturday 29 May 2010 a hacker calling himself 'Kaka Argentine' hacked into the Ugandan State House website and posted a conspicuous picture of Adolf Hitler with the swastika, a Nazi Party symbol.
During the Egyptian Internet black out, January 28 – February 2, 2011, Telecomix provided dial up services, and technical support for the Egyptian people. Telecomix released a video stating their support of the Egyptian people, describing their efforts to provide dial-up connections, and offering methods to avoid internet filters and government surveillance. The hacktivist group also announced that they were closely tracking radio frequencies in the event that someone was sending out important messages.
Project Chanology, also known as "Operation Chanology", was a hacktivist protest against the Church of Scientology to punish the church for participating in Internet censorship relating to the removal of material from a 2008 interview with Church of Scientology member Tom Cruise. Hacker group Anonymous attempted to "expel the church from the Internet" via DDoS attacks. In February 2008 the movement shifted toward legal methods of nonviolent protesting. Several protests were held as part of Project Chanology, beginning in 2008 and ending in 2009.
On June 3, 2011, LulzSec took down a website of the FBI. This was the first time they had targeted a website that was not part of the private sector. That week, the FBI was able to track the leader of LulzSec, Hector Xavier Monsegur.
On June 20, 2011, LulzSec targeted the Serious Organised Crime Agency of the United Kingdom, causing UK authorities to take down the website.
In August 2011 a member of Anonymous working under the name "Oliver Tucket" took control of the Syrian Defense Ministry website and added an Israeli government web portal in addition to changing the mail server for the website to one belonging to the Chinese navy.
Anonymous and New World Hackers claimed responsibility for the 2016 Dyn cyberattack in retaliation for Ecuador's rescinding Internet access to WikiLeaks founder Julian Assange at their embassy in London. WikiLeaks alluded to the attack. Subsequently, FlashPoint stated that the attack was most likely done by script kiddies.
In 2013, as an online component to the Million Mask March, Anonymous in the Philippines crashed 30 government websites and posted a YouTube video to congregate people in front of the parliament house on November 5 to demonstrate their disdain toward the Filipino government.
In 2014, Sony Pictures Entertainment was hacked by a group by the name of Guardians of Peace (GOP), which obtained over 100 terabytes of data including unreleased films, employee salaries, social security data, passwords, and account information. GOP hacked various social media accounts and hijacked them by changing their passwords to diespe123 (die pictures entertainment) and posting threats on the pages.
In 2016, Turkish programmer Azer Koçulu removed his software package, left-pad, from npm, causing a cascading failure of other software packages that contained left-pad as a dependency. This was done after Kik, a messaging application, threatened legal action against Koçulu after he refused to rename his kik package. npm ultimately sided with Kik, prompting Koçulu to unpublish all of his packages from npm in protest, including left-pad.
British hacker Kane Gamble, who was sentenced to 2 years in youth detention, posed as John Brennan, the then director of the CIA, and Mark F. Giuliano, a former deputy director of the FBI, to access highly sensitive information. The judge said Gamble engaged in "politically motivated cyber-terrorism."
In 2021, Anonymous hacked and leaked the databases of American web hosting company Epik.
As a response against 2022 Russian invasion of Ukraine, Anonymous performed multiple cyberattacks against Russian computer systems.
Following the Israel–Hamas war that began in 2023, multiple cyberattacks were launched by pro-Israel and pro-Palestine hacktivist groups. India's pro-Israel hacktivists took down the portals of the Palestinian National Bank, the National Telecommunications Company, and the website of Hamas. Multiple Israeli websites were flooded with malicious traffic by pro-Palestine hacktivists. The Israeli newspaper The Jerusalem Post reported that its website was down due to a series of cyberattacks initiated against it.
Notable hacktivist people/groups
WikiLeaks
WikiLeaks is a media organisation and publisher founded in 2006. It operates as a non-profit and is funded by donations and media partnerships. It has published classified documents and other media provided by anonymous sources. It was founded by Julian Assange, an Australian editor, publisher, and activist, who is currently challenging extradition to the United States over his work with WikiLeaks. Since September 2018, Kristinn Hrafnsson has served as its editor-in-chief. Its website states that it has released more than ten million documents and associated analyses. WikiLeaks' most recent publication was in 2021, and its most recent publication of original documents was in 2019. Beginning in November 2022, many of the documents on the organisation's website could not be accessed.
WikiLeaks has released document caches and media that exposed serious violations of human rights and civil liberties by various governments. It released footage, which it titled Collateral Murder, of the 12 July 2007 Baghdad airstrike, in which Iraqi Reuters journalists and several civilians were killed by a U.S. helicopter crew. WikiLeaks has also published leaks such as diplomatic cables from the United States and Saudi Arabia, emails from the governments of Syria and Turkey, and documents revealing corruption in Kenya and at Samherji. WikiLeaks has also published documents exposing cyber warfare and surveillance tools created by the CIA, and surveillance of the French president by the National Security Agency. During the 2016 U.S. presidential election campaign, WikiLeaks released emails from the Democratic National Committee (DNC) and from Hillary Clinton's campaign manager, showing that the party's national committee had effectively acted as an arm of the Clinton campaign during the primaries, seeking to undercut the campaign of Bernie Sanders. These releases resulted in the resignation of the chairwoman of the DNC and caused significant harm to the Clinton campaign. During the campaign, WikiLeaks promoted false conspiracy theories about Hillary Clinton, the Democratic Party and the murder of Seth Rich.
WikiLeaks has won a number of awards and has been commended for exposing state and corporate secrets, increasing transparency, assisting freedom of the press, and enhancing democratic discourse while challenging powerful institutions. WikiLeaks and some of its supporters say the organisation's publications have a perfect record of publishing authentic documents. The organisation has been the target of campaigns to discredit it, including aborted ones by Palantir and HBGary. WikiLeaks has also had its donation systems disrupted by problems with its payment processors. As a result, the Wau Holland Foundation helps process WikiLeaks' donations.
The organisation has been criticised for inadequately curating some of its content and violating the personal privacy of individuals. WikiLeaks has, for instance, revealed Social Security numbers, medical information, credit card numbers and details of suicide attempts. News organisations, activists, journalists and former members have also criticised the organisation over allegations of anti-Clinton and pro-Trump bias, various associations with the Russian government, buying and selling of leaks, and a lack of internal transparency. Journalists have also criticised the organisation for promotion of false flag conspiracy theories, and what they describe as exaggerated and misleading descriptions of the contents of leaks. The CIA defined the organisation as a "non-state hostile intelligence service" after the release of Vault 7.
Anonymous
Perhaps the most prolific and well known hacktivist group, Anonymous has been prominent and prevalent in many major online hacks over the past decade. Anonymous is a decentralized group that originated on the forums of 4chan during 2003, but didn't rise to prominence until 2008 when they directly attacked the Church of Scientology in a massive DoS attack. Since then, Anonymous has participated in a great number of online projects such as Operation: Payback and Operation: Safe Winter. However, while a great number of their projects have been for a charitable cause, they have still gained notoriety from the media due to the nature of their work mostly consisting of illegal hacking.
Following the Paris terror attacks in 2015, Anonymous posted a video declaring war on ISIS, the terror group that claimed responsibility for the attacks. Since declaring war on ISIS, Anonymous has identified several Twitter accounts associated with the movement in order to stop the distribution of ISIS propaganda. However, Anonymous fell under heavy criticism when Twitter issued a statement calling the lists Anonymous had compiled "wildly inaccurate," as they contained accounts of journalists and academics rather than members of ISIS.
Anonymous has also been involved with the Black Lives Matter movement. Early in July 2015, there was a rumor circulating that Anonymous was calling for a Day of Rage protests in retaliation for the shootings of Alton Sterling and Philando Castile, which would entail violent protests and riots. This rumor was based on a video that was not posted with the official Anonymous YouTube account. None of the Twitter accounts associated with Anonymous had tweeted anything in relation to a Day of Rage, and the rumors were identical to past rumors that had circulated in 2014 following the death of Mike Brown. Instead, on July 15, a Twitter account associated with Anonymous posted a series of tweets calling for a day of solidarity with the Black Lives Matter movement. The Twitter account used the hashtag "#FridayofSolidarity" to coordinate protests across the nation, and emphasized the fact that the Friday of Solidarity was intended for peaceful protests. The account also stated that the group was unaware of any Day of Rage plans.
In February 2017 the group took down more than 10,000 sites on the Dark web related to child porn.
DkD[||
DkD[||, a French cyberhacktivist, was arrested by the OCLCTIC (office central de lutte contre la criminalité liée aux technologies de l'information et de la communication) in March 2003. DkD[|| defaced more than 2,000 pages, many of them government and US military sites. Eric Voulleminot of the Regional Service of Judicial Police in Lille classified the young hacker as "the most wanted hacktivist in France".
DkD[|| was a well-known defacer in the underground for his political views, carrying out his defacements for various political reasons. In response to his arrest, The Ghost Boys defaced many sites using the "Free DkD[||!!" slogan.
LulzSec
In May 2011, five members of Anonymous formed the hacktivist group Lulz Security, otherwise known as LulzSec. LulzSec's name originated from the conjunction of the internet slang term "lulz", meaning laughs, and "sec", meaning security. The group members used specific handles to identify themselves on Internet Relay Channels, the most notable being: "Sabu," "Kayla," "T-Flow," "Topiary," "AVUnit," and "Pwnsauce." Though the members of LulzSec would spend up to 20 hours a day in communication, they did not know one another personally, nor did they share personal information. For example, once the members' identities were revealed, "T-Flow" was revealed to be 15 years old. Other members, on the basis of his advanced coding ability, thought he was around 30 years old.
One of the first notable targets that LulzSec pursued was HBGary, which was attacked in response to a claim made by the technology security company that it had identified members of Anonymous. Following this, the members of LulzSec targeted an array of companies and entities, including but not limited to: Fox Television, Tribune Company, PBS, Sony, Nintendo, and the Senate.gov website. The targeting of these entities typically involved gaining access to and downloading confidential user information, or defacing the website at hand. Though LulzSec was not as strongly political as WikiLeaks or Anonymous, it shared similar sentiments toward the freedom of information. One of its distinctly politically driven attacks targeted the Arizona State Police in response to new immigration laws.
The group's first attack that garnered significant government attention was in 2011, when they collectively took down a website of the FBI. Following the incident, the leader of LulzSec, "Sabu," was identified as Hector Xavier Monsegur by the FBI, and he was the first of the group to be arrested. Immediately following his arrest, Monsegur admitted to criminal activity. He then began his cooperation with the US government, helping FBI authorities to arrest 8 of his co-conspirators, prevent 300 potential cyber attacks, and helped to identify vulnerabilities in existing computer systems. In August 2011, Monsegur pleaded guilty to "computer hacking conspiracy, computer hacking, computer hacking in furtherance of fraud, conspiracy to commit access device fraud, conspiracy to commit bank fraud, and aggravated identity theft pursuant to a cooperation agreement with the government." He served a total of one year and seven months and was charged a $1,200 fine.
SiegedSec
SiegedSec, short for Sieged Security and commonly self-referred to as the "Gay Furry Hackers", is a black-hat criminal hacktivist group that was formed in early 2022 and has committed a number of high-profile cyber attacks, including attacks on NATO, the Idaho National Laboratory, and Real America's Voice. On July 10, 2024, the group announced that it would be disbanding after attacking The Heritage Foundation.
SiegedSec is led by an individual under the alias "vio". The group's Telegram channel was first created in April 2022, and its members commonly refer to themselves as "gay furry hackers". On multiple occasions, the group has targeted right-wing organizations through data breaches, including The Heritage Foundation and Real America's Voice, as well as various U.S. states that have pursued legislation against gender-affirming care.
Related practices
Culture jamming
Hacking has been sometime described as a form of culture jamming. This term refers to the practice of subverting and criticizing political messages as well as media culture with the aim of challenging the status quo. It is often targeted toward subliminal thought processes taking place in the viewers with the goal of raising awareness as well as causing a paradigm shift. Culture jamming takes many forms including billboard hacking, broadcast signal intrusion, ad hoc art performances, simulated legal transgressions, memes, and artivism.
The term "culture jamming" was first coined in 1984 by American musician Donald Joyce of the band Negativland. However, some speculation remains as to when the practice of culture jamming first began. Social researcher Vince Carducci believes culture jamming can be traced back to the 1950s with European social activist group Situationist International. Author and cultural critic Mark Dery believes medieval carnival is the earliest form of culture jamming as a way to subvert the social hierarchy at the time.
Culture jamming is sometimes confused with acts of vandalism. However, unlike culture jamming, the main goal of vandalism is to cause destruction with any political themes being of lesser importance. Artivism usually has the most questionable nature as a form of culture jamming because defacement of property is usually involved.
Media hacking
Media hacking refers to the usage of various electronic media in an innovative or otherwise abnormal fashion for the purpose of conveying a message to as large a number of people as possible, primarily achieved via the World Wide Web. A popular and effective means of media hacking is posting on a blog, as one is usually controlled by one or more independent individuals, uninfluenced by outside parties. The concept of social bookmarking, as well as Web-based Internet forums, may cause such a message to be seen by users of other sites as well, increasing its total reach.
Media hacking is commonly employed for political purposes, by both political parties and political dissidents. A good example of this is the 2008 US Election, in which both the Democratic and Republican parties used a wide variety of different media in order to convey relevant messages to an increasingly Internet-oriented audience. At the same time, political dissidents used blogs and other social media like Twitter in order to reply on an individual basis to the presidential candidates. In particular, sites like Twitter are proving important means in gauging popular support for the candidates, though the site is often used for dissident purposes rather than a show of positive support.
Mobile technology has also become subject to media hacking for political purposes. SMS has been widely used by political dissidents as a means of quickly and effectively organising smart mobs for political action. This has been most effective in the Philippines, where SMS media hacking has twice had a significant impact on whether the country's presidents are elected or removed from office.
Reality hacking
Reality hacking is any phenomenon that emerges from the nonviolent use of illegal or legally ambiguous digital tools in pursuit of politically, socially, or culturally subversive ends. These tools include website defacements, URL redirections, denial-of-service attacks, information theft, web-site parodies, virtual sit-ins, and virtual sabotage.
Art movements such as Fluxus and Happenings in the 1970s created a climate of receptivity toward loose-knit organizations and group activities characterized by spontaneity, a return to primitivist behavior, and an ethic in which activities and socially engaged art practices became tantamount to aesthetic concerns.
The conflation of these two histories in the mid-to-late 1990s resulted in cross-overs between virtual sit-ins, electronic civil disobedience, denial-of-service attacks, as well as mass protests in relation to groups like the International Monetary Fund and the World Bank. The rise of collectives, net.art groups, and those concerned with the fluid interchange of technology and real life (often from an environmental concern) gave birth to the practice of "reality hacking".
Reality hacking relies on tweaking the everyday communications most easily available to individuals with the purpose of awakening the political and community conscience of the larger population. The term first came into use among New York and San Francisco artists, but has since been adopted by a school of political activists centered around culture jamming.
In fiction
The 1999 science fiction-action film The Matrix, among others, popularized the simulation hypothesis — the suggestion that reality is in fact a simulation of which those inside it are generally unaware. In this context, "reality hacking" is reading and understanding the code which represents the activity of the simulated reality environment (such as Matrix digital rain) and also modifying it in order to bend the laws of physics or otherwise modify the simulated reality.
Reality hacking as a mystical practice is explored in the Gothic-Punk aesthetics-inspired White Wolf urban fantasy role-playing game Mage: The Ascension. In this game, the Reality Coders (also known as Reality Hackers or Reality Crackers) are a faction within the Virtual Adepts, a secret society of mages whose magick revolves around digital technology. They are dedicated to bringing the benefits of cyberspace to real space. To do this, they had to identify, for lack of a better term, the "source code" that allows our Universe to function. And that is what they have been doing ever since. Coders infiltrated a number of levels of society in order to gather the greatest compilation of knowledge ever seen. One of the Coders' more overt agendas is to acclimate the masses to the world that is to come. They spread Virtual Adept ideas through video games and a whole spate of "reality shows" that mimic virtual reality far more than "real" reality. The Reality Coders consider themselves the future of the Virtual Adepts, creating a world in the image of visionaries like Grant Morrison or Terence McKenna.
In a location-based game (also known as a pervasive game), reality hacking refers to tapping into phenomena that exist in the real world, and tying them into the game story universe.
Academic interpretations
There have been various academic approaches to deal with hacktivism and urban hacking. In 2010, Günther Friesinger, Johannes Grenzfurthner and Thomas Ballhausen published an entire reader dedicated to the subject. They state:
See also
Crypto-anarchism
Cyberterrorism
E-democracy
Open-source governance
Patriotic hacking
Tactical media
1984 Network Liberty Alliance
Chaos Computer Club
Cicada 3301
Decocidio
Jester
Internet vigilantism
The Internet's Own Boy: The Story of Aaron Swartz – a documentary film
milw0rm
2600: The Hacker Quarterly
Citizen Lab
HackThisSite
Cypherpunk
Jeremy Hammond
Mr. Robot – a television series
References
Further reading
Olson, Parmy (2013). We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency.
Coleman, Gabriella (2014). Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous. Verso Books.
Shantz, Jeff; Tomblin, Jordon (2014). Cyber Disobedience: Re://Presenting Online Anarchy. John Hunt Publishing.
Deseriis, Marco (2017). Hacktivism: On the Use of Botnets in Cyberattacks. Theory, Culture & Society 34(4): 131–152.
External links
Hacktivism and Politically Motivated Computer Crime History, types of activity and cases studies
Activism by type
Hacking (computer security)
Politics and technology
Internet terminology
2000s neologisms
Culture jamming techniques
Hacker culture
Articles containing video clips | Hacktivism | Technology | 6,848 |
5,825,136 | https://en.wikipedia.org/wiki/List%20of%20most%20luminous%20stars | This is a list of stars arranged by their absolute magnitude – their intrinsic stellar luminosity. This cannot be observed directly, so instead must be calculated from the apparent magnitude (the brightness as seen from Earth), the distance to each star, and a correction for interstellar extinction. The entries in the list below are further corrected to provide the bolometric magnitude, i.e. integrated over all wavelengths; this relies upon measurements in multiple photometric filters and extrapolation of the stellar spectrum based on the stellar spectral type and/or effective temperature.
Entries give the bolometric luminosity in multiples of the luminosity of the Sun () and the bolometric absolute magnitude. As with all magnitude systems in astronomy, the latter scale is logarithmic and inverted i.e. more negative numbers are more luminous.
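In standard notation, the relation between bolometric absolute magnitude and luminosity is the following (taking the Sun's bolometric absolute magnitude as roughly +4.74):

```latex
M_{\mathrm{bol}} = M_{\mathrm{bol},\odot} - 2.5\,\log_{10}\!\left(\frac{L}{L_{\odot}}\right)
\qquad\Longleftrightarrow\qquad
\frac{L}{L_{\odot}} = 10^{\,0.4\,\left(M_{\mathrm{bol},\odot} - M_{\mathrm{bol}}\right)}
```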
Most stars on this list are not bright enough to be visible to the naked eye from Earth, because of their great distances, high extinction, or because they emit most of their light outside the visible range. For a list of the brightest stars as seen from Earth, see the list of brightest stars. There are three stars with luminosities over 1 million L☉ that are visible to the naked eye: WR 22, WR 24 and Eta Carinae. All of these stars are located in the Carina Nebula.
Measurement
Accurate measurement of stellar luminosities is difficult, even when the apparent magnitude is measured accurately, for four reasons:
The distance d to the star must be known, to convert apparent to absolute magnitude. Absolute magnitude is the apparent magnitude a star would have if it were 10 parsecs (~32.6 light years) away from the viewer. Because apparent brightness decreases as the square of the distance (i.e. as 1/d2), a small error (e.g. 10%) in determining d implies an error ~2× as large (thus 20%) in luminosity (see binomial approximation, and the formulas after this list). Stellar distances are only directly measured accurately out to d ~1,000 light years.
The observed magnitudes must be corrected for the absorption or extinction of intervening interstellar or circumstellar dust and gas. This correction can be enormous and difficult to determine precisely. For example, until accurate infrared observations became possible ~50 years ago, the Galactic Center of the Milky Way was totally obscured to visual observations.
The magnitudes at the wavelengths measured must be corrected for those not observed. "Absolute bolometric magnitude" (which term is redundant, practically speaking, since bolometric magnitudes are nearly always "absolute", i.e. corrected for distance) is a measure of the star's luminosity, summing over its emission at all wavelengths, and thus the total amount of energy radiated by a star every second. Bolometric magnitudes can only be estimated by correcting for unobserved portions of the spectrum that have to be modelled, which is always an issue, and often a large correction. The list is dominated by hot blue stars which produce the majority of their energy output in the ultraviolet, but these may not necessarily be the brightest stars at visual wavelengths.
A large proportion of stellar systems discovered with very high luminosity have later been found to be binary. Usually, this results in the total system luminosity being reduced and spread among several components. These binaries are common both because the conditions that produce high-mass, high-luminosity stars also favour multiple star systems, and because searches for highly luminous stars are inevitably biased towards detecting systems with multiple more normal stars combining to appear luminous.
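The distance and extinction corrections in the first two points above combine in the standard distance-modulus relation, and the quoted error scaling follows from the inverse-square law (here A is the extinction in magnitudes and F the observed flux):

```latex
M = m - 5\,\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right) - A,
\qquad
L \propto F\,d^{2} \;\;\Rightarrow\;\; \frac{\Delta L}{L} \approx 2\,\frac{\Delta d}{d}
```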
Because of all these problems, other references may give very different values for the most luminous stars (different ordering or different stars altogether). Data on different stars can be of somewhat different reliability, depending on the attention one particular star has received as well as largely differing physical difficulties in analysis (see the Pistol Star for an example). The last stars in the list are familiar nearby stars put there for comparison, and not among the most luminous known. It may also interest the reader to know that the Sun is more luminous than approximately 95% of all known stars in the local neighbourhood (out to, say, a few hundred light years), due to enormous numbers of somewhat less massive stars that are cooler and often much less luminous. For perspective, the overall range of stellar luminosities runs from dwarfs less than 1/10,000th as luminous as the Sun to supergiants over 1,000,000 times more luminous.
Data
This list is currently limited mostly to objects in our galaxy and the Magellanic Clouds, but a few stars in other local group galaxies can now be examined in enough detail to determine their luminosities. Some suspected binaries in this magnitude range are excluded because there is insufficient information about the luminosity of the individual components. Selected fainter stars are also shown for comparison. Despite their extreme luminosity, many of these stars are nevertheless too distant to be observed with the naked eye. Stars that are at least sometimes visible to the unaided eye have their apparent magnitude (6.5 or brighter) highlighted in blue.
Thanks to gravitational lensing, stars that are strongly magnified can be seen at much larger distances. The first star in the list, Godzilla — an LBV in the distant Sunburst galaxy — is probably the brightest star ever observed, although it is believed to be undergoing a temporary episode of increased luminosity that has lasted at least seven years, in a similar manner to the Great Eruption of Eta Carinae that was witnessed in the 19th century.
The first list shows a few of the known stars with an estimated luminosity of 1 million L☉ or greater, including stars in open clusters, OB associations and H II regions. The majority of stars thought to be more than 1 million L☉ are shown, but the list is incomplete.
The second list gives some notable stars for the purpose of comparison.
A few notable stars with luminosities of less than 1 million L☉ are kept here for the purpose of comparison.
Note that even the most luminous stars are much less luminous than the more luminous persistent extragalactic objects, such as quasars. For example, 3C 273 has an average apparent magnitude of 12.8 (when observing with a telescope), but an absolute magnitude of −26.7. If this object were 10 parsecs away from Earth it would appear nearly as bright in the sky as the Sun (apparent magnitude −26.744). This quasar's luminosity is, therefore, about 2 trillion (1012) times that of the Sun, or about 100 times that of the total light of average large galaxies like our Milky Way. (Note that quasars often vary somewhat in luminosity.)
In terms of gamma rays, a magnetar (type of neutron star) called SGR 1806−20, had an extreme burst reach Earth on 27 December 2004. It was the brightest event known to have impacted this planet from an origin outside the Solar System; if these gamma rays were visible, with an absolute magnitude of approximately −29, it would have been brighter than the Sun (as measured by the Swift spacecraft).
The gamma-ray burst GRB 971214 measured in 1998 was at the time thought to be the most energetic event in the observable universe, with the equivalent energy of several hundred supernovae. Later studies pointed out that the energy was probably the energy of one supernova which had been "beamed" towards Earth by the geometry of a relativistic jet.
See also
References
External links
The 150 Most Luminous Stars in the Hipparcos Catalogue
The R136 Cluster
The Magnitude system
Tim Thompson's list of Brightest Star candidates
Lists of stars
Milky Way
Stars, most luminous
luminous stars | List of most luminous stars | Astronomy | 1,593 |
69,079 | https://en.wikipedia.org/wiki/Ammonium | Ammonium is a modified form of ammonia that has an extra hydrogen atom. It is a positively charged (cationic) molecular ion with the chemical formula or . It is formed by the addition of a proton (a hydrogen nucleus) to ammonia (). Ammonium is also a general name for positively charged (protonated) substituted amines and quaternary ammonium cations (), where one or more hydrogen atoms are replaced by organic or other groups (indicated by R). Not only is ammonium a source of nitrogen and a key metabolite for many living organisms, but it is an integral part of the global nitrogen cycle. As such, human impact in recent years could have an effect on the biological communities that depend on it.
Acid–base properties
The ammonium ion is generated when ammonia, a weak base, reacts with Brønsted acids (proton donors): NH3 + H+ → NH4+
The ammonium ion is mildly acidic, reacting with Brønsted bases to return to the uncharged ammonia molecule: NH4+ + B− → NH3 + HB
Thus, the treatment of concentrated solutions of ammonium salts with a strong base gives ammonia. When ammonia is dissolved in water, a tiny amount of it converts to ammonium ions: H2O + NH3 ⇌ OH− + NH4+
The degree to which ammonia forms the ammonium ion depends on the pH of the solution. If the pH is low, the equilibrium shifts to the right: more ammonia molecules are converted into ammonium ions. If the pH is high (the concentration of hydrogen ions is low and hydroxide ions is high), the equilibrium shifts to the left: the hydroxide ion abstracts a proton from the ammonium ion, generating ammonia.
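The pH dependence described above can be made quantitative with the Henderson–Hasselbalch relation; the sketch below assumes the commonly tabulated pKa of about 9.25 for the ammonium ion at 25 °C.

```python
# Minimal sketch: fraction of total ammonia present as NH4+ at a given pH,
# assuming pKa(NH4+) ~= 9.25 at 25 degrees C.
PKA_NH4 = 9.25

def ammonium_fraction(ph: float, pka: float = PKA_NH4) -> float:
    """Fraction [NH4+] / ([NH4+] + [NH3]) from the Henderson-Hasselbalch equation."""
    ratio_base_to_acid = 10 ** (ph - pka)   # [NH3] / [NH4+]
    return 1.0 / (1.0 + ratio_base_to_acid)

for ph in (6.0, 7.4, 9.25, 11.0):
    print(f"pH {ph:>5}: {ammonium_fraction(ph):.1%} ammonium")
```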
Formation of ammonium compounds can also occur in the vapor phase; for example, when ammonia vapor comes in contact with hydrogen chloride vapor, a white cloud of ammonium chloride forms, which eventually settles out as a solid in a thin white layer on surfaces.
Salts and characteristic reactions
Ammonium cation is found in a variety of salts such as ammonium carbonate, ammonium chloride, and ammonium nitrate. Most simple ammonium salts are very soluble in water. An exception is ammonium hexachloroplatinate, the formation of which was once used as a test for ammonium. The ammonium salts of nitrate and especially perchlorate are highly explosive; in these cases, ammonium is the reducing agent.
In an unusual process, ammonium ions form an amalgam. Such species are prepared by the addition of sodium amalgam to a solution of ammonium chloride. This amalgam eventually decomposes to release ammonia and hydrogen.
To find whether the ammonium ion is present in a salt, the salt is first heated in the presence of an alkali hydroxide, releasing a gas with a characteristic smell, which is ammonia.
To further confirm ammonia, the gas is passed over a glass rod dipped in an HCl solution (hydrochloric acid), creating dense white fumes of ammonium chloride.
Ammonia, when passed through a CuSO4 (copper(II) sulfate) solution, changes its color from blue to deep blue, forming Schweizer's reagent.
Ammonia or the ammonium ion, when added to Nessler's reagent, gives a brown precipitate known as the iodide of Millon's base in basic medium.
Ammonium ion when added to chloroplatinic acid gives a yellow precipitate of ammonium hexachloroplatinate(IV).
Ammonium ion when added to sodium cobaltinitrite gives a yellow precipitate of ammonium cobaltinitrite.
Ammonium ion gives a white precipitate of ammonium bitartrate when added to potassium bitartrate.
Structure and bonding
The lone electron pair on the nitrogen atom (N) in ammonia, represented as a line above the N, forms a coordinate bond with a proton (H+). After that, all four bonds are equivalent, being polar covalent bonds. The ion has a tetrahedral structure and is isoelectronic with methane and the borohydride anion. In terms of size, the ammonium cation (ionic radius 175 pm) resembles the caesium cation (ionic radius 183 pm).
Organic ions
The hydrogen atoms in the ammonium ion can be substituted with an alkyl group or some other organic group to form a substituted ammonium ion (IUPAC nomenclature: aminium ion). Depending on the number of organic groups, the ammonium cation is called a primary, secondary, tertiary, or quaternary. Except the quaternary ammonium cations, the organic ammonium cations are weak acids.
An example of a reaction forming an ammonium ion is that between dimethylamine, (CH3)2NH, and an acid to give the dimethylammonium cation, (CH3)2NH2+:
(CH3)2NH + H+ → (CH3)2NH2+
Quaternary ammonium cations have four organic groups attached to the nitrogen atom; they lack a hydrogen atom bonded to the nitrogen atom. These cations, such as the tetra-n-butylammonium cation, are sometimes used to replace sodium or potassium ions to increase the solubility of the associated anion in organic solvents. Primary, secondary, and tertiary ammonium salts serve the same function but are less lipophilic. They are also used as phase-transfer catalysts and surfactants.
An unusual class of organic ammonium salts is derivatives of amine radical cations, such as tris(4-bromophenyl)ammoniumyl hexachloroantimonate.
Biology
Because nitrogen often limits net primary production due to its use in enzymes that mediate the biochemical reactions that are necessary for life, ammonium is utilized by some microbes and plants. For example, energy is released by the oxidation of ammonium in a process known as nitrification, which produces nitrate and nitrite. This process is a form of autotrophy that is common amongst Nitrosomonas, Nitrobacter, Nitrosolobus, and Nitrosospira, amongst others.
The amount of ammonium in soil that is available for nitrification by microbes varies depending on environmental conditions. For example, ammonium is deposited as a waste product from some animals, although it is converted into urea in mammals, sharks, and amphibians, and into uric acid in birds, reptiles, and terrestrial snails. Its availability in soils is also influenced by mineralization, which makes more ammonium available from organic nitrogen sources, and immobilization, which sequesters ammonium into organic nitrogen sources, both of which are mitigated by biological factors.
Conversely, nitrate and nitrite can be reduced to ammonium as a way for living organisms to access nitrogen for growth in a process known as assimilatory nitrate reduction. Once assimilated, it can be incorporated into proteins and DNA.
Ammonium can accumulate in soils where nitrification is slow or inhibited, which is common in hypoxic soils. For example, ammonium mobilization is one of the key factors for the symbiotic association between plants and fungi, called mycorrhizae. However, plants that consistently utilize ammonium as a nitrogen source often must invest into more extensive root systems due to ammonium's limited mobility in soils compared to other nitrogen sources.
Human impact
Ammonium deposition from the atmosphere has increased in recent years due to volatilization from livestock waste and increased fertilizer use. Because net primary production is often limited by nitrogen, increased ammonium levels could impact the biological communities that rely on it. For example, increasing nitrogen content has been shown to increase plant growth, but aggravate soil phosphorus levels, which can impact microbial communities.
Metal
The ammonium cation has very similar properties to the heavier alkali metal cations and is often considered a close equivalent. Ammonium is expected to behave as a metal (NH4+ ions in a sea of electrons) at very high pressures, such as inside giant planets such as Uranus and Neptune.
Under normal conditions, ammonium does not exist as a pure metal but does as an amalgam (alloy with mercury).
See also
Onium compounds
Fluoronium (H2F+ and substituted derivatives)
Oxonium (R3O+, where R is typically hydrogen or organyl)
Hydronium (H3O+, the simplest oxonium ion)
Quaternary ammonium cation (NR4+, where R is organyl)
Tetrafluoroammonium (NF4+)
Hydrazinium (N2H5+ and substituted derivatives)
Hydrazinediium ( and substituted derivatives)
Iminium ( and substituted derivatives)
Diazonium (RN2+ and substituted derivatives)
Diazynediium ( and substituted derivatives)
Aminodiazonium ( and substituted derivatives)
Hydroxylammonium (NH3OH+ and substituted derivatives)
Ammonium transporter
f-ratio
Nitrification
The Magnificent Possession (Isaac Asimov short story)
Ammonium hydroxide
References
Cations | Ammonium | Physics,Chemistry | 1,815 |
1,071,026 | https://en.wikipedia.org/wiki/Poudre%20B | Poudre B was the first practical smokeless gunpowder created in 1884. It was perfected between 1882 and 1884 at "Laboratoire Central des Poudres et Salpêtres" in Paris, France. Originally called "Poudre V" from the name of the inventor, Paul Vieille, it was arbitrarily renamed "Poudre B" (short for poudre blanche—white powder, as distinguished from black powder) to distract German espionage. "Poudre B" is made from 68.2% insoluble nitrocellulose, 29.8% soluble nitrocellulose gelatinized with ether and 2% paraffin. "Poudre B" is made up of very small paper-thin flakes that are not white but dark greenish grey in colour. "Poudre B" was first used to load the 8mm Lebel cartridges issued in 1886 for the Lebel rifle.
History
German-Swiss chemist Christian Friedrich Schönbein created the explosive substance nitrocellulose, or "guncotton", in 1846 by treating cotton fibers with a nitric acid and sulfuric acid mixture. However, guncotton proved to be too fast burning for direct use in firearms and artillery ammunition. French chemist Paul Vieille then followed the findings of Schönbein in 1882–1884 and, after much trial and error, succeeded in transforming guncotton into a colloidal substance by gelatinizing it in an alcohol-ether mixture which he then stabilized with amyl alcohol. He then used roller presses to transform this gelatinized colloidal substance into extremely thin sheets which, after drying, were cut up into small flakes. This single-base smokeless powder was originally named "Poudre V" after the inventor's name. That denomination was later changed arbitrarily to "Poudre B" in order to distract German espionage. The original "Poudre B" of 1884 was almost immediately replaced by improved "Poudre BF(NT)" in 1887. In 1896, "Poudre BF(NT)" was replaced by "Poudre BF(AM)", which was followed by "Poudre BN3F" in 1901. The latter was stabilized with the antioxidant diphenylamine instead of amyl alcohol, and it gave safe and regular performance as the standard French gunpowder used during World War I (1914–1918). It was followed during the 1920s by "Poudre BN3F(Ae)" and later by "Poudre BPF1", which remained in service until the 1960s.
Unlike some other countries, the French military had been wary of double-base propellant from the very beginning (modern research shows that Vieille had already discovered it in 1884–1885 and noted its high flame temperatures leading to bore erosion, which led the French military to conclude it was unsuitable for military use), only using it for smoothbore mortars after 1918 and in 330–380 mm naval guns in the 1930s.
Performance
Three times more powerful than black powder for the same weight, and not generating large quantities of smoke, Poudre B gave the user a huge tactical advantage. It was hastily adopted by the French military in 1886, followed by all the major military powers within a few years.
Prior to its introduction, a squad of soldiers firing volleys would be unable to see their targets after a few shots, while their own location would be obvious because of the cloud of smoke hanging over them. The higher power of the new powder gave a higher muzzle velocity, which in turn produced a flatter bullet trajectory and thus a longer range. It also required lesser volumes of gunpowder and allowed a smaller caliber, thus lighter bullets, so a soldier could carry more ammunition. The French Army quickly introduced a new rifle, the Lebel Model 1886 firing a new 8 mm calibre cartridge, to exploit these benefits.
"Also black powder leaves a heavy residue in the bore. With the best conditions this residue causes a slight falling off of accuracy after from five to fifty shots have been fired from a rifle without cleaning, and when it was attempted to increase the velocity by decreasing the caliber, lengthening the bullet, and increasing the powder charge, the increase in the residue was so great as to destroy the accuracy unless the bore was cleaned after every shot."
Stability and safety
The earliest "Poudre B" tended to eventually become unstable, which has been attributed to evaporation of the volatile solvents, but may also have been due to the difficulty in fully removing the acids used to make guncotton. In the early years of their use both the original Poudre B and guncotton led to accidents. For example, two French battleships, the Iéna and the Liberté, blew up in Toulon harbour, in 1907 and 1911, respectively, with heavy loss of life. By the late 1890s, safer smokeless powders had been developed, including improved and stabilized versions of "Poudre B" (e.g. Poudres BN3F and BPF1), and ballistite and cordite from the late 1880s. The guncotton problem is not completely solved even today, as an occasional batch of smokeless powder will still deteriorate, although this is extremely rare.
References
External links
Explosives
Firearm propellants
19th-century inventions | Poudre B | Chemistry | 1,109 |
14,717,987 | https://en.wikipedia.org/wiki/Global%20element | In category theory, a global element of an object A from a category is a morphism
where is a terminal object of the category. Roughly speaking, global elements are a generalization of the notion of "elements" from the category of sets, and they can be used to import set-theoretic concepts into category theory. However, unlike a set, an object of a general category need not be determined by its global elements (not even up to isomorphism). For example, the terminal object of the category Grph of graph homomorphisms has one vertex and one edge, a self-loop, whence the global elements of a graph are its self-loops, conveying no information either about other kinds of edges, or about vertices having no self-loop, or about whether two self-loops share a vertex.
In an elementary topos the global elements of the subobject classifier form a Heyting algebra when ordered by inclusion of the corresponding subobjects of the terminal object. For example, Grph happens to be a topos, whose subobject classifier is a two-vertex directed clique with an additional self-loop (so five edges, three of which are self-loops and hence the global elements of ). The internal logic of Grph is therefore based on the three-element Heyting algebra as its truth values.
A well-pointed category is a category that has enough global elements to distinguish every two morphisms. That is, for each pair of distinct arrows in the category, there should exist a global element whose compositions with them are different from each other.
References
Objects (category theory) | Global element | Mathematics | 337 |
28,053,991 | https://en.wikipedia.org/wiki/Theodor%20von%20Schubert | Friedrich Theodor von Schubert (30 October 1758 – 21 October 1825) was a German astronomer and geographer.
Life and works
Born in Helmstedt, his father, Johann Ernst Schubert, was a professor of theology and abbot of Michaelstein Abbey. Theodor likewise studied theology, but didn't like it. He traveled abroad, first to Sweden in 1779. He then went to Bartelshagen, where he became the tutor of the children of Major von Cronhelm. Since the major was fond of mathematics and astronomy, Theodor had to study these himself to be able to teach those subjects. He then married the daughter of the major, Luise Friederike von Cronhelm. Afterwards, he traveled to Tallinn in Estonia, again as a house teacher. He moved on to Haapsalu, teaching mathematics to young noblemen as a preparation for a life as an officer. In 1785 he became an assistant of the Russian Academy of Sciences as a geographer, and by June 1789 he was a full member. In 1803, he became head of the astronomical observatory of the Academy. In 1805, he was a member of the failed Russian expedition to China, together with his son. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1812.
He not only produced some scientific works, but also helped popularize astronomy. Between 1788 and 1825, he published the St. Petersburger Kalender, and between 1808 and 1818 the St. Petersburger astronomischen Taschenkalender. He also wrote for the newspapers and for the German language St. Petersburger Zeitung, which he edited from 1810 until his death.
His son Friedrich von Schubert was a general in the Russian army and explorer.
The lunar crater Schubert is named after him.
Bibliography
Populäre Astronomie. 3 volumes, Petersburg (1808–10)
Theoretische Astronomie. 3 volumes, Petersburg 1798. Translated in French as Traite d'astronomie theorique, published in 1834 by Perthes & Besser.
Astronomische Bestimmung der Längen u. Breiten. Petersburg 1806 (reprinted, and translated in Russian)
Geschichte der Astronomie. Petersburg 1804
Vermischte Schriften. 7 volumes, Tübingen 1823–26 (4 volumes) and Leipzig 1840 (3 volumes).
References
Biography from Pierer's Universal-Lexikon, 4th edition 1857–1865 (in German)
Biography from the Allgemeine Deutsche Biographie, 1891 (in German, at Wikisource)
1758 births
1825 deaths
People from Helmstedt
19th-century German astronomers
19th-century astronomers from the Russian Empire
Fellows of the American Academy of Arts and Sciences
Full members of the Saint Petersburg Academy of Sciences
Historians of astronomy
Members of the Royal Swedish Academy of Sciences
18th-century German astronomers | Theodor von Schubert | Astronomy | 579 |
47,451,878 | https://en.wikipedia.org/wiki/Suillus%20subacerbus | Suillus subacerbus is a species of bolete fungus in the family Suillaceae. Described as new to science in 1968 by mycologist Robert Francis Ross McNabb, it is found in New Zealand, where it grows in association with Pinus radiata.
Description
Its fruitbodies produce convex to flattened caps measuring in diameter. The cap colour is quite variable, ranging from creamy yellow to olive grey initially, later becoming ochre, brown, or reddish orange (sometimes streaked) as the fruitbody matures. The pore surface on the cap underside is dull yellow, later becoming darker. Pores are small and angular, about 0.75–1 mm in diameter. The stipe measures long by up to wide, and lacks a ring.
The spore print is yellowish brown. Spores are smooth elliptical, with typical dimensions of 7.8–9.1 by 3.0–3.6 μm.
The complex of species that include Suillus granulatus, S. pungens, and S. acerbus appear to be closely related. McNabb suggests that Suillus subacerbus is "probably of North American origin".
References
External links
subacerbus
Fungi described in 1968
Fungi of New Zealand
Fungus species | Suillus subacerbus | Biology | 259 |
31,300,355 | https://en.wikipedia.org/wiki/Motivational%20enhancement%20therapy | Motivational enhancement therapy (MET) is a time-limited, four-session adaptation used in Project MATCH, a US-government-funded study of treatment for alcohol problems, and the "Drinkers' Check-up", which provides normative-based feedback and explores client motivation to change in light of the feedback. It is a development of motivational interviewing and motivational therapy. It focuses on the treatment of alcohol and other substance use disorders. The goal of the therapy is not to guide the patient through the recovery process, but to invoke inwardly motivated change through motivational strategies. The method has two elements: initial assessment battery session, and two to four individual therapeutic sessions with a therapist. During the first session, the specialist stimulates discussion on the patient's experiences with substance use disorder and elicits self-motivational statements by providing feedback to the initial assessment. The principles of MET are utilized to increase motivation and develop a plan for further change; coping strategies are also presented and talked over with the patient. Changes in the patients behavior are monitored and cessation strategies used are reviewed by the therapist in the subsequent sessions, where patients are encouraged to sustain abstinence and progress.
Motivational enhancement therapy is effective in helping adolescents because it focuses on the relationship of the counselor and the counselee. The most effective way to integrate this form of therapy is by light guidance directed to the intrinsic desire of the individual to change. Most adolescents will not trust their counselors which is why it is important to develop this relationship. By providing an environment that is receptive to change, a counselee can find this intrinsic motivation. A unique aspect of motivational enhancement therapy is that it is uniquely tailored to support adolescents that struggle with substance abuse by matching their attributes and readiness/willingness to change.
Affective change concerns the emotional side of experiencing an insight into a solution. Such moments can build confidence and have a positive effect on the person. The affective dimension deals with changes in epistemic emotions: in order to invoke a change, there must be some sense of satisfaction in making the change, and change can affect a person both emotionally and physically.
Motivational change involves a change in beliefs and attitudes: if a person sets their mind to it, they can change the behavior. Satisfaction with a change supports a renewed sense of oneself. Statistical modeling has also been used to study how motivation to change develops. Self-regulation describes the quality of progress towards a goal; making small steps towards a goal gives a sense of achievement as progress is made, and that effort can create positive feelings towards the goal.
Problem-finding moments usually lead to moments in which a solution can be found. This matters because, as people try to change their behavior, there will be problematic moments, and finding a solution to such a problem teaches them to reach the goal by looking at it in a different way.
Process
Motivational enhancement therapy is a strategy of therapy that involves a variation of motivational interviewing to analyze feedback gained from client sessions. Motivational Interviewing was originated by William Miller and Stephen Rollnick based on their experiences treating problem drinkers. The idea of Motivational Interviewing is based on engaging the client to pursue a behavior change. The method revolves around goal making, with assistance from the counselor to help guide the client to that specific set goal. This concept of motivational interviewing later developed into motivational enhancement therapy. The goal of this therapy is to help lead the client to achieve the goals they have set for themselves. Its aim is to provide the client with the opportunity to develop a focus in their life, other than their addiction.
The MET approach is grounded on the trans-theoretical perspective that "individuals move through a series of stages of change as they progress in modifying problem behaviors". In understanding change, this concept of stages is notable. Every stage has certain processes used and specific tasks to be accomplished in order to achieve change. MET focuses on motivational strategies using the client's own resources rather than training them through recovery step by step. This approach is very personal to each individual client it is used with, centered around the main goal of evoking change. Oftentimes individuals who undergo motivational transformation can subjectively experience a sudden realization or understanding of a formerly perplexing situation. Like a light bulb illuminating a dark room, an otherwise dark and bewildering issue can be made clear within an individual's internal self-concept. This is termed as an "aha moment", and can aid individuals in their newfound sense of focus in life.
Reality therapy is a closely related form of therapeutical work that works specifically with the present state of life. It stresses improving relationships through our choices. It asserts that even though we cannot control how we feel we do have control over our thoughts and actions. Through this, a client will be able to achieve control over their life and work toward improving the aspects they are dissatisfied with.
Therapists use change talk to counter a client's belief that change is impossible. Clients who come to think they can do it are more likely to succeed in achieving their goal. The mind is powerful and can change the way a situation is viewed; this helps because clients change their minds about using drugs and begin to reject the idea of taking substances to fulfill a need.
Patients/Clients
Addicts are one of the primary populations motivational enhancement therapy lends an aid to. The therapist works closely with the client to help create an inner willingness to fight their addiction. Unlike other therapy or counseling programs that offer a step-by-step process, MET focuses on creating an internally motivated change. A typical therapy session consists of an initial assessment, and two to four treatment sessions with the therapist. In the initial session, the therapist conducts a discussion about the client's substance use. They encourage the use of self-motivational statements through Motivational Interviewing. It is in this first session where a plan for change is established between the therapist and client. The following sessions are based around achieving that plan. Early research studies have indicated that psychedelics paired with MET can result in increased levels of abstinence, and the decrease of relapse and heavy drinking days. Further experimentation with the combining of psychedelics and MET could provide additional support for individuals struggling with alcoholism.
MET has become increasingly effective. As it is rooted in the idea of self-motivation, those who seek help genuinely want it. It is also known by the National Institute on Alcohol Abuse and Alcoholism (NIAAA) to be one of the most cost-effective methods available.
Key components
There are 5 key components to motivational enhancement therapy:
Express empathy – therapists seek to build trust and respect with the patient, making sure that each individual knows that the decision to change is ultimately up to him/her. The therapist acts as both a "supportive companion and knowledgeable consultant" in meetings.
Develop discrepancy – Client's attention is enhanced and focused on discrepancies. Raising a client's awareness of personal consequences brings about a motivation for change, allowing the client to willingly discuss options to change "in order to reduce the perceived discrepancy and regain emotional equilibrium".
Avoid argument – Arguments will be avoided and not engaged in. Therapists use strategies to help clients see true consequences and reduce the "perceived positive aspects" of behaviors, such as drinking alcohol.
Rolling with resistance – Resistance of some kind will inevitably exist, so MET encourages the therapist to "roll with" these resistances, "with a goal of shifting clients perceptions". Rather than therapists providing solutions, they are usually "evoked from the client".
Support self-efficacy – Self-efficacy is defined as the way people view their own competence and achieve their own goals. Therapists encourage clients to realize they are capable of many things, including having the strength to give up alcohol.
References
Sources
Miller, W. R. (2000) Motivational Enhancement Therapy: Description of Counseling Approach. in Boren, J. J. Onken, L. S., & Carroll, K. M. (Eds.) Approaches to Drug Abuse Counseling, US Department of Health and Human Services; NIH Publication No. 00-4151 edition (2000)
Miller, W.R. and Rollnick, S. Motivational Interviewing: Preparing People for Change. NY: Guilford Press, 2002.
Miller, W.R., Zweben, A., DiClemente, C.C., Rychtarik, R.G. (1994) 'Motivational Enhancement Therapy Manual. Washington, DC:National Institute on Alcohol Abuse and Alcoholism, Project MATCH Monograph Series, Volume 2.
Sussex Publishers. (n.d.). Reality therapy. Psychology Today. https://www.psychologytoday.com/us/therapy-types/reality-therapy
Motivation
Psychotherapy by type
Alcohol and health
Substance-related disorders | Motivational enhancement therapy | Biology | 1,819 |
70,453,928 | https://en.wikipedia.org/wiki/Pi2%20Octantis | {{DISPLAYTITLE:Pi2 Octantis}}
Pi2 Octantis, Latinized from π2 Octantis, is a solitary star situated in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.64, allowing it to be faintly visible to the naked eye under ideal conditions. Located 1,570 light years away, the star is approaching the Sun, as shown by its negative heliocentric radial velocity.
This object is an ageing late G-type supergiant that has 7 times the mass of the Sun and 69.02 times the radius of the Sun. It radiates from its enlarged photosphere at an effective temperature of 4,588 K, giving it an orange-yellow glow. Despite its advanced state, Pi2 Octantis is still a young star at an age of 43 million years. It spins modestly, with a low projected rotational velocity.
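As a rough consistency check, the luminosity implied by the quoted radius and effective temperature follows from the Stefan–Boltzmann law; the solar effective temperature of about 5,772 K used below is an assumed reference value, so the result is only an order-of-magnitude estimate:

\frac{L}{L_{\odot}} = \left(\frac{R}{R_{\odot}}\right)^{2} \left(\frac{T_{\mathrm{eff}}}{T_{\odot}}\right)^{4} \approx 69.02^{2} \left(\frac{4588}{5772}\right)^{4} \approx 1.9 \times 10^{3},

i.e. roughly two thousand times the luminosity of the Sun.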
References
Octans
Octantis, Pi2
G-type supergiants
Octantis, 22
Durchmusterung objects
131246
73771
5545 | Pi2 Octantis | Astronomy | 224 |
22,974,812 | https://en.wikipedia.org/wiki/Morbid%20map | In genetics, a morbid map is a chart or diagram of diseases and the chromosomal location of genes the diseases are associated with. A morbid map exists as an appendix of the Online Mendelian Inheritance in Man (OMIM) knowledgebase, listing chromosomes and the genes mapped to specific sites on those chromosomes, and this format most clearly reveals the relationship between gene and phenotype.
References
External links
Morbid map at OMIM
Further reading
Genetics | Morbid map | Biology | 94 |
38,559,533 | https://en.wikipedia.org/wiki/HD%20161840 | HD 161840 is a single, blue-white hued star in the southern zodiac constellation of Scorpius. It is faintly visible to the naked eye with an apparent visual magnitude of 4.79. Based on its annual parallax shift, it is located roughly 500 light years from the Sun. It is moving closer with a heliocentric radial velocity of −13 km/s.
There has been some uncertainty as to the classification of this star. Houk (1979) lists a stellar class of B8 Ib/II for HD 161840, which corresponds to a B-type bright giant/lesser supergiant mix. Multiple studies still use an older classification of B8 V, suggesting instead this is a B-type main-sequence star. Garrison and Gray (1994) assigned it a class of B8 III-IV, which would put it on the subgiant/giant star track. It has an estimated 3.93 times the mass of the Sun and 3.2 times the Sun's radius. The star is radiating 565 times the Sun's luminosity from its photosphere at an effective temperature of 11,066 K.
References
B-type main-sequence stars
B-type bright giants
Scorpius
Durchmusterung objects
161840
087220
6628 | HD 161840 | Astronomy | 271 |
470,768 | https://en.wikipedia.org/wiki/Interoperable%20Object%20Reference | An Interoperable Object Reference (IOR) is a CORBA or RMI-IIOP reference that uniquely identifies an object on a remote CORBA server.
IORs can be transmitted in binary over TCP/IP via the General Inter-ORB Protocol (the encoding may be big-endian or little-endian), or serialized into a string of hexadecimal digits (prefixed by the string IOR:) to facilitate transport by non-CORBA mechanisms such as HTTP, FTP, and e-mail.
The internal structure of an IOR may contain multiple components. Each component is identified by its integer code and has its own binary format. The Object Management Group assigns the codes. The typical IOR normally contains:
the IP address of the remote host,
the number of the remote port on that the CORBA server is listening,
a string defining the class of the remote object on which the methods will be invoked, and
the object key that is used by the server ORB to identify the object.
It is possible to register special objects (IOR interceptors) that can add the needed specific components to the IOR being created by the particular ORB.
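As an illustration of the pieces listed above, the following toy sketch shows a host, port, type identifier and object key being packed into an "IOR:"-prefixed hexadecimal string. This is a deliberately simplified, hypothetical layout for readability: a real IOR is CDR-encoded as specified by the OMG/GIOP, and the field separator, repository id and object key used here are invented.

```python
from dataclasses import dataclass

@dataclass
class ToyIOR:
    type_id: str        # repository id of the remote interface (hypothetical example below)
    host: str           # IP address or hostname of the remote CORBA server
    port: int           # TCP port the server ORB listens on
    object_key: bytes   # opaque key the server ORB uses to identify the object

    def stringify(self) -> str:
        # NOT the real CDR encoding -- just a readable stand-in for the idea of
        # "serialize the profile, then hex-encode it behind an IOR: prefix".
        raw = b"|".join([self.type_id.encode(), self.host.encode(),
                         str(self.port).encode(), self.object_key])
        return "IOR:" + raw.hex()

ref = ToyIOR("IDL:Bank/Account:1.0", "192.0.2.10", 2809, b"servant-42")
print(ref.stringify())
```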
Common Object Request Broker Architecture | Interoperable Object Reference | Technology | 245 |
6,213,198 | https://en.wikipedia.org/wiki/Sincalide | Sincalide (INN) is a cholecystokinetic drug administered by injection to aid in diagnosing disorders of the gallbladder and pancreas. It is the 8-amino acid C-terminal fragment of cholecystokinin, and also known as CCK-8.
Common adverse effects following administration include abdominal discomfort and nausea. These effects are more pronounced following rapid infusion.
Clinical Use
Indications
Sincalide may be used to stimulate gallbladder contraction, as may be assessed by contrast agent cholecystography or ultrasonography, or to obtain by duodenal aspiration a sample of concentrated bile for analysis of cholesterol, bile salts, phospholipids, and crystals. It can also be used to stimulate pancreatic secretion (especially in conjunction with secretin) prior to obtaining a duodenal aspirate for analysis of enzyme activity, composition, and cytology. In some instances it is used to accelerate the transit of a barium meal through the small bowel, thereby decreasing the time and extent of radiation associated with fluoroscopy and x-ray examination of the intestinal tract.
References
External links
Peptide hormones
Octapeptides | Sincalide | Chemistry | 251 |
51,181,166 | https://en.wikipedia.org/wiki/Hylotelephium%20hybrids | Hylotelephium, syn. Sedum, is a genus of flowering plants in the family Crassulaceae. Various species have been hybridized by horticulturalists to create new cultivars. Many of the newer ones are patented.
Hylotelephium hybrids
Those cultivars marked have been given an Award of Garden Merit by the Royal Horticultural Society.
'Bertram Anderson' - very similar to "Vera Jameson," and with the same parentage, this is a newer and "improved" version of this cross (rose-red)
’Carl’ - rose-pink
'Dazzleberry' - Parentage unknown. Patented.
'Herbstfreude' - This hybrid is also known in English as "Autumn Joy," which is a literal translation from the German. It is a hybrid between Sedum telephium and H. spectabile. It is self-sterile, as it exhibits female flower parts only
’Marchant’s Best Red’ - deep reddish pink
’Matrona’ - pale pink flowers
’Mr Goodbud’ (PBR) - pink-purple: breeder’s rights protect this cultivar from unauthorised propagation
’Red Cauli’ - bright pink
’Ruby Glow’ - deep crimson-purple
'Vera Jameson' - This is reportedly a natural hybrid discovered in her garden one day by Ms. Jameson. It is said to be a cross between Sedum telephium var. maximum 'Atropurpureum' and H. cauticolum 'Ruby Glow'
References
Hybrid plants | Hylotelephium hybrids | Biology | 325 |
29,359,386 | https://en.wikipedia.org/wiki/John%20Brown%20%28artist%29 | John Brown (1752 – September 5, 1787) was a Scottish artist.
Biography
John Brown was born around 1752, in Edinburgh, Scotland, the son of a watchmaker. He studied in Edinburgh at the Trustees' Academy. Around 1769 he traveled to Rome, where he became a pupil of Alexander Runciman. They became strong friends.
For the next eleven years he lived in Rome. In Italy and Sicily he made sketches of the ruins of ancient buildings for his Scottish patrons, William Townley and Sir William Young, and sent drawings to the Royal Academy.
Brown worked on a small scale and favoured pencil, pen and wash as his media. Notable among his drawings are a number of genre scenes, such as Two Men in Conversation (c. 1775–80; Courtauld Institute, London), which show the influence of Henry Fuseli, with whom Brown was friendly.
In 1780 Brown returned to Scotland, and over the next several years drew many portraits of dignitaries, including twenty-five portraits of members of the Society of Scottish Antiquaries.
He lived in London in 1786–87, and exhibited miniature portraits. He returned to Scotland in ill health and died at Leith, Edinburgh's harbour area, in 1787.
Notes
References
1752 births
1787 deaths
Draughtsmen
Scottish portrait painters
Artists from Edinburgh
18th-century Scottish artists
18th-century Scottish male artists
Alumni of the Trustees' Academy | John Brown (artist) | Engineering | 280 |
30,280,089 | https://en.wikipedia.org/wiki/Bochner%E2%80%93Martinelli%20formula | In mathematics, the Bochner–Martinelli formula is a generalization of the Cauchy integral formula to functions of several complex variables, introduced by and .
History
Bochner–Martinelli kernel
For ζ, z in ℂ^n, the Bochner–Martinelli kernel ω(ζ, z) is a differential form in ζ of bidegree (n, n−1) defined by
\omega(\zeta, z) = \frac{(n-1)!}{(2\pi i)^n} \, \frac{1}{|z - \zeta|^{2n}} \sum_{1 \le j \le n} (\bar\zeta_j - \bar z_j) \; d\bar\zeta_1 \wedge d\zeta_1 \wedge \cdots \wedge d\zeta_j \wedge \cdots \wedge d\bar\zeta_n \wedge d\zeta_n
(where the term d\bar\zeta_j is omitted).
Suppose that f is a continuously differentiable function on the closure of a domain D in ℂ^n with piecewise smooth boundary ∂D. Then the Bochner–Martinelli formula states that if z is in the domain D then
f(z) = \int_{\partial D} f(\zeta) \, \omega(\zeta, z) - \int_{D} \bar\partial f(\zeta) \wedge \omega(\zeta, z).
In particular if f is holomorphic the second term vanishes, so
f(z) = \int_{\partial D} f(\zeta) \, \omega(\zeta, z).
See also
Bergman–Weil formula
Notes
References
.
.
.
.
.
.
, (ebook).
. The first paper where the now called Bochner-Martinelli formula is introduced and proved.
. Available at the SEALS Portal . In this paper Martinelli gives a proof of Hartogs' extension theorem by using the Bochner-Martinelli formula.
. The notes form a course, published by the Accademia Nazionale dei Lincei, held by Martinelli during his stay at the Accademia as "Professore Linceo".
. In this article, Martinelli gives another form to the Martinelli–Bochner formula.
Theorems in complex analysis
Several complex variables | Bochner–Martinelli formula | Mathematics | 262 |
40,260,097 | https://en.wikipedia.org/wiki/List%20of%20smallest%20known%20stars | This is a list of stars, neutron stars, white dwarfs and brown dwarfs which are the least voluminous known (the smallest stars by volume).
List
Notable small stars
This is a list of small stars that are notable for characteristics that are not separately listed.
Smallest stars by type
Timeline of smallest red dwarf star recordholders
Red dwarfs are considered the smallest known stars that sustain active fusion, and are the smallest possible stars that are not brown dwarfs.
Notes
References
Volume, least
S | List of smallest known stars | Physics,Mathematics | 100 |
1,417,120 | https://en.wikipedia.org/wiki/Seliger%20Rocket | Seliger Rocket is the designation for the sounding rockets of the Berthold Seliger Forschungs- und Entwicklungsgesellschaft mbH. They were
A single-stage rocket with a length of 3.4 metres and a takeoff thrust of 50 kN. This rocket was first launched on November 19, 1962, near Cuxhaven and reached a height of 40 km.
A two-stage rocket with a length of 6 metres and a takeoff thrust of 50 kN. This rocket was first launched on February 7, 1963, and reached a height of 80 km.
A three-stage rocket with a length of 12.8 metres, a diameter of 0.56 metres and a takeoff thrust of 50 kN. This rocket was first launched on May 2, 1963, with reduced fuel and reached an altitude of 120 km. Later with maximum fuel it reached a height of 150 km.
All Seliger Rockets return to the ground by parachute. The single-stage version was completely reusable. Additional single and two-stage rockets were developed in 1963, which could be also used for military purposes. There were flight demonstrations of these rockets to military representatives of non-NATO countries on December 5, 1963.
References
See also
Rocket experiments in the area of Cuxhaven
External links
https://web.archive.org/web/20050119092811/http://www.astronautix.com/lvs/selocket.htm
Sounding rockets of Germany | Seliger Rocket | Astronomy | 307 |
19,977,844 | https://en.wikipedia.org/wiki/Voodoo%20Envy | The Voodoo Envy 133 was a notebook computer designed by VoodooPC after its acquisition by Hewlett-Packard. It was positioned as a mobile ultraportable notebook and was introduced at HP's Connecting Your World Live event in Berlin, Germany on June 10, 2008.
Overview
The chassis of the Voodoo Envy is made of carbon fiber, making the machine thin and light all around. The system utilizes the Windows Vista operating system as well as a Linux kernel dubbed "Voodoo Instant On" or "Voodoo IOS." The laptop has often been compared to the MacBook Air for its similar size and specifications. HP claimed it to be the world's thinnest notebook, although this record has since been broken: the Envy is 0.70 inches thick throughout, whereas the Dell Adamo is 0.65 inches thick all around.
According to the specifications, its 3-cell lithium-ion battery will provide up to 3 hours and 10 minutes' battery life, depending on usage.
The HP Envy line of laptops and other products replaced the Voodoo Envy when HP and VoodooPC merged.
References
External links
VoodooPC web site
VoodooPC community site
HP laptops
Computer-related introductions in 2008 | Voodoo Envy | Technology | 238 |
14,862,734 | https://en.wikipedia.org/wiki/Alpha-4%20beta-2%20nicotinic%20receptor | The alpha-4 beta-2 nicotinic receptor, also known as the α4β2 receptor, is a type of nicotinic acetylcholine receptor implicated in learning, consisting of α4 and β2 subunits. It is located in the brain, where activation yields post- and presynaptic excitation, mainly by increased Na+ and K+ permeability.
Stimulation of this receptor subtype is also associated with growth hormone secretion. People with the inactive CHRNA4 mutation Ser248Phe are an average of 10 cm (4 inches) shorter than average and predisposed to obesity. A 2015 review noted that stimulation of the α4β2 nicotinic receptor in the brain is responsible for certain improvements in attentional performance; among the nicotinic receptor subtypes, nicotine has the highest binding affinity at the α4β2 receptor (Ki = 1 nM), which is also the primary biological target that mediates nicotine's addictive properties.
The receptors exist in the two stoichiometries:
(α4)2(β2)3 receptors have high sensitivity to nicotine and low Ca2+ permeability (HS receptors)
(α4)3(β2)2 receptors have low sensitivity to nicotine and high Ca2+ permeability (LS receptors)
Structure
The α4β2 receptor assembles in two distinct stoichiometric forms. One stoichiometry contains three α4 and two β2 subunits [ (α4)3(β2)2 ] whereas the other stoichiometry contains two α4 and three β2 [ (α4)2(β2)3 ].
The X-ray structure of the (α4)2(β2)3 receptor has been known since 2016 and reveals a circular α–β–β–α–β ordering of subunits.
Ligands
Source:
Agonists
3-Bromocytisine
Acetylcholine
Cytisine
Galantamine
Epibatidine
Epiboxidine
Nicotine
A-84,543
A-366,833
ABT-418
Arecoline
Altinicline
Dianicline
Ispronicline
Pozanicline
Rivanicline
Tebanicline
TC-1827
Varenicline
Sazetidine A: full agonist on (α4)2(β2)3, 6% efficacy on (α4)3(β2)2
N-(3-pyridinyl)-bridged bicyclic diamines
PAMs
NS-9283: 60-fold left-shifting of concentration-response curve, no change in maximum efficacy
Desformylflustrabromine
Further compounds (see references)
Antagonists
(−)-7-methyl-2-exo-[3'-(6-[18F]fluoropyridin-2-yl)-5'-pyridinyl]-7-azabicyclo[2.2.1]heptane
2-fluoro-3-(4-nitro-phenyl)deschloroepibatidine
Coclaurine - alkaloid from Nelumbo nucifera
Mecamylamine
α-Conotoxin
PNU-120,596
Bupropion
Dihydro-β-erythroidine, selective
Nitrous oxide
Isoflurane
1-(6-(((R,S)-7-Hydroxychroman-2-yl)methylamino]hexyl)-3-((S)-1-methylpyrrolidin-2-yl)pyridinium bromide (compound 2) (heterobivalent ligand: D2R agonist and nAChR antagonist)
NAMs
Oxantel
See also
α3β2-Nicotinic receptor
α3β4-Nicotinic receptor
α6β2-Nicotinic receptor
α7-Nicotinic receptor
References
Ion channels
Addiction
Nicotinic acetylcholine receptors | Alpha-4 beta-2 nicotinic receptor | Chemistry | 862 |
70,347,595 | https://en.wikipedia.org/wiki/Phil%20Bagwell | Phil Bagwell (died 6 October 2012) was a computer scientist known for his work and influence in the area of persistent data structures. He is best known for his 2000 invention of hash array mapped tries.
Bagwell was probably the most influential researcher in the field of persistent data structures from 2000 until his death. His work is now a standard part of the runtimes of functional programming languages including Clojure, Scala, and Haskell.
His contributions to building the Scala community are remembered in the Phil Bagwell Memorial Scala Community Award.
Publications
"Ideal Hash Trees" (2000), EPFL Technical Report
"Fast Functional Lists, Hash-Lists, Deques and Variable Length Arrays" (2002), EPFL Technical Report
References
2012 deaths
Year of birth missing
Computer scientists | Phil Bagwell | Technology | 157 |
434,920 | https://en.wikipedia.org/wiki/Finite%20Fourier%20transform |
In mathematics the finite Fourier transform may refer to either
another name for the discrete-time Fourier transform (DTFT) of a finite-length series. E.g., F. J. Harris (pp. 52–53) describes the finite Fourier transform as a "continuous periodic function" and the discrete Fourier transform (DFT) as "a set of samples of the finite Fourier transform". In actual implementation, that is not two separate steps; the DFT replaces the DTFT. So J. Cooley (pp. 77–78) describes the implementation as the discrete finite Fourier transform. (A short numerical illustration of this sampling relationship follows the list below.)
or
another name for the Fourier series coefficients.
or
another name for one snapshot of a short-time Fourier transform.
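The relationship described in the first item above, namely the DFT as a set of samples of the DTFT of a finite-length series, can be checked numerically. The sketch below uses NumPy and an arbitrary made-up sequence:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])   # finite-length series (made-up values)
N = len(x)
n = np.arange(N)

# DTFT of x evaluated ("sampled") at the N frequencies 2*pi*k/N, k = 0..N-1
dtft_samples = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

# The DFT returns exactly those samples.
print(np.allclose(dtft_samples, np.fft.fft(x)))   # True
```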
See also
Fourier transform
Notes
References
Further reading
Rabiner, Lawrence R.; Gold, Bernard (1975). Theory and application of digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. pp 65–67. .
Transforms
Fourier analysis
Fourier series | Finite Fourier transform | Mathematics | 201 |
1,433,334 | https://en.wikipedia.org/wiki/Test%20script | A test script in software testing is a set of instructions that will be performed on the system under test to test that the system functions as expected.
Types of test scripts
There are various means for executing test scripts; the last two types listed below are also used in manual testing.
Manual testing. These are more commonly called test cases.
Automated testing.
Short program written in a programming language used to test part of the functionality of a software system. Such test scripts can either be written using a special automated functional GUI test tool (such as HP QuickTest Professional, Borland SilkTest, IBM TPNS and Rational Robot) or in a well-known programming language (such as C++, C#, Tcl, Expect, Java, PHP, Perl, Powershell, Python, or Ruby), as documented in IEEE, ISO and IEC standards. A minimal example of such a script is shown after this list.
Extensively parameterized short programs a.k.a. Data-driven testing
Reusable steps created in a table a.k.a. keyword-driven or table-driven testing.
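The sketch below is a minimal automated test script of the "short program" kind described above, written with Python's standard unittest module; the function under test is invented purely for illustration.

```python
import unittest

def normalize_username(name: str) -> str:
    """Hypothetical function under test: trims whitespace and lower-cases a username."""
    return name.strip().lower()

class NormalizeUsernameTest(unittest.TestCase):
    def test_strips_surrounding_whitespace(self):
        self.assertEqual(normalize_username("  Alice "), "alice")

    def test_lowercases_letters(self):
        self.assertEqual(normalize_username("BOB"), "bob")

if __name__ == "__main__":
    unittest.main()   # each test either passes or reports a failure
```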
Usage and functionality
Automated tests may be executed continuously without the need for human intervention; they are easily repeatable and often faster. Automated tests are useful in situations where the test is to be executed several times, for example as part of regression testing. Automated tests can be disadvantageous when poorly written, leading to incorrect testing or broken tests being carried out.
Automated tests can, like any piece of software, be poorly written or simply break during playback, and they can only examine what they have been programmed to examine. Since most systems are designed with human interaction in mind, it is good practice that a human tests the system at some point. A trained manual tester can notice that the system under test is misbehaving without being prompted or directed. When used in regression testing, manual testers can find new bugs while ensuring that old bugs do not reappear, while an automated test can only ensure the latter. Mixed testing, with automated and manual testing, is often used: automating what needs to be tested often and can be easily checked by a machine, and using manual testing for test design and exploratory testing.
One should consider the return on investment for automating any given test script, i.e. whether the cost to build and maintain the script is less than the cost of simply executing the test manually. Cost here can be measured in terms of time and/or money, but also includes the opportunity cost of not freeing up people to do other work.
See also
Software testing
Unit test
Test plan
Test suite
Scenario testing
Session-based testing
References
Software testing | Test script | Engineering | 546 |
3,968,454 | https://en.wikipedia.org/wiki/Hugo%20Rietveld | Hugo M. Rietveld (7 March 1932 – 16 July 2016) was a Dutch crystallographer who is infamous for single-handedly publishing the joint work of Loopstra, van Laar and himself on the full profile refinement method in powder diffraction, which later became known as the Rietveld refinement method. The method was developed by Loopstra and van Laar and programmed in ALGOL by Rietveld to refine neutron diffraction data, but is applicable to other diffraction experiments as well, such as X-ray diffraction. The Rietveld refinement uses a least squares approach to refine a theoretical line profile (calculated from a known or postulated crystal structure) until it matches the measured profile. The introduction of this technique, which used the full profile instead of individual reflections, was a significant step forward in the diffraction analysis of powder samples.
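As a schematic illustration of the full-profile, least-squares idea described above (and only that; a real Rietveld refinement varies structural and instrumental parameters, not just peak heights and widths), the sketch below fits a simulated powder pattern made of Gaussian peaks with SciPy. All numbers are invented.

```python
import numpy as np
from scipy.optimize import least_squares

two_theta = np.linspace(10, 60, 1000)

def profile(params, x):
    # Simple model pattern: flat background plus two Gaussian peaks.
    bg, h1, c1, w1, h2, c2, w2 = params
    return (bg
            + h1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + h2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

true = [50, 900, 25.0, 0.15, 400, 42.0, 0.20]          # "unknown" pattern
observed = profile(true, two_theta) + np.random.normal(0, 5, two_theta.size)

start = [40, 700, 24.8, 0.2, 300, 42.3, 0.25]          # rough starting guesses
fit = least_squares(lambda p: profile(p, two_theta) - observed, start)
print(np.round(fit.x, 2))                               # close to the true values
```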
Biography
Rietveld was born in the Hague. After completing Grammar School in the Netherlands he moved to Australia and studied physics at the University of Western Australia in Perth. In 1964 he obtained his PhD degree under Edward Norman Maslen with a thesis entitled "The Structure of p-Diphenylbenzene and Other Compounds", a single crystal neutron and X-ray diffraction study. Dorothy Hodgkin was an external examiner on his thesis. This investigation was the first single crystal neutron diffraction study in Australia and was conducted at the High Flux Australian Reactor (HIFAR) in the Lucas Heights suburb of Sydney.
In 1964 he became a research officer at the Energy Research Centre of the Netherlands (Energieonderzoek Centrum Nederland, ECN) in Petten, where he worked together with Bert Loopstra and Bob van Laar on the structure solution and refinement of uranates and other ceramic compounds using neutron powder diffraction. In 1967 he implemented the full profile refinement method in a computer program, which he published as his own achievement under his own name in his 1969 citation classic. After publishing this important project alone, Rietveld found his position in the small Petten group increasingly difficult. In 1974 he successfully applied for the post of head of the ECN library, a function that had been vacant for some time, and consequently he left science. He remained with the library until his retirement in 1992.
Awards
The Royal Swedish Academy of Sciences awarded Hugo M. Rietveld, the Aminoff prize in Stockholm, 31 March 1995.
Barrett Award on behalf of the Denver X-ray Conference Organizing Committee in Denver, U.S., 6 August 2003.
The Royal Award of Officer in the Order of Oranje-Nassau, for his outstanding contribution to the field of chemistry. Alkmaar, Netherlands, 28 October 2004.
Award for Distinguished Powder Diffractionists, awarded by The European Diffraction Conferences, handed out on 30 August 2010 in Darmstadt.
Hans-Kühl-Medal 2010, awarded by Gesellschaft Deutscher Chemiker (GDCh), Fachgruppe Bauchemie, handed out on 7 October 2010 in Dortmund.
Further reading
Bob van Laar, Henk Schenk, The development of powder profile refinement at the Reactor Centre Netherlands at Petten. International Union of Crystallography (December 2017).
Young, R. A. (Ed.). (1993). The Rietveld method (Vol. 6). Oxford. Oxford University Press.
References
1932 births
2016 deaths
Crystallographers
Dutch expatriates in Australia
20th-century Dutch physicists
Scientists from The Hague
University of Western Australia alumni | Hugo Rietveld | Chemistry,Materials_science | 747 |
11,268,632 | https://en.wikipedia.org/wiki/Fishpond.co.nz | Fishpond Ltd. is a New Zealand e-commerce company. It was one of the first major companies to sell books over the Internet in New Zealand. Founded by Daniel Robertson in 2004, Fishpond.co.nz is a full-scale online bookstore. It also sells DVDs, music CDs, toys, household goods, cosmetics, and electronics. It is part of a larger business called WorldFront.
The company is headquartered in Auckland, with staff in Auckland, Christchurch, Melbourne, and Perth. It maintains software development centres in Auckland and Christchurch. Fishpond.com.au has a separate website in Australia.
In the middle of 2013, Fishpond had nearly 13 million items in its catalogue and was making 20,000 sales per day. By 2018, its catalogue had increased to over 25 million items, and it was selling a product every 1.2 seconds.
International availability
Fishpond is also available in other countries with localised currency pricing.
References
External links
Fishpond
WorldFront
Online retailers of New Zealand
Review websites
2004 establishments in New Zealand
Internet properties established in 2004 | Fishpond.co.nz | Technology | 222 |
3,184,177 | https://en.wikipedia.org/wiki/Pleural%20thickening | Pleural thickening is an increase in the bulkiness of one or both of the pulmonary pleurae.
Causes
Pleural plaques
Pleural plaques are patchy collections of hyalinized collagen in the parietal pleura. They have a holly leaf appearance on X-ray. They are indicators of asbestos exposure, and the most common asbestos-induced lesion. They usually appear after 20 years or more of exposure and never degenerate into mesothelioma. They appear as fibrous plaques on the parietal pleura, usually on both sides, and at the posterior and inferior part of the chest wall as well as the diaphragm.
See also
Pleural disease
References
Asbestos
Respiratory diseases | Pleural thickening | Environmental_science | 156 |
279,651 | https://en.wikipedia.org/wiki/Truncated%20tetrahedron | In geometry, the truncated tetrahedron is an Archimedean solid. It has 4 regular hexagonal faces, 4 equilateral triangle faces, 12 vertices and 18 edges (of two types). It can be constructed by truncating all 4 vertices of a regular tetrahedron.
Construction
The truncated tetrahedron can be constructed from a regular tetrahedron by cutting all of its vertices off, a process known as truncation. The resulting polyhedron has 4 equilateral triangles and 4 regular hexagons, 18 edges, and 12 vertices. With edge length 1, the Cartesian coordinates of the 12 vertices are the permutations of the points
(±1/√8, ±1/√8, ±3/√8)
that have an even number of minus signs.
Properties
Given the edge length a, the surface area of a truncated tetrahedron is the sum of the areas of 4 regular hexagons and 4 equilateral triangles,
A = 7√3 a² ≈ 12.124 a²,
and its volume is
V = (23√2/12) a³ ≈ 2.711 a³.
The dihedral angle of a truncated tetrahedron between a triangular and a hexagonal face is approximately 109.47°, and that between adjacent hexagonal faces is approximately 70.53°.
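The closed-form values above can be checked numerically from the vertex coordinates given earlier. The sketch below assumes those coordinates (permutations of (±1, ±1, ±3)/√8 with an even number of minus signs) and uses SciPy's convex hull routine; it is a verification aid, not part of the article's sources.

```python
from itertools import permutations, product
import numpy as np
from scipy.spatial import ConvexHull

# Unit-edge truncated tetrahedron: permutations of (1, 1, 3)/sqrt(8)
# with an even number of minus signs (i.e. the product of the signs is +1).
verts = {
    tuple(s * c / np.sqrt(8) for s, c in zip(signs, perm))
    for perm in set(permutations((1, 1, 3)))
    for signs in product((1, -1), repeat=3)
    if signs[0] * signs[1] * signs[2] == 1
}
hull = ConvexHull(np.array(sorted(verts)))

print(len(verts))                                 # 12 vertices
print(hull.area,   7 * np.sqrt(3))                # both about 12.124
print(hull.volume, 23 * np.sqrt(2) / 12)          # both about 2.711
```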
The densest packing of the truncated tetrahedron is believed to be 207/208 ≈ 0.9952, as reported by two independent groups using Monte Carlo methods. Although no mathematical proof exists that this is the best possible packing for the truncated tetrahedron, the high proximity to unity and the independence of the findings make it unlikely that an even denser packing is to be found. If the truncation of the corners is slightly smaller than that of a truncated tetrahedron, this new shape can be used to fill space completely.
The truncated tetrahedron is an Archimedean solid, meaning it is a highly symmetric and semi-regular polyhedron in which two or more different regular polygonal faces meet at each vertex. The truncated tetrahedron has the same three-dimensional symmetry group as the regular tetrahedron, the tetrahedral symmetry T_d. The polygonal faces that meet at every vertex are one equilateral triangle and two regular hexagons, and the vertex figure is denoted as 3.6.6. Its dual polyhedron is the triakis tetrahedron, a Catalan solid that shares the same symmetry as the truncated tetrahedron.
Related polyhedrons
The truncated tetrahedron can be found in the construction of polyhedrons. For example, the augmented truncated tetrahedron is a Johnson solid constructed from a truncated tetrahedron by attaching triangular cupola onto its hexagonal face. The triakis truncated tetrahedron is a polyhedron constructed from a truncated tetrahedron by adding three tetrahedrons onto its triangular faces, as interpreted by the name "triakis". It is classified as plesiohedron, meaning it can tessellate in three-dimensional space known as honeycomb; an example is triakis truncated tetrahedral honeycomb.
The Friauf polyhedron is named after J. B. Friauf, who described it as an intermetallic structure formed by a compound of metallic elements. It can be found in crystals such as complex metallic alloys; an example is dizinc magnesium, MgZn2. It is a lower-symmetry version of the truncated tetrahedron, interpreted as a truncated tetragonal disphenoid whose three-dimensional symmetry group is the dihedral group of order 8.
Truncating a truncated tetrahedron gives the resulting polyhedron 54 edges, 32 vertices, and 20 faces—4 hexagons, 4 nonagons, and 12 trapeziums. This polyhedron was used by Adidas as the underlying geometry of the Jabulani ball designed for the 2010 World Cup.
Truncated tetrahedral graph
In the mathematical field of graph theory, a truncated tetrahedral graph is an Archimedean graph, the graph of vertices and edges of the truncated tetrahedron, one of the Archimedean solids. It has 12 vertices and 18 edges. It is a connected cubic graph and a connected cubic transitive graph.
Examples
See also
Quarter cubic honeycomb – Fills space using truncated tetrahedra and smaller tetrahedra
Truncated 5-cell – Similar uniform polytope in 4-dimensions
Truncated triakis tetrahedron
Triakis truncated tetrahedron
Octahedron – a rectified tetrahedron
Truncated Triangular Pyramid Number
References
External links
Editable printable net of a truncated tetrahedron with interactive 3D view
The Uniform Polyhedra
Virtual Reality Polyhedra The Encyclopedia of Polyhedra
Archimedean solids
Truncated tilings
Individual graphs
Planar graphs | Truncated tetrahedron | Physics,Mathematics | 936 |
20,829,341 | https://en.wikipedia.org/wiki/C6H14N2O2 | {{DISPLAYTITLE:C6H14N2O2}}
The molecular formula C6H14N2O2 (molar mass: 146.19 g/mol) may refer to:
Lysine
β-Lysine
Meldonium
3-Methylornithine
N-Methylornithine
Molecular formulas | C6H14N2O2 | Physics,Chemistry | 71 |
2,912,664 | https://en.wikipedia.org/wiki/Precipitable%20water | Precipitable water is the depth of water in a column of the atmosphere, if all the water in that column were precipitated as rain. As a depth, the precipitable water is measured in millimeters or inches. Often abbreviated as "TPW", for Total Precipitable Water.
Measurement
There are different measurement techniques:
One type of measurement is based on the measurement of the solar irradiance on two wavelengths, one in a water absorption band, and the other not. The precipitable water column is determined using the irradiances in these bands and the Beer–Lambert law.
The precipitable water can also be calculated by integrating radiosonde data (relative humidity, pressure and temperature) over the whole atmospheric column, as sketched in the example after this list.
Data can be viewed on a Lifted-K index. The numbers represent inches of water as mentioned above for a geographical location.
Recently, methods using the Global Positioning System have been developed.
Some work has been performed to create empirical relationships between surface specific humidity and precipitable water based on localized measurements (generally a 2nd to 5th order polynomial). However, this method has not received widespread use in part because humidity is a local measurement and precipitable water is a total column measurement.
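As a minimal illustration of the radiosonde-integration approach mentioned above (the profile values are made-up example numbers, not measurements), the following Python sketch evaluates the standard column integral TPW = (1/(ρw·g)) ∫ q dp with a trapezoidal sum:
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical radiosonde profile: pressure levels (Pa, surface first) and
# specific humidity q (kg of water vapour per kg of moist air).
pressure = np.array([100000.0, 92500.0, 85000.0, 70000.0, 50000.0, 30000.0])
q = np.array([0.012, 0.010, 0.008, 0.005, 0.002, 0.0004])

RHO_WATER = 1000.0  # density of liquid water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

# TPW = (1 / (rho_w * g)) * integral of q dp, here as a trapezoidal sum.
dp = -np.diff(pressure)            # layer thicknesses (pressure falls with height)
q_mid = 0.5 * (q[:-1] + q[1:])     # mean specific humidity in each layer
tpw_metres = np.sum(q_mid * dp) / (RHO_WATER * G)

print(f"Total precipitable water: {tpw_metres * 1000:.1f} mm")  # about 35 mm here
</syntaxhighlight>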
References
External links
Current global map of precipitable water
Remote Sensing of Water Vapor From GPS Receivers
Water
Atmospheric thermodynamics | Precipitable water | Environmental_science | 287 |
44,352,300 | https://en.wikipedia.org/wiki/Crisnatol | Crisnatol (BW-A770U) is an experimental anticancer agent known for its potential in inhibiting the growth of various solid tumors. Research has indicated that crisnatol acts as a DNA-intercalating agent, thereby disrupting the replication process in cancer cells. A Phase I clinical trial was conducted to assess its safety profile, pharmacokinetics, and potential efficacy in patients with solid malignancies. This study highlighted the drug’s ability to inhibit tumor growth, although associated toxicities were observed, necessitating further research to optimize its therapeutic window.
Mechanism of action
Crisnatol is a synthetic aromatic amine and a potent anticancer compound. It functions by intercalating into DNA and inhibiting topoisomerase activity, which leads to DNA damage and prevents cancer cells from proliferating. It primarily targets solid tumors and shows a higher affinity for melanoma and glioma cells. Due to its lipophilic properties, crisnatol can effectively penetrate the blood-brain barrier, making it a potential treatment for brain tumors.
Clinical trials
Crisnatol has undergone several Phase I and II clinical trials aimed at determining its pharmacokinetics, safety profile, and efficacy against various types of solid tumors. Early studies demonstrated dose-limiting toxicities, primarily neurotoxicity and hematologic toxicity, which necessitated further research to optimize dosing schedules. In one Phase I trial, crisnatol mesylate was administered as a protracted infusion in patients with advanced solid malignancies, revealing a manageable toxicity profile and some evidence of tumor regression.
More recent trials have explored combinations of crisnatol with other anticancer agents, such as cisplatin, to enhance its efficacy and minimize resistance.
Potential applications and challenges
Despite its promise, crisnatol faces challenges due to its side effects, which include neurotoxicity and dose-limiting hematologic toxicities. Research continues to focus on optimizing its therapeutic index and exploring potential applications in combination therapies. The ability of crisnatol to cross the blood-brain barrier has led to interest in its use against brain cancers, although further studies are needed to fully establish its efficacy and safety in this context.
References
Amines
Diols
Primary alcohols | Crisnatol | Chemistry | 484 |
867,515 | https://en.wikipedia.org/wiki/Tired%20light | Tired light is a class of hypothetical redshift mechanisms that was proposed as an alternative explanation for the redshift-distance relationship. These models have been proposed as alternatives to the models that involve the expansion of the universe. The concept was first proposed in 1929 by Fritz Zwicky, who suggested that if photons lost energy over time through collisions with other particles in a regular way, the more distant objects would appear redder than more nearby ones.
Zwicky acknowledged that any sort of scattering of light would blur the images of distant objects more than what is seen. Additionally, the surface brightness of galaxies evolving with time, time dilation of cosmological sources, and a thermal spectrum of the cosmic microwave background have been observed—these effects should not be present if the cosmological redshift was due to any tired light scattering mechanism. Despite periodic re-examination of the concept, tired light has not been supported by observational tests and remains a fringe topic in astrophysics.
History and reception
Tired light was an idea that came about due to the observation made by Edwin Hubble that distant galaxies have redshifts proportional to their distance. Redshift is a shift in the spectrum of the emitted electromagnetic radiation from an object toward lower energies and frequencies, associated with the phenomenon of the Doppler effect. Observers of spiral nebulae such as Vesto Slipher observed that these objects (now known to be separate galaxies) generally exhibited redshift rather than blueshifts independent of where they were located. Since the relation holds in all directions it cannot be attributed to normal movement with respect to a background which would show an assortment of redshifts and blueshifts. Everything is moving away from the Milky Way galaxy. Hubble's contribution was to show that the magnitude of the redshift correlated strongly with the distance to the galaxies.
Drawing on Slipher's and Hubble's data, in 1927 Georges Lemaître realized that this correlation could fit non-static solutions to the equations of Einstein's theory of gravity, the Friedmann–Lemaître solutions. However, Lemaître's article was appreciated only after Hubble's publication of 1929. The universal redshift-distance relation in this solution is attributable to the effect an expanding universe has on a photon traveling on a null spacetime interval (also known as a "light-like" geodesic). In this formulation, there was still an analogous effect to the Doppler effect, though relative velocities need to be handled with more care since distances can be defined in different ways in an expanding universe.
At the same time, other explanations were proposed that did not concord with general relativity. Edward Milne proposed an explanation compatible with special relativity but not general relativity that there was a giant explosion that could explain redshifts (see Milne universe). Others proposed that systematic effects could explain the redshift-distance correlation. Along this line, Fritz Zwicky proposed a "tired light" mechanism in 1929. Zwicky suggested that photons might slowly lose energy as they travel vast distances through a static universe by interaction with matter or other photons, or by some novel physical mechanism. Since a decrease in energy corresponds to an increase in light's wavelength, this effect would produce a redshift in spectral lines that increase proportionally with the distance of the source. The term "tired light" was coined by Richard Tolman in the early 1930s as a way to refer to this idea. Helge Kragh has noted "Zwicky’s hypothesis was the best known and most elaborate alternative to the expanding universe, but it was far from the only one. More than a dozen physicists, astronomers and amateur scientists proposed in the 1930s tired-light ideas having in common the assumption of nebular photons interacting with intergalactic matter to which they transferred part of their energy." Kragh noted in particular John Quincy Stewart, William Duncan MacMillan, and Walther Nernst.
Tired light mechanisms were among the proposed alternatives to the Big Bang and the Steady State cosmologies, both of which relied on the general relativistic expansion of the universe of the FRW metric. Through the middle of the twentieth century, most cosmologists supported one of these two paradigms, but there were a few scientists, especially those who were working on alternatives to general relativity, who worked with the tired light alternative. As the discipline of observational cosmology developed in the late twentieth century and the associated data became more numerous and accurate, the Big Bang emerged as the cosmological theory most supported by the observational evidence, and it remains the accepted consensus model with a current parametrization that precisely specifies the state and evolution of the universe. Although the proposals of "tired light cosmologies" are now more-or-less relegated to the dustbin of history, as a completely alternative proposal tired-light cosmologies were considered a remote possibility worthy of some consideration in cosmology texts well into the 1980s, though it was dismissed as an unlikely and ad hoc proposal by mainstream astrophysicists.
By the 1990s and on into the twenty-first century, a number of falsifying observations have shown that "tired light" hypotheses are not viable explanations for cosmological redshifts. For example, in a static universe with tired light mechanisms, the surface brightness of stars and galaxies should be constant: the farther an object is, the less light we receive, but its apparent area diminishes as well, so the light received divided by the apparent area should be constant. In an expanding universe, the surface brightness diminishes with distance. As the observed object recedes, photons are emitted at a reduced rate because each photon has to travel a distance that is a little longer than the previous one, while its energy is reduced a little because of increasing redshift at a larger distance. On the other hand, in an expanding universe, the object appears to be larger than it really is, because it was closer to us when the photons started their travel. This causes a difference in the surface brightness of objects between a static and an expanding universe. This comparison is known as the Tolman surface brightness test, and the studies that have performed it favor the expanding-universe hypothesis and rule out static tired light models.
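As a rough numerical illustration of the size of this effect (added here for illustration), the following Python sketch compares the bolometric surface-brightness dimming factor predicted by an expanding universe, (1 + z)^-4, with that of a static universe in which photons only lose energy, (1 + z)^-1:
<syntaxhighlight lang="python">
# Bolometric surface-brightness dimming relative to a nearby source:
# an expanding universe predicts (1+z)**-4, while a static universe in which
# photons merely lose energy (tired light) predicts only (1+z)**-1.
for z in (0.1, 0.5, 1.0, 2.0):
    expanding = (1 + z) ** -4
    tired_light = (1 + z) ** -1
    print(f"z = {z:3.1f}: expanding {expanding:.3f}, tired light {tired_light:.3f}")
</syntaxhighlight>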
Redshift is directly observable and used by cosmologists as a direct measure of lookback time. They often refer to age and distance to objects in terms of redshift rather than years or light-years. In such a scale, the Big Bang corresponds to a redshift of infinity. Alternative theories of gravity that do not have an expanding universe in them need an alternative to explain the correspondence between redshift and distance that is sui generis to the expanding metrics of general relativity. Such theories are sometimes referred to as "tired-light cosmologies", though not all authors are necessarily aware of the historical antecedents.
Specific falsified models
In general, any "tired light" mechanism must solve some basic problems, in that the observed redshift must:
admit the same measurement in any wavelength-band
not exhibit blurring
follow the detailed Hubble relation observed with supernova data (see accelerating universe)
explain associated time dilation of cosmologically distant events.
A number of tired light mechanisms have been suggested over the years. Fritz Zwicky, in his paper proposing these models, investigated a number of redshift explanations, ruling out some himself. The simplest form of a tired light theory assumes an exponential decrease in photon energy with distance traveled, E(x) = E0 exp(−x/R0), where E(x) is the energy of the photon at distance x from the source of light, E0 is the energy of the photon at the source of light, and R0 is a large constant characterizing the "resistance of the space". To correspond to Hubble's law, the constant R0 must be several gigaparsecs. For example, Zwicky considered whether an integrated Compton effect could account for the scale normalization of the above model:
This expected "blurring" of cosmologically distant objects is not seen in the observational evidence, though it would take much larger telescopes than those available at that time to show this with certainty. Alternatively, Zwicky proposed a kind of Sachs–Wolfe effect explanation for the redshift distance relation:
Zwicky's proposals were carefully presented as falsifiable according to later observations:
Such broadening of absorption lines is not seen in high-redshift objects, thus falsifying this particular hypothesis.
Zwicky also notes, in the same paper, that according to a tired light model a distance-redshift relationship would necessarily be present in the light from sources within our own galaxy (even if the redshift would be so small that it would be hard to measure), that do not appear under a recessional-velocity based theory. He writes, referring to sources of light within our galaxy: "It is especially desirable to determine the redshift independent of the proper velocities of the objects observed". Subsequent to this, astronomers have patiently mapped out the three-dimensional velocity-position phase space for the galaxy and found the redshifts and blueshifts of galactic objects to accord well with the statistical distribution of a spiral galaxy, eliminating the intrinsic redshift component as an effect.
Following after Zwicky in 1935, Edwin Hubble and Richard Tolman compared recessional redshift with a non-recessional one, writing that they
These conditions became almost impossible to meet and the overall success of general relativistic explanations for the redshift-distance relation is one of the core reasons that the Big Bang model of the universe remains the cosmology preferred by researchers.
In the early 1950s, Erwin Finlay-Freundlich proposed a redshift as "the result of loss of energy by observed photons traversing a radiation field", which was cited and argued for as an explanation for the redshift-distance relation in a 1962 astrophysics theory Nature paper by University of Manchester physics professor P. F. Browne. The pre-eminent cosmologist Ralph Asher Alpher wrote a letter to Nature three months later in response to this suggestion, heavily criticizing the approach: "No generally accepted physical mechanism has been proposed for this loss." Still, until the so-called "Age of Precision Cosmology" was ushered in with results from the WMAP space probe and modern redshift surveys, tired light models could occasionally get published in the mainstream journals, including one published in the February 1979 edition of Nature that proposed "photon decay" in a curved spacetime and was criticized five months later in the same journal as being wholly inconsistent with observations of the gravitational redshift observed in the solar limb. In 1986, a paper claiming tired light theories explained redshift better than cosmic expansion was published in the Astrophysical Journal, but ten months later, in the same journal, such tired light models were shown to be inconsistent with extant observations. As cosmological measurements became more precise and the statistics in cosmological data sets improved, tired light proposals ended up being falsified, to the extent that the theory was described in 2001 by science writer Charles Seife as being "firmly on the fringe of physics 30 years ago; still, scientists sought more direct proofs of the expansion of the cosmos".
See also
Dispersion (optics)
References
Fringe physics
Light
Obsolete theories in physics
Physical cosmological concepts | Tired light | Physics | 2,359 |
5,361,848 | https://en.wikipedia.org/wiki/Rappaport%20Vassiliadis%20soya%20peptone%20broth | Rappaport-Vassiliadis soya peptone broth (RVS broth) is used as an enrichment growth medium for the isolation of Salmonella species. It is not recommended for the enrichment of Salmonella Typhi or Paratyphi, which is inhibited due to the malachite green in RVS broth. It is an alternative to selenite broth. It is not associated with potential teratogenicity problems seen with the use of selenite broth. It enriches salmonellae because they are better able to survive the high osmotic pressure in the medium and because they can multiply at relatively lower pH and higher temperatures compared with other gut bacteria. RVS broth has a pH around 5.2.
Components
A liter of RVS broth contains:
4.5g Soya peptone
7.2g Sodium chloride
1.26g Potassium dihydrogen phosphate
0.18g Dipotassium phosphate
13.58g Magnesium chloride (anhydrous)
0.036g Malachite green
References
Microbiological media
Bacteriology | Rappaport Vassiliadis soya peptone broth | Biology | 230 |
632,394 | https://en.wikipedia.org/wiki/Transit%20of%20Mercury | A transit of Mercury across the Sun takes place when the planet Mercury passes directly between the Sun and a superior planet. During a transit, Mercury appears as a tiny black dot moving across the Sun as the planet obscures a small portion of the solar disk. Because of orbital alignments, transits viewed from Earth occur in May or November. The last four such transits occurred on May 7, 2003; November 8, 2006; May 9, 2016; and November 11, 2019. The next will occur on November 13, 2032. A typical transit lasts several hours. Mercury transits are much more frequent than transits of Venus, with about 13 or 14 per century, primarily because Mercury is closer to the Sun and orbits it more rapidly.
On June 3, 2014, the Mars rover Curiosity observed the planet Mercury transiting the Sun, marking the first time a planetary transit has been observed from a celestial body besides Earth.
Scientific investigation
The orbit of the planet Mercury lies interior to that of the Earth, and thus it can come into an inferior conjunction with the Sun. When Mercury is near the node of its orbit, it passes through the orbital plane of the Earth. If an inferior conjunction occurs as Mercury is passing through its orbital node, the planet can be seen to pass across the disk of the Sun in an event called a transit. Depending on the chord of the transit and the position of the planet Mercury in its orbit, the maximum length of this event is 7h 50m.
Transit events are useful for studying the planet and its orbit. Examples of the scientific investigations based on transits of Mercury are:
Measuring the scale of the Solar System.
Investigations of the variability of the Earth's rotation and of the tidal acceleration of the Moon.
Measuring the mass of Venus from secular variations in Mercury's orbit.
Looking for long term variations in the solar radius.
Investigating the black drop effect, including calling into question the purported discovery of the atmosphere of Venus during the 1761 transit.
Assessing the likely drop in light level in an exoplanet transit.
Occurrence
Transits of Mercury can only occur when the Earth is aligned with a node of Mercury's orbit. Currently that alignment occurs within a few days of May 8 (descending node) and November 10 (ascending node), with the angular diameter of Mercury being about 12″ for May transits, and 10″ for November transits. The average date for a transit increases over centuries as a result of Mercury's nodal precession and Earth's axial precession.
Transits of Mercury occur on a regular basis. As explained in 1882 by Newcomb, the interval between passages of Mercury through the ascending node of its orbit is 87.969 days, and the interval between the Earth's passage through that same longitude is 365.254 days. Using continued fraction approximations of the ratio of these values, it can be shown that Mercury will make an almost integral number of revolutions about the Sun over intervals of 6, 7, 13, 33, 46, and 217 years.
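As an illustrative check of this argument (added here; the loop length and print format are arbitrary choices), the following Python sketch computes successive continued-fraction convergents of the ratio 365.254 / 87.969 and recovers the 6-, 7-, 13-, 33-, 46- and 217-year intervals:
<syntaxhighlight lang="python">
# Continued-fraction convergents of (Earth year) / (Mercury node-to-node period):
# the convergent denominators are the near-resonant intervals in Earth years.
mercury_node = 87.969   # days between Mercury's passages through the same node
earth_year = 365.254    # days for Earth to return to that longitude

x = earth_year / mercury_node
p_prev, q_prev, p, q = 1, 0, int(x), 1   # convergent recurrence p_k / q_k
frac = x - int(x)
for _ in range(6):
    a = int(1 / frac)
    frac = 1 / frac - a
    p_prev, q_prev, p, q = p, q, a * p + p_prev, a * q + q_prev
    mismatch = abs(p * mercury_node - q * earth_year)
    print(f"{q:>3} years ~ {p} node-to-node periods (off by {mismatch:.1f} days)")
# Prints intervals of 6, 7, 13, 33, 46 and 217 years.
</syntaxhighlight>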
In 1894 Crommelin noted that at these intervals, the successive paths of Mercury relative to the Sun are consistently displaced northwards or southwards. He noted the displacements as:
{| class="wikitable"
|+Displacements at subsequent transits
! Interval!! May transits !! November transits
|-
| After 6 years|| 65′ 37″ S|| 31′ 35″ N
|-
| After 7 years|| 48′ 21″ N|| 23′ 16″ S
|-
| Hence after 13 years (6 + 7)|| 17′ 16″ S|| 8′ 19″ N
|-
| ... 20 years (6 + 2 × 7)|| 31′ 05″ N|| 14′ 57″ S
|-
| ... 33 years (2 × 6 + 3 × 7)|| 13′ 49″ N|| 6′ 38″ S
|-
| ... 46 years (3 × 13 + 7)|| 3′ 27″ S|| 1′ 41″ N
|-
| ... 217 years (14 × 13 + 5 × 7)|| 0′ 17″ N || 0′ 14″ N
|}
Comparing these displacements with the solar diameter (about 31.7′ in May, and 32.4′ in November) the following may be deduced about the interval between transits:
For May transits, intervals of 6 and 7 years are not possible. For November transits, an interval of 6 years is possible but rare (the last such pair was 1993 and 1999, with both transits being very close to the solar limb), while an interval of 7 years is to be expected.
An interval of 13 years is to be expected for both May and November transits.
An interval of 20 years is possible but rare for a May transit, but is to be expected for November transits.
An interval of 33 years is to be expected for both May and November transits.
A transit having a similar path across the sun will occur 46 (and 171) years later – for both November and May transits.
A transit having an almost identical path across the Sun will occur 217 years later – for both November and May transits.
Transits that occur 46 years apart can be grouped into a series. For November transits each series includes about 20 transits over 874 years, with the path of Mercury across the Sun passing further north than for the previous transit. For May transits each series includes about 10 transits over 414 years, with the path of Mercury across the Sun passing further south than for the previous transit. Some authors have allocated a series number to transits on the basis of this 46-year grouping.
Similarly transits that occur 217 years apart can be grouped into a series. For November transits each series would include about 135 transits over 30,000 years. For May transits each series would include about 110 transits over 24,000 years. For both the May and November series, the path of Mercury across the Sun passes further north than for the previous transit. Series numbers have not been traditionally allocated on the basis of the 217 year grouping.
Predictions of transits of Mercury covering many years are available at NASA, SOLEX, and Fourmilab.
Observation
At inferior conjunction, the planet Mercury subtends an angle of , which, during a transit, is too small to be seen without a telescope. A common observation made at a transit is recording the times when the disk of Mercury appears to be in contact with the limb of the Sun. Those contacts are traditionally referred to as the 1st, 2nd, 3rd and 4th contacts – with the 2nd and 3rd contacts occurring when the disk of Mercury is fully on the disk of the sun. As a general rule, 1st and 4th contacts cannot be accurately detected, while 2nd and 3rd contacts are readily visible within the constraints of the Black Drop effect, irradiation, atmospheric conditions, and the quality of the optics being used.
Observed contact times for transits between 1677 and 1881 are given in S Newcomb's analysis of transits of Mercury. Observed 2nd and 3rd contacts times for transits between 1677 and 1973 are given in Royal Greenwich Observatory Bulletin No.181, 359-420 (1975).
Partial
Sometimes Mercury appears to only graze the Sun during a transit. There are two possible scenarios:
Firstly, it is possible for a transit to occur such that, at mid-transit, the disk of Mercury has fully entered the disk of the Sun as seen from some parts of the world, while as seen from other parts of the world the disk of Mercury has only partially entered the disk of the Sun. The transit of November 15, 1999 was such a transit, with the transit being a full transit for most of the world, but only a partial transit for Australia, New Zealand, and Antarctica. The previous such transit was on October 28, 743 and the next will be on May 11, 2391. While these events are very rare, two such transits will occur within years in December 6149 and June 6152.
Secondly, it is possible for a transit to occur in which, at mid-transit, the disk of Mercury has partially entered the disk of the Sun as seen from some parts of the world, while as seen from other parts of the world Mercury completely misses the Sun. Such a transit last occurred on May 11, 1937, when a partial transit occurred in southern Africa and southern Asia and no transit was visible from Europe and northern Asia. The previous such transit was on October 21, 1342 and the next will be on May 13, 2608.
The possibility that, at mid-transit, Mercury is seen to be fully on the solar disk from some parts of the world, and completely miss the Sun as seen from other parts of the world cannot occur.
History
The first transit of Mercury was observed on November 7, 1631, by Pierre Gassendi. He was surprised by the small size of the planet compared to the Sun. Johannes Kepler had predicted the occurrence of transits of Mercury and Venus in his ephemerides published in 1630.
Images of the November 15, 1999 transit from the Transition Region and Coronal Explorer (TRACE) satellite were featured on Astronomy Picture of the Day (APOD) on November 19. Three APODs featured the May 9, 2016 transit.
1832 event
The Shuckburgh telescope of the Royal Observatory, Greenwich in London was used for the 1832 Mercury transit. It was equipped with a micrometer by Dollond and was used for a report of the events as seen through the small refractor. By timing the transit and taking micrometer measurements, a diameter for the planet was obtained. The observers also reported a peculiar effect that they compared to pressing a coin into the Sun. The observer remarked:
1907 event
For the 1907 Mercury transit, telescopes used at the Paris Observatory included:
Foucault-Eichens reflector ( aperture)
Foucault-Eichens reflector ( aperture)
Martin-Eichens reflector ( aperture)
Several small refractors
The telescopes were mobile and were placed on the terrace for the several observations.
Chronology
The table below includes all historical transits of Mercury from 1605 on:
See also
Mercury Passing Before the Sun, 1914 painting
Transit of Mercury from Mars
Transit of minor planets
Transit of Venus
Vulcan (hypothetical planet)
Gallery
References
External links
NASA: Transits of Mercury, Seven Century Catalog: 1601 CE to 2300 CE
Shadow & Substance.com: Transit of Mercury Animated for November 8, 2006
Transits of Mercury – Fourteen century catalog: 1 601 AD – 3 000 AD
Transits of Mercury on Earth – Fifteen millennium catalog: 5 000 BC – 10 000 AD
Scroll down slightly and then click on 40540; you will then get a table from −125,000 to +125,000.
Time Lapse of the 9th May 2016 Transit of Mercury
Links to high-resolution video from a major solar telescope and more about several transits
Mercury
Stellar occultation | Transit of Mercury | Astronomy | 2,262 |
21,173,946 | https://en.wikipedia.org/wiki/Woodbourne%20Forest%20and%20Wildlife%20Preserve | The Woodbourne Forest and Wildlife Preserve is a protected area that is managed by The Nature Conservancy. It covers in northeastern Pennsylvania in the United States.
It is located just south of Montrose, Pennsylvania.
History and notable features
This nature preserve contains old fields, meadows, creeks, bogs, and forests that are home to a wide variety of animals. These include more than 180 species of birds, such as pileated woodpeckers, great horned owls and winter wrens.
The preserve's wetlands harbor frogs, snakes and nine species of salamander, including the spring salamander, northern two-lined salamander and four-toed salamander.
The preserve's forests, which are part of the Allegheny Highlands forests ecoregion, contain of old growth northern hardwood forest with eastern hemlock, sweet birch, sugar maple, northern red oak, white ash, and American beech trees.
Visitor activities include hiking, snowshoeing, cross-country skiing, birdwatching, and photography.
References
Nature reserves in Pennsylvania
Old-growth forests
Protected areas of Susquehanna County, Pennsylvania | Woodbourne Forest and Wildlife Preserve | Biology | 225 |
2,970,491 | https://en.wikipedia.org/wiki/Johnjoe%20McFadden | Johnjoe McFadden (born 17 May 1956) is an Anglo-Irish scientist, academic and writer. He is Professor of Molecular Genetics at the University of Surrey, United Kingdom.
Life
McFadden was born in Donegal, Ireland but raised in the UK. He holds joint British and Irish Nationality. He obtained his BSc in Biochemistry University of London in 1977 and his PhD at Imperial College London in 1982. He went on to work on human genetic diseases and then infectious diseases, at St Mary's Hospital Medical School, London (1982–84) and St George's Hospital Medical School, London (1984–88) and then at the University of Surrey in Guildford, UK.
For more than a decade, McFadden has researched the genetics of microbes such as the agents of tuberculosis and meningitis and invented a test for the diagnosis of meningitis. He has published more than 100 articles in scientific journals on subjects as wide-ranging as bacterial genetics, tuberculosis, idiopathic diseases and computer modelling of evolution. He has contributed to more than a dozen books and has edited a book on the genetics of mycobacteria. He produced a widely reported artificial life computer model which modelled evolution in organisms.
McFadden has lectured extensively in the UK, Europe, the US and Japan and his work has been featured on radio, television and national newspaper articles particularly for the Guardian. His present post, which he has held since 2001, is Professor of Molecular Genetics at the University of Surrey. Living in London, he is married and has one son.
Quantum evolution
McFadden wrote the popular science book, Quantum Evolution. The book examines the role of quantum mechanics in life, evolution and consciousness. The book has been described as offering an alternative evolutionary mechanism, beyond the neo-Darwinian framework.
The book received positive reviews by Kirkus Reviews and Publishers Weekly. It was negatively reviewed in the journal Heredity by evolutionary biologist Wallace Arthur.
Writing
In 2006 McFadden co-edited the book, Human Nature: Fact and Fiction on the insights of both science and literature on human nature, with contributions from Ian McEwan, Philip Pullman, Steven Pinker, A.C. Grayling and others.
In 2014 McFadden co-wrote the popular science book, Life on the Edge: The Coming of Age of Quantum Biology, in which he and Jim Al-Khalili further explore quantum biology, particularly recent findings in photosynthesis, enzyme catalysis, avian navigation, olfaction, mutation and neurobiology.
The book received positive reviews, for example:
"'Life on the Edge’ gives the clearest account I’ve ever read of the possible ways in which the very small events of the quantum world can affect the world of middle-sized living creatures like us. With great vividness and clarity it shows how our world is tinged, even saturated, with the weirdness of the quantum." (Philip Pullman)
"Hugely ambitious ... the skill of the writing provides the uplift to keep us aloft as we fly through the strange and spectacular terra incognita of genuinely new science." (Tom Whipple The Times)
McFadden regularly writes articles for The Guardian newspaper on topics as varied as quantum mechanics, evolution and genetically modified crops, and has reviewed books there. The Washington Post and Frankfurter Allgemeine Sonntagszeitung have also published his articles.
Life Is Simple: How Occam’s Razor Set Science Free and Unlocked the Universe (Basic Books, 384pp) ISBN 9781529364934
See also
Electromagnetic theories of consciousness
Mind's eye
Quantum Aspects of Life
References
External links
- Johnjoe McFadden's Homepage
Johnjoe McFadden's Machines Like Us interview
- Johnjoe McFadden's homepage at the University of Surrey, UK.
Quantum Evolution - Explore the role of quantum mechanics in life, evolution and consciousness.
- Life on the Edge: The Coming of Age of Quantum Biology. Johnjoe McFadden and Jim Al-Khalili (2014)
Living people
1956 births
Alumni of Imperial College London
Academics of the University of Surrey
British science writers
British biologists
Evolutionary biologists
Extended evolutionary synthesis
Quantum biology
Writers from County Donegal
Scientists from County Donegal
21st-century Irish biologists | Johnjoe McFadden | Physics,Biology | 880 |
31,474,975 | https://en.wikipedia.org/wiki/Abstract%20model%20theory | In mathematical logic, abstract model theory is a generalization of model theory that studies the general properties of extensions of first-order logic and their models.
Abstract model theory provides an approach that allows us to step back and study a wide range of logics and their relationships. The starting point for the study of abstract models, which resulted in good examples, was Lindström's theorem.
In 1974 Jon Barwise provided an axiomatization of abstract model theory.
See also
Lindström's theorem
Institution (computer science)
Institutional model theory
References
Further reading
Mathematical logic
Metatheorems
Model theory | Abstract model theory | Mathematics | 122 |
2,017,486 | https://en.wikipedia.org/wiki/Inguma | Inguma (Mauma in Baigorri) is the god of dreams in Basque mythology and religion. He is regarded as a malevolent force who enters houses at night and plagues residents with nightmares, sometimes killing them. He Storm''), the third film in the Baztán trilogy by Dolores Redondo.
References
Basque gods
Dreams in religion
Sleep in mythology and folklore | Inguma | Biology | 80 |
49,949,922 | https://en.wikipedia.org/wiki/TRIM52 | TRIM52, also known as RNF102, is a protein in the tripartite motif family. In humans, it is encoded by the gene of the same name. Knockdown of this gene induces apoptosis. This gene's overexpression has been implicated in multiple types of cancer including ovarian cancer, gastric cancer, and colon cancer.
References
Proteins
Oncogenes | TRIM52 | Chemistry | 85 |
2,710,895 | https://en.wikipedia.org/wiki/Innovators%20Under%2035 | The Innovators Under 35 is a peer-reviewed annual award and listicle published by MIT Technology Review magazine, naming the world's top 35 innovators under the age of 35.
Background
The subcategories for the awards change from year to year, but generally focus on biomedicine, computing, communications, business, energy, materials, and the web. Nominations are sent from around the world and evaluated by a panel of expert judges. In some years, an Innovator of the Year or a Humanitarian of the Year is also named from among the winners.
The purpose of the award is to honor "Exceptionally talented young innovators whose work has the greatest potential to transform the world."
History
The award was started in 1999 as the TR100, with 100 winners, but was changed to TR35 (35 winners) starting in 2005. The awards are presented to the winners at the annual Emtech conference on emerging technologies, held in the fall at the Massachusetts Institute of Technology (MIT), where there is an awards ceremony and reception. There are several regional TR35 lists produced by Technology Review also, such as the list of the top 35 innovators under 35 in Europe, MENA, Latin America, Asia Pacific, China and India. The regional winners are automatically qualified as candidates for the global list.
In 2013, the list was renamed to Innovators Under 35.
Laureates
Laureates of the award include the co-founder of Facebook, Mark Zuckerberg, the co-founders of Google, Larry Page and Sergey Brin, the co-founder of Tesla, JB Straubel, co-founder of iRobot, Helen Greiner, Linus Torvalds, Muyinatu Bell, Ewan Birney, Katherine Isbister, Jay Shendure, Mandy Chessell, Eben Upton, Shinjini Kundu, Shawn Fanning, Amy S. Bruckman, Himabindu Lakkaraju, Ali Khademhosseini, Rediet Abebe, Ahmad Nabeel, Vivian Chu, and Thomas Truong.
Notable people
Juliana Chan, Singaporean biologist and science communicator
Aaron Dollar, Yale University professor of Mechanical Engineering & Materials Science and Computer Science
References
Business and industry awards
Science and technology awards
Awards established in 1999
1999 establishments in Massachusetts | Innovators Under 35 | Technology | 482 |
75,185,459 | https://en.wikipedia.org/wiki/Interdigitation | Interdigitation is the interlinking of biological components that resembles the fingers of two hands being locked together. It can be a naturally occurring or man-made state.
Examples
Naturally occurring interdigitation includes skull sutures, which remain thin and straight during periods of brain growth and later develop complex fractal interdigitations that provide interlocking strength. A layer of the retina where photoreception occurs is called the interdigitation zone. Adhesion or diffusive bonding occurs when sections of polymer chains from one surface interdigitate with those of an adjacent surface. In the dermis, dermal papillae (DP) (singular papilla, diminutive of Latin papula, 'pimple') are small, nipple-like extensions of the dermis into the epidermis, also known as interdigitations. The distal convoluted tubule (DCT), a portion of the kidney nephron, can be recognized by several distinct features, including lateral membrane interdigitations with neighboring cells.
Some hypotheses contend that crown shyness, the interdigitation of canopy branches, leads to "reciprocal pruning" of adjacent trees.
Interdigitation is also found in biological research. Interdigitation fusion is a method of preparing calcium- and phosphate-loaded liposomes. Drugs inserted in the bilayer biomembrane may influence the lateral organization of the lipid membrane, with interdigitation of the membrane to fill volume voids. A similar interdigitation process involves investigating dissipative particle dynamics (DPD) simulations by adding alcohol molecules to the bilayers of double-tail lipids. Pressure-induced interdigitation is used to study hydrostatic pressure of bicellular dispersions containing anionic lipids.
References
Morphology (biology)
Research | Interdigitation | Biology | 396 |
21,958,586 | https://en.wikipedia.org/wiki/Momentum%20curtain | Discovered by British engineer Christopher Cockerell, the momentum curtain is a unique and efficient way to reduce friction between a vehicle and its surface of travel, be it water or land, by levitating the vehicle above this surface via a cushion of air. It is this principle of levitation upon which a hovercraft is based, and Christopher Cockerell set about applying his momentum curtain theory to hovercraft to increase their abilities in overcoming friction in travel.
Levitating a vehicle above the ground/water to reduce its drag was not a new concept. John Thornycroft, in 1877, discovered that trapping air beneath a ship's hull, or pumping air beneath it with bellows, decreased the effects of friction upon the hull thereby increasing the ship's top attainable speeds. However, technology at the time was insufficient for Thornycroft's ideas to be developed further.
Cockerell used the idea of pumped air under a hull (this then becoming a plenum, i.e. the opposite of a vacuum) and improved upon it further. Simply pumping air between a hull and the ground wasted a lot of energy through leakage of air around the edges of the hull. Cockerell discovered that by generating a wall (curtain) of high-speed, downward-directed air around the edges of a hull, less air leaked out from the sides (due to the momentum of the high-speed air molecules), and thus a greater pressure could be attained beneath the hull. So, with the same input power, a greater amount of lift could be developed, and the hull could be lifted higher above the surface, reducing friction and increasing clearance. This theory was tried, tested and developed throughout the 1950s and 1960s until it was finally realised at full scale in the SR-N1 hovercraft.
References
Classical mechanics | Momentum curtain | Physics | 382 |
34,032,918 | https://en.wikipedia.org/wiki/Reduced%20viscosity | In fluid dynamics, the reduced viscosity of a polymer is the ratio of the relative viscosity increment () to the mass concentration of the species of interest (c). It has units of volume per unit mass.
The reduced viscosity is given by:
where is the relative viscosity increment given by (Where is the viscosity of the solvent.)
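A minimal worked example of the arithmetic (the viscosity and concentration values are made-up illustrations):
<syntaxhighlight lang="python">
# Hypothetical measurement: viscosities in mPa·s, concentration in g/mL.
eta_solvent = 0.89    # viscosity of the pure solvent
eta_solution = 1.12   # viscosity of the dilute polymer solution
c = 0.005             # mass concentration of the polymer, g/mL

# Relative viscosity increment (dimensionless).
eta_increment = (eta_solution - eta_solvent) / eta_solvent

# Reduced viscosity has units of volume per unit mass (here mL/g).
reduced_viscosity = eta_increment / c
print(f"Reduced viscosity: {reduced_viscosity:.0f} mL/g")   # about 52 mL/g
</syntaxhighlight>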
See also
Relative viscosity
Viscosity
Intrinsic viscosity
Huggins equation
References
Viscosity | Reduced viscosity | Physics,Chemistry | 103 |
3,567,433 | https://en.wikipedia.org/wiki/Polygonal%20rifling | Polygonal rifling ( ) is a type of gun barrel rifling where the traditional sharp-edged "lands and grooves" are replaced by less pronounced "hills and valleys", so the barrel bore has a polygonal (usually hexagonal or octagonal) cross-sectional profile.
Polygonal riflings with a larger number of edges have shallower corners, which provide a better gas seal in relatively large diameter bores. For instance, in the pre-Gen 5 Glock pistols, octagonal rifling is used in the large diameter .45 ACP bore, which has an 11.23 mm (0.442 in) diameter, since it resembles a circle more closely than the hexagonal rifling used in smaller diameter bores.
History
The principle of the polygonal barrel was proposed in 1853 by Sir Joseph Whitworth, a prominent British engineer and entrepreneur. Whitworth experimented with cannons using twisted hexagonal barrels instead of traditional round rifled barrels and patented the design in 1854. In 1856, this concept was demonstrated in a series of experiments using brass howitzers. The British military, however, rejected Whitworth's polygonal designs. Afterwards, Whitworth adopted the concept on small arms, believing that polygonal rifling could be used to create a more accurate rifled musket to replace the Pattern 1853 Enfield.
During the American Civil War, Whitworth's polygonally rifled Whitworth rifle was successfully used by the Confederate States Army marksmen (known as the Whitworth Sharpshooters) to terrorize Union Army artillery crews. The muzzle-loading Whitworth rifle is often called the "sharpshooter" because of its superior accuracy compared to other rifled muskets of its era (far surpassing the breechloading Sharps rifle used by the Union Army) and is considered one of the earliest examples of a sniper rifle. The Whitworth sharpshooters killed multiple high-ranking Union officers, most famously Major General John Sedgwick, who was fatally shot at a range of during the Battle of Spotsylvania Court House.
The last service rifles to use polygonal rifling were the British Lee–Metford rifles, named after their proprietary Metford rifling, American M1895 Lee Navy rifles (both designed by James Paris Lee), and the Japanese Arisaka rifles designed by Colonel Arisaka and Colonel Nambu. The Lee–Metford rifle turned out to be a failure after the switch to erosive Cordite proved too much for the smooth and shallow Metford rifling, which had been designed to reduce barrel fouling for black powder ammunition. When the Metford rifling design was dropped, the Lee–Metford became the Lee–Enfield rifle in favour of Enfield-type rifling, which was deep-grooved for a longer service life. The Lee Navy shared the same fate, since the Rifleite powder adopted for the 6 mm Lee Navy was analogous to Cordite. However, the Arisakas were manufactured extensively for the Imperial Japanese Army from 1897 to 1945 with no excessive rifling erosion problems, as the Japanese had adopted a better, non-erosive, rifle powder.
During World War II, polygonal rifling emerged again in the German MG 42 general-purpose machine guns, as an outgrowth of a cold-hammer forging process developed by German engineers prior to the outbreak of the war. The process addressed the need to produce large quantities of more durable gun barrels in less time than those produced with traditional methods, as the MG 42's infamously fast rate of fire tended to overheat the barrel quickly and thus warranted frequent barrel changes. The MG 42's successor, the Rheinmetall MG 3 machine gun, can also have polygonal rifling.
Heckler & Koch was the first manufacturer to begin using polygonal rifling in modern small arms like the G3A3 battle rifle and several semi-automatic hunting rifles like the HK SL7. Companies that utilize this method today include Tanfoglio, Heckler & Koch, Glock (Gen 1-4), Magnum Research, Česká Zbrojovka, Kahr Arms, Walther and Israel Weapon Industries. Polygonal rifling is usually found only in pistol barrels, and is less common in rifles; however, some extremely high end precision rifles like the Heckler & Koch PSG1 and its Pakistani variant PSR-90, and the LaRue Tactical Stealth System sniper rifle use polygonal bores.
Design
A number of advantages are claimed by the supporters of polygonal rifling. These include:
Not compromising the barrel's thickness in the area of each groove as with traditional rifling, and also less sensitive to stress concentration-induced barrel failure.
Providing a better gas seal around the projectile as polygonal bores tend to have shallower, smoother edges with a slightly smaller bore area, which translates into more efficient seal of the combustion gases trapped behind the bullet, slightly greater (consistency in) muzzle velocities and slightly increased accuracy.
Less bullet deformation, resulting in less frictional resistance when the bullet travels through the barrel, which helps to increase muzzle velocity. The lack of sharp surface deformation on the bullet (rifling marks) also reduces drag in flight.
Reduced buildup of copper or lead within the barrel, as there are no sharp rifling edges to "shred" into the bullet surface and no pronounced corners that can accumulate foulings difficult to clean, which results in easier maintenance. The reduced fouling also theoretically translates to a simpler "copper equilibrium" profile, which is potentially beneficial to accuracy.
Prolonged barrel life, as the thermomechanical stress upon the riflings are spread over a larger area, hence less wear over time.
However, precision target pistols such as those used in Bullseye and IHMSA almost universally use traditional rifling, as do target rifles. The debate among target shooters is almost always one of cut vs. button rifled barrels, as traditional rifling is dominant. Polygonal rifled barrels are used competitively in pistol action shooting, such as IDPA and IPSC competitions.
Part of the difference may be that most polygonal rifling is produced by hammer forging the barrel around a mandrel containing a reverse impression of the rifling. Hammer forging machines are tremendously expensive, far out of the reach of custom gunsmiths (unless they buy pre-rifled blanks), and so are generally only used for production barrels by large companies. The main advantage of a hammer forging process is that it can rifle, chamber, and contour a bored barrel blank in one step. First applied to rifling in Germany in 1939, hammer forging has remained popular in Europe but was only later used by gunmakers in the United States. The hammer forging process produces large amounts of stress in the barrel that must be relieved by careful heat treatment, a process that is less necessary in a traditionally cut or button rifled barrel. Due to the potential for residual stress causing accuracy problems, precision shooters in the United States tend to avoid hammer forged barrels, and this limits them in the type of available rifling. From a practical standpoint, any accuracy issues resulting from the residual stresses of hammer forging are extremely unlikely to be an issue in a defense or service pistol, or a typical hunting rifle.
Variations
Different manufacturers employ varying polygonal rifling profiles. H&K, CZ and Glock use a female type of polygonal rifling. This type has a smaller bore area than the male type of polygonal rifling designed and used by Lothar Walther. Other companies such as Noveske Rifleworks (Pac Nor) and LWRC use a rifling more like the conventional rifling, with both of each land's sides being sloped but having a flat top and defined corners; this type of rifling is more a canted land type of rifling than polygonal rifling.
Forensic examination
Polygonal rifling prevents the forensic firearms examiner from microscopically measuring the width of land and groove impressions (so-called "ballistic fingerprinting") because the polygonal riflings have a rounded profile instead of well-defined rectangular edges, which causes few noticeable surface deformations. In the FBI GRC file, the land and groove widths for these firearms are listed as 0.000. However, forensic identification of firearms (in court-cases, etc.) is based on microscopic examination of tooling marks on the surface of the bore, produced by the manufacturing process and modified by the drag of bullet jackets on that same surface. Thus, the bore surface of individual firearms is always unique.
See also
Ballistic fingerprinting
Smoothbore
References
External links
Glockmeister FAQ, with information on lead bullets in Glock firearms.
The Gun Zone 2001 e-mail questions, with information on cast bullets in Glock and H&K handguns.
Barrel making FAQ, with information on methods of making and rifling barrels
6mmBR barrel FAQ, covers new polygonal profile button rifled barrels
Polygonal Rifling, A comment from Gale McMillan about lead bullets and polygonal rifling.
Firearm components | Polygonal rifling | Technology | 1,910 |
8,159,630 | https://en.wikipedia.org/wiki/Zurab%20Rtveliashvili | Zurab Rtveliashvili (16 October 1967 – 20 April 2021) was a Georgian poet and multi-media performer. Rtveliashvili was born in Karaganda, Kazakhstan. He is featured in the 2009 documentary film At the Top of My Voice.
In 2010, Rtveliashvili was offered asylum in Stockholm, Sweden, from persecution in his native Georgia.
Death
Rtveliashvili died in Tbilisi after a battle with cancer on 20 April 2021.
Publications
I-reqtsia (1997)
Apokrifi (2001)
Anarqi (2006)
References
20th-century poets from Georgia (country)
1967 births
2021 deaths
Multimedia artists
Male poets from Georgia (country)
21st-century poets from Georgia (country)
20th-century male writers
21st-century male writers
People from Karaganda | Zurab Rtveliashvili | Technology | 169 |
474,567 | https://en.wikipedia.org/wiki/Data%20striping | In computer data storage, data striping is the technique of segmenting logically sequential data, such as a file, so that consecutive segments are stored on different physical storage devices.
Striping is useful when a processing device requests data more quickly than a single storage device can provide it. By spreading segments across multiple devices which can be accessed concurrently, total data throughput is increased. It is also a useful method for balancing I/O load across an array of disks. Striping is used across disk drives in redundant array of independent disks (RAID) storage, network interface controllers, disk arrays, different computers in clustered file systems and grid-oriented storage, and RAM in some systems.
Method
One method of striping interleaves sequential segments on storage devices in a round-robin fashion from the beginning of the data sequence. This works well for streaming data, but subsequent random accesses require knowing which device contains the data. If the data is stored such that the physical address of each data segment maps one-to-one to a particular device, the device to access for each requested segment can be calculated directly from the address, without knowing the offset of the data within the full sequence.
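A minimal sketch of this address arithmetic (the chunk size, device count and function name are illustrative assumptions, not a description of any particular RAID implementation):
<syntaxhighlight lang="python">
CHUNK_SIZE = 64 * 1024   # stripe unit (chunk) size in bytes (example value)
NUM_DEVICES = 4          # number of devices in the striped array

def locate(logical_offset: int):
    """Map a logical byte offset to (device index, byte offset on that device)."""
    chunk_index = logical_offset // CHUNK_SIZE    # which chunk overall
    stripe_index = chunk_index // NUM_DEVICES     # which full stripe
    device = chunk_index % NUM_DEVICES            # round-robin device choice
    device_offset = stripe_index * CHUNK_SIZE + logical_offset % CHUNK_SIZE
    return device, device_offset

print(locate(0))                      # (0, 0): first chunk on the first device
print(locate(3 * CHUNK_SIZE))         # (3, 0): fourth chunk on the fourth device
print(locate(5 * CHUNK_SIZE + 100))   # (1, 65636): second stripe, second device
</syntaxhighlight>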
Other methods might be employed in which sequential segments are not stored on sequential devices. Such non-sequential interleaving can have benefits in some error correction schemes.
Advantages and disadvantages
Advantages of striping include performance and throughput. Sequential time interleaving of data accesses allows the lesser data access throughput of each storage devices to be cumulatively multiplied by the number of storage devices employed. Increased throughput allows the data processing device to continue its work without interruption, and thereby finish its procedures more quickly. This is manifested in improved performance of the data processing.
Because different segments of data are kept on different storage devices, the failure of one device causes the corruption of the full data sequence. In effect, the failure rate of the array of storage devices is equal to the sum of the failure rate of each storage device. This disadvantage of striping can be overcome by the storage of redundant information, such as parity, for the purpose of error correction. In such a system, the disadvantage is overcome at the cost of requiring extra storage.
Terminology
The segments of sequential data written to or read from a disk before the operation continues on the next disk are usually called chunks, strides or stripe units, while their logical groups forming single striped operations are called strips or stripes. The amount of data in one chunk (stripe unit), often denominated in bytes, is variously referred to as the chunk size, stride size, stripe size, stripe depth or stripe length. The number of data disks in the array is sometimes called the stripe width, but it may also refer to the amount of data within a stripe.
The amount of data in one stride multiplied by the number of data disks in the array (i.e., stripe depth times stripe width, which in the geometrical analogy would yield an area) is sometimes called the stripe size or stripe width. Wide striping occurs when chunks of data are spread across multiple arrays, possibly all the drives in the system. Narrow striping occurs when the chunks of data are spread across the drives in a single array.
Applications
Data striping is used in some databases, such as Sybase, and in certain RAID devices under software or hardware control, such as IBM's 9394 RAMAC Array subsystem. File systems of clusters also use striping. Oracle Automatic Storage Management allows ASM files to be either coarse or fine striped.
RAID
In some RAID configurations, such as RAID 0, failure of a single member drive of the RAID array causes all stored data to be lost. In other RAID configurations, such as a RAID 5 that contains distributed parity and provides redundancy, if one member drive fails the data can be restored using the other drives in the array.
LVM2
Data striping can also be achieved with Linux's Logical Volume Management (LVM). The LVM system allows for the adjustment of coarseness of the striping pattern. LVM tools will allow implementation of data striping in conjunction with mirroring. LVM offers the added benefit of read and write caching on NVM Express for slow spinning storage. LVM has other advantages that are not directly related to data striping (like snapshots, dynamic resizing, etc).
Btrfs and ZFS
Btrfs and ZFS have RAID-like features, but with the security of chunk integrity checking to detect bad blocks and the added flexibility of adding arbitrary numbers of extra drives. They also have other advantages that are not directly related to data striping (copy on write, etc.).
See also
Partition alignment
Link aggregation
References
Data partitioning
RAID
Balancing technology | Data striping | Engineering | 973 |
237,170 | https://en.wikipedia.org/wiki/Quadrature%20mirror%20filter | In digital signal processing, a quadrature mirror filter is a filter whose magnitude response is the mirror image around of that of another filter. Together these filters, first introduced by Croisier et al., are known as the quadrature mirror filter pair.
A filter H1(z) is the quadrature mirror filter of H0(z) if H1(z) = H0(−z).
The filter responses are then symmetric about Ω = π/2:
|H1(e^(jΩ))| = |H0(e^(j(π − Ω)))|.
In audio/voice codecs, a quadrature mirror filter pair is often used to implement a filter bank that splits an input signal into two bands. The resulting high-pass and low-pass signals are often reduced by a factor of 2, giving a critically sampled two-channel representation of the original signal. In addition to the quadrature mirror property, the analysis filters are often related by the following formula:
|H0(e^(jΩ))|^2 + |H1(e^(jΩ))|^2 = 1,
where Ω is the frequency, and the sampling rate is normalized to 2π.
This is known as the power complementary property.
In other words, the power sum of the high-pass and low-pass filters is equal to 1.
Orthogonal wavelets – the Haar wavelets and related Daubechies wavelets, Coiflets, and some developed by Mallat, are generated by scaling functions which, with the wavelet, satisfy a quadrature mirror filter relationship.
Relationship to other filter banks
The earliest wavelets were based on expanding a function in terms of rectangular steps, the Haar wavelets. This is usually a poor approximation, whereas Daubechies wavelets are among the simplest but most important families of wavelets. A linear filter that is zero for “smooth” signals, given a record of points is defined as
It is desirable to have it vanish for a constant, so taking the order , for example,
And to have it vanish for a linear ramp, so that
A linear filter will vanish for any , and this is all that can be done with a fourth-order wavelet. Six terms will be needed to vanish a quadratic curve, and so on, given the other constraints to be included. Next an accompanying filter may be defined as
This filter responds in an exactly opposite manner, being large for smooth signals and small for non-smooth signals. A linear filter is just a convolution of the signal with the filter’s coefficients, so the series of the coefficients is the signal that the filter responds to maximally. Thus, the output of the second filter vanishes when the coefficients of the first one are input into it. The aim is to have
Where the associated time series flips the order of the coefficients because the linear filter is a convolution, and so both have the same index in this sum. A pair of filters with this property are defined as quadrature mirror filters.
Even if the two resulting bands have been subsampled by a factor of 2, the relationship between the filters means that approximately perfect reconstruction is possible. That is, the two bands can then be upsampled, filtered again with the same filters and added together, to reproduce the original signal exactly (but with a small delay). (In practical implementations, numeric precision issues in floating-point arithmetic may affect the perfection of the reconstruction.)
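The two-band scheme can be demonstrated numerically. The sketch below is illustrative only: it uses the two-tap Haar pair (rather than the longer filters used in practical codecs), with analysis filters h0 and h1 and synthesis filters chosen as g0 = h0 and g1 = -h1 to cancel the aliasing term, and it reproduces the input with a one-sample delay:

import numpy as np

a = 1.0 / np.sqrt(2.0)
h0 = np.array([a,  a])    # low-pass analysis filter
h1 = np.array([a, -a])    # high-pass analysis filter, the mirror of h0: H1(z) = H0(-z)

def analysis(x):
    """Filter into two bands, then keep every second sample of each band."""
    return np.convolve(x, h0)[::2], np.convolve(x, h1)[::2]

def synthesis(y0, y1):
    """Insert zeros (upsample by 2), filter each band, and add them."""
    u0 = np.zeros(2 * len(y0)); u0[::2] = y0
    u1 = np.zeros(2 * len(y1)); u1[::2] = y1
    return np.convolve(u0, h0) + np.convolve(u1, -h1)

x = np.array([1.0, 4.0, -2.0, 3.0])
xhat = synthesis(*analysis(x))
print(xhat)   # approximately [0, 1, 4, -2, 3, 0, 0]: the input, delayed by one sample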
Further reading
A. Croisier, D. Esteban, C. Galand. Perfect channel splitting by use of interpolation/decimation tree decomposition techniques. First International Conference on Sciences and Systems, Patras, August 1976, pp. 443–446.
Johnston, J. D. A Filter Family Designed for use in Quadrature Mirror Filter Banks., Acoustics, Speech and Signal Processing, IEEE International Conference, 5, 291–294, April, 1980.
Binomial QMF, also known as Daubechies wavelet filters.
NJIT Symposia on Subbands and Wavelets 1990, 1992, 1994, 1997.
Mohlenkamp, M. J. A Tutorial on Wavelets and Their Applications. University of Colorado, Boulder, Dept. of Applied Mathematics, 2004.
Polikar, R. Multiresolution Analysis: The Discrete Wavelet Transform. Rowan University, NJ, Dept. of Electrical and Computer Engineering.
References
Digital signal processing
Filter theory
Wavelets | Quadrature mirror filter | Engineering | 847 |
41,568,330 | https://en.wikipedia.org/wiki/Finrozole | Finrozole is an aromatase (CYP19A1) inhibitor.
References
Aromatase inhibitors
Nitriles
Triazoles
4-Fluorophenyl compounds | Finrozole | Chemistry | 36 |
43,358,208 | https://en.wikipedia.org/wiki/Definitive%20diagnostic%20data | Definitive diagnostic data are a specific type of data used in the investigation and diagnosis of IT system problems; transaction performance, fault/error or incorrect output.
Qualification
To qualify as Definitive Diagnostic Data it must be possible to correlate the data with a user's experience of a problem instance, and for that reason they will typically be time stamped event information. Log and trace records are common sources of Definitive Diagnostic Data.
Statistical data
Generally, statistical data cannot be used because it lacks the granularity necessary to directly associate it with a user's experience of a problem instance. However, it can be adapted by reducing the sample interval to a value approaching the response time of the system transaction being performed.
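As a simple illustration of correlating timestamped event records with a reported problem instance, the sketch below selects log events that fall within a short window around the time a user reported a slow transaction; the log entries, field layout, and window size are invented for the example:

from datetime import datetime, timedelta

log = [  # (timestamp, event) pairs, e.g. parsed from an application log
    (datetime(2023, 5, 1, 10, 14, 58), "DB connection pool exhausted"),
    (datetime(2023, 5, 1, 10, 15, 2),  "transaction T-481 took 9200 ms"),
    (datetime(2023, 5, 1, 11, 3, 40),  "cache refresh completed"),
]

reported = datetime(2023, 5, 1, 10, 15, 0)   # when the user says the problem occurred
window = timedelta(seconds=30)

candidates = [(t, e) for t, e in log if abs(t - reported) <= window]
for t, e in candidates:
    print(t.isoformat(), e)   # events that can be correlated with the problem instance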
Further information
Definitive Diagnostic Data, S. Kendrick, Sharkfest 2014 Conference
Offord, Paul (2011). RPR: A Problem Diagnosis Method for IT Professionals. Advance Seven Limited.
Data
Information technology | Definitive diagnostic data | Technology | 185 |
11,882,154 | https://en.wikipedia.org/wiki/CD4%20immunoadhesin | CD4 immunoadhesin is a recombinant fusion protein consisting of a combination of CD4 and the fragment crystallizable region, similarly known as immunoglobulin. It belongs to the antibody (Ig) gene family. CD4 is a surface receptor for human immunodeficiency virus (HIV). The CD4 immunoadhesin molecular fusion allow the protein to possess key functions from each independent subunit. The CD4 specific properties include the gp120-binding and HIV-blocking capabilities. Properties specific to immunoglobulin are the long plasma half-life and Fc receptor binding. The properties of the protein means that it has potential to be used in AIDS therapy as of 2017. Specifically, CD4 immunoadhesin plays a role in antibody-dependent cell-mediated cytotoxicity (ADCC) towards HIV-infected cells. While natural anti-gp120 antibodies exhibit a response towards uninfected CD4-expressing cells that have a soluble gp120 bound to the CD4 on the cell surface, CD4 immunoadhesin, however, will not exhibit a response. One of the most relevant of these possibilities is its ability to cross the placenta.
History and significance
CD4 immunoadhesin was first developed in the mid-1990s as a potential therapeutic agent and treatment for HIV/AIDS. The protein is a fusion of the extracellular domain of the CD4 receptor and the Fc domain of human immunoglobulin G (IgG), the most abundant antibody isotype in the human body. The Fc domain of IgG contributes several important properties to the fusion protein, including increased half-life in the bloodstream, enhanced binding to Fc receptors on immune cells, and the ability to activate complement.
The development of CD4 immunoadhesin stems from the observation that the CD4 receptor plays a critical role in the entry of HIV into human cells. The CD4 receptor is used as a primary receptor by HIV to attach to the surface of target cells. HIV then uses a co-receptor, either CCR5 or CXCR4, to facilitate entry into the cell. The ability of CD4 immunoadhesin to block the interaction between the CD4 receptor and HIV was intended to prevent HIV from entering and infecting human cells.
CD4 immunoadhesin has been extensively studied in preclinical and clinical trials as a potential treatment for HIV/AIDS. In addition to its antiviral activity, CD4 immunoadhesin has also been investigated for its potential immunomodulatory effects. For example, the fusion protein has been shown to induce the production of cytokines, such as interleukin-2 (IL-2) and interferon-gamma (IFN-γ), which are important for the activation and proliferation of immune cells.
Despite its potential as a therapeutic agent, the development of CD4 immunoadhesin has faced several challenges. One major obstacle is the emergence of drug-resistant strains of HIV, which can limit the effectiveness of CD4 immunoadhesin in certain patients. Additionally, the need for frequent dosing and the potential for immune responses against the fusion protein have also limited the clinical application of CD4 immunoadhesin.
Nevertheless, knowledge on the function of CD4 immunoadhesin has contributed to increased understanding of the biology of HIV and the mechanisms of viral entry. The protein has also inspired the development of other immunoadhesin molecules, such as CD4-IgG2 and CD4-mimetic compounds, which are being investigated as potential therapies for HIV/AIDS.
Structure and function
CD4 immunoadhesin is a bifunctional protein that has the ability to block HIV infection, inhibit autoreactive T-cell activation, and potentially modulate immune responses. Its structure, which consists of the extracellular domain of CD4 and the Fc region of IgG1, allows for soluble circulation throughout the body.
The extracellular domain of CD4 contains four immunoglobulin-like domains (D1-D4), which are responsible for binding to the major histocompatibility complex (MHC) class II molecules on antigen-presenting cells. The Fc region of IgG1 is responsible for mediating effector functions such as antibody-dependent cell-mediated cytotoxicity (ADCC) and complement activation.
CD4-Ig works by mimicking the binding of CD4 to HIV, thereby preventing the virus from infecting T-helper cells. HIV infects T-helper cells by binding to the CD4 receptor and the co-receptor CCR5 or CXCR4. CD4-Ig binds to the viral envelope glycoprotein gp120, which is responsible for HIV binding to CD4. By binding to gp120, CD4-Ig prevents the virus from binding to the CD4 receptor on T-helper cells, thus preventing infection.
CD4-Ig has also been investigated as a potential treatment for other diseases that involve immune dysregulation, such as multiple sclerosis and rheumatoid arthritis. In these diseases, CD4-Ig may work by inhibiting the activation of autoreactive T-cells. CD4-Ig binds to MHC class II molecules on antigen-presenting cells, thereby preventing the activation of T-helper cells that are specific for self-antigens.
In addition to its role in blocking HIV infection and inhibiting autoreactive T-cell activation, CD4-Ig may also have immunomodulatory effects. CD4 is known to be involved in the regulation of immune responses, and CD4-Ig may therefore have the ability to modulate immune responses in a way that is beneficial for the treatment of various diseases.
CD4 immunoadhesin functions by blocking the interaction between the HIV envelope glycoprotein (gp120) and the CD4 receptor on the surface of CD4-positive cells. By binding to gp120, CD4 immunoadhesin prevents the virus from attaching to and entering host cells, thus inhibiting the spread of HIV infection. CD4 immunoadhesin has been shown to be effective in vitro and in animal models of HIV infection, and has been used in clinical trials as a potential treatment for HIV/AIDS.
Clinical applications
CD4 immunoadhesin has been studied extensively in preclinical and clinical trials as a potential treatment for HIV/AIDS. In a phase I/II clinical trial, CD4 immunoadhesin was found to be safe and well-tolerated in HIV-positive patients, and was able to reduce viral load in some patients. However, the development of CD4 immunoadhesin as a therapeutic agent for HIV/AIDS has limitations, including the emergence of drug-resistant strains of HIV, the need for frequent dosing, and the potential for immune responses against the fusion protein.
In a phase I/II clinical trial conducted by the National Institute of Allergy and Infectious Diseases (NIAID), 25 HIV-positive patients received intravenous infusions of CD4 immunoadhesin over a period of 12 weeks. The trial found that CD4 immunoadhesin was safe and well-tolerated in all patients, with no serious adverse events reported. Additionally, some patients showed a reduction in viral load, although the effect was not sustained after the end of the treatment period.
Despite these results, the development of CD4 immunoadhesin as a therapeutic agent for HIV/AIDS has faced several difficulties. One major obstacle is the emergence of drug-resistant strains of HIV, which can limit the effectiveness of CD4 immunoadhesin in certain patients. Additionally, the need for frequent dosing and the potential for immune responses against the fusion protein have also limited the clinical application of CD4 immunoadhesin.
To address these challenges, researchers have explored various strategies to improve the efficacy and safety of CD4 immunoadhesin. For example, some studies have investigated the use of CD4 immunoadhesin in combination with other antiretroviral therapies to enhance the antiviral effect and reduce the risk of drug resistance. Other studies have focused on engineering CD4 immunoadhesin variants with improved pharmacokinetic properties and reduced immunogenicity.
Future uses
CD4 immunoadhesin has been used in the treatment of various diseases; many of which are still being studied and developed. Here are some future uses of CD4 immunoadhesin:
HIV/AIDS: CD4 immunoadhesin has been studied extensively for its potential use in the treatment of HIV/AIDS. It works by binding to the viral envelope protein and blocking the entry of the virus into CD4+ T cells, thereby inhibiting viral replication. A phase I/II clinical trial involving CD4 immunoadhesin showed promising results in reducing the viral load in HIV-infected patients. Further studies are underway to explore the efficacy of CD4 immunoadhesin as a therapeutic agent for HIV/AIDS.
Autoimmune diseases: CD4 immunoadhesin has been investigated for its potential use in the treatment of autoimmune diseases such as rheumatoid arthritis, multiple sclerosis, and psoriasis. It acts by binding to the CD4 receptor on T cells and inhibiting the activation and proliferation of autoreactive T cells. Preclinical studies have shown that CD4 immunoadhesin can reduce disease severity and improve clinical outcomes in animal models of autoimmune diseases.
Cancer: CD4 immunoadhesin has shown potential in the treatment of cancer, particularly in enhancing the immune response against cancer cells. It works by targeting the CD4 receptor on T cells and stimulating the production of cytokines and chemokines that can promote tumor cell death. CD4 immunoadhesin has been shown to be effective in preclinical studies of various types of cancer, including melanoma, breast cancer, and leukemia.
Inflammatory diseases: CD4 immunoadhesin has been investigated for its potential use in the treatment of inflammatory diseases such as asthma and chronic obstructive pulmonary disease (COPD). It acts by binding to the CD4 receptor on T cells and reducing the release of pro-inflammatory cytokines and chemokines that cause inflammation in the lungs. Preclinical studies have shown that CD4 immunoadhesin can reduce inflammation and improve lung function in animal models of asthma and COPD.
References
Engineered proteins
Immunology | CD4 immunoadhesin | Biology | 2,268 |
584,238 | https://en.wikipedia.org/wiki/Cladogenesis | Cladogenesis is an evolutionary splitting of a parent species into two distinct species, forming a clade.
This event usually occurs when a few organisms end up in new, often distant areas or when environmental changes cause several extinctions, opening up ecological niches for the survivors and causing population bottlenecks and founder effects changing allele frequencies of diverging populations compared to their ancestral population. The events that cause these species to originally separate from each other over distant areas may still allow both of the species to have equal chances of surviving, reproducing, and even evolving to better suit their environments while still being two distinct species due to subsequent natural selection, mutations and genetic drift.
Cladogenesis is in contrast to anagenesis, in which an ancestral species gradually accumulates change, and eventually, when enough change has accumulated, the species is sufficiently distinct from its original form that it can be labeled a new species. With anagenesis, the lineage in a phylogenetic tree does not split.
To determine whether a speciation event is cladogenesis or anagenesis, researchers may use simulation, evidence from fossils, molecular evidence from the DNA of different living species, or modelling. It has however been debated whether the distinction between cladogenesis and anagenesis is necessary at all in evolutionary theory.
See also
Anagenesis
Evolutionary biology
Speciation
References
Evolutionary biology concepts
Phylogenetics | Cladogenesis | Biology | 280 |
36,845,407 | https://en.wikipedia.org/wiki/3D%20cell%20culturing%20by%20magnetic%20levitation | The Magnetic Levitation Method (MLM) is a technique for growing 3D cell cultures. In this approach, cells are treated with magnetic nanoparticles and exposed to spatially varying magnetic fields produced by neodymium magnetic drivers. The process causes cells to levitate to the air-liquid interface within a standard petri dish. The magnetic nanoparticle assemblies consist of magnetic iron oxide nanoparticles, gold nanoparticles, and cell-adhesive peptide sequences.
This method can be applied to cultures with five hundred to millions of cells and is adaptable for use in single-dish systems as well as high-throughput, low-volume systems. Additionally, magnetized cells can be utilized as building blocks for magnetic 3D bioprinting.
Overview
3D cell culture methods have been developed to enable research into the behavior of cells in an environment that represents their interactions in-vivo more accurately.
3D cell culturing by magnetic levitation uses biocompatible polymer-based reagents to deliver magnetic nanoparticles to individual cells, so that an applied magnetic driver can levitate cells off the bottom of the cell culture dish, rapidly bringing cells together near the air-liquid interface. This act initiates cell-cell interactions in the absence of any artificial surface or matrix. Magnetic fields are designed to form 3D multicellular structures, including the expression of extracellular matrix proteins. The matrix, protein expression, and response to exogenous agents of the resulting tissue show similarity to in-vivo results.
History
3D cell culturing by the magnetic levitation method (MLM) was developed through a collaboration between scientists at Rice University and the University of Texas MD Anderson Cancer Center in 2008. The 3D cell culturing technology was later licensed and commercialized by Nano3D Biosciences.
Mechanism
The mechanism of the magnetic levitation model in 3D cell culturing combines various techniques within the frame of nanobiotechnology. One approach to the process is described below.
At the beginning of the process, magnetite nanoparticles are added, then dispersed uniformly throughout the cell culture. After the cell culture containing the nanoparticles has been allowed to incubate, it is moved to a petri dish, and a magnetic drive is placed on top of the petri dish. When an external magnetic field is applied through the drive, it causes the cell culture mixture, still containing the magnetic nanoparticles, to levitate within the petri dish.
The levitation results in immediate cell-cell interaction. After the mixture disperses and stretches, there is gradual formation of 3D structures that are visible after about 4 hours. The magnetic iron oxide nanoparticles are described as the "nanoshuttle": their magnetic properties allow the cells to rise within the culture they are added to under the external magnetic field, thus "shuttling".
Protein expression
Patterns of protein expression in levitated cultures resemble the patterns observed in-vivo. For example, as shown in the figure on the right, N-cadherin expression in levitated human glioblastoma (GBM) cells was similar to that seen in human tumor xenografts grown in immunodeficient mice (comparing the left and middle images), while standard 2D culture showed much weaker expression that did not match xenograft distribution (comparing the left and right images). The transmembrane protein N-cadherin is often used as an indicator of in-vivo-like tissue assembly in 3D culturing.
Referring to the figure, in the mouse and levitated culture (left and middle image), N-cadherin is clearly concentrated in the membrane, and also present in cytoplasm and cell junctions, whereas the 2D system (right image) shows N-cadherin in the cytoplasm and nucleus, but absent from the membrane.
Applications
Co-culturing, magnetic manipulation, and invasion assays
One of the challenges of in vitro modelling of complex tissues is the difficulty of co-culturing different cell types. Co-culturing of different cell types can be achieved at the onset of levitation, either by mixing different cell types before levitation, or by magnetically guiding 3D cultures in an invasion assay format.
Co-culturing in a realistic tissue architecture is important for accurately modeling in-vivo conditions. One example is increasing the accuracy of cellular assays, as shown in the figure on the right. In the figure, the human GBM cells and normal human astrocytes (NHA) are cultured separately and then magnetically guided together (left, time 0). Invasion of GBM into NHA in 3D culture provides an assay for basic cancer biology and drug screening (right, 12h to 252h).
Magnetic levitation has shown potential for maintaining cell viability and simulating in vivo conditions. However, its scalability and efficacy in comparison to traditional culturing methods have been topics of discussion.
Vascular simulation with stem cells
By facilitating the assembly of different populations of cells using the MLM, consistent generation of organoids, termed adipospheres, capable of simulating the complex intercellular interactions of endogenous white adipose tissue (WAT) can be achieved.
Co-culturing 3T3-L1 preadipocytes in a 3D space with murine endothelial bEND.3 cells can create a vascular-like network assembly with concomitant lipogenesis in perivascular cells (refer to the attached figure).
In addition to cell lines, organogenesis of white adipose tissue (WAT) can be simulated from primary cells.
Adipocyte-depleted stromal vascular fraction (SVF) containing adipose stromal cells (ASC), endothelial cells, and infiltrating leukocytes derived from mouse WAT was cultured in 3D. This revealed organoids with a striking hierarchical organization, featuring distinct capsules and internal large vessel-like structures lined with endothelial cells, as well as perivascular localization of ASC.
Upon adipogenesis induction of either 3T3-L1 adipospheres or adipospheres derived from SVF, the cells efficiently formed large lipid droplets typical of white adipocytes in-vivo, whereas only smaller lipid droplet formation is achievable in 2D. This indicates intercellular signalling that better recapitulates WAT organogenesis.
This MLM for 3D co-culturing creates a liposphere appropriate for WAT modeling ex vivo and provides a new platform for functional screens to identify molecules bioactive toward individual adipose cell populations. It can also be adopted for WAT transplantation applications and aid other approaches to WAT-based cell therapy.
Organized co-culturing to create in-vivo-like tissue
The use of additional manipulation tools may be needed to organize 3D co-cultures into a configuration similar enough to native tissue architecture.
Endothelial cells (PEC), smooth muscle cells (SMC), fibroblasts (PF), and epithelial cells (EpiC) cultured through magnetic levitation can be sequentially layered in a drag-and-drop manner to create bronchioles that maintain phenotype and induce extracellular matrix formation.
Cell types cultured
Listed below are the cell types (primary and cell lines) that have been successfully cultured by the magnetic levitation method.
References
Cell biology
Cell culture
Biotechnology
Nanoparticles
Molecular biology techniques | 3D cell culturing by magnetic levitation | Chemistry,Biology | 1,523 |
11,466,066 | https://en.wikipedia.org/wiki/Puccinia%20verruca | Puccinia verruca is a plant pathogen that causes rust on safflower.
See also
List of Puccinia species
References
External links
USDA ARS Fungal Database
Fungal plant pathogens and diseases
verruca
Fungi described in 1879
Fungus species | Puccinia verruca | Biology | 52 |
29,765,867 | https://en.wikipedia.org/wiki/Slice%20preparation | The slice preparation or brain slice is a laboratory technique in electrophysiology that allows the study of neurons from various brain regions in isolation from the rest of the brain, in an ex-vivo condition. Brain tissue is initially sliced via a tissue slicer then immersed in artificial cerebrospinal fluid (aCSF) for stimulation and/or recording. The technique allows for greater experimental control, through elimination of the effects of the rest of the brain on the circuit of interest, careful control of the physiological conditions through perfusion of substrates through the incubation fluid, to precise manipulation of neurotransmitter activity through perfusion of agonists and antagonists. However, the increase in control comes with a decrease in the ease with which the results can be applied to the whole neural system.
Slice preparation techniques
Free-hand sectioning is a preparation technique in which a skilled operator uses a razor blade for slicing. The blade is wetted with an isotonic solution before cutting to avoid smudging the tissue. This method has several drawbacks, such as limits on sample size and difficulty in observing progress. Modern microtome devices such as the Compresstome are used to prepare slices, as these devices have fewer limitations.
Benefits
When investigating mammalian CNS activity, slice preparation has several advantages and disadvantages when compared to in vivo study.
Slice preparation is both faster and cheaper than in vivo preparation, and does not require anaesthesia beyond the initial sacrifice. The removal of the brain tissue from the body removes the mechanical effects of heartbeat and respiration, which allows for extended intracellular recording. The physiological conditions of the sample, such as oxygen and carbon dioxide levels, or pH of the extracellular fluid can be carefully adjusted and maintained. Slice work under a microscope also allows for careful placement of the recording electrode, which would not be possible in the closed in vivo system. Removing the brain tissue means that there is no blood–brain barrier, which allows drugs, neurotransmitters or their modulators, or ions to be perfused throughout the neural tissue. Furthermore, the slice preparation method can also be used as a brain-injury model. Finally, whilst the circuit isolated in a brain slice represents a simplified model of the circuit in situ, it maintains structural connections that are lost in cell cultures, or homogenised tissue.
Limitations
Slice preparation also has some drawbacks. Most obviously, an isolated slice lacks the usual input and output connections present in the whole brain. Further, the slicing process may itself compromise the tissue. To minimize complications in the slicing process, a more sophisticated tissue slicer may be used, such as the Compresstome, a type of vibrating microtome designed to maximize the amount of viable tissue. Additionally, slicing of the brain can damage the top and bottom of the section, but beyond that, the process of decapitation and extraction of the brain before the slice is placed in solution may have effects on the tissue which are not yet understood. The slice preparation procedure itself induces a rapid and robust phenotype change in microglia, the consequences of which need to be taken into consideration when interpreting results. During recording, the tissue also "ages", degrading at a faster rate than in the intact animal. Finally, the artificial composition of the bathing solution means that the necessary compounds may not be present in their natural relative concentrations.
See also
References
Schurr, Avital, Brain Slice Preparation in Electrophysiology, Kopf Carrier, Vol 15
Neurophysiology
Electrophysiology
Laboratory techniques | Slice preparation | Chemistry | 726 |
14,755,479 | https://en.wikipedia.org/wiki/CHRNB2 | Neuronal acetylcholine receptor subunit beta-2 is a protein that in humans is encoded by the CHRNB2 gene.
Neuronal acetylcholine receptors are homo- or heteropentameric complexes composed of homologous alpha and beta subunits. They belong to a superfamily of ligand-gated ion channels which allow the flow of sodium and potassium across the plasma membrane in response to ligands such as acetylcholine and nicotine. This gene encodes one of several beta subunits. Mutations in this gene are associated with autosomal dominant nocturnal frontal lobe epilepsy.
It has been discovered that suppression, rather than stimulation, of B2-containing nAChR currents yields an antidepressant effect. This is believed to explain the significantly increased prevalence of cigarette smoking in depressed individuals and the profound rise in depressive symptoms during abstinence.
Interactive pathway map
See also
Nicotinic acetylcholine receptor
References
Further reading
External links
Ion channels
Nicotinic acetylcholine receptors | CHRNB2 | Chemistry | 216 |
227,912 | https://en.wikipedia.org/wiki/Rose%20hip | The rose hip or rosehip, also called rose haw and rose hep, is the accessory fruit of the various species of rose plant. It is typically red to orange, but ranges from dark purple to black in some species. Rose hips begin to form after pollination of flowers in spring or early summer, and ripen in late summer through autumn.
Propagation
Roses are propagated from rose hips by removing the achenes that contain the seeds from the hypanthium (the outer coating) and sowing just beneath the surface of the soil. The seeds can take many months to germinate. Most species require chilling (stratification), with some such as Rosa canina only germinating after two winter chill periods.
Uses
Rose hips are used in bread and pies, jam, jelly, marmalade, syrup, soup, tea, wine, and other beverages.
Rose hips can be eaten raw, like berries, if care is taken to avoid the hairs inside the fruit. These urticating hairs are used as itching powder.
A few rose species are sometimes grown for the ornamental value of their hips, such as Rosa moyesii, which has prominent, large, red bottle-shaped fruits. Rosa macrophylla 'Master Hugh' has the largest hips of any readily available rose.
Rose hips are commonly used in herbal tea, often blended with hibiscus. An oil is also extracted from the seeds. Rose hip soup, known as in Swedish, is especially popular in Sweden. Rhodomel, a type of mead, is made with rose hips.
Rose hips can be used to make , the traditional Hungarian fruit brandy popular in Hungary, Romania, and other countries sharing Austro-Hungarian history. Rose hips are also the central ingredient of cockta, the fruity-tasting national soft drink of Slovenia.
Dried rose hips are also sold for crafts and home fragrance purposes. The Inupiat mix rose hips with wild redcurrant and highbush cranberries and boil them into a syrup.
Nutrients and research
Wild rose hip fruits are particularly rich in vitamin C, containing 426 mg per 100 g or 0.4% by weight (w/w). RP-HPLC assays of fresh rose hips and several commercially available products revealed a wide range of L-ascorbic acid (vitamin C) content, ranging from 0.03 to 1.3%.
Rose hips contain the carotenoids beta-carotene, lutein, zeaxanthin, and lycopene. A meta-analysis of human studies examining the potential for rose hip extracts to reduce arthritis pain concluded there was a small effect requiring further analysis of safety and efficacy in clinical trials. Use of rose hips is not considered an effective treatment for knee osteoarthritis.
See also
Rose hip seed oil
Rosa moschata
Rosa rubiginosa
Rosa gymnocarpa
Rosa roxburghii
References
External links
Fruit morphology
Herbal teas
Roses
Food ingredients | Rose hip | Technology | 612 |
76,565,600 | https://en.wikipedia.org/wiki/Microscopical%20researches%20into%20the%20accordance%20in%20the%20structure%20and%20growth%20of%20animals%20and%20plants | Microscopical researches into the accordance in the structure and growth of animals and plants is a famous treatise by Theodor Schwann published in 1839 which officially formulated the basis of the cell theory. The original title was Mikroskopische Untersuchungen über die Uebereinstimmung in der Struktur und dem Wachsthum der Thiere und Pflanzen. The book has been called "a conspicuous milestone in nineteenth century biology" by Karl Sudhoff and "epoch making" By Francis Münzer.
The book, originally published in German, was translated to English in 1847 by Henry Spencer Smith in an edition that also contained the treatise Phytogenesis, by Matthias Schleiden.
Besides the theoretical work, that Schwann called a "philosophical" section of general anatomy, Schwann provided several plates with drawings of cells and tissues and discussions of observations of other microscopists.
Cell theory
Schwann dedicated a chapter of the treatise to explicitly formulate the cell theory, stating that ("the elementary parts of all tissues are formed of cells” and that “there is one universal principle of development for the elementary parts of organisms... and this principle is in the formation of cells" (Henry Smith's translation, 1847). His book had the goal to prove via observations that the cell theory put forth for plants by Matthias Schleiden was equally valid for animals.
Schwann cell
The book is credited with the first description of what would later be called Schwann cell, a type of glial cell. The description of the cells was evident from passages such as:
and
Metabolism
The book is also credited with the introduction of the term "metabolism" for the following quote in the chapter "Theory of Cells":
References
Cell biology
Biology books
1839 in science | Microscopical researches into the accordance in the structure and growth of animals and plants | Biology | 373 |
62,541,410 | https://en.wikipedia.org/wiki/NGC%20874 | NGC 874 is a spiral galaxy located in the Cetus constellation. It is estimated to be 572 million light-years away from the Milky Way galaxy and has a diameter of approximately 80,000 light-years. NGC 874 was discovered in 1886 by Frank Muller.
References
Cetus
874
Spiral galaxies
008663 | NGC 874 | Astronomy | 68 |
44,970,257 | https://en.wikipedia.org/wiki/Kepler-438b | Kepler-438b (also known by its Kepler Object of Interest designation KOI-3284.01) is a confirmed near-Earth-sized exoplanet. It is likely rocky. It orbits on the inner edge of the habitable zone of a red dwarf, Kepler-438, about 460.2 light-years from Earth in the constellation Lyra. It receives 1.4 times our solar flux. The planet was discovered by NASA's Kepler spacecraft using the transit method, in which the dimming effect that a planet causes as it crosses in front of its star is measured. NASA announced the confirmation of the exoplanet on 6 January 2015.
Characteristics
Mass, radius and temperature
Kepler-438b is an Earth-sized planet, an exoplanet that has a mass and radius close to that of Earth. It has a radius of 1.12 Earth radii and an unknown mass. Its equilibrium temperature is close to that of Earth.
Host star
The planet orbits an (M-type) red dwarf star named Kepler-438. The star has a mass of 0.54 solar masses and a radius of 0.52 solar radii, both roughly half those of the Sun. It has a surface temperature of 3748 K, compared with 5778 K for the Sun, and is estimated to be about 4.4 billion years old, only 200 million years younger than the Sun.
The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14.467. Therefore, it is too dim to be seen with the naked eye.
Orbit and possible moons
Kepler-438b orbits its parent star once every 35 days and 5 hours. It is likely tidally locked due to its close distance to its star. A search for exomoons by the Hunt for Exomoons with Kepler project around Kepler-438b placed a maximum mass of a hypothetical moon at 29% that of the planet.
Habitability
The planet was announced as orbiting within the habitable zone of Kepler-438, a region where liquid water could exist on the surface of the planet. However, it has been found that the planet is subjected to powerful radiation from its parent star roughly every 100 days, in storms much more violent than the stellar flares emitted by the Sun and of a kind that would be capable of sterilizing life on Earth.
Researchers at the University of Warwick say that Kepler-438b is not habitable due to the large amount of radiation it receives. The question of what makes a planet habitable is much more complex than having a planet located at the right distance from its host star so that water can be liquid on its surface: various geophysical and geodynamical aspects, the radiation, and the host star's plasma environment can influence the evolution of planets and life, if it originated. The planet is more likely to resemble a larger and cooler version of Venus.
Discovery and follow-up studies
In 2009, NASA's Kepler spacecraft was completing its observations of stars with its photometer, the instrument it uses to detect transit events, in which a planet crosses in front of and dims its host star for a brief and roughly regular period of time. In this last test, Kepler observed stars in the Kepler Input Catalog, including Kepler-438; the preliminary light curves were sent to the Kepler science team for analysis, who chose obvious planetary companions from the bunch for follow-up at observatories. Observations for the potential exoplanet candidates took place between 13 May 2009 and 17 March 2012. After observing the respective transits, which for Kepler-438b occurred roughly every 35 days (its orbital period), it was eventually concluded that a planetary body was responsible for the periodic 35-day transits. The discovery, along with the planetary systems of the stars Kepler-442, Kepler-440 and Kepler-443, was announced on January 6, 2015.
At roughly 460 light-years distant, Kepler-438b is too far from Earth for either current telescopes, or even the next generation of planned telescopes, to accurately determine its mass or whether it has an atmosphere. The Kepler spacecraft can only focus on a small, fixed region of the sky, but the next generation of planet-hunting space telescopes, such as TESS and CHEOPS, will have more flexibility. Exoplanetary systems, with stars less distant than Kepler-438, can then be studied in tandem with the James Webb Space Telescope and ground-based observatories like the future Square Kilometer Array.
See also
Kepler-442b
Kepler-452b
List of potentially habitable exoplanets
TrES-2b
References
External links
NASA – Mission overview.
NASA – Kepler Discoveries – Summary Table.
NASA – Kepler-438b at The NASA Exoplanet Archive.
NASA – Kepler-438b at The Extrasolar Planets Encyclopaedia.
Habitable Exoplanets Catalog at UPR-Arecibo.
438b
Exoplanets discovered in 2015
Exoplanets in the habitable zone
Lyra
Transiting exoplanets
Near-Earth-sized exoplanets in the habitable zone | Kepler-438b | Astronomy | 1,053 |
58,883,860 | https://en.wikipedia.org/wiki/An%20Introduction%20to%20the%20Philosophy%20of%20Mathematics | An Introduction to the Philosophy of Mathematics is a 2012 textbook on the philosophy of mathematics by Mark Colyvan. It has a focus on issues in contemporary philosophy, such as the mathematical realism–anti-realism debate and the philosophical significance of mathematical practice, and largely skips over historical debates. It covers a range of topics in contemporary philosophy of mathematics including various forms of mathematical realism, the Quine–Putnam indispensability argument, mathematical fictionalism, mathematical explanation, the "unreasonable effectiveness of mathematics", paraconsistent mathematics, and the role of mathematical notation in the progress of mathematics. The book was praised as accessible and well-written and the reaction to its contemporary focus was largely positive, although some academic reviewers felt that it should have covered the historical debates over logicism, formalism and intuitionism in more detail. Other aspects of the book that received praise were its coverage of mathematical explanation, its appeal to mathematicians and other non-philosophers, and its discussion questions and further readings, whilst its epilogue and short length received a more mixed reception.
Overview
An Introduction to the Philosophy of Mathematics is a textbook on the philosophy of mathematics focusing on the issue of mathematical realism, i.e. the question of whether or not there are mathematical objects, and mathematical explanation. Colyvan described his intention for the book as being a textbook that "[gets] beyond the first half of the twentieth century and [explores] the issues capturing the attention of contemporary philosophers of mathematics". As a result, the book focuses less on historical debates in the philosophy of mathematics than other similar textbooks and more on contemporary issues, including the philosophy of mathematical practice.
Summary
The book has eight chapters and an epilogue with each chapter ending with a list of discussion questions and further readings. Chapter 1 briefly covers what Colyvan calls the "big isms" which dominated early 20th century philosophy of mathematics: logicism, formalism and intuitionism. It then turns to the philosophical issues raised by Paul Benacerraf in his papers "What Numbers Could Not Be" (1965) and "Mathematical Truth" (1973).
Chapter 2 concerns the limits of mathematics and relevant constraining mathematical theorems. It discusses the Löwenheim–Skolem theorem and its connection with Cantor's theorem, including a proof of Cantor's theorem and an explanation of why the two theorems are not contradictory. It also discusses Gödel's incompleteness theorems and Gödel and Cohen's work on the independence of the continuum hypothesis. These results are used to motivate the debate between mathematical realism and anti-realism.
Following Hilary Putnam, Colyvan distinguishes between realism about mathematical truths and realism about mathematical objects in chapter 3. He argues that "it seems a very quick path from objective truth to objects" and so focuses subsequent discussion on realism about mathematical objects. The chapter goes on to distinguish between various types of realism, including full-blooded platonism, structuralism and the physicalist realism of writers such as Penelope Maddy, and to cover naturalism and the Quine–Putnam indispensability argument. Objections to the indispensability argument from Maddy, Hartry Field and Elliott Sober are also presented.
Chapter 4 focuses on mathematical anti-realism (aka nominalism), specifically mathematical fictionalism. It gives an introduction to the fictionalism of Hartry Field and his nominalisation program, which Colyvan calls the hard road to nominalism. Colyvan also covers so-called easy roads to nominalism; such views are "easy" because they do not attempt to remove mathematical entities from our best scientific theories as Field's nominalisation project attempts to. These include the fictionalism of Jody Azzouni and the metaphorical account of mathematical language propounded by Stephen Yablo. In the chapter, Colyvan objects to Yablo's views, claiming that mathematics appears in scientific explanations and that where metaphorical language is used in explanations, it is being used as a shorthand for a non-metaphorical explanation or else it must be interpreted literally.
Following on from this discussion, chapter 5 concerns mathematical explanation and the "explanatory turn" in the realism–anti-realism debate in which the indispensability argument was reframed in terms of the explanatory power of mathematics. It begins with a consideration of different theories of explanation, resulting in Colyvan advocating for a unification account. This is the view that explanations work by bringing multiple different phenomena under the same theoretical framework. He then distinguishes between intra-mathematical explanations, mathematical explanations of mathematical facts, and extra-mathematical explanations, mathematical explanations of non-mathematical, empirical facts. He uses the unification account of explanation to attempt to explain the difference between mathematical proofs that are explanatory and those that are not, citing proofs of Euclid's theorem, Rolle's theorem and the formula for the sum of the first n natural numbers as examples of non-explanatory proofs. He then moves on to extra-mathematical explanations, arguing that mathematics is more than just a descriptive tool and provides genuine explanations of empirical facts. He presents the examples of the mathematical explanations of the life cycles of periodical cicadas, why hive-bee honeycomb has a hexagonal structure, the distribution of asteroids across the solar system, and Lorentz contraction to support his argument.
Chapter 6 is about the applicability of mathematics and its "unreasonable effectiveness" when applied within science. To illustrate the unreasonable effectiveness of mathematics, Colyvan writes about how James Clerk Maxwell formulated the Maxwell–Ampère law as an analogue of Newtonian gravitational theory, but that it produced completely novel predictions that ended up being confirmed. He attempts to explain this unreasonable effectiveness by providing a mapping account of mathematical application. According to this account, mathematical models act as maps of physical systems by abstracting away from particular details to more structural features of the system. In this way, the abstract structures of mathematics can be used to represent physical systems via similarity relations. Colyvan presents the case study of mathematical models in population ecology to illustrate this mapping account.
Chapter 7 explores issues surrounding paraconsistent mathematics and logic. Colyvan argues that mathematical theory can be inconsistent whilst still being useful, pointing to naive set theory and early infinitesimal calculus as examples of mathematical theories that were later proved to be inconsistent but were fruitfully worked on by mathematicians. He also proposes that the logic used by mathematicians must be some kind of contradiction-tolerant or paraconsistent logic rather than classical logic to account for this fact. He provides the Logic of Paradox (LP) as an example of such a paraconsistent logic which does not lead to the principle of explosion by using modified truth tables and a third truth value i which he suggests should be referred to as "true and false".
Chapter 8 is on the philosophical significance of mathematical notation. Colyvan argues that mathematical progress can sometimes be attributed to changes in notation. The chapter includes a number of examples to support this idea. For example, Colyvan says that the shift from Roman numerals to Arabic numerals could have prompted mathematical progress because Arabic numerals, unlike Roman numerals, have recursion built in. Another example provided is the relevance of mathematical notation in the proof of the impossibility of squaring the circle which is used to illustrate the idea that the same procedure being represented in different ways can reveal non-obvious connections within mathematics. The chapter also considers the importance of definition in mathematics using the example of the evolving definition of the term polyhedron.
The epilogue is titled "Desert Island Theorems" and contains a list of 20 important theorems and 5 open problems which Colyvan believes all philosophers of mathematics should know. There is also a two-page list of "interesting numbers". Short discussions on the philosophical importance and impact of each of these theorems, problems and numbers is also included after each item.
Reception
The contemporary focus of the book was met with praise. Noah Friedman-Biglin, reviewing the book in Metascience, felt this feature of the book was "distinctive" and praised the coverage of mathematical explanation which he called "a topic which is attracting the interest of many professional philosophers of mathematics now". Richard Pettigrew, reviewing the book in The Bulletin of Symbolic Logic, felt that the book "really begins" at chapter 5 where it moves from a "fairly standard, if admirably clear, presentation of well-worn material" to "an exciting exploration of nascent topics on which there is still relatively little literature." He said that the book's use of examples whilst exploring these topics instead of fully formed arguments was a feature of the book that "will provoke [students] to formulate their own philosophical hypotheses and arguments more readily than more traditional textbooks." He concluded that "while some of the book might lack a little of the detail and rigour I'd like future students of the topic to value, Colyvan has written the first textbook that initiates the student into the current period in the philosophy of mathematics." In a review in Teaching Philosophy, Carl Wagner said that "Colyvan has a real talent for conveying the excitement of these ongoing debates, and encouraging readers to develop their own views on these issues". He described chapter 5 as "[standing] out even from the other uniformly excellent chapters of this book" and gave specific praise to its coverage of extra-mathematical explanations for being "particularly interesting". Jean-Pierre Marquis characterised the book in Mathematical Reviews as "a warm breeze after a cold winter in the rarefied atmosphere of the philosophy of mathematics." He argued that "[f]or too long now, the field has been frozen in the age of formalism, logicism and intuitionism" and that with regards to its goal of presenting more contemporary material, the book was "a splendid success". The contemporary focus of the book, as well as its use of actual mathematics, were also identified as interesting aspects of the book by Mark Hunacek, who reviewed the book for the Mathematical Association of America.
The book was also widely characterised as accessible and well-written. David Irvine said in a review in Philosophia Mathematica that the book was among the best textbooks on the philosophy of mathematics released since 2000, alongside Alexander George and Daniel Velleman's Philosophies of Mathematics, Stewart Shapiro's Thinking about Mathematics and Michael Potter's Set Theory and Its Philosophy. He said that the book's "knack for jumping right to the heart of the issue" meant that it was "[n]ever overwhelming" and concluded that it was "a pleasure to teach from and [that he could] report that students having their first exposure to topics in the philosophy of mathematics have found it to be both accessible and stimulating." Zach Weber said of the book in the Australasian Journal of Philosophy that "Colyvan has condensed his own body of research into a highly accessible textbook." In a review in Philosophy in Review, Sam Baron called the book "beautifully written" and said that it "[exemplifies] the key features that a textbook in philosophy ought to have: it is clear, lively and enjoyable to read." Hunacek described the book as "lively and entertaining" as well as "a chatty, interesting book with an agenda that sets it somewhat apart from many other books on the [philosophy of mathematics]". Cristian Soto reviewed the book in Critica: Revista Hispanoamericana de Filosofía, calling the book "an accurate and accessible preamble to some of the most interesting riddles in the [philosophy of mathematics]". Noah Friedman-Biglin, in Metascience, praised the book's writing style as "accessible" and characterised the book as "a fine contribution to a crowded area: it provides a lucidly written, non-technical introduction to some topics in the current literature on philosophy of mathematics." Marquis said that the book was "very well written and a pleasure to read" and that the chapters were "short, clear and well structured".
The book's audience and suitability as a university course text were covered in multiple reviews. Baron felt that the accessible treatment of mathematical results in the book paired with the way it "seamlessly weaves together introductory material on the debate over mathematical realism with state of the art research" made it appropriate for undergraduate or postgraduate courses. He also said that the book "covers a surprisingly wide range of topics" which he felt increased its utility in creating courses with different focuses. Friedman-Biglin felt that "students will find this book an excellent place to begin studying philosophy of mathematics, and it could easily serve as the basis for an interesting course for undergraduates." Weber felt that the book's "conversational style and brisk pacing" made it "clearly designed for a lively undergraduate course". However, he thought the book was short, being suited more to a half-semester or summer course, or as a starting point for discussions. Hunacek said that his main concern with the book was its short length which led him to wonder whether it could support a full-semester course. Nonetheless, he felt that it "has considerable value apart from its use as a [course] text" and said that "a mathematician might enjoy reading this (as I did) as a way of learning, in a painless and entertaining way, about interesting ideas". Overall, he said that he enjoyed and recommended the book. Wagner said "This book, while perhaps written primarily for philosophy students, could also be very profitably read by students and teachers of mathematics. Indeed, this reviewer hopes to use it both in a capstone course for undergraduate mathematics majors, and in a graduate seminar for secondary school mathematics teachers." Soto similarly recommended the book for mathematicians and scientists as well as philosophers, saying that it provided an "insightful guide to debates that encompass their areas".
Some reviewers discussed the coverage of certain topics in the book. Irvine felt that "If there is a weakness with the book, it is that the traditional debates over logicism, formalism, and intuitionism are covered in less than half a dozen pages, leaving readers wondering what all the fuss was about." He said that if a second edition was ever released, it should expand on these topics. He also said that he would have preferred if the book included more proofs such as a proof of Russell's paradox. Baron similarly stated that some might find the brief coverage of the "big isms" unsatisfying but argued that it was appropriate given the books focus on the issue of realism which Baron calls "largely orthogonal to the big isms charted in Chapter One." In contrast to these comments, Wagner called Colyvan's coverage of the "big isms" in chapter 1 "a masterly piece of compressed exposition". Pettigrew commented on the lack of coverage of category theory, reverse mathematics, and automated reasoning and computer-aided proofs but went on to say "no textbook can cover all topics, and one might feel that these belong more naturally to a more advanced course in the subject." Friedman-Biglin also felt that topics such as work on the nature of mathematical truth and the foundations of mathematics were missing from the book. He argued that Colyvan excluded these topics due to a desire to keep the book less technical but felt that he "goes too far in trying to mitigate its effect". However, he felt that concerns about the lack of mathematical details in some areas do not "carry much weight" as "the main line of argument in Colyvan's book might have been obscured by including too many formalisms". Marquis said on the topic that "One could quibble about this topic or that one, this reference or that one, but I think that these criticisms would miss the point. As an introduction to the field, the choice of topics proposed is entirely justified."
Pettigrew said of the epilogue that he would have preferred if it covered fewer theorems and more proofs and applications. Nonetheless, he said that "it is certainly a valuable resource for a student entering the philosophy of mathematics without a strong mathematical background." Hunacek said that he was "puzzled" by the inclusion of the epilogue which he said contained items which "seemed to have no particular philosophical significance". Wagner praised the epilogue, calling it "excellent" whilst suggesting some changes to its presentation of the theorems covered. Weber praised the further readings as "excellent guides for further study". Marquis also praised the inclusion of discussion questions and further readings, calling it "a wonderful initiative".
References
External links
An Introduction to the Philosophy of Mathematics at Cambridge University Press
2012 non-fiction books
Cambridge University Press books
Philosophy of mathematics literature
Philosophy textbooks | An Introduction to the Philosophy of Mathematics | Mathematics | 3,456 |
36,495,578 | https://en.wikipedia.org/wiki/Geodesics%20on%20an%20ellipsoid | The study of geodesics on an ellipsoid arose in connection with geodesy specifically with the solution of triangulation networks. The figure of the Earth is well approximated by an oblate ellipsoid, a slightly flattened sphere. A geodesic is the shortest path between two points on a curved surface, analogous to a straight line on a plane surface. The solution of a triangulation network on an ellipsoid is therefore a set of exercises in spheroidal trigonometry .
If the Earth is treated as a sphere, the geodesics are great circles (all of which are closed) and the problems reduce to ones in spherical trigonometry. However, Newton (1687) showed that the effect of the rotation of the Earth results in its resembling a slightly oblate ellipsoid: in this case, the equator and the meridians are the only simple closed geodesics. Furthermore, the shortest path between two points on the equator does not necessarily run along the equator. Finally, if the ellipsoid is further perturbed to become a triaxial ellipsoid (with three distinct semi-axes), only three geodesics are closed.
Geodesics on an ellipsoid of revolution
There are several ways of defining geodesics. A simple definition is as the shortest path between two points on a surface. However, it is frequently more useful to define them as paths with zero geodesic curvature—i.e., the analogue of straight lines on a curved surface. This definition encompasses geodesics traveling so far across the ellipsoid's surface that they start to return toward the starting point, so that other routes are more direct, and includes paths that intersect or re-trace themselves. Short enough segments of a geodesic are still the shortest route between their endpoints, but geodesics are not necessarily globally minimal (i.e. shortest among all possible paths). Every globally-shortest path is a geodesic, but not vice versa.
By the end of the 18th century, an ellipsoid of revolution (the term spheroid is also used) was a well-accepted approximation to the figure of the Earth. The adjustment of triangulation networks entailed reducing all the measurements to a reference ellipsoid and solving the resulting two-dimensional problem as an exercise in spheroidal trigonometry .
It is possible to reduce the various geodesic problems into one of two types. Consider two points: at latitude and longitude and at latitude and longitude (see Fig. 1). The connecting geodesic (from to ) is , of length , which has azimuths and at the two endpoints. The two geodesic problems usually considered are:
the direct geodesic problem or first geodesic problem, given , , and , determine and ;
the inverse geodesic problem or second geodesic problem, given and , determine , , and .
As can be seen from Fig. 1, these problems involve solving the triangle given one angle, for the direct problem and for the inverse problem, and its two adjacent sides.
For a sphere the solutions to these problems are simple exercises in spherical trigonometry, whose solution is given by formulas for solving a spherical triangle. (See the article on great-circle navigation.)
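For the spherical case, the direct problem can be written out in a few lines. The following is a minimal sketch (not the ellipsoidal algorithm developed below), assuming a sphere of radius R and the standard spherical-trigonometry formulas; the coordinates, azimuth and distance in the example are arbitrary illustrative values.

```python
import math

def direct_sphere(lat1_deg, lon1_deg, azi1_deg, s, R=6371e3):
    """Direct geodesic problem on a sphere (great-circle case):
    given a start point, an initial azimuth and a distance, find the end point."""
    lat1, azi1 = math.radians(lat1_deg), math.radians(azi1_deg)
    sigma = s / R                      # arc length expressed as an angle
    sin_lat2 = (math.sin(lat1) * math.cos(sigma) +
                math.cos(lat1) * math.sin(sigma) * math.cos(azi1))
    lat2 = math.asin(sin_lat2)
    dlon = math.atan2(math.sin(azi1) * math.sin(sigma) * math.cos(lat1),
                      math.cos(sigma) - math.sin(lat1) * sin_lat2)
    lon2 = math.radians(lon1_deg) + dlon
    return math.degrees(lat2), math.degrees(lon2)

# Example: start on the equator, heading north-east, travel 1000 km.
print(direct_sphere(0.0, 0.0, 45.0, 1_000_000))
```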
For an ellipsoid of revolution, the characteristic constant defining the geodesic was found by . A systematic solution for the paths of geodesics was given by and (and subsequent papers in 1808 and 1810).
The full solution for the direct problem (complete with computational tables and a worked out example) is given by .
During the 18th century geodesics were typically referred to as "shortest lines".
The term "geodesic line" (actually, a curve) was coined by :
Nous désignerons cette ligne sous le nom de ligne géodésique [We will call this line the geodesic line].
This terminology was introduced into English either as "geodesic line" or as "geodetic line", for example ,
A line traced in the manner we have now been describing, or deduced from trigonometrical measures, by the means we have indicated, is called a geodetic or geodesic line: it has the property of being the shortest which can be drawn between its two extremities on the surface of the Earth; and it is therefore the proper itinerary measure of the distance between those two points.
In its adoption by other fields geodesic line, frequently shortened to geodesic, was preferred.
This section treats the problem on an ellipsoid of revolution (both oblate and prolate). The problem on a triaxial ellipsoid is covered in the next section.
Equations for a geodesic
Here the equations for a geodesic are developed; the derivation closely follows that of . , , , , , , and also provide derivations of these equations.
Consider an ellipsoid of revolution with equatorial radius and polar semi-axis . Define the flattening , the eccentricity , and the second eccentricity :
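In the conventional notation (an assumption adopted here), with equatorial radius and polar semi-axis written as a and b, these three quantities are

```latex
f = \frac{a - b}{a}, \qquad
e^2 = \frac{a^2 - b^2}{a^2} = f(2 - f), \qquad
e'^2 = \frac{a^2 - b^2}{b^2} = \frac{e^2}{1 - e^2}.
```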
(In most applications in geodesy, the ellipsoid is taken to be oblate, ; however, the theory applies without change to prolate ellipsoids, , in which case , , and are negative.)
Let an elementary segment of a path on the ellipsoid have length . From Figs. 2 and 3, we see that if its azimuth is , then is related to and by
where is the meridional radius of curvature, is the radius of the circle of latitude , and is the normal radius of curvature.
The elementary segment is therefore given by
or
where and the Lagrangian function depends on through and . The length of an arbitrary path between and is given by
where is a function of satisfying and . The shortest path or geodesic entails finding that function which minimizes . This is an exercise in the calculus of variations and the minimizing condition is given by the Beltrami identity,
Substituting for and using Eqs. gives
found this relation, using a geometrical construction; a similar derivation is presented by . Differentiating this relation gives
This, together with Eqs. , leads to a system of ordinary differential equations for a geodesic
We can express in terms of the parametric latitude, , using
and Clairaut's relation then becomes
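Writing the azimuth of the geodesic at its equator crossing as alpha_0 (an assumption of notation, consistent with Fig. 5 below), the standard form of the relation is

```latex
\sin\alpha_0 = \sin\alpha \,\cos\beta .
```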
This is the sine rule of spherical trigonometry relating two sides of the triangle (see Fig. 4), , and and their opposite angles and .
In order to find the relation for the third side , the spherical arc length, and included angle , the spherical longitude, it is useful to consider the triangle representing a geodesic starting at the equator; see Fig. 5. In this figure, the variables referred to the auxiliary sphere are shown with the corresponding quantities for the ellipsoid shown in parentheses.
Quantities without subscripts refer to the arbitrary point ; , the point at which the geodesic crosses the equator in the northward direction, is used as the origin for , and .
If the side is extended by moving infinitesimally (see Fig. 6), we obtain
Combining Eqs. and gives differential equations for and
The relation between and is
which gives
so that the differential equations for the geodesic become
The last step is to use as the independent parameter in both of these differential equations and thereby to express and as integrals. Applying the sine rule to the vertices and in the spherical triangle in Fig. 5 gives
where is the azimuth at .
Substituting this into the equation for and integrating the result gives
where
and the limits on the integral are chosen so that . pointed out that the equation for is the same as the equation for the arc on an ellipse with semi-axes and . In order to express the equation for in terms of , we write
which follows from and Clairaut's relation.
This yields
and the limits on the integrals are chosen so that at the equator crossing, .
This completes the solution of the path of a geodesic using the auxiliary sphere. By this device a great circle can be mapped exactly to a geodesic on an ellipsoid of revolution.
There are also several ways of approximating geodesics on a terrestrial ellipsoid (with small flattening) ; some of these are described in the article on geographical distance. However, these are typically comparable in complexity to the method for the exact solution .
Behavior of geodesics
Fig. 7 shows the simple closed geodesics which consist of the meridians (green) and the equator (red). (Here the qualification "simple" means that the geodesic closes on itself without an intervening self-intersection.) This follows from the equations for the geodesics given in the previous section.
All other geodesics are typified by Figs. 8 and 9 which show a geodesic starting on the equator with . The geodesic oscillates about the equator. The equatorial crossings are called nodes and the points of maximum or minimum latitude are called vertices; the parametric latitudes of the vertices are given by . The geodesic completes one full oscillation in latitude before the longitude has increased by . Thus, on each successive northward crossing of the equator (see Fig. 8), falls short of a full circuit of the equator by approximately (for a prolate ellipsoid, this quantity is negative and completes more than a full circuit; see Fig. 10). For nearly all values of , the geodesic will fill that portion of the ellipsoid between the two vertex latitudes (see Fig. 9).
If the ellipsoid is sufficiently oblate, i.e., , another class of simple closed geodesics is possible . Two such geodesics are illustrated in Figs. 11 and 12. Here and the equatorial azimuth, , for the green (resp. blue) geodesic is chosen to be (resp. ), so that the geodesic completes 2 (resp. 3) complete oscillations about the equator on one circuit of the ellipsoid.
Fig. 13 shows geodesics (in blue) emanating with a multiple of up to the point at which they cease to be shortest paths. (The flattening has been increased to in order to accentuate the ellipsoidal effects.) Also shown (in green) are curves of constant , which are the geodesic circles centered . showed that, on any surface, geodesics and geodesic circles intersect at right angles.
The red line is the cut locus, the locus of points which have multiple (two in this case) shortest geodesics from . On a sphere, the cut locus is a point. On an oblate ellipsoid (shown here), it is a segment of the circle of latitude centered on the point antipodal to , . The longitudinal extent of the cut locus is approximately . If lies on the equator, , this relation is exact and as a consequence the equator is only a shortest geodesic if . For a prolate ellipsoid, the cut locus is a segment of the anti-meridian centered on the point antipodal to , , and this means that meridional geodesics stop being shortest paths before the antipodal point is reached.
Differential properties of geodesics
Various problems involving geodesics require knowing their behavior when they are perturbed. This is useful in trigonometric adjustments , determining the physical properties of signals which follow geodesics, etc. Consider a reference geodesic, parameterized by , and a second geodesic a small distance away from it. showed that obeys the Gauss-Jacobi equation
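In the standard formulation (the notation here is assumed), if t(s) denotes the separation between the two geodesics as a function of arc length s along the reference geodesic, the equation reads

```latex
\frac{d^{2}t(s)}{ds^{2}} + K(s)\, t(s) = 0,
```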
where is the Gaussian curvature at . As a second order, linear, homogeneous differential equation, its solution may be expressed as the sum of two independent solutions
where
The quantity is the so-called reduced length, and is the geodesic scale.
Their basic definitions are illustrated in Fig. 14.
The Gaussian curvature for an ellipsoid of revolution is
solved the Gauss-Jacobi equation for this case enabling and to be expressed as integrals.
As we see from Fig. 14 (top sub-figure), the separation of two geodesics starting at the same point with azimuths differing by is . On a closed surface such as an ellipsoid, oscillates about zero. The point at which becomes zero is the point conjugate to the starting point. In order for a geodesic between and , of length , to be a shortest path it must satisfy the Jacobi condition , that there is no point conjugate to between and . If this condition is not satisfied, then there is a nearby path (not necessarily a geodesic) which is shorter. Thus, the Jacobi condition is a local property of the geodesic and is only a necessary condition for the geodesic being a global shortest path. Necessary and sufficient conditions for a geodesic being the shortest path are:
for an oblate ellipsoid, ;
for a prolate ellipsoid, , if ; if , the supplemental condition is required if .
Envelope of geodesics
The geodesics from a particular point if continued past the cut locus form an envelope illustrated in Fig. 15. Here the geodesics for which is a multiple of are shown in light blue. (The geodesics are only shown for their first passage close to the antipodal point, not for subsequent ones.) Some geodesic circles are shown in green; these form cusps on the envelope. The cut locus is shown in red. The envelope is the locus of points which are conjugate to ; points on the envelope may be computed by finding the point at which on a geodesic. calls this star-like figure produced by the envelope an astroid.
Outside the astroid two geodesics intersect at each point; thus there are two geodesics (with a length approximately half the circumference of the ellipsoid) between and these points. This corresponds to the situation on the sphere where there are "short" and "long" routes on a great circle between two points. Inside the astroid four geodesics intersect at each point. Four such geodesics are shown in Fig. 16 where the geodesics are numbered in order of increasing length. (This figure uses the same position for as Fig. 13 and is drawn in the same projection.) The two shorter geodesics are stable, i.e., , so that there is no nearby path connecting the two points which is shorter; the other two are unstable. Only the shortest line (the first one) has . All the geodesics are tangent to the envelope which is shown in green in the figure.
The astroid is the (exterior) evolute of the geodesic circles centered at . Likewise, the geodesic circles are involutes of the astroid.
Area of a geodesic polygon
A geodesic polygon is a polygon whose sides are geodesics. It is analogous to a spherical polygon, whose sides are great circles. The area of such a polygon may be found by first computing the area between a geodesic segment and the equator, i.e., the area of the quadrilateral in Fig. 1 . Once this area is known, the area of a polygon may be computed by summing the contributions from all the edges of the polygon.
Here an expression for the area of is developed following . The area of any closed region of the ellipsoid is
where is an element of surface area and is the Gaussian curvature. Now the Gauss–Bonnet theorem applied to a geodesic polygon states
where
is the geodesic excess and is the exterior angle at vertex . Multiplying the equation for by , where is the authalic radius, and subtracting this from the equation for gives
where the value of for an ellipsoid has been substituted.
Applying this formula to the quadrilateral , noting that , and performing the integral over gives
where the integral is over the geodesic line (so that is implicitly a function of ). The integral can be expressed as a series valid for small .
The area of a geodesic polygon is given by summing over its edges. This result holds provided that the polygon does not include a pole; if it does, must be added to the sum. If the edges are specified by their vertices, then a convenient expression for the geodesic excess is
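As a practical illustration of this summation over edges, the sketch below uses GeographicLib's polygon-area facility (listed under external links); the geographiclib Python package and the vertex coordinates are assumptions for the example.

```python
from geographiclib.geodesic import Geodesic

# Build a geodesic quadrilateral from its vertices (latitude, longitude in degrees).
poly = Geodesic.WGS84.Polygon()
for lat, lon in [(0, 0), (0, 10), (10, 10), (10, 0)]:
    poly.AddPoint(lat, lon)

# Compute() returns the number of vertices, the perimeter in metres and the
# (signed) area in square metres.
num, perimeter, area = poly.Compute()
print(num, perimeter, area)
```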
Solution of the direct and inverse problems
Solving the geodesic problems entails mapping the geodesic onto the auxiliary sphere and solving the corresponding problem in great-circle navigation.
When solving the "elementary" spherical triangle for in Fig. 5, Napier's rules for quadrantal triangles can be employed,
The mapping of the geodesic involves evaluating the integrals for the distance, , and the longitude, , Eqs. and and these depend on the parameter .
Handling the direct problem is straightforward, because can be determined directly from the given quantities and ; for a sample calculation, see .
In the case of the inverse problem, is given; this cannot be easily related to the equivalent spherical angle because is unknown. Thus, the solution of the problem requires that be found iteratively (root finding); see for details.
In geodetic applications, where is small, the integrals are typically evaluated as a series . For arbitrary , the integrals (3) and (4) can be found by numerical quadrature or by expressing them in terms of elliptic integrals .
provides solutions for the direct and inverse problems; these are based on a series expansion carried out to third order in the flattening and provide an accuracy of about for the WGS84 ellipsoid; however the inverse method fails to converge for nearly antipodal points.
continues the expansions to sixth order which suffices to provide full double precision accuracy for and improves the solution of the inverse problem so that it converges in all cases. extends the method to use elliptic integrals which can be applied to ellipsoids with arbitrary flattening.
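One widely used open-source implementation of these series solutions is GeographicLib (see external links). A minimal sketch with its Python bindings follows; the geographiclib package and the sample coordinates are assumptions about the user's environment.

```python
from geographiclib.geodesic import Geodesic

# Inverse problem: distance and azimuths between two points (approximate
# coordinates of Paris and New York) on the WGS84 ellipsoid.
inv = Geodesic.WGS84.Inverse(48.8566, 2.3522, 40.7128, -74.0060)
print(inv['s12'], inv['azi1'], inv['azi2'])   # metres, degrees, degrees

# Direct problem: from the first point, follow the computed initial azimuth
# for 1,000 km and read off the end point.
out = Geodesic.WGS84.Direct(48.8566, 2.3522, inv['azi1'], 1_000_000)
print(out['lat2'], out['lon2'])
```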
Geodesics on a triaxial ellipsoid
Solving the geodesic problem for an ellipsoid of revolution is mathematically straightforward: because of symmetry, geodesics have a constant of motion, given by Clairaut's relation allowing the problem to be reduced to quadrature. By the early 19th century (with the work of Legendre, Oriani, Bessel, et al.), there was a complete understanding of the properties of geodesics on an ellipsoid of revolution.
On the other hand, geodesics on a triaxial ellipsoid (with three unequal axes) have no obvious constant of the motion and thus represented a challenging unsolved problem in the first half of the 19th century. In a remarkable paper, discovered a constant of the motion allowing this problem to be reduced to quadrature also .
Triaxial ellipsoid coordinate system
Consider the ellipsoid defined by
where are Cartesian coordinates centered on the ellipsoid and, without loss of generality, .
employed the (triaxial) ellipsoidal coordinates (with triaxial ellipsoidal latitude and triaxial ellipsoidal longitude, ) defined by
In the limit , becomes the parametric latitude for an oblate ellipsoid, so the use of the symbol is consistent with the previous sections. However, is different from the spherical longitude defined above.
Grid lines of constant (in blue) and (in green) are given in Fig. 17. These constitute an orthogonal coordinate system: the grid lines intersect at right angles. The principal sections of the ellipsoid, defined by and are shown in red. The third principal section, , is covered by the lines and or . These lines meet at four umbilical points (two of which are visible in this figure) where the principal radii of curvature are equal. Here and in the other figures in this section the parameters of the ellipsoid are , and it is viewed in an orthographic projection from a point above , .
The grid lines of the ellipsoidal coordinates may be interpreted in three different ways:
They are "lines of curvature" on the ellipsoid: they are parallel to the directions of principal curvature .
They are also intersections of the ellipsoid with confocal systems of hyperboloids of one and two sheets .
Finally they are geodesic ellipses and hyperbolas defined using two adjacent umbilical points . For example, the lines of constant in Fig. 17 can be generated with the familiar string construction for ellipses with the ends of the string pinned to the two umbilical points.
Jacobi's solution
Jacobi showed that the geodesic equations, expressed in ellipsoidal coordinates, are separable. Here is how he recounted his discovery to his friend and neighbor Bessel ,
The day before yesterday, I reduced to quadrature the problem of geodesic lines on an ellipsoid with three unequal axes. They are the simplest formulas in the world, Abelian integrals, which become the well known elliptic integrals if 2 axes are set equal.
Königsberg, 28th Dec. '38.
The solution given by Jacobi is
As Jacobi notes "a function of the angle equals a function of the angle . These two functions are just Abelian integrals..." Two constants and appear in the solution. Typically is zero if the lower limits of the integrals are taken to be the starting point of the geodesic and the direction of the geodesics is determined by . However, for geodesics that start at an umbilical point, we have and determines the direction at the umbilical point.
The constant may be expressed as
where is the angle the geodesic makes with lines of constant . In the limit , this reduces to , the familiar Clairaut relation. A derivation of Jacobi's result is given by ; he gives the solution found by for general quadratic surfaces.
Survey of triaxial geodesics
On a triaxial ellipsoid, there are only three simple closed geodesics, the three principal sections of the ellipsoid given by , , and .
To survey the other geodesics, it is convenient to consider geodesics that intersect the middle principal section, , at right angles. Such geodesics are shown in Figs. 18–22, which use the same ellipsoid parameters and the same viewing direction as Fig. 17. In addition, the three principal ellipses are shown in red in each of these figures.
If the starting point is , , and , then and the geodesic encircles the ellipsoid in a "circumpolar" sense. The geodesic oscillates north and south of the equator; on each oscillation it completes slightly less than a full circuit around the ellipsoid resulting, in the typical case, in the geodesic filling the area bounded by the two latitude lines . Two examples are given in Figs. 18 and 19. Figure 18 shows practically the same behavior as for an oblate ellipsoid of revolution (because ); compare to Fig. 9.
However, if the starting point is at a higher latitude (Fig. 19), the distortions resulting from are evident. All tangents to a circumpolar geodesic touch the confocal single-sheeted hyperboloid which intersects the ellipsoid at .
If the starting point is , , and , then and the geodesic encircles the ellipsoid in a "transpolar" sense. The geodesic oscillates east and west of the ellipse ; on each oscillation it completes slightly more than a full circuit around the ellipsoid. In the typical case, this results in the geodesic filling the area bounded by the two longitude lines and .
If , all meridians are geodesics; the effect of causes such geodesics to oscillate east and west.
Two examples are given in Figs. 20 and 21. The constriction of the geodesic near the pole disappears in the limit ; in this case, the ellipsoid becomes a prolate ellipsoid and Fig. 20 would resemble Fig. 10 (rotated on its side). All tangents to a transpolar geodesic touch the confocal double-sheeted hyperboloid which intersects the ellipsoid at .
In Figs. 18–21, the geodesics are (very nearly) closed. As noted above, in the typical case, the geodesics are not closed, but fill the area bounded by the limiting lines of latitude (in the case of Figs. 18–19) or longitude (in the case of Figs. 20–21).
If the starting point is , (an umbilical point), and (the geodesic leaves the ellipse at right angles), then and the geodesic repeatedly intersects the opposite umbilical point and returns to its starting point. However, on each circuit the angle at which it intersects becomes closer to or so that asymptotically the geodesic lies on the ellipse , as shown in Fig. 22. A single geodesic does not fill an area on the ellipsoid. All tangents to umbilical geodesics touch the confocal hyperbola that intersects the ellipsoid at the umbilic points.
Umbilical geodesics enjoy several interesting properties.
Through any point on the ellipsoid, there are two umbilical geodesics.
The geodesic distance between opposite umbilical points is the same regardless of the initial direction of the geodesic.
Whereas the closed geodesics on the ellipses and are stable (a geodesic initially close to and nearly parallel to the ellipse remains close to the ellipse), the closed geodesic on the ellipse , which goes through all 4 umbilical points, is exponentially unstable. If it is perturbed, it will swing out of the plane and flip around before returning to close to the plane. (This behavior may repeat depending on the nature of the initial perturbation.)
If the starting point of a geodesic is not an umbilical point, its envelope is an astroid with two cusps lying on and the other two on . The cut locus for is the portion of the line between the cusps.
Applications
The direct and inverse geodesic problems no longer play the central role in geodesy that they once did. Instead of solving adjustment of geodetic networks as a two-dimensional problem in spheroidal trigonometry, these problems are now solved by three-dimensional methods .
Nevertheless, terrestrial geodesics still play an important role in several areas:
for measuring distances and areas in geographic information systems;
the definition of maritime boundaries ;
in the rules of the Federal Aviation Administration for area navigation ;
the method of measuring distances in the FAI Sporting Code .
helping Muslims find the direction toward Mecca.
By the principle of least action, many problems in physics can be formulated as a variational problem similar to that for geodesics. Indeed, the geodesic problem is equivalent to the motion of a particle constrained to move on the surface, but otherwise subject to no forces .
For this reason, geodesics on simple surfaces such as ellipsoids of revolution or triaxial ellipsoids are frequently used as "test cases" for exploring new methods. Examples include:
the development of elliptic integrals and elliptic functions ;
the development of differential geometry ;
methods for solving systems of differential equations by a change of independent variables ;
the study of caustics ;
investigations into the number and stability of periodic orbits ;
in the limit , geodesics on a triaxial ellipsoid reduce to a case of dynamical billiards;
extensions to an arbitrary number of dimensions ;
geodesic flow on a surface .
See also
Earth section paths
Figure of the Earth
Geographical distance
Great-circle navigation
Great ellipse
Geodesic
Geodesy
Map projection
Map projection of the triaxial ellipsoid
Meridian arc
Rhumb line
Vincenty's formulae
Notes
References
External links
Online geodesic bibliography of books and articles on geodesics on ellipsoids.
Test set for geodesics, a set of 500000 geodesics for the WGS84 ellipsoid, computed using high-precision arithmetic.
NGS tool implementing .
geod(1), man page for the PROJ utility for geodesic calculations.
GeographicLib implementation of .
Drawing geodesics on Google Maps.
Geodesy
Geodesic (mathematics)
Differential geometry
Calculus of variations
Curves | Geodesics on an ellipsoid | Mathematics | 6,068 |
1,238,550 | https://en.wikipedia.org/wiki/Index%20of%20dissimilarity | The index of dissimilarity is a demographic measure of the evenness with which two groups are distributed across component geographic areas that make up a larger area. A group is evenly distributed when each geographic unit has the same percentage of group members as the total population. The index score can also be interpreted as the percentage of one of the two groups included in the calculation that would have to move to different geographic areas in order to produce a distribution that matches that of the larger area. The index of dissimilarity can be used as a measure of segregation. A score of zero (0%) reflects a fully integrated environment; a score of 1 (100%) reflects full segregation. In terms of black–white segregation, a score of .60 means that 60 percent of blacks would have to exchange places with whites in other units to achieve an even geographic distribution. The index of dissimilarity is invariant to the relative sizes of the two groups.
Basic formula
The basic formula for the index of dissimilarity is:
where (comparing a black and white population, for example):
ai = the population of group A in the ith area, e.g. census tract
A = the total population in group A in the large geographic entity for which the index of dissimilarity is being calculated.
bi = the population of group B in the ith area
B = the total population in group B in the large geographic entity for which the index of dissimilarity is being calculated.
The index of dissimilarity is applicable to any categorical variable (whether demographic or not) and because of its simple properties is useful for input into multidimensional scaling and clustering programs. It has been used extensively in the study of social mobility to compare distributions of origin (or destination) occupational categories.
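As a sketch of the basic formula, the function below computes the index from per-area counts of the two groups; the five tract counts in the example are made-up illustrative numbers.

```python
def dissimilarity(group_a, group_b):
    """Index of dissimilarity: D = 0.5 * sum_i |a_i/A - b_i/B|."""
    A, B = sum(group_a), sum(group_b)
    return 0.5 * sum(abs(a / A - b / B) for a, b in zip(group_a, group_b))

# Hypothetical counts of group A and group B across five census tracts.
print(dissimilarity([100, 80, 60, 40, 20], [10, 20, 40, 80, 150]))  # about 0.57
```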
Numerical Example
Consider the following distribution of white and black population across neighborhoods.
Linear algebra perspective
The formula for the Index of Dissimilarity can be made much more compact and meaningful by considering it from the perspective of Linear algebra. Suppose we are studying the distribution of rich and poor people in a city (e.g. London). Suppose our city contains blocks:
Let's create a vector which shows the number of rich people in each block of our city:
Similarly, let's create a vector which shows the number of poor people in each block of our city:
Now, the -norm of a vector is simply the sum of (the magnitude of) each entry in that vector. That is, for a vector , we have the -norm:
If we denote as the total number of rich people in our city, then a compact way to calculate would be to use the -norm:
Similarly, if we denote as the total number of poor people in our city, then:
When we divide a vector by its norm, we get what is called the normalized vector or Unit vector :
Let us normalize the rich vector and the poor vector :
We finally return to the formula for the Index of Dissimilarity (); it is simply equal to one-half the -norm of the difference between the vectors and :
Numerical example
Consider a city consisting of four blocks of 2 people each. One block consists of 2 rich people. One block consists of 2 poor people. Two blocks consist of 1 rich and 1 poor person. What is the index of dissimilarity for this city?
Firstly, let's find the rich vector and poor vector :
Next, let's calculate the total number of rich people and poor people in our city:
Next, let's normalize the rich and poor vectors:
We can now calculate the difference :
Finally, let's find the index of dissimilarity ():
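The same answer can be checked numerically with the linear-algebra form of the formula; numpy is assumed to be available.

```python
import numpy as np

r = np.array([2, 0, 1, 1])   # rich people per block
p = np.array([0, 2, 1, 1])   # poor people per block

# D = (1/2) * || r/|r|_1 - p/|p|_1 ||_1
D = 0.5 * np.abs(r / r.sum() - p / p.sum()).sum()
print(D)   # 0.5
```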
Equivalence between formulae
We can prove that the Linear Algebraic formula for is identical to the basic formula for . Let's start with the Linear Algebraic formula:
Let's replace the normalized vectors and with:
Finally, from the definition of the -norm, we know that we can replace it with the summation:
Thus we prove that the linear algebra formula for the index of dissimilarity is equivalent to the basic formula for it:
Zero segregation
When the Index of Dissimilarity is zero, this means that the community we are studying has zero segregation. For example, if we are studying the segregation of rich and poor people in a city, then if , it means that:
There are no blocks in the city which are "rich blocks", and there are no blocks in the city which are "poor blocks"
There is a homogeneous distribution of rich and poor people throughout the city
If we set in the linear algebraic formula, we get the necessary condition for having zero segregation:
For example, suppose you have a city with 2 blocks. Each block has 4 rich people and 100 poor people:
Then, the total number of rich people is , and the total number of poor people is . Thus:
Because , thus this city has zero segregation.
As another example, suppose you have a city with 3 blocks:
Then, we have rich people in our city, and poor people. Thus:
Again, because , thus this city also has zero segregation.
See also
Kullback–Leibler divergence
Isolation index
self-dissimilarity
Hoover index
References
External links
http://enceladus.isr.umich.edu/race/calculate.html
Index numbers | Index of dissimilarity | Mathematics | 1,110 |
40,961,197 | https://en.wikipedia.org/wiki/Phaeoramularia%20indica | Phaeoramularia indica is a species of sac fungus. The fungus was found to cause leaf spots in north-eastern Uttar Pradesh.
References
indica
Fungal plant pathogens and diseases
Leaf diseases
Fungi of India
Fungus species | Phaeoramularia indica | Biology | 48 |
41,455 | https://en.wikipedia.org/wiki/Optical%20attenuator | An optical attenuator, or fiber optic attenuator, is a device used to reduce the power level of an optical signal, either in free space or in an optical fiber. The basic types of optical attenuators are fixed, step-wise variable, and continuously variable.
Applications
Optical attenuators are commonly used in fiber-optic communications, either to test power level margins by temporarily adding a calibrated amount of signal loss, or installed permanently to properly match transmitter and receiver levels. Sharp bends stress optic fibers and can cause losses. If a received signal is too strong, a temporary fix is to wrap the cable around a pencil until the desired level of attenuation is achieved. However, such arrangements are unreliable, since the stressed fiber tends to break over time.
Generally, multimode systems do not need attenuators, as multimode sources rarely have enough power output to saturate receivers. In contrast, single-mode systems, especially long-haul DWDM network links, often need fiber optic attenuators to adjust the optical power during transmission.
Principles of operation
The power reduction is done by such means as absorption, reflection, diffusion, scattering, deflection, diffraction, and dispersion, etc. Optical attenuators usually work by absorbing the light, like sunglasses absorb extra light energy. They typically have a working wavelength range in which they absorb all light energy equally. They should not reflect the light or scatter the light in an air gap, since that could cause unwanted back reflection in the fiber system. Another type of attenuator utilizes a length of high-loss optical fiber, that operates upon its input optical signal power level in such a way that its output signal power level is less than the input level.
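Quantitatively, a fixed attenuation is usually specified in decibels; the relation between input and output power is sketched below (the figures are illustrative, not taken from a particular device).

```python
def attenuated_power(p_in_mw, attenuation_db):
    """Output power after passing through an attenuator of the given dB value."""
    return p_in_mw * 10 ** (-attenuation_db / 10)

print(attenuated_power(1.0, 10))   # 1 mW through a 10 dB attenuator -> 0.1 mW
print(attenuated_power(1.0, 3))    # 3 dB cuts the power roughly in half (~0.5 mW)
```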
Types
Optical attenuators can take a number of different forms and are typically classified as fixed or variable attenuators. What's more, they can be classified as LC, SC, ST, FC, MU, E2000 etc. according to the different types of connectors.
Fixed Attenuators
Fixed optical attenuators used in fiber optic systems may use a variety of principles for their functioning. Preferred attenuators use either doped fibers, mis-aligned splices, or total power, since these are reliable and inexpensive.
Inline style attenuators are incorporated into patch cables. The alternative build out style attenuator is a small male-female adapter that can be added onto other cables.
Non-preferred attenuators often use gap loss or reflective principles. Such devices can be sensitive to: modal distribution, wavelength, contamination, vibration, temperature, damage due to power bursts, may cause back reflections, may cause signal dispersion etc.
Loopback attenuators
A loopback fiber optic attenuator is designed for testing, engineering and the burn-in stage of boards or other equipment. It is available in SC/UPC, SC/APC, LC/UPC, LC/APC, MTRJ and MPO styles for single-mode applications. The LC and SC types have 900 µm fiber cable inside a black shell; the MTRJ and MPO types have no black shell.
Built-in variable attenuators
Built-in variable optical attenuators may be either manually or electrically controlled. A manual device is useful for one-time set up of a system, and is a near-equivalent to a fixed attenuator, and may be referred to as an "adjustable attenuator". In contrast, an electrically controlled attenuator can provide adaptive power optimization.
Attributes of merit for electrically controlled devices, include speed of response and avoiding degradation of the transmitted signal. Dynamic range is usually quite restricted, and power feedback may mean that long term stability is a relatively minor issue. Speed of response is a particularly major issue in dynamically reconfigurable systems, where a delay of one millionth of a second can result in the loss of large amounts of transmitted data. Typical technologies employed for high speed response include liquid crystal variable attenuator (LCVA), or lithium niobate devices. There is a class of built-in attenuators that is technically indistinguishable from test attenuators, except they are packaged for rack mounting, and have no test display.
Variable optical test attenuators
Variable optical test attenuators generally use a variable neutral density filter. Despite relatively high cost, this arrangement has the advantages of being stable, wavelength insensitive, mode insensitive, and offering a large dynamic range. Other schemes such as LCD, variable air gap etc. have been tried over the years, but with limited success.
They may be either manually or motor controlled. Motor control give regular users a distinct productivity advantage, since commonly used test sequences can be run automatically.
Attenuator instrument calibration is a major issue. The user typically would like an absolute port to port calibration. Also, calibration should usually be at a number of wavelengths and power levels, since the device is not always linear. However a number of instruments do not in fact offer these basic features, presumably in an attempt to reduce cost. The most accurate variable attenuator instruments have thousands of calibration points, resulting in excellent overall accuracy in use.
Test automation
Test sequences that use variable attenuators can be very time-consuming. Therefore, automation is likely to achieve useful benefits. Both bench and handheld-style devices are available that offer such features.
See also
Gap loss - sources and causes of unintended attenuation
Optical fiber cable
Optical fiber connector
Optical power meter
References
Fiber optics
Optical components
Telecommunications equipment
Measuring instruments | Optical attenuator | Materials_science,Technology,Engineering | 1,185 |
19,285,203 | https://en.wikipedia.org/wiki/Hepatitis%20B%20virus%20PRE%20alpha | The Hepatitis B virus PRE stem-loop alpha (HBV PRE SL alpha) is an RNA structure that is shown to play a role in nuclear export of HBV mRNAs.
HBV PREalpha consists of a 30 nt stem-loop, with a 5 nt apical loop. The conserved stem-loop was predicted within the HBV PRE sequence and confirmed by mutagenesis.
The exact role of this structure in nuclear export has not yet been determined.
See also
Hepatitis B virus PRE beta
HBV RNA encapsidation signal epsilon
Hepatitis B virus PRE 1151–1410
References
Cis-regulatory RNA elements
Hepatitis B virus | Hepatitis B virus PRE alpha | Chemistry | 135 |
52,058,358 | https://en.wikipedia.org/wiki/Alistair%20Lawrence | Alistair B. Lawrence (born 1954) is an ethologist. He currently holds a joint chair in animal behaviour and welfare at Scotland's Rural College and the University of Edinburgh.
Education
Lawrence graduated from the University of St Andrews with a degree in zoology. He then studied for his PhD at the University of Edinburgh under the direction of David Wood-Gush. His 1985 thesis is entitled "The social organization of Scottish blackface sheep".
Career
In 1995 he received the RSPCA/BSAS award for innovative developments in animal welfare for his 'outstanding contribution to animal welfare research'.
He has published extensively throughout his career.
Lawrence is a past secretary of the International Society for Applied Ethology and is a supporter of Compassion in World Farming. He has served on the UK Farm Animal Welfare Committee and has been appointed to the council of the Universities Federation for Animal Welfare.
With Aubrey Manning he oversees the David Wood-Gush Trust Fund that set up and supports the annual Wood-Gush lecture.
References
External links
Scotland's Rural College homepage
University of Edinburgh homepage
1954 births
20th-century Scottish scientists
21st-century scientists
Academics of the University of Edinburgh
Alumni of the University of St Andrews
Alumni of the University of Edinburgh
Ethologists
Living people
People educated at Strathallan School
People from Perthshire
Scottish animal welfare scholars
20th-century British zoologists | Alistair Lawrence | Biology | 277 |
11,619,563 | https://en.wikipedia.org/wiki/GITEX | GITEX GLOBAL or GITEX (abb. of Gulf Information Technology Exhibition, a.k.a. GITEX Technology Week) is a computer expo held annually in Dubai, United Arab Emirates at Dubai World Trade Center. The event underpins the rapid technology-driven transformations, investments and projects shaping the economies of the Middle East, Africa and Asia.
History
The show was launched in 1981 as GITE and occupied Hall One of the Dubai World Trade Centre. With the launch of MacWorld at the 1988 show, GITEX (with the addition of 'X') expanded to two halls of the exhibition centre. For some years it has filled the entire DWTC complex, which has since been extended with additional halls and now comprises 27 halls and two million square feet of exhibition space.
Running for over four decades, the GITEX brand name added the suffix ‘GLOBAL’ in 2022 to highlight that technology companies, startups, speakers and attendees from over 170 countries are represented.
GITEX features a large-scale government presence, with hundreds of government entities from across the region; ministers and public sector officials present the year's major government digital initiatives, innovations and projects, and announce public and private sector tech partnerships.
GITEX GLOBAL includes seven multi-tech sector events, Ai Everything, North Star Dubai (region's biggest startup showcase), Fintech Surge, Future Blockchain Summit, Marketing Mania, and two new events launching in 2022, Global DevSlam (congregating the coder-developer ecosystem) and X-VERSE (curated immersive Web 3.0 journey). GITEX GLOBAL saw participation by several thought leaders as speakers.
In 2022, GITEX GLOBAL, organized by Dubai World Trade Centre, took place from 10 – 14 October.
GITEX Africa
In October 2022, GITEX GLOBAL announced that it would launch a new annual event outside Asia, choosing Africa. GITEX Africa debuted in 2023 in Marrakech, Morocco, and the exhibition was held from May 31 to June 2. The expo was inaugurated by the Prime Minister of Morocco, Aziz Akhannouch.
References
Computer-related trade shows
Trade fairs in the United Arab Emirates
Events in Dubai
Technology events | GITEX | Technology | 458 |
27,890,209 | https://en.wikipedia.org/wiki/Tracking%20signal | In statistics and management science, a tracking signal monitors any forecasts that have been made in comparison with actuals, and warns when there are unexpected departures of the outcomes from the forecasts. Forecasts can relate to sales, inventory, or anything pertaining to an organization's future demand.
The tracking signal is a simple indicator that forecast bias is present in the forecast model. It is most often used when the validity of the forecasting model might be in doubt.
Definition
One form of tracking signal is the ratio of the cumulative sum of forecast errors (the deviations between the estimated forecasts and the actual values) to the mean absolute deviation. The formula for this tracking signal is:
where at is the actual value of the quantity being forecast, and ft is the forecast. MAD is the mean absolute deviation. The formula for the MAD is:
where n is the number of periods. Plugging this in, the entire formula for tracking signal is:
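A minimal sketch of this calculation in code; the demand and forecast figures are made-up illustrative values.

```python
def tracking_signal(actuals, forecasts):
    """Cumulative forecast error divided by the mean absolute deviation (MAD)."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad

# The forecast consistently undershoots, so the signal grows positive (here 4.0).
print(tracking_signal([102, 110, 108, 115], [100, 100, 100, 100]))
```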
Another proposed tracking signal was developed by Trigg (1964). In this model, et is the observed error in period t and |et| is the absolute value of the observed error. The smoothed values of the error and the absolute error are given by:
Then the tracking signal is the ratio:
If no significant bias is present in the forecast, then the smoothed error Et should be small compared to the smoothed absolute error Mt. Therefore, a large tracking signal value indicates a bias in the forecast. For example, with a β of 0.1, a value of Tt greater than .51 indicates nonrandom errors. The tracking signal also can be used directly as a variable smoothing constant.
There have also been proposed methods for adjusting the smoothing constants used in forecasting methods based on some measure of prior performance of the forecasting model. One such approach is suggested by Trigg and Leach (1967), which requires the calculation of the tracking signal. The tracking signal is then used as the value of the smoothing constant for the next forecast. The idea is that when the tracking signal is large, it suggests that the time series has undergone a shift; a larger value of the smoothing constant should be more responsive to a sudden shift in the underlying signal.
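The following sketch combines Trigg's smoothed tracking signal with the Trigg–Leach idea of reusing its magnitude as the smoothing constant for the next forecast; the starting values and the sample series are illustrative assumptions.

```python
def adaptive_smoothing(series, beta=0.1):
    """Exponential smoothing whose constant is reset each period to |T_t|."""
    forecast = series[0]          # crude initial forecast
    E, M = 0.0, 1e-9              # smoothed error and smoothed absolute error
    for x in series[1:]:
        e = x - forecast          # observed error in this period
        E = beta * e + (1 - beta) * E
        M = beta * abs(e) + (1 - beta) * M
        T = E / M                 # Trigg's tracking signal
        forecast += abs(T) * e    # Trigg-Leach: |T| is the next smoothing constant
    return forecast

print(adaptive_smoothing([100, 102, 104, 120, 122, 125]))
```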
See also
Calculating demand forecast accuracy
Demand forecasting
Notes
References
Alstrom, P., Madsen, P. (1996) "Tracking signals in inventory control systems: A simulation study", International Journal of Production Economics, 45 (1–3), 293–302,
Nahmias, Steven (2005) Production & Operations Analysis, Fifth Edition, McGraw-Hill.
Trigg, D.W. (1964) "Monitoring a forecasting system". Operational Research Quarterly, 15, 271–274.
Trigg, D.W. and Leach, A.G. (1967). "Exponential smoothing with an adaptive response rate". Operational Research Quarterly, 18 (1), 53–59
Mita Montero, J David (1973). "Análise de Sistemas de Previsão - Amortecimento Exponencial". Tese de Mestrado de Engenharia Industrial PUC-RJ, Brasil. Aplicação Industrial de Tracking Signal.
External links
Tracking signal in forecasting by Dr Muhammad Al-Salamah
Tracking Signal:A Measure of Forecast Accuracy by Tyler Hedin, Brigham Young University (Powerpoint)
Statistical deviation and dispersion
Time series
Management science
Statistical forecasting | Tracking signal | Biology | 696 |
7,820,792 | https://en.wikipedia.org/wiki/Freelance%20model | Freelance, in aerial, railway, naval, or bus model building, refers to companies that produce models that are not based on existing livery.
Such models are sometimes frowned upon in the model-building community because they do not represent existing items, but are original designs.
Since they require no licensing fees for trademark and design owners, and can thus be produced less expensively, freelance models are quite popular in the United States. They have not become popular in Europe, although a few European companies produce them.
Freelance companies
Railway
FTL - Ferrovie e Tranvie Locali - Local Railroads and Tramways
Beetrains
SAFF - Società Anonima Ferrovie Federate - Joint-stock company Federate Railroads
So.Ge.R.I.T.
SITAV Società Intermodale Trasporti Alta Valle - High Valley Intermodal Transport Company
FRA Ferrovie Regionali dell'Appennino - Regional Railways of the Apennines
Bus
SAFF - Società Anonima Ferrovie Federate - Joint-stock company Federate Railroads
Scale modeling | Freelance model | Physics | 223 |
37,572,635 | https://en.wikipedia.org/wiki/List%20of%20LIMS%20software%20packages | This is a list of proprietary laboratory information management systems (LIMS) from businesses and organizations which have articles about them in Wikipedia.
BaseSpace Clarity LIMS from Illumina
BIOVIA ONE Lab LIMS from Dassault Systèmes
CCLAS from ABB Group
ELab from LabLynx
Hach WIMS from Hach Company
LABbase from Analytik Jena
LabWare LIMS from LabWare, Inc.
Labvantage from LabVantage
Nautilus LIMS from Thermo Fisher Scientific
NuGenesis 8 from Waters Corporation
OmicsHub from Integromics
readyLIMS from Analytik Jena
SampleManager LIMS from Thermo Fisher Scientific
SampleTrack from Bruker
SIMATIC IT R&D Suite from Siemens
SLIMS from Agilent Technologies
STARLIMS from Starlims
TrakCare Lab Enterprise from InterSystems
Watson LIMS from Thermo Fisher Scientific
webLIMS from LabLynx
OpreX LIMS from Yokogawa Electric
See also
Magazines and journals covering LIMS
Scientific Computing & Instrumentation
LIMS Packages | List of LIMS software packages | Technology | 215 |
349,014 | https://en.wikipedia.org/wiki/Linearly%20ordered%20group | In mathematics, specifically abstract algebra, a linearly ordered or totally ordered group is a group G equipped with a total order "≤" that is translation-invariant. This may have different meanings. We say that (G, ≤) is a:
left-ordered group if ≤ is left-invariant, that is a ≤ b implies ca ≤ cb for all a, b, c in G,
right-ordered group if ≤ is right-invariant, that is a ≤ b implies ac ≤ bc for all a, b, c in G,
bi-ordered group if ≤ is bi-invariant, that is it is both left- and right-invariant.
A group G is said to be left-orderable (or right-orderable, or bi-orderable) if there exists a left- (or right-, or bi-) invariant order on G. A simple necessary condition for a group to be left-orderable is to have no elements of finite order; however this is not a sufficient condition. It is equivalent for a group to be left- or right-orderable; however there exist left-orderable groups which are not bi-orderable.
Further definitions
In this section is a left-invariant order on a group with identity element . All that is said applies to right-invariant orders with the obvious modifications. Note that being left-invariant is equivalent to the order defined by if and only if being right-invariant. In particular a group being left-orderable is the same as it being right-orderable.
In analogy with ordinary numbers we call an element of an ordered group positive if . The set of positive elements in an ordered group is called the positive cone, it is often denoted with ; the slightly different notation is used for the positive cone together with the identity element.
The positive cone characterises the order ; indeed, by left-invariance we see that if and only if . In fact a left-ordered group can be defined as a group together with a subset satisfying the two conditions that:
for we have also ;
let , then is the disjoint union of and .
The order associated with is defined by ; the first condition amounts to left-invariance and the second to the order being well-defined and total. The positive cone of is .
The left-invariant order is bi-invariant if and only if it is conjugacy invariant, that is if then for any we have as well. This is equivalent to the positive cone being stable under inner automorphisms.
If , then the absolute value of , denoted by , is defined to be:
If in addition the group is abelian, then for any a triangle inequality is satisfied: .
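In the usual formulation (the notation here is assumed), the absolute value and the abelian triangle inequality read

```latex
|a| = \begin{cases} a & \text{if } a \ge e,\\ a^{-1} & \text{otherwise,}\end{cases}
\qquad\qquad
|a + b| \le |a| + |b| \quad\text{(for abelian } G\text{, written additively)}.
```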
Examples
Any left- or right-orderable group is torsion-free, that is it contains no elements of finite order besides the identity. Conversely, F. W. Levi showed that a torsion-free abelian group is bi-orderable; this is still true for nilpotent groups but there exist torsion-free, finitely presented groups which are not left-orderable.
Archimedean ordered groups
Otto Hölder showed that every Archimedean group (a bi-ordered group satisfying an Archimedean property) is isomorphic to a subgroup of the additive group of real numbers, .
If we write the Archimedean l.o. group multiplicatively, this may be shown by considering the Dedekind completion of the closure of the l.o. group under th roots. We endow this space with the usual topology of a linear order, and then it can be shown that for each the exponential maps are well defined order preserving/reversing topological group isomorphisms. Completing a l.o. group can be difficult in the non-Archimedean case. In these cases, one may classify a group by its rank, which is related to the order type of the largest sequence of convex subgroups.
Other examples
Free groups are left-orderable. More generally this is also the case for right-angled Artin groups. Braid groups are also left-orderable.
The group given by the presentation is torsion-free but not left-orderable; note that it is a 3-dimensional crystallographic group (it can be realised as the group generated by two glided half-turns with orthogonal axes and the same translation length), and it is the same group that was proven to be a counterexample to the unit conjecture. More generally the topic of orderability of 3--manifold groups is interesting for its relation with various topological invariants. There exists a 3-manifold group which is left-orderable but not bi-orderable (in fact it does not satisfy the weaker property of being locally indicable).
Left-orderable groups have also attracted interest from the perspective of dynamical systems as it is known that a countable group is left-orderable if and only if it acts on the real line by homeomorphisms. Non-examples related to this paradigm are lattices in higher rank Lie groups; it is known that (for example) finite-index subgroups in are not left-orderable; a wide generalisation of this has been recently announced.
See also
Cyclically ordered group
Hahn embedding theorem
Partially ordered group
Notes
References
Ordered groups | Linearly ordered group | Mathematics | 1,099 |
44,989,961 | https://en.wikipedia.org/wiki/Chilean%20units%20of%20measurement | A number of different units of measurement were used in Chile to measure quantities like length, mass, area, capacity, etc. From 1848, the metric system has been compulsory in Chile.
Pre-metric units
Spanish customary units were used before 1848.
Length
To measure length several units were used. Legally, one vara is equal to 0.836 m. Some of the units and their legal values are as follows (a metric conversion example follows the list):
1 línea = vara
1 pulgada = vara
1 pie = vara
1 cuadra = 150 vara
1 legua = 5400 vara
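A quick conversion of the larger length units to metres, using the legal value of the vara given above:

```python
VARA_M = 0.836                  # legal value of one vara in metres

print(150 * VARA_M)             # 1 cuadra = 125.4 m
print(5400 * VARA_M)            # 1 legua  = 4514.4 m, about 4.5 km
```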
Mass
Several units were used to measure mass. One libra is equal to 0.460093 kg. Some other units are given below:
1 grano = libra
1 adarme = libra
1 = libra
1 onza = libra
1 arroba = 25 libra
1 quintal = 100 libra
Capacity
Mainly two systems, dry and liquid, were used to measure capacity in Chile.
Dry
One almud was equal to 8.083 L. 12 almud were equal to one fanega.
Liquid
One cuartillo was equal to 1.111 L. 32 cuartillo were equal to one arroba.
References
Culture of Chile
Chile | Chilean units of measurement | Mathematics | 259 |
2,695,537 | https://en.wikipedia.org/wiki/Nu%20Telescopii | Nu Telescopii, Latinized from ν Telescopii, is a slightly evolved star in the southern constellation Telescopium. It has an apparent visual magnitude of 5.33, allowing it to be faintly visible to the naked eye. The object is relatively close at a distance of 169 light years but is approaching the Solar System with a heliocentric radial velocity of about .
There has not been much agreement on Nu Telescopii's spectral classification. It was initially categorized as an Am star, with a classification of kA4mF3IV:. This indicates that the object has the calcium K-lines of an A4 star and the metallic lines of an F3 subgiant. However, Nu Telescopii was shown not to have a peculiar spectrum and was given a class of A9 Vn, indicating that it is an A-type main-sequence star displaying broad (nebulous) absorption lines due to rapid rotation. It has since been classified as an evolved A7 star with either a blended luminosity class of a giant star or subgiant (III/IV) or only subgiant (IV).
Nu Telescopii has a mass of and an age of 686 million years. It has 1.94 times the radius of the Sun and has an effective temperature of 8,199 K. These parameters yield a luminosity of from its photosphere and, when viewed, the star has a white hue. Nu Telescopii's metallicity – the abundance of elements heavier than helium – is around the solar level. Its motion in space matches that of the IC 2391 cluster, making it a probable member.
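The quoted luminosity follows from the radius and effective temperature through the Stefan–Boltzmann law; a rough consistency check, assuming a solar effective temperature of 5,772 K:

```python
R_ratio = 1.94                  # stellar radius in solar radii (from above)
T_star, T_sun = 8199.0, 5772.0  # effective temperatures in kelvin

# L / L_sun = (R / R_sun)**2 * (T / T_sun)**4
L_ratio = R_ratio**2 * (T_star / T_sun)**4
print(L_ratio)                  # roughly 15 times the Sun's luminosity
```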
There is a faint magnitude 9.3 companion star at an angular separation of 102 arc seconds along a position angle of 333°, as of 2010.
References
A-type giants
Telescopii, Nu
Telescopium
Durchmusterung objects
186543
097421
7510 | Nu Telescopii | Astronomy | 413 |
8,202,405 | https://en.wikipedia.org/wiki/Floating%20hinge | A floating hinge is a hinge that, while able to behave as a normal hinge, enables one of the objects to move away from the other - hence "float". In effect, the hinge allows for two parallel axes of rotation – one for each object joined by the hinge – and each axis can be moved relative to the position of the other.
Uses
Floating hinges are used in flatbed scanners designed to scan thick objects such as books. If a regular sheet of paper is placed on the glass and the cover is lowered over it, the glass, the paper, and the cover are very close together. If a thicker object is placed on the glass, an ordinary hinge would leave the cover at an angle to the glass; a floating hinge raises the hinged edge of the cover to the level of the book so that the cover remains parallel to the glass, but raised above it.
Floating hinges are also used in two-plate electric cooking grills, as they allow for even heating of both sides of a thick piece of food without crushing it.
See also
References
External links
Hinges
Hardware (mechanical) | Floating hinge | Physics,Technology,Engineering | 230 |
62,510,434 | https://en.wikipedia.org/wiki/GUIDE-Seq | GUIDE-Seq (Genome-wide, Unbiased Identification of DSBs Enabled by Sequencing) is a molecular biology technique that allows for the unbiased in vitro detection of off-target genome editing events in DNA caused by CRISPR/Cas9 as well as other RNA-guided nucleases in living cells. Similar to LAM-PCR, it employs multiple PCRs to amplify regions of interest that contain a specific insert that preferentially integrates into double-stranded breaks. As gene therapy is an emerging field, GUIDE-Seq has gained traction as a cheap method to detect the off-target effects of potential therapeutics without needing whole genome sequencing.
Principles
Conceived to work in concert with next-gen sequencing platforms such as Illumina dye sequencing, GUIDE-Seq relies on the integration of a blunt, double-stranded oligodeoxynucleotide (dsODN) that has been phosphorothioated on two of the phosphate linkages on the 5' end of both strands. The dsODN cassette integrates into any site in the genome that contains a double-stranded break (DSB). This means that along with the target and off-target sites that may exist as a result of the activity of a nuclease, the dsODN cassette will also integrate into any spurious sites in the genome that have a DSB. This makes it critical to have a dsODN-only condition that controls for errant and naturally occurring DSBs, and is required to use the GUIDE-seq bioinformatic pipeline.
After integration of the dsODN cassette, genomic DNA (gDNA) is extracted from the cell culture and sheared to 500bp fragments via sonication. The resulting sheared gDNA undergoes end-repair and adapter ligation. From here, DNA specifically containing the dsODN insert is amplified via two rounds of polymerase chain reaction (PCR) that proceeds in a unidirectional manner starting from the primers that are complementary to the dsODN. This process allows for the reading of the adjacent sequences, both the sense and anti-sense strands, flanking the insert. The final product is a panoply of amplicons, describing the DSB distribution, containing indices for sample differentiation, p5 and p7 Illumina flow-cell adapters, and the sequences flanking the dsODN cassette.
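As a rough illustration of the downstream analysis, the sketch below groups the genomic positions of dsODN-anchored reads into candidate DSB windows. It is a simplified assumption of how site calling could work, not the published GUIDE-Seq pipeline; the function name, window size, and read-count threshold are arbitrary choices for illustration.

```python
from collections import defaultdict

def call_candidate_sites(read_positions, window=25, min_reads=5):
    """Group dsODN-anchored read positions into candidate DSB sites.

    read_positions: iterable of (chromosome, position) tuples taken from
    reads that contain the dsODN insert. Positions within `window` bp of
    each other on the same chromosome are merged into one candidate site.
    Only clusters supported by at least `min_reads` reads are reported.
    """
    by_chrom = defaultdict(list)
    for chrom, pos in read_positions:
        by_chrom[chrom].append(pos)

    sites = []
    for chrom, positions in by_chrom.items():
        positions.sort()
        cluster = [positions[0]]
        for pos in positions[1:]:
            if pos - cluster[-1] <= window:
                cluster.append(pos)
            else:
                if len(cluster) >= min_reads:
                    sites.append((chrom, min(cluster), max(cluster), len(cluster)))
                cluster = [pos]
        if len(cluster) >= min_reads:
            sites.append((chrom, min(cluster), max(cluster), len(cluster)))
    return sites
```

In practice, candidate sites called this way would still be filtered against the dsODN-only control and compared with the intended target sequence before being reported as off-targets.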
GUIDE-Seq can detect rare DSBs that occur with a frequency of about 0.1%; this detection limit may reflect the limitations of current next-generation sequencing platforms rather than the method itself, since the greater the read depth an instrument can achieve, the better it can detect rare events. Additionally, GUIDE-Seq is able to detect sites not predicted by in silico methods, which typically predict sites based on sequence similarity and percent mismatch. There have been cases of GUIDE-Seq not detecting any off-targets for certain guide RNAs, suggesting that some RNA-guided nucleases may have no associated off-targets. GUIDE-Seq has been used to show that engineered variants of Cas9 can have reduced off-target effects.
Caveats
GUIDE-Seq has been shown to miss some off-targets, when compared to the genome-wide sequencing DIGENOME-Seq method, due to the nature of its targeting. Another caveat is that GUIDE-Seq has been observed to generate slightly different off-target sites depending on the cell line. This could be due to cell lines having different parental genetic origins, cell line specific mutations, or, in the case of some immortal cell lines such as K562s, having aneuploidy. This suggests that it would be pertinent for researchers to test multiple cell lines to validate efficacy and accuracy. GUIDE-Seq cannot be used to identify off-targets in vivo.
References
Genome editing
Molecular biology | GUIDE-Seq | Chemistry,Engineering,Biology | 810 |
14,164,636 | https://en.wikipedia.org/wiki/NFYA | Nuclear transcription factor Y subunit alpha is a protein that in humans is encoded by the NFYA gene.
Function
The protein encoded by this gene is one subunit of a trimeric complex NF-Y, forming a highly conserved transcription factor that binds to CCAAT motifs in the promoter regions in a variety of genes. Subunit NFYA associates with a tight dimer composed of the NFYB and NFYC subunits, resulting in a trimer that binds to DNA with high specificity and affinity. The sequence specific interactions of the complex are made by the NFYA subunit, suggesting a role as the regulatory subunit. In addition, there is evidence of post-transcriptional regulation in this gene product, either by protein degradation or control of translation. Further regulation is represented by alternative splicing in the glutamine-rich activation domain, with clear tissue-specific preferences for the two isoforms.
NF-Y complex serves as a pioneer factor by promoting chromatin accessibility to facilitate other co-localizing cell type-specific transcription factors.
NF-Y has also been implicated as a central player in transcription start site (TSS) selection in animals. It safeguards the integrity of the nucleosome-depleted region and PIC localization at protein-coding gene promoters.
Interactions
NFYA has been shown to interact with Serum response factor and ZHX1. NFYA, NFYB and NFYC together form the NF-Y complex, whose pioneer-factor activity is described above.
Structure
The atomic structure of the NFY heterotrimer in complex with dsDNA was resolved via X-ray crystallography (PDB ID 4awl). Using one of the NFYA alpha helices as a template, structure inspired stapled peptides were designed to disrupt the NFY heterotrimer formation by preventing NFYA from binding to the NFYB/C heterodimer.
References
Further reading
External links
Transcription factors | NFYA | Chemistry,Biology | 428 |
23,842,972 | https://en.wikipedia.org/wiki/Gymnopilus%20fulvellus | Gymnopilus fulvellus is a species of mushroom in the family Hymenogastraceae.
See also
List of Gymnopilus species
External links
Gymnopilus fulvellus at Index Fungorum
fulvellus
Taxa named by Charles Horton Peck
Fungus species | Gymnopilus fulvellus | Biology | 61 |
21,465,265 | https://en.wikipedia.org/wiki/Lactarius%20pallidus | Lactarius pallidus, the pale milkcap, is an edible mushroom of the genus Lactarius. It is pale in colour, and found on the floor in beech or birch woodland. It's smooth cap features a particularly thick layer of flesh and often has an incurved margin. Though generally considered edible, it is not recommended to be eaten raw. It is common in Europe, and less common in North America and Australasia.
Taxonomy
Lactarius pallidus was classified as a member of Lactarius by Swedish mycologist Elias Magnus Fries. It was first described by Christian Hendrik Persoon, who named it Agaricus pallidus in his 1797 book Tentamen dispositionis methodicae Fungorum. It is known in English by its common name, the pale milkcap.
Description
Lactarius pallidus has a cap of across. In shape, it is initially a flattened convex, developing a funnel-shaped depression with age. It is pale buff in colour, sometimes dull but often with a rosy tint. It can also be a pale brown or pale flesh colour. The cap is smooth, firm, and sticky, and has a thick layer of white to buff flesh. The margin is incurved on younger specimens. The pale colour, incurved margin, and smooth cap are its most distinguishing features. The stem is long, by thick. In shape, the stem is cylindrical or slightly narrowed at the base, and is concolorous with the cap or whitish. The moderately decurrent, crowded gills are pale rosy buff to yellowish buff, and leak white milk. The spores are elliptic, with ridges of varying thickness running across them, forming few cross-connections. They typically measure 8 to 10 by 6 to 7 micrometres. The spores leave a spore print that is pale ochre with a slight salmon tinge.
Lactarius pallidus is similar in appearance to L. affinis, but is differentiated by the fact that the former lacks the peppery taste of the latter.
Edibility
Though generally considered edible, especially after cooking, L. pallidus has been described by some mycologists as inedible. The milk has a mild to moderately hot taste.
Distribution and habitat
Lactarius pallidus is typically found growing mycorrhizally under beech, but can also be found under birch. It is typically half-buried among leaf litter. It can sometimes be found in large groups, and occurs throughout summer and autumn. It can be found commonly in Europe but is much rarer in North America. It can also be found in Australia.
See also
List of Lactarius species
References
pallidus
Edible fungi
Fungi described in 1797
Fungi of North America
Fungi of Europe
Fungi of Australia
Fungus species | Lactarius pallidus | Biology | 570 |
13,667,573 | https://en.wikipedia.org/wiki/Sarcoscypha%20coccinea | Sarcoscypha coccinea, commonly known as the scarlet elf cup, or the scarlet cup, is a species of fungus in the family Sarcoscyphaceae of the order Pezizales. The fungus, widely distributed in the Northern Hemisphere, has been found in Africa, Asia, Europe, North and South America, and Australia. The type species of the genus Sarcoscypha, S. coccinea has been known by many names since its first appearance in the scientific literature in 1772. Phylogenetic analysis shows the species to be most closely related to other Sarcoscypha species that contain numerous small oil droplets in their spores, such as the North Atlantic island species S. macaronesica. Due to similar physical appearances and sometimes overlapping distributions, S. coccinea has often been confused with S. occidentalis, S. austriaca, and S. dudleyi.
The saprobic fungus grows on decaying sticks and branches in damp spots on forest floors, generally buried under leaf litter or in the soil. The cup-shaped fruit bodies are usually produced during the cooler months of winter and early spring. The brilliant red interior of the cups—from which both the common and scientific names are derived—contrasts with the lighter-colored exterior. The fruit bodies are edible, but their small size, low abundance, tough texture, and insubstantial fruitings would dissuade most people from collecting them for the table. The fungus has been used medicinally by the Oneida Native Americans, and also as a colorful component of table decorations in England. In the northern part of Russia, where fruitings are more frequent, it is eaten in salads, fried with smetana, or simply used as a colorful garnish. Molliardiomyces eucoccinea is the name given to the imperfect form of the fungus that lacks a sexually reproductive stage in its life cycle.
Taxonomy, naming, and phylogeny
The species was originally named Helvella coccinea by the Italian naturalist Giovanni Antonio Scopoli in 1772. Other early names include Peziza coccinea (Nikolaus Joseph von Jacquin, 1774) and Peziza dichroa (Theodor Holmskjold, 1799). Although some authors in older literature have applied the generic name Plectania to the taxon following Karl Fuckel's 1870 name change (e.g. Seaver, 1928; Kanouse, 1948; Nannfeldt, 1949; Le Gal, 1953), that name is now used for a fungus with brownish-black fruit bodies. Sarcoscypha coccinea was given its current name by Jean Baptiste Émil Lambotte in 1889. Obligate synonyms (different names for the same species based on one type) include Lachnea coccinea Gillet (1880), Macroscyphus coccineus Gray (1821), and Peziza dichroa Holmskjold (1799). Taxonomic synonyms (different names for the same species, based on different types) include Peziza aurantia Schumacher (1803), Peziza aurantiaca Persoon (1822), Peziza coccinea Jacquin (1774), Helvella coccinea Schaeffer (1774), Lachnea coccinea Phillips (1887), Geopyxis coccinea Massee (1895), Sarcoscypha coccinea Saccardo ex Durand (1900), Plectania coccinea (Fuckel ex Seaver), and Peziza cochleata Batsch (1783).
Sarcoscypha coccinea is the type species of the genus Sarcoscypha, having been first explicitly designated as such in 1931 by Frederick Clements and Cornelius Lott Shear. A 1990 publication revealed that the genus name Sarcoscypha had been used previously by Carl F. P. von Martius as the name of a tribe in the genus Peziza; according to the rules of Botanical Nomenclature, this meant that the generic name Peziza had priority over Sarcoscypha. To address the taxonomical dilemma, the genus name Sarcoscypha was conserved against Peziza, with S. coccinea as the type species, to "avoid the creation of a new generic name for the scarlet cups and also to avoid the disadvantageous loss of a generic name widely used in the popular and scientific literature". The specific epithet coccinea is derived from the Latin word meaning "deep red". The species is commonly known as the "scarlet elf cup", the "scarlet elf cap", or the "scarlet cup fungus".
S. coccinea var. jurana was described by Jean Boudier (1903) as a variety of the species having a brighter and more orange-colored fruit body, and with flattened or blunt-ended ascospores. Today it is known as the distinct species S. jurana. S. coccinea var. albida, named by George Edward Massee in 1903 (as Geopyxis coccinea var. albida), has a cream-colored rather than red interior surface, but is otherwise identical to the typical variety.
Within the large area that includes the temperate to alpine-boreal zone of the Northern Hemisphere (Europe and North America), only S. coccinea had been recognized until the 1980s. However, it had been known since the early 1900s that there existed several macroscopically indistinguishable taxa with various microscopic differences: the distribution and number of oil droplets in fresh spores; germination behavior; and spore shape. Detailed analysis and comparison of fresh specimens revealed that what had been collectively called "S. coccinea" actually consisted of four distinct species: S. austriaca, S. coccinea, S. dudleyi, and S. jurana.
The phylogenetic relationships in the genus Sarcoscypha were analyzed by Francis Harrington in the late 1990s. Her cladistic analysis combined comparisons of the sequences of the internal transcribed spacer in the non-functional RNA with fifteen traditional morphological characteristics, such as spore features, fruit body shape, and degree of curliness of the "hairs" that form the tomentum. Based on her analysis, S. coccinea is part of a clade that includes the species S. austriaca, S. macaronesica, S. knixoniana and S. humberiana. All of these Sarcoscypha species have numerous, small oil droplets in their spores. Its closest relative, S. macaronesica, is found on the Canary Islands and Madeira; Harrington hypothesized that the most recent common ancestor of the two species originated in Europe and was later dispersed to the Macaronesian islands.
Description
Initially spherical, the fruit bodies are later shallowly saucer- or cup-shaped with rolled-in rims, and measure in diameter. The inner surface of the cup is deep red (fading to orange when dry) and smooth, while the outer surface is whitish and covered with a dense matted layer of tiny hairs (a tomentum). The stipe, when present, is stout and up to long (if deeply buried) by thick, and whitish, with a tomentum. Color variants of the fungus exist that have reduced or absent pigmentation; these forms may be orange, yellow, or even white (as in the variety albida). In the Netherlands, white fruit bodies have been found growing in the polders.
Sarcoscypha coccinea is one of several fungi whose fruit bodies have been noted to make a "puffing" sound—an audible manifestation of spore-discharge where thousands of asci simultaneously explode to release a cloud of spores.
Spores are 26–40 by 10–12 μm, elliptical, smooth, colorless, hyaline (translucent), and have small lipid droplets concentrated at either end. The droplets are refractive to light and visible with light microscopy. In older, dried specimens (such as herbarium material), the droplets may coalesce and hinder the identification of species. Depending on their geographical origin, the spores may have a delicate mucilaginous sheath or "envelope"; European specimens are devoid of an envelope while specimens from North America invariably have one.
The asci are long and cylindrical, and taper into a short stem-like base; they measure 300–375 by 14–16 μm. Although in most Pezizales all of the ascospores are formed simultaneously through delimitation by an inner and outer membrane, in S. coccinea the ascospores located in the basal parts of the ascus develop faster. The paraphyses (sterile filamentous hyphae present in the hymenium) are about 3 μm wide (and only slightly thickened at the apex), and contain red pigment granules.
Anamorph form
Anamorphic or imperfect fungi are those that seem to lack a sexual stage in their life cycle, and typically reproduce by the process of mitosis in structures called conidia. In some cases, the sexual stage—or teleomorph stage—is later identified, and a teleomorph-anamorph relationship is established between the species. The International Code of Nomenclature for algae, fungi, and plants permits the recognition of two (or more) names for one and the same organism, one based on the teleomorph, the other(s) restricted to the anamorph. The name of the anamorphic state of S. coccinea is Molliardiomyces eucoccinea, first described by Marin Molliard in 1904. Molliard found the growth of the conidia to resemble those of the genera Coryne and Chlorosplenium rather than the Pezizaceae, and he considered that this suggested an affinity between Sarcoscypha and the family Helvellaceae. In 1972, John W. Paden again described the anamorph, but like Molliard, failed to give a complete description of the species. In 1984, Paden created a new genus he named Molliardiomyces to contain the anamorphic forms of several Sarcoscypha species, and set Molliardiomyces eucoccinea as the type species. This form produces colorless conidiophores (specialized stalks that bear conidia) that are usually irregularly branched, measuring 30–110 by 3.2–4.7 μm. The conidia are ellipsoidal to egg-shaped, smooth, translucent (hyaline), and 4.8–16.0 by 2.3–5.8 μm; they tend to accumulate in "mucilaginous masses".
Similar species
Similar species include S. dudleyi and S. austriaca, and in the literature, confusion amongst the three is common. Examination of microscopic features is often required to definitively differentiate between the species. Sarcoscypha occidentalis has smaller cups (0.5–2.0 cm wide), a more pronounced stalk that is 1–3 cm long, and a smooth exterior surface. Unlike S. coccinea, it is only found in the New World and in east and midwest North America, but not in the far west. It also occurs in Central America and the Caribbean. In North America, S. austriaca and S. dudleyi are found in eastern regions of the continent. S. dudleyi has elliptical spores with rounded ends that are 25–33 by 12–14 μm and completely sheathed when fresh. S. austriaca has elliptical spores that are 29–36 by 12–15 μm that are not completely sheathed when fresh, but have small polar caps on either end. The Macaronesian species S. macaronesica, frequently misidentified as S. coccinea, has smaller spores, typically measuring 20.5–28 by 7.3–11 μm and smaller fruit bodies—up to wide.
Other similar species include Plectania melastoma, Plectania nannfeldtii, and Scutellinia scutellata.
Ecology, habitat and distribution
A saprobic species, Sarcoscypha coccinea grows on decaying woody material from various plants: the rose family, beech, hazel, willow, elm, and, in the Mediterranean, oak. The fruit bodies of S. coccinea are often found growing singly or clustered in groups on buried or partly buried sticks in deciduous forests, growing from January to April. A Hungarian study noted that the fungus was found mainly on twigs of European hornbeam (Carpinus betulus) that were typically less than long. Fruit bodies growing on sticks above the ground tend to be smaller than those on buried wood. Mushrooms that are sheltered from wind also grow larger than their more exposed counterparts. The fruit bodies are persistent and may last for several weeks if the weather is cool. The time required for the development of fruit bodies has been estimated to be about 24 weeks, although it was noted that "the maximum life span may well be more than 24 weeks because the decline of the colonies seemed to be associated more with sunny, windy weather rather than with old age." One field guide calls the fungus "a welcome sight after a long, desperate winter and ... the harbinger of a new year of mushrooming".
Common over much of the Northern Hemisphere, S. coccinea occurs in the Midwest, in the valleys between the Pacific coast, the Sierra Nevada, and the Cascade Range. Its North American distribution extends north to various locations in Canada and south to the Mexican state of Jalisco. The fungus has also been collected from Chile in South America. It is also found in the Old World—Europe, Africa, Asia, Australia, and India. Specimens collected from the Macaronesian islands that were once thought to be S. coccinea were later determined to be the distinct species S. macaronesica. A 1995 study of the occurrence of British Sarcoscypha (including S. coccinea and S. austriaca) concluded that S. coccinea was becoming very rare in Great Britain. All species of Sarcoscypha, including S. coccinea, are Red-Listed in Europe. In Turkey, it is considered critically endangered.
The fruit bodies have been noted to be a source of food for rodents in the winter and for slugs in the summer.
Chemistry
The red color of the fruit bodies is caused by five types of carotenoid pigments, including plectaniaxanthin and β-carotene. Carotenoids are lipid-soluble and are stored within granules in the paraphyses. British-Canadian mycologist Arthur Henry Reginald Buller suggested that pigments in fruit bodies exposed to the Sun absorb some of the Sun's rays, raising the temperature of the hymenium—hastening the development of the ascus and subsequent spore discharge.
Lectins are sugar-binding proteins that are used in blood typing, biochemical studies and medical research. A lectin has been purified and characterized from S. coccinea fruit bodies that can bind selectively to several specific carbohydrate molecules, including lactose.
Uses
Sarcoscypha coccinea was used as a medicinal fungus by the Oneida people and possibly by other tribes of the Iroquois Six Nations. The fungus, after being dried and ground up into a powder, was applied as a styptic, particularly to the navels of newborn children that were not healing properly after the umbilical cord had been severed. Pulverized fruit bodies were also kept under bandages made of soft-tanned deerskin. In Scarborough, England, the fruit bodies used to be arranged with moss and leaves and sold as a table decoration.
The species is said to be edible (perhaps best dried), inedible, or "not recommended", depending on the author. Although its insubstantial fruit body and low numbers do not make it particularly suitable for the table, one source claims that "children in the Jura are said to eat it raw on bread and butter; and one French author suggests adding the cups, with a little Kirsch, to a fresh fruit salad."
References
Cited books
Fungi described in 1772
Edible fungi
Fungi of Africa
Fungi of Asia
Fungi of Australia
Fungi of Europe
Fungi of North America
Fungi of South America
Fungi of Western Asia
Sarcoscyphaceae
Fungus species | Sarcoscypha coccinea | Biology | 3,424 |
26,422,274 | https://en.wikipedia.org/wiki/Biotinylated%20dextran%20amine | Biotinylated dextran amines (BDA) are organic compounds used as anterograde and retrograde neuroanatomical tracers. They can be used for labeling the source as well as the point of termination of neural connections and therefore to study neural pathways.
BDA is delivered into the nervous system by iontophoretic or pressure injection and visualized with an avidin-biotinylated horseradish peroxidase procedure, followed by a standard or metal-enhanced diaminobenzidine (DAB) reaction. Samples can then be analyzed by optical microscopy as well as by electron microscopy.
High molecular weight BDA (10 kDa) yields sensitive and detailed labeling of axons and terminals, while low molecular weight BDA (3 kDa) yields sensitive and detailed retrograde labeling of neuronal cell bodies.
References
Amines | Biotinylated dextran amine | Chemistry | 180 |
1,387,110 | https://en.wikipedia.org/wiki/Realized%20niche%20width | Realized niche width is a phrase relating to ecology, is defined by the actual space that an organism inhabits and the resources it can access as a result of limiting pressures from other species (e.g. superior competitors). An organism's ecological niche is determined by the biotic and abiotic factors that make up that specific ecosystem that allow that specific organism to survive there. The width of an organism's niche is set by the range of conditions a species is able to survive in that specific environment.
Definition
The fundamental niche width of an organism refers to the theoretical range of conditions that an organism could survive and reproduce in without considering interspecific interactions. The fundamental niche exclusively considers limiting biotic and abiotic factors such as appropriate food sources and a suitable climate. The fundamental niche width often differs from the realized niche width (the areas where actually inhabited by a given species). This differentiation is due to interspecific competition with other species within their ecosystem while still considering the biotic and abiotic limiting factors. A species' realized niche is usually much narrower than its fundamental niche width as it is forced to adjust its niche around the superior competing species.
The physical area where a species lives is its habitat; the set of environmental features essential to that species' survival is its "niche." (Ecology. Begon, Harper, Townsend)
Importance
The difference between the realized and the fundamental niche is important in understanding how interactions with different species in one environment affect the fitness of another species. This is important not only for understanding how a species functions in an ecosystem, but also for determining the potential and realized success of invasive species. Invasive species could thrive or be killed off in an environment where they would theoretically be able to exist, depending on the presence or absence of other species. To survive, an invasive species first has to survive the journey to the new area, and then has to be able to survive in that habitat. After this, it must be able to compete and reproduce successfully alongside the species already present in the newly invaded environment. Considering these factors, not all invasive species are devastating to the new environment they inhabit, as they must first overcome these other challenges before they can negatively affect it.
In an organism's niche, the abiotic and biotic factors determine the ability of a species to survive; however, both the abiotic and biotic factors of that environment can be changed by that species' existence. A species' impact on its biotic environment tends to affect not only that species' ability to survive, but also the other species it coexists with. Again, these changes are important in understanding the effects of invasive species in a new habitat. The ability of a new species to change an environment's abiotic and biotic factors can make a previously habitable environment uninhabitable for another species, and the extinction of that species can further change the biotic factors of the environment. Invasive species thus affect the biotic environment not only directly, but also indirectly, by affecting which species are able to survive in the habitat.
Niche theory states that a species' ranges are limited by their physiological tolerances (fundamental niche) and their biotic limitations (realized niche). The survival rates of organisms facing rapid niche shifts help scientists predict the future effects of climate change and invasive species on current ecological communities. The ability of organisms to shift niches also help scientists understand community formation and speciation. Niche shifts for invasive species in their native environment differ from those in their newly invaded environment. After an invasive species is introduced to their new environment, they have to cope with new biotic factors, environmental constraints, and climate differences. These variables play a role in determining how the organism's niche will evolve. Biophysical models use links between an organism's preferred climate and their functional traits to determine where an organism could survive without taking biotic factors into account.
Experiments
Barnacles
The phenomenon of fundamental and realized niches was documented by the ecologist Joseph Connell in his study of species overlap between barnacles on intertidal rocks. He observed that Chthamalus stellatus and Balanus balanoides inhabited the upper and lower strata of intertidal rocks respectively, but only Chthamalus barnacles could survive both the upper and lower strata without desiccation. The removal of Balanus barnacles from the lower strata, resulted in the Chthamalus barnacles occupying its fundamental niche (both upper and lower strata) which is much larger than its realized niche in the upper strata.
This experiment was conducted in the rocky intertidal zone because of its accessibility and the large amount of previous research done on the species living there. Many of the species that live there are sedentary or slow moving, making them easier to study and to manipulate into experimental and control groups. The goal of Connell's experiment was to determine how much physical factors and biotic competition affected community structure in the rocky intertidal ecosystem. Vertical zonation also plays a role in determining the placement of different species there, a pattern previously attributed solely to the tides.
Invasion biology
A study by Tingley et al. focuses on the cane toad's (Rhinella marina, formerly Bufo marinus) invasion of Australia. Through thermal acclimation and the development of improved movement abilities, the toad has expanded its habitat range significantly. Evidence in this study showed that there was a difference between the toad's native niche and its niche in the invaded environment. A review of 180 case studies showed that only 50% of invasive species went through a niche shift; however, niche changes are determined in a variety of different ways, making it hard to judge how accurate this figure is.
It was also proven that the toad's increased range was only observed in Australia and not in its native environment even though the same physical conditions were present in both. This means that biotic factors and/or dispersal barriers limit the toad in its native environment. Without these constraints in its invaded environment, the toad is able to fill out its fundamental niche. Determining realized niches help with developing biotic control agents for invasive species, and determining an organism's fundamental niche help scientist's conclude how well a species would be able to survive and adapt to climate change.
Pathogens
Another study, by Truong et al., reviewed the use of plants as the realized niche for the human pathogen Listeria monocytogenes. The paper focuses on how the pathogen uses a plant as its realized niche. The fundamental niche of the pathogen can be determined through studies where it is grown aseptically (without other microorganisms); however, abiotic and biotic factors limit its ability to persist in nature. The study was not able to determine clearly how the pathogen and plants survive together, but it did show that the plants did not defend themselves against the pathogen's presence. It also supported the theory that the pathogen can use plant nutrients to survive and multiply if the plant's environment and competition allow. More comprehensive research will be needed to determine the pathogen's realized niche. The study further shows how determining an organism's realized niche can help in understanding the natural history of this human pathogen.
References
Ecology | Realized niche width | Biology | 1,530 |
74,928,292 | https://en.wikipedia.org/wiki/Water%20pollution%20in%20Haiti | Pollution of water resources in Haiti, as with many developing countries, is a major concern. The main cause of water pollution in the country is major deficiencies in the collection of solid waste and the absence or dysfunction of wastewater sanitation. In addition, the considerable increase in the population over the last decades coupled with a lack of urban planning by successive authorities in the country has led to massive degradation in the environment, while affecting the quality of available water resources. As a result, surface water and shallow groundwater are increasingly contaminated by micro-organisms such as bacteria, protozoa and viruses, exposing men, women and children to cholera, typhoid, Cryptosporidiosis and all kinds of waterborne diseases.
Causes of pollution
Untreated sewages
Haiti does not have a collective system for the collection and treatment of wastewater. Sanitation, where it exists in Haiti, is autonomous in nature: the individual is responsible for managing and disposing of the wastewater they produce. As a result, gray water generally ends up in open drainage channels that were sized only for stormwater drainage. Where drainage channels do not exist, gray water is simply discharged onto the ground near the houses. This promotes contamination of surface water and groundwater through runoff and infiltration.
As for black water, the situation is dire: in Haiti only 26% of the population has access to improved sanitation systems, with 34.5% coverage in urban areas and 17% in rural areas. More than half of these toilets were not built over septic tanks, and they are not regularly emptied. In addition, when sanitary systems are emptied, the work is most often carried out by manual drainers and the excreta is simply dumped into canals or waterways. Indeed, the country has a single functional excreta treatment center, with a capacity of 500 m3 per day, for a population of nearly 12,000,000 inhabitants and an area of 27,750 km2.
Other problems
In recent years, Haiti has experienced significant demographic growth and unplanned urbanization from rural areas to urban areas, particularly the Port-au-Prince metropolitan region. This has led to the creation of numerous slums without access to the most basic services. These areas are also major producers of solid waste, which is generally dumped in ravines, street corners, roadsides and other open spaces. In fact, studies of waste management in Port-au-Prince showed that 87.7% of the poorest households used ravines to dispose of their waste.
All these poor sanitation practices combined with shallow aquifers and fractured rocks result in widespread contamination, either through runoff and/or infiltration of polluted effluents, of the country's ground and surface water resources.
Ways of curbing water pollution in Haiti
Addressing water pollution in Haiti requires a multifaceted approach that considers both immediate interventions and long-term solutions. Here are several strategies that can help curb water pollution in Haiti:
Improving Sanitation Infrastructure: Promoting the construction and maintenance of proper sanitation facilities such as toilets and sewage treatment systems can prevent untreated sewage from contaminating water sources.
Implementing Waste Management Practices: Establishing effective waste collection and disposal systems to reduce plastic and solid waste pollution in rivers, lakes, and coastal waters.
Promoting Sustainable Agriculture: Encouraging the adoption of organic farming practices and reducing the use of chemical fertilizers and pesticides to minimize agricultural runoff into water bodies.
Protecting Watershed Areas: Implementing measures to protect and restore critical watershed areas through reforestation and erosion control to prevent sedimentation and runoff pollution.
Educating Communities: Conducting educational campaigns to raise awareness about the importance of clean water, proper waste disposal, and hygiene practices among communities.
Regulating Industrial Discharges: Enforcing regulations on industrial wastewater discharge to ensure that pollutants from factories and industries do not contaminate water sources.
Investing in Water Treatment Technologies: Installing and maintaining water treatment facilities to improve access to clean and safe drinking water for communities.
Collaborating with International Organizations: Partnering with international organizations and NGOs to provide technical expertise, funding, and resources for water pollution control projects.
Quality of water resources
No recent survey has been carried out at the national level on the quality of water used daily by the population. However, according to a survey carried out in April 2012 in the Department of Artibonite, of 108 sources tested for water quality, two-thirds showed traces of E. coli (Escherichia coli) and 25.9% had a concentration of more than 100 MPN/100 mL, a level that poses a very high risk to human health.
Other studies carried out in the three main cities of the country, namely Port-au-Prince, Cap-Haïtien and Les Cayes, have shown the presence of microorganisms such as Giardia and Cryptosporidium at levels dangerous for the population. Indeed, values of 4 to 1274 cryptosporidium oocysts and 741 to 6088 Cryptosporidium oocysts were found in Port-au-Prince and Cap-Haïtien, in waters intended for use by the population.
The presence of these microorganisms in Haiti's waters is a marker of faecal contamination.
Related diseases
Water-borne diseases such as diarrhea, cholera, cryptosporidiosis, among others, are very common in the country. In this sense, they present a high health risk for the most vulnerable.
Easily catchable diseases, such as diarrhea and those resulting in malnutrition, kill between 20% and 28% of children aged 0 to 5, respectively. Cryptosporidiosis is a common cause of diarrhea in Haiti. It is responsible for 17.5% of acute diarrhea affecting children under 2 years old and 30% of chronic diarrhea affecting people with HIV.
Between October 2010 and February 2019, an epidemic of cholera introduced by Nepalese soldiers caused the death of nearly 10,000 people and infected more than 820,000. The disease resurged in October 2022, affecting four departments of the country, with a total of 6,814 suspected cases, of which 5,628 were hospitalized, and 144 deaths as of 6 November 2022.
References
Water pollution in Haiti
32,180,913 | https://en.wikipedia.org/wiki/Calponin%20family%20repeat | In molecular biology, the calponin family repeat is a 26 amino acid protein domain. Calponin 1 (CNN1) contains three copies of this domain. This domain is also found in vertebrate smooth muscle protein (SM22 or transgelin), and a number of other proteins whose physiological role is not yet established, including Drosophila synchronous flight muscle protein SM20, Caenorhabditis elegans unc-87 protein, rat neuronal protein NP25, and an Onchocerca volvulus antigen.
References
Protein domains | Calponin family repeat | Biology | 121 |
20,289,869 | https://en.wikipedia.org/wiki/Cranfield%20experiments | The Cranfield experiments were a series of experimental studies in information retrieval conducted by Cyril W. Cleverdon at the College of Aeronautics, today known as Cranfield University, in the 1960s to evaluate the efficiency of indexing systems. The experiments were broken into two main phases, neither of which was computerized. The entire collection of abstracts, resulting indexes and results were later distributed in electronic format and were widely used for decades.
In the first series of experiments, several existing indexing methods were compared to test their efficiency. The queries were generated by the authors of the papers in the collection and then translated into index lookups by experts in those systems. In this series, one method went from least efficient to most efficient after minor changes to the way the data was recorded on its index cards. The conclusion appeared to be that the underlying methodology mattered less than specific details of the implementation. This led to considerable debate on the methodology of the experiments.
These criticisms also led to the second series of experiments, now known as Cranfield 2. Cranfield 2 attempted to gain additional insight by reversing the methodology; Cranfield 1 tested the ability for experts to find a specific resource following the index system, Cranfield 2 instead studied the results of asking human-language questions and seeing if the indexing system provided a relevant answer, regardless of whether it was the original target document. It too was the topic of considerable debate.
The Cranfield experiments were extremely influential in the information retrieval field, itself a subject of considerable interest in the post-World War II era when the quantity of scientific research was exploding. It was the topic of continual debate for years and led to several computer projects to test its results. Its influence was considerable over a forty-year period before natural language indexes like those of modern web search engines became commonplace.
Background
The now-famous July 1945 article "As We May Think" by Vannevar Bush is often pointed to as the first complete description of the field that became information retrieval. The article describes a hypothetical machine known as "memex" that would hold all of mankind's knowledge in an indexed form that would allow it to be retrieved by anyone.
In 1948, the Royal Society held the Scientific Information Conference that first explored some of these concepts on a formal basis. This led to a small number of experiments in the field in the UK, US, and the Netherlands. The only major effort to compare different systems was led by Gull using the collection of works from the Armed Forces Technical Information Agency, which had started as a collection of aeronautics reports captured in Germany at the end of World War II. Judging of the results was carried out by experts in the two systems, and they never agreed on whether various retrieved documents were relevant to the search, with each group rejecting over 30% of the results as wrong. Further testing was cancelled as there appeared to be no consensus.
A second conference on the topic, the International Conference on Scientific Information, was held in Washington, DC in 1958, by which time computer development had reached the point where automatic index retrieval was possible. It was at this meeting that Cyril W. Cleverdon "got the bit between his teeth" and managed to arrange for funding from the US National Science Foundation to start what would later be known as Cranfield 1.
Cranfield 1
The first series of experiments directly compared four indexing systems that represented significantly different conceptual underpinnings. The four systems were:
the Universal Decimal Classification, a hierarchical system being widely introduced in libraries,
the Alphabetical Subject Catalogue which alphabetized subject headings in classic library index card collections,
the Faceted Classification Scheme which allows combinations of subjects to produce new subjects,
and Mortimer Taube's Uniterm system of co-ordinate indexing where a reference may be found on any number of separate index cards.
In an early series of experiments, participants were asked to create indexes for a collection of aerospace-related documents. Each index was prepared by an expert in that methodology. The authors of the original documents were then asked to prepare a set of search terms that should return that document. The indexing experts were then asked to generate queries into their index based on the author's search terms. The queries were then used to examine the index to see if it returned the target document.
In these tests, all but the faceted system produced roughly equal numbers of "correct" results, while the faceted concept lagged. Studying these results, the faceted system was re-indexed using a different format on the cards and the tests were re-run. In this series of tests, the faceted system was now the clear winner. This suggested the underlying theory behind the system was less important than specifics of the implementation.
The outcome of these experiments, published in 1962, generated enormous debate, both among the supporters of the various systems, as well as among researchers who complained about the experiments as a whole. Nevertheless, it appeared one conclusion was clearly supported: simple systems based on keywords appeared to work just as well as complex classificatory schemes. This is important, as the former are dramatically easier to implement.
Cranfield 2
In the first series of experiments, experts in the use of the various techniques were tasked with both the creation of the index and its use against the sample queries. Each system had its own concept about how a query should be structured, which would today be known as a query language. Much of the criticism of the first experiments focused on whether the experiments were truly testing the systems, or the user's ability to translate the query into the query language.
This led to the second series of experiments, Cranfield 2, that considered the question of converting the query into the language. To do this, instead of considering the generation of the query as a black box, each step was broken down. The outcome of this approach was revolutionary at the time; it suggested that the search terms be left in their original format, what would today be known as a natural language query.
Another major change was how the results were judged. In the original tests, a success occurred only if the index returned the exact document that had been used to generate the search. However, this was not typical of an actual query; a user looking for information on aircraft landing gear might be happy with any of the collection's many papers on the topic, but Cranfield 1 would consider such a result a failure in spite of returning relevant materials. In the second series, the results were judged by 3rd parties who gave a qualitative answer on whether the query generated a relevant set of papers, as opposed to returning a specified original document.
Continued debate
The results of the two test series continued to be a subject of considerable debate for years. In particular, it led to a running debate between Cleverdon and Jason Farradane, one of the founders of the Institute of Information Scientists in 1958. The two would invariably appear at meetings where the other was presenting and then, during the question and answer period, explain why everything they were doing was wrong. The debate has been characterized as "...fierce and unrelenting, sometimes well beyond the boundaries of civility." This chorus was joined by Don R. Swanson in the US, who published a critique on the Cranfield experiments a few years later.
In spite of these criticisms, Cranfield 2 set the bar by which many following experiments were judged. In particular, Cranfield 2's methodology, starting with natural language terms and judging the results by relevance, not exact matches, became almost universal in following experiments in spite of many objections.
Influence
With the conclusion of Cranfield 2 in 1967, the entire corpus was published in a machine-readable form. Today, this is known as the Cranfield 1400, or any variety of variations on that theme. The name refers to the number of documents in the collection, which consists of 1398 abstracts. The collection also includes 225 queries and the relevance judgments of all query:document pairs that resulted from the experimental runs. The main database of abstracts is about 1.6 MB.
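To illustrate how such a collection is typically used for evaluation, the sketch below computes set-based precision and recall for a single query from a list of retrieved documents and the relevance judgments. The data structures and the example numbers are assumptions for illustration, not the actual distribution format of the Cranfield collection.

```python
def precision_recall(retrieved, relevant):
    """Compute set-based precision and recall for one query.

    retrieved: list of document ids returned by the system, in rank order.
    relevant:  set of document ids judged relevant for the query.
    """
    retrieved_set = set(retrieved)
    hits = len(retrieved_set & relevant)
    precision = hits / len(retrieved_set) if retrieved_set else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical example: the system returns five documents for a query
# for which three documents were judged relevant.
p, r = precision_recall(retrieved=[12, 51, 486, 573, 944],
                        relevant={51, 573, 1321})
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.40 recall=0.67
```

Evaluations on the Cranfield collection usually average such measures over all 225 queries, and later work added rank-sensitive measures on top of this basic precision/recall framework.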
The experiments were carried out in an era when computers had a few kilobytes of main memory and network access to perhaps a few megabytes. For instance, the mid-range IBM System/360 Model 50 shipped with 64 to 512 kB of core memory (tending toward the lower end) and its typical hard drive stored just over 80 MB. As the capabilities of systems grew through the 1960s and 1970s, the Cranfield document collection became a major testbed corpus that was used repeatedly for many years.
Today the collection is too small to use for practical testing beyond pilot experiments. Its place has mostly been taken by the TREC collection, which contains 1.89 million documents across a wider array of subjects, or the even more recent GOV2 collection of 25 million web pages.
See also
ASLIB
Information history
References
Citations
Bibliography
Lancaster, F. W. (1965). A case study in the application of Cranfield system evaluation techniques. Journal of Chemical Documentation, 5(2), 92–96.
External links
Cranfield papers in ACM SIGIR Museum
History of computing in the United Kingdom
Information retrieval evaluation
Science and technology in Bedfordshire | Cranfield experiments | Technology | 1,891 |
176,356 | https://en.wikipedia.org/wiki/Urbain%20Le%20Verrier | Urbain Jean Joseph Le Verrier (; 11 March 1811 – 23 September 1877) was a French astronomer and mathematician who specialized in celestial mechanics and is best known for predicting the existence and position of Neptune using only mathematics.
The calculations were made to explain discrepancies with Uranus's orbit and the laws of Kepler and Newton. Le Verrier sent the coordinates to Johann Gottfried Galle in Berlin, asking him to verify. Galle found Neptune the same night he received Le Verrier's letter, within 1° of the predicted position.
The discovery of Neptune is widely regarded as a dramatic validation of celestial mechanics, and is one of the most remarkable moments of 19th-century science.
Life
Early years
Urbain Le Verrier was born at Saint-Lô, Manche, France, to a modest bourgeois family, his parents being Louis-Baptiste Le Verrier and Marie-Jeanne-Josephine-Pauline de Baudre.
He studied at the École Polytechnique – briefly chemistry, under Gay-Lussac, writing papers on the combinations of phosphorus and hydrogen, and of phosphorus and oxygen.
He then switched to astronomy, particularly celestial mechanics, and accepted a job at the Paris Observatory. He spent most of his professional life there, eventually becoming director of the institution, 1854–1870 and again 1873–1877.
In 1846 Le Verrier became a member of the French Academy of Sciences, and in 1855 was elected a foreign member of the Royal Swedish Academy of Sciences. His name is one of the 72 names inscribed on the Eiffel Tower.
Career
Early work
Le Verrier's first work in astronomy was presented to the Académie des Sciences in September 1839, entitled Sur les variations séculaires des orbites des planètes (On the Secular Variations of the Orbits of the Planets). This work addressed the then most-important question in astronomy: the stability of the Solar System, first investigated by Laplace. He was able to derive some important limits on the motions of the system, but due to the inaccurately-known masses of the planets, his results were tentative.
From 1844 to 1847, Le Verrier published a series of works on periodic comets, in particular those of Lexell, Faye and DeVico. He was able to show some interesting interactions with the planet Jupiter, proving that certain comets were actually the reappearance of previously-known comets flung into different orbits.
Discovery of Neptune
Le Verrier's most famous achievement is his prediction of the existence of the then unknown planet Neptune, using only mathematics and astronomical observations of the known planet Uranus. Encouraged by physicist Arago, Director of the Paris Observatory, Le Verrier was intensely engaged for months in complex calculations to explain small but systematic discrepancies between Uranus's observed orbit and the one predicted from the laws of gravity of Newton. At the same time, but unknown to Le Verrier, similar calculations were made by John Couch Adams in England. Le Verrier announced his final predicted position for Uranus's unseen perturbing planet publicly to the French Academy on 31 August 1846, two days before Adams's final solution was privately mailed to the Royal Greenwich Observatory. Le Verrier transmitted his own prediction by 18 September in a letter to Johann Galle of the Berlin Observatory. The letter arrived five days later, and the planet was found with the Berlin Fraunhofer refractor that same evening, 23 September 1846, by Galle and Heinrich d'Arrest within 1° of the predicted location near the boundary between Capricorn and Aquarius.
There was, and to an extent still is, controversy over the apportionment of credit for the discovery. There is no ambiguity to the discovery claims of Le Verrier, Galle, and d'Arrest. Adams's work was begun earlier than Le Verrier's but was finished later and was unrelated to the actual discovery. Not even the briefest account of Adams's predicted orbital elements was published until more than a month after Berlin's visual confirmation. Adams made full public acknowledgement of Le Verrier's priority and credit (not forgetting to mention the role of Galle) when he gave his paper to the Royal Astronomical Society in November 1846:
Tables of the planets
Early in the 19th century, the methods of predicting the motions of the planets were somewhat scattered, having been developed over decades by many different researchers. In 1847, Le Verrier took on the task to "... embrace in a single work the entire planetary system, put everything in harmony if possible, otherwise, declare with certainty that there are as yet unknown causes of perturbations...",
a work which would occupy him for the rest of his life.
Le Verrier began by re-evaluating, to the 7th order, the technique of calculating the planetary perturbations known as the perturbing function. This derivation, which resulted in 469 mathematical terms, was complete by 1849. He next collected observations of the positions of the planets as far back as 1750. Examining these and correcting for inconsistencies with the most recent data occupied him until 1852.
Le Verrier published, in the Annales de l'Observatoire de Paris, tables of the motions of all of the known planets, releasing them as he completed them, starting in 1858.
The tables formed the fundamental ephemeris of the Connaissance des Temps, the astronomical almanac of the Bureau des Longitudes, until about 1912.
About that time, Le Verrier's work on the outer planets was revised and expanded by Gaillot.
Precession of Mercury
Le Verrier began studying the motion of Mercury as early as 1843, with a report entitled Détermination nouvelle de l 'orbite de Mercure et de ses perturbations (A New Determination of the Orbit of Mercury and its Perturbations).
In 1859, Le Verrier was the first to report that the slow precession of Mercury's orbit around the Sun could not be completely explained by Newtonian mechanics and perturbations by the known planets. He suggested, among possible explanations, that another planet (or perhaps, instead, a series of smaller 'corpuscules') might exist in an orbit even closer to the Sun than that of Mercury, to account for this perturbation. (Other explanations considered included a slight oblateness of the Sun.) The success of the search for Neptune based on its perturbations of the orbit of Uranus led astronomers to place some faith in this possible explanation, and the hypothetical planet was even named Vulcan. However, no such planet was ever found, and the anomalous precession was eventually explained by general relativity theory.
Later life
Le Verrier's methods of management were disliked by the staff of the Observatoire, and the disputes became so great that he was driven out in 1870. He was succeeded by Delaunay, but was reinstated in 1873 after Delaunay accidentally drowned. Le Verrier held the position until his death in 1877.
Le Verrier married Lucille Clotilde Choquet in 1837 and had three children. He died in Paris, France, and was buried in the Montparnasse Cemetery. A large stone celestial globe sits over his grave. He is remembered by the phrase attributed to Arago: "the man who discovered a planet with the point of his pen."
In 1847, he was elected to the American Philosophical Society.
Honours
Gold Medal of the Royal Astronomical Society – 1868 and 1876
Namesake of craters on the Moon and Mars, a ring of Neptune, and the asteroid 1997 Leverrier
One of the 72 names engraved on the Eiffel Tower
See also
Discovery of Neptune
Statue of Le Verrier by Henri Chapu (see List of works by Henri Chapu)
Lyttleton, Raymond Arthur (1968), Mysteries of the Solar System, Clarendon, Oxford, UK, Chapter 7: The discovery of Neptune
References
Further reading
External links
Le Verrier on the French 50 Franc banknote
Obituary – Nature, 1877, vol. 16, p. 453
Interesting interview with M. LeVerrier, director of the Paris Observatory – New York Herald, 14 April 1877, p. 7
Virtual exhibition on Paris Observatory digital library
Le Verrier's works digitalized on Paris Observatory digital library
1811 births
1877 deaths
People from Saint-Lô
École Polytechnique alumni
Burials at Montparnasse Cemetery
19th-century French astronomers
French Roman Catholics
19th-century French mathematicians
Lycée Louis-le-Grand alumni
Members of the French Academy of Sciences
Members of the Royal Swedish Academy of Sciences
Foreign members of the Royal Society
Neptune
Recipients of the Gold Medal of the Royal Astronomical Society
Recipients of the Copley Medal
Discoverers of astronomical objects | Urbain Le Verrier | Astronomy | 1,796 |
47,621,338 | https://en.wikipedia.org/wiki/Paintbrush | A paintbrush is a brush used to apply paint or ink. A paintbrush is usually made by clamping bristles to a handle with a ferrule. Paintbrushes are available in various sizes, shapes, and materials. Thicker ones are used for filling in, and thinner ones are used for details. They may be subdivided into decorators' brushes used for painting and decorating and artists' brushes used for visual art.
History
Paintbrushes have been used to apply pigment since as early as the Paleolithic era, which began around 2.5 million years ago.
Old painting kits, estimated to be around 100,000 years old, were discovered in a cave in what is now modern South Africa.
Ancient Egyptian paintbrushes were made of split palm leaves and were used to decorate surroundings. The oldest brushes ever found were made of animal hair.
Parts
Bristles: Transfer paint onto the substrate surface
Ferrule: Retains the bristles and attaches them to the handle
Handle: The intended interface between the user and the tool
Trade
Brushes for use in non-artistic trade painting are geared to applying an even coat of paint to relatively large areas. The following handle styles are widely recognized among trade painters:
Gourd handle: Ergonomic design that reduces stress on the wrist and hand whilst painting.
Short handle: The shorter handle provides greater precision when painting small spaces such as corners, trims and detail areas.
Flat beavertail handle: This shape is rounded and slightly flattened to fit comfortably into the palm of the hand whilst painting.
Square handle: A square-shaped handle with bevelled corners, featured mainly in trim or sash brushes, that is comfortable to hold when painting.
Rat tail handle: Longer and thinner than the standard handle, making it easy to hold and giving greater control.
Long handle: Rounded and thin, a long handle can be held like a pencil, giving great control and precision when cutting in and painting tricky spaces.
Decorating
Brushes used for painting and decorating come in the standard sizes and shapes described below.
Decorating sizes
Decorators' brush sizes are given in millimeters (mm) or inches (in), which refers to the width of the head. Common sizes are:
Metric (mm): 10 • 20 • 40 • 50 • 60 • 70 • 80 • 90 • 100.
Customary (inches): 1/8 • 1/4 • 3/8 • 1/2 • 5/8 • 3/4 • 7/8 • 1 • 1 1/4 • 1 1/2 • 2 • 2 1/2 • 3 • 3 1/2 • 4.
Decorating shapes
Angled: For painting edges, bristle length viewed from the wide face of the brush uniformly decrease from one end of the brush to the other
Flat: For painting flat surfaces, bristle length viewed from the wide face of the brush does not change
Tapered: Improves control, the bristle length viewed from the narrow face of the brush is longer in the center and tapers toward the edges
Striker: Large round (cylindrical) brush for painting difficult exterior areas
Decorating bristles
Bristles may be natural or synthetic. If the filaments are synthetic, they may be made of polyester, nylon or a blend of nylon and polyester.
Filaments can be hollow or solid and can be tapered or untapered. Brushes with tapered filaments give a smoother finish.
Synthetic filaments last longer than natural bristles. Natural bristles are preferred for oil-based paints and varnishes, while synthetic brushes are better for water-based paints as the bristles do not expand when wetted.
A decorator judges the quality of a brush based on several factors: filament retention, paint pickup, steadiness of paint release, brush marks, drag and precision painting. A chiseled brush permits the painter to cut into tighter corners and paint more precisely.
Brush handles may be made of wood or plastic while ferrules are metal (usually nickel-plated steel).
Art
Short handled brushes are usually used for flat or slightly tilted work surfaces such as watercolor painting and ink painting, while long handled brushes are held horizontally while working on a vertical canvas such as for oil paint or acrylic paint.
Art shapes
The styles of brush tip seen most commonly are:
Round: pointed tip, long closely arranged bristles for detail.
Flat: for spreading paint quickly and evenly over a surface. They will have longer hairs than their Bright counterpart.
Bright: shorter than flats. Flat brushes with short stiff bristles, good for driving paint into the weave of a canvas in thinner paint applications, as well as thicker painting styles like impasto work.
Filbert: flat brushes with domed ends. They allow good coverage and the ability to perform some detail work.
Fan: for blending broad areas of paint.
Angle: like the filbert, these are versatile and can be applied in both general painting application as well as some detail work.
Mop: a larger format brush with a rounded edge, used for broad, soft paint application and for laying thin glazes over existing drying layers of paint without damaging the lower layers
Rigger: round brushes with longish hairs, traditionally used for painting the rigging in pictures of ships. They are useful for fine lines and are versatile for both oils and watercolors.
Stippler and deer-foot stippler: short, stubby rounds
Liner: elongated rounds
Dagger: looks like angle with longish hairs, used for one stroke painting like painting long leaves.
Scripts: highly elongated rounds
Egbert: a filbert with extra long hair, used for oil painting
Some other styles of brush include:
Sumi: Similar in style to certain watercolor brushes, also with a generally thick wooden or metal handle and a broad soft hair brush that when wetted should form a fine tip. Also spelled Sumi-e (墨絵, Ink wash painting).
Hake (刷毛): An Asian style of brush with a large broad wooden handle and an extremely fine soft hair used in counterpoint to traditional Sumi brushes for covering large areas. Often made of goat hair.
Spotter: Round brushes with just a few short bristles. These brushes are commonly used in spotting photographic prints.
Stencil: A round brush with a flat top used on stencils to ensure the bristles don't get underneath. Also used to create texture.
Art sizes
Artists' brushes are usually given numbered sizes, although there is no exact standard for their physical dimensions. From smallest to largest, the sizes are: 20/0, 12/0, 10/0, 7/0, 6/0, 5/0, 4/0 (also written 0000), 000, 00, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 16, 18, 20, 22, 24, 25, 26, 28, 30, 2 inch, 4 inch, 6 inch, and 8 inch. Brushes as fine as 30/0 are manufactured by major companies, but are not a common size. Sizes 000 to 20 are most common.
Art bristles
Bristles may be natural—either soft hair or hog bristle—or synthetic.
Types include:
watercolor brushes which are usually made of sable, synthetic sable or nylon;
oil painting brushes which are usually made of sable or bristle;
acrylic brushes which are almost entirely nylon or synthetic.
Turpentine or thinners used in oil painting can destroy some types of synthetic brushes. However, innovations in synthetic bristle technology have produced solvent-resistant synthetic bristles suitable for use in all media. Natural hair (squirrel, badger or sable) is used by watercolorists due to its superior ability to absorb and hold water.
Soft hair brushes: The best of these are made from kolinsky sable, other red sables, or miniver (Russian squirrel winter coat; tail) hair. Sabeline is ox hair dyed red to look like red sable and sometimes blended with it. Camel hair is a generic term for a cheaper and lower quality alternative, usually ox. It can be other species, or a blend of species, but never includes camel. Pony, goat, mongoose and badger hair are also used.
Hog bristle: Often called China bristle or Chungking bristle. This is stiffer and stronger than soft hair. It may be bleached or unbleached.
Synthetic bristles: These are made of special multi-diameter extruded nylon filament, Taklon or polyester. These are becoming ever more popular with the development of new water-based paints.
Art handles
Artists' brush handles are commonly wooden but can also be made of molded plastic. Many mass-produced handles are made of unfinished raw wood; better quality handles are of seasoned hardwood. The wood is sealed and lacquered to give the handle a high-gloss, waterproof finish that reduces soiling and swelling. Many brush companies offer long or short brush handle sizes.
Metal ferrules may be of aluminum, nickel, copper, or nickel-plated steel. Quill ferrules are also found: these give a different "feel" to the brush, and are a staple of French-style watercolor brushes.
References
External links
Painting materials
Hand tools
Brushes | Paintbrush | Engineering | 1,875 |
4,140,063 | https://en.wikipedia.org/wiki/Symbolic%20power | The concept of symbolic power, also known as symbolic domination (domination symbolique in French language) or symbolic violence, was first introduced by French sociologist Pierre Bourdieu to account for the tacit, almost unconscious modes of cultural/social domination occurring within the social habits maintained over conscious subjects. Symbolic power accounts for discipline used against another to confirm that individual's placement in a social hierarchy, at times in individual relations but most fundamentally through systemic institutions.
Also referred to as soft power, symbolic power includes actions that have discriminatory or injurious meaning or implications, such as gender dominance and racism. Symbolic power maintains its effect through the mis-recognition of power relations situated in the social matrix of a given field. While symbolic power requires a dominator, it also requires the dominated to accept their position in the exchange of social value that occurs between them.
History
The concept of symbolic power may be seen as grounded in Friedrich Engels' concept of false consciousness. To Engels, under capitalism, objects and social relationships are embedded with societal value that depends upon the actors who engage in the interactions. Without the illusion of natural law governing such transactions of social and physical worth, the proletariat would be unwilling to consciously support social relations that counteract their own interests. Dominant actors in a society must consciously accept that such an ideological order exists for unequal social relationships to take place. Louis Althusser further developed the idea in his writing on what he called Ideological State Apparatuses, arguing that the latter's power is partly based on symbolic repression.
The concept of symbolic power was first introduced by Pierre Bourdieu in La Distinction. Bourdieu suggested that cultural roles are more dominant than economic forces in determining how hierarchies of power are situated and reproduced across societies. Status and economic capital are both necessary to maintain dominance in a system, rather than just ownership over the means of production alone. The idea that one could possess symbolic capital in addition and set apart from financial capital played a critical role in Bourdieu's analysis of hierarchies of power.
For example, in the process of reciprocal gift exchange in the Kabyle society of Algeria, when there is an asymmetry in wealth between the two parties, the better-endowed giver "can impose a strict relation of hierarchy and debt upon the receiver." Symbolic power, therefore, is fundamentally the imposition of categories of thought and perception upon dominated social agents who, once they begin observing and evaluating the world in terms of those categories—and without necessarily being aware of the change in their perspective—then perceive the existing social order as just. This, in turn, perpetuates a social structure favored by and serving the interests of those agents who are already dominant. Symbolic power differs from physical violence in that it is embedded in the modes of action and structures of cognition of individuals, and imposes the specter of legitimacy of the social order.
See also
Caciquism
Power (social and political)
Social dominance theory
Structural violence
Slavoj Žižek
References
External links
Sociological terminology
Violence
Pierre Bourdieu | Symbolic power | Biology | 634 |
1,791,995 | https://en.wikipedia.org/wiki/Missing%20years%20%28Jewish%20calendar%29 | The missing years in the Hebrew calendar refer to a chronological discrepancy between the rabbinic dating for the destruction of the First Temple in 422 BCE (3338 Anno Mundi) and the academic dating of it in 587 BCE. In a larger sense, it also refers to the discrepancy between conventional chronology versus that of Seder Olam in what concerns the Persian period during which time it exercised hegemony over Israel, a period which spanned 207 years according to conventional chronology, but only 34 years according to Seder Olam. Invariably, the resulting timeframe also affects the number of years the Second Temple stood, said by a late rabbinic tradition to have stood 420 years, but by conventional chronology 589 years.
Dating in academic sources
The academic datings in question are confirmed by a variety of Persian, Babylonian and Greek sources, which include records of datable astronomical observations such as eclipses, although there are disagreements among modern scholars, ranging from 1 to 2 years, over some of the dates in the conventional chronology.
Siege of Jerusalem (597 BC)
Both the Babylonian Chronicles and the Bible indicate that Nebuchadnezzar captured Jerusalem. The Babylonian Chronicles (as published by Donald Wiseman in 1956) establish that Nebuchadnezzar captured Jerusalem the first time on 2 Adar (16 March) 597 BCE. Before Wiseman's publication, E. R. Thiele had determined from the biblical texts that Nebuchadnezzar's initial capture of Jerusalem occurred in the spring of 597 BCE, while other scholars, including William F. Albright, more frequently dated the event to 598 BCE.
Second siege and destruction of the First Temple
According to the Bible, Nebuchadnezzar installed Zedekiah as king after his first siege, and Zedekiah ruled for 11 years before the second siege resulted in the end of his kingdom.
Although there is no dispute that Jerusalem fell the second time in the summer month of Tammuz, Albright dates the end of Zedekiah's reign (and the fall of Jerusalem) to 587 BCE, whereas Thiele offers 586 BCE. Thiele's reckoning is based on the presentation of Zedekiah's reign on an accession basis, which was used for most but not all of the kings of Judah. In that case, the year that Zedekiah came to the throne would be his first partial year; his first full year would be 597/596 BCE, and his eleventh year, the year Jerusalem fell, would be 587/586 BCE. Since Judah's regnal years were counted from Tishrei in autumn, this would place the end of his reign and the capture of Jerusalem in the summer of 586 BCE.
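The accession-year counting described above amounts to simple subtraction. A minimal Python sketch of the arithmetic (illustrative only; it assumes Zedekiah's accession in 597 BCE and regnal years counted from Tishrei, as stated above):

accession = 597  # BCE, Zedekiah's accession (partial) year
for n in (1, 11):
    print(f"regnal year {n}: {accession - n + 1}/{accession - n} BCE")
# regnal year 1:  597/596 BCE
# regnal year 11: 587/586 BCE, so on this reckoning Jerusalem fell in the summer of 586 BCE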
Dating in traditional Jewish sources
A variety of rabbinic sources state that the Second Temple stood for 420 years. In traditional Jewish calculations, based on Seder Olam Rabbah, the destruction of the Second Temple fell in the year 68 of the Common Era, implying that it was built in about 352 BCE. Adding 70 years between the destruction of the First Temple and the construction of the Second Temple, it follows that the First Temple was destroyed in around 422 BCE. While acceptance of this chronology was widespread among ancient rabbis, it was not universal: Pirkei deRabbi Eliezer, Midrash Lekach Tov, and numerous rishonim disagree with the chronology of Seder Olam Rabbah.
The traditional Jewish date recognized by the rabbis as the "year of destruction" is approximately 165 years later than the accepted year of 587 or 586 BCE. This discrepancy is referred to as the "missing years".
Details of rabbinic chronology
According to the Talmud and Seder Olam Rabbah, the Second Temple stood for 420 years, with the years divided up as follows:
103 years (35 BCE – 68 CE) = Herodian dynasty
103 years (138–35 BCE) = Hasmonean dynasty
180 years (318–138 BCE) = Seleucid Empire
34 years (352–318 BCE) = Achaemenid Empire rule while the Second Temple stood (not including additional years of Persian rule before the Temple's construction).
The date of 318 BCE for the Greek conquest of Persia is evident from the Talmud, which implies that Greek rule began six years before the beginning of the Seleucid era (which occurred in 312/11 BCE). In academic chronology, Alexander conquered the Achaemenids between 334 and 330 BCE.
Seventy years passed between the destruction of the First Temple and the building of the Second Temple in the seventy-first year, according to 2 Chronicles 36:21, so construction of the Second Temple in 352 BCE implies that the First Temple was destroyed in 423 BCE.
Similarly, the Megillat Antiochus implies that the Second Temple was built in 352 BCE, and thus that the First Temple was destroyed in 423 BCE.
The figure of 420 years is likely derived from the Prophecy of Seventy Weeks in Daniel 9:24–27. The rabbis interpreted this passage as referring to a period of 490 years which would pass between the destructions of the First and Second Temples—70 years between the Temples, plus 420 years of the Second Temple, starting in the 71st year after the destruction, though the passage can plausibly be interpreted in other ways.
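The figures above can be tallied directly. The following Python sketch is illustrative arithmetic only; it treats the traditional dates as round numbers and ignores the absence of a year zero, so each result may be off by about a year:

periods = {
    "Herodian dynasty": 103,
    "Hasmonean dynasty": 103,
    "Seleucid rule": 180,
    "Persian rule while the Temple stood": 34,
}
second_temple_years = sum(periods.values())            # 420
built_bce = second_temple_years - 68                    # destroyed 68 CE, so built c. 352 BCE
first_temple_destroyed_bce = built_bce + 70              # 70 years between the Temples, c. 422 BCE
missing_years = 587 - first_temple_destroyed_bce         # gap versus the academic date of 587 BCE
print(second_temple_years, built_bce, first_temple_destroyed_bce, missing_years)  # 420 352 422 165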
Proposed explanations
If traditional dates are assumed to be based on the standard Hebrew calendar, then the differing traditional and modern academic dating of events cannot both be correct. Attempts to reconcile the two systems must show one or both to have errors.
Missing years in Jewish tradition
Scholars see the discrepancy between the traditional and academic date of the destruction of the First Temple arising as a result of Jewish sages miscounting the reign lengths of several Persian kings during the Persian Empire's rule over Israel. Modern scholars tally 14 Persian kings whose combined reigns total 207 years. By contrast, ancient Jewish sages only mention four Persian kings totaling 52 years. The reigns of several Persian kings appear to be missing from the traditional calculations.
Certain verses in the Bible itself suggest a longer Persian era, such as where six generations of priests are listed in the Persian period. However, as the Bible does not mention any significant events occurring in those additional years, the later rabbis may have consciously chosen to omit the years from their chronology.
Azariah dei Rossi was likely the first Jewish authority to claim that the traditional Hebrew dating is not historically precise regarding the years before the Second Temple, and suggests that the Sages of Israel may have chosen to include in their chronology only those years of the period of Persian dominion that were clearly expressed or implied in the Bible. Additional time, the length of which was not clearly stated, was chosen to be ignored. Nachman Krochmal agreed with dei Rossi, pointing to the Greek name Antigonos mentioned in Pirkei Avot 1:3 as proof that there must have been a longer period to account for this sign of Hellenic influence. Dei Rossi and Krochmal argued that when the length of a historical period was unknown, Seder Olam Rabbah took the method of assuming the shortest possible length.
The astrologer and chronicler Raḥamim Sar-Shalom, following the view of dei Rossi, suggests that the purpose of the author of Seder Olam was only to state the number of years of the Persian period that were included in the Bible, and that a lack of understanding of the purpose by the Amoraim is what caused them, among other things, to calculate the date from creation erroneously. The "missing years" not only offset the span of the Persian period, but also offset the number of years collected since the first man, Adam, walked the face of the earth.
Solomon Judah Loeb Rapoport noted that the traditional Jewish chronology, when combined with another rabbinic tradition, places the Exodus from Egypt at exactly 1000 years prior to the Seleucid era (known in Jewish sources as "Minyan Shtarot"). He suggests that the authors of the traditional Jewish chronology intentionally omitted years from the Persian period to obtain the round number with the intent of allowing Jews who had counted years from the Exodus to easily switch to the Seleucid era system, used by Greek rulers at the time.
David Zvi Hoffmann points out that the Mishnah in Avot (1:4) in describing the chain of tradition uses the plural "accepted from them" even though the previous Mishnah mentions only one person. He posits that there must have been another Mishnah mentioning two sages that was later removed.
Shimon Schwab interpreted the Biblical words "seal the words and close the book" () as a commandment to obscure the Biblical chronology so that it would not be possible to accurately calculate the time of the Messiah's arrival. Thus, according to Schwab, the traditional Jewish calendar intentionally omitted years from the Persian period. However, Schwab later withdrew that suggestion for numerous reasons.
A 2006 article in Ḥakirah journal suggested that the sages were concerned with the acceptance of the Mishnah. There existed a rabbinical tradition that the year 4000 marked the close of the "era of Torah". Thus, it is proposed, the sages arranged the chronology so that the redaction of the Mishnah should coincide with that date and thus have a better chance of acceptance.
Mordechai Breuer suggested that like other works of midrash, the traditional chronology in Seder Olam Rabbah was never meant to be taken literally but rather was intended to be symbolic.
Some Jewish thinkers, including Isaac Abarbanel, Chaim Hirschensohn and Adin Steinsaltz, have argued that the original Jewish chronology agreed with the academic chronology, but later misunderstandings or textual corruptions of Seder Olam Rabbah gave the impression that it refers to a shorter period of time. However, Seder Olam Rabbah's chronology is implicit in many different passages, and it is difficult to plausibly explain all of the passages in a way that agrees with the academic chronology.
Critiques of academic dating
Attempts have been made to reinterpret the historical evidence to agree with the rabbinic tradition. Advocates of these attempts sometimes accuse historians of relying too heavily on the historian Herodotus, although the standard dating rests on many sources, including archaeology and other historians. Mainstream scholarship has rejected these attempts.
Other advocates of alternative chronology will sometimes invoke the rabbinic tradition. David Rohl's New Chronology redates much of Egyptian history and he claims that his chronology matches the events of Exodus and other parts of the Bible better, as an example.
See also
Traditional Jewish chronology
Notes
References
Bibliography
Dawn of the Gods: The untold timeline of Genesis, by Marco Lupi Speranza (self published, 2018) – reconstruction in accordance with Sumerian history.
Jewish History in Conflict: A Study of the Major Discrepancy between Rabbinic and Conventional Chronology, by Mitchell First (Jason Aronson, 1997)
Talmudic and Rabbinic Chronology, by Edgar Frank (New York: Feldheim 1956)
Chronology of the Ancient World, by E.J. Bickerman (Cornell University Press, 1968, 1982)
The Crime of Claudius Ptolemy. Robert R. Newton (The Johns Hopkins University Press, Baltimore and London, 1977)
Daniel 9 in You Take Jesus and I'll Take God by S. Levine, revised edition, Hamoroh Press, Los Angeles, 1980 – explains the Jewish understanding of Daniel 9:24–27
The Romance of Biblical Chronology, by Martin Anstey (London: Marshall Brothers, 1913) – interprets Daniel as prophesying the crucifixion of Jesus, and thus dates the Temple's destruction to 502 BCE
R' Shimon Schwab in "Comparative Jewish Chronology in Jubilee Volume for Rav Yosef Breuer" pp. 177–197.
David Zvi Hoffmann "Ha'mishna Rishona" (Heb.)
Fixing the History Books, Dr. Chaim S. Heifetz's Revision of Persian History, by Brad Aaronson – Jewish scholarly critique of secular dating
Fixing the Mind by Alexander Eterman – a rebuttal of Heifetz's critique.
Secular Chronology by Walter R. Dolen – Christian scholarly critique of secular dating
Significant Events In Jewish And World History – timeline based on traditional Jewish sources
Chronology
Hebrew calendar
Archaeology of Israel
Solomon's Temple
314,139 | https://en.wikipedia.org/wiki/Triptych | A triptych ( ) is a work of art (usually a panel painting) that is divided into three sections, or three carved panels that are hinged together and can be folded shut or displayed open. It is therefore a type of polyptych, the term for all multi-panel works. The middle panel is typically the largest and it is flanked by two smaller related works, although there are triptychs of equal-sized panels. The form can also be used for pendant jewelry.
Beyond its association with art, the term is sometimes used more generally to connote anything with three parts, particularly if integrated into a single unit.
Etymology
The word triptych was formed in English by compounding the prefix tri- with the word diptych. Diptych is borrowed from the Latin diptycha, which itself is derived from the Late Greek δίπτυχα (diptycha); this is the neuter plural of δίπτυχος (diptychos, "double-folded").
In art
The triptych form appears in early Christian art, and was a popular standard format for altar paintings from the Middle Ages onwards. Its geographical range was from the eastern Byzantine churches to the Celtic churches in the west. During the Byzantine period, triptychs were often used for private devotional use, along with other relics such as icons. Renaissance painters such as Hans Memling and Hieronymus Bosch used the form. Sculptors also used it. Triptych forms also allow ease of transport.
From the Gothic period onward, both in Europe and elsewhere, altarpieces in churches and cathedrals were often in triptych form. One such cathedral with an altarpiece triptych is Llandaff Cathedral. The Cathedral of Our Lady in Antwerp, Belgium, contains two examples by Rubens, and Notre Dame de Paris is another example of the use of triptych in architecture. The form is echoed by the structure of many ecclesiastical stained glass windows.
The triptych form's transportability was exploited during World War Two when a private citizens' committee in the United States commissioned painters and sculptors to create portable three-panel hinged altarpieces for use by Christian and Jewish U.S. troops for religious services. By the end of the war, 70 artists had created 460 triptychs. Among the most prolific were Violet Oakley, Nina Barr Wheeler, and Hildreth Meiere.
The triptych format has been used in non-Christian faiths, including Judaism, Islam, and Buddhism. For example: the triptych Hilje-j-Sherif displayed at the National Museum of Oriental Art, Rome, Italy, and a page of the Qur'an at the Museum of Turkish and Islamic Arts in Istanbul, Turkey, exemplify Ottoman religious art adapting the motif. Likewise, Tibetan Buddhists have used it in traditional altars.
Although strongly identified with religious altarpieces, the triptych form has also been used outside that context; some of the best-known examples are works by Max Beckmann and Francis Bacon. When Bacon's 1969 triptych, Three Studies of Lucian Freud, was sold in 2013 for $142.4 million, it was the highest price ever paid for an artwork at auction at that time. That record was broken in May 2015 when Pablo Picasso's 1955 painting Les Femmes d'Alger sold for $179.4 million.
In photography
A photographic triptych is a common style used in modern commercial artwork. The photographs are usually arranged with a plain border between them. The work may consist of separate images that are variants on a theme, or may be one larger image split into three.
Examples
Stefaneschi Triptych by Giotto, c. 1330
Annunciation with St. Margaret and St. Ansanus by Simone Martini, 1333
The Mérode Altarpiece by Robert Campin, late 1420s
The Garden of Earthly Delights, Triptych of the Temptation of St. Anthony and The Haywain Triptych by Hieronymus Bosch
The Portinari Altarpiece by Hugo van der Goes, c. 1475
The Buhl Altarpiece, c. 1495
The Raising of the Cross by Peter Paul Rubens, 1610 or 1611
The Aino Myth triptych by Akseli Gallen-Kallela, 1891
The Pioneer by Frederick McCubbin, 1904
Departure by Max Beckmann, 1932–33
Three Studies for Figures at the Base of a Crucifixion by Francis Bacon, 1944
Gallery
See also
References
External links
The Institution of the Eucharist at the Last Supper with St. Peter and St. Paul, Metropolitan Museum of Art
On the triptych as a writing instrument
Example of triptych features and restoration
Articles containing video clips
Altarpieces
Artistic techniques
Church architecture
Iconography
Optical illusions
Picture framing
Romanesque art
Rotational symmetry
Sculpture
Symmetry
Synagogue architecture
Triptychs
Visual motifs
Binocular rivalry | Triptych | Physics,Mathematics | 984 |
10,919,113 | https://en.wikipedia.org/wiki/Protein%20L | Protein L was first isolated from the surface of bacterial species Peptostreptococcus magnus and was found to bind immunoglobulins through L chain interaction, from which the name was suggested. It consists of 719 amino acid residues. The molecular weight of protein L purified from the cell walls of Peptostreptococcus magnus was first estimated as 95 kD by SDS-PAGE in the presence of reducing agent 2-mercaptoethanol, while the molecular weight was determined to be 76 kD by gel chromatography in the presence of 6 M guanidine HCl. Protein L does not contain any interchain disulfide loops, nor does it consist of disulfide-linked subunits. It is an acidic molecule with a pI of 4.0. Unlike protein A and protein G, which bind to the Fc region of immunoglobulins (antibodies), protein L binds antibodies through light chain interactions. Since no part of the heavy chain is involved in the binding interaction, Protein L binds a wider range of antibody classes than protein A or G. Protein L binds to representatives of all antibody classes, including IgG, IgM, IgA, IgE and IgD. Single chain variable fragments (scFv) and Fab fragments also bind to protein L.
Despite this wide binding range, protein L is not a universal antibody-binding protein. Protein L binding is restricted to those antibodies that contain kappa light chains. In humans and mice, most antibody molecules contain kappa (κ) light chains and the remainder have lambda (λ) light chains. Protein L is only effective in binding certain subtypes of kappa light chains. For example, it binds human VκI, VκIII and VκIV subtypes but does not bind the VκII subtype. Binding of mouse immunoglobulins is restricted to those having VκI light chains.
Given these specific requirements for effective binding, the main application for immobilized protein L is purification of monoclonal antibodies from ascites or cell culture supernatant that are known to have the kappa light chain. Protein L is extremely useful for purification of VLκ-containing monoclonal antibodies from culture supernatant because it does not bind bovine immunoglobulins, which are often present in the media as a serum supplement. Also, protein L does not interfere with the antigen-binding site of the antibody, making it useful for immunoprecipitation assays, even using IgM.
Gene for protein L
The gene for protein L contains five components: a signal sequence of 18 amino acids; an NH2-terminal region ("A") of 79 residues; five homologous "B" repeats of 72–76 amino acids each; a COOH terminus region of two additional "C" repeats (52 amino acids each); a hydrophilic, proline-rich putative cell wall-spanning region ("W") after the C repeats; a hydrophobic membrane anchor ("M"). The B repeats (36 kD) were found to be responsible for the interaction with Ig light chains.
Other antibody binding proteins
In addition to protein L, other immunoglobulin-binding bacterial proteins such as protein A, protein G and protein A/G are all commonly used to purify, immobilize or detect immunoglobulins. Each of these immunoglobulin-binding proteins has a different antibody binding profile in terms of the portion of the antibody that is recognized and the species and type of antibodies it will bind.
References
Proteins
Immunology | Protein L | Chemistry,Biology | 761 |
13,145,330 | https://en.wikipedia.org/wiki/Santa%20Special | A Santa Special is a special Christmas rail service, common on heritage steam railways (or sometimes on mainline railways, as is done by the RPSI in Ireland), where children are given the opportunity to meet "Santa Claus".
The revenues derived from Santa Specials make an important contribution to the finances of the railways as they attract large numbers of families during the off-peak winter period.
Common features of Santa Special services include:
Meeting "Santa Claus" (this commonly, but not always, takes place on the train)
Each child receiving a gift from "Santa Claus"
Refreshments provided for adults - for example, a mince pie and a seasonal drink
Entertainment provided for adults and children - for example, juggling and balloon sculpting
See also
List of British heritage and private railways
References
Heritage railways
Santa Claus | Santa Special | Engineering | 166 |
65,653,642 | https://en.wikipedia.org/wiki/HD%2099706 | HD 99706 is an orange-hued star in the northern circumpolar constellation of Ursa Major. With an apparent visual magnitude of 7.65, it is too dim to be visible to the naked eye but can be viewed with a pair of binoculars. Parallax measurements provide a distance estimate of approximately 480 light years from the Sun, and the Doppler shift shows it is drifting closer with a radial velocity of −30 km/s. It has an absolute magnitude of 2.12, indicating it would be visible to the naked eye as a 2nd magnitude star if it were located 10 parsecs away.
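The quoted apparent and absolute magnitudes are related by the standard distance modulus, m − M = 5 log10(d / 10 pc). The Python sketch below is illustrative only; the catalogue values quoted above are rounded and drawn from different sources, so the two directions of the calculation agree only approximately:

import math

m = 7.65                      # apparent visual magnitude
d_pc = 480 / 3.2616           # ~480 light years expressed in parsecs (~147 pc)

M = m - 5 * math.log10(d_pc / 10)
print(round(M, 2))            # 1.81, close to the quoted absolute magnitude of 2.12

d_ly = 10 ** ((m - 2.12 + 5) / 5) * 3.2616
print(round(d_ly))            # about 416, i.e. roughly 420 light years implied by the quoted magnitudes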
This is an aging subgiant star belonging to spectral class K0, having exhausted the supply of hydrogen at its core and begun to evolve into a giant. It is younger than the Sun and is spinning slowly with a projected rotational velocity of 2 km/s. The star has 1.5 times the mass of the Sun and has expanded to 5.5 times the Sun's radius. It is slightly enriched in heavy elements, having 110% of solar abundance. HD 99706 is radiating 13 times the luminosity of the Sun from its photosphere at an effective temperature of 4,862 K.
An imaging survey at Calar Alto Observatory in 2016 failed to detect any stellar companions to HD 99706.
Planetary system
In 2011 one superjovian exoplanet, HD 99706 b, on a mildly eccentric orbit around star HD 99706 was discovered utilizing the radial velocity method. Another superjovian exoplanet on an outer orbit was detected in 2016.
References
K-type subgiants
Planetary systems with two confirmed planets
Ursa Major
J11283020+4357597
Durchmusterung objects
99706
055994 | HD 99706 | Astronomy | 378 |
22,786,540 | https://en.wikipedia.org/wiki/Michel%20Deza | Michel Marie Deza (27 April 1939 – 23 November 2016) was a Soviet and French mathematician, specializing in combinatorics, discrete geometry and graph theory. He was the retired director of research at the French National Centre for Scientific Research (CNRS), the vice president of the European Academy of Sciences, a research professor at the Japan Advanced Institute of Science and Technology, and one of the three founding editors-in-chief of the European Journal of Combinatorics.
Deza graduated from Moscow University in 1961, after which he worked at the Soviet Academy of Sciences until emigrating to France in 1972. In France, he worked at CNRS from 1973 until his 2005 retirement.
He wrote eight books and about 280 academic papers with 75 different co-authors, including four papers with Paul Erdős, giving him an Erdős number of 1.
The papers from a conference on combinatorics, geometry and computer science, held in Luminy, France in May 2007, have been collected as a special issue of the European Journal of Combinatorics in honor of Deza's 70th birthday.
Selected papers
. This paper solved a conjecture of Paul Erdős and László Lovász (in , p. 406) that a sufficiently large family of k-subsets of any n-element universe, in which the intersection of every pair of k-subsets has exactly t elements, has a common t-element set shared by all the members of the family. Manoussakis writes that Deza is sorry not to have kept and framed the US$100 check from Erdős for the prize for solving the problem, and that this result inspired Deza to pursue a lifestyle of mathematics and travel similar to that of Erdős.
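A toy illustration of the property involved (the sets here are hypothetical, not taken from the paper): with k = 3 and t = 1, the family {1,2,3}, {1,4,5}, {1,6,7}, {1,8,9} has every pairwise intersection of size exactly 1, and the single-element set {1} is common to all members, as the theorem guarantees for sufficiently large families.

from itertools import combinations

family = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {1, 8, 9}]
t = 1

# every pair of members intersects in exactly t elements
assert all(len(a & b) == t for a, b in combinations(family, 2))

# a common t-element set shared by all members of the family
common = set.intersection(*family)
print(common)   # {1}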
. This paper considers functions ƒ from subsets of some n-element universe to integers, with the property that, when A is a small set, the sum of the function values of the supersets of A is zero. The strength of the function is the maximum value t such that all sets A of t or fewer elements have this property. If a family of sets F has the property that it contains all the sets that have nonzero values for some function ƒ of strength at most t, F is t-dependent; the t-dependent families form the dependent sets of a matroid, which Deza and his co-authors investigate.
. This paper in polyhedral combinatorics describes some of the facets of a polytope that encodes cuts in a complete graph. As the maximum cut problem is NP-complete, but could be solved by linear programming given a complete description of this polytope's facets, such a complete description is unlikely.
. This paper with his son Antoine Deza, a fellow of the Fields Institute who holds a Canada Research Chair in Combinatorial Optimization at McMaster University, combines Michel Deza's interests in polyhedral combinatorics and metric spaces; it describes the metric polytope, whose points represent symmetric distance matrices satisfying the triangle inequality. For metric spaces with seven points, for instance, this polytope has 21 dimensions (the 21 pairwise distances between the points) and 275,840 vertices.
. Much of Deza's work concerns isometric embeddings of graphs (with their shortest path metric) and metric spaces into vector spaces with the L1 distance; this paper is one of many in this line of research. An earlier result of Deza showed that every L1 metric with rational distances could be scaled by an integer and embedded into a hypercube; this paper shows that for the metrics coming from planar graphs (including many graphs arising in chemical graph theory), the scale factor can always be taken to be 2.
Books
. As MathSciNet reviewer Alexander Barvinok writes, this book describes "many interesting connections ... among polyhedral combinatorics, local Banach geometry, optimization, graph theory, geometry of numbers, and probability".
. A sequel to Geometry of cuts and metrics, this book concentrates more specifically on L1 metrics.
. Reviewed in Newsletter of the European Mathematical Society 64 (June 2007), p. 57. This book is organized as a list of distances of many types, each with a brief description.
. This book describes the graph-theoretic and geometric properties of fullerenes and their generalizations, planar graphs in which all faces are cycles with only two possible lengths.
Poetry in Russian
Deza, M. (1983), pp. 59–62, Sintaksis, Paris (http://dc.lib.unc.edu/cdm/item/collection/rbr/?id=30912).
(https://web.archive.org/web/20161026002230/http://www.liga.ens.fr/~deza/InRussian/DEZA-M.pdf).
(https://web.archive.org/web/20161022031836/http://www.liga.ens.fr/~deza/InRussian/DEZA-M2.pdf).
References
Further reading
External links
Deza's web page as of August 17, 2016 on Wayback Machine
Archived copy of Deza's web page, with note of demise
1939 births
2016 deaths
Russian mathematicians
Graph theorists
20th-century French mathematicians
21st-century French mathematicians
Academic journal editors
Soviet emigrants to France
Mathematicians from Moscow | Michel Deza | Mathematics | 1,135 |
15,098,241 | https://en.wikipedia.org/wiki/Air%20raid%20on%20Bari | The air raid on Bari (, ) was an air attack by German bombers on Allied forces and shipping in Bari, Italy, on 2 December 1943, during World War II. 105 German Junkers Ju 88 bombers of Luftflotte 2 surprised the port's defenders and bombed shipping and personnel operating in support of the Allied Italian Campaign, sinking 27 cargo and transport ships, as well as a schooner, in Bari harbour.
The attack lasted a little more than an hour and put the port out of action until February 1944. The release of mustard gas from one of the wrecked cargo ships added to the loss of life. The British and US governments covered up the presence of mustard gas and its effects on victims of the raid.
Background
In early September 1943, coinciding with the Allied invasion of Italy, Italy surrendered to the Allies in the Armistice of Cassibile and changed sides, but the breakaway Italian Social Republic in central and northern Italy continued the war on the Axis side. On 11 September 1943, the port of Bari in southern Italy was taken unopposed by the British 1st Airborne Division. The port was used by the Allies to land ammunition, supplies and provisions from ships at the port for Allied forces advancing towards Rome and to push German forces out of the Italian peninsula.
Bari had inadequate air defences; no Royal Air Force (RAF) fighter aircraft squadrons were based there, and fighters within range were assigned to escort or offensive duties, not port defence. Ground defences were equally ineffective.
Little thought was given to the possibility of a German air raid on Bari, because it was believed that the Luftwaffe in Italy was stretched too thin to mount a serious attack. On the afternoon of 2 December 1943, Air Marshal Sir Arthur Coningham, commander of the Northwest African Tactical Air Force, held a press conference in Bari where he stated that the Germans had lost the air war. "I would consider it as a personal insult if the enemy should send so much as one plane over the city". That was despite the fact that German air raids by KG 54, KG 76, and other units, had hit the port area of Naples four times in the previous month, and attacked other Mediterranean targets.
Thirty ships of American, British, Polish, Norwegian and Dutch registry were in Bari Harbour on 2 December. The adjoining port city held a civilian population of 250,000. The port was lit on the night of the raid to expedite the unloading of supplies for the Battle of Monte Cassino and was working at full capacity.
Raid
On the afternoon of 2 December, Luftwaffe pilot Werner Hahn made a reconnaissance flight over Bari in an Me 210. His subsequent report reached Generalfeldmarschall Wolfram von Richthofen—who commanded Luftflotte 2. With the support of Albert Kesselring, Richthofen ordered a raid; Kesselring and his staff had earlier considered Allied airfields at Foggia as targets but the Luftwaffe lacked the resources for such an attack. Richthofen had suggested Bari as an alternative. Richthofen believed that crippling the port might slow the advance of the British Eighth Army and told Kesselring that the only aircraft available were his Junkers Ju 88A-4 bombers. Richthofen thought that a raid by 150 Ju 88s might be possible but only 105 bombers were available, some from KG 54. Most of the aeroplanes were to fly from Italian airfields but Richthofen wanted to use a few from Yugoslavia in the hope that the Allies might be fooled into thinking that the mission originated from there and misdirect any retaliatory strikes. The Ju 88 pilots were ordered to fly east to the Adriatic Sea, then swing south and west, since it was thought that the Allied forces would expect any attack to come from the north.
The attack opened at 19:25, when two or three German aircraft circled the harbour, dropping Düppel (foil strips) to confuse Allied radar. They also dropped flares, which were not needed because the harbour was well illuminated. The German bomber force surprised the defenders and was able to bomb the harbour with great accuracy. Hits on two ammunition ships caused explosions that shattered windows a considerable distance away. A bulk petrol pipeline on a quay was severed and the gushing fuel ignited. A sheet of burning fuel spread over much of the harbour, engulfing undamaged ships.
Twenty-eight merchant ships laden with cargo were sunk or destroyed; three others, also carrying cargo, were later salvaged. Twelve more ships were damaged. The port was closed for three weeks and was only restored to full operation in February 1944. All Bari-based submarines came through the raid undamaged.
John Harvey
One of the destroyed vessels—the US Liberty ship John Harvey—had been carrying a secret cargo of 2,000 M47A1 mustard gas bombs. According to Royal Navy historian Stephen Roskill, the cargo had been sent to Europe for potential retaliatory use if Germany carried out its alleged threat to use chemical warfare in Italy. The destruction of John Harvey caused liquid sulfur mustard from the bombs to spill into waters already contaminated by oil from the other damaged vessels. The many sailors who had abandoned their ships into the water became covered with the oily mixture, which provided an ideal solvent for the sulfur mustard. Some mustard evaporated and mingled with the clouds of smoke and flame. The wounded were pulled from the water and sent to medical facilities whose personnel were unaware of the mustard gas. Medical staff focused on personnel with blast or fire injuries and little attention was given to those merely covered with oil. Many injuries caused by prolonged exposure to low concentrations of mustard might have been reduced by bathing or a change of clothes.
Within a day, the first symptoms of mustard poisoning had appeared in 628 patients and medical staff, including blindness and chemical burns. That puzzling development was further complicated by the arrival of hundreds of Italian civilians also seeking treatment, who had been poisoned by a cloud of sulfur mustard vapor that had blown over the city when some of John Harveys cargo exploded. As the medical crisis worsened, little information was available about what was causing the symptoms, because the US military command wanted to keep the presence of chemical munitions secret from the Germans. Nearly all crewmen of John Harvey had been killed, and were unavailable to explain the cause of the "garlic-like" odour noted by rescue personnel.
Informed about the mysterious symptoms, Deputy Surgeon General Fred Blesse sent for Lieutenant Colonel Stewart Francis Alexander, an expert in chemical warfare. Carefully tallying the locations of the victims at the time of the attack, Alexander traced the epicenter to John Harvey, and confirmed mustard gas as the responsible agent when he located a fragment of the casing of a US M47A1 bomb.
By the end of the month, 83 of the 628 hospitalised military victims had died. The number of civilian casualties, thought to have been even greater, could not be accurately gauged since most had left the city to seek shelter with relatives.
An additional cause of contamination with mustard is suggested by George Southern, the only survivor of the raid to have written about it. The huge explosion of John Harvey, possibly simultaneously with another ammunition ship, sent large amounts of oily water mixed with mustard into the air, which fell down like rain on men who were on deck at the time. That affected the crews of the destroyers Bicester and Zetland. Both ships were damaged by the force of the blast and had taken casualties. After moving the destroyers away from burning ships and towing the tanker La Drome away from the fires, the ships received orders to sail for Taranto. They threaded their way past burning wrecks, with the flotilla leader, Bicester, having to follow Zetland, because her navigation equipment was damaged. Some survivors were picked up from the water in the harbour entrance by Bicester. When dawn broke, it became clear that the magnetic and gyro compasses had acquired large errors, requiring a considerable course correction. Symptoms of mustard gas poisoning then began to appear. By the time they reached Taranto, none of Bicester's officers could see well enough to navigate the ship into harbour, so assistance had to be sought from the shore.
Cover-up
A member of Allied Supreme Commander General Dwight D. Eisenhower's medical staff, Dr. Stewart F. Alexander, was dispatched to Bari following the raid. Alexander had trained at the Army's Edgewood Arsenal in Maryland, and was familiar with some of the effects of mustard gas. Although he was not informed of the cargo carried by John Harvey, and most victims suffered atypical symptoms caused by exposure to mustard diluted in water and oil (as opposed to airborne), Alexander rapidly concluded that mustard gas was present. Although he could not get any acknowledgement from the chain of command, Alexander convinced medics to treat patients for mustard gas exposure and saved many lives as a result. He also preserved many tissue samples from autopsied victims at Bari. After World War II, those samples resulted in the development of an early form of chemotherapy based on mustard, mustine.
Allied High Command suppressed news of the presence of mustard gas, in case the Germans believed that the Allies were preparing to use chemical weapons, fearing it might provoke them into pre-emptive use. The presence of multiple witnesses caused a re-evaluation of this stance and in February 1944, the US Chiefs of Staff issued a statement admitting to the accident and emphasizing that the US had no intention of using chemical weapons except in the case of retaliation. General Dwight D. Eisenhower approved Alexander's report. Winston Churchill, however, ordered all British documents to be purged. Mustard gas deaths were described as "burns due to enemy action".
US records of the attack were declassified in 1959, but the episode remained obscure until 1967 when author Glenn B. Infield published the book Disaster at Bari. In 1986, the British government admitted to survivors of the Bari raid that they had been exposed to poison gas and amended their pension payments. In 1988, through the efforts of Nick T. Spark and US Senators Dennis DeConcini and Bill Bradley, Alexander received recognition from the Surgeon General of the United States Army for his actions in the aftermath of the Bari disaster. Alexander's information contributed to Cornelius P. Rhoads' chemotherapy for cancer and Alexander turned down Rhoads' offer to work at the Sloan Kettering Institute.
In his autobiographical work Destroyer Captain, published in 1975 by William Kimber & Co., Lieutenant Commander Roger Hill describes refuelling in Bari shortly after the attack. He describes the damage done and details how a shipload of mustard gas came to be in the harbour because of intelligence reports which he viewed as "incredible".
Aftermath
An inquiry exonerated Sir Arthur Coningham of negligence in defending the port but found that the absence of previous air attacks had led to complacency.
See also
List of accidents and incidents involving transport or storage of ammunition
SS Charles Henderson
Notes
References
Further reading
External links
Bari, Air Raid On
Bari
Battles of World War II involving Germany
Military operations involving chemical weapons | Air raid on Bari | Chemistry | 2,247 |
314,703 | https://en.wikipedia.org/wiki/Erectile%20tissue | Erectile tissue is tissue in the body with numerous vascular spaces, or cavernous tissue, that may become engorged with blood. However, tissue that is devoid of or otherwise lacking erectile tissue (such as the labia minora, vestibule, vagina and urethra) may also be described as engorging with blood, often with regard to sexual arousal.
In sex organs
Erectile tissue exists in external genitals such as the corpora cavernosa of the penis and their homologs in the clitoris, also called the corpora cavernosa. During penile or clitoral erection, the corpora cavernosa become engorged with arterial blood, a process called tumescence. This engorgement may result from various physiological stimuli, internal or external, a process of stimulation known as sexual arousal. The corpus spongiosum is a single tubular structure located just below the corpora cavernosa in males. This may also become slightly engorged with blood, but less so than the corpora cavernosa.
In the nose
Erectile tissue is present in the anterior part of the nasal septum and is attached to the turbinates of the nose. The nasal cycle occurs as the erectile tissue on one side of the nose congests and the other side decongests. This process is controlled by the autonomic nervous system with parasympathetic dominance being associated with congestion and sympathetic with decongestion. The time of one cycle may vary greatly between individuals, with Kahana-Zweig et al. finding a range between 15 minutes and 10.35 hours though the average was noted as 2.15 ± 1.84 hours.
Other types
Erectile tissue is also found in the urethral sponge and perineal sponge. The erection of nipples is not due to erectile tissue, but rather due to the contraction of smooth muscle under the control of the autonomic nervous system.
References
Sexual anatomy
53,308,753 | https://en.wikipedia.org/wiki/C26H29NO2 | {{DISPLAYTITLE:C26H29NO2}}
The molecular formula C26H29NO2 (molar mass: 387.5139 g/mol) may refer to:
Afimoxifene
Droloxifene, also known as 3-hydroxytamoxifen
Molecular formulas | C26H29NO2 | Physics,Chemistry | 68 |